106

I have been using sshfs to work remotely, but it is really slow and annoying, particularly when I use Eclipse on it.

Is there any faster way to mount the remote file system locally? My no.1 priority is speed.

Remote machine is Fedora 15, local machine is Ubuntu 10.10. I can also use Windows XP locally if necessary.

studiohack
CuriousMind

14 Answers

55

If you need to improve the speed of sshfs connections, try these options:

-o auto_cache,reconnect,defer_permissions,noappledouble,nolocalcaches,no_readahead

The command would be:

sshfs remote:/path/to/folder local -o auto_cache,reconnect,defer_permissions
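As several comments below point out, `defer_permissions` and `noappledouble` are macOS/osxfuse options that Linux sshfs rejects. A minimal sketch of a Linux-friendly variant (the command is only echoed here, and `remote:/path/to/folder` and `~/mnt/remote` are placeholders for your own host and mount point):

```shell
# Build a Linux-friendly option string; defer_permissions and
# noappledouble are macOS-only, so they are left out here.
opts="auto_cache,reconnect,no_readahead"

# The resulting mount command (placeholder host and paths):
echo "sshfs remote:/path/to/folder ~/mnt/remote -o $opts"
```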
Meetai.com
  • Thanks, worked for me! Had to remove `defer_permissions` though (unknown option). – Mathieu Rodic Mar 10 '15 at 11:35
  • Won't `nolocalcaches` *decrease* performance by forcing lookups **every** operation? Does this contradict `auto_cache`? – earthmeLon Jun 15 '15 at 18:13
  • The way I read the docs, nolocalcaches only disables the kernel side of things; sshfs still has its own cache. I could imagine that the kernel-level checks are tuned for "real" file systems and as such more extensive. On the sshfs side, "cache_timeout" looks promising, too. Here's a list: http://www.saltycrane.com/blog/2010/04/notes-sshfs-ubuntu/ ... lots of good stuff. :-) – Mantriur Oct 29 '15 at 17:21
  • nolocalcaches and defer_permissions don't seem valid (anymore?) on Debian Jessie. – Mantriur Oct 29 '15 at 17:31
  • I find that "kernel_cache" is faster than "auto_cache", but AFAIK it assumes exclusive access, so only use it if nothing else is changing that data. – Mantriur May 30 '16 at 14:52
  • Why `no_readahead`? – studgeek Aug 09 '16 at 00:40
  • What do you mean by "oauto_cache"? – ManuelSchneid3r Mar 15 '17 at 14:35
  • Removed 'defer_permissions' as I think that is Mac-specific, not Linux. – Elijah Lynn Feb 10 '18 at 09:35
  • @ManuelSchneid3r I know it's a bit late, but it's the same as `-o auto_cache`, because the option and its argument do not need to be separated by a space. – Abandoned Cart Apr 30 '19 at 03:46
  • On Mac OS X Catalina, the local folder just disappears when it gets mounted, and you can't see anything in it when trying to ls it: "No such file or directory". Unmount it and the folder becomes visible again. Any thoughts? – LewlSauce Dec 26 '19 at 18:49
  • The defer_permissions option fixes some issues translating filesystem permissions when mounting an SSH filesystem from macOS, but the option does not exist in Linux. – ThankYee Feb 12 '20 at 20:24
  • Most of the options above are unknown on Debian Wheezy on a Seagate Dockstar; only no_readahead and kernel_cache are accepted. kernel_cache seems to bring more stability in data transfer to the SSD, but I get the same speed as before. I have this in fstab: sshfs#home@win10_host:/users /mnt/ssh/win10_host fuse reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,uid=0,gid=0,umask=0,allow_other,auto,ssh_command=sshpass\040-f\040/root/.ssh/mydevice.password\040ssh,no_readahead,kernel_cache 0 0 – skyrail Jun 02 '20 at 18:21
25

sshfs uses the SSH file transfer protocol, which means encryption.

If you just mount via NFS, it's of course faster, because the traffic is not encrypted.

Are you trying to mount volumes on the same network? Then use NFS.

Tilo
  • It's not slow because of the encryption, it's slow because it's FUSE and it keeps checking the file system state. – w00t May 19 '13 at 13:40
  • @w00t I think it's the encryption slowing it down, not FUSE: changing the cipher to arcfour sped it up for me, whereas `scp` was just as slow as `sshfs`. – Sparhawk Sep 28 '13 at 04:57
  • @Sparhawk there's a difference between throughput and latency. FUSE gives you pretty high latency because it has to check the filesystem state a lot using some pretty inefficient means. arcfour gives you good throughput because the encryption is simpler. In this case latency matters most, because that's what makes the editor slow at listing and loading files. – w00t Sep 29 '13 at 11:16
  • @w00t Ah okay. Good points. – Sparhawk Sep 29 '13 at 12:42
  • I think NFS traffic can be tunneled over an SSH pipe, which should give the best performance if you can use NFS on the remote host. The problem with sshfs is that it's technically running SFTP, and FUSE just pretends to be a POSIX-compatible filesystem over that protocol. The SFTP protocol doesn't support any fancy features, so any protocol on top of it ends up with pretty poor overall performance. If you replace SFTP with NFS and keep the encryption, it will be much faster. – Mikko Rantalainen Sep 18 '20 at 08:06
  • Unfortunately, it is not because of encryption (WireGuard is fast and encrypted) and not because of FUSE (userspace is not as fast as kernel space, but still fast enough for this), but because of SSH: ssh is notoriously slow at transferring files; it seems to be a built-in defect. – Markus Bawidamann Mar 17 '23 at 01:17
22

Besides the already proposed Samba/NFS solutions, which are perfectly valid, you can also get some speed boost while sticking with sshfs by using faster encryption (authentication remains as safe as usual, but the transferred data itself is easier to decrypt) by supplying the -o Ciphers=arcfour option to sshfs. It is especially useful if your machine has a weak CPU.

Sparhawk
aland
  • `-oCipher=arcfour` made no difference in my tests with a 141 MB file created from random data. – Sparhawk Sep 28 '13 at 04:39
  • That's because there were multiple typos in the command. I've edited it. I noticed a 15% speedup from my Raspberry Pi server. (+1) – Sparhawk Sep 28 '13 at 04:56
  • The chacha20-poly1305@openssh.com cipher is also worth considering now that arcfour is obsolete. ChaCha20 is faster than AES on ARM processors, but far worse on x86 processors with AES instructions (which all modern desktop CPUs have as standard these days). https://klingt.net/blog/ssh-cipher-performance-comparision/ You can list supported ciphers with "ssh -Q cipher". – TimSC Nov 20 '17 at 20:48
  • This is not doable anymore, as the fastest ciphers (e.g. `arcfour`) have now been permanently removed from recent SSH versions. – MasterScrat Jan 07 '22 at 10:04
15

I do not have any alternatives to recommend, but I can provide suggestions for how to speed up sshfs:

sshfs -o cache_timeout=115200 -o attr_timeout=115200 ...

This should avoid some of the round trip requests when you are trying to read content or permissions for files that you already retrieved earlier in your session.

sshfs simulates deletes and changes locally, so new changes made on the local machine should appear immediately, despite the large timeouts, as cached data is automatically dropped.

But these options are not recommended if the remote files might be updated without the local machine knowing, e.g. by a different user, or a remote ssh shell. In that case, lower timeouts would be preferable.

Here are some more options I experimented with, although I am not sure if any of them made a difference:

sshfs_opts="-o auto_cache -o cache_timeout=115200 -o attr_timeout=115200   \
-o entry_timeout=1200 -o max_readahead=90000 -o large_read -o big_writes   \
-o no_remote_lock"

You should also check out the options recommended by Meetai in his answer.

Recursion

The biggest problem in my workflow is when I try to read many folders, for example in a deep tree, because sshfs performs a round trip request for each folder separately. This may also be the bottleneck that you experience with Eclipse.

Making requests for multiple folders in parallel could help with this, but most apps don't do that: they were designed for low-latency filesystems with read-ahead caching, so they wait for one file stat to complete before moving on to the next.

Precaching

But something sshfs could do would be to look ahead at the remote file system, collect folder stats before I request them, and send them to me when the connection is not immediately occupied. This would use more bandwidth (from lookahead data that is never used) but could improve speed.

We can force sshfs to do some read-ahead caching, by running this before you get started on your task, or even in the background when your task is already underway:

find project/folder/on/mounted/fs > /dev/null &

That should pre-cache all the directory entries, reducing some of the later overhead from round trips. (Of course, you need to use the large timeouts like those I provided earlier, or this cached data will be cleared before your app accesses it.)

But that find will take a long time. Like other apps, it waits for the results from one folder before requesting the next one.

It might be possible to reduce the overall time by asking multiple find processes to look into different folders. I haven't tested to see if this really is more efficient. It depends whether sshfs allows requests in parallel. (I think it does.)

find project/folder/on/mounted/fs/A > /dev/null &
find project/folder/on/mounted/fs/B > /dev/null &
find project/folder/on/mounted/fs/C > /dev/null &
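The same idea can be sketched as a small helper that discovers the top-level folders itself and walks them with a bounded number of parallel `find` processes (the helper name and the parallelism level of 4 are my own choices, not from sshfs):

```shell
# warm_cache DIR: walk each top-level subdirectory of DIR with up to
# four concurrent find processes, discarding the output. The stat
# calls populate sshfs's cache, and running them in parallel overlaps
# the network round trips instead of paying for them one at a time.
warm_cache() {
  find "$1" -mindepth 1 -maxdepth 1 -type d -print0 |
    xargs -0 -P 4 -I{} find {} -print > /dev/null
}
```

Usage would be `warm_cache project/folder/on/mounted/fs &`, again with the long cache timeouts from earlier so the entries survive until your app needs them.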

If you also want to pre-cache file contents, you could try this:

tar c project/folder/on/mounted/fs > /dev/null &

Obviously this will take much longer, will transfer a lot of data, and requires you to have a huge cache size. But when it's done, accessing the files should feel nice and fast.

joeytwiddle
  • If you want to read file contents to get them into the cache, `wc -l` is pretty good. It just counts occurrences of 0x0A (newline) in the file, so it simply reads the file once without outputting the contents. – Mikko Rantalainen Mar 26 '20 at 15:04
8

After some searching and trial and error, I found that adding -o Compression=no speeds things up a lot. The delay may be caused by the compression and decompression process. Also, using 'Ciphers=aes128-ctr' seems faster than other ciphers; some posts have run experiments on this. My command then looks something like this:

sshfs -o allow_other,transform_symlinks,follow_symlinks,IdentityFile=/Users/maple/.ssh/id_rsa -o auto_cache,reconnect,defer_permissions -o Ciphers=aes128-ctr -o Compression=no [email protected]:/home/maple ~/mntpoint

maple
  • Funnily enough, Compression=yes seems to speed it up for me, while all the other options didn't seem to make a difference. – Fuseteam Jun 17 '21 at 13:50
8

I found that turning off my zsh theme, which was checking git file status, helped enormously: just entering the directory was taking 10+ minutes. Likewise, turning off git status checkers in Vim.
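The answer doesn't say which theme it used; as one hedged example, oh-my-zsh's git prompt helper honors per-repository flags that skip the expensive status check, so you can disable it only for repositories on the sshfs mount:

```shell
# Demo in a throwaway repository; your real repository would live on
# the sshfs mount. (Assumption: your prompt uses oh-my-zsh's git
# helper, which reads these per-repository flags.)
repo=$(mktemp -d)
git -C "$repo" init -q

# Tell the prompt to skip the expensive status/dirty checks here:
git -C "$repo" config --local oh-my-zsh.hide-status 1
git -C "$repo" config --local oh-my-zsh.hide-dirty 1
```

Other prompt frameworks have their own equivalents; the general fix is the same, i.e. stop running `git status` over the network on every prompt draw.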

bloke_zero
4

I've been doing testing with various tools on macOS 12.1 on an M1 Mac and wanted to share some possibly helpful results.

Short Version: Try using rclone mount instead of sshfs. This enabled me to get full gigabit speed both up and down.

A little more about my experience and testing:

Setup: M1 Mac connected over gigabit Ethernet to a server running Rocky 8, with a big high-speed RAID filesystem. Speeds below are in MB/s, so wire speed would be about 125 MB/s (1 Gb/s).

For me, the default settings of sshfs gave ~30 MB/s down from the server and the full 120 MB/s up. Using the option -o Ciphers=aes128-ctr increased that to about 50 MB/s down (arcfour is no longer supported in OpenSSH, so it didn't work).

Using rclone mount, I was able to get full 120+ MB/s both up and down, and the mount has otherwise worked great so far as well.

Most other non-mount tools I tried gave me roughly full wire speed up and down (Forklift, command line sftp, filezilla, rclone copy, rsync).

Cyberduck gave me very slow performance up and down, ~15 MB/s; I suspect this is due to compression that I have not been able to figure out how to turn off.

Jazz Weisman
  • One current issue with rclone is that it doesn't support symlinks: they can either be ignored, or treated as the files they're pointing to. That's probably a non-issue for media files, but it breaks my attempted use as a temp folder :( – Warbo Aug 24 '23 at 18:09
4

SSHFS is really slow because it transfers the file contents even when it does not have to (e.g. when doing cp). I reported this upstream and to Debian, but got no response. :/

  • It is efficient with `mv`. Unfortunately, when you run `cp` locally, FUSE only sees requests to open files for reading and writing. It does not know that you are making a copy of a file; to FUSE it looks no different from a general file write. So I fear this cannot be fixed unless the local `cp` is made more FUSE-aware/FUSE-friendly. (Or FUSE might be able to send block hashes instead of entire blocks when it suspects a `cp`, like rsync does, but that would be complex and might slow other operations down.) – joeytwiddle Sep 08 '16 at 05:00
2

NFS should be faster. How remote is the filesystem? If it's over the WAN, you might be better off just syncing the files back and forth, as opposed to direct remote access.

Adam Wagner
1

Either NFS or Samba if you have large files. Using NFS with something like 720p movies is really a PITA. Samba will do a better job, though I dislike Samba for a number of other reasons and wouldn't usually recommend it.

For small files, NFS should be fine.

Franz Bettag
0

I use plain SFTP. I did it primarily to cut out unneeded authentication but I am sure that dropping the layer of encryption helps, too. (Yes, I need to benchmark it.)

I describe a trivial usage here: https://www.quora.com/How-can-I-use-SFTP-without-the-overhead-of-SSH-I-want-a-fast-and-flexible-file-server-but-I-dont-need-encryption-or-authentication

0

sshfs is certainly not a very performant way to mount a remote file system in general, and other options are often faster. However, if you experience incredibly sluggish performance, it might be that some I/O is happening over the SSH connection that you are not aware of.

To investigate what is happening, you can mount with sshfs -d, which runs sshfs in the foreground and displays debugging information, so you can see what kind of requests are being made to the remote host. This helps you understand what is happening and whether any of that I/O should be happening in the first place.

This is not relevant to the question, but here's what my problem was specifically: a simple ls was taking 8 seconds to complete. Using debug mode, I found that during the ls there were requests like /libselinux.so.1 and /libpcre.so.3, etc., which made no sense to me. I then figured out that my LD_LIBRARY_PATH variable contained a trailing :, so it essentially contained an empty entry, which caused shared libraries to be looked up over SSHFS.
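The trailing-colon pitfall is easy to reproduce: an empty entry in a colon-separated search path is treated by the dynamic loader as the current directory, so every library lookup starts wherever you happen to be, e.g. on the sshfs mount. A quick sketch making the empty entry visible (the variable value is just an illustration):

```shell
# A path list with a trailing colon contains a hidden empty entry.
paths="/usr/lib:"

# Split on ':' so each entry is on its own line; the second line
# printed here is empty, which the loader treats as ".".
printf '%s\n' "$paths" | tr ':' '\n'
```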

jlh
0

@Meetai.com's answer was pure magic for me.

I'm on Linux Mint Cinnamon 20.0 right now. Just to add on to that answer, here is a little script built around Meetai's solution: it pops up a list of hosts from your SSH config file to select from. My two cents.

#!/bin/bash

# List host aliases from the user's ssh config file (wildcard entries are skipped)
hosts="$(grep -P "^Host ([^*]+)$" "$HOME/.ssh/config" | sed 's/Host //')"

# Select a host from the list
select host in ${hosts}; do echo "You selected ${host}"; break; done

# Create the mount point if needed, then mount the host with sshfs
mkdir -p ~/mnt/"$host"
sshfs "$host":/ ~/mnt/"$host" -o auto_cache,reconnect,no_readahead
0

New option: max_conns

Since version 3.7.0 sshfs includes an option called max_conns.

This option has the potential to greatly improve your performance.

Check your sshfs version with the following command:

sshfs -V

If your version is >= 3.7.0, then consider adding the following option:

-o max_conns=4

Where 4 is the number of cores on your machine (you can check this with the command below):

# To retrieve the number of cores:
grep -c ^processor /proc/cpuinfo

NOTE

This might have an impact on the CPU load used by ssh / sshfs. If you do not want to saturate your CPU for disk access, consider using a lower connection count.
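Putting the note into practice, a sketch that derives a max_conns value from the local core count (capping at 4 is my own assumption, and the mount command is only echoed with placeholder host and paths; `nproc` is the usual shortcut, with the answer's /proc/cpuinfo grep as fallback):

```shell
# Count cores; nproc is the common shortcut, with /proc/cpuinfo as a
# fallback for systems without coreutils' nproc.
cores=$(nproc 2>/dev/null || grep -c ^processor /proc/cpuinfo)

# Cap the connection count at 4 so sshfs does not saturate the CPU.
conns=$(( cores < 4 ? cores : 4 ))

# Placeholder host and paths; adjust for your own setup.
echo "sshfs -o max_conns=$conns remote:/path /mnt/point"
```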

JohannesB