Any chance this work can be upstreamed into mainline SSH? I'd love to have better performance for SSH, but I'm probably not going to install and remember to use this just for the few times it would be relevant.
I doubt this would ever be accepted upstream. That said, if one wants speed, play around with lftp [1]. It has a mirror subsystem that can replicate much of rsync's functionality against a chroot sftp-only destination, and it can use multiple TCP/SFTP streams both across a batch upload and per file, meaning one can saturate just about any upstream link. I have used this for transferring massive postgres backups, and because I am paranoid about applications that automatically split transfers into parts, I include a checksum file at the source and then verify the destination files.<p>The only downside I have found using lftp is that, given there is no corresponding daemon on the destination <i>as there is with rsync</i>, directory enumeration can be slow if there are a lot of nested sub-directories. Oh, and the syntax is a little odd <i>for me anyway</i>; I always have to look at my existing scripts when setting up new automation.<p><i>Demo to play with, download only. Try different values. This will be faster on your servers, especially anything within the data-center.</i><p><pre><code> ssh mirror@mirror.newsdump.org # do this once to accept key as ssh-keyscan will choke on my big banner
mkdir -p /dev/shm/test && cd /dev/shm/test
lftp -u mirror, -e "mirror --parallel=4 --use-pget=8 --no-perms --verbose /pub/big_file_test/ /dev/shm/test;bye" sftp://mirror.newsdump.org
</code></pre>
For automation add <i>--loop</i> to repeat job until nothing has changed.<p>[1] - <a href="https://linux.die.net/man/1/lftp" rel="nofollow">https://linux.die.net/man/1/lftp</a>
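The checksum step mentioned above can be sketched in a few lines. This is a minimal, local-only illustration (mktemp directories and the `db.dump` filename stand in for the real source and destination paths; the `cp` stands in for the lftp mirror transfer):

```shell
# Stand-ins for the real source and destination directories.
src=$(mktemp -d) && dst=$(mktemp -d)
printf 'backup data\n' > "$src/db.dump"

# Source side: record checksums before the lftp mirror run.
(cd "$src" && sha256sum db.dump > SHA256SUMS)

# (lftp mirror would transfer db.dump and SHA256SUMS here; cp stands in.)
cp "$src/db.dump" "$src/SHA256SUMS" "$dst/"

# Destination side: verify every file arrived intact.
(cd "$dst" && sha256sum -c SHA256SUMS)
```

`sha256sum -c` exits non-zero on any mismatch, which makes it easy to gate the rest of an automation script on a clean transfer.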
The usual answer that I have heard to the performance problems in the conversion from scp to sftp is to use rsync.<p>The design of sftp is such that it cannot fully exploit TCP sliding windows: sftp layers its own flow-control window on top of the SSH channel, which caps the data in flight on high-latency connections well below what TCP alone would allow. Thus, the migration from scp to sftp has involved a performance loss, which is well-known.<p><a href="https://daniel.haxx.se/blog/2010/12/08/making-sftp-transfers-fast/" rel="nofollow">https://daniel.haxx.se/blog/2010/12/08/making-sftp-transfers...</a><p>Rsync is not a workable answer here, however, as OpenBSD has reimplemented the rsync protocol in a new codebase:<p><a href="https://www.openrsync.org/" rel="nofollow">https://www.openrsync.org/</a><p>An attempt to combine the BSD-licensed rsync with OpenSSH would likely see it stripped out of GPL-focused distributions, where the original GPL rsync has long standing.<p>It would be more straightforward to design a new SFTP implementation that implements sliding windows.<p>I understand (but have not measured) that forcibly reverting to the original scp protocol will also raise performance in high-latency conditions. This does reintroduce an attack surface, should not be the default transfer tool, and demands thoughtful care.<p><a href="https://lwn.net/Articles/835962/" rel="nofollow">https://lwn.net/Articles/835962/</a>
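For what it's worth, forcing the original scp protocol is a single flag on recent OpenSSH clients (8.8+). A dry-run sketch, with a hypothetical host and filename; the last line echoes the command rather than running it:

```shell
# scp in OpenSSH 9.0+ speaks SFTP internally; -O reverts to the
# original scp protocol. Host and paths below are placeholders.
remote="user@remote.example.com"
cmd="scp -O bigfile.tar $remote:/tmp/"
echo "$cmd"   # dry run; run the command itself for a real transfer
```

As noted above, the legacy protocol has known weaknesses, so this belongs in a deliberate, per-transfer decision rather than an alias.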
Wow, I hadn't heard of this before. You're saying it can "chunk" large files when operating against a remote sftp-subsystem (OpenSSH)?<p>I often find myself needing to move a single large file rather than many smaller ones but TCP overhead and latency will always keep speeds down.
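Yes: the single-large-file case is lftp's pget, which opens several parallel SFTP connections and fetches byte ranges of one file concurrently (this is what mirror's --use-pget applies per file). A dry-run sketch reusing the demo server from upthread; the filename `bigfile` is a placeholder, and the command is echoed rather than executed:

```shell
# pget -n 8 splits one file into 8 parallel SFTP streams.
cmd='lftp -u mirror, -e "pget -n 8 /pub/big_file_test/bigfile -o bigfile; bye" sftp://mirror.newsdump.org'
echo "$cmd"   # dry run; run the command itself against a real host
```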
Also, upstream is extremely well audited. That's a huge benefit I don't want to lose by using a fork.
OpenSSH is from the people at OpenBSD, which means performance improvements have to be carefully vetted against bugs, and, judging by the fact that they're still on FFS and the lack of TRIM support in 2025, that will not happen.
There's nothing inherently slow about UFS2; the theoretical performance profile should be nearly identical to Ext4. For basic filesystem operations, UFS2 and Ext4 will often be faster than more modern filesystems.<p>OpenBSD's filesystem operations are slow not because of UFS2, but because they simply haven't been optimized up and down the stack the way Ext4 has been on Linux or UFS2 has been on FreeBSD. And of course, OpenBSD's implementation doesn't have a journal (both UFS and Ext had journaling bolted on late in life), so filesystem checks (triggered on an unclean shutdown or after N boots) can take a long time, which often causes people to think their system has frozen or didn't come up. That user interface problem notwithstanding, UFS2 is extremely robust. OpenBSD is very conservative about optimizations, especially when they increase code complexity, and particularly for subsystems where the project doesn't have the time available to give them the necessary attention.
I admittedly don't really know how SSH is built, but it looks to me like the patch that "makes" it HPN-SSH is already present upstream[1]; it's just not applied by default?
Nixpkgs seems to allow you to build the pkg with the patch [2].<p>[1] <a href="https://github.com/freebsd/freebsd-ports/blob/main/security/openssh-portable/files/extra-patch-hpn" rel="nofollow">https://github.com/freebsd/freebsd-ports/blob/main/security/...</a><p>[2] <a href="https://github.com/NixOS/nixpkgs/blob/d85ef06512a3afbd6f90825dd8f5b6cef017bdd6/pkgs/tools/networking/openssh/default.nix#L40" rel="nofollow">https://github.com/NixOS/nixpkgs/blob/d85ef06512a3afbd6f9082...</a>
Upstream is either OpenBSD itself or <a href="https://github.com/openssh/openssh-portable" rel="nofollow">https://github.com/openssh/openssh-portable</a> , not the FreeBSD port. I'm... not sure why nix is pulling the patch from FreeBSD, that's odd.
Unlikely. These patches have been carried out-of-tree for over a decade precisely because upstream OpenSSH won't accept them.
Depending on your hardware architecture and security needs, fiddling with ciphers in mainline might improve speed.
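A low-risk way to experiment with that, assuming a stock OpenSSH client: list the ciphers the client supports locally, then pin a hardware-friendly one per connection (the hostname below is a placeholder, so the connecting commands are shown as comments):

```shell
# -Q cipher queries the local client; it does not open a connection.
ssh -Q cipher
# Then pin a cipher for a given session or transfer, e.g.:
#   ssh -c aes128-gcm@openssh.com user@host.example.com
#   scp -c aes128-gcm@openssh.com bigfile user@host.example.com:/tmp/
```

On x86 hardware with AES-NI, the AES-GCM and AES-CTR ciphers are usually the fast options; on small ARM boards without crypto extensions, chacha20-poly1305@openssh.com often wins instead.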
If folks find this interesting, then mosh[1] may also be for you.
Different trade-offs.<p>[1]: <a href="https://mosh.org/" rel="nofollow">https://mosh.org/</a>
This is very cool and I think I'll give it a try, though I'm wary of using a forked SSH, so I would love to see these changes land upstream.<p>I've been using mosh for over a decade now and it is amazing. Add rsync for file transfers and I've felt pretty well set. If you haven't checked out mosh, you should definitely do so!
To clarify the moving parts: SFTP isn't a separate transport; it tunnels over SSH (OpenSSH or otherwise). The -p flag selects the TCP port, 22 by default, and a Port directive in ~/.ssh/config (e.g. Port 10901) works the same way.
I don't think it comes as a surprise that you can improve performance by re-implementing ciphers, but what is the security trade-off? Many well-audited cipher implementations are intentionally less performant in order to operate in constant time and avoid side-channel attacks. Is it even possible to keep operations constant-time while multithreading them?<p>The only change I see here that is probably harmless and a speed boost is using AES-NI for AES-CTR. That should probably be an upstream patch. The rest is more iffy.
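The AES-NI point is easy to sanity-check locally with OpenSSL rather than OpenSSH. This assumes the `openssl` CLI is installed; the OPENSSL_ia32cap mask is the commonly used x86 knob for hiding the AES-NI CPUID bit (it is simply ignored on other architectures):

```shell
# -evp takes OpenSSL's accelerated code path for AES-128-CTR.
openssl speed -seconds 1 -evp aes-128-ctr 2>/dev/null | tail -n 2
# Masking the AES-NI bit forces the software fallback, which is
# typically several times slower on the same machine.
OPENSSL_ia32cap="~0x200000000000000" openssl speed -seconds 1 -evp aes-128-ctr 2>/dev/null | tail -n 2
```

Comparing the two throughput rows gives a rough upper bound on what an AES-NI patch could buy the SSH data path, before protocol overhead.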
It's not clear to me whether you need it on both ends to get an advantage?