But both HTTP/2 and QUIC (the "transport layer" of HTTP/3) are so general-purpose that I'm not sure the HTTP part really has a lot of meaning anymore. At least QUIC is relatively openly promoted as an alternative to TCP, with HTTP as its primary use case.
> SSH3 is probably going to change its name. It is still the SSH Connection Protocol (RFC4254) running on top of HTTP/3 Extended connect, but the required changes are heavy and too distant from the philosophy of popular SSH implementations to be considered for integration. The specification draft has already been renamed ("Remote Terminals over HTTP/3"), but we need some time to come up with a nice permanent name.
Of course you need to wait for ACKs at some point, though; otherwise they would be useless. That's how we detect, and potentially recover from, broken links. They are a feature. And HTTP/3 has that feature.
Is it better implemented than the various TCP algorithms we use underneath regular SSH? Perhaps. That remains to be seen. The use case of SSH (long-lived connections with shorter-lived channels) is vastly different from the short-lived bursts of many connections that QUIC was intended for. My best guess is that it could go both ways, depending on the actual implementation. The devil is in the details, and there are many details here.
Should you find yourself limited by the default buffering of SSH (10+ Gbit intercontinental links), that's called a "long fat link" in network lingo, and it is not what TCP was built for. Look at pages like this Linux tuning guide for high-latency networks: https://fasterdata.es.net/host-tuning/linux/
There is also the HPN-SSH project, which increases the buffers of SSH even more than what is standard. It is seldom needed anymore, since both Linux and OpenSSH have improved, but it can still be useful.
If you ever use wifi at an airport, or at some hotels and work suites around the world, you will notice that Apple Mail can't send or receive email. It is probably some company-wide policy to first block port 25 (that is even the case with some hosting providers), all in the name of fighting spam. Pretty soon 143, 587, 993, 995... are all blocked too. I guess 80 and 443 are the only ones that can get through any firewall nowadays. It is a shame, really. Hopefully v6 will do better.
So there you go. And now the EU wants to do ChatControl! Please stop this nonsense and listen to the people who actually know tech.
Host *.internal.example.com
ProxyCommand ssh -q -W %h:%p hop.internal.example.com
in the SSH client config would make everything in that domain hop over that hop server. It's one extra connection - but with everything correctly configured, that should be barely noticeable. Auth is also proxied through.
The stated reasons to use HTTP/3 rather than QUIC directly make sense, with very little downside - you can run it behind any standard HTTP/3 reverse proxy, under some subdomain or path of your choosing, without standing out to port scanners. While security through obscurity is not security, there's no doubt that it reduces the CPU overhead you would otherwise incur when scanners discover your SSH server and try a bunch of login attempts.
Running over HTTP/3 has an additional benefit: it becomes harder to block. If your SSH traffic just looks like you're on some website with lots of network traffic, e.g. Google Meet, then it becomes a lot harder to block it without blocking all web traffic over HTTP/3. Even if you do that, you could likely still get a working but suboptimal emulation over HTTP/1.1 CONNECT.
This SSH window size limit is per SSH "stream", so it could be overcome by many parallel streams, but most programs do not make use of that (scp, rsync, piping data through the ssh command), so they are much slower than plain TCP as measured, e.g., by iperf3.
I think it's silly that this exists. They should just let TCP handle this.
People were (wisely) blocking port 25 twenty years ago.
Host *.internal.example.com
ProxyJump hop.internal.example.com
ssh -J hop.internal.example.com foo.internal.example.com
But it's still irrelevant here; it's specifically called out in the README:
> The keystroke latency in a running session is unchanged.
A better 'working name' would be something like sshttp3, lol. Obviously it's not the successor to SSH2.
Non-doers are the bottom rung of the ladder, don't ever forget that :).
https://www.ietf.org/archive/id/draft-michel-ssh3-00.html
However, it can also use HTTP mechanisms for authentication/authorization.
EDIT: Looking at the relevant RFC [1] and the OpenSSH sshd_config manual [2], it looks like the answer is that the protocol supports having the jump server decide what to do with the host/port information, but the OpenSSH server software doesn't present any relevant configuration knobs.
[1]: https://www.rfc-editor.org/rfc/rfc4254.html#section-7.2
[2]: https://man7.org/linux/man-pages/man5/sshd_config.5.html
"It is often the case that some SSH hosts can only be accessed through a gateway. SSH3 allows you to perform a Proxy Jump similarly to what is proposed by OpenSSH. You can connect from A to C using B as a gateway/proxy. B and C must both be running a valid SSH3 server. This works by establishing UDP port forwarding on B to forward QUIC packets from A to C. The connection from A to C is therefore fully end-to-end and B cannot decrypt or alter the SSH3 traffic between A and C."
More or less, maybe but not automatically like you suggest, I think. I don't see why you couldn't configure a generic proxy to set it up, though.
Having SSH in the name helps developers quickly understand the problem domain it improves upon.
Is it because it is hard to detect what type of request is being sent? Stream vs. non-stream, etc.?
Firstly, I love the satirical name of tempaccount420; I was also just watching memes and this post is literally me (Ryan Gosling).
I was also thinking about this literally yesterday, being a bit delusional in hoping to create a better SSH using HTTP/3 or some other minor improvement, because I had made a comment about Tor routing and linking it to things like serveo, and I was thinking of enhancing that idea or something, lol.
Actually, it seems that I had already starred this project but had forgotten about it; this is primarily the reason why I star GitHub projects, and it might be where I got the idea of HTTP/3 with SSH in the first place.
Seems like a really great project (I think)
Now, one question that I have: could SSH be made modular, in the sense that we can split the transport layer apart from SSH as this project does, without too many worries?
Like, I want to create an SSH-ish something with, let's say, something like iroh as the transport layer; are there any libraries or resources that can do something like that? (I won't do it for iroh, but I always like mixing and matching, and I am thinking of some different ideas like SSH over Matrix/XMPP/Signal too; the possibilities could be limitless!)
So having the ease of mind that when I block someone in Entra ID, they will also be locked out of all servers immediately—that would be great actually.
> PAM TOTP (or even just password+OTP) into HTTP auth
But why would you? Before initiating a session, users will have to authorise to the IdP, which probably includes MFA or Passkeys anyway. No need for PAM anymore at all.
I meant this in jest but now that I think about it, it actually could be a decent name (?)
It's not like we saw a lot of downsides when the world collectively agreed on TCP/IP over IPX/SPX or DECnet or X.25. Or when the Linux kernel ended up everywhere.
20 years ago (2005), STARTTLS was still widely in use. Clients can be configured to bail out when STARTTLS isn't available. But clients can also be served bogus or snake-oil TLS certs. Certificate pinning wasn't widely in use for SMTP in 2005.
It seems STARTTLS has been deprecated since 2018 [1].
Quote: For email in particular, in January 2018 RFC 8314 was released, which explicitly recommends that "Implicit TLS" be used in preference to the STARTTLS mechanism for IMAP, POP3, and SMTP submissions.
[1] https://serverfault.com/questions/523804/is-starttls-less-sa...
The secret path (otherwise returning 404) would need brute-force protection (at the HTTPd level?). I think it is easier to run SSH on a non-standard port on IPv6, but it remains true that anyone with network read access between the endpoints can figure it out.
What isn't explained is why one would care about 100 ms of latency during auth. I'd rather have mosh, which supports resuming and works over high-latency links (though IIRC it won't work over Tor?). But even then, with LTE and NG, my connections over mobile have become very stable here in NL (YMMV).
> The stream multiplexing capabilities of QUIC allow reducing the head-of-line blocking that SSHv2 encounters when multiplexing several SSH channels over the same TCP connection
....
> Each channel runs over a bidirectional HTTP/3 stream and is attached to a single remote terminal session
[0] https://www.ietf.org/archive/id/draft-michel-remote-terminal...
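To make the per-channel independence concrete, here's a minimal sketch using the quic-go library (which, as far as I can tell, SSH3 builds on); the host name and ALPN value are placeholders, exact function signatures vary between quic-go releases, and this is not the project's actual code:

package main

import (
    "context"
    "crypto/tls"
    "fmt"
    "log"

    "github.com/quic-go/quic-go"
)

func main() {
    ctx := context.Background()
    // "h3" is the HTTP/3 ALPN token; the server cert must be trusted.
    tlsConf := &tls.Config{NextProtos: []string{"h3"}}

    conn, err := quic.DialAddr(ctx, "ssh3.example.com:443", tlsConf, nil)
    if err != nil {
        log.Fatal(err)
    }
    defer conn.CloseWithError(0, "done")

    // Each "channel" gets its own bidirectional QUIC stream. A lost packet
    // on one stream only stalls that stream; the others keep delivering,
    // unlike SSHv2 channels sharing a single TCP byte stream.
    for i := 0; i < 3; i++ {
        str, err := conn.OpenStreamSync(ctx)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Fprintf(str, "channel %d says hi\n", i)
        str.Close()
    }
}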
Listing all the deficiencies of something, and putting together a thing that fixes all of them, is the kind of "designed by committee" project that everyone hates. Real progress requires someone to put together a quick project, with new features they think are useful, and letting the public decide if it is useful or not.
But it's a good start. Props for exploring that kind of space, which needs improvement but is difficult to get a foothold in.
With SSH, everybody does TOFU or copies host fingerprints around, vs. HTTPS, where setting up Let's Encrypt is a no-brainer and you're a weirdo if you even think about self-signed certs. Now you can do the same with SSH, but do you?
For authentication, SSH relies on long-lived keys rather than short-lived tokens. Yes, I know about SSH certificates, but again, it's a hassle to set up compared to using any of a million IdPs with OAuth2 support. This gives you a central place to manage access and mandate MFA.
Finally, you had better hope your corporate IT has not blocked the SSH port as a security threat.
As long as said proxy supports upgrading an HTTP CONNECT into a bidirectional connection. Most that I know of do, but it may require additional configuration.
Another advantage of using HTTP/3 is that it makes it easier to authenticate using something like OAuth 2, OIDC, SAML, etc., since it can use the normal HTTP flow instead of needing to copy a token from the HTTP flow into a different flow.
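Conceptually (a rough sketch, not SSH3's actual mechanism; the URL and token handling here are made up, and the stdlib client speaks HTTP/1.1/2, so a real SSH3 client would use an HTTP/3-capable library), the client just attaches the token the way any web API client would:

package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Obtained from your IdP via a normal OAuth 2 / OIDC flow.
    token := "eyJ..."

    req, err := http.NewRequest(http.MethodGet, "https://ssh.example.com/my-secret-path", nil)
    if err != nil {
        log.Fatal(err)
    }
    // Same Authorization header any web API client would send.
    req.Header.Set("Authorization", "Bearer "+token)

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    fmt.Println(resp.Status) // 200 if the server accepts the token, 401 otherwise
}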
Bummer. From a user perspective, I don't see the appeal. Connection setup time has never been an annoyance for me.
SSH is battle-tested. This feels risky to trust, even once they end up declaring it production-ready.
How would that even work? Do you open your browser, log in, and then somehow transfer the session into your ssh client in a terminal? Does the browser assimilate the terminal?
And let me remind you, HTTP authentication isn't a login form. It's the browser's built-in "HTTP username + password" form and its cousins. We're talking HTTP 401. The only places this is widely used are API bearer tokens and NTLM/Kerberos SSO.
> Before initiating a session, users will have to authorise to the IdP, which probably includes MFA or Passkeys anyway. No need for PAM anymore at all.
Unfortunately I need to pop your bubble, PAM also does session setup, you'd still need it. And the other thing here is — you're solving your problem. Hard-relying on HTTP auth for this SSH successor needs to solve everyone's problem. And it's an incredibly bad fit for a whole bunch of things.
Coincidentally, SSH's mechanisms are also an incredibly bad fit; password authentication is in there as a "hard" feature; it's not an interactive dialog and you can't do password+TOTP there either. For that you need keyboard-interactive auth, which I'm not sure but feels like it was bolted on afterwards to fix this. Going with HTTP auth would probably repeat history quite exactly here, with at some point something else getting bolted to the side…
Hopefully provides a way to pin certs or at least pin certificate authorities && has PFS.
My conspiracy hat doesn't trust all the cert auths out there.
To be fair, a Go project as the sole implementation (I assume that's what this is?) is a no-go; for example, we couldn't even deploy it on all our systems, since last I checked Go doesn't support ppc64. (BE, not ppc64le)
I also don't see a protocol specification in there.
[edit] actually, no, this is not SSH over QUIC. This is SSH over single bidi stream transport over QUIC, it's just a ProxyCommand. That's not how SSH over QUIC should behave, it needs to be natively QUIC so it can take advantage of the multi-stream features. And the built-in TLS.
qsh might be taken by QShell
https://en.m.wikipedia.org/wiki/Qshell
There's a whole GitHub issue where the name was bikeshedded to death.
You start the ssh client in the terminal, it opens a browser to authenticate, and once you're logged in you go back to the terminal. The usual trick to exfiltrate the authentication token from the browser is that the ssh client runs an HTTP server on localhost to which you get redirected after authenticating.
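Something like this, as a rough sketch of that loopback-redirect trick (the IdP endpoint and query parameters are made up, and a real client would also handle state/PKCE and exchange the code for a token):

package main

import (
    "fmt"
    "log"
    "net"
    "net/http"
)

func main() {
    // Bind a throwaway HTTP server to an ephemeral port on loopback only.
    ln, err := net.Listen("tcp", "127.0.0.1:0")
    if err != nil {
        log.Fatal(err)
    }
    codeCh := make(chan string, 1)

    go http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // The IdP redirects the browser here with ?code=... after login.
        codeCh <- r.URL.Query().Get("code")
        fmt.Fprintln(w, "Logged in. You can close this tab and return to the terminal.")
    }))

    redirect := fmt.Sprintf("http://%s/callback", ln.Addr())
    fmt.Println("Open this in a browser and log in:")
    fmt.Println("https://idp.example.com/authorize?client_id=my-ssh-client&redirect_uri=" + redirect)

    code := <-codeCh // the authorization code arrives via the local redirect
    fmt.Println("got authorization code:", code)
    // ...exchange the code for a token at the IdP's token endpoint, then
    // present that token when opening the SSH session.
}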
That’s a shame. Lowered latency (and persistent sessions, so you don’t pay the connection cost each time) are the best things about Mosh (https://mosh.org/).
Why not just SSH/QUIC, what does the HTTP/3 layer add that QUIC doesn’t already have?
The YouTube and social media eras made everyone so damn dramatic. :/
Mosh solves a problem. tmux provides a "solution" for some people, working around a design decision that can impact some user workflows.
I guess what I'm saying here is: if you NEED mosh, then running tmux is not even a hard ask.
That's pretty well covered by RFC 8628 and doesn't even require a browser on the same device where the SSH client is running.
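Roughly like this (a sketch with placeholder endpoints and client_id; a complete client would also honour the slow_down and expiry responses):

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
    "net/url"
    "time"
)

func main() {
    // Step 1: ask the IdP for a device code and a short code for the user.
    resp, err := http.PostForm("https://idp.example.com/oauth/device",
        url.Values{"client_id": {"my-ssh-client"}, "scope": {"ssh"}})
    if err != nil {
        log.Fatal(err)
    }
    var dev struct {
        DeviceCode      string `json:"device_code"`
        UserCode        string `json:"user_code"`
        VerificationURI string `json:"verification_uri"`
        Interval        int    `json:"interval"`
    }
    if err := json.NewDecoder(resp.Body).Decode(&dev); err != nil {
        log.Fatal(err)
    }
    resp.Body.Close()
    fmt.Printf("Visit %s on any device and enter code %s\n", dev.VerificationURI, dev.UserCode)

    // Step 2: poll the token endpoint until the user approves.
    for {
        time.Sleep(time.Duration(dev.Interval) * time.Second)
        resp, err := http.PostForm("https://idp.example.com/oauth/token", url.Values{
            "grant_type":  {"urn:ietf:params:oauth:grant-type:device_code"},
            "device_code": {dev.DeviceCode},
            "client_id":   {"my-ssh-client"},
        })
        if err != nil {
            log.Fatal(err)
        }
        if resp.StatusCode == http.StatusOK {
            var tok struct {
                AccessToken string `json:"access_token"`
            }
            json.NewDecoder(resp.Body).Decode(&tok)
            resp.Body.Close()
            fmt.Println("got access token, ready to authenticate the SSH session")
            return
        }
        resp.Body.Close() // authorization_pending: keep polling
    }
}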
> And let me remind you, HTTP authentication isn't a login form. It's the browser built-in "HTTP username + password" form and its cousins. We're talking HTTP 401. The only places this is widely used is API bearer tokens and NTLM/Kerberos SSO.
That depends entirely on the implementation. It could also be a redirect response which the client chooses to delegate to the user's web browser for external authentication. It's just the protocol. How the client interprets responses is entirely up to the implementation.
> Unfortunately I need to pop your bubble, PAM also does session setup, you'd still need it.
I don't see why, really. It might just as well be an opaque part of a newer system to reconcile remote authorization with local identity, without any interaction with PAM itself necessary at all.
> And the other thing here is — you're solving your problem. Hard-relying on HTTP auth for this SSH successor needs to solve everyone's problem. And it's an incredibly bad fit for a whole bunch of things.
But isn't that the nice part about HTTP auth, that it's so extensible it can solve everyone's problems just fine? At least it does so on the web, daily, for billions of users.
It's done because the web stack exists and is understood by the web/infrastructure folks, not because it represents any kind of local design optima in the non-web space.
Using the web stack draws in a huge number of dependencies on protocols and standards that are not just very complex, but far more complex than necessary for a non-web environment, because they were designed around the constraints and priorities of the web stack. Complicated, lax, text-based formats easily parsed by javascript and safe to encode in headers/json/query parameters/etc, but a pain to implement anywhere else.
Work-arounds (origin checks, CORS, etc) for the security issues inherent in untrusted browsers/javascript being able to make network connections/etc.
We've been using Kerberos and/or fetching SSH keys out of an LDAP directory to solve this problem for literal decades, and it worked fine, but if that won't cut it, solving the SSH certificate tooling problem would be a MUCH lighter-weight solution here than adopting OAuth and having to tie your ssh(1) client implementation to a goddamn web browser.
Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.
Mosh is like VNC or RDP for terminal contents: natively variable frame rate, plus somewhat adaptive predictive local echo for reducing latency perception; think client-side cursor handling with VNC, and with RDP I'd even assume there might be capability for client-side text echo rendering.
If you haven't tried mosh in situations with a mobile device where you experience connection changes during usage, you don't know just how much better it is than "mere tmux over ssh".
I honestly don't know of a more resilient protocol than mosh that's in regular usage, other than possibly link-layer 802.11n, aka "the Wi-Fi that got those 150/300/450 Mbit speed claims advertised onto the market", where link-layer retransmissions, adaptive negotiation of coding parameters, and actively multipath-exploiting MIMO-OFDM (plus AES crypto from WPA2) combine into a setup that hides radio interference from higher-level protocols, beyond the unavoidable jitter of the retransmissions and the varying throughput of varying radio conditions.
Oh, and if we're talking computers rather than the congestion control schemes adjusting individual connection speeds, there's also BitTorrent with DHT and PEX, which only needs an infohash: with 160 bits of hash, a client seeded into the (mainline) DHT swarm can go and retrieve a (folder of) files from an infohash-specific swarm that's at least partially connected to the DHT (PEX takes care of broadening the connectivity among those that care about the specific infohash).
In the realm of digital coding schemes that are widely used but aren't of the "transmission" variety, there's also Red Book CD audio, which starts off easy with lossless error correction, followed by perceptually effective lossy interpolation to cover severe scratches to the disc's surface.
I am concerned however about the tedious trend of cramming absolutely everything into HTTP. DNS-over-HTTP is already very dumb, and I'm quite sure SSH-over-HTTP is not something I'm going to be interested in at all.
All of that isn't really important, though. What makes a major point for using HTTP w/ TLS as a transport layer is the ecosystem and tooling around it. You'll get authorization protocols like OIDC, client certificate authentication, connection resumption and migration, caching, metadata fields, and much more, out of the box.
This is HTTP authentication: https://httpd.apache.org/docs/2.4/mod/mod_auth_basic.html
https://github.com/francoismichel/ssh3/blob/5b4b242db02a5cfb...
https://www.iana.org/assignments/http-authschemes/http-auths...
Note the OAuth listed there is OAuth 1.0. Support for "native" HTTP authentication was removed in OAuth 2.0.
This discussion is about using HTTP authentication. I specifically said HTTP authentication in the root post. If you want to do SSH + web authentication, that's a different thread.
Rule of thumb: if you need HTML in any step of it (and that includes as part of generating a token), it's web auth, not HTTP.
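To make that concrete, this is about the entirety of what "HTTP authentication" involves on the server side; no HTML anywhere, just a 401 challenge and an Authorization header. A minimal sketch with placeholder credentials and realm (a real server would use constant-time comparison and a proper credential store):

package main

import (
    "log"
    "net/http"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        user, pass, ok := r.BasicAuth()
        if !ok || user != "alice" || pass != "s3cret" {
            // Challenge the client: a header and a status code, no login page.
            w.Header().Set("WWW-Authenticate", `Basic realm="ssh3"`)
            http.Error(w, "unauthorized", http.StatusUnauthorized)
            return
        }
        w.Write([]byte("hello " + user + "\n"))
    })
    log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}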
It has always bothered me somewhat. I sometimes use ssh to directly execute a command on a remote host.
I'm guessing SSH3 doesn't do anything to improve that aspect? (although I guess QUIC will help a bit, but isn't quite the same as Mosh is it?)
I have to disagree pretty strongly on this one. Case in point: WebSockets. That protocol switch is "nifty" but breaks fundamental assumptions about HTTP and to this day causes headaches in some types of server deployments.
If you are designing a protocol, unless you have a secret deal with telcos, I suggest you masquerade it as something like HTTP so that it is more difficult to slow down your traffic.
So your super-speedy HTTP SSH connection then ends up being slower than if you had just used ssh. Especially if your HTTP traffic looks rogue.
At least when it's its own protocol, you can come up with strategies to work around the censorship.
Feels like a spinning hammer meant to drive screws because somebody has never seen a drill before.
1. High latency, maybe even packet-dropping connections;
2. You’re roaming and don’t want to get disconnected all the time.
For 2, sure, tmux is mostly okay; it's not as versatile as the native buffer if you use a good terminal emulator, but whatever. For 1, using tmux in mosh gives you an awful, high-latency scrollback buffer compared to the local one you get with regular ssh. And you were specifically talking about 1.
For read-heavy, reconnectable workloads over high latency connections I definitely choose ssh over mosh or mosh+tmux and live with the keystroke latency. So saying it’s a huge downside is not an exaggeration at all.
Wait, what? Does it actually work?
If yes, this is a huge deal. This potentially solves the ungodly clusterfuck of SSH key/certificate management.
(I don't know how OpenID is supposed to interact with private keys here.)
Please don't give things short abbreviated names. Use full names for commands. Teach full names. When you present something, show full names. If this project used a full name like `remote-terminals-over-http3`, we would not be having this debate about ssh3.
Of course, end users and system administrators and even package managers/distributions are free to add abbreviations, but we should be teaching people to use full names.
Prefer things like Set-Location over cd. Prefer npm install --global over npm i -g. Prefer remote-terminals-over-http3 over ssh3.
I've seen very little do that. Probably just HTTP, and it's using a slash specifically to emphasize a big change.
Be it PAM, or whatever OpenBSD is doing, the session setup kills performance, whether you're re-using the SSH connection or not, every time you start something within that connection.
Now obviously for long running stuff, that doesn't matter as much as the total overhead. But if you're doing long running ssh you're probably using SSH for its remote terminal purposes and you don't care if it takes 0.5 seconds or 1 second before you can do anything. And if you want file transfer, we already had a HTTP/3 version of that - it's called HTTP/3.
Ansible, for example, performs really poorly in my experience precisely because of this overhead.
Which is why I ended up writing my own mini-ansible which instead runs a remote command executor which can be used to run commands remotely without the session cost.
Of course, maybe there's a perfectly obvious word which can apply to all of those kinds of situations just as clearly without being a misnomer I've just never thought to mention in reply :D.
SSH multiplexes multiple channels on the same TCP connection which results in head of line blocking issues.
> Should you find yourself limited by the default buffering of SSH (10+Gbit intercontinental links), that's called "long fat links" in network lingo, and is not what TCP was built for.
Not really, no. OpenSSH has a 2 MB window size (in the 2000s it was 64 KB); even with just ~gigabit speeds, it only takes around 10-20 ms of latency to start being limited by the BDP.
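Back-of-envelope, with made-up but typical numbers:

package main

import "fmt"

func main() {
    const (
        bandwidthBits = 1_000_000_000   // 1 Gbit/s link
        rttSeconds    = 0.020           // 20 ms round trip
        windowBytes   = 2 * 1024 * 1024 // OpenSSH per-channel window
    )
    // Bandwidth-delay product: bytes that must be in flight to fill the link.
    bdpBytes := bandwidthBits / 8 * rttSeconds
    fmt.Printf("BDP: %.1f MB\n", bdpBytes/1024/1024) // ~2.4 MB, already above the window

    // Throughput ceiling if the sender stops at the window and waits for ACKs.
    maxThroughput := float64(windowBytes) / rttSeconds * 8 / 1_000_000_000
    fmt.Printf("2 MB window caps you at roughly %.2f Gbit/s\n", maxThroughput) // ~0.84
}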
No, unfortunately it's necessary so that the SSH protocol can multiplex streams independently over a single established connection.
If one of the multiplexed streams stalls because its receiver is blocked or slow, and the receive buffer (for that stream) fills up, then without window-based flow control, that causes head-of-line blocking of all the other streams.
That's fine if you don't mind streams blocking each other, but it's a problem if they should flow independently. It's pretty much a requirement for opportunistic connection sharing by independent processes, as SSH does.
In some situations, this type of multiplexed stream blocking can even result in a deadlock, depending on what's sent over the streams.
Solutions to the problem are to either use window-based flow control, separate from TCP's, or to require all stream receive buffers to expand without limit, which is normally unacceptable.
HTTP/2 does something like this.
I once designed a protocol without this, thinking multiplexing was enough by itself, and found out the hard way when processes got stuck for no apparent reason.
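For the curious, the fix looks roughly like this: a per-stream credit counter that the sender consumes and the receiver refills with explicit window updates, the way SSH channels and HTTP/2 streams do it. A toy sketch, not any particular implementation:

package main

import (
    "fmt"
    "sync"
)

type streamWindow struct {
    mu     sync.Mutex
    cond   *sync.Cond
    credit int // bytes the peer is currently willing to buffer for this stream
}

func newStreamWindow(initial int) *streamWindow {
    w := &streamWindow{credit: initial}
    w.cond = sync.NewCond(&w.mu)
    return w
}

// consume blocks until n bytes of credit are available, then takes them.
// A stream whose receiver stops granting credit stalls here, while other
// streams (each with their own window) keep sending on the shared connection.
func (w *streamWindow) consume(n int) {
    w.mu.Lock()
    defer w.mu.Unlock()
    for w.credit < n {
        w.cond.Wait()
    }
    w.credit -= n
}

// grant is called when the receiver has drained its buffer and sends a
// WINDOW_UPDATE-style message back to the sender.
func (w *streamWindow) grant(n int) {
    w.mu.Lock()
    w.credit += n
    w.mu.Unlock()
    w.cond.Broadcast()
}

func main() {
    w := newStreamWindow(4096)
    w.consume(1024) // send 1 KB of stream data
    w.grant(1024)   // receiver drained its buffer and returned the credit
    fmt.Println("credit left:", w.credit) // 4096 again
}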
You’ll see when the logs drop!
RFC 4253 (SSH Transport Layer Protocol) [1] says:
> It is expected that in most environments, only 2 round-trips will be needed for full key exchange, server authentication, service request, and acceptance notification of service request. The worst case is 3 round-trips.
I've never experienced any issues with session initialization time. It should be affected by the configuration of both server and client. Not that I've ever noticed this being an issue (no matter how much we complain, the internet here is pretty decent).
Edit: seeing as someone downvoted your hour-old comment just as I was adding this first reply, I guess maybe they 'voted to disagree'... Would be nice if the person would comment. It wasn't me anyway
Because it's insecure to use on multi-user systems, as it presents opportunistic access to remote systems for root users on your local system: root can read and write into your UDS too.
As a user, you have to explicitly opt into this scenario if you deem it acceptable.
Also, HTTP/3 must obviously also be using some kind of acknowledgements, since for fairness reasons alone it must be implementing some congestion control mechanism, and I can't think of one that gets by entirely without positive acknowledgements.
It could well be more efficient than TCP's default "ack every other segment", though. (This helps in the type of connection mentioned above; as far as I know, some DOCSIS modems do this via a mechanism called "ack compression", since TCP is generally tolerant of losing some ACKs.)
In a sense, the win of QUIC/HTTP/3 in this sense isn’t that it’s not TCP (it actually provides all the components of TCP per stream!); it’s rather that the application layer can “provide its own TCP”, which might well be more modern than the operating system’s.
If you use the former without the latter, you'll inevitably have head-of-line blocking issues if your connection is bandwidth or receiver limited.
Of course not every SSH user uses protocol multiplexing, but many do, as it can avoid repeated and relatively expensive (in terms of CPU, performance, and logging volume) handshakes.
Filtering inbound UDP on one side is usually enough to break mosh, in my experience. Maybe they use better NAT traversal strategies since I last checked, but there's usually no workaround if at least one network admin involved actively blocks it.
A network admin can reasonably want to have the users of their network not run mail servers on it (as that gets IPs flagged very quickly if they end up sending or forwarding spam), while still allowing mail submission to their servers.
Blocking ports 587, 993, 995 etc. is indeed silly.
Importantly, it does not seem to switch out any security mechanisms and is both an implementation and a specification draft, which means that OpenSSH could eventually pick it up too so that people don't have to trust a different implementing party.
If implemented with latency in mind, yes. After a quick look at the code, it seems they are buffering data on both sides with hardcoded buffer sizes of 1500 bytes (typical packet size) or 30 KB, which could be negating any latency improvements.
Plus, HTTP auth isn't limited to the Basic, Digest, and Bearer schemes. There's nothing stopping an implementation from adding a new scheme if necessary, and adding it to the IANA registry.
> Similarly to your secret Google Drive documents, your SSH3 server can be hidden behind a secret link and only answer to authentication attempts that made an HTTP request to this specific link, like the following:
ssh3-server -bind 192.0.2.0:443 -url-path <my-long-secret>
Remember when Github had to rotate its host keys? It was hitting the news far and wide, and likely broke pretty close to every single CI pipeline out there. There was little heads up because it's the friggin host key, you have to act now.
It's also pretty annoying when you have to deal with that in your own infra. Even if you have a pretty good network/service map, you'll probably have silent breakage somewhere.
I'm not saying CAs should be the future of SSH, but TOFU is certainly a problem at scale.
HMU on my email. I've been working on/with this since 2016, and I'd love to discuss: <https://github.com/rollcat/judo>
Remember OpenSSH = OpenBSD. They have an opinionated & conservative approach towards adopting certain technologies, especially if it involves a complex stack, like QUIC.
"It has to be simple to understand, otherwise someone will get confused into doing the wrong thing."
And what's "lighter" than Wireguard? It's about as simple as it can get (certainly simpler than QUIC).
HTTP/3 (and hopefully this project) does not have this problem.
* Give users a config option so I can adjust it to my use case, like I can for TCP. Don't just hardcode some 2 MB value (which was even raised to this from a smaller value in the past, showing how futile it is to hardcode it, because it clearly needs adjusting to people's networks and ever-increasing speeds). It is extremely silly that within my own networks, controlling both endpoints, I cannot achieve TCP speeds over SSH, but I can with nc and a symmetric cipher piped in. It is silly that any TCP/HTTP transfer is reliably faster than SSH.
* Implement data dropping and retransmissions to handle blocking -- like TCP does. It seems obviously asking for trouble to want to implement multiplexing, but then only implement half of the features needed to make it work well.
When one designs a network protocol, shouldn't one of the first sanity checks be "if my connection becomes 1000x faster, does it scale"?
ssh is not a shell and ssh is not a terminal, so please everybody stop suggesting name improvements that more deeply embed that confusion.
back in the day, we had actual terminals, and running inside was our shell which was sh. then there was also csh. then there was the idea of "remote" so rsh from your $SHELL would give you a remote $SHELL on another machine. rsh was not a shell, and it was not a terminal. There were a whole bunch of r- prefixed commands, it was a family, and nobody was confused, these tools were not the thing after the r-, these tools were just the r- part.
then it was realized that open protocols were too insecure so all of the r- remote tools became s- secure remote tools.
http is a network protocol that enables other things and gets updated from time to time, and it is not html or css, or javascript; so is ssh a network protocol, and as I said, not a shell and not a terminal.
just try to keep it in mind when thinking of new names for new variants.
and if somebody wants to reply that tcp/ip is actually the network protocol, that's great, more clarification is always good, just don't lose sight of the ball.
There is not only censorship, but traffic shaping when some apps are given a slow lane to speed up other apps. By making your protocol identifiable you gain nothing good.
What am I missing?
However, it looks like pipelining (and obviously forking) could do a lot to help.
That being said, there were _many_ reasons for me to drop Ansible. Including poor non-linux host support, Yaml, the weird hoops you have to jump through to make a module, and difficulty achieving certain results given the abstraction choices.
I think Ansible is great, it solves a problem, but my problem was very specific, Ansible was a poor fit for it, and performance was just one of many nails in the coffin for me.
But we don't have to do that. Not on our own time. Don't use QUIC unless you're getting paid to do it.
From my stance: where I've used mosh has been in performing quick actions on routers and servers that may have bad connections to them, or may be under DDoS, etc. "Read" is extremely limited.
So from that perspective and use case, the "huge downside" has never been a problem.
i.e. the claim that this package is somehow abandoned and therefore should not be trusted is likely to be false
> i would imagine that distros are not keeping important patches like security to themselves.
I'm not 100% sure what "keeping to themselves" means in the context of GPL 3 code, but one can check the mosh GitHub link to see that the upstream project has not had a single commit on any branch for the last 2.5 years.
The project is dead, it's up to your trust+verification of any specific downstream packaging as to how much of a problem that is for the binary you may be using. Some maintainers may not have noticed/cared enough yet, some maintainers may only carry security fixes of known CVEs, some maintainers may be managing a full fork. The average reader probably wants to note that for their specific binary rather than note Fedora still packages a downstream version (which may be completely different).
Or, better but more difficult, it should track the dynamic TCP window size, from the OS when possible, combined with end-to-end measurements, and ensure the SSH mux channel windows grow to accommodate the TCP window, without growing so much that they starve other channels.
To your second point, you can't do data dropping and retransmission for mux'd channels over a single TCP connection. After data is sent from the application to the kernel socket, it can't be removed from the TCP transmission queue, will be retransmitted by the kernel socket as often as needed, and will reach the destination eventually, provided the TCP connection as a whole survives.
You can do mux'd data dropping and retransmission over a single UDP connection, but that's basically what QUIC is.