It's a shame that all of the X replacements on the horizon also ditch the network transparency part, even though that's been a killer feature for me. I hate how much of a hack Remote Desktop is on Windows machines.
I've used ssh -X many times in my life, but at the same time, I understand why network transparency doesn't make as much sense anymore.
Classically, the X protocol kept a huge amount of state on the server, and clients would just send small requests to manipulate that data. Drawing would take place server-side, and it was relatively simple drawing at that, since UI needs were fairly modest. Text rendering occurred on the server with small strings sent from the client. And overall, it made sense to send a small amount of data to produce a large amount of graphics.
These days, however, rendering occurs almost exclusively on a combination of the client and the GPU. Everything takes place via large memory buffers, and compositing. And pushing those over the wire doesn't work nearly as well; if you're going to do that, you need protocols designed for it, with compression and delta-encoding, much like modern video codecs (but optimized for remote-desktop).
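To make the delta-encoding point concrete, here is a toy sketch (mine, not from any real protocol) of how a remote-display protocol might ship only the changed tiles of a framebuffer rather than the whole buffer; real systems like VNC or RDP do something far more sophisticated, with tiled comparison feeding into a video codec:

```python
# Toy delta-encoding of a framebuffer, illustrative only.

def delta_encode(prev, curr, tile=4):
    """Return (offset, bytes) pairs for tiles that changed since prev."""
    updates = []
    for i in range(0, len(curr), tile):
        if curr[i:i + tile] != prev[i:i + tile]:
            updates.append((i, curr[i:i + tile]))
    return updates

def delta_apply(prev, updates):
    """Reconstruct the current frame from the previous one plus deltas."""
    buf = bytearray(prev)
    for i, data in updates:
        buf[i:i + len(data)] = data
    return bytes(buf)

prev = bytes(16)                          # previous frame: all zeros
curr = bytes([0] * 4 + [9] * 4 + [0] * 8) # one 4-byte tile changed
updates = delta_encode(prev, curr)
# Only the one changed tile crosses the wire, not all 16 bytes.
assert delta_apply(prev, updates) == curr
```

The payoff is exactly the one described above: for a mostly-static desktop, the wire traffic is proportional to what changed, not to the size of the screen.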
So I think it makes sense that network transparency is being rethought, in favor of protocols that look a lot more like remote-video (and remote-audio).
ssh -X is useful for something like clipboard forwarding from your local desktop to remote server (and back). You can work in something like nvim remotely, and have a nice clipboard integration. What will happen with such functionality after the switch to Wayland?
In theory, a saner approach is using a local editor to edit remote files.
But people don’t do that. Why? Because the available implementations suck. NFS sucks because it’s slow and hangs your programs; sshfs is the same but worse. Vim has native remote file support in the form of scp://, which invokes the scp command every time you open or save a file, synchronously, so even when saving you have to wait. This might be vaguely tolerable if you configure ssh to share connections, but in general it sucks. And with any of those, you have to set up a separate connection to the server and re-find the directory you’re in, which may or may not be time-consuming. By contrast, typing vi into the SSH prompt usually just works, so that’s what people do.
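For what it’s worth, the “configure ssh to share connections” workaround is OpenSSH’s ControlMaster connection multiplexing; a typical snippet looks like this (the ControlPath location and the 10-minute persistence are just common choices, not defaults):

```
# ~/.ssh/config — reuse one TCP connection across repeated ssh/scp
# invocations, which makes vim's scp:// noticeably less painful
Host *
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

With this in place, each scp:// open or save piggybacks on the existing connection instead of doing a full handshake and re-authentication.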
But there’s no fundamental technical limitation here. Imagine if you could type ‘edit foo.conf’ on the server and have it pop open a native editor on the client side. File transfer would be tunneled through the existing SSH connection, with no setup time or need for re-authentication; it would be fully asynchronous, so the editor would never hard-block waiting for an operation to complete, but it would still indicate when saves have been completed, so you know when it’s safe to run commands on the server that use the new files. Thing is, none of what I just said is especially complex or challenging from a technical perspective; it’s just that it would require changes to a bunch of components (ssh on the client and server; vim) that weren’t designed with that use case in mind.
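The asynchronous-save behavior described above can be sketched in a few lines. This is a minimal illustration under made-up names (the SAVE/ACK messages and the queues standing in for the SSH channel are all hypothetical, not any real protocol): the editor queues a save and returns immediately, and a completion notice arrives whenever the server-side write finishes.

```python
# Sketch of the asynchronous save protocol described above.
# Queues stand in for the multiplexed SSH channel; SAVE/ACK are
# invented message names for illustration.
import queue
import threading

to_server = queue.Queue()  # editor -> server-side helper
to_editor = queue.Queue()  # server-side helper -> editor

def server():
    # Stands in for the sshd-side helper that performs the write.
    op, path, data = to_server.get()
    assert op == "SAVE"
    # ... write `data` to `path` on the server's filesystem ...
    to_editor.put(("ACK", path))  # notify the editor when done

def editor_save(path, data):
    # Returns immediately: the editor never hard-blocks on a save.
    to_server.put(("SAVE", path, data))

t = threading.Thread(target=server)
t.start()
editor_save("foo.conf", b"contents")
# The editor stays responsive here; the ACK arrives when the write
# completes, telling you it's now safe to use the file server-side.
msg = to_editor.get(timeout=5)
t.join()
```

The key design point is that the acknowledgement is a separate message rather than a blocking return value, which is exactly what lets the editor indicate “saved” without ever freezing the UI.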
Anyway, I’m getting a little off topic here - but I think the point generalizes. In my ideal world, the future of X forwarding would be fixing the most common use cases so you don’t need X forwarding. For the rest, well, the ‘server sets up a tunnel over SSH’ approach could work just as well for VNC, or for an improved remote display protocol that uses a video codec.
In the real world, I suppose things will just fester, remote display support will get worse, and remote editing will continue to suck. But I can dream. Or maybe implement something myself, but there are so many other things to do…
That is, both methods have their trade-offs. I find using remote nvim easier, even if you have to install and configure it first. At least if we are talking about some stable, commonly used environment, and not about ad-hoc editing, in which case remote mounting might be a better option.
Yes, this is the basis of the very popular and widely used TRAMP component in Emacs.
>Imagine if you could type ‘edit foo.conf’ on the server and have it pop open a native editor on the client side.
Yes, this is possible with emacsclient.
> Yes, this is the basis of the very popular and widely used TRAMP component in Emacs.
I’m sure TRAMP is better than vim’s netrw, but at least according to [1] it’s not fully asynchronous. It also seems to require a separate SSH invocation; I know you can configure SSH to reuse physical connections, but at least when I once tried it, it didn’t work very well…
> Yes, this is possible with emacsclient.
Judging by [2], it doesn’t seem to work well for that use case (talking to emacs over an SSH tunnel)… Assuming you can get it to work, can you make it work with multiple SSH connections to arbitrary servers, without any per-server configuration? If so, that’s quite interesting and maybe I should check it out. If not, then it still may be quite useful, but it wouldn’t be versatile enough to truly render editor-over-SSH unnecessary.
[1] https://www.reddit.com/r/emacs/comments/23dm4c/is_it_possibl...
[2] https://emacs.stackexchange.com/questions/15144/emacsclient-...
It has a lot of what you're talking about. Typing `atom foo.conf` to open up files locally. Keeping the same connection open and reconnecting without needing to reauthenticate with SSH.
There's also a lot of other stuff that needs to be handled. File searching needs to be hooked up in a reasonable way. Language services do as well.
Facebook has done a great job with remote editing (although I don't think the open source version of Nuclide is a priority).
A split editor, with a local UI and a remote filesystem, can be made to work, but you do have to install something on the remote end, which brings you back to the original problem: the remote end doesn't have the environment you want, so you just suffer with whatever is there.
It seems that when you connect to a remote box, many times what you really want is to import its files. But despite there having been a lot of network shared filesystems, none seems to have become the obvious solution, presumably because it's hard to solve that problem efficiently in the general case, especially for an operating system, Unix, that attaches to files an API that was never designed for this.
And you may find yourself
Behind the wheel of a large Microsoft Mouse
And you may find yourself on a beautiful PC
With a beautiful Window
And you may ask yourself, well
How did I get here?
>Can you make it work with multiple SSH connections to arbitrary servers, without any per-server configuration?
Certainly, of course. Depending on your level of hackery, you can:
- Make ssh automatically copy over an emacsclient wrapper script which does the correct thing
- Use OpenSSH's Unix socket forwarding on each connection to connect emacsclient to your local Emacs server
- Use https://github.com/magit/with-editor
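For the socket-forwarding option, a rough sketch looks like this (both socket paths are examples, not defaults; Unix-socket forwarding requires OpenSSH 6.7 or later):

```
# Forward your local Emacs server socket onto the remote host over the
# existing SSH connection:
ssh -R /tmp/emacs-client.sock:/run/user/1000/emacs/server remotehost

# Then, on the remote host, point emacsclient at the forwarded socket:
emacsclient --socket-name=/tmp/emacs-client.sock foo.conf
```

One caveat: emacsclient passes the filename as the remote machine sees it, while your local Emacs would try to open it locally, which is why the wrapper-script and with-editor approaches rewrite the path into a TRAMP remote path before handing it over.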
I've never bothered, though, because I run all my shells on remote hosts via TRAMP within Emacs, so when I am in a shell, I can just open the remote files through the normal Emacs keybindings, rather than typing "emacsclient filename". Thanks to with-editor, this also extends automatically to commands invoked on the remote host which run $EDITOR.
>I don’t have any experience with Emacs, but…
It's not surprising. :) You said "using vi over ssh - or any editor - has always struck me as a hack", "is an unpopular opinion". In fact, it's the consensus in Emacs land. :) Though perhaps that doesn't prevent it from being an unpopular opinion, these days. :)