It's a shame that all of the X replacements on the horizon also ditch the network transparency part, even though that's been a killer feature for me. I hate how much of a hack Remote Desktop is on Windows machines.
I've used ssh -X many times in my life, but at the same time, I understand why network transparency doesn't make as much sense anymore.
Classically, the X protocol kept a huge amount of state on the server, and clients would just send small requests to manipulate that state. Drawing took place server-side, and it was relatively simple drawing at that, since UI needs were fairly modest. Text rendering occurred on the server with small strings sent from the client. Overall, it made sense to send a small amount of data to produce a large amount of graphics.
These days, however, rendering occurs almost exclusively on a combination of the client and the GPU. Everything takes place via large memory buffers and compositing. Pushing those over the wire doesn't work nearly as well; if you're going to do that, you need protocols designed for it, with compression and delta-encoding, much like modern video codecs (but optimized for remote desktop).
So I think it makes sense that network transparency is being rethought, in favor of protocols that look a lot more like remote-video (and remote-audio).
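The delta-encoding idea mentioned above can be sketched with a toy example (the tile size, frame contents, and the dict "wire format" here are all made up for illustration, not any real protocol): instead of resending the whole framebuffer each frame, only the tiles that changed since the previous frame cross the wire.

```python
# Toy illustration of delta-encoding for remote desktop: compare the new
# frame to the previous one tile by tile, and transmit only changed tiles.

TILE = 4  # tile edge length in pixels (tiny, purely for illustration)

def tiles(frame, size, tile=TILE):
    # Split a flat, row-major size x size frame into tile x tile blocks,
    # keyed by the (x, y) of each tile's top-left corner.
    out = {}
    for ty in range(0, size, tile):
        for tx in range(0, size, tile):
            block = tuple(frame[(ty + r) * size + tx : (ty + r) * size + tx + tile]
                          for r in range(tile))
            out[(tx, ty)] = block
    return out

def delta(prev, curr, size):
    # The "wire format": only tiles whose contents changed are included.
    old, new = tiles(prev, size), tiles(curr, size)
    return {pos: block for pos, block in new.items() if old[pos] != block}

SIZE = 8
frame0 = [0] * (SIZE * SIZE)   # blank 8x8 frame
frame1 = list(frame0)
frame1[0] = 255                # a single pixel changes

update = delta(frame0, frame1, SIZE)
print(len(update))             # only 1 of the 4 tiles needs to be sent
```

Real protocols (VNC, RDP, and friends) layer compression and smarter encodings on top, but the core win is the same: the update is proportional to what changed, not to the size of the screen.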
Windows and Mac use similar terminology to "display server" too, but I guess those are proprietary components that few people look into...
Actually, I think of the client/server relationship as a higher level abstraction above the mechanics of the actual socket connection, which can go either way, or both, or many relationships at the same time in different directions and between different sites.
All sites can offer any number of different asynchronous message-based services to each other at all times.
It shouldn't matter who was listening, who connected first, or how they connected, just that all sites can send messages back and forth on the same channel. It could be UDP, multicast, WebRTC, distributed peer-to-peer, or a central message dispatcher.
A (web client / X server) site with the display and GPU and mouse offers presentation and rendering and input services to different (web server / X client) sites, which themselves offer database and storage and computing services the other way, at the same time over the same channels.
That's just false. A web server listen(2)s/accept(2)s. A web browser or other HTTP client connect(2)s. An X server listen(2)s/accept(2)s. An X client connect(2)s.
In neither case would it make sense to invert this relationship. In both cases a server is an endpoint that serves its resources to multiple clients. Are people really advocating that the graphics card connect to its programs, rather than the programs requesting access to it? That makes very little sense, just as a web server would not proactively connect to its end users' web browsers. The requests happen in the other direction.
For example, whether your weed dealer "Dave" calls you on the phone and tells you he can drop by if you're interested, or you call "Dave" and ask him to drop by, "Dave" is still the server, and you're still the client.
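The asymmetry is easy to demonstrate at the socket level. A minimal sketch in Python (loopback address and message contents are made up for illustration): the server role bind()s, listen(2)s, and accept(2)s; the client role connect(2)s; and once the connection exists, bytes flow in both directions over it regardless of who initiated.

```python
import socket
import threading

# Server role: bind, listen(2), accept(2) -- what an X server or web server does.
# (Port 0 lets the OS pick a free port; a real X server listens on 6000 + display.)
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve_one():
    conn, _ = srv.accept()                    # the server never connect(2)s out
    with conn:
        request = conn.recv(1024)             # requests arrive from the client...
        conn.sendall(b"reply to " + request)  # ...but bytes flow both ways afterwards

t = threading.Thread(target=serve_one)
t.start()

# Client role: connect(2) to the listening endpoint -- what an X client or browser does.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", port))
    cli.sendall(b"draw rectangle")
    reply = cli.recv(1024).decode()

t.join()
srv.close()
print(reply)
```

Which process initiated the TCP connection says nothing about who is "in charge" afterwards: X events flow server-to-client and requests flow client-to-server over the same socket, which is exactly the point of the "Dave" analogy.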