
Ancient X11 scaling technology

(flak.tedunangst.com)
283 points by todsacerdoti | 2 comments
pedrocr ◴[] No.44369891[source]
That's probably better than most scaling done on Wayland today, because it renders directly at the target resolution instead of doing the "draw at 2x scale and then scale down" dance that was popularized by OS X and copied by Linux. Doing it that way both costs performance and produces blurry output. The only corner case a compositor needs to cover is a client straddling two outputs. Even then you can render at the larger of the two sizes and get pixel-perfect output on one output with the same blurriness downside on the other, so it's still strictly better.

It's strange that Wayland didn't do it this way from the start, given its philosophy of delegating most things to the clients. All you really need for arbitrary scaling is to tell apps "you're rendering to an MxN pixel buffer, and as a hint, the scaling factor of the output you'll be composited to is X.Y". After that the client can handle events in real coordinates and scale in the best way possible for its particular context. For a browser, PDF viewer, or image-processing app that can render at arbitrary resolutions, not being able to do that is very frustrating when you want good quality and performance. Hopefully we'll finally be getting that in Wayland now.
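
In code terms, the whole contract could be as small as this (a C++ sketch with made-up names, not a real Wayland interface):

    // Hypothetical contract (made-up names): the compositor hands the
    // client a device-pixel buffer size plus a scale hint, and never
    // touches the pixels again.
    struct SurfaceConfig {
        int pixel_width;    // M: buffer width in device pixels
        int pixel_height;   // N: buffer height in device pixels
        double scale_hint;  // X.Y: e.g. 1.785, purely advisory metadata
    };

    void layout_widgets(double w, double h);  // illustration stub
    void rasterize(int w, int h);             // illustration stub

    void render(const SurfaceConfig& cfg) {
        // Lay out in logical units derived from the hint...
        layout_widgets(cfg.pixel_width / cfg.scale_hint,
                       cfg.pixel_height / cfg.scale_hint);
        // ...but paint once, directly at full device resolution: no
        // "draw at 2x, scale down" pass, and input events arrive in
        // the same device-pixel coordinates.
        rasterize(cfg.pixel_width, cfg.pixel_height);
    }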

replies(12): >>44370069 #>>44370123 #>>44370577 #>>44370717 #>>44370769 #>>44371423 #>>44371694 #>>44372948 #>>44373092 #>>44376209 #>>44378050 #>>44381061 #
kccqzy ◴[] No.44370123[source]
> doing the "draw at 2x scale and then scale down" dance that was popularized by OSX

Originally OS X defaulted to drawing at 2x scale without any scaling down, because the hardware was designed to have the right number of pixels for 2x scale. The first retina MacBook Pro in 2012, for example, had exactly 2x the pixel width and height of the earlier non-retina MacBook Pro.

Eventually, I guess, the cost of the hardware made this too hard to sustain. For example, how many different SKUs are there for 27-inch 5K LCD panels versus 27-inch 4K ones?

But before Apple committed to integer scaling factors and then scaling down, it experimented with more traditional approaches. You can see this in earlier OS X releases such as Tiger or Leopard. The thing is, it probably took too much effort for even Apple itself to implement in its first-party apps, so Apple knew there would be low adoption among third-party apps. Take a look at this HiDPI rendering example in Leopard: https://cdn.arstechnica.net/wp-content/uploads/archive/revie... It was Apple's own TextEdit app, and it was buggy. They did have a nice UI for changing the scaling factor to non-integral values: https://superuser.com/a/13675

replies(4): >>44370977 #>>44371108 #>>44374789 #>>44375798 #
pedrocr ◴[] No.44370977[source]
> Originally OS X defaulted to drawing at 2x scale without any scaling down because the hardware was designed to have the right number of pixels for 2x scale.

That's an interesting related discussion. The idea that there is a physically correct 2x scale, and that fractional scaling is a tradeoff, is not necessarily right. First, because different users will place the same monitor at different distances from their eyes, or have different eyesight, or a myriad other differences, so the ideal scaling factor for the same physical device depends on the user and the setup. But more importantly, having integer scaling be sharp and snapped to pixels while fractional scaling is a tradeoff is mostly a software limitation. GUI toolkits can still place all their UI at pixel boundaries even if you give them a target scaling of 1.785. They do need extra logic to do that, and most don't have it. But in a weird twist of destiny the most used app these days is the browser, and its rendering engines are designed to output at arbitrary factors natively, yet in most cases can't because the windowing system forces these extra transforms on them. 3D engines are another example: they can output at whatever arbitrary resolution is needed but aren't allowed to. Most games can probably get around that in some kind of fullscreen mode that bypasses the scaling.
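
To illustrate, the extra logic is mostly careful rounding; something like this sketch (not any particular toolkit's code), where snapping each edge independently keeps adjacent widgets seam-free:

    #include <cmath>

    struct PixelRect { int x, y, w, h; };

    // Snap a logical-units rectangle to whole device pixels at a
    // fractional scale. Rounding each edge (rather than the origin and
    // the size) means two widgets that share a logical edge land on the
    // same pixel row/column: sharp borders, no gaps or overlaps.
    PixelRect snap(double x, double y, double w, double h, double scale) {
        int x0 = static_cast<int>(std::lround(x * scale));
        int y0 = static_cast<int>(std::lround(y * scale));
        int x1 = static_cast<int>(std::lround((x + w) * scale));
        int y1 = static_cast<int>(std::lround((y + h) * scale));
        return {x0, y0, x1 - x0, y1 - y0};
    }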

I think we've mostly ignored these issues because computers are so fast and monitors so high-resolution that the significant performance penalty (easily 2x) and the introduced blurriness mostly go unnoticed.

> Take a look at this HiDPI rendering example in Leopard

That's a really cool example, thanks. At one point Ubuntu's Unity had a fake fractional scaling slider that just used integer scaling plus font-size changes for the intermediate levels. That actually works very well from the user's point of view. Because of the current limitations in Wayland I still mostly do that manually. It works great for a single monitor, and it can work for multiple monitors if the scaling factors line up, because the font scaling is universal rather than per-output.

replies(2): >>44371039 #>>44371226 #
sho_hn ◴[] No.44371039[source]
What you want is exactly how fractional scaling works (on Wayland) in KDE Plasma and other well-behaved Wayland software: The scale factor can be something quirky like your 1.785, and the GUI code will generally make sure that things nevertheless snap to the pixel grid to avoid blurry results, as close to the requested scaling as possible. No "extra window system transforms".
replies(5): >>44371141 #>>44371886 #>>44371928 #>>44373804 #>>44380686 #
pedrocr ◴[] No.44371141[source]
That's what I was referring to with "we'll be finally getting that in Wayland now". For many years the Wayland protocol could only communicate integer scale factors to clients. If you asked for 1.5, the compositors asked all the clients to render at 2x at a suitably faked size and then scaled that to the final output resolution. That's still mostly the case in what's shipping right now, I believe. And even with integer scaling, things like events are sent to clients in virtual coordinates, instead of just going "here's your NxM buffer, all events are in those physical coordinates, all scaling is just metadata I give you to do whatever you want with". There were practical reasons for that in the beginning, for backwards compatibility, but actual direct scaling is having to be retrofitted now. I'll be really happy when I can set 1.3 scaling in sway and have it simply mean that sway tells Firefox the scale factor is 1.3 and gets back a final buffer that doesn't need any transformations. I haven't checked very recently, but it wasn't possible not too long ago. If it is now, I'll be a happy camper and need to upgrade some software versions.
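
For the record, the protocol that finally enables this is wp-fractional-scale-v1 paired with wp_viewport: the compositor reports the preferred scale in 120ths, and the client submits a full-resolution buffer while the viewport declares the logical size. Roughly like this, assuming headers generated by wayland-scanner (error handling omitted):

    #include <wayland-client.h>
    #include "fractional-scale-v1-client-protocol.h"
    #include "viewporter-client-protocol.h"

    // The compositor sends the preferred scale in 120ths: 180 means 1.5x.
    static void preferred_scale(void* data,
                                struct wp_fractional_scale_v1* obj,
                                uint32_t scale_120) {
        double scale = scale_120 / 120.0;
        // Re-render the surface at round(logical_size * scale) device
        // pixels; the compositor will not resample the result.
        (void)data; (void)obj; (void)scale;
    }

    static const struct wp_fractional_scale_v1_listener scale_listener = {
        preferred_scale,
    };

    // After binding the wp_fractional_scale_manager_v1 and wp_viewporter
    // globals (names are from the protocol XML):
    //   fs = wp_fractional_scale_manager_v1_get_fractional_scale(mgr, surface);
    //   wp_fractional_scale_v1_add_listener(fs, &scale_listener, state);
    //   wp_viewport_set_destination(viewport, logical_w, logical_h);
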
replies(2): >>44371184 #>>44371338 #
sho_hn ◴[] No.44371338[source]
In KDE Plasma we've supported the approach you want for quite a few years, because Qt is a cross-platform toolkit that already supported fractional scaling on e.g. Windows, and we just went ahead and put the mechanisms in place to make use of that on Wayland.

The standardized protocols are more recent (and of course we heavily argued for them).

Regarding the way the protocol works and something having to be retrofitted, I think you are maybe a bit confused about the way the scale factor and buffer scale work on wl_output and wl_surface?

But in any case, yes, I think the happy camper days are coming for you! I also find the macOS approach atrocious, so I appreciate the sentiment.

replies(2): >>44371467 #>>44372499 #
pedrocr ◴[] No.44371467[source]
Thanks! By retrofitting I mean needing a new protocol with a new opt-in method, where some apps get integer scales and go through a transform while others get a fractional scale and render directly at the output resolution. If this had worked "correctly" from the start, the compositors wouldn't even need to know anything about scaling. As far as they were concerned, the scaling metadata could have been an opaque value passed from the user's config to the clients to figure out. I assume we're stuck forever with all compositors having to understand all this instead of just punting the problem completely to clients.
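
To spell out the two paths that now have to coexist, the legacy one looks something like this (a pseudocode-level C++ sketch with made-up helper names, not any compositor's actual code):

    #include <cmath>

    // Made-up types/helpers for illustration only.
    struct Client { int logical_w, logical_h; };
    void request_buffer(Client&, int w, int h);       // ask client to render w x h
    void set_composite_transform(Client&, double f);  // compositor-side resample

    // Fallback for a client that only understands integer scales: ask for
    // the next integer up, then resample down at composite time. This is
    // the costly, blurry path that fractional-aware clients get to skip.
    void configure_legacy_client(Client& c, double output_scale) {
        int buffer_scale = static_cast<int>(std::ceil(output_scale)); // 1.5 -> 2
        request_buffer(c, c.logical_w * buffer_scale,
                          c.logical_h * buffer_scale);
        set_composite_transform(c, output_scale / buffer_scale);     // 0.75 here
    }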

When you say you've supported this for quite some years, was there a custom protocol in KWin to let clients render directly at the fractionally scaled resolution? About 4 years ago I was frustrated by this when I benchmarked a 2x slowdown going from a RAW file to the same number of pixels on screen under fractional scaling, and at least in sway there wasn't a way to fix it, or much appetite to implement one. It's great to see it's mostly in place now and just needs to be enabled across the stack.

replies(1): >>44371855 #
sho_hn ◴[] No.44371855[source]
Oh, ok. Yeah, this I agree with, and I think plenty of people do: having integer-only scaling in the core protocol at the start was definitely a regrettable oversight and is a wart on things.

> When you say you've supported this for quite some years, was there a custom protocol in KWin to let clients render directly at the fractionally scaled resolution?

Qt had a bunch of different mechanisms for telling it to use a fractional scale factor, from setting an env var to doing it inside a "platform plugin" that each Qt process loads at runtime (Plasma provides one), etc. We also had a custom-protocol-based mechanism (zwp_scaler_dev, iirc) that basically had a set_scale taking a 'fixed' instead of an 'int'. Ultimately, though, this was all pretty Qt-specific in practice. To get adoption outside of just our stack a standard was of course needed. What we can claim, though, is that we were always pretty firm that we wanted proper fractional scaling and were willing to put in the work.
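
The env var route still exists, for what it's worth; a minimal illustration (normally the user or the platform sets the variable rather than the app itself):

    #include <QApplication>
    #include <QDebug>

    int main(int argc, char** argv) {
        // Force a fractional factor before the QApplication exists,
        // equivalent to launching with QT_SCALE_FACTOR=1.5 in the env.
        qputenv("QT_SCALE_FACTOR", "1.5");
        QApplication app(argc, argv);
        qDebug() << "devicePixelRatio:" << app.devicePixelRatio();
        return 0;
    }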