
204 points WithinReason | 36 comments
1. mrweasel ◴[] No.40715746[source]
And once that becomes generally available, operating systems will eat the bandwidth in an instant, and any speed-up to be gained on a desktop will be completely negated.

It seems like we're stuck at a pre-set level of latency, which is just within what people tolerate. I was watching a video of someone running Windows 3.11 and noticed that windows close instantly; on Windows 10 and 11 I've never seen there NOT be a small delay between the user clicking close and the window disappearing.

replies(5): >>40715815 #>>40716021 #>>40716089 #>>40716389 #>>40717169 #
2. vladvasiliu ◴[] No.40715815[source]
> which on Windows 10 and 11 I've never seen there NOT be a small delay between the user clicking close and the window disappearing.

Isn't that delay related to the default animations? On my particular machine with animations disabled, if I click the minimize button, the window disappears instantly. This is your standard win11 on a shitty enterprise laptop running some kind of 11th gen i7u with the integrated graphics and a 4k external display.

Maximization is sometimes janky, but I guess it's because the window needs to redraw its contents at the new size.

replies(1): >>40715875 #
3. PlutoIsAPlanet ◴[] No.40715875[source]
Modern operating systems render to buffers on the GPU and then composite them, which I would guess adds some latency (although likely unnoticeable).
replies(1): >>40716455 #
4. abofh ◴[] No.40716021[source]
Part of me wonders if this is the natural outcome of things. The world gets faster, so we're more efficient, now we have more memory and time in a datacenter, let's do it on our servers so we can charge a monthly fee! That works, but now things are slow until the next boost (say bandwidth/storage/whatever) - now we're more efficient, let's shove ads on it, there's plenty of end-user CPU cycles we can profit from.

It seems like enshittification is not just the inevitable outcome, but almost desirable (from a profit standpoint), and thus things getting faster for you only helps me (the vendor) if I can extract _more_ value from things being faster - otherwise why would I spend money to make things better?

5. eqvinox ◴[] No.40716089[source]
> It seems like we're stuck at a pre-set level of latency,

Bandwidth isn't latency, and PCIe 7.0 running as fast as 128 GT/s is no statement at all about its latency. I remember this great analogy from university: a truck carrying a full load of backup tapes across a country has amazing bandwidth but atrocious latency.

(I still agree with your sentiment, it's just that PCIe is not one of the problems in this regard. The connection between bandwidth becoming available and being eaten up vs. latency is a red herring; it's all about properly engineering software for responsiveness.)
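
To make that concrete, here's a quick back-of-the-envelope in Python (all numbers are illustrative assumptions, not from any spec):

    # Sneakernet: a truck full of backup tapes vs. a network link.
    # All figures below are assumed for illustration.
    tapes = 10_000                # tapes in the truck
    tape_capacity_tb = 18         # capacity per tape, TB
    drive_time_s = 24 * 3600      # one-day drive across the country

    payload_bits = tapes * tape_capacity_tb * 1e12 * 8
    bandwidth_gbps = payload_bits / drive_time_s / 1e9
    print(f"truck bandwidth: {bandwidth_gbps:,.0f} Gb/s")   # ~16,667 Gb/s
    print(f"truck latency: {drive_time_s // 3600} hours to first byte")

Enormous bandwidth, terrible latency - and adding more tapes to the truck does nothing for the latency.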

replies(2): >>40716402 #>>40716632 #
6. szundi ◴[] No.40716389[source]
Yes, we should ban any progress for 10 years, and everything would be so cheap.

Then one leap, the next OS eats it all, but cheap again.

/s

replies(1): >>40716418 #
7. szundi ◴[] No.40716402[source]
If your Win27k startup is an 8K 120fps video of a butterfly transforming into a Windows logo - then it is latency.

Btw, all bandwidth is built to reduce latency, isn't it? A bit of philosophy, heh.

replies(2): >>40716686 #>>40716931 #
8. LoganDark ◴[] No.40716418[source]
The problem isn't that computers are more capable, it's that they're being used inefficiently.
9. LoganDark ◴[] No.40716455{3}[source]
It's not unnoticeable. Ever notice how on Windows, when you start to drag a window, your cursor disappears for a frame? That's Windows replacing your cursor with a software-rendered one so it doesn't appear ahead of the window. But drag anything else (e.g. browser tabs, text highlighting) and you'll quickly notice it lagging behind the cursor. Why? Because the cursor is a hardware overlay that can be moved before composition is actually complete, while everything composited lags one frame behind. That one frame is the price of the compositor. It may not sound like much, but it is, especially when most displays are only 60 FPS.

Of course, it's only one of the contributing factors to the total latency of things like keystrokes: https://danluu.com/input-lag/
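
For a sense of scale, one frame of lag in milliseconds (a trivial sketch):

    # One frame of compositor lag at common refresh rates.
    for hz in (60, 120, 144, 240):
        print(f"{hz:3d} Hz: one frame = {1000 / hz:.1f} ms")
    # 60 Hz -> 16.7 ms of extra latency, well within what people can feel.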

replies(2): >>40717055 #>>40717112 #
10. ZiiS ◴[] No.40716632[source]
GT/s is a measure of latency (not total system latency, but the bus itself is only adding 128 billionth of a second). In fact it does not say anything about bandwidth if you don't know how many bits in a transfer.
replies(2): >>40716746 #>>40716981 #
11. eqvinox ◴[] No.40716686{3}[source]
No, neither of these are true. If Win27k startup is an 8k 120fps video, it is either latency or stutter if you don't have enough bandwidth. You can absolutely design a system with priorities set such that latency is above stutter-/drop-free playback, and if you do, the startup time will be unaffected by that bandwidth.

And, no, not all bandwidth is built to reduce latency. There is a lot of bulk, best-effort traffic - for example, YouTube and Netflix proactively distributing videos between datacenters across the world. (They totally do that before anyone ever clicks play, they have enough data to know what is likely to be needed where.)

The same applies to your YouTube/Netflix playback at home. It doesn't need to be low latency. The only effect of latency is a longer time between you clicking play and playback actually starting. From there onwards, you just need enough bandwidth to keep the buffer filled, and you can do that quite a bit ahead of reaching playback position. Latency is a real non-issue there.

Same locally for bulk copying files around. If your OS & FS are designed well, latency only shows up at the beginning of the operation. Most file systems were designed when data lived on rotating rust, and that's dealt with via readahead and the like.
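
A toy model of why the buffer hides link latency after startup (numbers are assumptions for illustration):

    # Streaming playback: link latency only affects time-to-first-frame.
    link_latency_s = 0.5      # assumed delay before the first byte arrives
    bitrate_mbps = 25         # assumed video bitrate
    bandwidth_mbps = 100      # assumed link bandwidth
    buffer_target_s = 10      # assumed buffer filled before playback starts

    startup_s = link_latency_s + buffer_target_s * bitrate_mbps / bandwidth_mbps
    print(f"time to start playback: {startup_s:.1f} s")
    # After startup, 100 Mb/s > 25 Mb/s keeps the buffer full indefinitely;
    # the 0.5 s of link latency never shows up again.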

12. out_of_protocol ◴[] No.40716746{3}[source]
Throughput != latency, and is often a tradeoff against latency (e.g. if you send stuff in big batches, a database can process 100k tx/sec, but one by one it's 1k tx/sec at most).
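A sketch of that tradeoff, with made-up but plausible numbers:

    # Batching amortizes fixed per-round-trip overhead over many transactions,
    # raising throughput while increasing the latency of each transaction.
    round_trip_ms = 1.0   # assumed commit/network overhead per request
    per_tx_us = 1.0       # assumed CPU cost per transaction

    for batch in (1, 10, 1000):
        batch_ms = round_trip_ms + batch * per_tx_us / 1000
        tps = batch / batch_ms * 1000
        print(f"batch={batch:5d}: {tps:9,.0f} tx/sec, tx waits ~{batch_ms:.2f} ms")
    # batch=1 -> ~1k tx/sec; batch=1000 -> ~500k tx/sec, but each tx waits longer.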
13. retrac ◴[] No.40716931{3}[source]
Latency and bandwidth are often in tension. (And guaranteeing low latency can eat up a big chunk of theoretically available bandwidth, due to overhead.)

The canonical example is probably a dial-up modem or other slow link between two locations. The latency is under 1 second to send one byte over the modem. But it's probably faster to just ship a hard disk if you want to send 100 gigabytes from one location to the other, even though the latency might be hours or even days until the first byte arrives.

In practice, you can send lots of tiny little packets with lots of overhead (but low latency) or you can send lots of big heavily buffered packets with low overhead (but with high latency).

This is why multiplayer game protocols often consist of a constant stream of tiny UDP packets containing events like "character moved 40 units east at game time ..." or "character fired weapon at game time ...." Even a 10 kilobyte bulk state update is going to cost at least a few milliseconds, more probably tens or even hundreds of milliseconds over some wireless connection. And that's a very noticeable lag.
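
The arithmetic, as a rough sketch (assumed figures):

    # 100 GB over a 56 kbit/s modem vs. shipping a hard disk.
    modem_bps = 56_000
    payload_bits = 100e9 * 8
    print(f"modem: {payload_bits / modem_bps / 86400:,.0f} days")   # ~165 days
    # A shipped disk arrives in 1-2 days: far better bandwidth, hours of latency.

    # The game-update example: 10 kB over an assumed 1 Mbit/s effective uplink.
    uplink_bps = 1_000_000
    print(f"10 kB update: {10_000 * 8 / uplink_bps * 1000:.0f} ms on the wire")  # 80 ms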

replies(1): >>40720712 #
14. eqvinox ◴[] No.40716981{3}[source]
I'm sorry, but you're wrong on multiple counts. First, a "transfer" is not a term in the PCIe spec; if anything, there's "transaction". But GT/s does not refer to transactions as you seem to be implying, and in fact "GT" does not have an assigned long form in the PCIe base specification. The term is introduced / defined like this:

| The primary Link attributes for PCI Express Link are:

| · The basic Link – PCI Express Link consists of dual unidirectional differential Links, implemented as a Transmit pair and a Receive pair. A data clock is embedded using an encoding scheme (see Chapter 4) to achieve very high data rates.

| · Signaling rate – Once initialized, each Link must only operate at one of the supported signaling levels. For the first generation of PCI Express technology, there is only one signaling rate defined, which provides an effective 2.5 Gigabits/second/Lane/direction of raw bandwidth. The second generation provides an effective 5.0 Gigabits/second/Lane/direction of raw bandwidth. The third generation provides an effective 8.0 Gigabits/second/Lane/direction of raw bandwidth. The data rate is expected to increase with technology advances in the future.

| · Lanes – A Link must support at least one Lane – each Lane represents a set of differential signal pairs (one pair for transmission, one pair for reception). To scale bandwidth, a Link may aggregate multiple Lanes denoted by xN where N may be any of the supported Link widths. A x8 Link operating at the 2.5 GT/s data rate represents an aggregate bandwidth of 20 Gigabits/second of raw bandwidth in each direction. This specification describes operations for x1, x2, x4, x8, x12, x16, and x32 Lane widths.

(from PCIe 4.0 base specification)

So, GT/s is used to be less ambiguous on multi-lane links.

Next,

> the bus itself is only adding 128 billionth of a second).

no, the bus does actually add more latency, since almost all receivers need to reassemble the whole transaction (generally tens to hundreds of bytes) to checksum-validate it and then dispatch it further to continue. This latency can show up multiple times if you have PCIe switches, but (unlike endpoints) these are frequently cut-through.

However, that latency is seriously negligible compared to anything else in your system.

> In fact it does not say anything about bandwidth if you don't know how many bits in a transfer.

How many bits are in a transaction does in fact influence that latency mentioned right above, but has no impact on bandwidth. What does have an impact on available end-user bandwidth is how small you chunk longer transactions since each of them has per-transaction overhead.

And finally —

> GT/s is a measure of latency

— absolutely not. It is a measure of raw bandwidth. It indirectly influences minimum and maximum latency, but those are complicated relationships especially on multi-lane links, and especially maximum latency depends on a whole host of factors from hardware capabilities, to BIOS and OS settings in PCIe config, to driver behavior.
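
The spec's own x8 example works out like this (a minimal sketch of the raw-bandwidth arithmetic, before line-coding overhead):

    # Raw PCIe bandwidth: GT/s x lanes, one bit per transfer per lane.
    def raw_gbps(gt_per_s: float, lanes: int) -> float:
        return gt_per_s * lanes

    print(raw_gbps(2.5, 8))    # 20.0 Gb/s/direction, matching the quote above
    print(raw_gbps(32.0, 16))  # 512.0 Gb/s raw for a Gen5 x16 link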

replies(2): >>40717622 #>>40720198 #
15. arghwhat ◴[] No.40717055{4}[source]
Unless the issue is that your setup cannot composite at 60 fps (don't get me wrong, I'm not pretending Windows isn't at fault if that's the case), neither double buffering nor software cursors introduces delay.

Unless your goal is tearing updates (a whole other discussion), your only cause of latency is missed frame deadlines due to slow or badly scheduled rendering.

There is no need to switch to software cursor rendering unless you want to render something incompatible with the cursor plane, e.g. massive buffers, or underlaying the cursor beneath another surface. Synchronization with primary plane updates is not at all an issue.

replies(1): >>40717350 #
16. PlutoIsAPlanet ◴[] No.40717112{4}[source]
Hardware planes will hopefully reduce the latency again, as they will allow windows to skip the compositor by mapping parts of the framebuffer directly to a window buffer.

I believe there's some work on Linux already for them, but I'm not so sure on Windows. I would be surprised if macOS doesn't already use them in some capacity given Apple's obsession with delegating everything to firmware on a co-processor.

replies(1): >>40717536 #
17. Aurornis ◴[] No.40717169[source]
> It seems like we're stuck at a pre-set level of latency,

I booted and used an old computer recently. Not Windows 3.11 old, but old enough to have a mechanical hard drive.

The experience was anything but low latency. It’s easy to forget just how slow mechanical hard drives were in the past.

Modern desktops are extremely fast. Closing a window and having a tiny delay doesn’t bother me in the slightest because it has zero impact in my workflow.

I can launch code editors quickly, grep through files at an incredible rate, and compile large projects with a dozen threads in parallel. Getting worked up over a split second delay in closing a window is not a concern in the slightest.

Regardless, it has nothing to do with next generation PCIe bandwidth. I don’t understand why this is the top voted comment on this otherwise interesting article. Is HN just a place to find creative ways to be cynical and complain about things these days?

replies(2): >>40720208 #>>40720470 #
18. LoganDark ◴[] No.40717350{5}[source]
> Synchronization with primary plane updates is not at all an issue.

While I wouldn't be surprised if this is technically true in a hardware sense, software-wise, Windows knows where the cursor is before it's finished rendering the rest of the screen, and updates the hardware layer that contains the cursor before rendering has finished.

replies(1): >>40718470 #
19. LoganDark ◴[] No.40717536{5}[source]
> Hardware planes will hopefully reduce the latency again as it will allow windows to skip the compositor and allow mapping parts of the framebuffer to a window buffer.

Hardware planes are great but there are a limited number of them. Right now I believe Windows only uses them for the mouse cursor, and exclusive fullscreen.

replies(2): >>40719900 #>>40721023 #
20. vinay_ys ◴[] No.40717622{4}[source]
PCIe 1.0 to 5.0 used NRZ (non return to zero) electrical signaling for transmission. In NRZ signaling at high rate and over long distances, there are challenges w.r.t clock recovery, DC balance and error correction. To deal with this, encoding is used.

Encoding basically means that a block of data (a sequence of zeros and ones) is represented as a sequence of electrical voltage changes (a block of symbols).

GT/s does stand for Giga Transfers per second. Here, the transfers refer to the number of symbols transferred per second, not the number of usable data bits per second.

We say GT/s instead of Gbps, because the actual usable bits/sec is determined by the encoding scheme used.

PCIe 1.0 and 2.0 encoded 8 data bits in 10 symbols (NRZ electrical signals). That's 20% overhead.

PCIe 3.0 to 5.0 encoded 128 bits of data in 130 symbols. That's a much lower overhead of 1.54%.

PCIe 6.0 (& the yet-to-be-standardized PCIe 7.0) use PAM4 for signaling and don't require any encoding on top – hence it is written as 1b/1b. (Btw, in PAM4, each symbol is 2 bits.)

You can see similar NRZ signaling with encoding in SATA, Ethernet, Fibre Channel, etc. Btw, PAM4 is used alongside NRZ in some of those as well!

Coming to latency: latency is the time it takes for a single bit of usable data to travel from A to B. Many factors affect it: the signaling medium's propagation speed (a fraction of the speed of light), the medium's length, the signaling frequency (the MHz or GHz of voltage switching), and the encoding scheme (its overhead; clock recovery, or its failure and hence retransmissions; error detection/correction quality, or its failure and hence retransmissions).

GT/s = Signal Frequency x Bits per cycle.

Remember, PAM4 signaling in PCIe 6.0 carries 2 bits per cycle (2 bits per symbol).
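
Putting the coding overheads above into numbers (a sketch; FEC and protocol overhead deliberately ignored):

    # Effective per-lane, per-direction data rate after line coding.
    gens = [
        ("PCIe 1.0", 2.5,  8 / 10),     # 8b/10b
        ("PCIe 3.0", 8.0,  128 / 130),  # 128b/130b
        ("PCIe 6.0", 64.0, 1 / 1),      # PAM4, 1b/1b (FEC ignored here)
    ]
    for name, gt_s, coding in gens:
        print(f"{name}: {gt_s * coding:.2f} Gb/s effective per lane")
    # PCIe 1.0: 2.00, PCIe 3.0: 7.88, PCIe 6.0: 64.00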

replies(2): >>40719656 #>>40720056 #
21. arghwhat ◴[] No.40718470{6}[source]
> While I wouldn't be surprised if this is technically true in a hardware sense, software-wise, Windows knows where the cursor is before it's finished rendering the rest of the screen

The earlier you sample the cursor position and update the cursor plane, the more the position is out of date once the next scanout comes around, increasing the perceived input delay.

The approach that leads to the smallest possible input latency is to sample the cursor position just before issuing the transaction that updates the cursor position and swaps in the new primary plane buffer (within Linux, this is called an atomic commit), whereas you maximize content consistency with still very good input latency by sampling just before the composition started.

Note that "composition" does not involve rendering "content" as the user perceives it, but just placing and blending already rendered window content, possibly with a color transform applied as the pixels hit the screen. Unless Microsoft is doing something weird, this should be extremely fast. <1ms fast.

replies(1): >>40719705 #
22. Dylan16807 ◴[] No.40719656{5}[source]
> PCIe 6.0 (& yet to be standardized PCIe 7.0) use PAM4 for signaling and doesn't required any encoding on top –hence it is written as 1b/1b. (btw, in PAM4, each symbol is 2 bits).

Nothing on top if you exclude the error correction bits, which I don't think you should.

23. LoganDark ◴[] No.40719705{7}[source]
> The earlier you sample the cursor position and update the cursor plane, the more the position is out of date once the next scanout comes around, increasing the perceived input delay.

No, the cursor position is more up-to-date than the rest of the screen because it doesn't need to wait for a GPU pipeline to finish after it's moved.

> Unless Microsoft is doing something weird, this should be extremely fast. <1ms fast.

Look, I'm saying this is what's going on. (not to scale)

    ... | vsync                                                         ...
    ...  | cursor updated for frame 0                                   ...
    ...   | frame 0 scanout                                             ...
    ...     | frame 1 ready                                             ...
    ...                                   | vsync                       ...
    ...                                    | cursor updated for frame 1 ...
    ...                                     | frame 1 scanout           ...
    ...                                       | frame 2 ready           ...
Frames are extremely fast to render, but they arrive the frame after they were originally scheduled, because GPU pipelines are asynchronous. However, the cursor position arrives immediately because the position of the hardware layer can be synchronously updated immediately before scanout. The effect is that updates to the cursor position are (essentially) displayed 1 frame sooner than updates to the rest of the screen. If you actually try any of the tests I mentioned in my original comment you'll see this for yourself.
replies(2): >>40720660 #>>40727219 #
24. zamadatix ◴[] No.40719900{6}[source]
Windows 11 has pushed the use of MPO (multi-plane overlays) for windowed games and video display.
25. eqvinox ◴[] No.40720056{5}[source]
You're confusing GT and Gbaud. Baud is the symbol rate; GT/s = symbol rate × bits per symbol.

GT/s is in fact G"bit"/s, before line coding (I left that can of worms unopened because line coding wasn't relevant to the bandwidth vs. latency discussion). PCIe 6.0 is "64 GT/s", but only 32 Gbaud since, as you correctly point out, it uses PAM-4.

> GT/s does stand for Giga Transfers per second.

If you have a citable source for this, that'd be nice — it's not in the PCIe spec, and AFAIK the term is not used elsewhere.

replies(1): >>40722921 #
26. ZiiS ◴[] No.40720198{4}[source]
I do totally agree about the relative merits of bandwidth vs. latency. However, I also still think GT/s is generally accepted as an abbreviation for gigatransfers per second and that the PCIe spec assumes it as such. I also note you have had to pull in lots of additional specifications to describe the bandwidth of the complete Link, supporting my assertion that it is not a pure function of the GT/s.
replies(1): >>40726329 #
27. hinkley ◴[] No.40720208[source]
My first laptop with an SSD booted into games so much faster that I didn't even get mad if the machine crashed while playing cooperative games. "I'll be back online in 45 seconds, guys."
28. Nifty3929 ◴[] No.40720470[source]
I agree with you, but I also agree with GP. Raw compute is many orders of magnitude faster, so for exactly those things you mention it's super awesome.

But for user interfaces at least, it really does feel like things are slower, or at least no faster than they were. As he mentions - at a level just within what we will tolerate.

As far as code editors - I don't know, Sublime (and Notepad!) is fast, but IntelliJ, VS Code and such still feel pretty 'heavy.' And I still sometimes have that experience of my computer not being able to keep up with my typing rate, which is dumb. I don't even type fast.

29. Dylan16807 ◴[] No.40720660{8}[source]
> Unless Microsoft is doing something weird, this should be extremely fast. <1ms fast.

And it should also be scheduled for near the end of the frame period, not happening right at the start.

But all this stuff is hard to do right and higher refresh rates make it simpler to do a good job.

30. Dylan16807 ◴[] No.40720712{4}[source]
Another good example is the memory in your computer. DDR is much lower latency, and GDDR is much higher bandwidth.
31. PlutoIsAPlanet ◴[] No.40721023{6}[source]
Arguably the only window that needs to be in a hardware plane is the window currently in focus.
32. vinay_ys ◴[] No.40722921{6}[source]
Here's a pcisig article talking about symbol rate in terms of GT/s:

https://pcisig.com/blog/pci-express®-50-architecture-channel...

replies(1): >>40726178 #
33. eqvinox ◴[] No.40726178{7}[source]
That's great, but…

https://pcisig.com/pci-express-6.0-specification "64 GT/s raw data rate and up to 256 GB/s via x16 configuration"

The symbol rate for 6.0 is only 32 Gsym/s, so GT/s can't be symbol rate. And the references putting PCIe 6.0 at "64 GT/s" seem to be far more common; in particular, the PCIe specification (4.0, the newest I have access to) explicitly equates GT/s with data rate.

My takeaway (even before this discussion) is to avoid "GT/s" as much as possible since the unit is really not well defined.

(And, still, I don't even know if there is a definition of it anywhere. I can't find one. The PCIe spec uses it without defining it, but it is not a "textbook common" unit IMHO. If you are aware of an actual definition, or maybe even just a place¹ that can confirm the T is supposed to mean "transfer", I'd appreciate that!)

¹ yes I know wikipedia says that too, but their sources are… very questionable.

P.S.: I really don't even disagree with you, because ultimately I'm saying "GT/s is confusing and can be interpreted in different ways". The links from each of us just straight up conflict with each other in their use of GT/s. Yours uses it for symbol rate, mine uses it for data rate. ⇒ why I try to avoid using this unit at all.

34. eqvinox ◴[] No.40726329{5}[source]
I think we're having some communication/understanding issues, but that's OK. To be clear my main issue with GT/s is that even the PCI SIG doesn't agree with itself and uses the term in conflicting ways (see discussion in sibling thread.)

As far as I can research, GT/s is a "commoner's unit" that someone invented and started using at some point, but there is no hard reliable definition of it. Nowadays it seems to be used for RAM and PCIe (and nothing else really), though some search results I found claim it was also used for SCSI.

35. arghwhat ◴[] No.40727219{8}[source]
Pekka Paalanen wrote a nice blogpost about the concept of repaint scheduling with graphs: https://ppaalanen.blogspot.com/2015/02/weston-repaint-schedu... (note that the Weston examples gives a whopping 7ms for composition).

I'm making some assumptions about your chart as it is not to scale, but it looks like the usual worst-case strategy. Given a 60Hz refresh rate and a 1ms composition time an example of an optimized composition strategy would look something like this:

    +0ms      vblank, frame#-1 starts scanout

    +15.4ms   read cursor position #0, initiate composite #0
    +16.4ms   composition buffer #0 ready
    +16.5ms   update cursor plane position #0 and attach primary plane buffer #0
    +16.6ms   vblank, frame #0 starts scanout

    +32.1ms   read cursor position, initiate composite #1
    +33.1ms   composition buffer #1 ready
    +33.2ms   update cursor position and attach primary plane buffer #1
    +33.3ms   vblank, frame #1 starts scanout
In this case, both the composite and the cursor position are only 1.2 ms old at the time the GPU starts scanning them out, and hardware vs. software cursor has no effect on latency. Moving the cursor update closer would make the cursor out of sync with the displayed content, which is not really worth it.
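
Putting numbers on the two strategies (same assumed 1 ms composite time):

    # Staleness of the cursor sample at scanout, 60 Hz refresh.
    frame_ms = 1000 / 60
    composite_ms = 1.0    # assumed composition time
    margin_ms = 0.2       # assumed slack before vblank

    deadline_sample = composite_ms + margin_ms   # composite just before vblank
    naive_sample = frame_ms                      # composite right after vblank
    print(f"just-before-deadline: cursor {deadline_sample:.1f} ms old at scanout")
    print(f"right-after-vblank:   cursor {naive_sample:.1f} ms old at scanout")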

(Games and other fullscreen applications can have their render buffer directly scanned out to remove the composition delay and read input at their own pace for simulation reasons, and those applications tend to be the subject at hand when discussing single or sub-millisecond input latency optimizations.)

> Frames are extremely fast to render, but they arrive the frame after they were originally scheduled, because GPU pipelines are asynchronous.

The display block is synchronous. While render pipelines are asynchronous, that is not a problem - as long as the render task completes before the scanout deadline, the resulting buffer can be included in that immediate scanout. Synchronization primitives are also there when you need it, and high-priority and compute queues can be used if you are concerned that the composition task ends up delayed by other things.

Also note that the scanout deadline is entirely virtual - the display block honors whatever framebuffer you point a plane to at any point, we just try to only do that during vblank to avoid tearing.

> If you actually try any of the tests I mentioned in my original comment you'll see this for yourself.

While it might be fun to see if Microsoft screwed up their composition and paint scheduling, that doesn't change the fact that this isn't inherent to GPUs or the graphics stack itself. Working in the Linux display server space makes me quite comfortable in my understanding of GPUs' display controllers.

replies(1): >>40734753 #
36. LoganDark ◴[] No.40734753{9}[source]
> that does not change that it is not related to GPUs or the graphics stack itself. Working in the Linux display server space makes me quite comfortable in my understanding of GPU's display controllers.

I didn't mean to suggest some sort of fundamental limitation in GPUs that makes it impossible to synchronize this. If you take a look at my previous comments, you'll see me explicitly pointing out that I'm talking about Windows, specifically, and I'm only using it as an example of how short a latency is still perceptible. How exactly that latency happens is almost certainly not a hardware issue, however, and I never meant to imply such.