Also the speed is per lane, eg an x8 slot / port / device is called that because it has 8 lanes, which all transfer in parallel.
So... that's about 16 gigabytes per second per lane. AKA more bandwidth than I can imagine any use for, though I'm sure we will find ways to take advantage...
(Seriously, that's enough to move a largish laptop drive's worth of data in about a minute, on a single lane.)
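Back-of-envelope, in case anyone wants to check my arithmetic (raw line rate, ignoring framing/FEC overhead):

  raw_bits_per_s = 128e9              # 128 GT/s is roughly 128 Gbit/s raw, per lane, per direction
  lane_bytes_per_s = raw_bits_per_s / 8
  print(lane_bytes_per_s / 1e9)       # ~16 GB/s per lane
  print(lane_bytes_per_s * 16 / 1e9)  # ~256 GB/s for an x16 link, one direction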
Right now optical seems exotic & expensive, but we seem near a severe tipping point. Copper keeps facing increasingly difficult signal integrity challenges, requiring expensive & energy-consuming retimers. Meanwhile we think we can keep scaling optical down, integrating silicon photonics, getting ever lower pJ/b energy costs, without the range & signal integrity issues. Not super duper deep, but this 2 year old Cadence blog post goes into it, and it seems indeed to be where things are heading. https://community.cadence.com/cadence_blogs_8/b/breakfast-by...
You got it. We can't make optical transceivers as good as electrical ones. Not as small or power-efficient.
They require significantly different fabrication processes, and we don't know how to fab them into the same chip as electrical ones. I mean: you can either have photonics, or performant digital (or analog) electronics.
We've gotten really, really good at making small electronics, per the latest tech coming out of Intel & TSMC. We are... not that good at making photonics.
Additionally, we don’t have a decent way of transferring significant power over fiber optics.
So since everything has to have copper power fed to it anyway, unless there is some compelling reason (like distance) to make optical fiber's disadvantages worth it, copper-only is usually simpler and better.
At least for now.
Though the exact details of the overhead don't matter very much. They add 6% extra bits, good enough.
The part I want to call out as complicated/confusing is that a PCIe 7.0 lane puts out a voltage 64 billion times per second, but because each voltage is based on two bits that counts as 128 billion "transfers".
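A tiny sketch of that relationship (the ~6% figure is the estimate from above, not a spec number):

  symbol_rate = 64e9                                 # voltages ("symbols") per second per lane
  bits_per_symbol = 2                                # PAM4
  transfers_per_s = symbol_rate * bits_per_symbol    # 128 GT/s
  usable_bits_per_s = transfers_per_s * (1 - 0.06)   # minus ~6% overhead bits
  print(transfers_per_s / 1e9, usable_bits_per_s / 8 / 1e9)  # 128.0, ~15 GB/s usable per lane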
I wonder what the latency for switching medium is these days too (for the super small transceivers). To my understanding optical is better for attenuation than electric (less noise, and thus easier to shove more frequencies and higher frequencies on the same pipe), and can be faster (both medium dependent, neither yet approaching the upper bound of c).
I'm imagining the latency incurred by the transceiver is eventually offset by the gains in the signal path (for signal paths relevant to circuit boards and ICs).
Actually at that point, a pcie7 nvme would be faster than ddr6
https://www.pcworld.com/article/2237799/ddr6-ram-what-you-sh...
That said, per-pin, 16GB/s seems to be the same ballpark as contemporary (to pcie7) main or graphics memory..... Like, actually more if I'm reading this right?
https://www.anandtech.com/show/21287/jedec-publishes-gddr7-s...
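Rough per-pin numbers, using GDDR7's launch speed from the AnandTech link (and noting a PCIe lane is really two differential pairs, so it's a loose comparison):

  pcie7_lane_GBps = 128 / 8   # ~16 GB/s per lane, one direction
  gddr7_pin_GBps = 32 / 8     # ~4 GB/s per data pin at 32 Gb/s launch speeds
  print(pcie7_lane_GBps, gddr7_pin_GBps)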
In other news, I'm getting a Xeon with 4 whole GiB of DDR2 ECC RAM shipped from China that has 3 ISA slots. ;D
The big issues are really:
1. Photonic waveguides are much larger than electronic ones (due to the wavelength).
2. You lose dynamic range in EO conversion (shot noise is significant at optical frequencies).
3. Co-integration of photonic and electronic components is nontrivial due to the different materials and processes.
4. Power efficiency of EO conversion is also not that great.
Where photonics shines is transmission of high frequencies (i.e. a lot of data) over long distances while being immune to EM interference. So there is certainly a tradeoff for at what transmission distances to go optical, and as data rates keep going up that tradeoff length has become shorter and shorter. Intel, Nvidia, AMD et al. all do research into optical interconnects.
- cells for a chip to send/receive at 128Gb/s; this solution requires 8 of them running in parallel (like 8 PCIe lanes)
- a module that takes 8 lanes in/out and drives/receives a single fiber
FWIW, this is only true for newer hardware. I.e. if you plugged a PCIe gen3 x16 device into a PCIe gen4 x8 slot, although the bandwidth provided is in the same ballpark, the device will only run at gen3 x8.
So in this scenario we'll have to wait until the devices themselves move to gen4 to make use of the higher per-lane bandwidth.
800G ethernet is here at the switches (Dell Z9864F-ON is beautiful... 128 ports of 400G), but not yet at the server/NIC level, that comes with PCIe6. We are also limited to 16 chassis/128 GPUs in a single cluster right now.
NVMe is getting faster all the time, but is pretty standard now. We put 122TB into each server, so that enables local caching of data, if needed.
All of this is designed for the highest speed available today that we can get on the various buses where data is transferred.
Is non-ISO unit "T" / "transfer" a marketing term or really specialised jargon? "transfer" just doesn't click in my mind, at best "a transfer" (countable) is about moving a sizeable aggregate chunk that has some semantic meaning, not a single fundamental quantum of information.
Unrelated: "gigatesla per second" is such a mind-boggling unit.
If so, does that matter at all here? Dunno if that holds up for such kind of devices and/or at these scales (much shorter distance, but also much higher speed).
Which brings up the question: why are operating wavelengths smaller but "waveguides" bigger in optical fiber communication? In fact, fiber itself is a waveguide, and its diameter is tens of micrometers.
I'm not sure what kind of refractive indices are possible in much smaller photonic circuits, particularly if it's not practical to develop and run everything in a permanent vacuum.
You get to this result if you take the electromagnetic wave equation (a partial differential equation) and solve it for your transmission line configuration.
The proper analogy in the realm of electrical waveguides is the hollow waveguide. The hollow waveguide supports TE and TM modes but not TEM modes, just like a dielectric waveguide. The size is also a function of the dielectric constant ε.
What we mostly use are TEM waveguides like microstrips or coaxial cables. The difference between electrical waveguides that supports TEM modes and waveguides that supports TE/TM modes is that the former has two independent potential planes and the latter only one. Also TEM waveguides do not have a lower cutoff frequency. A TEM wave with any frequency can propagate on any microstrip configuration.
This is not true for TE/TM waves.
What's important to understand is that for microstrips/coaxial cables the power isn't transferred in the metal but in the space (dielectric) around the metal - see Poynting vector. So what happens if you have a second conductor in that space? You get crosstalk! So TEM transmission lines do not contain the wave like hollow waveguides or optical fibers (edit: ok coaxial cables do, microstrips don't)
Now the question, how big is the microstrip? Is it just the width of the signal conductor? No, it is not.
Edit: The width of the metal lines in a chip is given by the current it must carry - current density requirement, electro-migration issues. Power lines are wide because they have to supply power to the circuit but logic traces in CMOS technology only carry negligible amount of current. In circuits like RF power amplifiers with bipolar transistors the trace width is much larger because it has to carry a much larger current. But again, microstrip lines do not have a lower cutoff frequency.
It seems like we're stuck at a pre-set level of latency, which is just within what people tolerate. I was watching a video of someone running Windows 3.11 and noticed that windows close instantly, whereas on Windows 10 and 11 I've never seen there NOT be a small delay between the user clicking close and the window disappearing.
Not all pci-e lanes on your motherboard are created equal: some are directly attached to the CPU, others are connected to the chipset, which in turn is connected to the CPU.
It's possible to convert a single 5.0 x16 connection coming from the CPU into two 4.0 x16 connections.
Isn't that delay related to the default animations? On my particular machine with animations disabled, if I click the minimize button, the window disappears instantly. This is your standard win11 on a shitty enterprise laptop running some kind of 11th gen i7u with the integrated graphics and a 4k external display.
Maximization is sometimes janky, but I guess it's because the window needs to redraw its contents at the new size.
I was under the impression that for 10Gb and above network transceivers, optical SFPs weren't getting as hot as copper ones. Is that difference related to something else?
What purpose could this possibly serve? Enjoying worse performance than a 7900X?
It seems like enshittification is not just the inevitable outcome, but almost desirable (from a profit standpoint), and thus things getting faster for you only help me (the vendor) if I can extract _more_ value by things being faster - otherwise why would I spend money to make things better?
Bandwidth isn't latency, and PCIe 7.0 running as fast as 128 GT/s is no statement at all about its latency. I remember this great analogy from university: a truck carrying a full load of backup tapes across a country has amazing bandwidth but atrocious latency.
(I still agree with your sentiment, just PCIe is not one of the problems in this regard. The connection between bandwidth becoming available and being eaten up vs. latency is a red herring; it's all about properly engineering software for responsiveness.)
You can also think about it another way: SFPs are also connected with high bandwidth electrical links; for 10GE that signal is a pure straight 10.3125 Gbaud. Yet the SFPs don't heat up as much. You can also look up 10Gbase-KR, which is "stretching those plain PCB signals as far as we possibly can", as well as DAC cables and their ranges.
State of the art [cf. https://www.xilinx.com/products/technology/high-speed-serial... ] for SERDES blocks (= what makes your short-range PCB electrical link) is ca. ≤ 150Gbaud at PAM4 (2 bits per baud), i.e. ca. 300Gbit/s, but you need error correction at that point. PCIe 7.0 pulls back to a safe (and cheaper to manufacture) 64Gbaud with PAM4 to get its 128GT/s.
The speed of light in optical fiber, for all types based on glass, ignoring minuscule differences, is 68% that of air/vacuum. And that's not changing, and no state-of-the-art high speed applications are being developed on plastic fibre or free air.
So, latency wise, on runs with non-negligible length, optical will lose out to electrical, which is generally quite close to the speed of light. (Except of course after some point the electrical signal is just noise, and if you factor in delay caused by amplifiers/repeaters it becomes much harder of a question.)
There's this kinda-famous story of some HFT company pulling a copper cable across some bay, because they'd gain some nanoseconds compared to the fiber they had.
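Rough numbers (the 0.68c fiber figure from above; the copper velocity factor is my assumption for a good coax, and real HFT shortcuts are usually microwave links anyway):

  C = 299_792_458                                 # m/s, vacuum
  distance_m = 100                                # a 100 m run
  t_fiber_ns = distance_m / (0.68 * C) * 1e9
  t_copper_ns = distance_m / (0.80 * C) * 1e9     # assumed coax velocity factor
  print(round(t_fiber_ns), round(t_copper_ns))    # ~490 ns vs ~417 ns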
The transceiver latency for "long-range" links (well, call 100m long range for copper…) is actually worse for the copper links, as the whole DSP getup you need for that takes a few symbol times to process. Optical transceivers are just optodiodes and "simple" amplifiers, the latency is much less than a symbol.
(symbol = unit of transmission, roughly 1 bit on "old" fibre [≤ 25G lane rate], ca. 4 bit for 10Gbase-T, will vary more for faster connections.)
Of course, it's only one of the contributing factors to the total latency of things like keystrokes: https://danluu.com/input-lag/
And, no, not all bandwidth is built to reduce latency. There is a lot of bulk, best-effort traffic - for example, YouTube and Netflix proactively distributing videos between datacenters across the world. (They totally do that before anyone ever clicks play, they have enough data to know what is likely to be needed where.)
The same applies to your YouTube/Netflix playback at home. It doesn't need to be low latency. The only effect of latency is a longer time between you clicking play and playback actually starting. From there onwards, you just need enough bandwidth to keep the buffer filled, and you can do that quite a bit ahead of reaching playback position. Latency is a real non-issue there.
Same locally for bulk copying files around. If your OS & FS is designed well, latency only shows up at the beginning of the operation. Most file systems were designed when data was on rotating rust, and that's dealt with readahead and the likes.
The canonical example is probably a dial-up modem or other slow link between two locations. The latency is under 1 second to send one byte over the modem. But it's probably faster to just ship a hard disk if you want to send 100 gigabytes from one location to the other, even though the latency might be hours or even days, until the first byte arrives.
In practice, you can send lots of tiny little packets with lots of overhead (but low latency) or you can send lots of big heavily buffered packets with low overhead (but with high latency).
This is why multiplayer game protocols often consist of a constant stream of tiny UDP packets containing events like "character moved 40 units east at game time ..." or "character fired weapon at game time ...." Even a 10 kilobyte bulk state update is going to cost at least a few milliseconds, more probably tens or even hundreds of milliseconds over some wireless connection. And that's a very noticeable lag.
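For anyone who wants the arithmetic behind those examples, a quick sketch (link speeds are assumptions for illustration):

  def transfer_time_s(nbytes, bits_per_s):
      return nbytes * 8 / bits_per_s

  # 100 GB over a 56 kbit/s modem vs. just shipping the disk
  print(transfer_time_s(100e9, 56e3) / 86400)   # ~165 days; the courier wins easily
  # 10 KB bulk state update vs. a 100-byte event packet on a 10 Mbit/s wireless link
  print(transfer_time_s(10_000, 10e6) * 1e3)    # ~8 ms of serialization alone
  print(transfer_time_s(100, 10e6) * 1e3)       # ~0.08 ms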
| The primary Link attributes for PCI Express Link are:
| · The basic Link – PCI Express Link consists of dual unidirectional differential Links, implemented as a Transmit pair and a Receive pair. A data clock is embedded using an encoding scheme (see Chapter 4) to achieve very high data rates.
| · Signaling rate – Once initialized, each Link must only operate at one of the supported signaling levels. For the first generation of PCI Express technology, there is only one signaling rate defined, which provides an effective 2.5 Gigabits/second/Lane/direction of raw bandwidth. The second generation provides an effective 5.0 Gigabits/second/Lane/direction of raw bandwidth. The third generation provides an effective 8.0 Gigabits/second/Lane/direction of raw bandwidth. The data rate is expected to increase with technology advances in the future.
| · Lanes – A Link must support at least one Lane – each Lane represents a set of differential signal pairs (one pair for transmission, one pair for reception). To scale bandwidth, a Link may aggregate multiple Lanes denoted by xN where N may be any of the supported Link widths. A x8 Link operating at the 2.5 GT/s data rate represents an aggregate bandwidth of 20 Gigabits/second of raw bandwidth in each direction. This specification describes operations for x1, x2, x4, x8, x12, x16, and x32 Lane widths.
(from PCIe 4.0 base specification)
So, GT/s is used to be less ambiguous on multi-lane links.
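The spec's x8 example works out like this (and after 8b/10b coding on those early generations):

  raw_gbps = 2.5 * 8             # 20 Gbit/s raw, each direction
  usable_gbps = raw_gbps * 8/10  # 16 Gbit/s of data after 8b/10b, i.e. 2 GB/s
  print(raw_gbps, usable_gbps)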
Next,
> the bus itself is only adding 128 billionth of a second).
no, the bus does actually add more latency since almost all receivers need to reassemble the whole transaction (generally tens to hundreds of bytes) to checksum validate and then dispatch further to continue. This latency can show up multiple times if you have PCIe switches, but (unlike endpoints) these are frequently cut-through.
However, that latency is seriously negligible compared to anything else in your system.
> In fact it does not say anything about bandwidth if you don't know how many bits in a transfer.
How many bits are in a transaction does in fact influence that latency mentioned right above, but has no impact on bandwidth. What does have an impact on available end-user bandwidth is how small you chunk longer transactions since each of them has per-transaction overhead.
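Small sketch of that chunking effect; the ~24 bytes of per-TLP header/framing is a ballpark assumption, the exact figure varies by generation and settings:

  def link_efficiency(max_payload_bytes, per_tlp_overhead_bytes=24):
      return max_payload_bytes / (max_payload_bytes + per_tlp_overhead_bytes)

  for mps in (64, 128, 256, 512):
      print(mps, f"{link_efficiency(mps):.0%}")   # bigger payloads amortize the fixed cost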
And finally —
> GT/s is a measure of latency
— absolutely not. It is a measure of raw bandwidth. It indirectly influences minimum and maximum latency, but those are complicated relationships especially on multi-lane links, and especially maximum latency depends on a whole host of factors from hardware capabilities, to BIOS and OS settings in PCIe config, to driver behavior.
Unless your goal is tearing updates (a whole other discussion), then your only cause of latency is missed frame deadlines due to slow or badly scheduled rendering.
There is no need to switch to software cursor rendering unless you want to render something incompatible with the cursor plane, e.g. massive buffers or underlaying the cursor under another surface. Synchronization with primary plane updates is not at all an issue.
I believe there's some work on Linux already for them, but I'm not so sure on Windows. I would be surprised if macOS doesn't already use them in some capacity given Apple's obsession with delegating everything to firmware on a co-processor.
I booted and used an old computer recently. Not Windows 3.11 old, but old enough to have a mechanical hard drive.
The experience was anything but low latency. It’s easy to forget just how slow mechanical hard drives were in the past.
Modern desktops are extremely fast. Closing a window and having a tiny delay doesn’t bother me in the slightest because it has zero impact in my workflow.
I can launch code editors quickly, grep through files at an incredible rate, and compile large projects with a dozen threads in parallel. Getting worked up over a split second delay in closing a window is not a concern in the slightest.
Regardless, it has nothing to do with next generation PCIe bandwidth. I don’t understand why this is the top voted comment on this otherwise interesting article. Is HN just a place to find creative ways to be cynical and complain about things these days?
This assumption is very correct. Optical interconnects are extraordinarily expensive relative to copper. We have the art of manufacturing copper PCBs and connectors mastered. Putting optical interconnects into a system requires that the signal go through transceivers at either end as well as external optical cables, which are not integrated into the PCB. It’s extra components and complexity everywhere.
The reason optical interconnects are being explored here is that next gen PCIe is so extremely fast that the signals cannot travel very far in PCBs without significant losses. PCBs built for these speeds require special, expensive materials on the layers with those signals. They might require retimer chips to amplify the signal past a certain distance. These limitations may not apply to consumer motherboards with a single GPU very near to the CPU, but datacenter motherboards might need to connect many GPUs across a large chassis. The distances involved could require multiple retimer chips along the way and very expensive PCBs. Going to optical interconnects could introduce much more flexibility into where the GPUs or other add in cards are located.
While I wouldn't be surprised if this is technically true in a hardware sense, software-wise, Windows knows where the cursor is before it's finished rendering the rest of the screen, and updates the hardware layer that contains the cursor before rendering has finished.
Meanwhile Samtec has PCIe active optical cables, they have had them since 2012, it's a very niche application currently.
There are actually a few commercial fabs that will monolithically integrate the photonics, analog electronics, and digital electronics, all in the same CMOS process. See for example GF’s process:
https://www.cmc.ca/globalfoundries-fotonix-45spclo/
Integrating good optical sources in silicon remains a challenge, but companies like Intel have mastered hybrid bonding and other packaging techniques. TSMC too has a strong silicon photonics effort.
Hardware planes are great but there are a limited number of them. Right now I believe Windows only uses them for the mouse cursor, and exclusive fullscreen.
Encoding is basically how a block of data (a sequence of zeros and ones) is represented as a sequence of electrical voltage changes (a block of symbols).
GT/s does stand for Giga Transfers per second. Here, the transfers are referring to number of symbols transferred per second, and not actual usable data bits per second.
We say GT/s instead of Gbps, because the actual usable bits/sec is determined by the encoding scheme used.
PCIe 1.0 and 2.0 encoded 8 data bits in 10 symbols (NRZ electrical signals). That's 20% overhead.
PCIe 3.0 to 5.0 encoded 128 bits of data in 130 symbols. That's a much lower overhead of 1.54%.
PCIe 6.0 (& the yet to be standardized PCIe 7.0) use PAM4 for signaling and don't require any encoding on top, hence it is written as 1b/1b. (Btw, in PAM4, each symbol is 2 bits.)
You can see similar NRZ signaling with encoding in SATA, Ethernet, Fibre Channel etc. Btw, PAM4 (alongside NRZ) is used in some of those as well!
Coming to latency, latency is the time it takes for a single bit of usable data to transfer from A to B. Many factors affect this latency. Signaling medium's speed of transmission (a fraction of speed of light), signaling medium's length, signaling frequency (Mhz, Ghz etc of voltage switching), encoding scheme (encoding overhead, clock recovery or its failure and hence retransmissions, error detection/correction quality or its failure and hence retransmissions) - each of these things affect the latency of usable data.
GT/s = Signal Frequency x Bits per cycle.
Remember, PAM4 encoding in PCIe6.0 has 2 bits per cycle (2 bits per symbol).
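Putting the whole comment into a few lines of Python (ignores FLIT/FEC and protocol overhead, which the 1b/1b generations still have):

  gens = {
      "1.0": (2.5,   8/10),      # 8b/10b
      "2.0": (5.0,   8/10),
      "3.0": (8.0,   128/130),   # 128b/130b
      "4.0": (16.0,  128/130),
      "5.0": (32.0,  128/130),
      "6.0": (64.0,  1.0),       # PAM4, 1b/1b
      "7.0": (128.0, 1.0),
  }
  for gen, (gt_s, coding) in gens.items():
      lane_GBps = gt_s * coding / 8
      print(f"PCIe {gen}: {gt_s} GT/s -> ~{lane_GBps:.2f} GB/s per lane, per direction")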
The relative permittivity εr of SiO2 is ~4.
c = c0 / sqrt(εr)
In general c = 1 / sqrt(ε0 εr × μ0 μr), and in vacuum εr = μr = 1, which gives c0 = 1 / sqrt(ε0 μ0).
But the frequency needs to be sufficiently high in order to observe wave propagation, let's say >10GHz.
For low frequencies the electric conductor behaves more like a RC chain.
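Plugging the numbers in:

  c0 = 299_792_458     # m/s, vacuum
  eps_r = 4.0          # SiO2 at these frequencies, mu_r ~ 1
  v = c0 / eps_r ** 0.5
  print(v)             # ~1.5e8 m/s, i.e. roughly 15 cm/ns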
The earlier you sample the cursor position and update the cursor plane, the more the position is out of date once the next scanout comes around, increasing the perceived input delay.
The approach that leads to the smallest possible input latency is to sample the cursor position just before issuing the transaction that updates the cursor position and swaps in the new primary plane buffer (within Linux, this is called an atomic commit), whereas you maximize content consistency with still very good input latency by sampling just before the composition started.
Note that "composition" does not involve rendering "content" as the user perceives it, but just placing and blending already rendered window content, possibly with a color transform applied as the pixels hit the screen. Unless Microsoft is doing something weird, this should be extremely fast. <1ms fast.
I mean - one can build a 16-fiber, CWDM16 monstrosity that would transport an 8x 128 GT/s PCIe link using a mere 8 Gbit/s per channel - all of which is technology that was "groundbreaking" circa 2005.
Nothing on top if you exclude the error correction bits, which I don't think you should.
No, the cursor position is more up-to-date than the rest of the screen because it doesn't need to wait for a GPU pipeline to finish after it's moved.
> Unless Microsoft is doing something weird, this should be extremely fast. <1ms fast.
Look, I'm saying this is what's going on. (not to scale)
... | vsync ...
... | cursor updated for frame 0 ...
... | frame 0 scanout ...
... | frame 1 ready ...
... | vsync ...
... | cursor updated for frame 1 ...
... | frame 1 scanout ...
... | frame 2 ready ...
Frames are extremely fast to render, but they arrive the frame after they were originally scheduled, because GPU pipelines are asynchronous. However, the cursor position arrives immediately because the position of the hardware layer can be synchronously updated immediately before scanout. The effect is that updates to the cursor position are (essentially) displayed 1 frame sooner than updates to the rest of the screen. If you actually try any of the tests I mentioned in my original comment you'll see this for yourself.

I upgraded to a 4 TB NVMe drive that theoretically reads/writes at up to 7.7 GB/s, but I only get 3.5 GB/s because my CPU is still an i9-9900K running PCI-E 3.0.
Planning on upgrading once the next Intel generation drops.
There is no DMA there, so your Sound Blaster wouldn't work [out of the box].
This is not the same as Gb/s. There's a few percentage points difference due to error correction.
GT/s is in fact G"bit"/s, before line coding (I left that can of worms unopened because line coding wasn't relevant to the bandwidth vs. latency discussion.) PCIe 6.0 is "64GT/s", but only 32 Gbaud, since as you correctly point out it uses PAM-4.
> GT/s does stand for Giga Transfers per second.
If you have a citable source for this, that'd be nice — it's not in the PCIe spec, and AFAIK the term is not used elsewhere.
But you're right, I might have accidentally mixed in some radio connection bits, with the HFT company anecdote.
Also it turns out the speed of light in glass is not that impressive. So encoding and decoding at the ends eats up the speed advantage. That’s my impression as to why a lot of high profile articles on optical logic came out shortly thereafter. What if we just keep it as light for longer?
But for user interfaces at least, it really does feel like things are slower, or at least no faster than they were. As he mentions - at a level just within what we will tolerate.
As far as code editors - I don't know, Sublime (and Notepad!) is fast, but IntelliJ, VS Code and such still feel pretty 'heavy.' And I still sometimes have that experience of my computer not being able to keep up with my typing rate which is dumb. I don't even type fast.
And it should also be scheduled for near the end of the frame period, not happening right at the start.
But all this stuff is hard to do right and higher refresh rates make it simpler to do a good job.
I expect consumer machines to keep doing some conversion and expansion in the chipset, but nowhere else. I expect servers to directly attach almost everything and drop down to smaller lane counts for large numbers of devices.
It's worth noting that when Kioxia first put out PCIe 5.0 EDSFF drives, they were marketing them as being optimized for 2 lanes at the higher speed.
https://pcisig.com/blog/pci-express®-50-architecture-channel...
I'm not an expert on integrated electronic circuits, but I guess the difference could matter depending on application.
https://pcisig.com/pci-express-6.0-specification "64 GT/s raw data rate and up to 256 GB/s via x16 configuration"
The symbol rate for 6.0 is only 32Gsym/s. So GT/s can't be symbol rate. (And the references to PCIe 6.0 putting it at "64 GT/s" seem to be far more common, and in particular the PCIe (4.0, newest I have access to) specification explicitly equates GT/s with data rate.)
My takeaway (even before this discussion) is to avoid "GT/s" as much as possible since the unit is really not well defined.
(And, still, I don't even know if there is a definition of it anywhere. I can't find one. The PCIe spec uses it without defining it, but it is not a "textbook common" unit IMHO. If you are aware of an actual definition, or maybe even just a place¹ that can confirm the T is supposed to mean "transfer", I'd appreciate that!)
¹ yes I know wikipedia says that too, but their sources are… very questionable.
P.S.: I really don't even disagree with you, because ultimately I'm saying "GT/s is confusing and can be interpreted different ways". The links from each of us just straight up conflict with each other in their use of GT/s. Yours uses it for symbol rate, mine uses it for data rate. ⇒ why I try to avoid using this unit at all.
As far as I can research, GT/s is a "commoner's unit" that someone invented and started using at some point, but there is no hard reliable definition of it. Nowadays it seems to be used for RAM and PCIe (and nothing else really), though some search results I found claim it was also used for SCSI.
You still need the copper wires to do power delivery, so either you end up with an even thicker cable, or multi-purpose the copper cables for signaling too.
I'm making some assumptions about your chart as it is not to scale, but it looks like the usual worst-case strategy. Given a 60Hz refresh rate and a 1ms composition time an example of an optimized composition strategy would look something like this:
+0ms vblank, frame#-1 starts scanout
+15.4ms read cursor position #0, initiate composite #0
+16.4ms composition buffer #0 ready
+16.5ms update cursor plane position #0 and attach primary plane buffer #0
+16.6ms vblank, frame #0 starts scanout
+32.1ms read cursor position, initiate composite #1
+33.1ms composition buffer #1 ready
+33.2ms update cursor position and attach primary plane buffer #1
+33.3ms vblank, frame #1 starts scanout
In this case, both the composite and the cursor position are only 1.2ms old at the time the GPU starts scanning them out, and hardware vs. software cursor has no effect on latency. Moving the cursor update closer would make the cursor out of sync with the displayed content, which is not really worth it.

(Games and other fullscreen applications can have their render buffer directly scanned out to remove the composition delay and read input at their own pace for simulation reasons, and those applications tend to be the subject at hand when discussing single or sub-millisecond input latency optimizations.)
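The same scheduling arithmetic, parameterized (a minimal sketch; the 1 ms composite time and 0.2 ms margin are assumptions roughly matching the example above):

  composite_ms = 1.0
  margin_ms = 0.2
  for hz in (60, 144):
      frame_ms = 1000 / hz
      start_ms = frame_ms - composite_ms - margin_ms
      print(f"{hz} Hz: read cursor + start composite at +{start_ms:.1f} ms after vblank")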
> Frames are extremely fast to render, but they arrive the frame after they were originally scheduled, because GPU pipelines are asynchronous.
The display block is synchronous. While render pipelines are asynchronous, that is not a problem - as long as the render task completes before the scanout deadline, the resulting buffer can be included in that immediate scanout. Synchronization primitives are also there when you need it, and high-priority and compute queues can be used if you are concerned that the composition task ends up delayed by other things.
Also note that the scanout deadline is entirely virtual - the display block honors whatever framebuffer you point a plane to at any point, we just try to only do that during vblank to avoid tearing.
> If you actually try any of the tests I mentioned in my original comment you'll see this for yourself.
While it might be fun to see if Microsoft screwed up their composition and paint scheduling, that does not change that it is not related to GPUs or the graphics stack itself. Working in the Linux display server space makes me quite comfortable in my understanding of GPU's display controllers.
I didn't mean to suggest some sort of fundamental limitation in GPUs that makes it impossible to synchronize this. If you take a look at my previous comments, you'll see me explicitly pointing out that I'm talking about Windows, specifically, and I'm only using it as an example of how short a latency is still perceptible. How exactly that latency happens is almost certainly not a hardware issue, however, and I never meant to imply such.