
trollied
>In a typical TCP/IP network connected via ethernet, the standard MTU (Maximum Transmission Unit) is 1500 bytes, resulting in a TCP MSS (Maximum Segment Size) of 1448 bytes. This is much smaller than our 3MB+ raw video frames.

> Even the theoretical maximum size of a TCP/IP packet, 64k, is much smaller than the data we need to send, so there's no way for us to use TCP/IP without suffering from fragmentation.

This just highlights that they don't have enough technical knowledge in-house. They should spend the $1m/year savings on hiring some good devs.
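
For what it's worth: TCP is a byte stream, so userspace never has to think in packets at all. You hand the kernel a buffer of any size and the stack slices it into MSS-sized segments on its own. A minimal sketch in Python (the receiver address here is hypothetical):

    import socket

    FRAME = b"\x00" * (3 * 1024 * 1024)  # stand-in for a 3MB raw video frame
    MSS = 1448                           # the MSS the article quotes

    # The kernel splits this into ~2,173 segments (3MB / 1448) behind the
    # scenes; the application never sees "fragmentation", just a stream.
    print(-(-len(FRAME) // MSS), "segments on the wire")

    sock = socket.create_connection(("192.0.2.1", 9000))  # hypothetical receiver
    sock.sendall(FRAME)  # one call, however many packets it takes
    sock.close()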

adamrezich
This reminds me of when I was first starting to learn “real game development” (not using someone else's engine)—I was using C#/MonoGame, and, while having no idea what I was doing, decided I wanted vector graphics. I came across libcairo, figured out how to use it, set it all up correctly and everything… and then found that, whoops, sending 1920x1080x4 bytes to your GPU to render, 60 times a second, doesn't exactly work—for reasons that were incredibly obvious, in retrospect! At least it didn't cost me a million bucks to learn from my mistake.
namibj
The sending is fine; cairo just won't create these bitmaps fast enough.
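
If anyone wants to test that claim directly, here's a rough benchmark sketch with pycairo (assuming it's installed; this scene is about the cheapest possible "frame", so a real vector UI would be slower):

    import time
    import cairo

    W, H = 1920, 1080
    surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, W, H)
    ctx = cairo.Context(surface)

    frames = 60
    start = time.perf_counter()
    for _ in range(frames):
        ctx.set_source_rgb(0.2, 0.4, 0.8)
        ctx.paint()                          # full-screen fill
        ctx.arc(W / 2, H / 2, 300, 0, 6.283)
        ctx.fill()                           # one big anti-aliased circle
    elapsed = time.perf_counter() - start
    print(f"{frames / elapsed:.1f} fps of pure CPU rasterization")
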
adamrezich
Was this true back in 2011 or so? I'm genuinely curious—this may be yet another layer of me having no idea what I was doing at the time, but I thought I remembered determining (somehow) that the problem was the CPU-to-GPU bottleneck. It may have been that I got 720p at 30FPS working just fine, but then 1080p was in the single digits, and I just made a bad assumption, or something.
jmb99
1080p@60 is “only” around 500MB/s, which should have been possible a decade ago. PCIe 1.0 x16 bandwidth maxed out at 4GB/s, so even if you weren’t on a top-of-the-line system with PCIe 2.0 (or brand new 3.0!) you should have been fine on that front[1].
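
The back-of-the-envelope, in case anyone wants to check it:

    frame_bytes = 1920 * 1080 * 4    # 32-bit BGRA: 8,294,400 bytes per frame
    per_second = frame_bytes * 60    # ~497.7 MB/s at 1080p60
    pcie1_x16 = 4 * 10**9            # PCIe 1.0 x16: ~4 GB/s per direction
    print(per_second / 1e6, "MB/s needed,",
          round(pcie1_x16 / per_second, 1), "x link headroom")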

More than likely the CPU wasn’t able to keep up. The pipeline was probably generating a frame, storing it to memory, copying it from memory to PCIe device memory, displaying the frame, then generating the next frame. It wouldn’t surprise me if a ~2010-era CPU struggled with that.
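
A crude way to see how much of the 16.7ms frame budget a single extra full-frame copy eats on a given machine (numpy standing in for the "store to memory, copy to staging" steps; real drivers add more overhead on top):

    import time
    import numpy as np

    frame = np.random.randint(0, 255, (1080, 1920, 4), dtype=np.uint8)
    staging = np.empty_like(frame)

    copies = 120
    start = time.perf_counter()
    for _ in range(copies):
        np.copyto(staging, frame)    # one extra full-frame (~8MB) copy
    elapsed = time.perf_counter() - start
    print(f"{elapsed / copies * 1000:.2f} ms per copy vs a 16.7 ms frame budget")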

[1] For transfers like this, pretty much any GPU is limited by link speed rather than by its own memory bandwidth. An 8800GTS 320MB from 2007 already had a theoretical memory bandwidth of around 64GB/s, for reference.