Most active commenters
  • lysace(14)
  • Dylan16807(9)
  • austin-cheney(7)
  • dan-robertson(6)
  • chgs(5)
  • ratorx(5)
  • jiggawatts(5)
  • bawolff(5)
  • nine_k(4)
  • tsimionescu(4)

306 points carlos-menezes | 274 comments
1. lysace ◴[] No.41890996[source]
> We find that over fast Internet, the UDP+QUIC+HTTP/3 stack suffers a data rate reduction of up to 45.2% compared to the TCP+TLS+HTTP/2 counterpart.

Haven't read the whole paper yet, but below 600 Mbit/s is implied as being "Slow Internet" in the intro.

replies(9): >>41891071 #>>41891077 #>>41891146 #>>41891362 #>>41891480 #>>41891497 #>>41891574 #>>41891685 #>>41891800 #
2. exabrial ◴[] No.41891042[source]
I wish QUIC had a non-TLS mode... if I'm developing locally I really just want to see what's going over the wire sometimes, and this adds a lot of unneeded friction.
replies(2): >>41891081 #>>41891135 #
3. spott ◴[] No.41891057[source]
Here “fast internet” is 500 Mbps, and the reason is that QUIC seems to be CPU-bound above that.

I didn’t look closely enough at their test system to tell whether this is about basic consumer systems or is still a problem for high-performance desktops.

4. Fire-Dragon-DoL ◴[] No.41891071[source]
That is interesting though. 1 Gbit is becoming more common.
replies(2): >>41891194 #>>41891645 #
5. Dylan16807 ◴[] No.41891077[source]
Just as important:

> we identify the root cause to be high receiver-side processing overhead, in particular, excessive data packets and QUIC's user-space ACKs

It doesn't sound like there's a fundamental issue with the protocol.

6. krater23 ◴[] No.41891081[source]
You can add the private key of your server in Wireshark and it will automatically decrypt the packets.
replies(2): >>41891463 #>>41908681 #
7. Tempest1981 ◴[] No.41891085[source]
From September:

QUIC is not quick enough over fast internet (acm.org)

https://news.ycombinator.com/item?id=41484991 (327 comments)

replies(2): >>41891107 #>>41893876 #
8. lysace ◴[] No.41891107[source]
My personal takeaway from that: Perhaps we shouldn't let Google design and more or less unilaterally dictate and enforce internet protocol usage via Chromium.

Brave/Vivaldi/Opera/etc: You should make a conscious choice.

replies(3): >>41891197 #>>41891355 #>>41891374 #
9. superkuh ◴[] No.41891109[source]
Since QUIC was designed for fast internet as used by megacorporations like Google and Microsoft, how it performs at these scales does matter, even if it doesn't for an individual person.

Without its designed-for use case, all it does is slightly help mobile platforms that don't want to hold open a TCP connection (for energy use reasons) and bring in fragile "CA TLS"-only operation in an environment where cert lifetimes are trending down to single months (Apple's latest proposal, etc.).

replies(1): >>41891301 #
10. skybrian ◴[] No.41891114[source]
Looking at Figure 5, Chrome tops out at ~500 Mbps due to CPU usage. I don't think many people care about these speeds? Perhaps not using all available bandwidth for a few speedy clients is an okay compromise for most websites? This inadvertent throttling might improve others' experiences.

But then again, being CPU-throttled isn't great for battery life, so perhaps there's a better way.

replies(1): >>41893586 #
11. guidedlight ◴[] No.41891135[source]
QUIC reuses parts of the TLS specification (e.g. handshake, transport state, etc).

So it can’t function without it.

12. jvanderbot ◴[] No.41891138[source]
Well, latency/bandwidth tradeoffs make sense. After bufferbloat mitigations my throughput halved on my router. But for gaming while everyone is streaming, it makes sense to settle for half a gigabit.
13. Aurornis ◴[] No.41891146[source]
Internet access is only going to become faster. Switching to a slower transport just as Gigabit internet is proliferating would be a mistake, obviously.
replies(3): >>41891187 #>>41891205 #>>41891292 #
14. kachapopopow ◴[] No.41891157[source]
This sounds really, really wrong. I've achieved 900 Mbps speeds on QUIC+HTTP/3 and just QUIC... Seems like a bad TLS implementation? An early implementation that's not efficient? The CPU usage seemed pretty average at around 5% on Gen 2 EPYC cores.
replies(1): >>41891399 #
15. tomxor ◴[] No.41891187{3}[source]
In terms of maximum available throughput it will obviously become greater. What's less clear is if the median and worst throughput available throughout a nation or the world will continue to become substantially greater.

It's simply not economical enough to lay fibre and put 5G masts everywhere (5G bands cover less area due to being higher frequency, and so are also limited to being deployed in areas with a high enough density to be economically justifiable).

replies(2): >>41891425 #>>41891795 #
16. schmidtleonard ◴[] No.41891194{3}[source]
It's wild that 1gbit LAN has been "standard" for so long that the internet caught up.

Meanwhile, low-end computers ship with a dozen 10+Gbit class transceivers on USB, HDMI, Displayport, pretty much any external port except for ethernet, and twice that many on the PCIe backbone. But 10Gbit ethernet is still priced like it's made from unicorn blood.

replies(6): >>41891250 #>>41891304 #>>41891326 #>>41891460 #>>41891692 #>>41892294 #
17. vlovich123 ◴[] No.41891197{3}[source]
So because the Linux kernel isn’t as optimized for QUIC as it is for TCP, we shouldn’t design new protocols? Or should protocol design be restricted to academics who tried and failed for decades and would have had all the same problems even if they had succeeded? And all of this really only applies in a data center environment, less so the general internet QUIC was designed for?

This is an interesting hot take.

replies(1): >>41891219 #
18. p1necone ◴[] No.41891200[source]
I thought QUIC was optimized for latency - loading lots of little things at once on webpages, and video games (which send lots of tiny little packets - low overall throughput but highly latency sensitive) and such. I'm not surprised that it falls short when overall throughput is the only thing being measured.

I wonder if this can be optimized at the protocol level by detecting usage patterns that look like large file transfers or very high bandwidth video streaming and swapping over to something less cpu intensive.

Or is this just a case of less hardware/OS level optimization of QUIC vs TCP because it's new?

replies(1): >>41891429 #
19. jiggawatts ◴[] No.41891205{3}[source]
Here in Australia there’s talk of upgrading the National Broadband Network to 2.5 Gbps to match modern consumer Ethernet and WiFi speeds.

I grew up with 2400 baud modems as the super fast upgrade, so talk of multiple gigabits for consumers is blowing my mind a bit.

replies(2): >>41891278 #>>41891437 #
20. lysace ◴[] No.41891219{4}[source]
I'm struggling to parse my comment the way you seem to have. In what way did or would my comment restrict your ability to design new protocols? Please explain.
replies(1): >>41892122 #
21. jrpelkonen ◴[] No.41891238[source]
Curl creator/maintainer Daniel Stenberg blogged about HTTP/3 in curl a few months ago: https://daniel.haxx.se/blog/2024/06/10/http-3-in-curl-mid-20...

One of the things he highlighted was the higher CPU utilization of HTTP/3, to the point where CPU can limit throughput.

I wonder how much of this is due to the immaturity of the implementations, and how much is inherent to the way QUIC was designed?

replies(4): >>41891693 #>>41891790 #>>41891813 #>>41891887 #
22. nijave ◴[] No.41891250{4}[source]
2.5Gbps is becoming pretty common and fairly affordable, though

My understanding is that right around 10Gbps you start to hit limitations with the shielding/type of cable and the power needed to transmit over Ethernet.

When I was looking to upgrade at home, I had to get expensive PoE+ injectors and splitters to power the switch in the closet (where there's no outlet), and 10Gbps SFP+ transceivers are like $10 for fiber or $40 for Ethernet. The Ethernet transceivers hit like 40-50°C.

replies(4): >>41891378 #>>41891404 #>>41891559 #>>41892154 #
23. austin-cheney ◴[] No.41891263[source]
EDITED.

I prefer WebSockets over anything analogous to HTTP.

Comment edited because I mentioned performance conditions. Software developers tend to make unfounded assumptions/rebuttals about performance conditions they have not tested.

replies(6): >>41891324 #>>41891333 #>>41891426 #>>41891517 #>>41891549 #>>41891575 #
24. TechDebtDevin ◴[] No.41891278{4}[source]
Is Australia's ISP infrastructure nationalized?
replies(1): >>41891529 #
25. ratorx ◴[] No.41891292{3}[source]
It depends on whether it’s meaningfully slower. QUIC is pretty optimized for standard web traffic, and more specifically for high-latency networks. Most websites also don’t send enough data for throughput to be a significant issue.

I’m not sure whether it’s possible, but could you theoretically offload large file downloads to HTTP/2 to get best of both worlds?

replies(3): >>41891490 #>>41891616 #>>41892614 #
26. dathinab ◴[] No.41891301[source]
Not really; it's (mainly) designed by companies like Google to connect to all their end users.

An internet connection becoming so fast that receiver-side processing latency dominates is, in practice, not the most relevant case. Sure, theoretically you can hit it with e.g. 5G, but in practice many real-world situations won't, even with 5G. Most importantly, a slowdown like this isn't necessarily bad for Google and co., as it only adds a limited amount of strain on their services, infrastructure and the internet, and it is still fast enough that most users won't care for most Google-and-co. use cases.

Similarly, being slow due to receiver-side delays isn't necessarily bad enough to cause user-noticeable battery issues; one of the main causes seems to be the many user<->kernel boundary crossings, which are slow due to cache misses/evictions etc. but don't boost your CPU clock (which is one of the main ways to drain your battery, besides the screen).

Also, as the article mentions, the main issue is suboptimal network stack usage in browsers (including Chrome), not necessarily a fundamental issue in the protocol. Which brings us to inter-service communication at Google and co., which doesn't use any of the tested network stacks but very highly optimized ones. It really would be surprising if those network stacks were slow, as there was exhaustive performance testing during the design of QUIC.

27. jsheard ◴[] No.41891304{4}[source]
Those very fast consumer interconnects are distinguished from Ethernet by very limited cable lengths though; none of them are going to push 10 Gbps over tens of meters, never mind a hundred. DisplayPort is up to 80 Gbps now, but in that mode it can barely even cross 1.5m of heavily shielded copper before the signal dies.

In a perfect world we would start using fiber in consumer products that need to move that much bandwidth, but I think the standards bodies don't trust consumers with bend radiuses and dust management so instead we keep inventing new ways to torture copper wires.

replies(2): >>41891533 #>>41891614 #
28. 10000truths ◴[] No.41891318[source]
TL;DR: Nothing that's inherent to QUIC itself, it's just that current QUIC implementations are CPU-bound because hardware GRO support has not yet matured in commodity NICs.

But throughput was never the compelling aspect of QUIC in the first place. It was always the reduced latency. A 1-RTT handshake including key/cert exchange is nothing to scoff at, and the 2-RTT request/response cycle that HTTP/3-over-QUIC offers means that I can load a blog page from a rinky-dink server on the other side of the world in < 500 ms. Look ma, no CDN!
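
To put rough numbers on that (mine, not the parent's; assuming an antipodal RTT of about 250 ms and TLS 1.3 without 0-RTT):

    t_{\mathrm{QUIC}} \approx (1_{\mathrm{handshake}} + 1_{\mathrm{request}})\,\mathrm{RTT} = 2 \times 250\,\mathrm{ms} = 500\,\mathrm{ms}
    t_{\mathrm{TCP+TLS\,1.3}} \approx (1_{\mathrm{TCP}} + 1_{\mathrm{TLS}} + 1_{\mathrm{request}})\,\mathrm{RTT} = 3 \times 250\,\mathrm{ms} = 750\,\mathrm{ms}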

replies(1): >>41891384 #
29. quotemstr ◴[] No.41891324[source]
> * String headers
> * round trips
> * many sockets, there is additional overhead to socket creation, especially over TLS
> * UDP. Yes, in theory UDP is faster than TCP but only when you completely abandon integrity.

Have you ever read up on the technical details of QUIC? Every single of one of your bullets reflects a misunderstanding of QUIC's design.

replies(1): >>41891440 #
30. michaelt ◴[] No.41891326{4}[source]
Agree that a widespread faster ethernet is long overdue.

But bear in mind, standards like USB4 only support very short cables. It's impressive that USB4 can offer 40 Gbps - but it can only do so on 1m cables. On the other hand, 10 gigabit ethernet claims to go 100m on CAT6A.

replies(1): >>41891634 #
31. akira2501 ◴[] No.41891333[source]
I'd use them more, but WebSockets are just unfortunately a little too hard to implement efficiently in a serverless environment, I wish there was a protocol that spoke to that environment's tradeoffs more effectively.

The current crop aside from WebSockets all seem to be born from taking a butcher knife to HTTP and hacking out everything that gets in the way of time to first byte. I don't think that's likely to produce anything worthwhile.

replies(1): >>41891438 #
32. GuB-42 ◴[] No.41891355{3}[source]
Maybe, but QUIC is not bad as a protocol. The problem here is that OSes are not as well optimized for QUIC as they are for TCP. Just give it time, the paper even has suggestions.

QUIC has some debatable properties, like mandatory encryption, or the use of UDP instead of being a protocol under IP like TCP, but there are good reasons for it, related to ossification.

Yes, Google pushed for it, but I think it deserves its approval as a standard. It is not perfect but it is practical, they don't want another IPv6 situation.

33. dathinab ◴[] No.41891362[source]
They also mainly identified a throughput reduction due to latency issues caused by inefficient/too many syscalls in how browsers implement it.

But such a latency issue doesn't majorly increase battery usage (compared to a CPU usage issue, which would make CPUs boost). Nor is it an issue for server-to-server communication.

It basically "only" slows down high-bandwidth transmissions on end-user devices with (by 2024 standards) very high-speed connections (if you take effective speeds from device to server, not the speeds you were advertised to have bought and at best can get when the server owner has a direct peering agreement with your network provider and a server in your region...).

That doesn't mean the paper is worthless; browsers should improve their implementations, and it highlights that.

But the title of the paper is basically 100% clickbait.

replies(1): >>41891784 #
34. ratorx ◴[] No.41891374{3}[source]
Having read through that thread, most of the (top) comments are somewhat related to the lacking performance of the UDP/QUIC stack and thoughts on the meaningfulness of the speeds in the test. There is a single comment suggesting HTTP/2 was rushed (because server push was later deprecated).

QUIC is also acknowledged as being quite different from the Google version, and incorporating input from many different people.

Could you expand on why this seems like evidence of Google unilaterally dictating bad standards? None of the changes in the protocol seem objectively wrong (except possibly Server Push).

Disclaimer: Work at Google on networking, but unrelated to QUIC and other protocol level stuff.

replies(1): >>41891400 #
35. akira2501 ◴[] No.41891378{5}[source]
Ironically, 2.5 Gbps is created by taking a 10GBASE-T module and effectively underclocking it. I wonder if "automatic speed selection" is around the corner, with modules that automatically connect at anywhere from 100Mbps to 10Gbps based on available cable quality.
replies(1): >>41891448 #
36. o11c ◴[] No.41891384[source]
There's also the fact that TCP has an unfixable security flaw - any random middleware can inject data (without needing to block packets) and break the connection. TLS can only add Confidentiality and Integrity; it can do nothing about the missing Availability.
replies(2): >>41891453 #>>41891651 #
37. kachapopopow ◴[] No.41891399[source]
This is actually very well known: current QUIC implementations in browsers are *not stable* and are built either on rustls or in another similarly hacky way.
replies(2): >>41892611 #>>41893375 #
38. lysace ◴[] No.41891400{4}[source]
> Could you expand more on why this seems like evidence that Google unilaterally dictating bad standards?

I guess I'm just generally disgusted by the way Google is poisoning the web in the worst way possible: by pushing ever more complex standards. Imagine the complexity of the web stack in 2050 if we continue to let Google run things. It's Microsoft's old embrace-extend-and-extinguish scheme taken to the next level.

In short: it's not you, it's your manager's manager's manager's manager's strategy that is messed up.

replies(3): >>41891503 #>>41891552 #>>41894048 #
39. cyberax ◴[] No.41891404{5}[source]
40-50C? What is the brand?

Mine were over 90C, resulting in thermal shutdowns. I had to add an improvised heat exchanger to lower it down to ~70C: https://pics.archie.alex.net/share/U0G1yiWzShqOGXulwe1AetDjR...

replies(1): >>41903914 #
40. ◴[] No.41891425{4}[source]
41. Aurornis ◴[] No.41891426[source]
> QUIC is faster than prior versions of HTTP, but its still HTTP. It will never be fast enough because its still HTTP:
> * String headers
> * round trips
> * many sockets, there is additional overhead to socket creation, especially over TLS

QUIC is a transport. HTTP can run on top of QUIC, but the way you’re equating QUIC and HTTP doesn’t make sense.

String headers and socket opening have nothing to do with the performance issues being discussed.

String headers aren’t even a performance issue at all. The amount of processing done for even the most excessive use of string headers is completely trivial relative to all of the other processing that goes into sending 1,000,000,000 bits per second (gigabit) over the internet, which is the order of magnitude being discussed.

I don’t think you understand what QUIC is or even the prior art in HTTP/2 that precedes these discussions of QUIC and HTTP/3.

replies(1): >>41891484 #
42. zamalek ◴[] No.41891429[source]
It seems that syscalls might be the culprit (ACKs happen completely inside the kernel for TCP, whereas anything over UDP ACKs from userspace). I wonder if eBPF could be extended for protocol development.
43. austin-cheney ◴[] No.41891438{3}[source]
That is a fair point. I wrote my own implementation of WebSockets in JavaScript and learned much in doing so, but it took tremendous trial and effort to get right. Nonetheless, the result was well worth the effort. I have a means to communicate to the browser and between servers that is real time with freedom to extend and modify it at my choosing. It is unbelievably more responsive than reliance upon HTTP in any of its forms. Imagine being able to execute hundreds of end-to-end test automation scenarios in the browser in 10 seconds. I can do that, but I couldn't with HTTP.
44. Kodiack ◴[] No.41891437{4}[source]
Meanwhile here in New Zealand we can get 10 Gbps FTTH already.

Sorry about your NBN!

replies(1): >>41891508 #
45. Aurornis ◴[] No.41891440{3}[source]
Honestly the entire comment is a head scratcher, from comparing QUIC to HTTP (different layers of the stack) or suggesting that string headers are a performance bottleneck.

Websockets are useful in some cases where you need to upgrade an HTTP connection to something more. Some people learn about websockets and then try to apply them to everything, everywhere. This seems to be one of those cases.

46. cyberax ◴[] No.41891448{6}[source]
My 10G modules automatically drop down to 2.5G or 1G if the cable is not good enough. There's also 5G, but I have never seen it work better than 2.5G.
replies(2): >>41891893 #>>41893870 #
47. suprjami ◴[] No.41891453{3}[source]
What does that have to do with anything here? This post is about QUIC performance, not TCP packet injection.
replies(1): >>41891674 #
48. Aurornis ◴[] No.41891460{4}[source]
> Meanwhile, low-end computers ship with a dozen 10+Gbit class transceivers on USB, HDMI, Displayport, pretty much any external port except for ethernet, and twice that many on the PCIe backbone. But 10Gbit ethernet is still priced like it's made from unicorn blood.

You really can’t think of any major difference between 10G Ethernet and all of those other standards that might be responsible for the price difference?

Look at the supported lengths and cables. 10G Ethernet over copper can go an order of magnitude farther over relatively generic cables. Your USB-C or HDMI connections cannot go nearly as far and require significantly more tightly controlled cables and shielding.

That’s the difference. It’s not easy to accomplish what they did with 10G Ethernet over copper. They used a long list of tricks to squeeze every possible dB of SNR out of those cables. You pay for it with extremely complex transceivers that require significant die area and a laundry list of complex algorithms.

replies(2): >>41891597 #>>41892302 #
49. jborean93 ◴[] No.41891463{3}[source]
This only works for RSA key exchange and, I believe, ciphers that do not have forward secrecy. QUIC uses TLS 1.3, and all the cipher suites in that protocol provide forward secrecy, so they cannot be decrypted this way. You’ll have to use a tool that provides the TLS session info through the SSLKEYLOGFILE format.
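
For what it's worth, a minimal sketch of getting such a key log out of a Go client (file name and URL are arbitrary): Go's crypto/tls KeyLogWriter emits the SSLKEYLOGFILE format, which Wireshark can load under Preferences > Protocols > TLS, and quic-go accepts the same tls.Config, so the approach should carry over to HTTP/3 captures.

    package main

    import (
        "crypto/tls"
        "io"
        "log"
        "net/http"
        "os"
    )

    func main() {
        // Session secrets land here in SSLKEYLOGFILE format; point Wireshark at it.
        keyLog, err := os.Create("keylog.txt")
        if err != nil {
            log.Fatal(err)
        }
        defer keyLog.Close()

        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{KeyLogWriter: keyLog},
            },
        }
        resp, err := client.Get("https://example.com/")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        io.Copy(io.Discard, resp.Body) // drain the body so the capture has traffic to decrypt
    }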
replies(1): >>41896920 #
50. nh2 ◴[] No.41891480[source]
In Switzerland you get 25 Gbit/s for $60/month.

In 30 years it will be even faster. It would be silly to have to use older protocols to get line speed.

replies(1): >>41891509 #
51. austin-cheney ◴[] No.41891484{3}[source]
> String headers aren’t even a performance issue at all.

That is universally incorrect. String instructions require parsing, as strings are for humans and binary is for machines. There is always performance overhead to string parsing, and it is relatively trivial to perf test. I have performance tested this in my own WebSocket and test automation applications. That performance difference scales logarithmically with the quantity of messages to send/receive. I encourage you to run your own tests.

replies(1): >>41891710 #
52. pocketarc ◴[] No.41891490{4}[source]
> could you theoretically offload large file downloads to HTTP/2

Yes, you can! You’d have your websites on servers that support HTTP/3 and your large files on HTTP/2 servers, similar to how people put certain files on CDNs. It might well be a great solution!
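
As a rough, standard-library-only sketch of that split (hostnames, ports and cert paths are invented, and the actual HTTP/3 listener, e.g. via quic-go's http3 package, is assumed and not shown): the main site advertises h3 via Alt-Svc, while the file host never does, so browsers keep fetching the big downloads over HTTP/2.

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // files.example.com: plain TLS server. Go negotiates HTTP/2 via ALPN,
        // and since it never sends Alt-Svc, clients won't switch to QUIC here.
        go func() {
            files := http.FileServer(http.Dir("/srv/bigfiles"))
            log.Fatal(http.ListenAndServeTLS(":8443", "files-cert.pem", "files-key.pem", files))
        }()

        // www.example.com: pages advertise an h3 endpoint (served elsewhere) and
        // link large assets from the HTTP/2-only host above.
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Alt-Svc", `h3=":443"; ma=86400`)
            w.Write([]byte(`<a href="https://files.example.com:8443/big.iso">download</a>`))
        })
        log.Fatal(http.ListenAndServeTLS(":443", "www-cert.pem", "www-key.pem", mux))
    }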

53. wkat4242 ◴[] No.41891497[source]
For local purposes that's certainly true. It seems that QUIC trades faster connection establishment for lower throughput. I personally prefer TCP anyway.
54. bawolff ◴[] No.41891503{5}[source]
> It's Microsoft's old embrace-extend-and-extinguish scheme taken to the next level.

It literally is not.

replies(1): >>41891506 #
55. lysace ◴[] No.41891506{6}[source]
Because?

Edit: I'm not the first person to make this comparison. Witness the Chrome section in this article:

https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...

replies(2): >>41891571 #>>41891590 #
56. wkat4242 ◴[] No.41891508{5}[source]
Here in Spain too.

I don't see a need for it yet though. I'm a really heavy user (IT specialist with more than a hundred devices in my network) and I really don't need it.

replies(1): >>41892619 #
57. 77pt77 ◴[] No.41891509{3}[source]
Now do the same in Germany...
58. bawolff ◴[] No.41891517[source]
This is an insane take.

Just to pick at one point of this craziness, you think that communicating over web sockets does not involve round trips????

replies(1): >>41891544 #
59. jiggawatts ◴[] No.41891529{5}[source]
It's a long story featuring nasty partisan politics, corrupt incumbents, Rupert Murdoch, and agile upstarts doing stealth rollouts at the crack of dawn.

Basically, the old copper lines were replaced by the NBN, which is a government-owned corporation that sells wholesale networking to telcos. Essentially, the government has a monopoly, providing the last-mile fibre links. They use nested VLANs to provide layer-2 access to the consumer telcos.

Where it got complicated was that the right-wing government was in the pocket of Rupert Murdoch, who threatened them with negative press before an upcoming election. They bent over and grabbed their ankles like the good little Christian school boys they are, and torpedoed the NBN network technology to protect the incumbent Fox cable network. Instead of fibre going to all premises, the NBN ended up with a mix of technologies, most of which don't scale to gigabit. It also took longer and cost more, despite the government responsible saying they were making these cuts to "save taxpayer money".

Also for political reasons, they were rolling it out starting at the sparse rural areas and leaving the high-density CBD regions till last. This made it look bad, because if they spent $40K digging up the long rural dirt roads to every individual farmhouse, it obviously won't have much of a return on the taxpayer's investment... like it would have if deployed to areas with technology companies and their staff.

Some existing smaller telcos noticed that there was a loophole in the regulation that allowed them to connect the more lucrative tech-savvy customers to their own private fibre if it's within 2km of an existing line. Companies like TPG had the entire CBD and inner suburban regions of every major city already 100% covered by this radius, so they proceeded to leapfrog the NBN and roll out their own 100 Mbps fibre-to-the-building service half a decade ahead. I saw their unmarked white vans stealthily rolling out extra fibre at like 3am to extend their coverage area before anyone in the government noticed.

The funny part was that FttB uses VDSL2 boxes in the basement for the last 100m going up to apartments, but you can only have one per building because they use active cross-talk cancellation. So by the time the NBN eventually got around to wiring the CBD regions, they got to the apartments to discover that "oops, too late", private telcos had gotten there first!

There were lawsuits... which the government lost. After all, they wrote the legislation, they were just mad that they hadn't actually understood it.

Meanwhile, some other incumbent fibre providers that should have disappeared persisted like a stubborn cockroach infestation. I've just moved to an apartment serviced by OptiComm, which has 1.1 out of 5 stars on Google... which should tell you something. They even have a grey fibre box that looks identical to the NBNCo box except it's labelled LBNCo with the same font so that during a whirlwind apartment inspection you might not notice that you're not going to be on the same high-speed Internet as the rest of the country.

replies(2): >>41891961 #>>41894333 #
60. schmidtleonard ◴[] No.41891533{5}[source]
Sure you need fiber for long runs at ultra bandwidth, but short runs are common and fiber is not a good reason for DAC to be expensive. Not within an order of magnitude of where it is.
replies(1): >>41892108 #
61. austin-cheney ◴[] No.41891544{3}[source]
That is correct.
62. FridgeSeal ◴[] No.41891549[source]
QUIC isn’t HTTP, QUIC is a protocol that operates at a similar level to UDP and TCP.

HTTP/3 is HTTP over QUIC. HTTP protocols v2 and onwards use binary headers. QUIC, by design, does 0-RTT handshakes.

> Yes, in theory UDP is faster than TCP but only when you completely abandon integrity

The point of QUIC, is that it enables application/userspace level reconstruction with UDP levels of performance. There’s no integrity being abandoned here: packets are free to arrive out of order, across independent sub-streams, and the protocol machinery puts them back together. QUIC also supports full bidirectional streams, so HTTP/3 also benefits from this directly. QUIC/HTTP3 also supports multiple streams per client with backpressure per substream.

WebSockets are a pretty limited special case, built on top of HTTP and TCP. You literally form the HTTP connection and then upgrade it to WebSockets; it's still TCP underneath.

Tl;Dr: your gripes are legitimate, but they refer to HTTP/1.1 at most, QUIC and HTTP/3 are far more sophisticated and performant protocols.

replies(1): >>41891587 #
63. ratorx ◴[] No.41891552{5}[source]
This is making a pretty big assumption that the web is perfectly fine the way it is and never needs to change.

In reality, there are perfectly valid reasons that motivate QUIC and HTTP/2 and I don’t think there is a reasonable argument that they are objectively bad. Now, for your personal use case, it might not be worth it, but that’s a different argument. The standards are built for the majority.

All systems have tradeoffs. Increased complexity is undesirable, but whether it is bad or not depends on the benefits. Just blanket making a statement that increasing complexity is bad, and the runaway effects of that in 2050 would be worse does not seem particularly useful.

replies(1): >>41891586 #
64. crote ◴[] No.41891559{5}[source]
The main issue is switches, really. 5Gbps USB NICs are available for $30 on Amazon, or $20 on AliExpress. 10Gbps NICs are $60, so not exactly crazy expensive either.

But switches haven't really kept up. A simple unmanaged 5-port or 8-port 2.5GigE switch isn't too bad, but anything beyond that gets tricky. 5GigE switches don't seem to exist, and you're already paying $500 for a budget-brand 10GigE switch with basic VLAN support. You want PoE? Forget it.

The irony is that at 10Gbps fiber suddenly becomes quite attractive. A brand-new SFP+ NIC can be found for $30, with DACs only $5 (per side) and transceivers $30 or so. You can get an actually-decent switch from Mikrotik for less than $300.

Heck, you can even get brand-new dualport SFP28 NICs for $100, or as little as $25 on Ebay! Switch-wise you can get 16 ports of 25Gbps out of a $800 Mikrotik switch: not exactly cheap, but definitely within range for a very enthusiastic homelabber.

The only issue is that wiring your home for fiber is stupidly expensive, and you can't exactly use it to power access points either.

replies(2): >>41891984 #>>41892578 #
65. bawolff ◴[] No.41891571{7}[source]
Well, while it may be possible to make that comparison for other things Google does (they have done a lot of things), it makes no sense for QUIC/HTTP3.

What are they extending in this analogy? HTTP3 is not an extension of HTTP. What are they extinguishing? There is no plan to get rid of HTTP1/2, since you still need them in lots of networks that don't allow UDP.

Additionally, it's an open standard, with an RFC, and multiple competing implementations (including Firefox, and I believe experimental support in Safari). The entire point of embrace, extend, extinguish is that the extension is not well specified, making it difficult for competitors to implement. That is simply not what is happening here.

replies(1): >>41891709 #
66. cj ◴[] No.41891574[source]
In other words:

Enable http/3 + quic between client browser <> edge and restrict edge <> origin connections to http/2 or http/1

Cloudflare (as an example) only supports QUIC between client <> edge and doesn’t support it for connections to origin. Makes sense if the edge <> origin connection is reusable, stable, and “fast”.

https://developers.cloudflare.com/speed/optimization/protoco...

replies(1): >>41892865 #
67. sleepydog ◴[] No.41891575[source]
QUIC is a reliable transport. It's not "fire and forget"; there is a mechanism for recovering lost messages that is similar to, but slightly superior to, TCP's. QUIC has the significant advantage of 0- and 1-RTT connection establishment, which can hide latency better than TCP's 3-way handshake.

Current implementations have some disadvantages to TCP, but they are not inherent to the protocol, they just highlight the decades of work done to make TCP scale with network hardware.

Your points seem better directed at HTTP/3 than QUIC.

68. lysace ◴[] No.41891586{6}[source]
Nothing is perfect. But gigantic big-bang changes (like from HTTP 1.1 to 2.0) enforced by a browser monoculture and a dominant company with several thousand individually well-meaning Chromium software engineers like yourself - yeah, pretty sure that's bad.
replies(1): >>41891762 #
69. austin-cheney ◴[] No.41891587{3}[source]
WebSockets are not built on top of HTTP, though that is how they are commonly implemented. WebSockets are faster when HTTP is not involved. A careful reading of RFC 6455 shows it only requires that the handshake and its response be a static string resembling a header in the style of RFC 2616 (HTTP), but a single static string is not HTTP. This is easily provable if you attempt your own implementation of WebSockets.
replies(1): >>41892523 #
70. ratorx ◴[] No.41891590{7}[source]
Contributing to an open standard seems to be the opposite of the classic example.

Assume that change X for the web is positive overall. Currently Google’s strategy is to implement in Chrome and collect data on usefulness, then propose a standard and have other people contribute to it.

That approach seems pretty optimal. How else would you do it?

replies(1): >>41891628 #
71. schmidtleonard ◴[] No.41891597{5}[source]
There was a time when FFE, DFE, CTLE, and FEC could reasonably be considered an extremely complex bag of tricks by the standards of the competition. That time passed many years ago. They've been table stakes for a while in every other serial standard. Wifi is beating ethernet at the low end, ffs, and you can't tell me that air is a kinder channel. A low-end PC will ship with a dozen transceivers implementing all of these tricks sitting idle, while it'll be lucky to have a single 2.5Gbe port and you'll have to pay extra for the privilege.

No matter, eventually USB4NET will work out of the box. The USB-IF is a clown show and they have tripped over their shoelaces every step of the way, but consumer Ethernet hasn't moved in 20 years so this horse race still has a clear favorite, lol.

72. crote ◴[] No.41891614{5}[source]
> In a perfect world we would start using fiber in consumer products that need to move that much bandwidth

We are already doing this. USB-C is explicitly designed to allow for cables with active electronics, including conversion to & from fiber. You could just buy an optical USB-C cable off Amazon, if you wanted to.

replies(1): >>41892073 #
73. ◴[] No.41891616{4}[source]
74. crote ◴[] No.41891634{5}[source]
USB4 does support longer distances, but those cables need active electronics to guarantee signal integrity. That's how you end up with Apple's $160 3-meter cable.
replies(1): >>41893835 #
75. ◴[] No.41891645{3}[source]
76. ratorx ◴[] No.41891648{9}[source]
How does this have any relevance to my comment?
replies(1): >>41891655 #
77. ChocolateGod ◴[] No.41891651{3}[source]
> There's also the fact that TCP has an unfixable security flaw - any random middleware can inject data (without needing to block packets) and break the connection

I am unsure how this is a security flaw of TCP? Any middleman could block UDP packets too and get the same effect, or modify UDP packets in an attempt to cause the receiving application to crash.

replies(1): >>41891976 #
78. lysace ◴[] No.41891655{10}[source]
How does your comment have any relevance to what we are discussing throughout this thread?
79. kibwen ◴[] No.41891673[source]
How does it compare to HTTP/1 on similar benchmarks?
80. o11c ◴[] No.41891674{4}[source]
"Accept worse performance in order to fix security problems" is a standard tradeoff.
replies(1): >>41892201 #
81. nine_k ◴[] No.41891685[source]
Gigabit connections are widely available in urban areas. The problem is not theoretical, but definitely is pretty recent / nascent.
replies(1): >>41891775 #
82. Fire-Dragon-DoL ◴[] No.41891692{4}[source]
It passed it! There are residential offers of up to 3 Gbit here (Vancouver). I had 1.5 Gbit for a while. I downgraded to 1 Gbit because, while I love fast internet, right now nobody in the home uses it enough to saturate 1 Gbit.
83. cj ◴[] No.41891693[source]
I’ve always been under the impression that QUIC was designed for connections that aren’t guaranteed to be stable or fast. Like mobile networks.

I never got the impression that it was intended to make all connections faster.

If viewed from that perspective, the tradeoffs make sense. Although I’m no expert and encourage someone with more knowledge to correct me.

replies(3): >>41891848 #>>41891912 #>>41893147 #
84. lysace ◴[] No.41891709{8}[source]
What I meant with Microsoft's Embrace, extend, and extinguish (EEE) scheme taken to the next level is what Google has done to the web via Chromium:

They have several thousand C++ browser engineers (and as many web standards people as they could get their hands on, early on). Combined with a dominant browser market share, this has let them dominate browser standards, and even internet protocols. They have abused this dominant position to eliminate all competitors except Apple and (so far) Mozilla. It's quite clever.

replies(3): >>41891918 #>>41892178 #>>41893616 #
85. jiggawatts ◴[] No.41891710{4}[source]
Both HTTP/2 and HTTP/3 use binary protocol encoding and compressed (binary) headers. You're arguing a straw man that has little to do with reality.
86. cletus ◴[] No.41891721[source]
At Google, I worked on a pure JS Speedtest. At the time, Ookla was still Flash-based so wouldn't work on Chromebooks. That was a problem for installers to verify an installation. I learned a lot about how TCP (I realize QUIC is UDP) responds to various factors.

I look at this article and consider the result pretty much as expected. Why? Because it pushes the flow control out of the kernel (and possibly network adapters) into userspace. TCP has flow control and sequencing. QUIC makes you manage that yourself (sort of).

Now there can be good reasons to do that. TCP congestion control is famously out-of-date with modern connection speeds, leading to newer algorithms like BBR [1], but it comes at a cost.

But here's my biggest takeaway from all that and it's something so rarely accounted for in network testing, testing Web applications and so on: latency.

Anyone who lives in Asia or Australia should relate to this. 100ms RTT latency can be devastating. It can take something that is completely responsive to utterly unusable. It slows down the bandwidth a connection can support (because of the windows) and make it less responsive to errors and congestion control efforts (both up and down).

I would strongly urge anyone testing a network or Web application to run tests where they randomly add 100ms to the latency [2].

My point in bringing this up is that the overhead of QUIC may not practically matter, because your effective bandwidth over a single TCP connection (or QUIC stream) may be MUCH lower than your actual raw bandwidth. Put another way, 45% extra data may still be a win, because managing your own congestion control might give you higher effective speed between two parties.

[1]: https://atoonk.medium.com/tcp-bbr-exploring-tcp-congestion-c...

[2]: https://bencane.com/simulating-network-latency-for-testing-i...
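
The linked article uses tc/netem for this; as an alternative, here is a hedged sketch of the same idea as a tiny in-process TCP proxy that sleeps ~100 ms before relaying each chunk (listen address and backend are made up), so a client can be pointed at :8080 instead of the real service:

    package main

    import (
        "log"
        "net"
        "time"
    )

    // relay copies src to dst, sleeping before each write to simulate one-way latency.
    func relay(dst, src net.Conn, delay time.Duration) {
        buf := make([]byte, 32*1024)
        for {
            n, err := src.Read(buf)
            if n > 0 {
                time.Sleep(delay)
                if _, werr := dst.Write(buf[:n]); werr != nil {
                    return
                }
            }
            if err != nil {
                return
            }
        }
    }

    func main() {
        ln, err := net.Listen("tcp", ":8080")
        if err != nil {
            log.Fatal(err)
        }
        for {
            client, err := ln.Accept()
            if err != nil {
                log.Fatal(err)
            }
            go func(client net.Conn) {
                defer client.Close()
                backend, err := net.Dial("tcp", "backend.example:80") // the real service
                if err != nil {
                    return
                }
                defer backend.Close()
                go relay(backend, client, 100*time.Millisecond)
                relay(client, backend, 100*time.Millisecond)
            }(client)
        }
    }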

replies(11): >>41891766 #>>41891768 #>>41891919 #>>41892102 #>>41892118 #>>41892276 #>>41892709 #>>41893658 #>>41893802 #>>41894376 #>>41894468 #
87. jsnell ◴[] No.41891762{7}[source]
Except that HTTP/1.1 to HTTP/2 was not a big bang change on the ecosystem level. No server or browser was forced to implement HTTP/2 to remain interoperable[0]. I bet you can't point any of this "enforcement" you claim happened. If other browser implemented HTTP/2, it was because they thought that the benefits of H2 outweighed any downsides.

[0] There are non-browser protocols that are based on H2 only, but since your complaint was explicitly about browsers, I know that's not what you had in mind.

replies(1): >>41891785 #
88. ec109685 ◴[] No.41891766[source]
For reasonably long downloads (so it has a chance to calibrate), why don't congestion algorithms increase the number of inflight packets to a high enough number that bandwidth is fully utilized even over high latency connections?

It seems like it should never be the case that two parallel downloads will perform better than a single one to the same host.

replies(4): >>41891861 #>>41891874 #>>41891957 #>>41892726 #
89. skissane ◴[] No.41891768[source]
> Because it pushes the flow control out of the kernel (and possibly network adapters) into userspace

That’s not an inherent property of the QUIC protocol, it is just an implementation decision - one that was very necessary for QUIC to get off the ground, but now it exists, maybe it should be revisited? There is no technical obstacle to implementing QUIC in the kernel, and if the performance benefits are significant, almost surely someone is going to do it sooner or later.

replies(3): >>41891946 #>>41891973 #>>41893160 #
90. Dylan16807 ◴[] No.41891775{3}[source]
A gigabit connection is just one prerequisite. The server also has to be sending very big bursts of foreground/immediate data or you're very unlikely to notice anything.
91. ec109685 ◴[] No.41891784{3}[source]
How is it clickbait? The title implies that QUIC isn't as fast as other protocols over fast internet connections.
replies(1): >>41891850 #
92. lysace ◴[] No.41891785{8}[source]
You are missing the entire point: Complexity.

It's not your fault, in case you were working on this. It was likely the result a strategy thing being decided at Google/Alphabet exec level.

Several thousand very competent C++ software engineers don't come cheap.

replies(1): >>41891904 #
93. paulddraper ◴[] No.41891790[source]
Those performance results surprised me too.

His testing has CPU-bound quiche at <200MB/s and nghttp2 was >900MB/s.

I wonder if the CPU was throttled.

Because if the HTTP/3 implementation took 4x the CPU, that could be interesting, but not necessarily a big problem if the absolute value was very low to begin with.

94. nine_k ◴[] No.41891795{4}[source]
Fiber is the most economical solution, it's compact, cheap, not susceptible to electromagnetic interference from thunderstorms, not interesting for metal thieves, etc.

Most importantly, it can be heavily over-provisioned for peanuts, so your cable is future-proof, and you will never have to dig the same trenches again.

Copper only makes sense if you already have it.

replies(1): >>41892952 #
95. andsoitis ◴[] No.41891796[source]
Designing for resource-constrained systems typically comes with making tradeoffs.

Once the resource constraint is eliminated, you're no longer getting the benefit of that tradeoff but are still paying the costs.

96. paulddraper ◴[] No.41891800[source]
> below 600 Mbit/s is implied as being "Slow Internet" in the intro

Or rather, not "Fast Internet"

replies(1): >>41892077 #
97. ec109685 ◴[] No.41891809[source]
Meanwhile fast.com (and presumably the Netflix CDN) is still using HTTP/1.1.
replies(1): >>41891875 #
98. dan-robertson ◴[] No.41891813[source]
Two recommendations are for improving receiver-side implementations – optimising them and making them multithreaded. Those suggest some immaturity of the implementations. A third recommendation is UDP GRO, which means modifying kernels and ideally NIC hardware to group received UDP packets together in a way that reduces per-packet work (you do lots of per-group work instead of per-packet work). This already exists in TCP and there are similar things on the send side (eg TSO, GSO in Linux), and feels a bit like immaturity but maybe harder to remedy considering the potential lack of hardware capabilities. The abstract talks about the cost of how acks work in QUIC but I didn’t look into that claim.

Another feature you see for modern tcp-based servers is offloading tls to the hardware. I think this matters more for servers that may have many concurrent tcp streams to send. On Linux you can get this either with userspace networking or by doing ‘kernel tls’ which will offload to hardware if possible. That feature also exists for some funny stuff in Linux about breaking down a tcp stream into ‘messages’ which can be sent to different threads, though I don’t know if it allows eagerly passing some later messages when earlier packets were lost.
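
For the GRO point, a hedged, Linux-only sketch of what enabling receive offload on a QUIC server's UDP socket might look like (this assumes golang.org/x/sys/unix exposes UDP_GRO; actually consuming the coalesced datagrams also requires reading the segment-size control message on each receive, which is omitted here):

    package main

    import (
        "log"
        "net"

        "golang.org/x/sys/unix"
    )

    func main() {
        conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 8443})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        raw, err := conn.SyscallConn()
        if err != nil {
            log.Fatal(err)
        }
        var sockErr error
        if err := raw.Control(func(fd uintptr) {
            // With GRO on, the kernel hands userspace one large coalesced buffer
            // per batch of packets instead of one read syscall per packet.
            sockErr = unix.SetsockoptInt(int(fd), unix.IPPROTO_UDP, unix.UDP_GRO, 1)
        }); err != nil {
            log.Fatal(err)
        }
        if sockErr != nil {
            log.Fatal(sockErr)
        }
        log.Println("UDP GRO requested on :8443")
    }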

99. dan-robertson ◴[] No.41891848{3}[source]
I think that’s a pretty good impression. Lots of features for those cases:

- better behaviour under packet loss (you don’t need to read byte n before you can see byte n+1 like in tcp)

- better behaviour under client ip changes (which happen when switching between cellular data and wifi)

- moving various tricks for getting good latency and throughput in the real world into user space (things like pacing, bbr) and not leaving enough unencrypted information in packets for middleware boxes to get too funky

100. dathinab ◴[] No.41891850{4}[source]
Because it's the QUIC _implementations in browsers_ that aren't as fast as the browsers' non-QUIC implementations, on connections most people would not just call fast but very fast (in the context of browser usage), while still being definitely fast enough for every browser use case today (sure, it theoretically might reduce video bitrate - that is, if the bitrate isn't already capped to a smaller rate anyway, which AFAIK it basically always is).

So "Not Quick Enough" is plain wrong; it is quick enough.

The definition of "Fast Internet" is misleading.

And even "QUIC" is misleading, as it normally refers to the protocol, while the benchmarked protocol is HTTP/3 over QUIC and the issues seem to be mainly in the implementations.

101. Veserv ◴[] No.41891861{3}[source]
You can in theory. You just need an accurate model of your available bandwidth and enough buffering/storage to avoid stalls while you wait for acknowledgement. It is, frankly, not even that hard to do right. But in practice many implementations are terrible, so good luck.
102. gmueckl ◴[] No.41891874{3}[source]
Larger windows can reduce the maximum number of simultaneous connections on the sender side.
103. dan-robertson ◴[] No.41891875[source]
Why do you need multiplexing when you are only downloading one (video) stream? Are there any features of http/2 that would benefit the Netflix use case?
replies(1): >>41892018 #
104. therealmarv ◴[] No.41891887[source]
"immaturity of the implementations" is a funny wording here. QUIC was created because there is absolutely NO WAY that all internet hardware (including all middleware etc) out there will support a new TCP or TLS standard. So QUIC is an elegant solution to get a new transport standard on top of legacy internet hardware (on top of UDP).

In an ideal world we would create new TCP and TLS standards and replace and/or update all internet routers and hardware everywhere worldwide so that they are implemented with less CPU utilization ;)

replies(1): >>41891927 #
105. akira2501 ◴[] No.41891893{7}[source]
Oh man. I've been off the IT floor for too long. Time to change my rhetoric; y'all have been around the corner for a while.

Aging has its upsides and downsides I guess.

106. jsnell ◴[] No.41891904{9}[source]
I mean, the reason I was discussing those specific aspects is that you're the one brought them up. You made the claim about how HTTP/2 was a "big bang" change. You're the one who made the claim that HTTP/2 was enforced on the ecosystem by Google.

And it seems that you can't support either of those claims in any way. In fact, you're just pretending that you never made those comments at all, and have once again pivoted to a new grievance.

But the new grievance is equally nonsensical. HTTP/2 is not particularly complex, and nobody on either the server or browser side was forced to implement it. Only those who thought the minimal complexity was worth it needed to do it. Everyone else remained fully interoperable.

I'm not entirely sure where you're coming from here, to be honest. Like, is your belief that there are no possible tradeoffs here? Nothing can ever justify even such minor amounts of complexity, no matter how large the benefits are? Or do you accept that there are tradeoffs, and are "just" disagree with every developer who made a different call on this when choosing whether to support HTTP/2 in their (non-Google) browser or server?

replies(1): >>41891920 #
107. therealmarv ◴[] No.41891912{3}[source]
It makes everything faster, it's an evolvement of HTTP/2 in many ways. I recommend watching

https://www.youtube.com/watch?v=cdb7M37o9sU

108. jauntywundrkind ◴[] No.41891918{9}[source]
Microsoft just did shit, whatever they wanted. Google has worked with all the w3c committees and other browsers with tireless commitment to participation, with endless review.

It's such a tired sad trope of people disaffected with the web because they can't implement it by themselves easily. I'm so exhausted by this anti-progress terrorism; the world's shared hypermedia should be rich and capable.

We also see lots of strong progress these days from newcomers like Ladybird, and Servo seems gearing up to be more browser like.

replies(1): >>41891955 #
109. api ◴[] No.41891919[source]
A major problem with TCP is that the limitations of the kernel network stack and sometimes port allocation place absurd artificial limits on the number of active connections. A modern big server should be able to have tens of millions of open TCP connections at least, but to do that well you have to do hacks like running a bunch of pointless VMs.
replies(1): >>41892527 #
110. lysace ◴[] No.41891920{10}[source]
Edit: this whole comment is incorrect. I was really thinking about HTTP 3.0, not 2.0.

HTTP/2 is not "particularly complex"? Come on! Do remember where we started.

> I'm not entirely sure where you're coming from here, to be honest. Like, is your belief that there are no possible tradeoffs here? Nothing can ever justify even such minor amounts of complexity, no matter how large the benefits are? Or do you accept that there are tradeoffs, and are "just" disagree with every developer who made a different call on this when choosing whether to support HTTP/2 in their (non-Google) browser or server?

"Such minor amounts of complexity". Ahem.

I believe there are tradeoffs. I don't believe that HTTP/2 met that tradeoff between complexity vs benefit. I do believe it benefitted Google.

replies(1): >>41892103 #
111. api ◴[] No.41891927{3}[source]
A major mistake in IP’s design was to allow middle boxes. The protocol should have had some kind of minimal header auth feature to intentionally break them. It wouldn’t have to be strong crypto, just enough to make middle boxes impractical.

It would have forced IPv6 migration immediately (no NAT) and forced endpoints to be secured with local firewalls and better software instead of middle boxes.

The Internet would be so much simpler, faster, and more capable. Peer to peer would be trivial. Everything would just work. Protocol innovation would be possible.

Of course tech is full of better roads not taken. We are prisoners of network effects and accidents of history freezing ugly hacks into place.

replies(7): >>41892225 #>>41892686 #>>41892920 #>>41893968 #>>41894183 #>>41894543 #>>41895155 #
112. ants_everywhere ◴[] No.41891946{3}[source]
Is this something you could use ebpf for?
113. lysace ◴[] No.41891955{10}[source]
Yes, Google found the loophole: brute-force standards complexity by hiring thousands of very competent engineers eager to leave their mark on the web and eager to get promoted. The only thing they needed was lots of money, and they had just that.

I think my message here is only hard to understand if your salary (or personal worth etc) depends on not understanding it. It's really not that complex.

replies(1): >>41893552 #
114. dan-robertson ◴[] No.41891957{3}[source]
There are two places a packet can be ‘in-flight’. One is light travelling down cables (or the electrical equivalent) or in memory being processed by some hardware like a switch, and the other is sat in a buffer in some networking appliance because the downstream connection is busy (eg sending packets that are further up the queue, at a slower rate than they arrive). If you just increase bandwidth it is easy to get lots of in-flight packets in the second state which increases latency (admittedly that doesn’t matter so much for long downloads) and the chance of packet loss from overly full buffers.

CUBIC tries to increase bandwidth until it hits packet loss, then cuts bandwidth (to drain buffers a bit) and ramps up and hangs around close to the rate that led to loss, before it tries sending at a higher rate and filling up buffers again. Cubic is very sensitive to packet loss, which makes things particularly difficult on very high bandwidth links with moderate latency as you need very low rates of (non-congestion-related) loss to get that bandwidth.

BBR tries to do the thing you describe while also modelling buffers and trying to keep them empty. It goes through a cycle of sending at the estimated bandwidth, sending at a lower rate to see if buffers got full, and sending at a higher rate to see if that’s possible, and the second step can be somewhat harmful if you don’t need the advantages of BBR.

I think the main thing that tends to prevent the thing you talk about is flow control rather than congestion control. In particular, the sender needs a sufficiently large send buffer to store all unacked data (which can be a lot due to various kinds of ack-delaying) in case it needs to resend packets, and if you need to resend some then your send buffer would need to be twice as large to keep going. On the receive side, you need big enough buffers to be able to fill them from the network while waiting for an earlier packet to be retransmitted.

On a high-latency fast connection, those buffers need to be big to get full bandwidth, and that requires (a) growing a lot, which can take a lot of round-trips, and (b) being allowed by the operating system to grow big enough.
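
To make the buffer-size point concrete (my numbers, not the parent's): the windows have to cover at least the bandwidth-delay product, so a single 1 Gbit/s stream at 100 ms RTT already needs

    \mathrm{BDP} = \mathrm{bandwidth} \times \mathrm{RTT} = 10^{9}\,\mathrm{bit/s} \times 0.1\,\mathrm{s} = 10^{8}\,\mathrm{bit} = 12.5\,\mathrm{MB}

of send/receive buffer on each side before any retransmissions are even considered, well above common OS defaults.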

115. dbaggerman ◴[] No.41891961{6}[source]
To clarify, NBN is a monopoly on the last mile infrastructure which is resold to private ISPs that sell internet services.

The history there is that Australia used to have a government run monopoly on telephone infrastructure and services (Telecom Australia), which was later privatised (and rebranded to Telstra). The privatisation left Telstra with a monopoly on the infrastructure, but also a requirement that they resell the last mile at a reasonable rate to allow for some competition.

So Australia already had an existing industry of ISPs that were already buying last mile access from someone else. The NBN was just a continuation of the existing status quo in that regard.

> They even have a grey fibre box that looks identical to the NBNCo box except it's labelled LBNCo with the same font

Early in my career I worked for one of those smaller telcos trying to race to get services into buildings before the NBN. I left around the time they were talking about introducing an LBNCo brand (only one of the reasons I left). At the time, they weren't part of Opticomm, but did partner with them in a few locations. If the brand is still around, I guess they must have been acquired at some point.

replies(1): >>41892626 #
116. lttlrck ◴[] No.41891973{3}[source]
For Linux that's true. But Microsoft never added SCTP to Windows; not being beholden to Microsoft and older OS must have been part of the calculus?
replies(2): >>41892046 #>>41892802 #
117. o11c ◴[] No.41891976{4}[source]
In order to attack UDP, you have to block all routes through which traffic might flow. This is hard; remember, the internet tries to be resilient.

In order to attack TCP, all you have to do is spy on a single packet (very easy) to learn the sequence number, then you can inject a wrench into the cogs and the endpoints will reject all legitimate traffic from each other.

replies(1): >>41893618 #
118. Thaxll ◴[] No.41891981[source]
QUIC is pretty much what serious online games have been doing in the last 20 years.
119. maccard ◴[] No.41891984{6}[source]
> The only issue is that wiring your home for fiber is stupidly expensive

What do you mean by that? My home isn't wired for Ethernet. I can buy 30m of CAT6 cable for £7, or 30m of fibre for £17. For home use, that's a decent amount of cable, and even spending £100 on cabling will likely run cables to even the biggest of houses.

replies(1): >>41892627 #
120. jeltz ◴[] No.41892018{3}[source]
QUIC handles packet loss better. But I do not think there is any benefit from HTTP2.
replies(1): >>41894756 #
121. skissane ◴[] No.41892046{4}[source]
> But Microsoft never added SCTP to Windows

Windows already has an in-kernel QUIC implementation (msquic.sys), used for SMB/CIFS and in-kernel HTTP. I don’t think it is accessible from user-space - I believe user-space code uses a separate copy of the same QUIC stack that runs in user-space (msquic.dll), but there is no reason in-principle why Microsoft couldn’t expose the kernel-mode implementation to user space

122. Dylan16807 ◴[] No.41892073{6}[source]
When you make the cable do the conversion, you go from two expensive transceivers to six expensive transceivers. And if the cable breaks you need to throw out four of them. It's a poor replacement for direct fiber use.
123. lysace ◴[] No.41892077{3}[source]
Yeah.
124. klabb3 ◴[] No.41892102[source]
I did a bunch of real world testing of my file transfer app[1]. Went in with the expectation that Quic would be amazing. Came out frustrated for many reasons and switched back to TCP. It's obvious in hindsight, but with TCP you say "hey kernel, send this giant buffer please", whereas with UDP you hand the kernel individual datagrams! So even pushing zeroes has a massive CPU cost on most OSs and consumer hardware, from all the mode switches. Yes, there are ways around it, but they're not easy nor ready in my experience. Plus it limits your choice of languages/libraries/platforms.
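
To make the syscall-count difference concrete, a minimal sketch (hypothetical sizes, no error handling; the UDP sends go to the local discard port so nothing has to be listening):

    /* One big TCP write vs. one syscall per UDP datagram. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        size_t len = 16 * 1024 * 1024;   /* assumed 16 MB payload */
        char *buf = calloc(1, len);

        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9);         /* discard port */
        inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

        /* TCP path (conceptually): one send() hands the kernel the whole
         * buffer, and it segments, paces and retransmits on our behalf:
         *     send(tcp_fd, buf, len, 0);
         */

        /* Plain UDP path: one user/kernel transition per ~1472-byte datagram. */
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        size_t chunk = 1472, calls = 0;
        for (size_t off = 0; off < len; off += chunk) {
            size_t n = len - off < chunk ? len - off : chunk;
            sendto(fd, buf + off, n, 0, (struct sockaddr *)&dst, sizeof dst);
            calls++;
        }
        printf("UDP syscalls: %zu\n", calls);   /* ~11,400 for 16 MB */

        close(fd);
        free(buf);
        return 0;
    }

The workarounds the comment alludes to (sendmmsg, GSO) are sketched further down in the thread.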

(Fun bonus story: I noticed significant drops in throughput when using battery on a MacBook. Something to do with the efficiency cores I assume.)

Secondly, quic does congestion control poorly (I was using quic-go so mileage may vary). No tuning really helped, and TCP streams would take more bandwidth if both were present.

Third, the APIs are weird, man. Quic itself has multiple streams, which means it isn't a drop-in replacement for TCP. However, the idea is for HTTP/3 to be drop-in replaceable at a higher level (which I can't speak to, because I didn't go that route). Worth keeping in mind if you're working at the stream level.

In conclusion I came out pretty much defeated but also with a newfound respect for all the optimizations and resilience of our old friend tcp. It’s really an amazing piece of tech. And it’s just there, for free, always provided by the OS. Even some of the main issues with tcp are not design faults but conservative/legacy defaults (buffer limits on Linux, Nagle, etc). I really just wish we could improve it instead of reinventing the wheel..

[1]: https://payload.app/

replies(2): >>41892805 #>>41893050 #
125. jsnell ◴[] No.41892103{11}[source]
"We" started from you making outlandish claims about HTTP/2 and immediately pivoting to a new complaint when rebutted rather than admit you were wrong.

Yes, HTTP/2 is not really complex as far as these things go. You just keep making that assertion as if it were self-evident, but it isn't. Like, can you maybe just name the parts you think are unnecessarily complex? And then we can discuss just how complex they really are, and what the benefits are.

(Like, sure, having header compression is more complicated than not having it. But it's also an amazingly beneficial tradeoff, so it can't be what you had in mind.)

> I believe there are tradeoffs. I don't believe that HTTP/2 met that tradeoff between complexity vs benefit.

So why did Firefox implement it? Safari? Basically all the production-level web servers? Google didn't force them to do it. The developers of all of that software had agency, evaluated the tradeoffs, and decided it was worth implementing. What makes you a better judge of the tradeoffs than all of these non-Google entities?

replies(1): >>41892192 #
126. Dylan16807 ◴[] No.41892108{6}[source]
These days, passive cables that support ultra bandwidth are down to like .5 meters.

For anything that wants 10Gbps lanes or less, copper is fine.

For ultra bandwidth, going fiber-only is a tempting idea.

127. pests ◴[] No.41892118[source]
The Network tab in the Chrome devtools allows you to degrade your connection. There are presets for Slow/Fast 4G and 3G, or you can make a custom preset where you specify download and upload speeds, latency in ms, a packet loss percentage, and a packet queue length, and can enable packet reordering.
replies(2): >>41892287 #>>41894505 #
128. vlovich123 ◴[] No.41892122{5}[source]
Because you imply in that comment that it should be someone other than Google developing new protocols while in another you say that the protocols are already too complex implying stasis is the preferred state.

You’re also factually incorrect in a number of ways such as claiming that HTTP/2 was a Google project (it’s not and some of the poorly thought out ideas like push didn’t come from Google).

The fact of the matter is that other attempts at "next gen" protocols had taken place. Google's is the only one that won out. Part of that is because they were one of the few properties that controlled enough web traffic to try something. Another is that they explicitly learned from the mistakes the academic efforts had made and took market effects into account (i.e. not requiring software updates of middleboxes). I'd say that, all things considered, Internet connectivity is better off for QUIC being standardized. Papers like this simply point to inefficiencies in today's implementations - those can be fixed. They aren't intractable design flaws of the protocol itself.

But you seem to really hate Google as a starting point so that seems to color your opinion of anything they produce rather than engaging with the technical material in good faith.

replies(1): >>41892260 #
129. Dylan16807 ◴[] No.41892154{5}[source]
> My understanding is right around 10Gbps you start to hit limitations with the shielding/type of cable and power needed to transmit/send over Ethernet.

If you decide you only need 50 meters, that reduces both power and cable requirements by a lot. Did we decide to ignore the easy solution in favor of stagnation?

replies(1): >>41903967 #
130. Dylan16807 ◴[] No.41892178{9}[source]
> What I meant with Microsoft's Embrace, extend, and extinguish (EEE) scheme taken to the next level is what Google has done to the web via Chromium

I think this argument is reasonable, but QUIC isn't part of the problem.

131. lysace ◴[] No.41892192{12}[source]
Yeah, sorry, I mixed up 2.0 (the one that still uses TCP) with 3.0. Sorry for wasting your time.
132. suprjami ◴[] No.41892201{5}[source]
QUIC was invented to provide better performance for multiplexed HTTP/3 streams and the bufferbloat people love that it avoids middlebox protocol interference.

QUIC has never been about "worse performance" to avoid TCP packet injection.

Anybody who cares about TCP packet injection is using crypto (IPSec/Wireguard). If performant crypto is needed there are appliances which do it at wirespeed.

133. teleforce ◴[] No.41892212[source]
Previous post on HN (326 comments - 40 days ago):

QUIC is not quick enough over fast internet:

https://news.ycombinator.com/item?id=41484991

134. johncolanduoni ◴[] No.41892225{4}[source]
Making IPv4 headers resistant to tampering wouldn't have helped with IPv6 rollout, as routers (both customer and ISP) would still need to be updated to be able to understand how to route packets with the new headers.
replies(1): >>41892516 #
135. lysace ◴[] No.41892260{6}[source]
I don't hate Google. I admire it for what it is: an extremely efficient and inherently scalable corporate structure designed to exploit the Internet and the web in the most brutal and profitable way imaginable.

It's just that their interests in certain aspects don't align with ours.

136. reshlo ◴[] No.41892276[source]
> Anyone who lives in Asia or Australia should relate to this. 100ms RTT latency can be devastating.

When I used to (try to) play online games in NZ a few years ago, RTT to US West servers sometimes exceeded 200ms.

replies(2): >>41892498 #>>41893624 #
137. lelandfe ◴[] No.41892287{3}[source]
There's also an old macOS preference pane called Network Link Conditioner that makes the connections more realistic: https://nshipster.com/network-link-conditioner/

IIRC, Chrome's network simulation just applies a delay after a connection is established

replies(1): >>41893107 #
138. Dalewyn ◴[] No.41892294{4}[source]
There is an argument to be made that gigabit ethernet is "good enough" for Joe Average.

Gigabit ethernet is ~100MB/s transfer speed over copper wire or ~30MB/s over wireless accounting for overhead and degradation. That is more than fast enough for most people.

10gbit is seemingly made from unicorn blood and 2.5gbit is seeing limited adoption because there simply isn't demand for them outside of enterprise who have lots of unicorn blood in their banks.

139. reshlo ◴[] No.41892302{5}[source]
You explained why 10G Ethernet cables are expensive, but why should it be so expensive to put a 10G-capable port on the computer compared to the other ports?
replies(1): >>41892394 #
140. kccqzy ◴[] No.41892394{6}[source]
Did you completely misunderstand OP? The 10G Ethernet cables are not expensive. In a pinch, even your Cat 5e cable is capable of 10G Ethernet albeit at a shorter distance than Cat 6 cable. Even then, it can be at least a dozen times longer than a similar USB or HDMI or DisplayPort cable.
replies(1): >>41893195 #
141. indrora ◴[] No.41892498{3}[source]
When I was younger, I played a lot of cs1.6 and hldm. Living in rural New Mexico, my ping times were often 150-250ms.

DSL kills.

replies(1): >>41893340 #
142. ajb ◴[] No.41892516{5}[source]
The GP's point is that if middle boxes couldn't rewrite the header, NAT would be impossible. And if NAT were impossible, IPv4 would have died years ago, because NAT is what allowed more computers than addresses.
replies(1): >>41893989 #
143. deathanatos ◴[] No.41892523{4}[source]
… I mean, in theory someone could craft some protocol that just starts with speaking Websockets or starts with some other handshake¹, I suppose, but the overwhelming majority of the uses of websockets out there are going to be over HTTP, as that's what a browser speaks, and the client is quite probably a browser.

> A careful reading of RFC6455 only mentions the handshake and its response must be a static string resembling a header in style of RFC2616 (HTTP), but a single static string is not HTTP.

You're going to have to cite the paragraph, then, because that is most definitely not what RFC 6455 says. RFC 6455 says,

> The handshake consists of an HTTP Upgrade request, along with a list of required and optional header fields.

That's not "a single static string". You can't just say "are the first couple of bytes of the connection == SOME_STATIC", as that would not be a conforming implementation. (That would just be a custom protocol with its own custom upgrade-into-Websockets, as mentioned in the first paragraph, but if you're doing that, you might as well just ditch that and just start in Websockets.)

¹(i.e., I grant the RFC's "However, the design does not limit WebSocket to HTTP, and future implementations could use a simpler handshake", but making use of that to me that puts us solidly in "custom protocol" land, as conforming libraries won't interoperate.)
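
The non-static part of the handshake is easy to see in code: the server's Sec-WebSocket-Accept value is derived from the client's random Sec-WebSocket-Key, so a fixed response string cannot be conforming. A small sketch using OpenSSL (build with -lcrypto; the key is the example value from RFC 6455):

    /* Compute Sec-WebSocket-Accept per RFC 6455:
     * base64( SHA1( Sec-WebSocket-Key + fixed GUID ) ). */
    #include <openssl/evp.h>
    #include <openssl/sha.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        const char *key  = "dGhlIHNhbXBsZSBub25jZQ==";   /* RFC 6455 example key */
        const char *guid = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

        char concat[128];
        snprintf(concat, sizeof concat, "%s%s", key, guid);

        unsigned char digest[SHA_DIGEST_LENGTH];
        SHA1((const unsigned char *)concat, strlen(concat), digest);

        unsigned char accept[64] = {0};
        EVP_EncodeBlock(accept, digest, SHA_DIGEST_LENGTH);

        /* Prints s3pPLMBiTxaQ9kYGzzhZRbK+xOo= , matching the RFC's example. */
        printf("Sec-WebSocket-Accept: %s\n", accept);
        return 0;
    }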

replies(1): >>41894003 #
144. toast0 ◴[] No.41892527{3}[source]
> A modern big server should be able to have tens of millions of open TCP connections at least, but to do that well you have to do hacks like running a bunch of pointless VMs.

Inbound connections? You don't need to do anything other than make sure your fd limit is high, and maybe avoid being IPv4-only while having too many users behind the same CGNAT.

Outbound connections are harder, but hopefully you don't need millions of connections to the same destination, or if you do, hopefully they support IPv6.

When I ran millions of connections through HAProxy (bare TCP proxy, just some peeking to determine the upstream), I had to do a bunch of work to make it scale, but not because of port limits.
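
"Make sure your fd limit is high" in code, for the per-process part (a sketch; the hard limit itself comes from limits.conf, systemd's LimitNOFILE, etc.):

    /* Raise this process's soft open-file limit to the hard limit,
     * so millions of inbound TCP connections aren't capped at 1024. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) == 0) {
            rl.rlim_cur = rl.rlim_max;       /* soft limit -> hard limit */
            if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
                perror("setrlimit");
            printf("fd limit now %llu\n", (unsigned long long)rl.rlim_cur);
        }
        return 0;
    }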

145. spockz ◴[] No.41892578{6}[source]
Apparently there is the https://store.ui.com/us/en/products/us-xg-6poe from Ubiquity. It only has 4 10GbE ports but they all have PoE.
146. AlienRobot ◴[] No.41892598[source]
Anecdote: I was having trouble accessing wordpress.org. When I started using Wordpress, I could access the documentation just fine, but then suddenly I couldn't access the website anymore. I dual boot Linux, so it wasn't Windows' fault. I could ping them just fine. I tried three different browsers with the same issue. It's just that when I accessed the website, it would get stuck and not load at all, and sometimes pages would just stop loading mid-way.

Today I found the solution. Disable "Experimental QUIC Protocol" in Chrome settings.

This makes me kind of worried because I've had issues accessing wordpress.org for months. There was no indication that this was caused by QUIC. I just managed to realize it because there was QUIC-related error in devtools that appeared only sometimes.

I wonder what other websites are rendered inaccessible by this protocol and users have no idea what is causing it.

147. AlienRobot ◴[] No.41892611{3}[source]
Why am I beta testing unstable software?
replies(2): >>41892983 #>>41897400 #
148. kijin ◴[] No.41892614{4}[source]
High-latency networks are going away, too, with Cloudflare eating the web alive and all the other major clouds adding PoPs like crazy.
149. jiggawatts ◴[] No.41892619{6}[source]
These things are nice-to-have until they become sufficiently widespread that typical consumer applications start to require the bandwidth. That comes much later.

E.g.: 8K 60 fps video streaming benefits from data rates up to about 1 Gbps in a noticeable way, but that's at least a decade away from mainstream availability.

replies(2): >>41893367 #>>41896025 #
150. jiggawatts ◴[] No.41892626{7}[source]
I heard from several sources that what they do is give the apartment builder a paper bag of cash in exchange for the right to use their wires instead of the NBN. Then they gouge the users with higher monthly fees.
replies(1): >>41892945 #
151. hakfoo ◴[] No.41892627{7}[source]
Isn't the expensive part more the assembly aspect? For Cat 6 the plugs and keystone jacks add up to a few dollars per port, and the crimper is like $20. I understand building your own fibre cables-- if you don't want to thread them through walls without the heads pre-attached, for example-- involves more sophisticated glass-fusion tools that are fairly expensive.

A rental service might help there, or a call-in service-- the 6 hours of drilling holes and pulling fibre can be done by yourself, and once it's all cut to rough length, bring out a guy who can fuse on 10 plugs in an hour for $150.

replies(3): >>41892963 #>>41893860 #>>41894283 #
152. ocdtrekkie ◴[] No.41892686{4}[source]
This ignores... a lot of reality. Like the fact that when IP was designed, the idea of every individual network device having to run its own firewall was impractical performance-wise, and decades later... still not really ideal.

There's definitely some benefits to glean from a zero trust model, but putting a moat around your network still helps a lot and NAT is probably the best accidental security feature to ever exist. Half the cybersecurity problems we have are because the cloud model has normalized routing sensitive behavior out to the open Internet instead of private networks.

My middleboxes will happily be configured to continue to block any traffic that refuses to obey them. (QUIC and ECH inclusive.)

replies(1): >>41893112 #
153. bdd8f1df777b ◴[] No.41892709[source]
As a Chinese whose latency to servers outside China often exceeds 300ms, I'm a staunch supporter of QUIC. The difference is night and day.
154. toast0 ◴[] No.41892726{3}[source]
I've run a big webserver that served decent-sized APK and other app downloads (and a bunch of small files and whatnot). I had to cap the maximum outgoing window to keep overall memory within limits.

IIRC, servers had 64GB of RAM and sendbufs were capped at 2MB. I was also dealing with a kernel deficiency that would leave the sendbuf allocated if the client disappeared in LAST_ACK. (This stems from a gap in the state description in the 1981 RFC, written before my birth.)

replies(1): >>41894769 #
155. astrange ◴[] No.41892802{4}[source]
No one ever uses SCTP. It's pretty unclear to me why any OSes do include it; free OSes seem to like junk drawers of network protocols even though they add to the security surface in kernel land.
replies(5): >>41892937 #>>41892986 #>>41893372 #>>41893981 #>>41895474 #
156. astrange ◴[] No.41892805{3}[source]
> (Fun bonus story: I noticed significant drops in throughput when using battery on a MacBook. Something to do with the efficiency cores I assume.)

That sounds like the thread priority/QoS was incorrect, but it could be WiFi or something.

157. dilyevsky ◴[] No.41892865{3}[source]
Cloudflare tunnels work over quic so this is not entirely correct
158. dcow ◴[] No.41892920{4}[source]
Now that's a horse of a different color! I'm already pining for this alt reality. Middle-boxes and everyone touching them ruined the internet.
159. kelnos ◴[] No.41892937{5}[source]
Does anyone even build SCTP support directly into the kernel? Looks like Debian builds it as a module, which I'm sure I never have and never will load. Security risk seems pretty minimal there.

(And if someone can somehow coerce me into loading it, I have bigger problems.)

replies(2): >>41893439 #>>41895319 #
160. dbaggerman ◴[] No.41892945{8}[source]
When I was there NBNCo hadn't really moved into the inner city yet. We did have some kind of financial agreement with the building developer/management to install our VDSL DSLAMs in their comms room. It wouldn't surprise me if those payments got shadier and more aggressive as the NBN coverage increased.
161. tomxor ◴[] No.41892952{5}[source]
Then why isn't it everywhere? It's been practical for over 40 years now.
replies(2): >>41893719 #>>41900468 #
162. Dylan16807 ◴[] No.41892963{8}[source]
If you particularly want to use a raw spool, then yes that's an annoying cost. If you buy premade cables for an extra $5 each then it's fine.
replies(2): >>41893210 #>>41893331 #
163. FridgeSeal ◴[] No.41892983{4}[source]
Because Google puts whatever they want in their browser for you to beta test and you’ll be pleased about it, peasant /s.
164. supriyo-biswas ◴[] No.41892986{5}[source]
The telecom sector uses SCTP in lots of places.
165. eptcyka ◴[] No.41893050{3}[source]
One does not need to, and should not, send one packet per syscall.
replies(3): >>41894327 #>>41894736 #>>41895201 #
166. mh- ◴[] No.41893107{4}[source]
I don't remember the details offhand, but yes - unless Chrome's network simulation has been rewritten in the last few years, it doesn't do a good job of approximating real world network conditions.

It's a lot better than nothing, and doing it realistically would be a lot more work than what they've done, so I say this with all due respect to those who worked on it.

167. codexon ◴[] No.41893112{5}[source]
Even now, you can saturate a modern cpu core with only 1 million packets per second.
168. fulafel ◴[] No.41893147{3}[source]
That's how the internet works: there's no guaranteed delivery, and TCP bandwidth estimation is based on packets starting to get dropped when you send too many.
169. conradev ◴[] No.41893160{3}[source]
Looks like it’s being worked on: https://lwn.net/Articles/989623/
replies(1): >>41896868 #
170. reshlo ◴[] No.41893195{7}[source]
I did misunderstand it, because looking at it again now, they spent the entire post talking about how difficult it is to make the cables, except for the very last sentence where they mention die area one time, and it’s still not clear that they’re talking about die area for something that’s inside the computer rather than a chip that goes in the cable.

> Look at the supported lengths and cables. … relatively generic cables. Your USB-C or HDMI connections cannot go nearly as far and require significantly more tightly controlled cables and shielding. … They used a long list of tricks to squeeze every possible dB of SNR out of those cables.

replies(1): >>41893805 #
171. hakfoo ◴[] No.41893210{9}[source]
A practical drawback to premade cables is the need for a larger hole to accommodate the pre-attached connector. There's also a larger gap that needs to be plugged around the cable to prevent leaks into the wall.

My ordinary home-centre electric drill and an affordable ~7mm masonry bit let me drill a hole in stucco large enough to accept bare cables, with a very narrow gap to worry about.

172. tjoff ◴[] No.41893322[source]
Industry will do absolutely anything, except making lightweight sites.

We had instant internet in the late 90s, if you were lucky enough to have a fast connection. The pages were small and there was barely any JavaScript. You can still find such fast-loading lightweight pages today and the experience is almost surreal.

It feels like the page has completely loaded before you even release the mouse button.

If the user experience had at least been better it might have been tolerable, but we didn't get that either.

replies(6): >>41893360 #>>41893625 #>>41893919 #>>41894650 #>>41895649 #>>41896257 #
173. inferiorhuman ◴[] No.41893331{9}[source]
Where are you finding them for that cheap? OP is talking about 20GBP for a run of fiber. If I look at, for instance, Ubiquiti their direct attach cables start at $13 for 0.5 meter cables.
replies(1): >>41893480 #
174. somat ◴[] No.41893340{4}[source]
I used to play netquake (not quakeworld) at up to 800 ms of lag; past that was too much for even young, stupid me.

For them that don't know the difference: netquake was the original strict client-server version of quake. You hit the forward key, it sends that to the server, and the server sends back where you moved. quakeworld was the client-side prediction enhancement that came later: you hit forward, the client moves you forward and sends it to the server at the same time, and if there are differences they get reconciled later.

For the most part client-side prediction feels better to play. However, when there are network problems and large amounts of lag, a lot of artifacts start to show up: rubberbanding, jumping around, hits that don't register. Pure client-server feels worse - everything gets sluggish and mushy - but movement is a little more predictable and logical and can sort of be anticipated.

I have not played quake in 20 years but one thing I remember is at past 800ms of lag the lava felt magnetic, it would just suck you in, every time.

175. OtomotO ◴[] No.41893360[source]
I am currently de-javascripting a React app of some project I am working on.

It's a blast. It's faster and way more resilient. No more state desync between frontend and backend.

I admit there is a minimum of javascript (currently a few hundred lines) for convenience.

I'll add a bit more to keep up the illusion that this is still a SPA.

I'll kill about 40k lines of React that way and about 20k lines of Kotlin.

I'll have to rewrite about 30k lines of backend code though.

Still, I love it.

replies(3): >>41893417 #>>41893847 #>>41904214 #
176. notpushkin ◴[] No.41893367{7}[source]
The other side of this particular coin is, when such bandwidth is widely available, suddenly a lot of apps that have worked just fine are now eating it up. I'm not looking forward to 9 gigabyte Webpack 2036 bundles everywhere :V
replies(1): >>41896006 #
177. spookie ◴[] No.41893372{5}[source]
And most of those protocols can be disabled under sysctl.conf.
178. vasilvv ◴[] No.41893375{3}[source]
I'm not sure where rustls comes from -- Chrome uses BoringSSL, and last time I checked, Mozilla's implementation used NSS.
179. NetOpWibby ◴[] No.41893417{3}[source]
Nature is healing. Love to see it.
180. jeroenhd ◴[] No.41893439{6}[source]
Linux and FreeBSD have had it for ages. Anything industrial too: Solaris, QNX, Cisco IOS.

SCTP is essential for certain older telco protocols, and it was also adopted in protocols developed for LTE. End users probably don't use it much, but the hardware their connections go through will speak SCTP at some level.

181. Dylan16807 ◴[] No.41893480{10}[source]
I was looking at patch cables. Ubiquiti's start at $4.80
182. bawolff ◴[] No.41893552{11}[source]
> I think my message here is only hard to understand if your salary (or personal worth etc) depends on not understanding it. It's really not that complex.

Just because someone disagrees with you, doesn't mean they don't understand you.

However, if you think google is making standards unneccessarily complex, you should read some of the standards from the 2000s (e.g. SAML).

183. jeroenhd ◴[] No.41893586[source]
These caps are a massive pain when downloading large games or OS upgrades for me as the end user. 500mbps is still fast but for a new protocol looking to replace older protocols, it's a big downside.

I don't really benefit much from http/3 or QUIC (I don't live in a remote area or host a cloud server) so I've already considered disabling either. A bandwidth cap this low makes a bigger impact than the tiny latency improvements.

184. bawolff ◴[] No.41893616{9}[source]
> They have abused this dominant position to eliminate all competitors except Apple and (so far) Mozilla.

But that's like all of them. Except edge but that was mostly dead before chrome came on the scene.

It seems like you are using embrace, extend, extinguish to just mean "be successful", but that's not what the term means. Being a market leader is not the same thing as embrace, extend, extinguish. Neither is putting competition out of business.

185. jeroenhd ◴[] No.41893618{5}[source]
That's only true if you use the kernel TCP stack. You can replicate the slow QUIC stack and do everything in user mode to get control back over what packets you accept (i.e. reject any that don't fit your TLS stream).
186. albertopv ◴[] No.41893624{3}[source]
I would be surprised if online games used TCP. Anyway, physics is still there, and light is fast, but only so fast. In 10ms it travels about 3,000km in a vacuum; NZ to the US west coast is about 11,000km, so a round trip under roughly 73ms is impossible even in principle. Cables are much longer than the great-circle distance, light is slower in a medium, add network device latency, and 200ms from NZ to the USA is not that bad.
replies(2): >>41894331 #>>41901615 #
187. pjmlp ◴[] No.41893625[source]
Lightweight sites don't make for shiny CVs.

Even on the backend, the golden goose now is to sell microservices via headless SaaS products connected through APIs - that's certainly going to perform.

https://macharchitecture.com/

However if those are the shovels people are going to buy, then those are the ones we have to stockpile, so is the IT world.

replies(1): >>41893767 #
188. attentive ◴[] No.41893658[source]
> I look at this article and consider the result pretty much as expected. Why? Because it pushes the flow control out of the kernel (and possibly network adapters) into userspace. TCP has flow-control and sequencing. QUICK makes you manage that yourself (sort of).

This implies that user space is slow. Yet some (most?) of the fastest high-performance TCP/IP stacks are implemented in user space.

replies(2): >>41893816 #>>41893862 #
189. nine_k ◴[] No.41893719{6}[source]
It is everywhere in new development. I remember Google buying tons of "dark fiber" capacity from telcos like 15 years ago; that fiber was likely laid for future needs 20-25 years ago. New apartment buildings in NYC just get fiber, with everything, including traditional "cable TV" with BNC connectors, powered by it.

But telcos have colossal copper networks, and they want to milk the last dollars from them before they have to be replaced, with digging and all. Hence the price segmenting, with slower "copper" plans and premium "fiber" plans, regardless of whether the building already has fiber.

Also, passive fiber interconnects have much higher losses than copper with RJ45s. This means you want to have no more than 2-3 connectors between pieces of active equipment, including from ISP to a building. This requires more careful planning, and this is why wiring past the apartment (or even office floor or a single-family house) level is usually copper Ethernet.

190. Zanfa ◴[] No.41893767{3}[source]
My feeling is that the microservice fad has passed… for now. But I’m sure it’ll be resurrected in a few years with a different name.
replies(3): >>41893799 #>>41895667 #>>41898941 #
191. pjmlp ◴[] No.41893799{4}[source]
Nah, it is only really taking off now in enterprise consulting, with products going SaaS: what used to be extension points via libraries is now only possible via webhooks and API calls, which naturally have to run somewhere - either microservices or serverless.
192. pzmarzly ◴[] No.41893802[source]
> I look at this article and consider the result pretty much as expected. Why? Because it pushes the flow control out of the kernel (and possibly network adapters) into userspace. TCP has flow-control and sequencing. QUICK makes you manage that yourself (sort of).

I truly hope the QUIC in Linux Kernel project [0] succeeds. I'm not looking forward to linking big HTTP/3 libraries to all applications.

[0] https://github.com/lxin/quic

replies(1): >>41896924 #
193. chgs ◴[] No.41893805{8}[source]
Their point was that systems like HDMI and bits of USB-C put the complexity in very expensive, very short cables.

Meanwhile a 10G port on my home router will run over copper for far longer. Not that I'm a fan, given the power use; fibre is much easier to deal with and will run for miles.

194. WesolyKubeczek ◴[] No.41893816{3}[source]
You have to jump contexts for every datagram, and you cannot offload checksumming to the network hardware.
195. chgs ◴[] No.41893835{6}[source]
A 3m 100g dac is 1/3 the price
196. pushupentry1219 ◴[] No.41893847{3}[source]
Honestly I used to be on the strict noscript JavaScript hate train.

But if your site works fast, loads fast, and uses _a little_ JS that actually improves the functionality and usability? I think that's completely fine. Minimal JS for the win.

replies(3): >>41893980 #>>41894499 #>>41894626 #
197. chgs ◴[] No.41893860{8}[source]
My single mode keystones pass through were about the same price as cat5, and pre-made cables were no harder to run than un terminated cat5.
198. formerly_proven ◴[] No.41893862{3}[source]
That's only if the entire stack is in usermode and talking directly to the NIC, with no kernel involvement beyond setup. That isn't the case with QUIC: it uses the normal sockets API to send/recv UDP.
199. chgs ◴[] No.41893870{7}[source]
I don’t think my 10g coppers will drop to 10m. 100m sure, but 10m rings a bell.
200. chgs ◴[] No.41893876[source]
QUIC is all about an advertising company guaranteeing delivery of adverts to the consumer.

As long as the adverts arrive quickly the rest is immaterial.

201. kodama-lens ◴[] No.41893919[source]
When I was finishing university I bought into the framework-based web-development hype. I thought that "enterprise" web development had to be done this way, so I got some experience by migrating my homepage to a static Vue.js version. Binding view and state by passing the variable's name as a string felt off, extending the build environment seemed unnecessarily complex, and everything was slow and had to be done a certain way. But since everyone was using it, I figured it must be right.

I got over this view and just finished the new version of my page: raw HTML with some static-site-generator templating. The HTML size went down 90%, the JS usage went down 97%, and the build time is now 2s instead of 20s. The user experience is better and I get 30% more hits since the new version.

The web could be so nice if we used less of it.

replies(1): >>41898722 #
202. tsimionescu ◴[] No.41893968{4}[source]
I completely disagree with this take.

First of all, NAT is what saved the Internet from being forked. IPv6 transition was a pipe dream at the time it was first proposed, and the vast growth in consumers for ISPs that had just paid for expensive IPv4 boxes would never have resulted in them paying for far more expensive (at the time) IPv6 boxes, it would have resulted in much less growth, or other custom solutions, or even separate IPv4 networks in certain parts of the world. Or, if not, it would have resulted in tunneling all traffic over a protocol more amenable to middle boxes, such as HTTP, which would have been even worse than the NAT happening today.

Then, even though it was unintentional, NAT and CGNAT are what ended up protecting consumers from IP-level tracking. If we had transitioned from IPv4 directly to IPv6, without the decades of NAT, all tracking technology wouldn't have bothered with cookies and so on, we would have had the trivial IP tracking allowed by the one-IP-per-device vision. And with the entrenched tracking adware industry controlling a big part of the Internet and relying on tracking IPs, the privacy extensions to IPv6 (which, remember, came MUCH later in IPv6's life than the original vision for the transition) would never have happened.

I won't bother going into the other kinds of important use cases that other middle boxes support, which a hostile IPv4 would have prevented, causing even bigger problems. NAT is actually an excellent example of why IP's design decisions that allow middle boxes are a godsend, not a tragic mistake. Now hopefully we can phase out NAT in the coming years, as it has served its purpose and can honorably retire.

replies(1): >>41896125 #
203. OtomotO ◴[] No.41893980{4}[source]
Absolutely.

I want the basic functionality to work without JS.

But we have a working application and users are not hating it and used to it.

We rely on modals heavily. And for that I added (custom) JS. It's way simpler than alternatives and some things we do are not even possible without JS/WASM (via JS apis to manipulate the DOM) today.

I am pragmatic.

But as you mentioned it, personally I also use NoScript a lot and if a site refuses to load without JS it's a hard sell to me if I don't know it already.

204. lstodd ◴[] No.41893981{5}[source]
4g/LTE runs on it. So you use it too, via your phone.
replies(1): >>41894177 #
205. tsimionescu ◴[] No.41893989{6}[source]
Very unlikely. Most likely NAT would have happened at other layers of the stack (HTTP, for example), causing even more problems. Or the growth of the Internet would have stalled dramatically, as ISPs would have either raised prices sharply to account for investments in new and expensive IPv6 hardware, or simply stopped accepting new subscribers.
replies(1): >>41894692 #
206. austin-cheney ◴[] No.41894003{5}[source]
That is still incorrect. Once the handshake completes, the browser absolutely doesn't care about HTTP with regard to message processing over WebSockets. Therefore just achieve the handshake by any means and WebSockets will work correctly in the browser. The only browser-specific behavior of any importance is that RFC 6455 masking will be applied to all messages leaving the browser, and the browser will fail the connection on any masked message entering it.

> You can't just say

I can say that, because I have my own working code that proves it cross browser and I have written perf tools to analyze it with numbers. One of my biggest learnings about software is to always conduct your own performance measurements because developers tend to be universally wrong about performance assumptions and when they are wrong they are frequently wrong by multiple orders of magnitude.

As far as custom implementation goes you gain many liberties after leaving the restrictions of the browser as there are some features you don’t need to execute the protocol and there are features of the protocol the browser does not use.

replies(1): >>41896941 #
207. yunohn ◴[] No.41894048{5}[source]
This is one of those HN buzzword medley comments that has only rant, no substance.

- MS embrace extend extinguish

- Google is making the world complex

- Nth level manager is messed up

None of the above was connected to deliver a clear point, just thrust into the comment to sound profound.

208. astrange ◴[] No.41894177{6}[source]
Huh, didn't know that. But iOS doesn't support it, so it's not needed on the AP side even for wifi calling.
209. AndyMcConachie ◴[] No.41894183{4}[source]
A major mistake of the IETF was to not standardize IPv4 NAT. Had it been standardized early on there would be fewer problems with it.
210. maccard ◴[] No.41894283{8}[source]
Thanks - I genuinely didn't know. I assumed that you could "just" crimp it like CAT6, but a quick google leads me to spending quite a few hundred pounds on something like this[0].

That said;

> A rental service might help there, or a call-in service-- the 6 hours of drilling holes and pulling fibre can be done by yourself, and once it's all cut to rough length, bring out a guy who can fuse on 10 plugs in an hour for $150.

If you were paying someone to do it (rather than DIY) I'd wager the cost would be similar, as you're paying them for 6 hours of labour either way.

[0] https://www.cablemonkey.co.uk/fibre-optic-tool-kits-accessor...

211. jacobgorm ◴[] No.41894327{4}[source]
On platforms like macOS that don’t have UDP packet pacing you more or less have to.
212. Hikikomori ◴[] No.41894331{4}[source]
Speed of light in fiber is about 200 000km/s. Most of the latency is because of distance, modern routers have a forwarding latency of tens of microseconds, some switches can start sending out a packet before fully receiving it.
213. TechDebtDevin ◴[] No.41894333{6}[source]
Thanks for the response! Very interesting. Unfortunately the USA is a tumor on this planet. Born and Raised, this place is fucked and slowly fucking the whole world.
replies(1): >>41895604 #
214. superjan ◴[] No.41894376[source]
As an alternative to simulating latency: how about using a VPN service to test your website via Australia? I suppose that when it is easier to do, it is more likely that people will actually do this test.
replies(1): >>41894443 #
215. sokoloff ◴[] No.41894443{3}[source]
That's going to give you double (plus a bit) the latency that your users in Australia will experience.
replies(1): >>41894545 #
216. Tade0 ◴[] No.41894468[source]
I've been tasked with improving a system where a lot of the events relied on timing to be just right, so now I routinely click around the app with a 900ms delay, as that's the most that I can get away with without having the hot-reloading system complain.

Plenty of assumptions break down in such an environment and part of my work is to ensure that the user always knows that the app is really doing something and not just being unresponsive.

217. starspangled ◴[] No.41894499{4}[source]
What do you use that good javascipt for? And what is the excessive stuff that causes slowness and bloat? I'm not a web programmer, just curious.
replies(2): >>41895569 #>>41895617 #
218. youngtaff ◴[] No.41894505{3}[source]
Chrome's network emulation is a pretty poor simulation of the real world… it throttles on a per-request basis, so it can't simulate congestion due to multiple requests in flight at the same time.

You really need something like ipfw, dummynet, tc, etc. to do it at the packet level.

219. bell-cot ◴[] No.41894543{4}[source]
> It would have forced IPv6 migration immediately (no NAT) and forced endpoints to be secured...

There's a difference between "better roads not taken", and "taking this road would require that most of our existing cars and roads be replaced, simultaneously".

220. codetrotter ◴[] No.41894545{4}[source]
Rent a VPS or physical server in Australia. Then you will have approx the same latency accessing that dev server, that the Australians have reaching servers in your country.
221. selimnairb ◴[] No.41894626{4}[source]
Building a new app at work using Web Components and WebSockets for dynamism. I’m using Bulma for CSS, which is still about 300KiB. However, the site loads instantly. I’m not using a Javascript framework or bundler or any of that (not even npm!), just vanilla Javascript. It’s a dream to program and I love not having the complexity of a framework taking up space in my brain.
222. Flex247A ◴[] No.41894650[source]
Example of an almost instant webpage today: https://www.mcmaster.com/
replies(3): >>41894679 #>>41896641 #>>41901295 #
223. loufe ◴[] No.41894679{3}[source]
And users clearly appreciate it. I was going over some bolt types with a design guy at my workplace yesterday for a project, and his first instinct was to pull up the McMaster-Carr site to see what was possible. I don't know if we even order from them, since we go through purchasing folks, but the site is just brilliantly simple and elegant.
224. ajb ◴[] No.41894692{7}[source]
Your first scenario is plausible; the second I'm not sure about. Due to the growth rate, central routers had a very fast replacement cycle anyway, and edge devices mostly operated at layer 2, so they didn't much care about IP. (Maybe there was some device in the middle whose replacement cycle would have been an issue?) I worked at a major router semiconductor vendor, and I can tell you that all the products supported IPv6 at a hardware level for many, many years before significant deployment, and did not use it as a price differentiator. (Sure, they were probably buggy for longer than necessary, but that would have been shaken out earlier if the use had come earlier.) So I don't think the cost of routers was the issue.

The problem with IPv6, in my understanding, was that the transitional functions (NAT-PT etc.) were half-baked and a new set had to be developed. It is possible that disruption would have occurred if that had had to be done against an earlier address-exhaustion date.

225. tomohawk ◴[] No.41894736{4}[source]
On Linux, there is sendmmsg, which can send up to 1024 packets per call, but that is a far cry from a single syscall to send a 1GB file. With GSO, it is possible to send even more datagrams per call, but the absolute limit is 64KB * 1024 per syscall, and it is fiddly to pack datagrams so that this works correctly.

You might think you can send datagrams of up to 64KB, but due to limitations in how IP fragment reassembly works, you really must do your best to avoid IP fragmentation, so 1472 bytes is the largest payload in most circumstances.
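
A sketch of both techniques (Linux-specific; UDP_SEGMENT needs kernel 4.18+ and a libc that defines it, so treat the GSO part as an assumption about your system):

    /* Batch UDP sends: many datagrams per syscall with sendmmsg(), and
     * optionally let the kernel split one large buffer into MTU-sized
     * datagrams with UDP GSO (UDP_SEGMENT). */
    #define _GNU_SOURCE
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/udp.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    #define BATCH 64
    #define DGRAM 1472              /* stays under a 1500-byte MTU */

    static char payload[BATCH][DGRAM];

    int main(void) {
        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9);    /* discard port, nothing needs to listen */
        dst.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        /* Optional GSO: declare the segment size once; a single send() of up
         * to 64 KB is then split into DGRAM-sized datagrams by the kernel/NIC. */
        int seg = DGRAM;
        setsockopt(fd, IPPROTO_UDP, UDP_SEGMENT, &seg, sizeof seg);

        /* sendmmsg(): 64 datagrams, one user/kernel transition. */
        struct mmsghdr msgs[BATCH];
        struct iovec iovs[BATCH];
        memset(msgs, 0, sizeof msgs);
        for (int i = 0; i < BATCH; i++) {
            iovs[i].iov_base = payload[i];
            iovs[i].iov_len = DGRAM;
            msgs[i].msg_hdr.msg_iov = &iovs[i];
            msgs[i].msg_hdr.msg_iovlen = 1;
            msgs[i].msg_hdr.msg_name = &dst;
            msgs[i].msg_hdr.msg_namelen = sizeof dst;
        }
        int sent = sendmmsg(fd, msgs, BATCH, 0);
        return sent < 0;
    }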

replies(1): >>41895858 #
226. dan-robertson ◴[] No.41894756{4}[source]
Yeah I was thinking the same thing – in some video contexts with some video codecs you may care more about latency and may be able to use a codec that can cope with packet loss instead of requiring retransmission – except it seemed it wouldn't apply much to Netflix, where the latency requirement is looser and so retransmission ought to be fine.

Maybe one advantage of HTTP/3 would be handling IP changes, but I'm not sure this matters much because you can already resume downloads fine in HTTP/1.1 if the server supports range requests (which it very likely does for video).

227. dan-robertson ◴[] No.41894769{4}[source]
I wonder if there's some way to reduce this server-side memory requirement. I thought that was part of the point of sendfile, but I might be mistaken. Unfortunately sendfile isn't so suitable nowadays because of TLS. But maybe if you could do TLS offload and then sendfile, the OS could get away with less memory for sendbufs.
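
For what it's worth, Linux has had exactly this combination since kTLS landed (TLS_TX in 4.13): the handshake still happens in userspace, but the negotiated record keys are handed to the kernel, after which plain sendfile() on the socket goes out encrypted. A rough sketch, with the key material assumed to come from your TLS library after the handshake:

    /* Hand TLS 1.2 AES-GCM keys from a completed userspace handshake to the
     * kernel, then serve a file with sendfile() so payload bytes never cross
     * into userspace. Placeholder key material; no error handling. */
    #include <linux/tls.h>
    #include <netinet/tcp.h>
    #include <string.h>
    #include <sys/sendfile.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    void serve_file_ktls(int sock, int file_fd, off_t file_size,
                         const unsigned char *key, const unsigned char *iv,
                         const unsigned char *salt, const unsigned char *rec_seq) {
        /* Attach the kernel TLS ULP to this TCP socket. */
        setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls"));

        struct tls12_crypto_info_aes_gcm_128 ci;
        memset(&ci, 0, sizeof ci);
        ci.info.version = TLS_1_2_VERSION;
        ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
        memcpy(ci.key, key, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
        memcpy(ci.iv, iv, TLS_CIPHER_AES_GCM_128_IV_SIZE);
        memcpy(ci.salt, salt, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
        memcpy(ci.rec_seq, rec_seq, TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);

        /* From here on, kernel-side writes on this socket are encrypted. */
        setsockopt(sock, SOL_TLS, TLS_TX, &ci, sizeof ci);

        off_t off = 0;
        while (off < file_size)
            if (sendfile(sock, file_fd, &off, file_size - off) <= 0)
                break;
    }

Whether that actually shrinks sendbuf memory depends on the retransmission path, but it does remove the userspace copy and lets sendfile work under TLS.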
228. kbolino ◴[] No.41895155{4}[source]
The only mechanism I can think of that could have been used for that purpose, and was publicly known about (to at least some extent) in the late 1970s, would be RSA. That is strong crypto, or at least we know it is when used properly today, but it's unlikely the authors of IP would have known about it. Even if they did, the logistical challenges of key distribution would have sunk its use, and they would almost certainly have fallen into one of the traps in implementing it that took years to discover, and the key sizes that would have been practical for use ca 1980 would be easy to break by the end of the 1990s.

Simply put, this isn't a road not taken, it's a road that didn't exist.

229. intelVISA ◴[] No.41895201{4}[source]
Anyone pushing packets seriously doesn't even use syscalls...
230. LittleOtter ◴[] No.41895205[source]
This paper was posted a month ago: https://news.ycombinator.com/item?id=41484991

Now it's back on the HN front page. It seems people are quite interested in this topic.

231. jpambrun ◴[] No.41895238[source]
This paper seems to be neglecting the effect of latency and packet loss. From my understanding, the biggest issue with TCP is the window sizing that gets cut every time a packet gets lost or arrives out of order, thus killing throughput. The latency makes that more likely to happen and makes the effect last longer.

This paper needs multiple latency simulations, some packet loss and latency jitter to have any value.

replies(1): >>41895273 #
232. dgacmu ◴[] No.41895273[source]
This is a bit of a misunderstanding. A single out of order packet will not cause a reduction; tcp uses three duplicate acks as a loss signal. So the packet must have been reordered to arrive after 3 later packets.

Latency does not increase the chances of out of order packet arrival. Out of order packet arrival is usually caused by multipath or the equivalent inside a router if packets are handled by different stream processors (or the equivalent). Most routers and networks are designed to keep packets within a flow together to avoid exactly this problem.

However, it is fair to say that traversing more links and routers probably increases the chance of out of order packet delivery, so there's a correlation in some way with latency, but it's not really about the latency itself - you can get the same thing in a data center network.
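
A toy sketch of the sender-side heuristic being described (classic fast retransmit only; real stacks layer SACK, RACK, etc. on top):

    /* Classic fast-retransmit heuristic: a lone reordered packet produces one
     * or two duplicate ACKs, which are ignored; only the third duplicate ACK
     * is treated as loss and triggers retransmission (and a cwnd reduction). */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t last_ack = 0;
    static int dup_acks = 0;

    void on_ack(uint32_t ack_no) {
        if (ack_no == last_ack) {
            if (++dup_acks == 3)
                printf("3 dup ACKs for %u: fast retransmit, cut cwnd\n", ack_no);
        } else {
            last_ack = ack_no;    /* new data acknowledged: reset the counter */
            dup_acks = 0;
        }
    }

    int main(void) {
        uint32_t acks[] = {1000, 2000, 2000, 3000,    /* mild reordering: no action */
                           3000, 3000, 3000};         /* third duplicate: retransmit */
        for (unsigned i = 0; i < sizeof acks / sizeof acks[0]; i++)
            on_ack(acks[i]);
        return 0;
    }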

233. rjsw ◴[] No.41895319{6}[source]
I added it to NetBSD and build it into my kernels, it isn't enabled by default though.

Am part way through adding NAT support for it to the firewall.

234. j1elo ◴[] No.41895474{5}[source]
SCTP is exactly how you establish a data communication link with the very modern WebRTC protocol stack (and is rebranded to "WebRTC Data Channels"). Granted, it is SCTP-over-UDP. But still.

So yes, SCTP is under the covers getting a lot more use than it seems, still today. However all WebRTC implementations usually bring their own userspace libraries to implement SCTP, so they don't depend on the one from the OS.

235. lbriner ◴[] No.41895497[source]
Funny though, we all implicitly buy into "QUIC is the new http/2" or whatever because fast = good without really understanding the details.

It's like buying the new 5G cell phone because it is X times faster than 4G even though 1) My 4G phone never actually ran at the full 4G speed and 2) The problem with any connection is almost never due to the line speed of my internet connection but a misbehaving DNS server/target website/connection Mux at my broadband provider. "But it's 5G"

The same thing cracks me up when people advertise "fibre broadband" by showing people watching TV with the wind blowing in their hair, as if that's how it works (not!). I used to stream on my 8Mb connection, so while 300Mb might be good for some things, I doubt I would notice much difference.

236. graemep ◴[] No.41895569{5}[source]
Two examples that come up a lot for me:

1. filtering a drop down list by typing rather than scrolling through lots of options to pick one 2. Rearranging items with drag and drop

The excessive stuff is requiring a whole lot of scripts and resources to load before you display a simple page of information.

replies(1): >>41897193 #
237. oasisaimlessly ◴[] No.41895604{7}[source]
This is about Australia, not the USA.
238. _heimdall ◴[] No.41895617{5}[source]
My rule of thumb is to render HTML where the state actually lives.

In a huge majority of cases I come across that is on the server. Some things really are client-side only though, think temporary state responding to user interactions.

Either way I also try really hard to make sure the UI is at least functional without JS. There are times that isn't possible, but those are pretty rare in my experience.

239. wlll ◴[] No.41895649[source]
My personal projects are all server rendered HTML. My blog (a statically rendered Hugo site) has no JS at all, my project (Rails and server rendered HTML) has minimal JS that adds some nice to have stuff but nothing else (it works with no JS). I know they're my sites, but the experience is just so much better than most of the rest of the web. We've lost so much.
replies(1): >>41898738 #
240. _heimdall ◴[] No.41895667{4}[source]
I've come across quite a few job postings in the last could weeks looking for senior engineers with experience migrating monoliths to micro services. Not sure if the fad is still here or if those companies are just slow to get onboard.

There are still good uses for micro services. Specific services can gain a lot from it, the list of those types of services/apps is pretty short in my experience though.

241. Veserv ◴[] No.41895858{5}[source]
Why does 1 syscall per 1 GB versus 1 syscall per 1 MB have any meaningful performance cost?

syscall overhead is only on the order of 100-1000 ns. Even at a blistering per core memory bandwidth of 100 GB/s, just the single copy fundamentally needed to serialize 1 MB into network packets costs 10,000 ns.

The ~1,000 syscalls needed to transmit a 1 GB file would incur excess overhead of 1 ms versus 1 syscall per 1 GB.

That is at most a 10% overhead if the only thing your system call needs to do is copy the data. As in it takes 10,000 ns total to transmit 1,000 packets meaning you get 10 ns per packet to do all of your protocol segmentation and processing.

The benchmarks in the paper show that the total protocol execution time for a 1 GB file using TCP is 4 seconds. The syscall overhead for issuing 1,000 excess syscalls should thus be ~1/4000 or about 0.025% which is totally irrelevant.

The difference between the 4 second TCP number and the 8 second QUIC number can not be meaningfully traced back to excess syscalls if they were actually issuing max size sendmmsg calls. Hell, even if they did one syscall per packet that would still only account for a mere 1 second of the 4 second difference. It would be a stupid implementation for sure to have such unforced overhead, but even that would not be the actual cause of the performance discrepancy between TCP and QUIC in the produced benchmarks.

242. wkat4242 ◴[] No.41896006{8}[source]
Yeah for me it's mostly ollama models lol. It is nice to see it go fast. But even on my 1gbit it feels fast enough.
243. wkat4242 ◴[] No.41896025{7}[source]
Yeah the problem here is also that I don't have the router setup to actually distribute that kind of bandwidth. 2.5Gbit max..

And the internal network is 1 Gbit too. So it'll take (and cost) more than just changing my subscription.

Also my TV is still 1080p lol

244. api ◴[] No.41896125{5}[source]
The cost of NAT is much higher than you think. If computers could just trivially connect to each other then software might have evolved collaboration and communication features that rely on direct data sharing. The privacy and autonomy benefits of that are enormous, not to mention the reduced need for giant data centers.

It’s possible that the cloud would not have been nearly as big as it has been.

The privacy benefits of NAT are minor to nonexistent. In most of the developed world most land connections get one effectively static V4 IP which is enough for tracking. Most tracking relies primarily on fingerprints, cookies, apps, federated login, embeds, and other methods anyway. IP is secondary, especially with the little spies in our pockets that are most people’s phones.

replies(1): >>41902268 #
245. nbittich ◴[] No.41896257[source]
Tried that on my website (bittich.be), it's only 20ish kb gzipped. I could have done better if I didn't use tailwind css :(
replies(1): >>41904255 #
246. 8n4vidtmkvmk ◴[] No.41896641{3}[source]
Someone did an analysis of that site on tiktok or YouTube. It's using some tricks to speed things up, like preloading the html for the next page on hover and then replacing the shell of the page on click. So pre-rendering and prefetching. Pretty simple to do and effective apparently.
247. throawayonthe ◴[] No.41896868{4}[source]
Also, it looks like current QUIC performance issues are a consideration, tested in section 4:

> The performance gap between QUIC and kTLS may be attributed to:

  - The absence of Generic Segmentation Offload (GSO) for QUIC.
  - An additional data copy on the transmission (TX) path.
  - Extra encryption required for header protection in QUIC.
  - A longer header length for the stream data in QUIC.
248. giuscri ◴[] No.41896920{4}[source]
Like which one?
249. deathanatos ◴[] No.41896941{6}[source]
> That is still incorrect. Once the handshake completes the browser absolutely doesn’t care about HTTP with regard to message processing over WebSockets.

I never made any claim to the contrary.

> Therefore just achieve the handshake by any means and WebSockets will work correctly in the browser.

At which point you're parsing a decent chunk of HTTP.

> I can say that, because I have my own working code that proves it

Writing code doesn't prove anything; code can have bugs. According to the standard portion I quoted, your code is wrong. A conforming request isn't required to match.

> I have written perf tools to analyze it with numbers. One of my biggest learnings about software is to always conduct your own performance measurements because developers tend to be universally wrong about performance assumptions and when they are wrong they are frequently wrong by multiple orders of magnitude.

Performance has absolutely nothing to do with this.

Even if such an implementation appears to work today in browsers, this makes situations with a still-conforming UA damn near impossible to debug, and there are no guarantees about header ordering, casing, etc. that would mean it continues to work. Worse, non-conformant implementations like this are the sort of thing that results in ossification.

replies(1): >>41898380 #
250. LtWorf ◴[] No.41897193{6}[source]
Doesn't the combo box input field already do this?
replies(1): >>41902809 #
251. stouset ◴[] No.41897400{4}[source]
You’re the one choosing to use it.
replies(1): >>41897870 #
252. AlienRobot ◴[] No.41897870{5}[source]
Okay, which browser doesn't come with it enabled by default? Chrome, Vivaldi, and Firefox do. Am I supposed to use Edge?
253. austin-cheney ◴[] No.41898380{7}[source]
In my own implementation I wrote a queue system to force message ordering and support offline messaging state and so forth. Control frames can be sent at any time irrespective of message ordering without problems, however.

In the end an in house implementation that allows custom extensions is worth far more than any irrational unfounded fears. If in the future it doesn’t work then just fix the current approach to account for those future issues. In the meantime I can do things nobody else can because I have something nobody else is willing to write.

What’s interesting is that this entire thread is about performance concerns. If you raise a solution that people find unfamiliar all the fear and hostility comes out. To me such contrary behavior suggests performance, in general, isn’t a valid concern to most developers in comparison to comfort.

254. mmcnl ◴[] No.41898722{3}[source]
Choose the right tool for the job. Every engineering decision is a trade-off. No one blames the hammer when it's used to insert a screw into a wall either.

SPA frameworks like Vue, React and Angular are ideal for web apps. Web apps and web sites are very different. For web apps, initial page load doesn't matter a lot and business requirements are often complex. For websites it's exactly the opposite. So if all you need is a static website with little to no interactivity, why did you choose a framework?

replies(1): >>41904274 #
255. mmcnl ◴[] No.41898738{3}[source]
I have two websites written in JS that render entirely server-side. They are blazing fast, minimal in size and reach 100/100 scores on all criteria with Lighthouse. On top of that they're highly interactive, no build step required to publish a new article.
256. greenchair ◴[] No.41898941{4}[source]
yes it has for early adopters but there are still lots of dinosaurs out there just now trying it out.
257. BenjiWiebe ◴[] No.41900468{6}[source]
I think our phone lines (the only buried cable here that can do data) are probably >40 years old. They're still selling DSL over it.
replies(1): >>41903548 #
258. rofrol ◴[] No.41901295{3}[source]
rewritten in next.js https://x.com/rauchg/status/1848033867145572654
259. reshlo ◴[] No.41901615{4}[source]
The total length of the relevant sections of the Southern Cross Cable is 12,135km, as it goes via Hawaii.

The main reason I made my original comment was to point out that the real numbers are more than double what the other commenter called “devastating” latency.

https://en.wikipedia.org/wiki/Southern_Cross_Cable

260. tsimionescu ◴[] No.41902268{6}[source]
End to end connectivity without a third party server for discovery is either complicated for the end-user (manually specifying IPs, ports, etc) or it relies on inherently insecure techniques like multicast/broadcast. And once you introduce a third party server that both peers connect to, establishing a connection even through NAT is not that much harder. And yes, NAT does have some costs, but transitioning to IPv6 also does, and I don't think that the Internet justified that cost at the time IPv4 addresses first started running out. NAT's cost is much more diffuse and in the future.

We'll see if this more direct communication actually happens as IPv6 becomes ubiquitous, but I for one doubt it. Especially since ISPs are not at all friendly to residential customers trying to run servers, often handing out dynamic prefixes or tiny allocations (single /128s, even) on IPv6. And I think the LTE network is decent evidence in support of my doubts: it was built from the ground up to be IPv6-only internally, and there are no stable IP guarantees anywhere.

As to the privacy benefits, those are real and have made IP tracking almost useless. Your public IP, even in the developed world, very commonly changes daily or weekly. Even worse for trackers, when it does change, it changes to an IP that someone else was using.

replies(1): >>41903927 #
261. graemep ◴[] No.41902809{7}[source]
You are right, it does.

A better example would be dynamically loading the list of options when it is very long and shipping the entire list up front would make the page much larger.
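
Something like this, where none of the options ship with the page and matches are fetched as the user types (the /api/options endpoint and element ids are made up):

    const input = document.querySelector<HTMLInputElement>("#country")!;
    const list = document.querySelector<HTMLDataListElement>("#country-options")!;

    input.addEventListener("input", async () => {
      const q = input.value.trim();
      if (q.length < 2) return; // don't fetch the whole (huge) list
      const res = await fetch(`/api/options?q=${encodeURIComponent(q)}`);
      const matches: string[] = await res.json();
      list.replaceChildren(
        ...matches.map((v) => Object.assign(document.createElement("option"), { value: v }))
      );
    });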

262. nine_k ◴[] No.41903548{7}[source]
Coaxial "cable TV" cables, also sometimes buried, can carry data just fine, at pretty high speeds, given the right electronics.
replies(1): >>41910998 #
263. nijave ◴[] No.41903914{6}[source]
I think 10GTek. However, there were only two of them, in the uplink ports of a 24x1Gbps switch in a server cabinet with decent airflow. They might have been getting up to 60°C, but I don't think they were hitting temperatures as high as you were describing. I've since replaced it with an 8x10Gbps Hasivo switch, so I can't check anymore.
264. api ◴[] No.41903927{7}[source]
> establishing a connection even through NAT is not that much harder.

This is false. Because of the inconsistency of NATs and other middleboxes out there, and the fact that many are broken, it's far less reliable. You end up having to relay some traffic, which imposes a cost that, unlike a third-party locator server, isn't trivial. At that point you're already losing the benefits of end-to-end connectivity.

Also, if end-to-end connectivity were easy, distributed location algorithms like DHTs could be built on top of it. With trivial end-to-end connectivity they're pretty easy to implement, and would be fast and reliable.

The way the Internet has developed has basically broken it for end to end connectivity, forcing everything into the cloud. That is far worse for privacy and autonomy (and cost, making everything a subscription) than IP tracking.

I think you're a little blinded by what is and unable to imagine an alternate path.

Evolution is very path dependent and small changes at one point make things massively different later. One less asteroid and we'd be warm blooded bird-reptile like things that laid eggs.

replies(1): >>41905997 #
265. nijave ◴[] No.41903967{6}[source]
I'm not sure what you're saying. The cable length is largely fixed/determined by the building you're running cable in. I'd rather spend an extra $100 on cable than start ripping open walls/floors/ceilings to get a slightly more optimal run length.

If it's new construction or you already have everything ripped open it's less of an issue.

replies(1): >>41906971 #
266. wwalexander ◴[] No.41904214{3}[source]
Check out mcmaster.com for an example of a highly optimized website using JS strictly to improve performance.
267. butlike ◴[] No.41904255{3}[source]
you should add a page to your website found at /trippin
replies(1): >>41904527 #
268. butlike ◴[] No.41904274{4}[source]
A hammer to insert a screw into the wall could be a shrewd way to bore a hole with a bigger gauge if you're missing a drill.
269. nbittich ◴[] No.41904527{4}[source]
Not sure I understand what you meant.
270. tsimionescu ◴[] No.41905997{8}[source]
Perhaps, but I'm not at all convinced. The hard problems of running distributed peer-to-peer services are not about end-to-end connectivity. While that is a problem, it's a relatively small hurdle; you can connect the vast majority of clients without enormous effort.

The much bigger problems are related to moderation, copyright enforcement, spam prevention, security. All of those are extremely hard if you don't have a centralized authority server.

Could Zoom have better quality more cheaply if it could easily do P2P connections for small meetings? Very likely. Could you make a fully distributed Zoom where anyone can call anyone else without a centralized authority server handling all calls? No, not without significant legal hurdles and effort on preventing malicious actors from spamming the network, from distributing illegal content, etc.

Also, back to middleboxes: not having NAT would not get rid of middleboxes. Even on IPv6, there will always be a stateful firewall blocking all outside connections to the internal network in any sane deployment, at least for home networks, and that firewall will probably be about as buggy as cheap NAT boxes are. And for corporate networks, you have all sorts of other middleboxes critical to the security of the network, including IDS and IPS systems, TLS interception to protect against data exfiltration, etc. Those will interfere with your traffic far more than relatively ordinary NAT boxes would.

271. Dylan16807 ◴[] No.41906971{7}[source]
I'm not saying 10gig itself should have been range-limited. I'm saying that if the reason it was expensive was cable limits and transmit power, both of those could have been addressed by cutting the range. And if cutting the range could have given us cheap, fast connections 15 years ago, we should have made it a variant. It could have become the default network port, and anyone who wanted full distance could have bought a card for it.

Instead we waited and waited before making slower versions of 10gig, and those are still very slow to roll out. Also, 2.5gig and 5gig seem especially consumer-oriented, so for those users a cheap but half-range 10gig would be all upside.

And 40gig can't reach 100m on any version of copper, so it's not like 100m is a sacred requirement.

272. exabrial ◴[] No.41908681{3}[source]
I'm aware... sigh. I've used this method to decrypt traffic already, and it's a giant PITA.
273. BenjiWiebe ◴[] No.41910998{8}[source]
I'm aware of that, but there are no coaxial cable TV lines here either. The only lines in our area that can provide data service are the copper phone lines.