
306 points by carlos-menezes | 4 comments
jrpelkonen:
Curl creator/maintainer Daniel Stenberg blogged about HTTP/3 in curl a few months ago: https://daniel.haxx.se/blog/2024/06/10/http-3-in-curl-mid-20...

One of the things he highlighted was the higher CPU utilization of HTTP/3, to the point where CPU can limit throughput.

I wonder how much of this is due to the immaturity of the implementations, and how much is inherent to the way QUIC was designed?
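
A rough sketch of why the CPU cost is at least partly structural: a userspace QUIC stack pays a syscall (plus AEAD decryption, ACK generation, and loss detection) per ~1500-byte datagram, work that the kernel and NIC largely batch away for TCP via GRO and checksum offload. The processDatagram hook below is hypothetical, standing in for a real QUIC library:

    // Shape of the receive loop a userspace QUIC stack runs: one syscall
    // per MTU-sized datagram, then crypto and ACK handling in userspace.
    package main

    import (
        "log"
        "net"
    )

    func main() {
        conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 4433})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        buf := make([]byte, 1500) // one MTU-sized datagram per read
        for {
            n, addr, err := conn.ReadFromUDP(buf) // syscall per packet
            if err != nil {
                log.Fatal(err)
            }
            // A real stack now does in userspace what TCP gets from the
            // kernel/NIC: decryption, ACK generation, loss detection, ...
            processDatagram(buf[:n], addr) // hypothetical placeholder
        }
    }

    func processDatagram(pkt []byte, from *net.UDPAddr) {
        _ = pkt // placeholder: header parse + AEAD decrypt + ACK bookkeeping
        _ = from
    }

Mature implementations claw much of this back with batching (recvmmsg, UDP GSO/GRO), which is why implementation maturity and protocol design are hard to disentangle here.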

therealmarv:
"immaturity of the implementations" is a funny wording here. QUIC was created because there is absolutely NO WAY that all internet hardware (including all middleware etc) out there will support a new TCP or TLS standard. So QUIC is an elegant solution to get a new transport standard on top of legacy internet hardware (on top of UDP).

In an ideal world we would create new TCP and TLS standards and replace and/or update all internet routers and hardware everywhere, worldwide, so that they could be implemented with less CPU utilization ;)
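
To make the trick concrete: to every middlebox on the path, QUIC is just an opaque UDP payload; only the endpoints parse it. A minimal sketch, decoding the version-independent long-header fields that RFC 8999 guarantees (everything beyond them is encrypted):

    // Decode the QUIC long-header invariants (RFC 8999) from a UDP payload.
    // A middlebox sees nothing else in the clear; the rest is ciphertext.
    package main

    import (
        "encoding/binary"
        "errors"
        "fmt"
    )

    type longHeader struct {
        version    uint32
        dcid, scid []byte
    }

    func parseLongHeader(p []byte) (*longHeader, error) {
        if len(p) < 6 || p[0]&0x80 == 0 { // high bit: long-header form
            return nil, errors.New("not a QUIC long header")
        }
        version := binary.BigEndian.Uint32(p[1:5])
        dcidLen := int(p[5])
        if len(p) < 6+dcidLen+1 {
            return nil, errors.New("truncated")
        }
        dcid := p[6 : 6+dcidLen]
        scidLen := int(p[6+dcidLen])
        rest := p[7+dcidLen:]
        if len(rest) < scidLen {
            return nil, errors.New("truncated")
        }
        return &longHeader{version: version, dcid: dcid, scid: rest[:scidLen]}, nil
    }

    func main() {
        // 0xC0: long header, fixed bit set; version 1; 4-byte DCID; empty SCID
        pkt := []byte{0xC0, 0x00, 0x00, 0x00, 0x01, 0x04, 0xAA, 0xBB, 0xCC, 0xDD, 0x00}
        h, err := parseLongHeader(pkt)
        if err != nil {
            panic(err)
        }
        fmt.Printf("version=%#x dcid=%x scid=%x\n", h.version, h.dcid, h.scid)
    }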

api:
A major mistake in IP’s design was to allow middle boxes. The protocol should have had some kind of minimal header auth feature to intentionally break them. It wouldn’t have to be strong crypto, just enough to make middle boxes impractical.
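
A purely hypothetical sketch of that idea (IP has no such field, and key distribution is hand-waved here): endpoints tag exactly the fields middleboxes like to rewrite, so any rewrite fails verification at the receiver:

    // Hypothetical "minimal header auth": a short keyed tag over the
    // NAT-mutable fields. Truncated to 4 bytes on purpose: it only needs
    // to make middlebox rewriting impractical, not to be strong crypto.
    package main

    import (
        "crypto/hmac"
        "crypto/sha256"
        "fmt"
        "net/netip"
    )

    func headerTag(key []byte, src, dst netip.Addr, srcPort, dstPort uint16) [4]byte {
        mac := hmac.New(sha256.New, key)
        mac.Write(src.AsSlice())
        mac.Write(dst.AsSlice())
        mac.Write([]byte{byte(srcPort >> 8), byte(srcPort), byte(dstPort >> 8), byte(dstPort)})
        var tag [4]byte
        copy(tag[:], mac.Sum(nil))
        return tag
    }

    func main() {
        key := []byte("per-flow key, origin unspecified") // the hand-waved part
        src := netip.MustParseAddr("192.0.2.10")
        dst := netip.MustParseAddr("198.51.100.7")

        sent := headerTag(key, src, dst, 50000, 443)

        // A NAT rewrites the source address; the receiver's check now fails.
        natted := netip.MustParseAddr("203.0.113.1")
        got := headerTag(key, natted, dst, 50000, 443)
        fmt.Println("tag still valid after NAT rewrite:", sent == got) // false
    }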

It would have forced IPv6 migration immediately (no NAT) and forced endpoints to be secured with local firewalls and better software instead of middle boxes.
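
For contrast, the rewrite a NAT performs today is only possible because nothing authenticates the header: overwrite a field, then patch the one's-complement checksum in place (RFC 1624) so nothing downstream notices. A sketch of that arithmetic:

    // Incremental checksum update per RFC 1624: HC' = ~(~HC + ~m + m').
    // This is the unglamorous core of NAT: rewrite, patch, forward.
    package main

    import "fmt"

    func csumUpdate(old, oldWord, newWord uint16) uint16 {
        sum := uint32(^old) + uint32(^oldWord) + uint32(newWord)
        for sum>>16 != 0 {
            sum = (sum & 0xFFFF) + (sum >> 16) // fold carries back in
        }
        return ^uint16(sum)
    }

    func main() {
        // Rewrite TCP source port 50000 -> 61000 and patch the checksum,
        // as a NAT does for every packet of every flow it carries.
        var checksum uint16 = 0x1c46 // example value, not from a real packet
        patched := csumUpdate(checksum, 50000, 61000)
        fmt.Printf("old=%#04x new=%#04x\n", checksum, patched)
    }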

The Internet would be so much simpler, faster, and more capable. Peer to peer would be trivial. Everything would just work. Protocol innovation would be possible.

Of course tech is full of better roads not taken. We are prisoners of network effects and accidents of history freezing ugly hacks into place.

1. johncolanduoni:
Making IPv4 headers resistant to tampering wouldn't have helped with IPv6 rollout, as routers (both customer and ISP) would still have needed updates to understand how to route packets with the new headers.
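
Concretely, forwarding is decided from a header the router was built to parse, and the two protocols diverge at the very first nibble; a tamper-proof IPv4 header would not teach a router what to do with a 6 there. A trivial illustration:

    // The first 4 bits of byte 0 are the version field in both IPv4 and
    // IPv6. A router that only implements IPv4 parsing has nothing useful
    // to do with a 6 here, however tamper-proof the v4 header is.
    package main

    import "fmt"

    func ipVersion(pkt []byte) int {
        if len(pkt) == 0 {
            return -1
        }
        return int(pkt[0] >> 4) // version nibble
    }

    func main() {
        fmt.Println(ipVersion([]byte{0x45, 0x00})) // 4: IPv4, IHL=5
        fmt.Println(ipVersion([]byte{0x60, 0x00})) // 6: IPv6
    }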
2. ajb:
The GP's point is that if middleboxes couldn't rewrite the header, NAT would be impossible. And if NAT were impossible, IPv4 would have died several years ago, because NAT allowed more computers than addresses.
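
A toy sketch of the bookkeeping behind "more computers than addresses": the NAT maps each private flow onto a port of one shared public address, so a single IPv4 address can front tens of thousands of concurrent flows (addresses below are documentation ranges, and the port allocator is deliberately naive):

    // Minimal NAT mapping table: private (addr, port) flows multiplexed
    // onto ports of one public address, in both directions.
    package main

    import (
        "fmt"
        "net/netip"
    )

    type flow struct {
        addr netip.Addr
        port uint16
    }

    type nat struct {
        public   netip.Addr
        nextPort uint16
        out      map[flow]uint16 // private flow -> public port
        in       map[uint16]flow // public port -> private flow
    }

    func (n *nat) translate(src flow) (netip.Addr, uint16) {
        if p, ok := n.out[src]; ok {
            return n.public, p
        }
        p := n.nextPort
        n.nextPort++ // toy allocator: ~64k concurrent flows, then trouble
        n.out[src] = p
        n.in[p] = src
        return n.public, p
    }

    func main() {
        n := &nat{
            public:   netip.MustParseAddr("203.0.113.1"),
            nextPort: 1024,
            out:      map[flow]uint16{},
            in:       map[uint16]flow{},
        }
        a := flow{netip.MustParseAddr("10.0.0.2"), 50000}
        b := flow{netip.MustParseAddr("10.0.0.3"), 50000}
        fmt.Println(n.translate(a)) // 203.0.113.1 1024
        fmt.Println(n.translate(b)) // 203.0.113.1 1025
    }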
3. tsimionescu:
Very unlikely. Most likely, NAT would have happened at other layers of the stack (HTTP, for example), causing even more problems. Or the growth of the Internet would have stalled dramatically, as ISPs would have either increased prices dramatically to account for investments in new and expensive IPv6 hardware, or simply stopped accepting new subscribers.
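
The HTTP-layer version of NAT already exists, in effect, as reverse proxies that multiplex one public address across many private hosts by Host header. A minimal sketch (backend names and addresses are made up):

    // One public listener fans requests out to private backends by Host
    // header: address sharing moved up the stack, with HTTP-level problems.
    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        backends := map[string]string{
            "a.example.com": "http://10.0.0.2:8080",
            "b.example.com": "http://10.0.0.3:8080",
        }

        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            target, ok := backends[r.Host]
            if !ok {
                http.Error(w, "unknown host", http.StatusBadGateway)
                return
            }
            u, err := url.Parse(target)
            if err != nil {
                http.Error(w, "bad backend", http.StatusInternalServerError)
                return
            }
            httputil.NewSingleHostReverseProxy(u).ServeHTTP(w, r)
        })

        log.Fatal(http.ListenAndServe(":8080", handler)) // the one shared address
    }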
4. ajb:
Your first scenario is plausible; the second I'm not sure about. Due to the growth rate, central routers had a very fast replacement cycle anyway, and edge devices mostly operated at layer 2, so they didn't much care about IP. (Maybe there was some device in the middle that would have had a shorter lifespan?) I worked at a major router semiconductor vendor, and I can tell you that all the products supported IPv6 at a hardware level for many, many years before significant deployment, and did not use it as a price differentiator. (Sure, they were probably buggy for longer than necessary, but that would have been shaken out earlier if the use had come earlier.) So I don't think the cost of routers was the issue.

The problem with IPv6, in my understanding, was that the transitional functions (NAT-PT etc.) were half-baked and a new set had to be developed. It is possible that disruption would have occurred if that had to be done against an earlier address-exhaustion date.
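
For reference, NAT-PT was eventually deprecated (RFC 4966) and the redeveloped set became NAT64/DNS64. The mechanical core of the replacement is just an address embedding (RFC 6052): the IPv4 address sits in the low 32 bits of the well-known 64:ff9b::/96 prefix. A sketch:

    // RFC 6052 address mapping used by NAT64: embed an IPv4 address in
    // the well-known 64:ff9b::/96 prefix.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func toNAT64(v4 netip.Addr) netip.Addr {
        prefix := netip.MustParseAddr("64:ff9b::").As16()
        b := v4.As4()
        copy(prefix[12:], b[:]) // v4 address goes in the low 32 bits
        return netip.AddrFrom16(prefix)
    }

    func main() {
        // 192.0.2.33 is the worked example from RFC 6052 itself.
        fmt.Println(toNAT64(netip.MustParseAddr("192.0.2.33"))) // 64:ff9b::c000:221
    }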