
422 points by km | 2 comments
michaelmior ◴[] No.41831072[source]
> various protocols (HTTP, SMTP, CSV) still "require" CRLF at the end of each line

What would be the benefit of updating legacy protocols to just use NL? You save a handful of bits at the expense of a lot of potential bugs. HTTP/1(.1) has mostly been replaced by HTTP/2 and later anyway.

Sure, it makes sense not to require CRLF with any new protocols, but it doesn't seem worth updating legacy things.

> Even if an established protocol (HTTP, SMTP, CSV, FTP) technically requires CRLF as a line ending, do not comply.

I'm hoping this is satire. Why intentionally introduce potential bugs for the sake of making a point?

FiloSottile ◴[] No.41831391[source]
Exactly. Please DO NOT mess with protocols, especially legacy critical protocols based on in-band signaling.

HTTP/1.1 was regrettably but irreversibly designed with security-critical parser alignment requirements. If two implementations disagree on whether `A:B\nC:D` contains a value for C, you can build a request smuggling gadget, leading to significant attacks. We live in a post-Postel world: only ever generate and accept CRLF in protocols that specify it, however legacy and nonsensical that might be.
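To make the failure mode concrete, here is a minimal sketch of that disagreement (hypothetical Python parsers, not taken from any real server):

    raw = b"A:B\nC:D\r\n"

    def lenient_headers(block):
        # Treats a bare LF as a line break, like many forgiving implementations.
        return [line.split(b":", 1) for line in block.splitlines() if line]

    def strict_headers(block):
        # Splits only on CRLF, as the HTTP/1.1 grammar specifies.
        return [line.split(b":", 1) for line in block.split(b"\r\n") if line]

    print(lenient_headers(raw))  # [[b'A', b'B'], [b'C', b'D']] -- C:D is its own header
    print(strict_headers(raw))   # [[b'A', b'B\nC:D']] -- C:D hides inside A's value

If a proxy parses one way and the origin server the other, an attacker can hide a header from one of them, which is the smuggling gadget.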

(I am a massive, massive SQLite fan, but this is giving me pause about using other software by the same author, at least when networks are involved.)

tptacek ◴[] No.41831450[source]
This would be more persuasive if HTTP servers didn't already widely accept bare 0x0A line termination. What's the first major public website you can find that doesn't?
hifromwork ◴[] No.41832137[source]
As the parent mentioned, it's security-critical that every HTTP parser in the world - including every middleware, proxy, firewall, and WAF - parses headers the same way. If you write an HTTP parser for a server application, it's imperative that you don't introduce random inconsistencies with the standard (I can't believe I have to write this).
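One consistent-by-construction option is to fail closed instead of guessing; a sketch (my illustration, assuming the raw header block is already in hand):

    def split_header_lines(block: bytes) -> list[bytes]:
        # Split strictly on CRLF, then refuse any line that still contains
        # a stray CR or LF, so no downstream parser can see it differently.
        lines = block.split(b"\r\n")
        if any(b"\r" in line or b"\n" in line for line in lines):
            raise ValueError("bare CR or LF in header block; rejecting request")
        return [line for line in lines if line]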

On the other hand, as a client, it's OK to send malformed requests as long as you're prepared for them to fail. But it's a weird flex; legacy protocols have many warts, so why die on this particular hill?

tptacek ◴[] No.41832201{3}[source]
That appears to be an argument in favor of accepting bare 0x0A, since, as a statement of fact, that is the situation on the Internet today.
MobiusHorizons ◴[] No.41833940{4}[source]
If you expect to be behind a reverse proxy that manages internal headers for you (strips them from incoming requests and adds them based on internal criteria), then accepting bare 0x0A newlines could be a security vulnerability: a malicious request could sneak in an internal header that the reverse proxy would not strip.
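A sketch of that attack (hypothetical header name and deliberately simplified parsers, just to show the mismatch):

    request = b"Foo: bar\nX-Internal-Admin: 1\r\nHost: example.com\r\n"

    def proxy_strip_internal(block):
        # The proxy parses per spec (CRLF only), so it sees one "Foo" line
        # here and finds no internal header to strip.
        kept = [line for line in block.split(b"\r\n")
                if line and not line.lower().startswith(b"x-internal-")]
        return b"\r\n".join(kept) + b"\r\n"

    def backend_headers(block):
        # A lenient backend also breaks on bare LF...
        return [line for line in block.splitlines() if line]

    # ...so the smuggled header arrives at the backend as a separate,
    # apparently proxy-vetted header:
    print(backend_headers(proxy_strip_internal(request)))
    # [b'Foo: bar', b'X-Internal-Admin: 1', b'Host: example.com']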
Smar ◴[] No.41842146[source]
Only in the case where the reverse proxy does not handle bare 0x0A newlines?