
349 points dgl | 3 comments
dwheeler ◴[] No.44504152[source]
Ah yes, yet ANOTHER vulnerability caused by Linux and most Unixes allowing control characters in filenames. This ability's primary purpose appears to be to enable attacks and to make it significantly more difficult to write correct code. For example, you're not supposed to exchange filenames a line at a time, since filenames can contain newlines.

See my discussion here: https://dwheeler.com/essays/fixing-unix-linux-filenames.html

One piece of good news: POSIX recently added xargs -0 and find -print0, making it a little easier to portably handle such filenames. Still, it's a pain.
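The newline problem is easy to demonstrate. A minimal sketch (throwaway temp directory; the filename is deliberately malicious):

```shell
# A filename containing a newline breaks line-oriented pipelines.
dir=$(mktemp -d)
cd "$dir"
touch "$(printf 'evil\nname')"

# Line-based: one file is reported as two lines.
find . -type f | wc -l

# NUL-based (POSIX now specifies find -print0 and xargs -0): each name
# is one NUL-terminated record, so counting NUL bytes gives the right
# answer -- one file.
find . -type f -print0 | tr -dc '\0' | wc -c
```

The same NUL-delimited convention carries through `xargs -0`, `sort -z`, and friends, which is why the POSIX additions matter for portable scripts.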

I plan to complete the "safename" Linux kernel module I started years ago. When enabled, it prevents creating filenames in certain cases, such as those containing control characters. It won't prevent all problems, but it's a decent hardening mechanism that prevents problems in many cases.
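The policy such a module would enforce can be sketched in user space (a hypothetical illustration only; the actual module would reject these names in the kernel at creation time):

```shell
# Hypothetical user-space sketch of a "safename" style check: reject
# any filename containing a control character (bytes 1-31 and 127).
is_safe_name() {
  case "$1" in
    *[[:cntrl:]]*) return 1 ;;  # control character present: reject
    *) return 0 ;;              # otherwise: allow
  esac
}

is_safe_name 'report.txt' && echo "ok: report.txt"
is_safe_name "$(printf 'evil\nname')" || echo "rejected: embedded newline"
```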

replies(2): >>44504259 #>>44506119 #
1. Cloudef ◴[] No.44506119[source]
I think a better idea is to make git use user namespaces and sandbox itself to the clone directory, so it literally cannot read or write outside of it. This prevents path traversal attacks and limits the amount of damage an RCE could do. Filenames really aren't the problem.
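This kind of confinement can be sketched with bubblewrap (`bwrap`), which builds on the same user-namespace machinery. Everything below is an illustration of the idea, not something git does today; the repository URL is a placeholder.

```shell
# Sketch: confine a clone to its destination directory with bubblewrap.
# Hypothetical hardening -- git does not sandbox itself like this today.
dest="$PWD/repo"
mkdir -p "$dest"

if command -v bwrap >/dev/null 2>&1; then
  # The root filesystem is mounted read-only; only "$dest" is writable,
  # so a path-traversal bug or a malicious hook cannot touch anything
  # outside the clone directory.
  bwrap \
    --ro-bind /usr /usr \
    --ro-bind /etc /etc \
    --symlink usr/bin /bin \
    --symlink usr/lib /lib \
    --proc /proc --dev /dev \
    --bind "$dest" "$dest" \
    --unshare-all --share-net \
    git clone https://example.com/repo.git "$dest"
else
  echo "bwrap not installed; skipping sandboxed clone"
fi
```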
replies(1): >>44507053 #
2. dbdr ◴[] No.44507053[source]
The idea of Defence in Depth is to handle vulnerabilities at several levels, instead of relying on a single technique that becomes a single point of failure.
replies(1): >>44508234 #
3. Cloudef ◴[] No.44508234[source]
I'm not saying not to do that. But it seems sandboxing should be the first thing to think of, especially in the context of git, which allows you to execute all sorts of custom scripts. Filename sanitization is not that; on the contrary, filename sanitization has been known to cause security vulnerabilities and other annoying issues in the past.