
lsr: ls with io_uring

(rockorager.dev)
335 points by mpweiher | 2 comments
maplant No.44605037
This seems more interesting as a demonstration of the amortized performance increase you'd expect from io_uring, or as a tutorial for using it. I don't understand why I'd switch from something like eza: if I'm listing 10,000 files, the difference is between 40ms and 20ms, and I absolutely would not notice that for a single invocation of the command.
replies(2): >>44606229, >>44606524
rockorager No.44606229
Yeah, I wrote this as a fun little experiment to learn more about io_uring. The practical savings of using this are tiny, maybe 5 seconds over your entire life. That wasn't the point haha
replies(2): >>44606524, >>44606697
JuettnerDistrib No.44606524
I'd be curious to know whether this helps on supercomputers, which are notorious for hanging for a few seconds on an `ls -l`.
replies(1): >>44608279
mrlongroots No.44608279
It could, but it's important to keep in mind that the filesystem architecture there is very different: a parallel filesystem with disaggregated data and metadata servers.

When you run `ls -l` you could be enumerating a directory with one file per rank, or worse, one file per particle. You could try to make the read fast, but it also makes no sense to have that many files in the first place: there are ways to reduce the number of files on disk. Many people are also pushing for distributed object stores instead of parallel filesystems... fun space.