
83 points zardinality | 17 comments
1. jeffbee ◴[] No.42195134[source]
Interesting that it is taken on faith that unix sockets are faster than inet sockets.
replies(5): >>42195458 #>>42195476 #>>42195489 #>>42195960 #>>42196345 #
2. dangoodmanUT ◴[] No.42195458[source]
Are there resources suggesting otherwise?
3. aoeusnth1 ◴[] No.42195476[source]
Tell me more, I know nothing about IPC
4. eqvinox ◴[] No.42195489[source]
That's because it's logical that implementing network-capable segmentation and flow control is more costly than just moving data with internal, native structures. And looking up random benchmarks yields anything from equal performance to 10x faster for Unix domain.
replies(1): >>42196875 #
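The claim is easy to sanity-check rather than take on faith. A minimal throughput micro-benchmark (a sketch, not from the thread; results vary with kernel, buffer sizes, and CPU) that pushes the same byte stream over an AF_UNIX socketpair and over loopback TCP:

```python
import socket
import threading
import time

def bench(make_pair, total=20_000_000, chunk=65536):
    """Push `total` bytes through a connected socket pair; return seconds."""
    a, b = make_pair()

    def drain():
        remaining = total
        while remaining > 0:
            remaining -= len(b.recv(min(chunk, remaining)))

    t = threading.Thread(target=drain)
    t.start()
    buf = b"x" * chunk
    start = time.perf_counter()
    sent = 0
    while sent < total:
        # send() may accept fewer bytes than requested; track actual count
        sent += a.send(buf[: total - sent])
    t.join()
    elapsed = time.perf_counter() - start
    a.close()
    b.close()
    return elapsed

def unix_pair():
    return socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

def tcp_pair():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # ephemeral loopback port
    srv.listen(1)
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(srv.getsockname())
    conn, _ = srv.accept()
    srv.close()
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return cli, conn

if __name__ == "__main__":
    print(f"AF_UNIX      : {bench(unix_pair):.3f}s")
    print(f"loopback TCP : {bench(tcp_pair):.3f}s")
```

On most Linux boxes the AF_UNIX number comes out lower, consistent with the "equal to 10x" spread mentioned above, since the TCP path still pays for segmentation, checksumming, and ACK handling even on loopback.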
5. pjmlp ◴[] No.42195960[source]
As so often in computing, profiling is a foreign word.
6. yetanotherdood ◴[] No.42196345[source]
Unix domain sockets are the standard mechanism for app->sidecar communication at Google (e.g. talking to the TI envelope for logging, etc.)
replies(2): >>42196392 #>>42197783 #
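The app->sidecar pattern itself is simple: the sidecar binds a filesystem path, and the app connects to it like any stream socket. A minimal sketch (the path and one-shot ack protocol are invented for illustration, not Google's actual setup):

```python
import os
import socket
import threading
import time

SOCK_PATH = "/tmp/sidecar_demo.sock"  # hypothetical path

def sidecar_once():
    """Sidecar side: bind a filesystem path, ack one message, exit."""
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)  # remove a stale socket file from a prior run
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK_PATH)
    srv.listen(1)
    conn, _ = srv.accept()
    msg = conn.recv(1024)
    conn.sendall(b"ack:" + msg)
    conn.close()
    srv.close()

def app_send(msg: bytes) -> bytes:
    """App side: connect by path, retrying briefly while the sidecar starts."""
    for _ in range(100):
        cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            cli.connect(SOCK_PATH)
            break
        except (FileNotFoundError, ConnectionRefusedError):
            cli.close()
            time.sleep(0.02)
    cli.sendall(msg)
    reply = cli.recv(1024)
    cli.close()
    return reply
```

Compared with a loopback TCP sidecar, the filesystem path also gives you free access control via directory permissions, and peer credentials via SO_PEERCRED.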
7. jeffbee ◴[] No.42196392[source]
Search around on Google Docs for my 2018 treatise/rant about how the TI Envelope was the least-efficient program anyone had ever deployed at Google.
replies(2): >>42196631 #>>42196835 #
8. eqvinox ◴[] No.42196631{3}[source]
Ok, now it sounds like you're blaming unix sockets for someone's shitty code...

No idea what "TI Envelope" is, and a Google search doesn't come up with usable results (oh the irony...) - if it's a logging/metric thing, those are hard to get to perform well regardless of socket type. We ended up using batching with mmap'd buffers for crash analysis. (I.e. the mmap part only comes in if the process terminates abnormally, so we can recover batched unwritten bits.)

replies(1): >>42196764 #
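The mmap'd-batch pattern described above can be sketched in a few lines (the layout and names here are hypothetical, not eqvinox's actual implementation). The key property: because the mapping is file-backed, the kernel writes dirty pages back even when the process dies abnormally, so records that were batched but never flushed can be recovered on the next run.

```python
import mmap
import os
import struct

BUF_SIZE = 4096
USED = struct.Struct("<I")  # hypothetical layout: bytes-used counter at offset 0

def open_crash_buffer(path):
    """Map a file-backed buffer for batching records."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, BUF_SIZE)  # new files start zeroed, so USED reads as 0
    buf = mmap.mmap(fd, BUF_SIZE)
    os.close(fd)  # the mapping keeps the file alive
    return buf

def append(buf, record: bytes):
    """Batch a record in the mapping; no syscall, no flush."""
    used = USED.unpack_from(buf, 0)[0]
    assert USED.size + used + len(record) <= BUF_SIZE  # overflow not handled here
    buf[USED.size + used : USED.size + used + len(record)] = record
    USED.pack_into(buf, 0, used + len(record))

def recover(path):
    """On the next run, read back whatever the dead process had batched."""
    buf = open_crash_buffer(path)
    used = USED.unpack_from(buf, 0)[0]
    data = bytes(buf[USED.size : USED.size + used])
    buf.close()
    return data
```

A real implementation would also need a record framing that tolerates a crash mid-append (e.g. updating the counter only after the record bytes are in place, as above).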
9. jeffbee ◴[] No.42196764{4}[source]
> Ok, now it sounds like you're blaming unix sockets for someone's shitty code...

No, I am just saying that the unix socket is not Brawndo (or maybe it is?), it does not necessarily have what IPCs crave. Sprinkling it into your architecture may or may not be relevant to the efficiency and performance of the result.

replies(1): >>42196884 #
10. yetanotherdood ◴[] No.42196835{3}[source]
I'm a xoogler so I don't have access. Do you have a TL;DR that you can share here (for non-Googlers)?
11. bluGill ◴[] No.42196875[source]
It wouldn't surprise me if inet sockets were more optimized, though, and so unix sockets ended up slower anyway simply because nobody has bothered to make them good (which is probably why some of your benchmarks show equal performance). Benchmarks are important.
replies(2): >>42196928 #>>42198265 #
12. eqvinox ◴[] No.42196884{5}[source]
Sorry, what's brawndo? (Searching only gives me movie results?)

We started out discussing AF_UNIX vs. AF_INET6. If you can conceptually use something faster than sockets that's great, but if you're down to a socket, unix domain will generally beat inet domain...

replies(2): >>42197131 #>>42198275 #
13. eqvinox ◴[] No.42196928{3}[source]
I agree, but practically speaking they're used en masse all across the field and people did bother to make them good [enough]. I suspect the benchmarks where they come up equal are cases where things are limited by other factors (e.g. syscall overhead), though I don't want to make unfounded accusations :)
14. exe34 ◴[] No.42197131{6}[source]
it's what plants crave! it's got electrolytes.
15. ithkuil ◴[] No.42197783[source]
Servo's ipc-channel doesn't use Unix domain sockets to move data. It uses them to share a memfd file descriptor, effectively creating a memory buffer shared between the two processes.
16. sgtnoodle ◴[] No.42198265{3}[source]
I've spent several years optimizing a specialized IPC mechanism for a work project. I've spent time reviewing the Linux kernel's unix socket source code to understand obscure edge cases. There isn't really much to optimize - it's just copying bytes between buffers. Most of the complexity of the code has to do with permissions and implementing the ability to send file descriptors. All my benchmarks have unambiguously shown unix sockets to be more performant than loopback TCP for my particular use case.
17. sgtnoodle ◴[] No.42198275{6}[source]
You can do some pretty crazy stuff with pipes, if you want to do better than unix sockets.