Most active commenters
  • eqvinox(3)

65 points anon6362 | 28 comments
1. alexdns ◴[] No.45074520[source]
It was considered innovative when it was first shared here eight years ago.
replies(1): >>45074700 #
2. nurumaik ◴[] No.45074700[source]
Has anything more innovative happened since (honestly curious)?
replies(4): >>45075146 #>>45075479 #>>45075495 #>>45077234 #
3. js4ever ◴[] No.45075146{3}[source]
I don't think so, but my guess is raw performance rarely matters in the real world.

I once explored this, hitting around 125K RPS per core on Node.js. Then I realized it was pointless: the moment you add any real work (database calls, file I/O, etc.), throughput drops below 10K RPS.

replies(3): >>45075358 #>>45075454 #>>45075994 #
4. antoinealb ◴[] No.45075358{4}[source]
The goal of this kind of system is not to replace the application server. It is intended to work on the data plane, where you do simple operations but do them many times per second. Think load balancers, cache servers, routers, security appliances, etc. In this space Kernel Bypass is still very much the norm if you want to get an efficient system.
replies(2): >>45075829 #>>45076472 #
5. jandrewrogers ◴[] No.45075454{4}[source]
Storage and databases don't have to be that slow; that's just architecture. I have database servers doing 10M RPS each, which absolutely will stress the network.

We just do the networking bits a bit differently now. DPDK was a product of its time.

6. klaussilveira ◴[] No.45075479{3}[source]
https://asynciobench.github.io/
replies(1): >>45077880 #
7. ozgrakkurt ◴[] No.45075495{3}[source]
You can apparently do 100 Gbit/s on a single thread over Ethernet with io_uring.
replies(2): >>45075999 #>>45076203 #
8. eqvinox ◴[] No.45075829{5}[source]
> In this space Kernel Bypass is still very much the norm if you want to get an efficient system.

Unless you can get an ASIC to do it; then the ASIC is massively preferable, as the power savings alone generally¹ end the discussion. (= removes most routers from the list, plus some security appliances and load balancers.)

¹ exceptions confirm the rule, i.e. small/boutique setups

replies(1): >>45077150 #
9. rivetfasten ◴[] No.45075994{4}[source]
It's always a matter of chasing the bottleneck. It's fair to say that network isn't the bottleneck for most applications. Heuristically, if you're willing to take on the performance impacts of a GC'd language you're probably already not the target audience.

Zero copy is the important part for applications that need to saturate the NIC. For example Netflix integrated encryption into the FreeBSD kernel so they could use sendfile for zero-copy transfers from SSD (in the case of very popular titles) to a TLS stream. Otherwise they would have had two extra copies of every block of video just to encrypt it.
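
That sendfile() zero-copy path is easy to see in miniature. A hedged sketch in C: a temp file pushed into a Unix socket pair with one sendfile(2) call, so the payload never transits a user-space buffer. This is just the syscall shape, not Netflix's kTLS setup:

```c
#include <stdio.h>
#include <string.h>
#include <sys/sendfile.h>
#include <sys/socket.h>
#include <unistd.h>

/* Push a small file into a socket with sendfile(2); the kernel moves the
 * data directly, with no intermediate user-space buffer. Returns 1 if the
 * peer end of the socket pair received the exact file contents. */
static int demo_sendfile(void) {
    const char msg[] = "zero-copy payload";
    FILE *f = tmpfile();
    fwrite(msg, 1, sizeof msg, f);
    fflush(f);

    int sv[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

    off_t off = 0;                      /* file offset, advanced by the kernel */
    ssize_t n = sendfile(sv[0], fileno(f), &off, sizeof msg);

    char buf[64] = {0};
    ssize_t r = read(sv[1], buf, sizeof buf);
    fclose(f); close(sv[0]); close(sv[1]);
    return n == (ssize_t)sizeof msg && r == n && memcmp(buf, msg, (size_t)n) == 0;
}
```

With kTLS the same call encrypts on the way out; without it, you get the two extra copies per block mentioned above.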

Note however that their actual streaming stack is very different from the application stack. The constraint isn't strictly technical: ISP colocation space is expensive, so they need to have the most juiced machines they can possibly fit in the rack to control costs.

There's an obvious appeal to accomplishing zero-copy by pushing network functionality into user space instead of application functionality into kernel space, so the DPDK evolution is natural.

replies(1): >>45077821 #
10. touisteur ◴[] No.45075999{4}[source]
Recently did 400Gb/s on a single core with 4x100Gb NICs (or just one 400G NIC) using DPDK. Mind you, it's with jumbo frames and constant packet size, for hundreds of mostly synchronized streams... You won't process each packet individually; you mostly put them in queues for later batch processing by other cores. Amazing for data-acquisition applications using UDP streams.

I keep watching and trying io_uring and still can't make it work as fast, with code as simple, as consistently for those use cases. AF_XDP gets me partly there, but then you're writing eBPF... might as well go full DPDK.

Maybe it's a skill issue on my part, though. Or just a well-fitting niche.

replies(2): >>45076075 #>>45077079 #
11. lossolo ◴[] No.45076075{5}[source]
Any numbers for io_uring with 4x100gb nics in your tests?
12. ramesh31 ◴[] No.45076175[source]
Thanks for the F-Stack!
13. Kamillaova ◴[] No.45076203{4}[source]
Of course, but when working directly with the NIC, such speeds can be achieved with smaller packets, getting even closer to line rate.
14. baruch ◴[] No.45076472{5}[source]
We do storage systems and use DPDK in the application; when the network IS the bottleneck, it is worth it. Saturating two or three 400 Gbps NICs is possible with DPDK and the right architecture, which makes the network the bottleneck.
15. ozgrakkurt ◴[] No.45077079{5}[source]
Sounds super cool, but from what I've read so far DPDK sounds like it won't be worth the difficulty.

I also want to get into socket I/O using io_uring in Zig. I'll try to apply everything I found in the liburing wiki [0] and see how much I can get (the max hardware I have is 10 Gbit/s).

Seems like there is:
- multi-shot requests
- register_napi on the uring instance
- zero-copy receive/send (probably won't be able to get into it)

Did you already try these or are there other configurations I can add to improve it?

[0]: https://github.com/axboe/liburing/wiki/io_uring-and-networki...
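
For what it's worth, the multishot-receive setup from that wiki looks roughly like this in C with liburing. This is a sketch, assuming liburing ≥ 2.2, a buffer ring already registered as group 0, and an existing `sockfd`; error handling is omitted and it won't run without a live socket:

```c
struct io_uring ring;
io_uring_queue_init(256, &ring, 0);

/* One SQE arms a receive that keeps producing completions until it errors
 * or is cancelled; the kernel picks buffers from a registered buffer group. */
struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
io_uring_prep_recv_multishot(sqe, sockfd, NULL, 0, 0);
sqe->flags |= IOSQE_BUFFER_SELECT;
sqe->buf_group = 0;              /* group set up via io_uring_setup_buf_ring() */
io_uring_submit(&ring);

struct io_uring_cqe *cqe;
while (io_uring_wait_cqe(&ring, &cqe) == 0) {
    /* cqe->res: bytes received; buffer id: cqe->flags >> IORING_CQE_BUFFER_SHIFT */
    io_uring_cqe_seen(&ring, cqe);
}
```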

replies(2): >>45077749 #>>45078109 #
16. gonzopancho ◴[] No.45077144[source]
When originally published, they wouldn't even ack Patrick Kelsey, the author of libuinet, or that they had forked libuinet.

Now they say this: “Thanks to libplebnet and libuinet this work became a lot easier.”

F-Stack is literally a fork of libuinet, using DPDK instead of netmap.

The net-net is that Kelsey took his work private and Tencent isn't advancing the work.

Back in the day I was sponsoring work on libuinet in order to move enough of the kernel needed for a security appliance to libuinet to underpin a performance improvement for pfsense.

Then Tencent did what they did, Patrick reacted as he did and that was over.

We pivoted to VPP. But back in 2016 it also needed a lot of work.

17. gonzopancho ◴[] No.45077150{6}[source]
ASICs require years to develop and aren’t flexible once deployed
replies(2): >>45077634 #>>45078128 #
18. yxhuvud ◴[] No.45077234{3}[source]
Well, io_uring came along and removed a lot of the incentive.
19. Eduard ◴[] No.45077595[source]
Hacker News is the kind of place where you can have _this_ submission (PRC-sponsored Tencent-owned network devkit) on the front page next to a submission about how PRC-sponsored cybercrime group Salt Typhoon pwned 'nearly every American': https://news.ycombinator.com/item?id=45074157
replies(1): >>45078275 #
20. nsteel ◴[] No.45077634{7}[source]
Even the ones supporting things like P4?
21. lossolo ◴[] No.45077749{6}[source]
You don't even need io_uring for 10 Gbit/s; epoll will do that easily unless you have a very niche workload.
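
For scale, the epoll pattern in question is a handful of lines. A minimal, self-contained sketch (one pipe standing in for a socket):

```c
#include <sys/epoll.h>
#include <unistd.h>

/* Watch one fd with epoll: poll once before data exists (expect 0 events),
 * write a byte, poll again (expect 1 event). Returns 1 when both hold. */
static int demo_epoll(void) {
    int p[2];
    if (pipe(p) != 0) return -1;

    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = p[0] };
    epoll_ctl(ep, EPOLL_CTL_ADD, p[0], &ev);

    struct epoll_event out;
    int quiet = epoll_wait(ep, &out, 1, 0);    /* nothing readable yet */

    write(p[1], "x", 1);
    int ready = epoll_wait(ep, &out, 1, 100);  /* read end now ready */

    close(p[0]); close(p[1]); close(ep);
    return quiet == 0 && ready == 1 && out.data.fd == p[0];
}
```

At 10 Gbit/s the per-event work is the same; you just have more fds and bigger event batches.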
replies(1): >>45078699 #
22. pclmulqdq ◴[] No.45077821{5}[source]
TCP is generally zero-copy now. Zero-copy with io_uring is also possible.

AF_XDP is also another way to do high-performance networking in the kernel, and it's not bad.

DPDK still has a ~30% advantage over an optimized kernel-space application, but it comes with a huge maintenance burden. A lot of people reach for it, though, without optimizing kernel interfaces first.
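
The TCP zero-copy referred to here is presumably Linux's MSG_ZEROCOPY interface (kernel ≥ 4.14). A hedged loopback sketch; it falls back to a plain send() where the flag isn't supported, and omits the completion notifications that normally arrive on the socket's error queue:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef SO_ZEROCOPY
#define SO_ZEROCOPY 60               /* missing from older libc headers */
#endif
#ifndef MSG_ZEROCOPY
#define MSG_ZEROCOPY 0x4000000
#endif

/* Send one buffer over loopback TCP with MSG_ZEROCOPY when the kernel
 * supports it; returns the byte count the peer actually received. */
static ssize_t demo_zerocopy(void) {
    const char msg[] = "pinned, not copied";   /* 19 bytes with the NUL */
    int ls = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a = { .sin_family = AF_INET,
                             .sin_addr.s_addr = htonl(INADDR_LOOPBACK) };
    socklen_t alen = sizeof a;
    bind(ls, (struct sockaddr *)&a, sizeof a);
    getsockname(ls, (struct sockaddr *)&a, &alen); /* learn the ephemeral port */
    listen(ls, 1);

    int cs = socket(AF_INET, SOCK_STREAM, 0);
    connect(cs, (struct sockaddr *)&a, sizeof a);  /* loopback: completes at once */
    int ps = accept(ls, NULL, NULL);

    int one = 1;
    setsockopt(cs, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof one);
    ssize_t n = send(cs, msg, sizeof msg, MSG_ZEROCOPY);
    if (n < 0)                                     /* flag unsupported: plain copy */
        n = send(cs, msg, sizeof msg, 0);

    char buf[64];
    ssize_t r = read(ps, buf, sizeof buf);
    close(cs); close(ps); close(ls);
    return r;
}
```

The zero-copy win only materializes for large sends over real NICs; on loopback and for tiny buffers the kernel may copy anyway.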

23. pclmulqdq ◴[] No.45077880{4}[source]
Sorry, but when one thing in your benchmark has 50x the performance of the baseline, you probably have a bad baseline. If you used AI to write these, it probably did not use io_uring or aio correctly, or you have some sort of system misconfiguration. You may have also failed to bypass the filesystem with those methods, which would explain a lot of the discrepancy.
24. touisteur ◴[] No.45078109{6}[source]
I... kind of agree about the difficulty, but I don't get it: DPDK is, at its core, really not a complex API! Allocate a pool of buffers, and in an infinite loop, ask your NIC to fill those buffers. There. After that, yes, you have to decap every packet: Ethernet, then IP (don't forget reassembly), then whatever you have on top (UDP is absolutely no effort, TCP... not so much). It's wholly manageable for anyone who knows a bit of light C++ (more C-like) and the lower layers, and who can parse the sometimes very dry and cryptic docs for all the utility functions. Interaction with the actual consumer of the data can be done with DPDK-provided primitives or simple shared memory... it's really not hard for a mid-level systems programmer. But I still find myself unable to hire people who can work at that level of the stack, which is a bit baffling. I can't see how they'd do better with io_uring or AF_XDP and all their inherent complexity. Anything harder than a socket and epoll and you're a wizard now...

One other big plus of DPDK for me is the low-level access to hardware offload: GPUDirect (when you can get it to work), StorageDirect, or most of the available DMA engines in some (not so) high-end hardware. The flow API on Mellanox hardware is the basis of many of my multi-accelerator applications (I wish they supported P4 for packet formats instead, or just open-sourced whatever low-level ISA the controller runs, but I don't buy enough gear to have a voice). Perusing the DPDK documentation can give ideas.

So, yes: very low-level, with some batteries included. Good and stable for niche uses. But a far smaller hiring pool (is the io_uring-100Gb pool bigger? I don't know).
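
The "pool of buffers plus infinite loop" core described above is small enough to show. A non-compilable sketch of that shape in C (port_id, queue_id, and work_ring are illustrative; EAL init, device configuration, and mbuf freeing are omitted):

```c
/* After rte_eal_init() and rte_eth_dev_configure()/rte_eth_rx_queue_setup(): */
struct rte_mempool *pool = rte_pktmbuf_pool_create(
    "rx_pool", 8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());

struct rte_mbuf *bufs[32];
for (;;) {
    /* Ask the NIC to fill our buffers: up to 32 packets per poll, no syscall. */
    uint16_t n = rte_eth_rx_burst(port_id, queue_id, bufs, 32);
    for (uint16_t i = 0; i < n; i++)
        /* decap here (Ethernet, IP, UDP), or hand off for batch processing */
        rte_ring_enqueue(work_ring, bufs[i]);
}
```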

25. eqvinox ◴[] No.45078128{7}[source]
You don't develop an ASIC to run a router with, you buy one off the shelf. And the function of a router doesn't exactly change day by day (or even year by year).
26. eqvinox ◴[] No.45078275[source]
People in the PRC develop interesting things. People in the PRC hack their way around the planet.

People in the USA develop interesting things. People in the USA hack their way around the planet.

The Russians seem to be doing mostly the hacking part.

The Europeans run around like headless chickens.

You can probably guess I'm European.

27. immibis ◴[] No.45078699{7}[source]
For UDP Pixelflut, I was able to send 8Gbps on a 10Gbps link with a single thread running a tight loop doing byte shuffling and then sendmmsg. I didn't bother to multithread it because that's a convenient amount of headroom left over for actual communications.
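
The sendmmsg(2) batching is the load-bearing part of that loop: one syscall submits a whole array of datagrams. A minimal loopback sketch in C (8 tiny packets instead of Pixelflut-sized ones):

```c
#define _GNU_SOURCE                  /* sendmmsg() is Linux-specific */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define BATCH 8

/* Send BATCH small UDP datagrams with a single sendmmsg() call and count
 * how many arrive on a loopback receiver socket. */
static int demo_sendmmsg(void) {
    int rx = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in a = { .sin_family = AF_INET,
                             .sin_addr.s_addr = htonl(INADDR_LOOPBACK) };
    socklen_t alen = sizeof a;
    bind(rx, (struct sockaddr *)&a, sizeof a);
    getsockname(rx, (struct sockaddr *)&a, &alen);  /* learn the ephemeral port */

    int tx = socket(AF_INET, SOCK_DGRAM, 0);
    char payload[BATCH][16];
    struct iovec iov[BATCH];
    struct mmsghdr msgs[BATCH];
    memset(payload, 0, sizeof payload);
    memset(msgs, 0, sizeof msgs);
    for (int i = 0; i < BATCH; i++) {
        payload[i][0] = (char)('a' + i);
        iov[i] = (struct iovec){ payload[i], sizeof payload[i] };
        msgs[i].msg_hdr.msg_iov = &iov[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
        msgs[i].msg_hdr.msg_name = &a;
        msgs[i].msg_hdr.msg_namelen = sizeof a;
    }
    int sent = sendmmsg(tx, msgs, BATCH, 0);        /* one syscall, BATCH packets */

    int got = 0;
    char buf[16];
    while (got < sent && recv(rx, buf, sizeof buf, MSG_DONTWAIT) > 0)
        got++;
    close(tx); close(rx);
    return got;
}
```

At 8 Gbps the structure is the same; the batches are just bigger and the payloads pre-built, so the syscall cost is amortized across the whole array.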