
188 points by ilove_banh_mi | 2 comments
UltraSane No.42170007
I wonder why Fibre Channel isn't used as a replacement for TCP in the datacenter. It is a very robust L3 protocol, designed to connect block storage devices to servers while making the OS think they are directly attached. OSs do NOT tolerate dropped data when reading and writing to block devices, so Fibre Channel has an extremely robust credit-based flow control scheme (buffer-to-buffer credits, in the same family as a token bucket): receivers control how much data senders are allowed to send, which prevents congestion outright. I have worked with a lot of VMware clusters that use FC to connect servers to storage arrays and it has ALWAYS worked perfectly.
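The credit mechanism is simple enough to sketch. Below is a toy Python model of receiver-driven, credit-based flow control in the spirit of FC's buffer-to-buffer credits (class names and sizes are invented for illustration; this is not the FC-2 wire protocol): the receiver grants one credit per free buffer, and the sender blocks until it holds a credit, so frames are never dropped.

    import queue
    import threading

    class Receiver:
        def __init__(self, buffer_slots: int):
            # One credit per free receive buffer; all credits start available.
            self.credits = threading.Semaphore(buffer_slots)
            self.frames = queue.Queue(maxsize=buffer_slots)

        def deliver(self, frame: bytes):
            self.frames.put(frame)

        def drain_one(self) -> bytes:
            frame = self.frames.get()
            # ... process the frame ...
            self.credits.release()  # freed buffer: credit flows back to sender
            return frame

    class Sender:
        def __init__(self, rx: Receiver):
            self.rx = rx

        def send(self, frame: bytes):
            # Block until the receiver has a free buffer. The sender can
            # never overrun the receiver, so no frame is ever dropped.
            self.rx.credits.acquire()
            self.rx.deliver(frame)

    rx = Receiver(buffer_slots=8)
    tx = Sender(rx)
    threading.Thread(target=lambda: [rx.drain_one() for _ in range(100)],
                     daemon=True).start()
    for i in range(100):
        tx.send(f"frame {i}".encode())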
wejick No.42170698
I'm imagining shared memory mounted as block storage, then doing the RPC through that block. Some synchronization and polling/notification work would need to be done.
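A toy sketch of that idea with Python's multiprocessing.shared_memory, using a polled one-byte state flag for synchronization (the layout, names, and single-slot design are invented for illustration; a real design would want ring buffers, memory barriers, and a doorbell/notification mechanism instead of busy-polling):

    import struct
    import time
    from multiprocessing import Process
    from multiprocessing.shared_memory import SharedMemory

    # Block layout: [state:1][len:4][payload:N]
    # state: 0 = empty, 1 = request posted, 2 = reply posted
    BLOCK = "rpc-block"

    def server():
        shm = SharedMemory(name=BLOCK)
        try:
            while shm.buf[0] != 1:   # poll for a request
                time.sleep(0)
            (n,) = struct.unpack_from("<I", shm.buf, 1)
            req = bytes(shm.buf[5:5 + n])
            resp = req.upper()       # the "remote procedure"
            struct.pack_into("<I", shm.buf, 1, len(resp))
            shm.buf[5:5 + len(resp)] = resp
            shm.buf[0] = 2           # publish the reply last
        finally:
            shm.close()

    def call(payload: bytes) -> bytes:
        shm = SharedMemory(name=BLOCK)
        try:
            struct.pack_into("<I", shm.buf, 1, len(payload))
            shm.buf[5:5 + len(payload)] = payload
            shm.buf[0] = 1           # publish the request last
            while shm.buf[0] != 2:   # poll for the reply
                time.sleep(0)
            (n,) = struct.unpack_from("<I", shm.buf, 1)
            return bytes(shm.buf[5:5 + n])
        finally:
            shm.close()

    if __name__ == "__main__":
        shm = SharedMemory(name=BLOCK, create=True, size=4096)
        try:
            p = Process(target=server)
            p.start()
            print(call(b"hello over shared memory"))
            p.join()
        finally:
            shm.close()
            shm.unlink()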
1. fmajid No.42171658
That's essentially what RDMA is, except it is usually run over InfiniBand, and hyperscalers, wary of Nvidia's control over the technology, are looking for cheaper Ethernet-based alternatives.

https://blogs.nvidia.com/blog/what-is-rdma/

https://dl.acm.org/doi/abs/10.1145/3651890.3672233

2. hylaride No.42172621
If it's a secure internal network, RDMA is probably what you want if you need low-latency data transfer. You can do some very performance-oriented things with it, and it works over Ethernet or InfiniBand (the quality of the switching gear and network cards matters, though).

Back in ~2012 I was setting up a high-frequency network for a forex company, and at the time we deployed Mellanox. They had some (for the time) bleeding-edge networking drivers that significantly reduced the overhead of writing to TCP/IP sockets, particularly zero-copy: TL;DR, data didn't get shifted around in memory as much and was written to the Ethernet card's buffers almost straight away. It made a huge difference.
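Mellanox's proprietary driver path isn't reproduced here, but the basic zero-copy idea can be sketched with the standard library alone: on Linux, socket.sendfile() hands the transfer to the kernel via sendfile(2), so the payload never makes a round trip through a userspace buffer. Host, port, and file name below are placeholders:

    import socket

    def send_copying(sock: socket.socket, path: str, chunk: int = 64 * 1024):
        # Classic path: each chunk is read into userspace, then copied
        # back into kernel socket buffers by sendall().
        with open(path, "rb") as f:
            while data := f.read(chunk):
                sock.sendall(data)

    def send_zero_copy(sock: socket.socket, path: str):
        # Zero-copy path: data moves from the page cache to the socket
        # inside the kernel, skipping this process's memory entirely.
        with open(path, "rb") as f:
            sock.sendfile(f)

    if __name__ == "__main__":
        with socket.create_connection(("127.0.0.1", 9000)) as sock:
            send_zero_copy(sock, "payload.bin")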

I eventually left the firm, and my successors tried to replace it with Cisco gear and Intel NICs; the performance plummeted. That made me laugh, as I had received so much grief for pushing the Mellanox kit (to be fair, they were a scrappy, unheard-of Israeli company at the time).