But it is often used for block storage in datacenters. Using it for anything else is going to be hard, as it is incompatible with TCP.
The problem with not using TCP is the same thing Homa will face: everything already speaks TCP, nearly all potential hires know TCP, and most problems you will hit with TCP have already been solved by smart engineers. Hardware is also easily available. Once you drop all those advantages, either your scale or your gains need to be massive to make the investment worth it, which is why TCP replacements are so rare outside of FAANG.
TCP (in practice) runs on top of (mostly) routed IP networks and network architectures, e.g. a spine/leaf network with BGP. Fibre Channel, as I understand it, is mostly used in more or less point-to-point connections? I do see some mention of "Switched Fabric", but is that very common?
I guess I am old. Every time I see new tech that wants to be hyped, completely throws out everything that is widely supported and working for 80-90% of use cases, is not battle tested, and may be conceptually complex, I will simply pass.
Anything on top of Ethernet, and we no longer know where this host is located (because of software-defined networking). It could be a server in the next rack, something in the cloud, or a third-party service.
And that's a feature, not a bug: because everything speaks TCP, we can arbitrarily cut and slice the network just by changing packet forwarding rules. We can partition the network however we want.
We could have a single global IP space shared by cloud, DC, and campus networks, or we could have a Christmas tree of NATs.
As soon as you introduce something other than TCP into the mix, you will have gateways: chokepoints where traffic has to be translated TCP<->Homa. I don't want to be the person troubleshooting a problem at the intersection of TCP and Homa.
In my opinion, the lowest-level Ethernet should try its best to mirror the actual physical signal flow. Anything on top becomes a software-defined network.
Drop the "pretending to be block devices" part and you basically end up with InfiniBand. It works well, if you ignore the small problem of "you need to either reimplement all your software to work with RDMA based IPC, or reimplement Ethernet on top of InfiniBand to remain compatible and throw away most advantages of InfiniBand again".
Similar things are happening with InfiniBand: it has become far too expensive, and Ethernet/RoCE is making inroads in low- to medium-end installations. Availability is also an issue; Nvidia is the only InfiniBand vendor left.
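To make the "reimplement all your software" point concrete, here is a minimal sketch of what RDMA code has to do before a single byte moves: enumerate devices, allocate a protection domain, and explicitly register memory through libibverbs. Error handling is trimmed and the buffer size is arbitrary; none of this maps onto plain read()/write() on a socket.

```c
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }

    /* Open the first device and allocate a protection domain. */
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Every buffer must be explicitly registered (pinned) before the NIC
       may touch it -- something plain sockets code never had to do. */
    size_t len = 1 << 20;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);
    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n", len, mr->lkey, mr->rkey);

    /* A real program would go on to create completion queues and queue
       pairs, exchange keys out of band, and post work requests. */
    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}
```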
https://en.wikipedia.org/wiki/Unix_domain_socket#:~:text=The....
Puma creates a `UNIXServer`, which is a Ruby stdlib class, using the defaults; that class extends `UNIXSocket`, which also uses the defaults:
https://github.com/puma/puma/blob/fba741b91780224a1db1c45664...
Those defaults create a socket of type `SOCK_STREAM`, which is a TCP socket:
> SOCK_STREAM will create a stream socket. A stream socket provides a reliable, bidirectional, and connection-oriented communication channel between two processes. Data are carried using the Transmission Control Protocol (TCP).
https://github.com/ruby/ruby/blob/5124f9ac7513eb590c37717337...
You still have the TCP overhead when using a local Unix socket with Puma, but you do not have any network overhead.
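For context, a rough C sketch of what `UNIXServer.new` boils down to at the syscall level: a stream socket in the AF_UNIX family, bound to a filesystem path and put into listening mode. The socket path here is just illustrative.

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void) {
    /* Roughly what Ruby's UNIXServer.new("/tmp/puma.sock") does:
       a stream socket in the AF_UNIX family, protocol left at 0. */
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/puma.sock", sizeof(addr.sun_path) - 1);
    unlink(addr.sun_path);              /* clear any stale socket file */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }
    if (listen(fd, 128) < 0) { perror("listen"); return 1; }

    printf("listening on %s\n", addr.sun_path);
    close(fd);
    return 0;
}
```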
Back in ~2012 I was setting up a high-frequency network for a forex company. At the time we deployed Mellanox, and they had some (for the time) very bleeding-edge networking drivers that significantly reduced the overhead of writing to TCP/IP sockets, particularly zero-copy, which (TL;DR) meant data didn't get shifted around in memory as much and was written to the Ethernet card's buffers almost straight away. It made a huge difference.
I eventually left the firm, and my successors tried to replace it with Cisco gear and Intel NICs; the performance plummeted. That made me laugh, as I had received so much grief for pushing for the Mellanox kit (to be fair, they were a scrappy, unheard-of Israeli company at the time).
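The Mellanox driver stack in question was proprietary, but stock Linux has since grown a similar zero-copy facility, MSG_ZEROCOPY (kernel 4.14+). A minimal sketch, assuming `fd` is an already-connected TCP socket; draining the completion notifications from the error queue is left out.

```c
#include <sys/socket.h>
#include <sys/types.h>

#ifndef SO_ZEROCOPY
#define SO_ZEROCOPY 60          /* Linux value; absent from older headers */
#endif
#ifndef MSG_ZEROCOPY
#define MSG_ZEROCOPY 0x4000000  /* Linux value; absent from older headers */
#endif

/* Queue a large buffer for transmission without copying it into kernel
   socket buffers: the pages are pinned and handed to the NIC directly.
   The kernel reports when it is done with the pages via the socket's
   error queue (MSG_ERRQUEUE), which a real caller must drain. */
static ssize_t send_zerocopy(int fd, const void *buf, size_t len) {
    int one = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)) < 0)
        return -1;  /* kernel too old, or socket type unsupported */
    return send(fd, buf, len, MSG_ZEROCOPY);
}
```

Note that zero-copy only pays off for large sends, since pinning pages has its own cost.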
FC seems to work nicely in a single-vendor stack, or at least among specific sets of big-name vendors. that's OK for the "enterprise" market, where prices are expected to be high, and where some integrator is getting a handsome profit for making sure the vendors match.
besides consumer, the original non-enterprise market was HPC, where we want no vendor lock-in. hyperscale is really just HPC-for-VM-hosting - more or less culturally compatible.
besides these vendor/price/interop reasons, FC has never done a good job of keeping up. 100/200/400/800 Gb is getting to be common, and is FC there?
resolving congestion is not unique to FC. even IB has more, and probably better, solutions, but these days datacenter ethernet is pretty powerful.
Granted, I'm also trying to find a switch that supports RoCE and RDMA. It's not easy to find a high-bandwidth switch that supports this stuff without breaking the bank.
Ping it and you can at least deduce where it's not.
UDS using SOCK_STREAM does not do that; i.e., it is not using IPPROTO_TCP.
> SOCK_STREAM will create a stream socket. A stream socket provides a reliable, bidirectional, and connection-oriented communication channel between two processes. Data are carried using the Transmission Control Protocol (TCP).
> SOCK_DGRAM will create a datagram socket. A Datagram socket does not guarantee reliability and is connectionless. As a result, the transmission is faster. Data are carried using the User Datagram Protocol (UDP).
> SOCK_RAW will create an Internet Protocol (IP) datagram socket. A Raw socket skips the TCP/UDP transport layer and sends the packets directly to the network layer.
I don't claim to be an expert; I just have a certain confidence that I'm able to comprehend the words I read. It seems you can have three types of sockets: raw, UDP, or TCP.
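One way to settle this empirically on Linux is to ask the kernel which protocol a socket actually carries, via the SO_PROTOCOL socket option. A small sketch comparing AF_UNIX and AF_INET stream sockets (error handling omitted):

```c
#include <netinet/in.h>   /* IPPROTO_TCP */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

static void show_proto(const char *label, int domain) {
    int fd = socket(domain, SOCK_STREAM, 0);
    int proto = -1;
    socklen_t len = sizeof(proto);
    getsockopt(fd, SOL_SOCKET, SO_PROTOCOL, &proto, &len);
    printf("%-22s protocol %d (IPPROTO_TCP is %d)\n", label, proto, IPPROTO_TCP);
    close(fd);
}

int main(void) {
    show_proto("AF_UNIX SOCK_STREAM:", AF_UNIX);  /* Unix-domain stream socket */
    show_proto("AF_INET SOCK_STREAM:", AF_INET);  /* plain TCP socket */
    return 0;
}
```

On a typical Linux box this prints protocol 0 for the Unix-domain socket and 6 (IPPROTO_TCP) for the AF_INET one.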
I’d love to just play with QUIC a bit because it’s pretty neat, but I always get distracted by this problem and end up reading the RFCs, which so far I haven’t had the patience to get through.
https://fibrechannel.org/preview-the-new-fibre-channel-speed...
FC was one of the first non-mainframe-specific storage area network fabrics. One of the key differences between FC and Ethernet is the collision detection/avoidance and guaranteed throughput. All that extra coordination takes effort on big switches, so they cost more to develop/produce.
You could shovel data through it at "line speed" and know that it'll get there, or you'll get an error at the fabric level. Ethernet happily takes packets, and if it can't deliver them, it'll just drop them. (Well, kinda; not all Ethernet does that, because Ethernet is the English language of layer 2 protocols.)