
76 points efecan0 | 8 comments

Hi HN,

I’m a recent CS graduate. During the past few months I wrote BinaryRPC, an open-source RPC framework in modern C++20 focused on low-latency, binary WebSocket messaging.

Why I built it:

* I wanted first-class session support, pluggable QoS levels, and a simple middleware chain (global, specific, multi-handler) without extra JSON/XML parsing.
* An easy developer experience.

A quick feature list:

* Binary WebSocket frames – minimal overhead
* Built-in session layer (login / reconnect / heartbeat)
* QoS1 / QoS2 with automatic ACK & retry
* Plugin system – rooms, msgpack, etc. can be added in one line
* Thread-safe core: RAII + folly

Still early (solo project), so any feedback on design, concurrency model or missing must-have features would help a lot.

Thanks for reading!

also see "Chat Server in 5 Minutes with BinaryRPC": https://medium.com/@efecanerdem0907/building-a-chat-server-i...

jayd16 No.44543338
My immediate reaction: why a WebSocket-based design over TCP (?) rather than gRPC with HTTP/3, UDP, multiplexing, and so on?
replies(6): >>44543363 #>>44543401 #>>44543447 #>>44543548 #>>44544437 #>>44546559 #
inetknght No.44543401
I'm not the author but off the top of my head:

- gRPC is not a library I would trust with safety or privacy. It's used a lot but isn't a great product. I have personally found several fuckups in gRPC and protobuf code resulting in application crashes or risks of remote code execution. Their release tagging is dogshit, their implementation makes the standard library and Boost look easy to read, and neither project takes the SDLC seriously: last time I checked there were no sanitizer builds, no fuzzing regime, and no static analysis running against new commits.

- HTTP/3 over UDP sends performance into the crater, generally requiring _every_ packet to reach the CPU in userspace instead of being handled in the kernel or even offloaded directly to the network interface hardware

- multiplexing isn't needed by most websocket applications

replies(2): >>44543464 #>>44544159 #
efecan0 No.44543464
Thank you for the extra information!

I am a recent CS graduate and I work on this project alone. I chose WebSocket over TCP because it is small, easy to read, and works everywhere without extra tools. gRPC + HTTP/3 is powerful but adds many libraries and more code to learn.

When real users need QUIC or multiplexing, I can change the transport later. Your feedback helps me a lot.

replies(1): >>44543664 #
1. reactordev No.44543664
The point people here are beating around the bush at is that a binary RPC framework has no need for HTTP handling, even for handshaking, when a more terse protocol of your own design could be better.

I totally understand your reasoning for leaning on WebSockets: you can test with a data channel in a browser app. But if we are talking low-latency, Superman-fast, modern C++ RPC, then forgeddaboutit. Look into handling an initial payload with credential negotiation outside of HTTP/1.1.

replies(2): >>44543949 #>>44546252 #
2. efecan0 No.44543949
You’re right: HTTP adds an extra RTT and headers we don’t strictly need.

My current roadmap is:

1. Keep WebSocket as the “zero-config / browser-friendly” default.
2. Add a raw-TCP transport with a single-frame handshake: [auth-token | caps] → ACK → binary stream starts.
3. Later, test a QUIC version for mobile / lossy networks.

So users can choose:

* plug-and-play (WebSocket)
* ultra-low-latency (raw TCP)

Thanks for the nudge; this will go on the transport roadmap.

replies(1): >>44546294 #
3. gr4vityWall No.44546252
Shouldn't WebSockets be comparable to raw TCP plus a simple message protocol on top, once you're done with the initial handshake and protocol upgrade?

I wouldn't expect latency to be an issue for long-lived connections, compared to raw TCP.

replies(1): >>44546351 #
4. reactordev No.44546294
The actual handshake part of WebSockets is good. Send a NONCE/KEY and get back a known hash encoded however you like. This can be as little as 24 bytes or as much as 1024. Just sending the HTTP preamble eats through at least 151 bytes. Imagine that for every connection, on every machine... that's a lot of wasted bandwidth if one can skip it.

Compression helps, but I think if you want to win over the embedded crowd, having a pure TCP alternative is going to be a huge win. That said, do NOT abandon the HTTP support; WebSockets are still extremely useful. WebRTC is too. ;)

replies(2): >>44547220 #>>44548314 #
5. reactordev No.44546351
No, but reliability is. And if you need to re-establish the connection, you'll have to preamble your way through another handshake.

gRPC uses HTTP/2, which has a client/server streaming API, to forgo the preamble. In the end, though, ANY HTTP-based protocol can be throttled by infrastructure in between. TCP, on the other hand, can be encrypted and sent without any preamble - just a protocol - and only L2/L3 can throttle it.

6. inetknght No.44547220
> Compression helps

It's generally unwise to use compression on an encrypted transport such as TLS (including HTTPS): compressed lengths can leak plaintext, which is the basis of compression-oracle attacks like CRIME and BREACH.

https://en.wikipedia.org/wiki/Oracle_attack

replies(1): >>44548321 #
7. efecan0 No.44548314
Agree: for small devices every byte counts. Plan is to keep WebSocket for zero-config use, but add a raw-TCP handshake (~24-40 bytes) so embedded clients can skip the HTTP preamble. I’ll note that on the transport roadmap. Appreciate the insights!
8. efecan0 No.44548321
Good point, thank you.

You’re right: no compression over TLS by default. If I add deflate support later, it will be opt-in and disabled when the connection is encrypted.

Appreciate the insights!