I'll start.
Not a drop-in replacement, but worth looking at.
Kind of like how people use Docker for everything, when what you really should be doing is learning how to package software.
Agree on the Kafka thing though. I've seen so many devs trip over Kafka topics, partitions and offsets when their throughput is low enough that RabbitMQ would do fine.
The people distributing software should shut the hell up about how the rest of the system it runs in is configured. (But not you, if your job is packaging full systems.)
That said, it seems to me that this is becoming less of a problem.
So you are stuck with some really terrible tradeoffs: go with Confluent Cloud, pay a fortune, and still likely have some issues to deal with. Or go with Confluent Platform, where you still have to pay people to operate it while Confluent the company focuses most of its attention on Cloud, and they still charge you a fortune. Or go completely open source, forgo anything Confluent, and risk being really up the river when something inevitably breaks, or when you learn the hard way that librdkafka has poor support for a lot of the shiny features discussed in the release notes.
Redpanda has surpassed Kafka from a technical quality perspective, but Kafka has it beat on the ecosystem and the sheer inertia of moving from one platform to another. Kafka, for example, was built in the era of spinning-rust hard disks and expects to be run on general-purpose compute nodes, whereas Redpanda will actually look at your hardware and optimize the number of threads it spawns for the box it is on, assuming it is the only real app running there, which is true for anything but a toy deployment.
This is my experience from running platform teams and being head of messaging at multiple companies.
I used ZMQ to connect the nodes: worker nodes connected to an indexer/coordinator node that effectively did a `SELECT ... FROM ... ORDER BY ... ASC`.
It's easier than you may think; the bits here ended up at probably < 1000 SLOC all told.
- Coordinator node ingests from a SQL table
- Each row in the table has a discriminator key; rows are ordered by stacking them into an in-memory list-of-lists keyed on it (see the sketch after this list)
- Worker nodes are started with _n_ threads
- Each thread sends a "ready" message to the coordinator and coordinator replies with a "work" message
- On each cycle, the coordinator advances the pointer on the outer list, locks that child list, and marks its first item as "pending"
- When worker thread finishes, it sends a "completed" message to the coordinator and coordinator replies with another "work" message
- Coordinator unlocks the list the work item originated from and dequeues the finished item.
- When it reaches the end of the outer list, it cycles back to the beginning and starts over, skipping any child lists marked as locked (i.e., with a pending work item)
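To make the first two bullets concrete, here's a minimal sketch of materializing that "2D" work queue, assuming a `jobs` table with `id`, `discriminator`, and `payload` columns (all illustrative names, not the original schema):

```python
import sqlite3
from collections import defaultdict

def build_work_lists(db_path="jobs.db"):
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT discriminator, payload FROM jobs ORDER BY id ASC")
    groups = defaultdict(list)
    for key, payload in rows:
        groups[key].append(payload)   # stack into per-key child lists
    return list(groups.values())      # the in-memory list-of-lists
```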
Effectively a distributed event loop with the events queued up via a simple SQL query. Dead simple design, extremely robust, very high throughput, very easy to scale workers both horizontally (more nodes) and vertically (more threads). ZMQ made it easy to connect the remote threads to the centralized coordinator. It was effectively "self-balancing" because the workers would only re-queue their thread once it finished work. Very easy to manage, but it did not have hot failover, since we kept the materialized "2D" work queue in memory. Though we very rarely had issues with this.
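For concreteness, here's a minimal sketch of that ready/work/completed protocol in Python with pyzmq. The endpoints, message shapes, and `process` handler are illustrative assumptions, not the original implementation; each worker node would run _n_ of these worker loops, one per thread, each with its own REQ socket, and payloads are assumed JSON-serializable.

```python
import time
import zmq

def coordinator(work_lists, endpoint="tcp://*:5555"):
    """work_lists: the in-memory list-of-lists built from the SQL query."""
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.REP)            # REP fair-queues many REQ workers
    sock.bind(endpoint)
    locked = [False] * len(work_lists)    # child list has a pending item
    i = -1                                # round-robin pointer
    while True:
        msg = sock.recv_json()            # "ready" or "completed"
        if msg["type"] == "completed":
            locked[msg["list"]] = False          # unlock the child list
            work_lists[msg["list"]].pop(0)       # dequeue the finished item
        # advance the pointer, skipping locked or empty child lists
        sent = False
        for _ in range(len(work_lists)):
            i = (i + 1) % len(work_lists)
            if work_lists[i] and not locked[i]:
                locked[i] = True                 # mark head item "pending"
                sock.send_json({"type": "work", "list": i,
                                "item": work_lists[i][0]})
                sent = True
                break
        if not sent:
            # nothing available: either fully drained, or every remaining
            # child list has a pending item in flight
            drained = not any(work_lists)
            sock.send_json({"type": "stop" if drained else "wait"})

def worker(endpoint="tcp://localhost:5555"):
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.REQ)
    sock.connect(endpoint)
    sock.send_json({"type": "ready"})
    while True:
        msg = sock.recv_json()
        if msg["type"] == "stop":
            break
        if msg["type"] == "wait":
            time.sleep(0.1)                      # back off, then re-queue
            sock.send_json({"type": "ready"})
            continue
        process(msg["item"])                     # stand-in for the real task
        sock.send_json({"type": "completed", "list": msg["list"]})
```

REP fair-queues requests across connected REQ sockets, which is what makes the setup "self-balancing": a worker only receives more work after it reports ready or completed.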
Generally I say, "Message queues are for tasks, Kafka is for data." But in the latter case, if your data volume is not huge, a message queue for async ETL will do just fine and give better guarantees as far as FIFO goes.
In essence, Kafka is a very specialized version of much more general-purpose message queues, which should be your default starting point. It's similar to replacing a SQL RDBMS with some kind of special NoSQL system - if you need it, okay, but otherwise the general-purpose default is usually the better option.
> Ah yes, and every consumer should just do this in a while (true) loop as producers write to it. Very efficient and simple with no possibility of lock contention or hot spots. Genius, really.
That seemed to imply that it's not possible to build a high-performance pub/sub system using a simple SQL select. I do not think that is true; it is in fact fairly easy. Clearly, the design as proposed is not the same as Kafka.

kevstev wrote just above about Kafka being written to run on spinning disks (HDDs), while Redpanda was written to take advantage of the latest hardware (local NVMe SSDs). He has some great insights.
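As a rough sketch of what such a select-based consumer can look like (table, column, and handler names are assumptions for illustration):

```python
import sqlite3
import time

def consume(db_path="events.db", poll_interval=0.5):
    conn = sqlite3.connect(db_path)
    last_id = 0                                   # consumer-side offset
    while True:
        rows = conn.execute(
            "SELECT id, payload FROM events "
            "WHERE id > ? ORDER BY id ASC LIMIT 100",
            (last_id,),
        ).fetchall()
        for row_id, payload in rows:
            handle(payload)                       # stand-in for real handler
            last_id = row_id                      # advance our offset
        if not rows:
            time.sleep(poll_interval)             # back off when idle
```

Since each consumer tracks its own `last_id`, the poll is an index-range scan on the primary key rather than a contended queue pop; readers don't lock rows or contend with each other, and the cost is polling latency.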
As well, Apache Kafka was written in Java, back in an era when you weren't quite sure what operating system you might be running on. For example, when Azure first launched, it was a Windows NT-based system called Windows Azure. Most everyone else had already decided to roll Linux. Microsoft refused to budge on Linux until 2014, and didn't release its own Azure Linux until 2020.
Once everyone decided to roll Linux, the "write once, run anywhere" promise of Java was obviated. But because you were still locked into a Java Virtual Machine (JVM), your application couldn't optimize itself for the underlying hardware and operating system it was running on.
Redpanda, for example, is written in C++ on top of the Seastar framework (seastar.io), the same framework at the heart of ScyllaDB. It's a thread-per-core, shared-nothing architecture that allows Redpanda to optimize hardware utilization in ways a Java app can only dream of: CPU utilization, memory usage, IO throughput. It's all just better performance on Redpanda.
It means you're actually getting better utility out of the servers you deploy. Fewer wasted, fallow CPU cycles, so better price-performance. Faster writes. Lower p99 latencies. It's just... better.
Now, I am biased. I work at Redpanda now. But I've been a big fan of Kafka since 2015. I am still bullish on data streaming. I just think that Apache Kafka, as a Java-based platform, needs some serious rearchitecture.
Even Confluent doesn't use vanilla Kafka. They rewrote their own engine, Kora. They claim it is 10x faster. Or 30x faster. Depending on what you're measuring.
1. https://www.confluent.io/confluent-cloud/kora/
2. https://www.confluent.io/blog/10x-apache-kafka-elasticity/
https://docs.streamnative.io/cloud/build/kafka-clients/kafka...