We use DuckDB as the stream processing engine, which lets SQLFlow process tens of thousands of messages per second using ~250MiB of memory!
DuckDB also supports a rich ecosystem of sinks and connectors!
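If you're curious what the core loop looks like, here's a rough sketch of the micro-batch pattern (this is not SQLFlow's actual code; the topic name, message schema, and SQL are placeholders): poll a batch of messages from Kafka, stage it as an Arrow table, and let DuckDB run the SQL.

```python
# Sketch only: consume Kafka messages, stage each micro-batch as an
# Arrow table, and query it with DuckDB SQL. Topic, fields, and the
# query itself are made-up placeholders, not SQLFlow internals.
import json

import duckdb
import pyarrow as pa
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "sqlflow-sketch",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])  # hypothetical topic

con = duckdb.connect()  # in-memory DuckDB keeps the footprint small

BATCH_SIZE = 1000
rows = []
while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None or msg.error():
        continue
    rows.append(json.loads(msg.value()))
    if len(rows) < BATCH_SIZE:
        continue
    # Expose the micro-batch to DuckDB as a queryable Arrow table.
    con.register("batch", pa.Table.from_pylist(rows))
    for city, n in con.execute(
        "SELECT city, count(*) AS n FROM batch GROUP BY city"
    ).fetchall():
        print(city, n)  # a real pipeline would write to a sink instead
    con.unregister("batch")
    rows.clear()
```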
https://sql-flow.com/docs/category/tutorials/
https://github.com/turbolytics/sql-flow
We were tired of running JVMs for simple stream processing, and also of writing bespoke one-off stream processors.
I would love your feedback, criticisms and/or experiences!
Thank you
But really, you should get excited for DuckDB Labs to build out materialized views: materialized views where you can ingest more streaming data to incrementally update aggregates. That way you could just keep pushing rows from Kafka through your aggregates.
It is going to be a POWERHOUSE for streaming analytics.
Contact DuckDB Labs if you want to sponsor the work on materialized views: https://duckdb.org/roadmap
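In the meantime, you can approximate incrementally maintained aggregates by hand: keep an aggregate table keyed by the group and upsert each micro-batch into it. A minimal sketch (the table and column names are illustrative, not anything DuckDB Labs has shipped):

```python
# Approximate an incrementally maintained materialized view by hand:
# a keyed aggregate table that each Kafka micro-batch is upserted into.
# Names here (page_views, page, views) are illustrative only.
import duckdb

con = duckdb.connect("aggregates.db")
con.execute("""
    CREATE TABLE IF NOT EXISTS page_views (
        page  TEXT PRIMARY KEY,
        views BIGINT
    )
""")

def apply_batch(batch):
    """Fold one micro-batch of (page, count) pairs into the running totals."""
    con.executemany(
        """
        INSERT INTO page_views (page, views) VALUES (?, ?)
        ON CONFLICT (page) DO UPDATE SET views = views + excluded.views
        """,
        batch,
    )

# Each poll of a Kafka consumer would yield a batch like these:
apply_batch([("/home", 3), ("/docs", 1)])
apply_batch([("/home", 2)])
print(con.execute("SELECT * FROM page_views ORDER BY page").fetchall())
# [('/docs', 1), ('/home', 5)]
```

It's not as elegant as a real incrementally maintained view (you have to decompose the aggregate into mergeable pieces yourself), but it covers counts, sums, and similar aggregates today.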
I am familiar with materialized views / dynamic tables from enterprise-grade cloud lake offerings, but I've never quite understood where DuckDB, though impressive, fits into everyone's use case. I've toyed with it for personal things, and it's very cool having a local instance of something akin to Snowflake for processing and aggregating Big Data™, but I generally don't see it used in operational settings. For application development, people are generally tied to SQLite and Postgres.
It all does seem really cool, though; I guess I'm just not feeling creative enough to conjure up a stream-to-DuckDB use case. Feel free to bombard me with cool ideas.