
FireDucks: Pandas but Faster

(hwisnu.bearblog.dev)
374 points by sebg | 2 comments
imranq No.42193396
This presentation does a good job distilling why FireDucks is so fast:

https://fireducks-dev.github.io/files/20241003_PyConZA.pdf

The main reasons are

* multithreading

* rewriting base pandas functions like dropna in C++

* a built-in compiler that removes unused code (see the toy sketch below)
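
A toy sketch of that last point, not FireDucks' actual implementation: if operations are recorded lazily instead of executed eagerly, the engine can drop any computation whose result is never requested. All class and variable names below are invented for illustration.

    # Toy model of lazy evaluation + dead-code elimination (not FireDucks' API).
    import pandas as pd

    class LazyFrame:
        def __init__(self, df, ops=None):
            self._df = df
            self._ops = ops or []          # recorded column definitions, not yet run

        def assign(self, **kwargs):
            # Record the work instead of doing it.
            return LazyFrame(self._df, self._ops + list(kwargs.items()))

        def collect(self, columns):
            # "Compile" step: keep only the definitions whose output is requested.
            needed = [(name, fn) for name, fn in self._ops if name in columns]
            df = self._df
            for name, fn in needed:        # skipped definitions are never computed
                df = df.assign(**{name: fn})
            return df[columns]

    df = pd.DataFrame({"a": range(1_000_000)})
    lazy = (LazyFrame(df)
            .assign(b=lambda d: d["a"] * 2)                       # used below
            .assign(c=lambda d: d["a"].rolling(10_000).mean()))   # never requested
    result = lazy.collect(["a", "b"])   # the expensive rolling mean is eliminated

Real query compilers also track dependencies between expressions; the toy version only checks which outputs were requested, which is enough to show the idea.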

Pretty impressive, especially given that you just import fireducks.pandas as pd instead of import pandas as pd and you are good to go.
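
For anyone who hasn't tried it, the drop-in usage really is just the import swap. A minimal sketch (the file and column names are made up; everything after the import is ordinary pandas code):

    # import pandas as pd             # before
    import fireducks.pandas as pd     # after: same pandas API

    df = pd.read_csv("trades.csv")    # hypothetical input file
    summary = (df.dropna(subset=["price"])
                 .groupby("ticker")["price"]
                 .mean())
    print(summary.head())   # inspecting the result triggers execution if evaluation is lazy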

However, I think that if you are using a pandas function that wasn't rewritten, you might not see the speedups.

replies(1): >>42193761
faizshah No.42193761
It’s not clear to me why this would be faster than Polars, DuckDB, Vaex, or ClickHouse. They seem to be taking the same approach: multithreading, optimizing the query plan, using Arrow, and optimizing core functions like group by.
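
For comparison, this is roughly what "optimizing the plan" looks like through Polars' lazy API (the file and column names are invented; DuckDB and ClickHouse do the analogous thing on SQL):

    import polars as pl

    # Nothing executes until .collect(); the optimizer can push the filter and
    # the column selection down into the CSV scan before running it in parallel.
    lazy = (
        pl.scan_csv("events.csv")                  # hypothetical input file
          .filter(pl.col("country") == "ZA")
          .group_by("user_id")
          .agg(pl.col("amount").sum().alias("total"))
    )
    print(lazy.explain())   # inspect the optimized plan
    df = lazy.collect()     # multithreaded execution over columnar (Arrow) memory
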
replies(2): >>42193939, >>42195630
1. maleldil No.42195630
None of those are drop-in replacements for Pandas. The main draw is "faster without changing your code".
replies(1): >>42197904
2. faizshah No.42197904
I’m asking more about what techniques they used to get the performance improvements shown in the slides.

They are showing a 20-30% improvement over Polars, ClickHouse, and DuckDB. But those three tools are state of the art in this area and generally rank near each other in every benchmark.

So a 20-30% improvement over that cluster makes me interested to know what techniques they are using to achieve that edge over their peers.
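
If anyone wants to sanity-check the numbers on their own workload, a rough harness like the one below is enough to see where you land (synthetic data, invented column names; results will vary a lot by machine and query shape). With a lazy engine you have to materialize the result, otherwise you only time plan construction:

    import time
    import numpy as np
    import pandas
    import fireducks.pandas   # assumes the FireDucks package is installed

    def bench(pd, n=10_000_000, repeats=3):
        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "key": rng.integers(0, 1_000, n),
            "val": rng.random(n),
        })
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            out = df.groupby("key")["val"].mean()
            _ = out.to_numpy()          # force materialization for lazy backends
            best = min(best, time.perf_counter() - start)
        return best

    print("pandas   :", bench(pandas))
    print("fireducks:", bench(fireducks.pandas))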