    FireDucks: Pandas but Faster

    (hwisnu.bearblog.dev)
    374 points sebg | 12 comments
    rich_sasha No.42193043
    It's a bit sad for me. The biggest issue I have with pandas is the API, not the speed.

    So many footguns, poorly-thought-through functions, tens of keyword arguments instead of good abstractions, and 1d and 2d structures being totally different objects (with no higher-order structures). I'd take 50% of the speed for a better API.

    I looked at Polars, which looks neat, but seems made for a different purpose (data pipelines rather than building models semi-interactively).

    To be clear, this library might be great, it's just a shame for me that there seems no effort to make a Pandas-like thing with better API. Maybe time to roll up my sleeves...

    replies(22): >>42193093 #>>42193139 #>>42193143 #>>42193309 #>>42193374 #>>42193380 #>>42193693 #>>42193936 #>>42194067 #>>42194113 #>>42194302 #>>42194361 #>>42194490 #>>42194544 #>>42194670 #>>42195628 #>>42196720 #>>42197192 #>>42197489 #>>42198158 #>>42199832 #>>42200060 #
    stared No.42194490
    Yes, every time I write df[df.sth = val], a tiny part of me dies.

    For comparison, dplyr offers a lot of elegant functionality, and the functional approach in Pandas often feels like an afterthought. If R is cleaner than Python, that says a lot (as a side note: the same story holds for ggplot2 and matplotlib).

    Another surprise for friends coming from non-Python backgrounds is the lack of column-level type enforcement. You write df.loc[:, "col1"] and hope it works, with all checks happening at runtime. It would be amazing if Pandas integrated something like Pydantic out of the box.

    I still remember when Pandas first came out—it was fantastic to have a tool that replaced hand-rolled data structures using NumPy arrays and column metadata. But that was quite a while ago, and the ecosystem has evolved rapidly since then, including Python’s gradual shift toward type checking.
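    Nothing like this ships with Pandas today, but the runtime check is easy to bolt on by hand. A minimal sketch (the `validate_schema` helper and its `expected` mapping are hypothetical names for illustration, not a real Pandas or Pydantic API):

    ```python
    import pandas as pd

    def validate_schema(df: pd.DataFrame, expected: dict[str, str]) -> pd.DataFrame:
        """Hypothetical helper: check that each expected column exists and has
        the expected dtype, then return the frame unchanged so it can be used
        inside a method chain via .pipe(validate_schema, ...)."""
        for col, dtype in expected.items():
            if col not in df.columns:
                raise KeyError(f"missing column: {col!r}")
            if str(df[col].dtype) != dtype:
                raise TypeError(f"{col!r} is {df[col].dtype}, expected {dtype}")
        return df

    df = pd.DataFrame({"col1": [1, 2, 3], "col2": [0.5, 1.5, 2.5]})
    validate_schema(df, {"col1": "int64", "col2": "float64"})  # passes silently
    ```

    It only catches mistakes at runtime, of course, which is exactly the limitation being complained about: the type checker still can't see column names.
    
    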

    replies(3): >>42195076 #>>42197375 #>>42202116 #
    1. oreilles No.42195076
    > Yes, every time I write df[df.sth = val], a tiny part of me dies.

    That's because it's a bad way to use Pandas, even though it is the most popular and often-recommended way. But the thing is, you can just write "safe" immutable Pandas code with method chaining and lambda expressions, resulting in very Polars-like code. For example:

      df = (
        pd
        .read_csv("./file.csv")
        .rename(columns={"value":"x"})
        .assign(y=lambda d: d["x"] * 2)
        .loc[lambda d: d["y"] > 0.5]
      )
    
    Plus, now that the latest Pandas versions support Arrow datatypes, Polars' performance improvements over Pandas are considerably less impressive.

    Column-level name checking would be awesome, but unfortunately no Python library supports that, and it will likely never be possible unless some big changes are made to Python's type-hint system.

    replies(4): >>42195381 #>>42195401 #>>42195717 #>>42198220 #
    2. OutOfHere No.42195381
    Using `lambda` without care is dangerous because it risks not being vectorized at all. It risks being super slow, operating one row at a time. Is `d` a single row, the entire series, or the entire dataframe?
    replies(1): >>42195423 #
    3. rogue7 No.42195401
    Agreed 100%. I am using this method-chaining style all the time and it works like a charm.
    4. rogue7 No.42195423
    In this case `d` is the entire dataframe. It's just a way of "piping" the object without having to rename it.

    You are probably thinking about `df.apply(lambda row: ..., axis=1)` which operates on each row at a time and is indeed very slow since it's not vectorized. Here this is different and vectorized.
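    To make the distinction concrete, here is a small sketch (made-up data): in the chained style the lambda receives the whole DataFrame, so the arithmetic stays vectorized, while `apply(..., axis=1)` invokes Python once per row:

    ```python
    import pandas as pd

    df = pd.DataFrame({"x": [0.1, 0.4, 0.9]})

    # Vectorized: `d` is the entire DataFrame, so d["x"] * 2 is a single
    # NumPy-level operation over the whole column.
    fast = df.assign(y=lambda d: d["x"] * 2)

    # Row-at-a-time: the lambda runs once per row in pure Python --
    # the slow pattern the parent comment warns about.
    slow = df.apply(lambda row: row["x"] * 2, axis=1)

    assert fast["y"].tolist() == slow.tolist()  # same values, very different cost
    ```
    
    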

    replies(2): >>42195757 #>>42196869 #
    5. wodenokoto No.42195717
    I’m not really sure why you think

        .loc[lambda d: d["y"] > 0.5]
    
    Is stylistically superior to

        [df.y > 0.5]
    
    I agree it comes in handy quite often, but that still doesn't make it great to write compared to what SQL or dplyr offer for choosing columns to filter on (`where y > 0.5` for SQL, `filter(y > 0.5)` for dplyr).
    replies(3): >>42195824 #>>42196070 #>>42197641 #
    6. OutOfHere No.42195757
    That's excellent.
    7. oreilles No.42195824
    It is superior because you don't need to assign your dataframe to a variable ('df') and then update that variable or create a new one every time you do that operation. That makes it both safer (you're guaranteed to filter the current version of the dataframe) and more concise.

    For the rest of your comment: it's the best you can do in Python. Sure, you could write SQL, but then you're mixing text queries with Python data manipulation, and I would dread that. And SQL-only scripting is really out of the question.
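    A sketch of that safety point (made-up data): a boolean mask built from `df` captures the frame as it was before the chain ran, while the lambda always sees the current state:

    ```python
    import pandas as pd

    df = pd.DataFrame({"y": [0.3, 0.6, 0.1]})

    # The lambda is evaluated against the frame as it exists at that point
    # in the chain -- here, after the values have been doubled.
    chained = (
        df
        .assign(y=lambda d: d["y"] * 2)
        .loc[lambda d: d["y"] > 0.5]
    )

    # A pre-built mask like df.y > 0.5 still refers to the original,
    # un-doubled frame -- silently filtering on stale values.
    stale = df.assign(y=df["y"] * 2).loc[df["y"] > 0.5]

    # chained keeps 2 rows (0.6 and 1.2); stale keeps only 1 (1.2)
    ```
    
    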

    replies(1): >>42196789 #
    8. No.42196070
    9. chaps No.42196789
    Eh, SQL and python can still work together very well where SQL takes the place of pandas. Doing things in waves/batch helps.

    The big problem with pandas is that you still have to load the dataframe into memory to work with it. My data's too big for that, and Postgres makes that problem go away almost entirely.
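    A hedged sketch of that waves/batch pattern. It uses an in-memory sqlite3 database only so the example is self-contained; the parent is talking about Postgres, where the same `chunksize` idea applies, with the database doing the filtering before pandas ever sees a row:

    ```python
    import sqlite3

    import pandas as pd

    # Stand-in for a real database; table name and data are made up.
    conn = sqlite3.connect(":memory:")
    conn.executescript(
        "CREATE TABLE events (id INTEGER, value REAL);"
        "INSERT INTO events VALUES (1, 0.2), (2, 0.7), (3, 0.9);"
    )

    # Stream the (already-filtered) result set in fixed-size chunks
    # instead of loading it into one big in-memory DataFrame.
    total = 0
    for chunk in pd.read_sql(
        "SELECT * FROM events WHERE value > 0.5", conn, chunksize=2
    ):
        total += len(chunk)  # process each small DataFrame, then drop it
    ```

    Each `chunk` is an ordinary DataFrame, so the per-batch processing can be whatever pandas code you already have.
    
    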

    10. almostkorean No.42196869
    Appreciate the explanation, this is something I should know by now but don't
    11. __mharrison__ No.42197641
    It's superior because it is safer, not because the API (or the requirement to use a lambda) looks better. The lambda allows the operation to work on the current state of the dataframe in the chained operation rather than the original dataframe. Alternatively, you could use .query("y > 0.5"), which also works on the current state of the dataframe.

    (I'm the first to complain about the many warts in Pandas. Have written multiple books about it. This is annoying, but it is much better than [df.y > 0.5].)
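    Both spellings side by side, for comparison (made-up data); `query` evaluates its expression against the post-`assign` frame just as the lambda does:

    ```python
    import pandas as pd

    df = pd.DataFrame({"x": [0.1, 0.3, 0.6]})

    via_lambda = (
        df
        .assign(y=lambda d: d["x"] * 2)
        .loc[lambda d: d["y"] > 0.5]
    )

    # .query sees the current frame in the chain, so it is equally
    # safe -- at the cost of putting the expression in a string.
    via_query = (
        df
        .assign(y=lambda d: d["x"] * 2)
        .query("y > 0.5")
    )

    assert via_lambda.equals(via_query)
    ```
    
    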

    12. moomin No.42198220
    I mean, yes, there are Arrow data types, but they have a long way to go before reaching full parity with the NumPy version.