Did you move from Pandas to Polars? Why, and how was your experience?
from driving_crooner@lemmy.eco.br to python@programming.dev on 30 May 2025 03:41
https://lemmy.eco.br/post/13604707

I’m finding myself with a couple of really big databases and my PC is throwing memory errors, so I’m moving the project to Polars and learning along the way. I’d like to read about your experience: how you did it, what frustrated you, and what you found good. (I’m still getting used to the syntax, but I’m loving how fast it reads the databases.)

#python


kSPvhmTOlwvMd7Y7E@programming.dev on 30 May 2025 06:59

I moved from pandas.

That’s it, there is no Polars. It’s been great!

anzo@programming.dev on 02 Jun 16:15

I am still using R dataframes… or tibbles :)

misk@sopuli.xyz on 30 May 2025 07:25

I thought I’d be using Polars more, but in the end, professionally, when I have to process large amounts of data I won’t be doing that on my computer but on a Hadoop cluster via PySpark, which also has a very non-Pythonic syntax. For smaller stuff Pandas is just more convenient.

driving_crooner@lemmy.eco.br on 30 May 2025 12:43

My company is moving to Databricks, which I know uses PySpark, but I’ve never used it. I guess eventually I’m going to have to learn it too.

8uurg@lemmy.world on 30 May 2025 07:40

Polars has essentially replaced Pandas for me. It is MUCH faster (in part due to lazy queries) and uses much less RAM, especially if the query can be streamed. While syntax takes a bit of getting used to at first, it allows me to specify a lot more without having to resort to apply with custom Python functions.

My biggest gripe is that the error messages are significantly less readable due to the amount of noise: the stack trace into the query executor does not help with locating my logic error, and the stringified query does not tell me where in the query things went wrong…

driving_crooner@lemmy.eco.br on 30 May 2025 12:40

I had to move away from apply a while ago because it was extremely slow, and started using masks and vectorized operations. That's actually the roadblock for me right now: I can't find a way to make it work (I used to do df.loc[mask, 'column'], but df.with_columns(pl.when(mask).then()…) is not working as expected).

8uurg@lemmy.world on 30 May 2025 15:03

It is unclear to me what you are trying to accomplish. Do you want to update the elements where the mask is true?

driving_crooner@lemmy.eco.br on 30 May 2025 15:25

There’s this categorical column of integers that has some exceptional cases where letters are included. I need to process the column, except for the exceptional cases, to format it. But I just found out it was giving me a problem because pandas imported it as a mixed type, while Polars imported it as a string, respecting the original correct formatting.

gigachad@sh.itjust.works on 30 May 2025 07:46

Nope. I am working with geodata, so I need GeoPandas for my work. Sadly, there is no serious alternative so far. If that changes in the future, I am absolutely on board with giving Polars a try.