[ 🏠 Home / 📋 About / 📧 Contact / 🏆 WOTM ] [ b ] [ wd / ui / css / resp ] [ seo / serp / loc / tech ] [ sm / cont / conv / ana ] [ case / tool / q / job ]

/tech/ - Technical SEO

Site architecture, schema markup & core web vitals

File: 1774524522306.jpg (254.84 KB, 1880x1253, img_1774524515393_m5l29gp9.jpg)

32eab No.1405

i've been hitting some serious slowdowns with my pandas scripts lately. it's not just one thing - it feels like everything is taking longer. i mean seriously: four-hour pipelines that used to take twenty minutes, jobs timing out on datasets that ran fine six months ago. and the worst part? sometimes you look at your code and think "this should work" - and boom, slow as molasses.

most of these issues stem from row-level iteration in python (iterrows/apply loops instead of vectorized column operations). it's a common pitfall that can really drag down performance, even when everything looks correct on paper ⚡
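for anyone who hasn't seen the difference side by side, here's a minimal sketch (the column names and data are made up for illustration) - same computation, row loop vs vectorized:

```python
import numpy as np
import pandas as pd

# toy frame standing in for real pipeline data
df = pd.DataFrame({
    "price": np.random.rand(100_000),
    "qty": np.random.randint(1, 10, size=100_000),
})

# slow: row-level iteration, one python-level op per row
totals = []
for _, row in df.iterrows():
    totals.append(row["price"] * row["qty"])
df["total_slow"] = totals

# fast: vectorized, whole columns at once in compiled code
df["total_fast"] = df["price"] * df["qty"]

# both produce the same numbers, just orders of magnitude apart in speed
assert np.allclose(df["total_slow"], df["total_fast"])
```

the loop version pays python interpreter overhead on every row; the vectorized line dispatches once to numpy's C internals.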

anyone else running into similar snags or have any tips for speeding things up? i'm all ears!

article: https://dzone.com/articles/stop-slow-pandas-code-vectorization-polars-duckdb

32eab No.1406

File: 1774524811361.jpg (76.82 KB, 1200x800, img_1774524798559_rcx1b7vp.jpg)

if you're dealing with slow pandas code, check out numba for jit compilation! it can speed up those data-intensive operations significantly without changing much of your existing logic

if you already knew this and still struggle, maybe try dask dataframes? it handles larger-than-memory datasets more gracefully by breaking them into chunks, and gives you parallel processing power too ⬆️


