Hoang Pham

Reputation: 37

Pandas read_csv with 4GB of csv

My machine became laggy while I tried to read a 4GB CSV in a Jupyter notebook with the chunksize option:

raw = pd.read_csv(csv_path, chunksize=10**6)
data = pd.concat(raw, ignore_index=True)

This takes forever to run and also freezes my machine (Ubuntu 16.04 with 16GB of RAM). What is the right way to do this?

Upvotes: 0

Views: 495

Answers (1)

XxX

Reputation: 36

The point of using chunks is that you don't need the whole dataset in memory at once; you can process each chunk as you read the file. Your pd.concat call defeats that, because it rebuilds the full 4GB DataFrame in memory. Assuming you don't actually need the whole dataset in memory at one time, you can do

import pandas as pd

chunksize = 10 ** 6
for chunk in pd.read_csv(filename, chunksize=chunksize):
    do_something(chunk)  # process each chunk, then let it go out of scope
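For example, if you only need a per-column aggregate rather than the full table, you can keep a running total so that no more than one chunk is ever in memory. A minimal sketch, assuming a numeric column; the column name "value" and the filename are hypothetical:

import pandas as pd

chunksize = 10 ** 6
total = 0.0
rows = 0
for chunk in pd.read_csv("data.csv", chunksize=chunksize):
    # Reduce each chunk to a couple of scalars; the chunk itself
    # can be garbage-collected before the next one is read
    total += chunk["value"].sum()
    rows += len(chunk)

mean = total / rows

This keeps peak memory at roughly one chunk (about a million rows) instead of the full 4GB.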

Upvotes: 2
