George

Reputation: 451

How to read a large CSV with pandas?

I am loading an rdx (CSV-like format) file of around 16 GB into a pandas DataFrame and then cutting it down by removing some rows. Here's the code:

import pandas as pd

t_min, t_max, n_min, n_max, c_min, c_max = raw_input('t_min, t_max, n_min, n_max, c_min, c_max: ').split(' ')

data = pd.read_csv('/Users/me/Desktop/foo.rdx', header=None)

new_data = data.loc[
    (data[0] >= float(t_min)) & (data[0] <= float(t_max)) &
    (data[1] >= float(n_min)) & (data[1] <= float(n_max)) &
    (data[2] >= float(c_min)) & (data[2] <= float(c_max))
]

This code works for smaller files (~5 GB), but it apparently cannot load a file of this size. Is there a workaround, or perhaps a way to do this with a bash script?

Any help or suggestion is greatly appreciated.

Upvotes: 2

Views: 3474

Answers (1)

Uri Goren

Reputation: 13700

Try the chunksize parameter: read the file in chunks, filter each chunk, and then concat the results.

import pandas as pd

# read the six filter bounds as floats
t_min, t_max, n_min, n_max, c_min, c_max = map(float, raw_input('t_min, t_max, n_min, n_max, c_min, c_max: ').split())

num_of_rows = 1024
TextFileReader = pd.read_csv(path, header=None, chunksize=num_of_rows)  # path to your .rdx file

dfs = []
for chunk_df in TextFileReader:
    # keep only the rows of this chunk that fall inside all three ranges
    dfs.append(chunk_df.loc[
        (chunk_df[0] >= t_min) & (chunk_df[0] <= t_max) &
        (chunk_df[1] >= n_min) & (chunk_df[1] <= n_max) &
        (chunk_df[2] >= c_min) & (chunk_df[2] <= c_max)
    ])

df = pd.concat(dfs, sort=False)
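
If the filtered rows still add up to something large, one variation on the same idea is to append each filtered chunk to a new file instead of collecting the chunks in a list, so nothing big ever sits in memory. A rough, untested sketch reusing path and the parsed bounds from above (out.csv is just a hypothetical output name):

out_path = 'out.csv'  # hypothetical output file name
first = True
for chunk_df in pd.read_csv(path, header=None, chunksize=10 ** 5):
    filtered = chunk_df.loc[
        (chunk_df[0] >= t_min) & (chunk_df[0] <= t_max) &
        (chunk_df[1] >= n_min) & (chunk_df[1] <= n_max) &
        (chunk_df[2] >= c_min) & (chunk_df[2] <= c_max)
    ]
    # write the first chunk fresh, then append the rest
    filtered.to_csv(out_path, mode='w' if first else 'a', header=False, index=False)
    first = False

The result should be far smaller than the original 16 GB file, so it can then be read back with a plain pd.read_csv(out_path, header=None).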

Upvotes: 4
