PV8

Reputation: 6270

Does pandas automatically skip rows due to a size limit?

We all know the question that comes up when you run into a memory error: Maximum size of pandas dataframe

I am also trying to read 4 large CSV files with the following command:

import glob
import pandas as pd

files = glob.glob("C:/.../rawdata/*.csv")
dfs = [pd.read_csv(f, sep="\t", encoding='unicode_escape') for f in files]
df = pd.concat(dfs, ignore_index=True)

The only message I receive is:

C:..\conda\conda\envs\DataLab\lib\site-packages\IPython\core\interactiveshell.py:3214: DtypeWarning: Columns (22,25,56,60,71,74) have mixed types. Specify dtype option on import or set low_memory=False. if (yield from self.run_code(code, result)):

which should be no problem.
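The warning itself already suggests the fix: either set low_memory=False or pin the dtypes on import. A minimal sketch (the column names in the dtype mapping are hypothetical, use the ones that correspond to the flagged column indices):

df = pd.read_csv(f, sep="\t", encoding='unicode_escape', low_memory=False)

# or pin the types of the flagged columns explicitly:
df = pd.read_csv(f, sep="\t", encoding='unicode_escape',
                 dtype={'column_22': str, 'column_25': str})  # hypothetical names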

My total dataframe has a shape of (6639037, 84).

Could there be any data-size restriction that does not raise a memory error? In other words, is Python automatically skipping some lines without telling me? I had this with another program in the past. I don't think Python is that lazy, but you never know.
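One way to rule this out is to compare the raw line counts of the source files with the number of rows actually loaded. A rough sketch, assuming one header line per file and no embedded newlines inside quoted fields:

total_lines = 0
for f in files:
    with open(f, encoding='unicode_escape') as fh:
        total_lines += sum(1 for _ in fh)

expected_rows = total_lines - len(files)  # subtract one header row per file
print(expected_rows, len(df))  # if these differ, rows were dropped on read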

Further reading: later I am saving it as an SQLite file, but I also don't think this should be a problem:

import sqlite3

conn = sqlite3.connect('C:/.../In.db')
df.to_sql(name='rawdata', con=conn, if_exists='replace', index=False)
conn.commit()
conn.close()
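If memory also gets tight at this step, to_sql accepts a chunksize argument so the rows are written in batches instead of all at once (the batch size below is an arbitrary example):

df.to_sql(name='rawdata', con=conn, if_exists='replace', index=False, chunksize=10000)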

Upvotes: 1

Views: 336

Answers (2)

PV8

Reputation: 6270

It turned out that there was an error in the file reading, so thanks to @Oleg O for the help and the tricks to reduce memory usage.

For now I do not think there is an effect where Python automatically skips lines; it only happened because of faulty code. You can find my example here: Pandas read csv skips some lines

Upvotes: 0

Oleg O

Reputation: 1065

You can pass a generator expression to concat:

dfs = (pd.read_csv(f, sep="\t", encoding='unicode_escape') for f in files)

so you avoid creating that huge list in memory. This might alleviate the problem with the memory limit.
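Putting it together with the code from the question:

files = glob.glob("C:/.../rawdata/*.csv")
dfs = (pd.read_csv(f, sep="\t", encoding='unicode_escape') for f in files)
df = pd.concat(dfs, ignore_index=True)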

Besides, you can write a special generator that downcasts some columns as it reads. Say, like this:

def downcaster(names):
    for name in names:
        x = pd.read_csv(name, sep="\t", encoding='unicode_escape')
        # shrink repetitive string columns and oversized integer columns
        x['some_column'] = x['some_column'].astype('category')
        x['other_column'] = pd.to_numeric(x['other_column'], downcast='integer')
        yield x

dc = downcaster(names)
df = pd.concat(dc, ignore_index=True)
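To check that the downcast actually pays off, compare the memory footprint before and after, e.g.:

print(df.memory_usage(deep=True).sum() / 1e6, "MB")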

Upvotes: 3
