Reputation: 122052
I have a tab-separated file with 1 billion lines like these (imagine 200+ columns instead of 3):
abc -0.123 0.6524 0.325
foo -0.9808 0.874 -0.2341
bar 0.23123 -0.123124 -0.1232
If the number of columns is unknown, how do I find the number of columns in a tab-separated file?
I've tried this:
import io
with io.open('bigfile', 'r') as fin:
    num_columns = len(fin.readline().split('\t'))
And (from @EdChum, Read a tab separated file with first column as key and the rest as values):
import pandas as pd
num_columns = pd.read_csv('bigfile', sep='\s+', nrows=1).shape[1]
How else can I get the number of columns? And which is the most efficient way? (Imagine that I suddenly receive a file with an unknown number of columns, e.g. more than 1 million columns.)
Upvotes: 3
Views: 3176
Reputation: 180401
Some timings on a file with 100000 columns. count seems fastest, but it is off by one because it counts the separators rather than the fields (the sessions below assume import csv and import io have been run):
In [14]: %%timeit
with open("test.csv") as f:
    r = csv.reader(f, delimiter="\t")
    len(next(r))
....:
10 loops, best of 3: 88.7 ms per loop
In [15]: %%timeit
with open("test.csv") as f:
    next(f).count("\t")
....:
100 loops, best of 3: 11.9 ms per loop
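To turn the tab count into the actual number of columns you just add 1, since N columns are separated by N - 1 tabs; a minimal sketch (test.csv stands in for your file):

with open("test.csv") as f:
    # N columns are separated by N - 1 tab characters,
    # so add 1 to the tab count to get the column count.
    num_columns = next(f).count("\t") + 1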
%%timeit
with io.open('test.csv', 'r') as fin:
    num_columns = len(next(fin).split('\t'))
....:
10 loops, best of 3: 133 ms per loop
Using str.translate is actually the fastest, although again you need to add 1 (note this is the Python 2 str.translate signature; a Python 3 equivalent follows the timing below):
In [5]: %%timeit
with open("test.csv") as f:
    n = next(f)
    (len(n) - len(n.translate(None, "\t")))
...:
100 loops, best of 3: 9.9 ms per loop
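On Python 3, str.translate takes a mapping instead of the two-argument Python 2 form, so an equivalent sketch (again adding 1, with test.csv as a placeholder) would be:

with open("test.csv") as f:
    n = next(f)
    # Mapping the tab codepoint to None deletes it, so the length
    # difference is the number of tabs; add 1 for the column count.
    num_columns = len(n) - len(n.translate({ord("\t"): None})) + 1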
The pandas solution gives me an error:
in pandas.parser.TextReader._read_low_memory (pandas/parser.c:7977)()
StopIteration:
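The StopIteration may depend on the pandas version or the file; a sketch of a variant that might sidestep it, using an explicit tab delimiter and header=None so the first line is read as data (untested against this exact file):

import pandas as pd

# Read only the first row and take the column count from its shape.
num_columns = pd.read_csv('test.csv', sep='\t', nrows=1, header=None).shape[1]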
Using readline adds more overhead:
In [19]: %%timeit
with open("test.csv") as f:
    f.readline().count("\t")
....:
10 loops, best of 3: 28.9 ms per loop
In [30]: %%timeit
with io.open('test.csv', 'r') as fin:
    num_columns = len(fin.readline().split('\t'))
....:
10 loops, best of 3: 136 ms per loop
Different results using Python 3.4:
In [7]: %%timeit
with io.open('test.csv', 'r') as fin:
    num_columns = len(next(fin).split('\t'))
...:
10 loops, best of 3: 102 ms per loop
In [8]: %%timeit
with open("test.csv") as f:
    f.readline().count("\t")
...:
100 loops, best of 3: 12.7 ms per loop
In [9]: %%timeit
with open("test.csv") as f:
    next(f).count("\t")
...:
100 loops, best of 3: 11.5 ms per loop
In [10]: %%timeit
with io.open('test.csv', 'r') as fin:
    num_columns = len(next(fin).split('\t'))
....:
10 loops, best of 3: 89.9 ms per loop
In [11]: %%timeit
with io.open('test.csv', 'r') as fin:
    num_columns = len(fin.readline().split('\t'))
....:
10 loops, best of 3: 92.4 ms per loop
In [13]: %%timeit
with open("test.csv") as f:
    r = csv.reader(f, delimiter="\t")
    len(next(r))
....:
10 loops, best of 3: 176 ms per loop
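Putting the above together: for a file with an unknown and possibly huge number of columns, reading a single line and counting the delimiter (plus 1) is simple and close to the fastest on both Python 2 and 3; the translate trick shaves off a little more. A minimal sketch; count_columns is just an illustrative name:

def count_columns(path, delimiter="\t"):
    # Only the first line is read; the remaining (possibly
    # billions of) lines never need to be touched.
    with open(path) as f:
        return next(f).count(delimiter) + 1

num_columns = count_columns("test.csv")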
Upvotes: 3
Reputation: 3437
There is a str.count() method:
h = open('path', 'r')  # open() is the built-in; file.open does not exist
columns = h.readline().count('\t') + 1  # tabs + 1 = number of fields
h.close()
Upvotes: 0