Yuxiang Wang

Reputation: 8423

Read the last N lines of a CSV file in Python with numpy / pandas

Is there a quick way to read the last N lines of a CSV file in Python, using numpy or pandas?

  1. I cannot do skip_header in numpy or skiprows in pandas because the length of the file varies, and I would always need the last N rows.

  2. I know I can use pure Python to read line by line, starting from the end of the file, but that would be very slow. I can do that if I have to, but a more efficient way with numpy or pandas (which are essentially C under the hood) would be really appreciated. Roughly what I mean by the pure-Python fallback is sketched below.
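For reference, this is the kind of pure-Python fallback I mean - only a rough sketch (tail_lines is just an illustrative helper name), reading blocks backwards from the end so the whole file doesn't have to be scanned; the resulting lines could then be handed to np.genfromtxt or pandas:

import os

def tail_lines(path, n, block_size=4096):
    # Read blocks from the end of the file until at least n newlines
    # have been seen, then return the last n lines (as bytes).
    with open(path, 'rb') as f:
        f.seek(0, os.SEEK_END)
        pos = f.tell()
        data = b''
        while pos > 0 and data.count(b'\n') <= n:
            read_size = min(block_size, pos)
            pos -= read_size
            f.seek(pos)
            data = f.read(read_size) + data
        return data.splitlines()[-n:]

# e.g. np.genfromtxt(tail_lines('data.csv', N), delimiter=',')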

Upvotes: 4

Views: 16183

Answers (3)

hpaulj

Reputation: 231385

With a small 10-line test file I tried two approaches: parse the whole thing and select the last N lines, versus load all the lines but only parse the last N:

In [1025]: timeit np.genfromtxt('stack38704949.txt',delimiter=',')[-5:]
1000 loops, best of 3: 741 µs per loop

In [1026]: %%timeit 
      ...: with open('stack38704949.txt','rb') as f:
      ...:      lines = f.readlines()
      ...: np.genfromtxt(lines[-5:],delimiter=',')

1000 loops, best of 3: 378 µs per loop

This was tagged as a duplicate of Efficiently Read last 'n' rows of CSV into DataFrame. The accepted answer there used

from collections import deque

and collected the last N lines in that structure. It also used StringIO to feed the lines to the parser, which is an unnecessary complication. genfromtxt takes input from anything that gives it lines, so a list of lines is just fine.

In [1031]: %%timeit 
      ...: with open('stack38704949.txt','rb') as f:
      ...:      lines = deque(f,5)
      ...: np.genfromtxt(lines,delimiter=',') 

1000 loops, best of 3: 382 µs per loop

Basically the same time as readlines and slice.

deque may have an advantage when the file is very large, and it gets costly to hang onto all the lines. I don't think it saves any file reading time. Lines still have to be read one by one.

Timings for the row_count-followed-by-skip_header approach are slower; it requires reading the file twice, and skip_header still has to read lines.

In [1046]: %%timeit 
      ...: with open('stack38704949.txt',"r") as f:
      ...:     reader = csv.reader(f,delimiter = ",")
      ...:     data = list(reader)
      ...:     row_count = len(data)
      ...: np.genfromtxt('stack38704949.txt',skip_header=row_count-5,delimiter=',')

The slowest run took 5.96 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 3: 760 µs per loop

For purposes of counting lines we don't need to use csv.reader, though it doesn't appear to cost much extra time.

In [1048]: %%timeit 
      ...: with open('stack38704949.txt',"r") as f:
      ...:    lines=f.readlines()
      ...:    row_count = len(lines)
      ...: np.genfromtxt('stack38704949.txt',skip_header=row_count-5,delimiter=',')

1000 loops, best of 3: 736 µs per loop

Upvotes: 9

Israel Unterman

Reputation: 13510

Option 1

You can read the entire file with numpy.genfromtxt, get it as a numpy array, and take the last N rows:

import numpy as np

a = np.genfromtxt('filename', delimiter=',')
lastN = a[-N:]

Option 2

You can do a similar thing with the usual file reading:

with open('filename') as f:
    lastN = list(f)[-N:]

but this time you will get a list of the last N lines, as strings.
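If you then want those string lines parsed as CSV, one possible follow-up (a sketch, assuming pandas is acceptable) is to join them and feed them back through io.StringIO:

import io
import pandas as pd

N = 5  # illustrative value
with open('filename') as f:
    lastN = list(f)[-N:]

# header=None because the header row is usually not among the last N lines;
# pass names=... explicitly if you know the column names.
df = pd.read_csv(io.StringIO(''.join(lastN)), header=None)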

Option 3 - without reading the entire file into memory

We use a list of at most N items that, on each iteration, holds the last N lines seen so far:

N = 10
lines = []
with open('csv01.txt') as f:
    for line in f:
        lines.append(line)
        if len(lines) > N:
            lines.pop(0)

A real CSV requires a minor change - parse each row with csv.reader instead of keeping raw strings:

import csv

N = 10
lines = []
with open('csv01.txt') as f:
    for row in csv.reader(f):
        lines.append(row)
        if len(lines) > N:
            lines.pop(0)

Upvotes: 3

Jason Brown

Reputation: 374

Use the skiprows parameter of pandas read_csv(); the tougher part is finding the number of lines in the CSV. Here's a possible solution:

import csv
import pandas as pd

with open('filename', "r") as f:
    reader = csv.reader(f, delimiter=",")
    data = list(reader)
    row_count = len(data)

df = pd.read_csv('filename', skiprows=row_count - N)
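A possible refinement, as a sketch: the line count can be done without keeping the parsed rows around, and skiprows also accepts a range, which lets you keep the header row on line 0:

import pandas as pd

N = 5  # number of rows wanted, as in the question

# Count lines without storing them.
with open('filename') as f:
    row_count = sum(1 for _ in f)

# Keep line 0 (the header) and skip the data rows before the last N.
df = pd.read_csv('filename', skiprows=range(1, row_count - N))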

Upvotes: 3
