Piotr Lopusiewicz

Reputation: 2594

Reading n lines from file (but not all) in Python

How can I read n lines from a file at a time, instead of just one, when iterating over it? I have a file with a well-defined structure and I would like to do something like this:

for line1, line2, line3 in file:
    do_something(line1)
    do_something_different(line2)
    do_something_else(line3)

but it doesn't work:

ValueError: too many values to unpack

For now I am doing this:

for line in file:
    do_something(line)
    newline = file.readline()
    do_something_else(newline)
    newline = file.readline()
    do_something_different(newline)
... etc.

which sucks because I am writing endless 'newline = file.readline()' lines that clutter the code. Is there any smart way to do this? (I really want to avoid reading the whole file at once because it is huge.)

Upvotes: 3

Views: 4777

Answers (11)

tzot

Reputation: 95941

itertools to the rescue:

import itertools

def grouper(n, iterable, fillvalue=None):
    "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    # itertools.izip_longest on Python 2
    return itertools.zip_longest(*args, fillvalue=fillvalue)


fobj = open(yourfile, "r")
for line1, line2, line3 in grouper(3, fobj):
    pass

Upvotes: 2

Mats Ekberg

Reputation: 1675

You could use a helper function like this:

def readnlines(f, n):
    lines = []
    for x in range(0, n):
        lines.append(f.readline())
    return lines

Then you can do something like you want:

while True:
    line1, line2, line3 = readnlines(file, 3)
    if not line1:  # readline() returns '' at end of file
        break
    do_stuff(line1)
    do_stuff(line2)
    do_stuff(line3)

That being said, if you are using xml files, you will probably be happier in the long run if you use a real xml parser...

Upvotes: 2

Thomas K

Reputation: 40340

It's possible to do it with a clever use of the zip function. It's short, but a bit voodoo-ish for my tastes (hard to see how it works). It cuts off any lines at the end that don't fill a group, which may be good or bad depending on what you're doing. If you need the final lines, itertools.zip_longest (izip_longest on Python 2) might do the trick; a sketch follows the one-liner below.

zip(*[iter(inputfile)] * 3)
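
For instance, a minimal sketch of that variant, assuming Python 3 (the filename is a placeholder and do_something is the question's placeholder):

import itertools

with open("data.txt") as f:
    for line1, line2, line3 in itertools.zip_longest(*[iter(f)] * 3):
        # the final group is padded with None when the line count
        # is not a multiple of three
        do_something(line1)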

Doing it more explicitly and flexibly, this is a modification of Mats Ekberg's solution:

def groupsoflines(f, n):
    while True:
        group = []
        for i in range(n):
            try:
                group.append(next(f))
            except StopIteration:
                if group:
                    tofill = n - len(group)
                    yield group + [None] * tofill
                return
        yield group

for line1, line2, line3 in groupsoflines(inputfile, 3):
    ...

N.B. If this runs out of lines halfway through a group, it will fill in the gaps with None, so that you can still unpack it. So, if the number of lines in your file might not be a multiple of three, you'll need to check whether line2 and line3 are None.

Upvotes: 0

Stunner

Reputation: 12194

It sounds like you are trying to read from disk in parallel... that is really hard to do. All the solutions given to you are realistic and legitimate. You shouldn't let something put you off just because the code "looks ugly". The most important thing is how efficient/effective it is; then, if the code is messy, you can tidy it up, but don't look for a whole new method of doing something just because you don't like how one way of doing it looks in code.

As for running out of memory, you may want to check out pickle.

Upvotes: 0

neil

Reputation: 3635

Basically, your file is an iterator which yields its contents one line at a time. This turns your problem into how to yield several items at a time from an iterator. A solution to that is given in this question. Note that the function islice is in the itertools module, so you will have to import it from there; a sketch follows.
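
For example, a minimal sketch along those lines, using itertools.islice to take three lines per iteration (the helper name and filename are just for illustration, and do_something is the question's placeholder):

from itertools import islice

def groups_of(f, n):
    while True:
        chunk = list(islice(f, n))   # next n lines; fewer at the end of the file
        if not chunk:
            return
        yield chunk

with open("data.txt") as f:
    for group in groups_of(f, 3):
        do_something(group[0])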

Upvotes: 6

atcuno

Reputation: 543

why can't you just do:

ctr = 0
for line in file:
    if ctr == 0:
        ....
    elif ctr == 1:
        ....
    ctr = (ctr + 1) % 3   # wrap around after every group of three lines

if you find the if/elif construct ugly you could just create a hash table or list of function pointers and then do:

ctr = 0
for line in file:
    function_list[ctr](line)
    ctr = (ctr + 1) % 3

or something similar

Upvotes: 0

Tyler Eaves

Reputation: 13121

If it's XML, why not just use lxml?
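
For instance, a minimal sketch, assuming the data really is XML (the tag name and filename are placeholders, and do_something is the question's placeholder):

from lxml import etree

# iterparse streams the document instead of loading it all into memory
for event, elem in etree.iterparse("big.xml", tag="record"):
    do_something(elem)   # handle one complete <record> element
    elem.clear()         # free memory held by elements already processed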

Upvotes: 3

Eric Pauley

Reputation: 1779

If you want to be able to use this data over and over again, one approach might be to do this:

lines = []
for line in file_handle:
    lines.append(line)

This will give you a list of the lines, which you can then access by index. Also, when you say a HUGE file, the size is most likely not a problem: Python can process thousands of lines very quickly.

Upvotes: 0

Paul Schreiber

Reputation: 12599

Do you know something about the length of the lines or the format of the data? If so, you could read in the first n bytes (say 80*3 = 240) and then split them: f.read(240).split("\n")[0:3].

Upvotes: 0

Chris Morgan

Reputation: 90762

for i in file produces one str per iteration, so you can't just do for i, j, k in file and read it in batches of three (try a, b, c = 'bar' and a, b, c = 'too many characters' and look at the values of a, b and c to work out why you get "too many values to unpack").

It's not clear entirely what you mean, but if you're doing the same thing for each line and just want to stop at some point, then do it like this:

for line in file_handle:
    do_something(line)
    if some_condition:
        break  # Don't want to read anything else

(Also, don't use file as a variable name; you're shadowing a builtin.)

Upvotes: 1

Tyler Eaves

Reputation: 13121

If you're doing the same thing, why do you need to process multiple lines per iteration?

for line in file is your friend. It is in general much more efficient than manually reading the file, both in terms of I/O performance and memory.

Upvotes: 0
