LookIntoEast

Reputation: 8798

Different ways to read large data in python

I'm dealing with large data, so finding a good way to read it is really important. I'm just a little bit confused about the different reading methods:

1. f = gzip.open(file, 'r')
   for line in f:
       # process line
   # how can I process the nth line? can I?

2. f = gzip.open(file, 'r').readlines()
   # f is a list
   f[10000]
   # we can process the nth line

3. f = gzip.open(file, 'r')
   while True:
       linelist = list(islice(f, 4))  # needs: from itertools import islice

4. for line in fileinput.input():
       # process line

What's the difference between 2 and 3? I find their memory usage is the same. islice() also seems to need to load the whole file into memory first (and only then take it bit by bit). And I hear the 4th method is the least memory-consuming; it really processes the file bit by bit, right? For a 10 GB-scale file, which reading method would you recommend? Any thoughts/information are welcome. Thanks.

Edit: I think part of my problem is that I sometimes need to pick out specific lines. Say:

from itertools import islice

f1 = open(inputfile1, 'r')
while True:
    line_group1 = list(islice(f1, 3))
    if not line_group1:
        break
    # then process specific lines, say the second line
    processed_2nd_line = process(line_group1[1])  # process() stands for the real work
    if (....):
        LIST1.append(line_group1[0])
        LIST1.append(processed_2nd_line)
        LIST1.append(line_group1[2])

And then something like

with open(file, 'r') as f:
    for line in f:
        # process line

may not work, am I correct?

Upvotes: 6

Views: 4666

Answers (5)

Spencer Rathbun

Reputation: 14900

For reading specific lines in large files, you could use the linecache library.
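
A minimal sketch (the filename and line number here are just placeholders):

    import linecache

    # getline() is 1-based and returns '' if the line doesn't exist;
    # note that linecache reads the whole file into memory on first use
    line = linecache.getline('inputfile1', 10000)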

Upvotes: 0

Zach Kelling

Reputation: 53829

You can use enumerate to get an index as you iterate over something:

for idx, line in enumerate(f):
    # process line

Simple and memory efficient. You can actually use islice too, and iterate over it without converting to a list first:

from itertools import islice

for line in islice(f, start, stop):
    # process line

Neither approach will read the entire file into memory, nor create an intermediate list.

As for fileinput, it's just a helper class for quickly looping over standard input or a list of files; there is no memory-efficiency benefit to using it.
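
For example, a minimal sketch (the filenames are placeholders):

    import fileinput

    # loops over the lines of all listed files in sequence, one line at a time
    for line in fileinput.input(['file1.txt', 'file2.txt']):
        pass  # process line here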

As Srikar points out, using the with statement is the preferred way to open/close a file.

Upvotes: 1

zchenah

Reputation: 2108

You don't know how many lines a file has until you read it and count the newlines (\n) in it. In method 1, you can add enumerate to get the line number.
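
For instance, counting lines this way streams the file instead of loading it all at once (a sketch; the filename is a placeholder):

    import gzip

    # iterate line by line, holding only one line in memory at a time
    with gzip.open('inputfile1.gz', 'r') as f:
        num_lines = sum(1 for _ in f)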

Upvotes: 0

Srikar Appalaraju

Reputation: 73638

You forgot -

with open(...) as f:
    for line in f:
        <do something with line>

The with statement handles opening and closing the file, including if an exception is raised in the inner block. The for line in f treats the file object f as an iterable, which automatically uses buffered IO and memory management so you don't have to worry about large files.

Neither 2 nor 3 is advised for large files, as they read and load the entire file contents into memory before processing starts. To read large files you need to find ways not to read the entire file in one go.
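
If line-by-line iteration doesn't fit your processing, one alternative is reading fixed-size chunks (a minimal sketch; the 64 KB chunk size is an arbitrary assumption):

    # read the file piece by piece instead of all at once
    with open('inputfile1', 'rb') as f:
        while True:
            chunk = f.read(64 * 1024)
            if not chunk:
                break
            # process chunk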

There should be one -- and preferably only one -- obvious way to do it.

Upvotes: 6

Bashwork

Reputation: 1619

Check out David M. Beazley's talks on parsing large log files with generators (see the PDF of the presentation):

http://www.dabeaz.com/generators/
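
In that spirit, a minimal generator-pipeline sketch (the filename and the 'ERROR' filter are placeholder assumptions, not from the talk):

    import gzip

    def gen_lines(path):
        # yield one line at a time; only the current line is held in memory
        with gzip.open(path, 'rt') as f:
            for line in f:
                yield line

    def gen_filter(lines, substring):
        # lazily filter another iterator
        return (line for line in lines if substring in line)

    for line in gen_filter(gen_lines('big.log.gz'), 'ERROR'):
        print(line, end='')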

Upvotes: 5
