Reputation: 104082
With a text file, I can write this:
with open(path, 'r') as file:
    for line in file:
        # handle the line
This is equivalent to:
with open(path, 'r') as file:
    for line in iter(file.readline, ''):
        # handle the line
This idiom is documented in PEP 234, but I have failed to locate a similar idiom for binary files.
With a binary file, I can write this:
with open(path, 'rb') as file:
    while True:
        chunk = file.read(1024 * 64)
        if not chunk:
            break
        # handle the chunk
I have tried the same idiom that works with a text file:
def make_read(file, size):
    def read():
        return file.read(size)
    return read

with open(path, 'rb') as file:
    for chunk in iter(make_read(file, 1024 * 64), b''):
        # handle the chunk
Is this the idiomatic way to iterate over a binary file in Python?
Upvotes: 41
Views: 29073
Reputation: 104082
Nearly 10 years after this question was asked, Python 3.8 gained the := walrus operator, described in PEP 572.
To read a file in chunks idiomatically and expressively (with Python 3.8 or later) you can do:
while chunk := file.read(1024 * 64):
    process(chunk)
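For completeness, here is a self-contained sketch of the same pattern inside a with block (the path is a placeholder, and hashing merely stands in for whatever processing you need):

import hashlib

digest = hashlib.sha256()
with open('large_file.dat', 'rb') as file:   # placeholder path
    # an empty bytes object is falsy, so the loop stops cleanly at EOF
    while chunk := file.read(1024 * 64):
        digest.update(chunk)
print(digest.hexdigest())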
Upvotes: 12
Reputation: 42367
Try:
chunk_size = 4 * 1024 * 1024  # 4 MiB
with open('large_file.dat', 'rb') as f:
    for chunk in iter(lambda: f.read(chunk_size), b''):
        handle(chunk)
iter needs a function with zero arguments.

- A plain f.read would read the whole file, since the size parameter is missing.
- f.read(1024) means "call the function and pass its return value (the data loaded from the file) to iter", so iter does not get a function at all.
- (lambda: f.read(1234)) is a function that takes zero arguments (nothing between lambda and :) and calls f.read(1234).
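To make the distinction concrete, a small sketch (with a placeholder file name) showing all three cases:

with open('data.bin', 'rb') as f:               # placeholder path
    whole = next(iter(f.read, b''))             # legal, but f.read() with no size
                                                # returns the entire file as one chunk
    f.seek(0)                                   # rewind for the next attempt
    # iter(f.read(1024), b'')                   # TypeError: f.read(1024) is bytes,
                                                # not the callable iter() expects
    for chunk in iter(lambda: f.read(1024), b''):
        pass                                    # correct: the lambda defers each read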
Upvotes: 43
Reputation: 8077
The Pythonic way to read a binary file iteratively is to use the built-in function iter with two arguments together with the standard function functools.partial, as described in the Python library documentation:
iter(object[, sentinel])
Return an iterator object. The first argument is interpreted very differently depending on the presence of the second argument. Without a second argument, object must be a collection object which supports the iteration protocol (the __iter__() method), or it must support the sequence protocol (the __getitem__() method with integer arguments starting at 0). If it does not support either of those protocols, TypeError is raised. If the second argument, sentinel, is given, then object must be a callable object. The iterator created in this case will call object with no arguments for each call to its __next__() method; if the value returned is equal to sentinel, StopIteration will be raised, otherwise the value will be returned. See also Iterator Types.

One useful application of the second form of iter() is to build a block-reader. For example, reading fixed-width blocks from a binary database file until the end of file is reached:

from functools import partial

with open('mydata.db', 'rb') as f:
    for block in iter(partial(f.read, 64), b''):
        process_block(block)
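A sketch of how the same pattern looks with a larger block size and a stand-in for the processing step (the path and the byte counting here are placeholders, not from the documentation):

from functools import partial

total = 0
with open('large_file.dat', 'rb') as f:          # placeholder path
    # partial(f.read, 1024 * 64) is the zero-argument callable iter() needs;
    # it behaves like lambda: f.read(1024 * 64)
    for block in iter(partial(f.read, 1024 * 64), b''):
        total += len(block)                      # stand-in for real processing
print(total, 'bytes read')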
Upvotes: 14
Reputation: 198777
I don't know of any built-in way to do this, but a wrapper function is easy enough to write:
def read_in_chunks(infile, chunk_size=1024 * 64):
    while True:
        chunk = infile.read(chunk_size)
        if chunk:
            yield chunk
        else:
            # The chunk was empty, which means we're at the
            # end of the file
            return
Then at the interactive prompt:
>>> from chunks import read_in_chunks
>>> infile = open('quicklisp.lisp', 'rb')
>>> for chunk in read_in_chunks(infile):
...     print(chunk)
...
<contents of quicklisp.lisp in chunks>
Of course, you can easily adapt this to use a with block:
with open('quicklisp.lisp', 'rb') as infile:
    for chunk in read_in_chunks(infile):
        print(chunk)
And you can eliminate the if statement like this:
def read_in_chunks(infile, chunk_size=1024 * 64):
    chunk = infile.read(chunk_size)
    while chunk:
        yield chunk
        chunk = infile.read(chunk_size)
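As a side note, on Python 3.8 and later the same generator can be written even more compactly with the walrus operator mentioned in the answer above (a sketch):

def read_in_chunks(infile, chunk_size=1024 * 64):
    # the assignment expression folds the read into the loop condition (3.8+)
    while chunk := infile.read(chunk_size):
        yield chunk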
Upvotes: 25