traintraveler

Reputation: 381

Using yield for file handling?

I want to know whether it is good practice to use yield for file handling.

Currently I use a function f() which iterates over a list of file objects and yields each file object.


import glob
import os

files = []
for myfile in glob.glob(os.path.join(path, '*.ext')):
    f = open(myfile, 'r')
    files.append(f)

def f():
    for f_obj in files:
        yield f_obj
        f_obj.close()

for f_obj in f():
    # do some processing on the file

Is this a proper way of handling files in Python?

Upvotes: 4

Views: 4199

Answers (1)

Evgeny

Reputation: 4551

If you are reading a text file (e.g. a CSV), yield is quite appropriate for turning your source into a generator. A small problem with your code is that f_obj is very generic, so you don't accomplish much with the f() function.

A simple example with yield is reading a CSV file: lines() turns your filename into a stream of individual lines. The idea is similar to yours (the aim is to keep less data in memory).

import csv
from pathlib import Path

def lines(path):
    # yield rows from a CSV file one at a time
    with open(path, 'r', newline='') as f:
        for line in csv.reader(f):
            yield line

# create a small sample file
Path("file.txt").write_text("a,1,2\nb,20,40")

gen = lines("file.txt")
print(next(gen))  # ['a', '1', '2']
print(next(gen))  # ['b', '20', '40']
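For processing the whole file you would normally consume the generator with a for loop rather than next(); a minimal sketch, assuming the same file.txt as above:

# iterate over all rows without loading the whole file into memory
for row in lines("file.txt"):
    print(row)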

Presented in the generic way you originally show it, the use of yield lacks a clear intent, in my opinion, and leaves a reader of the code wondering why it is being done, but that is subjective.
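If you do want a generator over file objects, one way to make the intent clearer is to open each file lazily inside the generator, so each file is closed as soon as the consumer moves on (or the generator is closed). A minimal sketch; open_files is a hypothetical name, not from your code:

import glob
import os

def open_files(path):
    # hypothetical helper: open each matching file lazily,
    # yield it, and let the with block close it afterwards
    for name in glob.glob(os.path.join(path, '*.ext')):
        with open(name, 'r') as f_obj:
            # the file is also closed if the generator is closed early
            yield f_obj

path = '.'  # example directory; adjust as needed
for f_obj in open_files(path):
    pass  # do some processing on the file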

Upvotes: 1
