moonify

Reputation: 15

How to read n lines from multiple files until the end of the files

I am writing a virtual memory simulator and I have run into a problem. I need to read n (8) lines at a time from k (4) files, for example: I read the first 8 lines of file 1, file 2, file 3 and file 4, then I read lines 9 - 17 from each file again, and so on until every file runs out of lines.

I have no problem with the input of the files, and I have already written this code.

def rr_process(quantum, file, fline):
    global rr_list  # List that collects the lines read so far
    condition = file_len(file)  # Returns the total number of lines in the file
    with open(file) as fp:
        line = fp.readlines()  # Save all the lines of the file in a list
        for i in range(fline, fline + quantum):  # for i in range(NewStartLine, NewStartLine + n_lines)
            if i <= condition - 1:
                sline = line[i].rstrip()  # Remove \n from the line
                rr_list.append(sline)  # Append the line to the list
            else:
                break

operation = concat_count // (n_proc * quantum)  # total_lines // (k_files * n_lines)

for i in range(0, operation):
    for fname in process:  # Iterate over the 4 files
        rr_process(quantum, fname, fline)  # Call the line-reading function
    fline = fline + quantum + 1  # New start line number: 0 - 9 - 17 ...

I have had no success at all: I need to read 50k lines, but my program only reads 44446. What is the bug in the code, or what is a better way to handle this? Thanks guys!

Upvotes: 0

Views: 154

Answers (2)

chepner

Reputation: 531205

This can be reduced to a few lines of code using the grouper and roundrobin recipes provided in the documentation for the itertools module.

import contextlib
from itertools import zip_longest, cycle, islice, chain

# Define grouper() and roundrobin() here

with contextlib.ExitStack() as stack:
    # Open each file *once*; the exit stack will make sure they get closed
    files = [stack.enter_context(open(fname)) for fname in process]
    # Instead of iterating over each file line by line, we'll iterate
    # over them in 8-line batches.
    groups = [grouper(f, 8) for f in files]
    # Interleave the groups by taking an 8-line group from one file,
    # then another, etc.
    interleaved = roundrobin(*groups)
    # *Then* flatten them into a stream of single lines
    flattened = chain.from_iterable(interleaved)
    # Filter out the None padding added by grouper() and
    # read the lines into a list
    lines = list(filter(lambda x: x is not None, flattened))

Note that until you call list, you don't actually read anything from the files; you are just building up a functional pipeline that will process the input on demand.


For reference, these are the definitions of grouper and roundrobin copied from the documentation.

# From itertools documentation
def grouper(iterable, n, fillvalue=None):
    "Collect data into fixed-length chunks or blocks"
    # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)


# From itertools documentation
def roundrobin(*iterables):
    "roundrobin('ABC', 'D', 'EF') --> A D E B F C"
    # Recipe credited to George Sakkis
    num_active = len(iterables)
    nexts = cycle(iter(it).__next__ for it in iterables)
    while num_active:
        try:
            for next in nexts:
                yield next()
        except StopIteration:
            # Remove the iterator we just exhausted from the cycle.
            num_active -= 1
            nexts = cycle(islice(nexts, num_active))
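
As a quick sanity check of the two recipes (the toy lists below are stand-ins for open file handles, which also iterate line by line; they are not part of the original code), note how the pipeline only produces lines once something drains it:

# Minimal sketch using the imports and recipe definitions above; the toy
# iterables are hypothetical stand-ins for file objects.
a = iter(['a1\n', 'a2\n', 'a3\n'])  # "file" with 3 lines
b = iter(['b1\n', 'b2\n'])          # "file" with 2 lines

groups = [grouper(f, 2) for f in (a, b)]    # 2-line batches per "file"
interleaved = roundrobin(*groups)           # alternate batches between "files"
flattened = chain.from_iterable(interleaved)

# Nothing has been consumed yet; list() pulls the lines through the pipeline.
print(list(filter(lambda x: x is not None, flattened)))
# ['a1\n', 'a2\n', 'b1\n', 'b2\n', 'a3\n']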

Upvotes: 1

Sam Mason

Reputation: 16184

I ended up with something very similar to chepner...

first we define a simple generator function that iterates over the lines in a file, grouping them into blocks:

def read_blocks(path, nlines):
    with open(path) as fd:
        out = []
        for line in fd:
            out.append(line)
            if len(out) == nlines:
                yield out
                out = []
        if out:
            yield out
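
as a quick check (the file name and contents below are made up just for this demo, not part of the original answer), seven lines come back as blocks of 3, 3 and 1 lines:

# hypothetical demo file, written only to exercise read_blocks
with open('demo.txt', 'w') as fd:
    fd.writelines(f'line {i}\n' for i in range(7))

print([len(block) for block in read_blocks('demo.txt', 3)])  # [3, 3, 1]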

I then define a function that interleaves the output of a set of iterators (i.e. the same as roundrobin from chepner's answer; I find the version in the itertools docs somewhat opaque):

def interleave(*iterables):
    iterables = [iter(it) for it in iterables]
    i = 0
    while iterables:
        try:
            yield next(iterables[i])
        except StopIteration:
            del iterables[i]
        else:
            i += 1
        if i >= len(iterables):
            i = 0
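
a quick check on plain lists (made-up data, nothing to do with the files above) shows the round-robin order:

print(list(interleave([1, 2], [3], [4, 5, 6])))
# [1, 3, 4, 2, 5, 6] -- one item from each remaining iterable in turn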

we then define a function to put the above together:

def read_files_in_blocks(filenames, nlines):
    return interleave(*(read_blocks(path, nlines) for path in filenames))

and call it with some dummy data:

filenames = ['foo.txt', 'bar.txt', 'baz.txt']

for block in read_files_in_blocks(filenames, 5):
    for line in block:
        print(line)

Upvotes: 0
