belvoir

Reputation: 345

Python synchronised reading of sorted files

I have two groups of files that contain data in CSV format with a common key (Timestamp) - I need to walk through all the records chronologically.

What is the best approach?

I will be running lots of iterations of the post-processing side of things. Any thoughts or suggestions? I am using Python.

Upvotes: 4

Views: 427

Answers (5)

S.Lott

Reputation: 391872

This is similar to a relational join. Since your timestamps don't have to match, it's called a non-equijoin.

Sort-Merge is one of several popular algorithms. For non-equijoins, it works well. I think this is what you're calling "pre-merge". I don't know what you mean by "merge in real time", but I suspect it's still a simple sort-merge, which is a fine technique, heavily used by real databases.
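For instance, a sort-merge over two already-sorted CSV streams can be written with heapq.merge; the file names and the assumption that the timestamp is the first column below are illustrative only:

import csv
import heapq

def keyed_rows(filename):
    # Yield (timestamp, row) pairs from a CSV file already sorted by timestamp;
    # assumes the timestamp is the first column.
    with open(filename) as f:
        for row in csv.reader(f):
            yield (row[0], row)

# heapq.merge consumes the sorted streams lazily and yields them in key order,
# which is exactly a sort-merge over the two groups of files.
for timestamp, row in heapq.merge(keyed_rows("events.csv"),
                                  keyed_rows("environment.csv")):
    pass  # process each record chronologically here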

Nested Loops can also work. In this case, you read the smaller table in the outer loop. In the inner loop you find all of the "matching" rows from the larger table. This is effectively a sort-merge, but with an assumption that there will be multiple rows from the big table that will match the small table.

This, BTW, will allow you to more properly assign meaning to the relationship between Event Data and Environmental Data. Rather than reading the result of a massive sort merge and trying to determine which kind of record you've got, the nested loops handle that well.

Also, you can do "lookups" into the smaller table while reading the larger table.

This is hard when you're doing non-equal comparisons because you don't have a proper key to do a simple retrieval from a simple dict. However, you can easily extend dict (override __contains__ and __getitem__) to do range comparisons on a key instead of simple equality tests.
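A minimal sketch of that idea, using half-open (start, end) timestamp ranges as keys (the ranges and values below are made-up examples):

class RangeDict(dict):
    # A dict keyed by (start, end) ranges: lookups succeed when the requested
    # timestamp falls inside a stored range rather than matching a key exactly.

    def __contains__(self, timestamp):
        return any(start <= timestamp < end for start, end in self)

    def __getitem__(self, timestamp):
        for (start, end), value in self.items():
            if start <= timestamp < end:
                return value
        raise KeyError(timestamp)

env = RangeDict()
env[("2010-01-01 00:00:00", "2010-01-01 01:00:00")] = {"temp": 21.5}

if "2010-01-01 00:30:00" in env:
    print(env["2010-01-01 00:30:00"])

The lookups here are a linear scan; for a large environmental table a sorted list of range starts plus bisect would be faster, but the dict subclass keeps the calling code simple.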

Upvotes: 1

pixelbeat

Reputation: 31728

"YYYY-MM-DD HH:MM:SS" can be sorted with a simple ascii compare. How about reusing external merge logic? If the first field is the key then:

import os

for entry in os.popen("sort -m -t, -k1,1 file1 file2"):
    process(entry)

Upvotes: 2

Michał Marczyk

Reputation: 84341

You could read from the files in chunks of, say, 10000 records (or whatever number further profiling tells you to be optimal) and merge on the fly. Possibly using a custom class to encapsulate the IO; the actual records could then be accessed through the generator protocol (__iter__ + next).

This would be memory friendly, probably very good in terms of total time to complete the operation and would enable you to produce output incrementally.

A sketch:

class Foo(object):

    def __init__(self, env_filenames=[], event_filenames=[]):
        self._cache = []
        # open the files etc.

    def __iter__(self):
        return self

    def next(self):
        if not self._cache:
            # take care of reading more records into self._cache here;
            # raise StopIteration once both groups are exhausted
            raise StopIteration
        # return the first record and pop it from the cache
        return self._cache.pop(0)

    # ... other stuff you need ...
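Hypothetical usage, relying on the __iter__/next protocol above (file names made up):

merged = Foo(env_filenames=["env_1.csv"], event_filenames=["events_1.csv"])
for record in merged:
    handle(record)  # handle() stands in for your post-processing step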

Upvotes: 0

jspcal

Reputation: 51914

I'm thinking that importing it into a DB (MySQL, SQLite, etc.) will give better performance than merging it in a script. The DB typically has optimized routines for loading CSV, and the join will probably be as fast as or much faster than merging two dicts (one being very large) in Python.
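For what it's worth, a minimal SQLite sketch using Python's sqlite3 module (the two-column schema, table names and file names are assumptions):

import csv
import sqlite3

conn = sqlite3.connect("merged.db")  # or ":memory:" for throwaway runs
conn.execute("CREATE TABLE IF NOT EXISTS events (ts TEXT, payload TEXT)")
conn.execute("CREATE TABLE IF NOT EXISTS environment (ts TEXT, payload TEXT)")

def load(table, filename):
    # Assumes the timestamp is the first CSV column; the remaining columns are
    # stored as a single text payload to keep the sketch short.
    with open(filename) as f:
        rows = ((row[0], ",".join(row[1:])) for row in csv.reader(f))
        conn.executemany("INSERT INTO %s VALUES (?, ?)" % table, rows)

load("events", "events.csv")
load("environment", "environment.csv")
conn.execute("CREATE INDEX IF NOT EXISTS idx_events_ts ON events (ts)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_env_ts ON environment (ts)")
conn.commit()

query = """
    SELECT ts, 'event' AS source, payload FROM events
    UNION ALL
    SELECT ts, 'environment' AS source, payload FROM environment
    ORDER BY ts
"""
for ts, source, payload in conn.execute(query):
    pass  # process each record chronologically here

Loading is a one-off cost; the many post-processing iterations then just rerun the query.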

Upvotes: 2

inspectorG4dget

Reputation: 114005

I would suggest pre-merge.

Reading a file takes a lot of processing time, and reading two files takes twice as much. Since your program will be dealing with a large input (lots of files, especially in Group A), I think it would be better to get it over with in one file read and have all your relevant data in that one file. It would also reduce the number of variables and read statements you will need.

This will improve the runtime of your algorithm, and I think that's a good enough reason in this scenario to use this approach.
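A one-off pre-merge could be as simple as this, assuming each input file is already sorted by its leading fixed-width "YYYY-MM-DD HH:MM:SS" timestamp (file names are placeholders):

import heapq

inputs = ["events_1.csv", "events_2.csv", "environment_1.csv"]

with open("combined.csv", "w") as out:
    files = [open(name) for name in inputs]
    # With a fixed-width timestamp in the first column, comparing whole lines
    # lexicographically preserves chronological order, so heapq.merge can merge
    # the already-sorted files directly.
    out.writelines(heapq.merge(*files))
    for f in files:
        f.close()

Every post-processing iteration then reads combined.csv just once.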

Hope this helps

Upvotes: 0
