Lou

Reputation: 4474

Algorithm to quickly traverse a large binary file

I have a problem to solve involving reading large files, and I have a general idea how to approach it, but I would like to see if there might be a better way.

The problem is the following: I have several huge disk files (64GB each) filled with records of 2.5KB each (around 25,000,000 records in total). Each record has, among other fields, a timestamp, and an IsValid flag indicating whether the timestamp is valid or not. When the user enters a timespan, I need to return all records for which the timestamp is within the specified range.

The layout of the data is such that, for all records marked as "Valid", the timestamp monotonically increases. Invalid records should not be considered at all. So, this is what the file generally looks like (although the ranges are far larger):

a[0]  = { Time=11, IsValid = true };
a[1]  = { Time=12, IsValid = true };
a[2]  = { Time=13, IsValid = true };
a[3]  = { Time=401, IsValid = false }; // <-- should be ignored
a[4]  = { Time=570, IsValid = false }; // <-- should be ignored
a[5]  = { Time=16, IsValid = true }; 

a[6]  = { Time=23, IsValid = true };  // <-- time-to-index offset changed 
a[7]  = { Time=24, IsValid = true };
a[8]  = { Time=25, IsValid = true };
a[9]  = { Time=26, IsValid = true };

a[10] = { Time=40, IsValid = true };  // <-- time-to-index offset changed 
a[11] = { Time=41, IsValid = true };
a[12] = { Time=700, IsValid = false };  // <-- should be ignored 
a[13] = { Time=43, IsValid = true };

If the offset between a timestamp and its index were constant, seeking the first record would be an O(1) operation (I would simply jump to the index). Since it isn't, I am looking for a different way to (quickly) find this information.

One way might be a modified binary search, but I am not completely sure how to handle larger blocks of invalid records. I suppose I could also create an "index" to speed up lookups, but since there will be many large files like this, and the extracted data size will be much smaller than the entire file, I don't want to traverse each of these files, record by record, to generate the index. I am wondering whether a binary search could also help while building the index.

Not to mention that I'm not sure what would be the best structure for the index. Balanced binary tree?
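Since every approach below relies on random access to the n-th record, here is a minimal sketch of that access layer in Python; the record size and the field offsets are assumptions for illustration, not the actual on-disk format:

import struct

RECORD_SIZE = 2560   # assumed: 2.5 KB fixed-size records
# Assumed layout: an 8-byte little-endian timestamp followed by a 1-byte IsValid flag.

def read_record(f, index):
    """Seek to the index-th record and decode only the two fields we need."""
    f.seek(index * RECORD_SIZE)
    header = f.read(9)
    time, = struct.unpack("<q", header[:8])
    is_valid = header[8] != 0
    return time, is_valid

def record_count(f):
    """Number of fixed-size records in the file."""
    f.seek(0, 2)                      # seek to end of file
    return f.tell() // RECORD_SIZE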

Upvotes: 2

Views: 435

Answers (4)

Joni

Reputation: 111339

It does sound like a modified binary search could be a good solution. If large blocks of invalid records are a problem, you can handle them by skipping blocks of exponentially increasing size, e.g. 1, 2, 4, 8, .... If this makes you overshoot the end of the current bracket, step back to the end of the bracket and skip backwards in steps of 1, 2, 4, 8, ... to find a valid record reasonably close to the center.
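A rough sketch of that galloping step in Python (hedged: read(i) is assumed to return (time, is_valid) for the i-th record, e.g. via the read_record helper sketched in the question):

def find_valid_near(read, lo, hi, mid):
    """Find a valid record close to mid inside [lo, hi] by skipping forward
    in exponentially growing steps, then backwards from hi the same way."""
    step, pos = 1, mid
    while pos <= hi:                  # gallop forward from the midpoint
        if read(pos)[1]:
            return pos
        pos, step = mid + step, step * 2
    step, pos = 1, hi
    while pos >= lo:                  # overshot: gallop backwards from the end
        if read(pos)[1]:
            return pos
        pos, step = hi - step, step * 2
    return None                       # no valid record in this bracket

The index returned here is then compared against the target time and the binary-search bracket is narrowed as usual.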

Upvotes: 1

Ankush

Reputation: 2554

You can use a modified binary search. The idea is to do the usual binary search to figure out the lower bound and the upper bound, and then return the in-between entries which are valid.

The modification lies in the case where the current entry is invalid. In that case you have to figure out the two nearest points on either side that hold a valid entry. E.g. if the mid point is 3,

a[0]  = { Time=11, IsValid = true };
a[1]  = { Time=12, IsValid = true };
a[2]  = { Time=401, IsValid = false };
a[3]  = { Time=570, IsValid = false }; // <-- Mid point.
a[4]  = { Time=571, IsValid = false };
a[5]  = { Time=16, IsValid = true }; 
a[6]  = { Time=23, IsValid = true };

In the above case the algorithm will return the two points a[1] and a[5]. Now the algorithm will decide whether to binary search the lower half or the upper half.
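A sketch of a full lower-bound search built on that idea (hedged: read(i) is again assumed to return (time, is_valid); the outward walk from the mid point is the simple linear variant, which the exponential skipping from the other answer would speed up):

def lower_bound(read, count, target_time):
    """Index of the first valid record whose time is >= target_time, or None.
    Relies only on the fact that times of *valid* records increase with index."""
    lo, hi = 0, count - 1
    answer = None
    while lo <= hi:
        mid = (lo + hi) // 2
        probe = None
        for offset in range(hi - lo + 1):           # walk outwards from mid
            if mid - offset >= lo and read(mid - offset)[1]:
                probe = mid - offset
                break
            if mid + offset <= hi and read(mid + offset)[1]:
                probe = mid + offset
                break
        if probe is None:                           # bracket holds no valid record
            break
        if read(probe)[0] >= target_time:
            answer, hi = probe, probe - 1           # candidate; keep searching left
        else:
            lo = probe + 1
    return answer

The upper end of the requested timespan is found symmetrically, and the records between the two indices are then read sequentially, skipping the invalid ones.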

Upvotes: 2

Debobroto Das

Reputation: 862

You may bring some randomness into the binary search. In practice, randomized algorithms perform well on large data sets.
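One way to read this suggestion (my interpretation, not spelled out in the answer): when the mid point is invalid, probe random indices inside the current bracket instead of walking outwards, which avoids pathological runs of invalid records around the mid point. A sketch, with the same assumed read(i) helper:

import random

def random_valid_probe(read, lo, hi, tries=64):
    """Try random indices in [lo, hi] until a valid record turns up.
    If invalid records are rare this usually succeeds in a probe or two."""
    for _ in range(tries):
        i = random.randint(lo, hi)
        if read(i)[1]:
            return i
    return None   # fall back to a deterministic scan of the bracket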

Upvotes: 1

Jasen

Reputation: 12422

It's times like this that using someone else's database code starts to look like a good idea.

Anyway, you need to fumble about until you find the start of the valid data and then read until you hit the end.

Start by taking pot shots and moving the markers accordingly, the same as a normal binary search, except that when you hit an invalid record you begin a search for a valid record; just reading forward from the guess is as good as anything.

It's probably worthwhile running a maintenance task over the files to replace the invalid timestamps with valid ones, or perhaps maintaining an external index.
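A sketch of what a cheap external index could look like (hedged: the same assumed read(i) helper as above; the stride is a tuning knob, and only one record per stride is probed, so building it costs far fewer seeks than a full record-by-record scan):

import bisect

def build_sparse_index(read, count, stride=4096):
    """Sample every stride-th record; keep (time, index) for the valid ones.
    Because valid times increase with index, the samples are sorted by time."""
    samples = []
    for i in range(0, count, stride):
        time, valid = read(i)
        if valid:
            samples.append((time, i))
    return samples

def start_index(samples, target_time):
    """Largest sampled index whose time is still below target_time; the real
    search (binary or sequential) then starts from there instead of record 0."""
    pos = bisect.bisect_left(samples, (target_time, -1))
    return samples[pos - 1][1] if pos > 0 else 0

The small in-memory sample list narrows the search to one stride-sized window of the file, inside which any of the binary-search variants above can finish the job.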

Upvotes: 1
