Reputation: 2609
I have a few thousand very large radio-telemetry array fields, all covering the same area, in a database. The georeference of the pixels is the same for all of the array fields. An array can only be loaded into memory in an all-or-nothing fashion.
I want to extract the pixel for a specific geo-coordinate from every array field. Currently I compute the index of the pixel for that geo-coordinate and then load all the array fields from the database into memory. However, that is very IO-intensive and overloads our systems.
I'd imagine the following instead: I save the arrays to disk, then sequentially open each file and seek to the byte position corresponding to the pixel. I imagine this would be far less wasteful and much faster than loading everything into memory.
Is seeking to a position considered a fast operation, or is this not something one would do?
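To illustrate, here is a minimal sketch of the seek approach in Python. The dtype, array shape, and raw C-ordered file layout are assumptions for illustration; the actual arrays may be stored differently.

```python
import numpy as np

# Assumed for illustration: all arrays share this dtype and shape,
# and are written to disk as raw C-ordered binary.
DTYPE = np.float32
SHAPE = (1024, 1024)  # (rows, cols)

def write_array(path, arr):
    # Save as raw binary so the pixel offset is simple arithmetic.
    arr.astype(DTYPE).tofile(path)

def read_pixel(path, row, col):
    # Byte offset of one pixel in a C-ordered 2-D array.
    itemsize = np.dtype(DTYPE).itemsize
    offset = (row * SHAPE[1] + col) * itemsize
    with open(path, "rb") as f:
        f.seek(offset)
        return np.frombuffer(f.read(itemsize), dtype=DTYPE)[0]
```

Each call reads only `itemsize` bytes per file instead of the whole array.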
Upvotes: 0
Views: 125
Reputation: 28893
The time it takes for a seek operation would be measured in low milliseconds, probably less than 10 in most cases. So that wouldn't be a bottleneck.
However, if you have to retrieve all the records from the database and write them to disk first anyway, you may end up with roughly the same IO load, or even more: writing the data out to a file costs at least as much IO as reading it into memory.
Time for a small-ish experiment :) Try it with a few arrays and time the performance; then you can do the math to see how it would scale.
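Such an experiment could be sketched like this in Python (the dtype, item size, and raw-binary file layout are assumptions; adapt them to your actual data):

```python
import time
import numpy as np

def time_full_load(paths, dtype=np.float32):
    """Time loading every array completely (the current approach)."""
    t0 = time.perf_counter()
    for p in paths:
        np.fromfile(p, dtype=dtype)  # read the whole array into memory
    return time.perf_counter() - t0

def time_seek_read(paths, byte_offset, itemsize=4):
    """Time opening each file, seeking to one pixel, and reading it."""
    t0 = time.perf_counter()
    for p in paths:
        with open(p, "rb") as f:
            f.seek(byte_offset)
            f.read(itemsize)  # read a single pixel's bytes
    return time.perf_counter() - t0
```

Run both over the same set of files and compare; multiplying by the number of array fields gives a rough estimate of how each approach scales.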
Upvotes: 2