Reputation: 3184
I would like to perform a SELECT query that returns a lot of data from a database. Said database forces me to split the query into chunks of 10000 results using OFFSET+LIMIT. While I iterate over these chunks, someone else updates the database, which can in some cases make it return the same row multiple times. I currently handle that with a post-processing filter that removes rows with duplicate ids, but I wonder if there is a way to build a set of SQL queries that gives me a coherent view of the database across multiple SELECT statements, i.e. something like BEGIN+COMMIT but for SELECT.
Did I mention that I am not an SQL person?
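For context, what I do today looks roughly like this (my_table, id and the chunk size of 10000 are just placeholders standing in for my real schema); the duplicate filtering then happens afterwards in application code:

SELECT * FROM my_table ORDER BY id LIMIT 10000 OFFSET 0;
SELECT * FROM my_table ORDER BY id LIMIT 10000 OFFSET 10000;
SELECT * FROM my_table ORDER BY id LIMIT 10000 OFFSET 20000;
-- ...and so on, then drop any row whose id was already seen in an earlier chunk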
Upvotes: 6
Views: 240
Reputation: 2877
Can you not:
Order them by ID, get the first 10000, and note the LAST id.
The second time around, filter on ids greater than LAST to get the next 10000.
Do the same until you're done:
SELECT TOP (10000) * FROM Table ORDER BY id
get the last id
SELECT TOP (10000) * FROM Table WHERE id > LAST ORDER BY id
LAST of course substituted by a number
Very low level, but it should solve the problem and eliminate duplicates.
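Since the question mentions OFFSET+LIMIT rather than TOP, the same keyset idea in that dialect would look roughly like this (my_table, id and the 10000 chunk size are placeholders, adjust to your schema):

SELECT * FROM my_table ORDER BY id LIMIT 10000;
-- remember the largest id returned, call it LAST
SELECT * FROM my_table WHERE id > LAST ORDER BY id LIMIT 10000;
-- repeat, updating LAST each time, until a query returns fewer than 10000 rows

Because each chunk starts strictly after the last id already seen, the same id cannot show up in two chunks, so the post-processing dedup step is no longer needed.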
Upvotes: 1