Reputation: 1130
When I run the following code against a large PostgreSQL table, ExecuteReader blocks until all the data has been fetched.
NpgsqlCommand cmd = new NpgsqlCommand(strQuery, _conn);
NpgsqlDataReader reader = cmd.ExecuteReader(); // <-- takes 30 seconds
How can I get it to behave such that it doesn't prefetch all the data? I want to step through the resultset row by row without having it fetch all 15 GB into memory at once.
I know there were issues with this sort of thing in Npgsql 1.x but I'm on 2.0. This is against a PostgreSQL 8.3 database on XP/Vista/7. I also don't have any funky "force Npgsql to prefetch" stuff in my connection string. I'm at a complete loss for why this is happening.
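For reference, here is the full pattern I'm trying to use (same strQuery and _conn as above; the row-processing body is elided):

```csharp
// What I expect to work: stream rows one at a time instead of
// buffering all 15 GB client-side before the first Read() returns.
using (NpgsqlCommand cmd = new NpgsqlCommand(strQuery, _conn))
using (NpgsqlDataReader reader = cmd.ExecuteReader()) // <-- this is the call that blocks
{
    while (reader.Read())
    {
        // process the current row here; memory use should stay flat
    }
}
```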
Upvotes: 2
Views: 1709
Reputation: 2016
Which Npgsql version are you using? We added support for large tables a while ago. In fact, PostgreSQL protocol version 3 supports paging through large resultsets without using cursors; unfortunately we haven't implemented that yet. Sorry for that.
Please, give it a try with Npgsql 2.0.9 and let me know if you still have problems.
Upvotes: 1
Reputation: 133712
I'm surprised the driver doesn't provide a way to do this, but you could manually execute the SQL statements to declare a cursor, then fetch from it in batches, i.e. (and this code is very dubious as I'm not a C# guy):
// Cursors in PostgreSQL only live inside a transaction block.
using (NpgsqlTransaction tx = _conn.BeginTransaction())
{
    new NpgsqlCommand("DECLARE cur_data NO SCROLL CURSOR FOR " + strQuery,
        _conn).ExecuteNonQuery();
    int rows;
    do {
        rows = 0;
        using (NpgsqlDataReader reader =
            new NpgsqlCommand("FETCH 100 FROM cur_data", _conn).ExecuteReader())
        {
            while (reader.Read())
                rows++; // read the current row's data here
        }
    } while (rows > 0); // an empty FETCH means the cursor is exhausted
    new NpgsqlCommand("CLOSE cur_data", _conn).ExecuteNonQuery();
    tx.Commit();
}
Note that the cursor_tuple_fraction setting may cause a different plan to be used when executing a query via a cursor as opposed to in immediate mode. You may want to run "SET cursor_tuple_fraction = 1.0" just before declaring the cursor, since you are actually intending to fetch all of the cursor's output.

Upvotes: 3
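A sketch of where that SET would go, assuming the same _conn and cursor name as above (this is untested, same caveat as before):

```csharp
// Tell the planner we intend to fetch the entire result set through the
// cursor, so it optimizes for total runtime rather than first-row latency.
new NpgsqlCommand("SET cursor_tuple_fraction = 1.0", _conn).ExecuteNonQuery();
new NpgsqlCommand("DECLARE cur_data NO SCROLL CURSOR FOR " + strQuery,
    _conn).ExecuteNonQuery();
```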