Reputation: 11
I have a SqlDataAdapter
query in a C# Windows application. I need to know the number of returned rows before I fill the DataGridView,
because when the result is around 1 million rows the Fill method takes 9 seconds.
I need a way to get the count without filling all the data into a DataTable; then I can use paging to display the data, like this:
DataTable dt = new DataTable();
SqlDataAdapter query = new Classes.order().GetAllBills();
query.Fill(dt);                                    // fills ALL rows just to get the count (slow)
MaxCount = dt.Rows.Count;
int maxPag = (MaxCount / 25) + 1;
lblMaxCount.Text = MaxCount.ToString();
lblNumOfPagFromAll.Text = (CounterPaging / 25) + "/" + maxPag;
dt = new DataTable();
query.Fill(CounterPaging, maxRecord, dt);          // fills only the current page
dataGridView1.DataSource = dt;
Upvotes: 0
Views: 2733
Reputation: 1845
An SqlDataAdapter cannot return the number of rows before all rows have been fetched, simply because the underlying database driver will not provide this information. So you have to count the rows while or after fetching them.
Alternatively, you can query the database for the number of results with SELECT COUNT(*), as Progman suggested. But this has a drawback: the data can change between the two queries, so SELECT COUNT(*) may report an entirely different number of rows than the SqlDataAdapter fetches afterwards.
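For reference, getting the count up front could look like the sketch below. The connection string and the Bills table name are placeholders, not taken from the question.

using System.Data.SqlClient;

// Sketch: get the row count with a separate query.
// "Bills" is an assumed table name for illustration.
int GetBillCount(string connectionString)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("SELECT COUNT(*) FROM Bills", connection))
    {
        connection.Open();
        // ExecuteScalar returns the first column of the first row.
        return (int)command.ExecuteScalar();
    }
}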
Another solution is to let the database handle the paging. In T-SQL, SELECT TOP n ... will do the job.
The drawback is that you cannot retrieve the total number of pages this way. But it is easy to check whether there is more data: simply select one row more than the page size. If you get n+1 rows, there is another page.
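A minimal sketch of that idea, using keyset paging on an assumed key column; the Bills table and BillId column are invented, and @Take is the page size plus one:

using System.Data;
using System.Data.SqlClient;

// Sketch: page with TOP and the "one extra row" trick.
// Table and column names (Bills, BillId) are assumptions for illustration.
DataTable LoadPage(string connectionString, int lastSeenId, int pageSize, out bool hasMorePages)
{
    var dt = new DataTable();
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        @"SELECT TOP (@Take) * FROM Bills
          WHERE BillId > @LastSeenId
          ORDER BY BillId", connection))
    {
        command.Parameters.AddWithValue("@Take", pageSize + 1);   // one row more than the page size
        command.Parameters.AddWithValue("@LastSeenId", lastSeenId);
        new SqlDataAdapter(command).Fill(dt);                     // Fill opens and closes the connection itself
    }
    hasMorePages = dt.Rows.Count > pageSize;   // the extra row means there is another page
    if (hasMorePages)
        dt.Rows.RemoveAt(pageSize);            // drop the extra row before displaying
    return dt;
}

Filtering on the last seen key (WHERE BillId > @LastSeenId) lets the database skip straight to the requested page instead of re-reading all preceding rows, which Fill(startRecord, maxRecords, table) cannot avoid.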
DataTable = poor performance
One last note: if performance matters, get rid of the data tables and adapters. They are not intended to be used for significantly more data than fits on the screen.
Especially if you have many value types like int or DateTime, the effect is significant because of the excessive auto-boxing.
I refactored a C# application this way some time ago and the speed increased by almost two orders of magnitude. I made another change at the same time: I deduplicated identical strings across rows in the result set. That alone reduced the memory footprint by almost an order of magnitude, because some values were repeated quite often.
It was so fast that I ended up loading all the data into a session cache and paging from that cache. Even several million rows took only on the order of one second.
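To illustrate, such a refactoring could look roughly like this. The Bill class, the column names, and the query are invented for the sketch; the dictionary is what deduplicates repeated strings.

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

// Sketch: read rows into plain typed objects instead of a DataTable.
// The Bill shape and column names are assumptions for illustration.
class Bill
{
    public int Id;
    public DateTime Date;     // value types stay unboxed in typed fields
    public string Customer;
}

List<Bill> LoadBills(SqlConnection connection)
{
    var bills = new List<Bill>();
    var seen = new Dictionary<string, string>();  // deduplicates identical strings
    using (var command = new SqlCommand("SELECT Id, Date, Customer FROM Bills", connection))
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            string customer = reader.GetString(2);
            if (!seen.TryGetValue(customer, out var unique))
                seen[customer] = unique = customer;  // first occurrence becomes the canonical instance
            bills.Add(new Bill
            {
                Id = reader.GetInt32(0),
                Date = reader.GetDateTime(1),
                Customer = unique                    // all rows share one string instance
            });
        }
    }
    return bills;
}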
Upvotes: 1