Reputation: 17026
I have written this code to read thousands of rows from Excel files and load them into a DataGridView.
The problem is that no matter which file I load, the DataGridView shows only the rows from the first file, and _list is never cleared.
public class MyForm : Form
{
    private List<Student> _list = null;

    private void LoadFile_Click(object sender, EventArgs e)
    {
        try
        {
            if (_list != null)
            {
                _list.Clear();
            }
            openFileDialog1.ShowDialog();
            _connStr = MakeConnectionString.GetConnectionString(openFileDialog1.FileName);
            if (!string.IsNullOrEmpty(_connStr))
            {
                backgroundWorker1.RunWorkerAsync();
            }
        }
        catch
        {
            MessageBox.Show("Application is busy with the first task!", "Busy...", MessageBoxButtons.OK, MessageBoxIcon.Warning);
        }
    }

    private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
    {
        if (backgroundWorker1.CancellationPending)
        {
            e.Cancel = true;
            return;
        }

        IDataReader read = StudentDA.GetReader(_connStr);
        List<Student> localList = null;
        if (_list != null)
        {
            _list.Clear();
        }
        _list = StudentMapper.GetStudents(read);
        localList = new List<Student>(_list);

        dataGridView1.Invoke(new MethodInvoker(delegate
        {
            dataGridView1.Rows.Clear();
        }));

        foreach (Student std in localList)
        {
            dataGridView1.Invoke(new MethodInvoker(delegate
            {
                dataGridView1.Rows.Add(std.SerialNo, std.RollNo);
            }));
        }
    }
}
Upvotes: 0
Views: 336
Reputation: 17556
Try creating a new BackgroundWorker object each time you load new data.

You are also not changing the _connection object. If it is initialised only when it is null, roughly like this:

static string _connection = null;

if (_connection == null)
{
    // connection is built here, the first time only
}

then this works only for the first file; the next time you pick a different file, the connection is never refreshed.
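A minimal sketch of that suggestion, reusing the names from the question (openFileDialog1, _connStr, MakeConnectionString, backgroundWorker1_DoWork) and making no other assumptions about the rest of the form:

private void LoadFile_Click(object sender, EventArgs e)
{
    if (openFileDialog1.ShowDialog() != DialogResult.OK)
        return;

    // Rebuild the connection string for the newly chosen file on every click.
    _connStr = MakeConnectionString.GetConnectionString(openFileDialog1.FileName);
    if (string.IsNullOrEmpty(_connStr))
        return;

    // Create a fresh worker per load so a previous, still-running worker cannot block this one.
    var worker = new BackgroundWorker();
    worker.DoWork += backgroundWorker1_DoWork;
    worker.RunWorkerAsync();
}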
Upvotes: 1
Reputation: 1064204
Are you sure there isn't an exception happening somewhere? Try handling the completion event, and check the exception exposed on the event-args object.
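For example, a sketch of that check (the handler name is illustrative):

// Subscribe once, e.g. in the form constructor:
// backgroundWorker1.RunWorkerCompleted += backgroundWorker1_RunWorkerCompleted;

private void backgroundWorker1_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    if (e.Error != null)
    {
        // Any exception thrown inside DoWork surfaces here instead of being silently swallowed.
        MessageBox.Show(e.Error.ToString(), "Load failed",
            MessageBoxButtons.OK, MessageBoxIcon.Error);
    }
}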
Also: a loop with a single Invoke in each step is probably going to slow things down; maybe do the data-fetch on the background, then do the entire clear/add-loop in a single Invoke. If that is too much, at least batch it into small sets; or consider virtual mode (which is much more efficient for large data volumes).
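A rough sketch of that shape, assuming the StudentDA, StudentMapper and column order from the question, with the grid touched exactly once:

private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
{
    // All of the slow work stays on the worker thread...
    IDataReader reader = StudentDA.GetReader(_connStr);
    List<Student> students = StudentMapper.GetStudents(reader);

    // ...and the grid is updated in one Invoke instead of one per row.
    dataGridView1.Invoke(new MethodInvoker(delegate
    {
        dataGridView1.Rows.Clear();
        foreach (Student std in students)
        {
            dataGridView1.Rows.Add(std.SerialNo, std.RollNo);
        }
    }));
}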
Upvotes: 1