Reputation: 1243
Currently I have an implementation of a SwingWorker process, something like this. Note this is not actual code, just a skeleton.
private void jButtonSomeButton( ... ) {
    ...
    for (File file : files) {
        worker(args1, args2);
    }
}

private void worker(args1, args2) {
    mytask = new SwingWorker<Object, Object>() {
        public Object doInBackground() {
            while (!isCancelled()) {
                individualtask(args1, args2);
            }
        }
        ...
    };
}

private void individualtask(args1, args2) {
    ...
    table.addRow(somevector); // add some data to the row
    ...
}
While I had the above going, I found that sometimes the row data goes awry: some characters may be missing, etc. Other times the rows are fine. I believe I need some synchronizing mechanism, but I do not have experience with this. Can you suggest improvements to the code above? Thanks.
Upvotes: 1
Views: 108
Reputation: 17971
Please note that code inside the doInBackground() method runs outside the Event Dispatch Thread (EDT). Calling individualTask(args1, args2) from there, and consequently calling table.addRow(...), is conceptually wrong. Not to mention that addRow(...) is not part of the JTable API but of DefaultTableModel instead.
Heavy tasks must run in the doInBackground() thread, and Swing component updates must be performed on the EDT. The right way to do this is to use the publish() and process() methods, as explained in the Tasks that Have Interim Results section of the Concurrency in Swing lesson.
Having said that, I'd suggest you also reconsider this part:
private void jButtonSomeButton( ... ) {
    ...
    for (File file : files) {
        worker(args1, args2);
    }
}
If you trigger several workers that update the same table, then the outcome won't be what you expect either. It all depends on what level of parallelism you need to achieve, of course.
Based on this comment:
I have to process files in a directory but I don't want to do it serially. That's why I use worker(). How do you suggest I do this?
The use of SwingWorker is certainly the right choice in this case. However, consider this scenario: let's say there are two files in the directory, A and B. If you trigger two different workers to process A and B, then there will be two parallel tasks updating the very same table. As a consequence the rows will be added to the table in a nondeterministic, interleaved order: you could end up with rows 0, 1, 2, 4, and 7 parsed from file A and rows 3, 5, 6, and 8 parsed from file B. If the order in which rows are added to the table, and more importantly the file from which they were parsed, doesn't matter, then the approach is just fine.
On the other hand, if you want to add all rows parsed from file A first and then all rows parsed from file B, then consider putting the for (File file : files) loop inside doInBackground() as well. This still gives you parallelism between file processing and table updates, while respecting the order in which the files are processed. For example:
SwingWorker<Void, Vector> worker = new SwingWorker<Void, Vector>() {
    @Override
    protected Void doInBackground() {
        int numberOfFiles = files.size(); // or files.length if it's an array
        int processed = 0;
        for (File file : files) {
            ...
            // process each file here and then publish interim results
            publish(vector);
            ...
            int progress = (int) (++processed * 100 / numberOfFiles);
            setProgress(progress);
        }
        return null;
    }

    @Override
    protected void process(List<Vector> rows) {
        DefaultTableModel model = (DefaultTableModel) table.getModel();
        for (Vector row : rows) {
            model.addRow(row);
        }
    }
};
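To actually start the worker and react to the setProgress() calls, attach a PropertyChangeListener before calling execute(). Here is a minimal, self-contained sketch of that wiring; the file names and the two-column model are invented for illustration, and a CountDownLatch stands in for a real UI so the example can run and finish on its own:

```java
import java.util.List;
import java.util.Vector;
import java.util.concurrent.CountDownLatch;
import javax.swing.SwingUtilities;
import javax.swing.SwingWorker;
import javax.swing.table.DefaultTableModel;

public class WorkerDemo {
    public static void main(String[] args) throws Exception {
        DefaultTableModel model = new DefaultTableModel(new Object[] {"File", "Size"}, 0);
        List<String> files = List.of("a.txt", "b.txt", "c.txt"); // stand-ins for real File objects
        CountDownLatch finished = new CountDownLatch(1);

        SwingWorker<Void, Vector<Object>> worker = new SwingWorker<Void, Vector<Object>>() {
            @Override
            protected Void doInBackground() {
                int processed = 0;
                for (String file : files) {
                    Vector<Object> row = new Vector<>();
                    row.add(file);
                    row.add(file.length()); // pretend "processing"
                    publish(row);           // hand the interim result over to the EDT
                    setProgress(++processed * 100 / files.size());
                }
                return null;
            }

            @Override
            protected void process(List<Vector<Object>> rows) {
                rows.forEach(model::addRow); // runs on the EDT: safe to touch the model
            }

            @Override
            protected void done() {
                finished.countDown(); // also runs on the EDT
            }
        };

        worker.addPropertyChangeListener(evt -> {
            if ("progress".equals(evt.getPropertyName())) {
                // In a real UI this would update a JProgressBar
                System.out.println("progress: " + evt.getNewValue() + "%");
            }
        });
        worker.execute();

        finished.await();                        // wait for doInBackground() to complete
        SwingUtilities.invokeAndWait(() -> {});  // flush any pending process() delivery
        System.out.println("rows: " + model.getRowCount());
    }
}
```

Note that setProgress() coalesces rapid updates, so the listener may not fire once per file; only the final table contents are deterministic here.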
Upvotes: 5
Reputation: 41188
You shouldn't be calling table.addRow() from individualTask.
You do background tasks in doInBackground and then publish the results.
See:
http://docs.oracle.com/javase/8/docs/api/javax/swing/SwingWorker.html
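A minimal sketch of that split (the row values here are made up, and a CountDownLatch replaces a real UI so the snippet can run standalone): doInBackground() does the work off the EDT and hands each result to process() via publish().

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import javax.swing.SwingUtilities;
import javax.swing.SwingWorker;
import javax.swing.table.DefaultTableModel;

public class PublishSketch {
    public static void main(String[] args) throws Exception {
        DefaultTableModel model = new DefaultTableModel(new Object[] {"Value"}, 0);
        CountDownLatch finished = new CountDownLatch(1);

        new SwingWorker<Void, String>() {
            @Override
            protected Void doInBackground() {
                // Heavy work belongs here, off the EDT.
                for (String value : new String[] {"one", "two"}) {
                    publish(value); // schedule delivery to process() on the EDT
                }
                return null;
            }

            @Override
            protected void process(List<String> chunks) {
                // Runs on the EDT: safe to touch the table model.
                for (String value : chunks) {
                    model.addRow(new Object[] {value});
                }
            }

            @Override
            protected void done() {
                finished.countDown();
            }
        }.execute();

        finished.await();
        SwingUtilities.invokeAndWait(() -> {}); // flush any pending process() call
        System.out.println("rows: " + model.getRowCount());
    }
}
```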
Upvotes: 4