Reputation: 1021
I am using JDBC addBatch and executeBatch for insert statements, with autocommit mode set to on. My problem is that when I have, for example, 20 inserts and insert number 10 raises an exception (for example, null values not allowed), no data is inserted. Shouldn't the first 10 OK statements have been inserted?
My code:
try {
    int[] results = stmt.executeBatch();
    return results;
} catch (BatchUpdateException e) {
    int[] tmpres = e.getUpdateCounts();
    for (int i = 0; i < tmpres.length; i++) {
        System.out.println(tmpres[i]);
    }
}
I see in the output that the update count of the first 10 statements is 1. So why is no data inserted?
Best regards, Peter
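For reference, here is a sketch of how the array returned by BatchUpdateException.getUpdateCounts() can be interpreted (the successfulStatements helper is hypothetical, not part of JDBC). The JDBC spec allows a driver either to stop at the first failure, in which case the array only covers the statements before it, or to continue, in which case failed statements are marked with Statement.EXECUTE_FAILED:

```java
import java.sql.Statement;

public class BatchCounts {
    // Hypothetical helper: count how many statements in a
    // BatchUpdateException.getUpdateCounts() array succeeded.
    static int successfulStatements(int[] updateCounts) {
        int ok = 0;
        for (int count : updateCounts) {
            // >= 0: number of rows affected;
            // SUCCESS_NO_INFO (-2): succeeded, but row count unknown.
            if (count >= 0 || count == Statement.SUCCESS_NO_INFO) {
                ok++;
            }
            // EXECUTE_FAILED (-3): the driver continued past a failed statement.
        }
        return ok;
    }

    public static void main(String[] args) {
        // Driver stopped at the failing statement: the array covers only
        // the 10 statements that ran before it (the case in the question).
        int[] stoppedEarly = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
        System.out.println(successfulStatements(stoppedEarly)); // 10

        // Driver continued: the failed statement is marked EXECUTE_FAILED.
        int[] continued = {1, 1, Statement.EXECUTE_FAILED, 1};
        System.out.println(successfulStatements(continued)); // 3
    }
}
```

An update count of 1 for the first 10 entries therefore means those statements did execute; whether their effects survive depends on the commit behaviour discussed in the answers below.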
Upvotes: 1
Views: 368
Reputation: 123849
The other answers to this question are somewhat misleading, at least with regard to the general case.
In most cases, when using executeBatch() with setAutoCommit(true), each statement is committed as it is executed, so the rows inserted before the failing statement remain in the table.
This is definitely true for MySQL Connector/J (with rewriteBatchedStatements=false, which is the default), the Microsoft SQL Server JDBC driver, Derby, and HSQLDB. (I just ran actual Java code to confirm.)
As with many other aspects of JDBC, the actual behaviour in your particular case depends on the specific implementation of the JDBC driver you are using.
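For reference, rewriteBatchedStatements is a Connector/J connection property, typically set on the JDBC URL. A minimal sketch (the host and database names are made up; the property itself is real):

```java
public class MySqlUrlExample {
    // rewriteBatchedStatements=false is the Connector/J default. Setting it
    // to true lets the driver collapse the batch into multi-row INSERTs,
    // which can change the failure behaviour described above.
    static final String URL =
            "jdbc:mysql://localhost:3306/mydb?rewriteBatchedStatements=false";

    public static void main(String[] args) {
        System.out.println(URL);
    }
}
```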
Upvotes: 1
Reputation: 13858
It seems that in your configuration all the statements inside one batch are in fact executed in a single transaction. So if any of the inserts fails, the whole transaction (that is, the whole batch) is rolled back, even with autocommit on.
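If all-or-nothing behaviour is actually what you want, it is safer to ask for it explicitly rather than rely on driver-specific batch semantics: turn autocommit off, commit after the batch, and roll back on failure. A sketch, assuming a standard JDBC driver (the Proxy-based stub exists only to make the example runnable without a real database; with a real driver you would pass in an actual Connection):

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class ExplicitTxDemo {
    static final List<String> CALLS = new ArrayList<>();

    // Stub that records every JDBC call instead of talking to a database.
    static <T> T stub(Class<T> iface) {
        return iface.cast(Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[]{iface},
                (proxy, method, args) -> {
                    CALLS.add(method.getName());
                    Class<?> rt = method.getReturnType();
                    if (rt == int[].class) return new int[0];
                    if (rt == boolean.class) return false;
                    if (rt == int.class) return 0;
                    if (rt == Statement.class) return stub(Statement.class);
                    return null;
                }));
    }

    // All-or-nothing batch: one explicit transaction for the whole batch.
    static void runBatchInTransaction(Connection con, String[] sqls) throws Exception {
        boolean oldAutoCommit = con.getAutoCommit();
        con.setAutoCommit(false);            // suspend autocommit
        try (Statement stmt = con.createStatement()) {
            for (String sql : sqls) {
                stmt.addBatch(sql);
            }
            stmt.executeBatch();
            con.commit();                    // all inserts become visible together
        } catch (Exception e) {
            con.rollback();                  // none of the inserts survive
            throw e;
        } finally {
            con.setAutoCommit(oldAutoCommit);
        }
    }

    public static void main(String[] args) throws Exception {
        Connection con = stub(Connection.class);
        runBatchInTransaction(con, new String[]{
                "INSERT INTO t VALUES (1)",
                "INSERT INTO t VALUES (2)"});
        // Prints the recorded call sequence: autocommit is disabled before
        // the batch runs, and commit (not rollback) follows executeBatch.
        System.out.println(CALLS);
    }
}
```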
Upvotes: 0
Reputation: 1259
In this case all the statements are part of the same transaction, so any statement that causes an error will roll back the whole transaction.
Upvotes: 0