Reputation: 157
My Java code reads an Excel file and inserts its data into an Oracle database.
For example, it needs to read similar cells in 2000 rows of the Excel file; my code reads them, inserts them into the database, and then commits.
The first ~1000 rows insert very quickly, but the remaining ~1000 rows take a very long time.
A possible reason is a lack of memory.
So I am considering committing frequently while the data is loading (e.g. committing after every 50 rows read); a sketch of the idea follows below.
Is this good practice, or are there other ways to solve this problem?
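Roughly the pattern I have in mind, as a minimal sketch assuming plain JDBC (the table, columns, and SQL here are placeholders, and the rows are assumed to be already read from the Excel file):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class ExcelToOracle {
    // Current approach: insert every row, then commit once at the end.
    // The question is whether committing every 50 rows would be better.
    static void load(Connection conn, List<String[]> rows) throws SQLException {
        conn.setAutoCommit(false);
        String sql = "INSERT INTO my_table (col1, col2) VALUES (?, ?)"; // placeholder SQL
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            int count = 0;
            for (String[] row : rows) {   // rows already read from the Excel file
                ps.setString(1, row[0]);
                ps.setString(2, row[1]);
                ps.executeUpdate();       // one round trip per row
                if (++count % 50 == 0) {
                    // conn.commit();     // the change I'm considering
                }
            }
            conn.commit();                // single commit after all 2000 rows
        }
    }
}
```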
Upvotes: 0
Views: 1864
Reputation:
Databases always have to be consistent, i.e. only commit if your data is consistent, even if your program crashes afterwards.
(If you don't need that consistency, why use a DB at all?)
PS: You won't run out of memory that fast.
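A minimal sketch of that rule, assuming plain JDBC (the table, columns, and SQL are placeholders): commit only once the whole load has succeeded, and roll back on failure, so a crash never leaves a half-loaded file in the database.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class ConsistentLoad {
    // Loads all rows as one unit: either every row is committed or none is,
    // so a crash mid-load never leaves partial data behind.
    static void loadAll(Connection conn, List<String[]> rows) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO my_table (col1, col2) VALUES (?, ?)")) { // placeholder SQL
            for (String[] row : rows) {
                ps.setString(1, row[0]);
                ps.setString(2, row[1]);
                ps.executeUpdate();
            }
            conn.commit();     // the data is consistent only at this point
        } catch (SQLException e) {
            conn.rollback();   // undo the partial load on any failure
            throw e;
        }
    }
}
```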
Upvotes: 1
Reputation: 1779
Commits delimit atomic operations in the database. You don't just throw them around because you feel like it. Each transaction is generally (depending on isolation level, but assuming serializable isolation) a distinct, all-or-nothing operation.
If you don't know what is causing a database transaction to take a "long time", you should read the logs or talk to someone who knows how to diagnose and remedy the slowdown. The most likely reason is bad configuration.
The bottom line is that people run transactions that insert 100,000 or even millions of rows as a single transaction without issues, and for performance reasons it is generally better not to commit often.
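If the slowdown turns out to be per-row round trips rather than transaction size, JDBC statement batching usually helps far more than committing more often. A sketch under assumed placeholders (the table, columns, and the batch size of 500 are illustrative):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchedLoad {
    // Sends rows to the server in JDBC batches but still commits only once,
    // so the load stays a single all-or-nothing transaction.
    static void loadBatched(Connection conn, List<String[]> rows) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO my_table (col1, col2) VALUES (?, ?)")) { // placeholder SQL
            int pending = 0;
            for (String[] row : rows) {
                ps.setString(1, row[0]);
                ps.setString(2, row[1]);
                ps.addBatch();
                if (++pending == 500) {   // 500 is an illustrative batch size
                    ps.executeBatch();    // one round trip for the whole batch
                    pending = 0;
                }
            }
            if (pending > 0) {
                ps.executeBatch();        // flush the final partial batch
            }
            conn.commit();                // single commit; still atomic
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}
```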
Upvotes: 4