Reputation: 456
I am measuring the execution time of methods that open and close a database, and that perform a query and an insert on it. For each of these four operations I basically take the time before and after the relevant statement executes, like this:
for (int i = 0; i < 500; i++) {
    long startTime = System.nanoTime();
    long insertResult = db.insert(tableName, null, contentValues);
    long endTime = System.nanoTime();
    if (insertResult > -1) {
        generateNoteOnSD(fileName, (endTime - startTime));
    }
}
In the case of the insert: (graph of the measured execution times omitted)
Could somebody please give me an idea of, or a direction on, how these operations are performed by the system? I do not really know how a write to persistent storage is done or what factors it depends on. I want to understand this in order to explain the measurements described above (in particular, why the execution times vary for the same operation).
Any help is much appreciated.
Octavio
Upvotes: 3
Views: 326
Reputation: 13529
I will assume that this is a regular table, not a temp table.
Let us start with the biggest performance hog here.
By default, each such insert starts and commits its own transaction. If you want to speed this up tremendously, call
db.beginTransaction();
before your loop, and
db.setTransactionSuccessful();
db.endTransaction();
after the loop. This will put all the inserts into a single transaction. Be sure to also measure the time of db.endTransaction(), because that is where some of the total time will have moved. When you did not call these methods yourself, each db.insert was implicitly wrapped in its own transaction anyway; these are called implicit transactions.
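Put together with your loop, the pattern looks like this; db, tableName, contentValues, fileName and generateNoteOnSD are the names from the question, and the try/finally is the usual idiom so the transaction is closed even if an insert throws:

db.beginTransaction();
try {
    for (int i = 0; i < 500; i++) {
        long insertResult = db.insert(tableName, null, contentValues);
        // handle insertResult == -1 (a failed insert) here if needed
    }
    db.setTransactionSuccessful();
} finally {
    // most of the flash writes happen at commit time, so time this call separately
    long commitStart = System.nanoTime();
    db.endTransaction();
    generateNoteOnSD(fileName, System.nanoTime() - commitStart);
}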
While the transaction protocol itself is complex and accounts for some of the performance difference, the fine-grained transactions also entail a difference in hardware speed. Transactions need to be durable, so each one entails writes to flash memory (writes are even slower than reads, and you need more than one write per transaction). In contrast, the "one long transaction" writes mostly to volatile memory, and even though the data needs to reach flash during the commit as well, you will only need a fraction of the writes. This effect is clearer if your rows are narrow, because more rows will fit into a flash memory block and be written at once.
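To put rough numbers on that (the page size and row width here are illustrative assumptions, not measurements): suppose flash pages are 4 KiB and each row is about 100 bytes. With one implicit transaction per insert, each of the 500 commits forces at least one page write, so the loop pays for 500 or more page writes plus journal overhead. With a single transaction, the same 500 rows fit in about 500 × 100 / 4096 ≈ 13 pages of data, so the commit needs only a few dozen page writes in total.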
As far as reads go, transactions play less of a role here. Reads are simple: if your working set is hot in volatile RAM, all the data comes from there; if it is not, the data comes from flash.
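One measurement caveat for the query case (an assumption on my part about how you measure it, since the question only shows the insert): on Android, query() returns a Cursor that is typically filled lazily, so timing only the query() call can miss most of the work. A minimal sketch that includes the cursor walk in the measured interval, reusing the names from your insert snippet:

long startTime = System.nanoTime();
Cursor cursor = db.query(tableName, null, null, null, null, null, null); // android.database.Cursor
try {
    while (cursor.moveToNext()) {
        // read the columns here; the first move is where much of the real work happens
    }
} finally {
    cursor.close();
}
long endTime = System.nanoTime();
generateNoteOnSD(fileName, endTime - startTime);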
The spikes in your data may well be attributable to unrelated background processes competing with the database; these would occur only while those applications do intensive work. If some of those processes access the same database, you could also be experiencing lock contention; this is possible even if those processes are otherwise waiting for something else. That would also explain why the distribution of the spikes is so irregular. It is not the only possibility, though.
For a general overview of the algorithms used in SQLite, it may be useful to look at this book, although it was written for a competing operating system.
Upvotes: 2