Reputation: 143
I'm doing a performance comparison to decide whether to serialize data to disk or store it in a DB. The application receives a huge amount of data (x GB) that needs to be persisted at a minimum rate of 18 MB/s (for now).
Storing in a DB makes it easier to search and access the data later, take data snapshots, migrate the data, etc., but my tests so far show a huge difference in performance.
The test saves 1000 objects (of about 700-something KB each), either to their respective columns in a table or to disk by serializing them as a generic List. (The SQLite file ends up a bit larger.)
I haven't applied any performance tweaks to SQLite; I just use it out of the box with Fluent NHibernate and the SQLite.Data adapter (no transaction), but at first glance that is a huge difference.
Obviously I know that going through an ORM mapper and a DB to write to disk adds overhead compared to serializing, but that was still a lot.
Another consideration is that I need to persist the data right away as I receive it: if there is a power failure, I need the last data received.
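To make the durability requirement concrete, the write pattern I have in mind is roughly an append-and-flush loop (a minimal C sketch for illustration only; the file name and record layout are made up, and the real application sits on .NET):

#include <stdio.h>
#include <unistd.h>  /* fileno, fsync */

/* Illustrative record; the real objects are ~700 KB each. */
typedef struct {
    int  id;
    char payload[64];
} Record;

int main(void)
{
    FILE *f = fopen("data.bin", "ab");  /* append-only log */
    if (!f) { perror("fopen"); return 1; }

    for (int i = 0; i < 1000; i++) {
        Record r = { i, "payload goes here" };
        fwrite(&r, sizeof r, 1, f);

        /* Flush the stdio buffer, then ask the OS to push its
           cache to the disk so the record survives a power
           failure. This flushing is what makes durable writes
           slow compared to plain buffered serialization. */
        fflush(f);
        fsync(fileno(f));
    }

    fclose(f);
    return 0;
}

Without the fsync, the OS happily reports the write as done long before the data actually hits the disk, which matters for the power-failure case.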
Any thoughts?
----- Updates (as I continue to investigate solutions) -----
Upvotes: 7
Views: 4584
Reputation: 389
You should consider using compiled statements for SQLite.
Check this
On insert/update queries there is a huge performance boost; I managed to get 2x to 10x faster execution times using compiled statements, although from 33 sec to 0.3 sec is a long way.
On the other hand, SQLite's execution speed depends on the schema of the table you are using; e.g., if you have an index over a huge amount of data, inserts will be slow.
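For illustration, with SQLite's C API a compiled statement is prepared once and then re-executed with fresh parameter bindings (a minimal sketch; the table, columns, and row count are made up):

#include <sqlite3.h>

/* Insert many rows reusing one compiled statement inside a
   single transaction. */
int bulk_insert(sqlite3 *db)
{
    sqlite3_stmt *stmt;
    const char *sql = "INSERT INTO t1(a, b) VALUES(?, ?)";

    /* Compile the SQL once... */
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
        return -1;

    sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
    for (int i = 1; i <= 25000; i++) {
        /* ...then only rebind the parameters for each row. */
        sqlite3_bind_int(stmt, 1, i);
        sqlite3_bind_int(stmt, 2, i * 2);
        sqlite3_step(stmt);
        sqlite3_reset(stmt);
    }
    sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);

    sqlite3_finalize(stmt);
    return 0;
}

This avoids re-parsing the SQL text for every row, which is where a lot of the per-insert overhead goes.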
Upvotes: 2
Reputation: 143
After investigating further, the answer lies in some confusion over the initial results.
When I repeated the test with larger amounts of data, I got different results.
The disk transfer rate is limited to 126 MB/s by the manufacturer, so how could I write 750 MB in a split second? I'm not sure why (presumably the OS was caching the writes), but when I increased the amount of data, the transfer rate quickly dropped to ~136 MB/s.
As for the database: using a transaction, I got speeds up to 90 MB/s with the IStatelessSession and large amounts of data (5-10 GB). This is good enough for our purposes, and I'm sure it can still be tweaked with compiled SQL statements and other optimizations if needed.
Upvotes: 0
Reputation: 154673
I had a similar problem once and I suggest you go the SQLite route.
As for your performance issues, I'm pretty sure you'll get a very significant boost if you wrap your INSERTs in a single transaction. Compare 1,000 INSERTs run one by one:
CREATE TABLE t1(a INTEGER, b INTEGER, c VARCHAR(100));
INSERT INTO t1 VALUES(1,13153,'thirteen thousand one hundred fifty three');
INSERT INTO t1 VALUES(2,75560,'seventy five thousand five hundred sixty');
... 995 lines omitted
INSERT INTO t1 VALUES(998,66289,'sixty six thousand two hundred eighty nine');
INSERT INTO t1 VALUES(999,24322,'twenty four thousand three hundred twenty two');
INSERT INTO t1 VALUES(1000,94142,'ninety four thousand one hundred forty two');
with 25,000 INSERTs inside a single transaction:
BEGIN;
CREATE TABLE t2(a INTEGER, b INTEGER, c VARCHAR(100));
INSERT INTO t2 VALUES(1,59672,'fifty nine thousand six hundred seventy two');
... 24997 lines omitted
INSERT INTO t2 VALUES(24999,89569,'eighty nine thousand five hundred sixty nine');
INSERT INTO t2 VALUES(25000,94666,'ninety four thousand six hundred sixty six');
COMMIT;
*** These benchmarks are for SQLite 2; SQLite 3 should be even faster.
Upvotes: 6