Neil N

Reputation: 25258

How to optimize a table for fast inserts only?

I have a log table that will receive inserts from several web apps. I won't be doing any searching/sorting/querying of this data. I will be pulling the data out to another database to run reports. The initial table is strictly for RECEIVING the log messages.

Is there a way to ensure that the web applications don't have to wait on these inserts? For example, I know that adding a lot of indexes would slow inserts, so I won't. What else is there? Should I not add a primary key? (Each night the table will be pumped to a reports DB, which will have a lot of keys/indexes.)

Upvotes: 15

Views: 16572

Answers (3)

Adriaan Stander

Reputation: 166396

No keys, no constraints, no validation, no triggers, no calculated columns.

If you can, have the services insert asynchronously, so they don't have to wait for the results (if that is acceptable).
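One way to make the insert asynchronous is a fire-and-forget queue: the web request thread drops the message on an in-memory queue and returns immediately, while a background worker drains the queue and performs the actual INSERTs. A minimal sketch, where the table/column names (`app_log`, `message`) are illustrative and sqlite3 stands in for whatever database the apps actually use:

```python
# Fire-and-forget logging sketch: request threads call log() and never
# block on the database; a single background worker does the INSERTs.
import queue
import sqlite3
import threading

log_queue = queue.Queue()
_STOP = object()  # sentinel used to shut the worker down cleanly

def writer(db_path):
    # Runs in a background thread; owns its own DB connection.
    conn = sqlite3.connect(db_path)
    while True:
        msg = log_queue.get()
        if msg is _STOP:
            break
        conn.execute("INSERT INTO app_log (message) VALUES (?)", (msg,))
        conn.commit()
    conn.close()

def log(message):
    # Called from request handlers: just enqueue and return.
    log_queue.put(message)
```

The trade-off is durability: messages sitting in the queue are lost if the process dies before the worker flushes them, which may be fine for log data.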

You can even try inserting into a "daily" table, which will hold fewer records, and then move these across before the batch runs at night.
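The daily-table idea can be sketched like this: inserts land in a small staging table during the day, and a scheduled job moves the rows into the main table in one set-based step before the nightly batch. The names (`log_staging`, `log_main`) are illustrative, and sqlite3 again stands in for the real database:

```python
# Daily staging-table sketch: the hot table stays tiny and index-free;
# rows are moved across in bulk before the nightly batch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log_staging (message TEXT)")  # hot, small, no indexes
conn.execute("CREATE TABLE log_main (message TEXT)")     # fed to the nightly batch

# Web apps insert into the small staging table during the day.
conn.executemany("INSERT INTO log_staging (message) VALUES (?)",
                 [("login",), ("logout",), ("error",)])

# Before the nightly batch: one set-based move, then empty the staging table.
conn.execute("INSERT INTO log_main SELECT * FROM log_staging")
conn.execute("DELETE FROM log_staging")
conn.commit()
```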

But mostly, on the table: NO KEYS/validation (PK and unique indexes will kill you).

Upvotes: 5

Philip Kelley

Reputation: 40309

If performance is key, you may not want to write this data to a database at all. I think most everything will process a database write as a round-trip, but it sounds like you don't want to wait for the returned confirmation message. As S. Lott suggests, check whether it might not be faster to just append a row to a simple text file somewhere.
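The flat-file alternative is essentially an append with no database round-trip; the nightly job can bulk-load the file into the reports database later. A minimal sketch (the one-JSON-object-per-line layout is just one convenient choice for later bulk import):

```python
# Flat-file logging sketch: append one line per message, no database
# round-trip. "a" mode appends and creates the file if it is missing.
import json
import time

def log_to_file(path, message):
    record = {"ts": time.time(), "message": message}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```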

If the database write is faster (or necessary, for security or other business/operational reasons), I would put no indexes on the table--and that includes a primary key. If it won't be used for reads or updates, and if you don't need relational integrity, then you just don't need a PK on this table.

To recommend the obvious: as part of the nightly reports run, clear out the contents of the table. Also, never reset the database file sizes (ye olde shrink database command); after a week or so of regular use, the database files should be as big as they'll ever need to be and you won't have to worry about the file growth performance hit.
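Putting the last two paragraphs together, the receiving table is a bare heap (no primary key, no indexes) that gets emptied after each export. A sketch, where sqlite3 and the table name `log_inbox` are stand-ins; on SQL Server you would use `TRUNCATE TABLE` rather than `DELETE` for the clear-out:

```python
# Bare-heap receiving table sketch: no PK, no indexes; export the rows
# nightly, then empty the table so it stays small.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log_inbox (logged_at TEXT, message TEXT)")  # no PK, no indexes
conn.executemany("INSERT INTO log_inbox VALUES (?, ?)",
                 [("2010-01-01", "a"), ("2010-01-01", "b")])

# Nightly run: pull everything out for the reports DB, then clear the table.
rows = conn.execute("SELECT * FROM log_inbox").fetchall()
conn.execute("DELETE FROM log_inbox")
conn.commit()
```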

Upvotes: 10

Shiraz Bhaiji

Reputation: 65391

Here are a few ideas; note that for the last ones to matter you would need extremely high volumes:

  • Do not have a primary key; it is enforced via an index
  • Do not have any other indexes
  • Create the database large enough that you do not have any database growth
  • Place the database on its own disk to avoid contention
  • Avoid software RAID
  • Place the database on a mirrored disk; this saves the parity calculation done on RAID 5

Upvotes: 7
