Tim Haines

Reputation: 1526

How should MySQL be optimized/tuned for lots of writes and few reads (100:1)? Read performance still required

I have a simple database with 4 tables of a few million rows each, and several indexes. I'm performing several hundred updates and inserts on them per minute. The reads are much less frequent, but they need to be fast - this is for a web app. The reads should have priority - I can delay writes if it helps improve the snappiness of reading.

Currently, when I'm not writing inserts and updates, the selects all hum along nicely. When I'm writing at the same time, things can slow down - sometimes massively. The server definitely gets IO bound - I've used iostat and seen the disk utilization at 99% during periods of heavy writing.

Tomorrow I'm going to try cutting an index or two, shrinking the row size, and disabling the query cache. Does anyone have any other suggestions on how the tables or MySQL itself should be tuned for lots of writes and few reads?

The tables are currently set up to use the InnoDB engine with compact rows, and most of the config is still at its defaults apart from the buffer pool size. The database is going to continue to grow quickly, so having it all sit in RAM isn't an option.

Update: It's on slicehost.com - 1GB RAM, RAID 10.
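For reference, here's a rough sketch of the my.cnf changes I'm considering - the numbers are guesses for a 1GB box and I'd welcome corrections:

    [mysqld]
    # Cache as much data and index as the box can spare; sized low here
    # because MySQL isn't the only thing that needs memory.
    innodb_buffer_pool_size = 256M

    # Bigger redo logs mean fewer forced flushes during bursts of writes.
    innodb_log_file_size = 64M

    # Write the log at each commit but only flush it to disk about once a
    # second - faster writes, at the cost of up to ~1s of data on a crash.
    innodb_flush_log_at_trx_commit = 2

    # Query cache off - every write invalidates its entries anyway.
    query_cache_type = 0
    query_cache_size = 0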

Upvotes: 3

Views: 3899

Answers (7)

Rick James

Reputation: 142208

You have already done some useful things (fewer indexes, smaller datatypes).

  • innodb_buffer_pool_size should be about 70% of the RAM available to MySQL. (In old versions, the default was far too low.) But on your tiny server (1GB RAM), even 150M might be too big once everything else is accounted for. Consider getting more RAM.
  • If you are swapping, get more RAM or lower the memory settings further.
  • Consider innodb_flush_log_at_trx_commit = 2 -- faster, though less data safety (a crash can lose roughly the last second of committed transactions)
  • Batch inserts -- 100 rows in a single INSERT will run about 10 times as fast as 100 single-row inserts (see the sketch at the end of this answer).
  • Look at normalization or denormalization
  • Check for indexed columns hidden inside function calls, e.g. WHERE DATE(col) = '2019-01-01' -- this stops MySQL from using an index on col (also covered in the sketch below).
  • Use the slowlog to identify the naughtiest queries -- they may be writes or reads.
  • Did I suggest getting more RAM?

There are lots of other tips, but it would be better to see the worst queries, plus SHOW CREATE TABLE.
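A minimal sketch of the batching and DATE() points above (table and column names are invented):

    -- One multi-row INSERT instead of 100 single-row statements:
    INSERT INTO events (user_id, event_type, created_at) VALUES
        (1, 'click', NOW()),
        (2, 'view',  NOW()),
        (3, 'click', NOW());   -- ...and so on, up to a hundred or so rows per statement

    -- Hiding the indexed column in a function defeats the index:
    SELECT COUNT(*) FROM events WHERE DATE(created_at) = '2019-01-01';

    -- The same filter as a range lets an index on created_at be used:
    SELECT COUNT(*) FROM events
    WHERE created_at >= '2019-01-01'
      AND created_at <  '2019-01-02';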

Upvotes: 0

Emil H

Reputation: 40230

I think you need to consider partitioning. It's pretty much the only way to scale writes. MySQL has native support for this from 5.1 onwards, but it's also quite possible to roll your own solution. The latter is much more complicated, so if possible I'd recommend using the built-in support. However, considering your heavy write load, even that might not be enough. It's hard to give more detailed advice without knowing how the data is structured, though.
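As a rough illustration only - the table and the partitioning key are invented, since I don't know your schema - native range partitioning in 5.1 looks something like this:

    -- Hypothetical write-heavy table, partitioned by month so each
    -- insert only touches one relatively small partition.
    CREATE TABLE log_entries (
        id         BIGINT NOT NULL AUTO_INCREMENT,
        created_at DATETIME NOT NULL,
        payload    VARCHAR(255),
        PRIMARY KEY (id, created_at)
    ) ENGINE=InnoDB
    PARTITION BY RANGE (TO_DAYS(created_at)) (
        PARTITION p200906 VALUES LESS THAN (TO_DAYS('2009-07-01')),
        PARTITION p200907 VALUES LESS THAN (TO_DAYS('2009-08-01')),
        PARTITION pmax    VALUES LESS THAN MAXVALUE
    );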

Upvotes: 1

Mitch Wheat

Reputation: 300489

If MySQL supported an index Fill Factor, that would be one area to look at. Unfortunately version 5 doesn't support Fill Factor (apparently it is on the feature request list for version 6.x).

  • Removing any unused indexes and limiting the width of your indexes would help (see the sketch below).

  • Examine how much memory the server has.

  • Is the Disk RAID'ed?
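For example (the index and table names are made up), both of these shrink the amount of index data every write has to maintain:

    -- Drop an index no query actually uses:
    ALTER TABLE events DROP INDEX idx_rarely_used;

    -- Narrow a wide VARCHAR index down to a prefix:
    ALTER TABLE events DROP INDEX idx_url,
                       ADD INDEX idx_url (url(32));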

Upvotes: 0

instanceof me

Reputation: 39138

Unfortunately for you, MySQL is typically built for an 80/20 read/write ratio. I don't know if there is a lot you can do.

Are you using transactions?

If the data you select is not affected by most writes (so keeping a copy up to date at write time would not hurt write performance much), you can externalize it at write time, e.g. at the end of the transaction.
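As a sketch of what I mean (the summary table and columns are invented): the write path keeps a read-optimized row up to date inside its own transaction, so the web app's SELECTs never have to aggregate the big table:

    START TRANSACTION;

    INSERT INTO events (user_id, event_type, created_at)
    VALUES (42, 'click', NOW());

    -- "Externalized" read data, refreshed at write time:
    UPDATE user_stats
       SET click_count = click_count + 1,
           last_seen   = NOW()
     WHERE user_id = 42;

    COMMIT;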

Upvotes: 1

Itay Moav -Malimovka

Reputation: 53597

One thing (of many) to consider is using transactions. If you can bundle several write operations into one transaction, it should lower the number of disk accesses.
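A minimal sketch with made-up statements - the point is that a single COMMIT (and therefore a single log flush) covers all of the writes, instead of one flush per statement under autocommit:

    START TRANSACTION;
    INSERT INTO events (user_id, event_type, created_at) VALUES (1, 'view',  NOW());
    INSERT INTO events (user_id, event_type, created_at) VALUES (2, 'click', NOW());
    UPDATE counters SET views = views + 1 WHERE id = 1;
    COMMIT;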

Upvotes: 1

Cade Roux

Reputation: 89651

Indexes are going to slow the writes, but they are necessary for read performance, so keep as few as you can get away with while still supporting the reads. Is your clustered index going to cause a lot of slowdown?

Another possibility is to read from a separate database/table than the one you write to, and opt for eventual consistency - though that may not be possible in your case.
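To illustrate the clustered index point (hypothetical tables): InnoDB clusters rows on the primary key, so a monotonically increasing key appends new rows at the end of the table, while an effectively random key scatters inserts across pages and causes far more page splits and random I/O:

    -- Inserts append to the end of the clustered index - cheap:
    CREATE TABLE readings_seq (
        id          BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        sensor_id   INT NOT NULL,
        reading     DOUBLE,
        recorded_at DATETIME NOT NULL
    ) ENGINE=InnoDB;

    -- Inserts land on random pages of the clustered index - much more I/O:
    CREATE TABLE readings_rand (
        id          CHAR(36) NOT NULL PRIMARY KEY,   -- e.g. a UUID
        sensor_id   INT NOT NULL,
        reading     DOUBLE,
        recorded_at DATETIME NOT NULL
    ) ENGINE=InnoDB;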

Upvotes: 1

jitter

Reputation: 54605

I suggest you do write/read splitting by setting up a MySQL master-slave configuration.

You write to the master and redirect reads to the slaves.

The splitting itself can be done in two ways:

  1. Use a proxy (MySQL Proxy, Continuent/Sequoia, ..)
  2. Do the splitting yourself in your application
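The MySQL side of the setup is fairly small (host, credentials, and log coordinates below are placeholders; the master also needs log-bin, and master and slaves need distinct server-id values in my.cnf):

    -- On the master: a user the slaves replicate as.
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';

    -- On each slave: point it at the master and start replicating.
    CHANGE MASTER TO
        MASTER_HOST     = 'master.example.com',
        MASTER_USER     = 'repl',
        MASTER_PASSWORD = 'secret',
        MASTER_LOG_FILE = 'mysql-bin.000001',   -- from SHOW MASTER STATUS
        MASTER_LOG_POS  = 4;                    -- likewise
    START SLAVE;

The application (or the proxy) then just keeps two connection handles: SELECTs go to a slave, everything else goes to the master.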

Upvotes: 0
