Reputation: 177
In my configuration, changing innodb_flush_method from O_DIRECT to O_DSYNC reduces iowait by about 75%, and accordingly the load. Should I set other variables besides innodb_flush_method to reduce iowait further?
My configuration file is:
[mysqld]
innodb_file_per_table=1
query_cache_size=128M
thread_cache_size=64
key_buffer_size=32M
max_allowed_packet=16M
table_cache=1024
table_definition_cache=8192
wait_timeout=20
max_user_connections=25
innodb_flush_method=O_DSYNC
open_files_limit=16384
myisam_sort_buffer_size=2M
collation_server=utf8_unicode_ci
character_set_server=utf8
tmp_table_size = 384M
max_heap_table_size = 384M
innodb_buffer_pool_size=64M
innodb_thread_concurrency=8
max_connections=125
I have a database with 100 InnoDB tables; 3 of them have about 25000 records each, and the others have no significant number of records. The average number of queries at peak time is about 160, the majority of which are SELECTs.
Upvotes: 2
Views: 9356
Reputation: 10236
The major problem is that innodb_buffer_pool_size
is too small. The recommendation is to set it to 50~75% of main memory. You currently have:
innodb_buffer_pool_size=64M
I strongly recommend that you increase its value.
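As a rough illustration (the 2G figure assumes a hypothetical machine with about 4 GB of RAM dedicated to MySQL; this is not taken from the question, so size it to your own memory), the change might look like:

```ini
[mysqld]
# 50~75% of main memory on a dedicated database server;
# 2G assumes a hypothetical 4 GB machine -- adjust to yours.
innodb_buffer_pool_size=2G
```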
Generally speaking, O_DIRECT
is a little faster because the InnoDB buffer pool already caches data and indexes, so caching the same pages again in the file system page cache is redundant; O_DIRECT
bypasses that double caching. The MySQL manual says
(http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_flush_method):
Depending on hardware configuration, setting innodb_flush_method to O_DIRECT can have either a positive or negative effect on performance. Benchmark your particular configuration to decide which setting to use.
But in my experience, there was no significant difference between O_DIRECT and O_DSYNC. I tested on both SSD and HDD.
Anyway, you should increase innodb_buffer_pool_size.
mysql> SHOW GLOBAL STATUS LIKE '%innodb%';
+---------------------------------------+-------------+
| Variable_name | Value |
+---------------------------------------+-------------+
.....
.....
| Innodb_buffer_pool_read_requests | 11054273949 |
| Innodb_buffer_pool_reads | 135237 |
| Innodb_buffer_pool_wait_free | 0 |
....
innodb buffer pool hit ratio = ((Innodb_buffer_pool_read_requests) / (Innodb_buffer_pool_read_requests + Innodb_buffer_pool_reads)) * 100
E.g., for the example above:
hit ratio = (11054273949 / (11054273949 + 135237)) * 100 = 99.99%
I suspect this ratio is much lower in your case.
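The calculation above can be checked with a quick script; the counter values are taken from the SHOW GLOBAL STATUS output shown here, not from the asker's server:

```python
# Buffer pool hit ratio from the SHOW GLOBAL STATUS counters above.
read_requests = 11_054_273_949  # Innodb_buffer_pool_read_requests (logical reads)
disk_reads = 135_237            # Innodb_buffer_pool_reads (misses served from disk)

hit_ratio = read_requests / (read_requests + disk_reads) * 100
print(f"buffer pool hit ratio: {hit_ratio:.4f}%")  # just under 100%
```

A ratio well below 99% under a read-heavy workload is a strong sign the buffer pool is too small.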
"the majority is SELECT"
If most of your queries are SELECTs and updates are rare, I think increasing query_cache_size
will be very helpful for you.
Could you post your query cache status, as follows?
mysql> show global status like 'Qc%';
+-------------------------+------------+
| Variable_name | Value |
+-------------------------+------------+
| Qcache_free_blocks | 13 |
| Qcache_free_memory | 1073403104 |
| Qcache_hits | 217949 |
| Qcache_inserts | 337009 |
| Qcache_lowmem_prunes | 0 |
| Qcache_not_cached | 2122598 |
| Qcache_queries_in_cache | 68 |
| Qcache_total_blocks | 167 |
+-------------------------+------------+
mysql> show global status like 'com_select%';
+---------------+---------+
| Variable_name | Value |
+---------------+---------+
| Com_select | 3292531 |
+---------------+---------+
1 row in set (0.00 sec)
query cache hit ratio = ((Qcache_hits) / (Qcache_hits + Com_select)) * 100
First, figure out your query cache hit ratio.
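Applying that formula to the sample counters above (again, these are example values from my server, not the asker's) gives:

```python
# Query cache hit ratio from the Qcache/Com_select counters above.
qcache_hits = 217_949   # Qcache_hits (SELECTs answered from the cache)
com_select = 3_292_531  # Com_select (SELECTs that missed or bypassed the cache)

hit_ratio = qcache_hits / (qcache_hits + com_select) * 100
print(f"query cache hit ratio: {hit_ratio:.2f}%")  # ~6.21% for these sample values
```

A low ratio like this usually means many queries are uncacheable (Qcache_not_cached is high in the sample output) or the result set churns too fast to be reused.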
Upvotes: 7