Reputation: 321
For a single query, the MariaDB server takes approximately 1 ms per request. As concurrency increases, the query time for each request also increases, until requests time out. It seems each MySQL server instance can handle at most about 2k connections per second, and no amount of config tweaking has had any effect. Is there any way to reduce the query time for each client to less than 0.1 ms?
This is the query:
select ID from table where id=1;
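For reference, a reproducible way to drive this kind of load is mysqlslap, which ships with MySQL/MariaDB; a minimal sketch (host, credentials, and the schema name are placeholders, and `table` stands in for the real table name):

mysqlslap --host=127.0.0.1 --user=bench --password \
  --create-schema=test \
  --query='select ID from `table` where id=1' \
  --concurrency=200 --number-of-queries=100000 --iterations=3

--concurrency sets the number of simultaneous clients, --number-of-queries is the total across all clients, and the report gives average/min/max run times.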
If it helps, here is the MySQL configuration file:
[client]
port = 3306
socket = /home/user/mysql.sock
[mysqld]
port = 3306
bind-address=127.0.0.1
datadir=/home/user/database
log-error=/home/user/error.log
pid-file=/home/user/mysqld.pid
innodb_file_per_table=1
back_log = 2000
max_connections = 1000000
max_connect_errors = 10
table_open_cache = 2048
max_allowed_packet = 16M
binlog_cache_size = 1M
max_heap_table_size = 64M
read_buffer_size = 2M
read_rnd_buffer_size = 16M
sort_buffer_size = 8M
join_buffer_size = 8M
thread_cache_size = 8
thread_concurrency = 200
query_cache_size = 64M
query_cache_type = 1 #My settings
innodb_io_capacity = 100000
query_cache_limit = 2M
ft_min_word_len = 4
default-storage-engine = innodb
thread_stack = 240K
transaction_isolation = REPEATABLE-READ
tmp_table_size = 64M
log-bin=mysql-bin
binlog_format=mixed
slow_query_log
long_query_time = 2
server-id = 1
key_buffer_size = 32M
bulk_insert_buffer_size = 64M
myisam_sort_buffer_size = 128M
myisam_max_sort_file_size = 10G
myisam_repair_threads = 1
myisam_recover
innodb_buffer_pool_size = 2G
innodb_data_file_path = ibdata1:10M:autoextend
innodb_doublewrite = 0
sync_binlog=0
skip_name_resolve
innodb_write_io_threads = 500
innodb_read_io_threads = 500
innodb_thread_concurrency = 1000
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 8M
innodb_log_file_size = 256M
innodb_log_files_in_group = 3
innodb_max_dirty_pages_pct = 90
innodb_lock_wait_timeout = 120
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[myisamchk]
key_buffer_size = 512M
sort_buffer_size = 512M
read_buffer = 8M
write_buffer = 8M
[mysqlhotcopy]
interactive-timeout
[mysqld_safe]
open-files-limit = 81920
HW
2x Intel Xeon 2670, 32 GB RAM, 500 GB Samsung 850 EVO SSD
Detour
While it's true that MySQL can do more than 1 million queries per second, that benchmark used only 250 connected clients.
Upvotes: 0
Views: 1196
Reputation: 142238
Your machine has 4 cores, correct? So, if you run more than 4 CPU-bound processes simultaneously, the CPU will be saturated. This implies that each thread will be interrupted to let other threads run. That is, latency increases.
Is your goal to shrink the time taken by the average query, that is, latency? Then more connections will not help.
If your goal is queries/second (throughput), then, again, you will be stopped once the CPUs are saturated; that will probably happen before you get to 8 connections. As a back-of-envelope check: at roughly 1 ms per query per core, 4 cores top out near 4,000 queries/second, no matter how many connections you open. After the CPU is saturated, throughput levels off even as you increase the number of connections. But, as I already said, latency for individual queries will increase.
If you want to push the machine, run multiple queries in each connection. Otherwise, you are mostly timing the connection handling, which is not a useful metric.
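To see how much of the ~1 ms is connection and network overhead rather than the lookup itself, one crude in-session check is BENCHMARK(), which evaluates an expression repeatedly inside the current connection, with no connect/disconnect or per-query round trip (again, `table` is a stand-in):

-- Evaluate the lookup 1,000,000 times in one session;
-- the mysql client prints the total elapsed time.
SELECT BENCHMARK(1000000, (SELECT ID FROM `table` WHERE id = 1));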
If you add more servers (via Replication, Clustering, etc), you can run more queries/second. Ditto for more cores. But nothing will decrease the time taken for an individual query by much.
In the settings, max_connections = 1000000 is ludicrous, and may consume a lot of RAM. As I have said, 8 connections might be all that your benchmarking can usefully drive.
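As a sketch, more conservative values might look like this (the numbers are illustrative starting points, not tuned recommendations):

[mysqld]
max_connections   = 100   # ample for a handful of busy threads plus admin sessions
thread_cache_size = 16    # reuse server threads instead of creating one per connect
back_log          = 100   # pending-connection queue; 2000 is far beyond what is needed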
Another setting: having the Query Cache turned on is deceptive. It speeds up re-running the identical SELECT, provided the table in question has not changed. That is, the first run of a query might take 1.0 ms; all subsequent runs of the same query might take 0.1 ms. That is not a very exciting finding. Execute a query twice; that will show you everything there is to learn, without firing up any benchmark platform.
But most Production machines find the QC to be useless, because the data keeps changing, and any change makes the cached results out of date. In fact, the cost of "purging" the QC may make queries run slower!
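You can watch this behavior with the Qcache status counters (present when the Query Cache is compiled in and enabled, as in the configuration above; `table` is a stand-in):

SHOW GLOBAL STATUS LIKE 'Qcache_hits';   -- note the starting value
SELECT ID FROM `table` WHERE id = 1;     -- first run: parsed, optimized, executed
SELECT ID FROM `table` WHERE id = 1;     -- byte-identical text: answered from the QC
SHOW GLOBAL STATUS LIKE 'Qcache_hits';   -- incremented by 1
-- Any write to the table throws away every cached result that references it,
-- which is the purge cost mentioned above.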
If you want lots of connections for reading, Replication Slaves can provide an effectively unlimited number of read connections. I used to work with a system with 23 Slaves; that gave 23x the connections. Booking.com has systems with well over 100 Slaves. That's how it can look up hotel availability so fast.
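For completeness, adding a read Slave is mostly configuration; a minimal sketch using the classic (pre-8.0) commands, with host, credentials, and binlog coordinates as placeholders:

# my.cnf on the Slave
[mysqld]
server-id = 2   # must differ from the Master's server-id (1 above)
read_only = 1   # reject writes from ordinary users

-- then, issued on the Slave:
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = '...',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;

Reads can then be spread across the Slave(s); all writes still go to the single Master.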
Please step back and think about what your real goal is. Then we can discuss things further.
Upvotes: 1