Reputation: 119
Host with 32 GB RAM. MySQL 5.7.21 runs in a Docker container started with the following command:
docker run --restart=always --name mydb5721 -v ... -e MYSQL_ROOT_PASSWORD=... -e MYSQL_USER=... -e MYSQL_PASSWORD=... -p ...:3306 --memory 14G --memory-swap 14G --health-interval=10s --health-timeout=10s --health-retries=3 --health-cmd='/bin/bash /var/lib/mysql/healthcheck.sh' mysql:5.7.21 --verbose &
so no swap and a 14 GB memory limit.
Memory usage almost always grows and is never released, even after my large queries have completed, and the container crashes when usage exceeds the 14 GB limit. FLUSH QUERY CACHE never changes the memory graph; FLUSH TABLES usually has little effect, and usage quickly grows again afterwards. When the queries are large and heavy the limit is reached quickly, otherwise slowly.
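For reference, one way to watch where the memory goes while it grows, assuming the container name mydb5721 from the command above (these status counters are available even with performance_schema = 0):

# Container-level memory as the cgroup sees it (this is what the 14G limit applies to)
docker stats --no-stream mydb5721
# InnoDB buffer pool bytes and temp-table counters from inside MySQL (prompts for the root password)
docker exec -it mydb5721 mysql -uroot -p -e "
  SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_bytes_data';
  SHOW GLOBAL STATUS WHERE Variable_name IN ('Open_tables','Created_tmp_tables','Created_tmp_disk_tables');"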
/etc/mysql/mysql.conf.d:
[mysqld]
innodb_force_recovery = 6
user=root
max_allowed_packet=1500M
bind-address=0.0.0.0
key_buffer_size=1024M
max_connections=200
table_open_cache=64
query_cache_limit=4M
query_cache_size=512M
innodb_buffer_pool_size=7G
server-id=2
log-slave-updates=1
innodb_log_buffer_size = 16M
innodb_log_file_size = 250M
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
datadir = /var/lib/mysql
log-error = /var/log/mysql/error.log
symbolic-links=0
skip-host-cache
skip-name-resolve
sql-mode="NO_AUTO_VALUE_ON_ZERO"
innodb_file_per_table = 1
table_open_cache = 2048
innodb_open_files = 2048
sort_buffer_size = 128M
read_buffer_size = 128M
read_rnd_buffer_size = 1M
thread_stack = 128M
query_cache_type = 0
thread_cache_size = 32
max_heap_table_size = 256M
tmp_table_size = 1G
innodb_buffer_pool_instances = 4
innodb_read_io_threads = 8
innodb_write_io_threads = 8
performance_schema = 0
innodb_flush_log_at_trx_commit = 2
slow_query_log=1
long_query_time=40
slow-query-log-file=/var/log/mysql/slow_queries.log
general_log=0
innodb_flush_method=O_DIRECT
innodb_tmpdir=/tmp
secure-file-priv = ""
My main questions:
- Is Docker to blame or MySQL (because I never have this problem with MySQL outside of Docker)?
- Why is memory never released, even after my queries have completed?
- Would migrating to MariaDB solve this?
Upvotes: 1
Views: 5995
Reputation: 119
I tend to think it was a MySQL 5.x bug, because after I upgraded MySQL to v8.0.12 the situation became good: memory is regularly released as expected, even during heavy queries.
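Roughly how the switch can be done, sketched from the original run command (the -v and -p values are elided as in the question; back up the data volume first, since a 5.7 to 8.0 upgrade rewrites the data dictionary and is not easily reversible, and the query_cache_* lines must be removed from my.cnf because the query cache no longer exists in 8.0):

# Stop and remove the old 5.7.21 container (the data volume on the host is kept)
docker stop mydb5721 && docker rm mydb5721
# Start the same setup on mysql:8.0.12; it upgrades the existing datadir on first start
docker run --restart=always --name mydb8012 -v ... -e MYSQL_ROOT_PASSWORD=... -e MYSQL_USER=... -e MYSQL_PASSWORD=... -p ...:3306 --memory 14G --memory-swap 14G --health-interval=10s --health-timeout=10s --health-retries=3 --health-cmd='/bin/bash /var/lib/mysql/healthcheck.sh' mysql:8.0.12 --verbose &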
Upvotes: 1
Reputation: 2343
Your ulimit -a report indicates an open files limit of 1024. On Linux, ulimit -n 24000 will enable more file handles for MySQL to use.
For the new limit to persist across a Linux shutdown and restart, review this URL:
https://glassonionblog.wordpress.com/2013/01/27/increase-ulimit-and-file-descriptors-limit/
Your details may be slightly different.
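Since MySQL runs in a container here, the limit can also be raised on the container itself; a minimal sketch using standard docker run options (values illustrative, remaining options as in the question):

# Raise the file-descriptor limit for processes in the container at run time
docker run --ulimit nofile=24000:24000 ... mysql:5.7.21
# Verify from inside the running container
docker exec mydb5721 sh -c 'ulimit -n'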
Review my recent COMMENT about DOCKER missing CLOSE() to release resources.
Rate Per Second = RPS. Suggestions to consider for your my.cnf [mysqld] section:
# 20180924 1442 mysqlservertuning.com
read_rnd_buffer_size=256K # from 1M to reduce handler_read_rnd_next RPS
read_buffer_size=256K # from 128M will increase handler_read_next RPS
innodb_io_capacity_max=20000 # from 2000 to take advantage of SSD IOPS capacity
innodb_io_capacity=10000 # from 200 to take advantage of SSD IOPS capacity
tmp_table_size=256M # from 1G to be matched to max_heap_table_size
key_buffer_size=64M # from 1G to conserve RAM, key_blocks_used is less than 1%
innodb_lru_scan_depth=100 # from 1024 to conserve CPU cycles every SECOND
innodb_thread_concurrency=24 # from 0 (unlimited) for your 16 cpu's reported by iostat
table_open_cache=4096 # from 2048 to reduce opened_tables count
table_definition_cache=2048 # from 1424 to reduce opened_table_definitions count
query_cache_size=0 # from ~512M because query_cache_type=0 (off) to conserve RAM
query_cache_limit=0 # from ~4M because query_cache_type=0 (off) to conserve RAM
Considering your situation: back up your current my.cnf, copy the whole block above (starting with the # date / time / web address comment) to the END of your [mysqld] section, prefix each SAME NAMED existing variable with "# " (hash and a space) to disable that line in favour of the new block of variables, then stop and start the service.
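A sketch of that procedure for this Docker setup, assuming the /etc/mysql/mysql.conf.d directory is bind-mounted into the container through the -v option elided in the question (the host path below is illustrative):

# Back up the current config before touching it
cp /host/path/mysql.conf.d/mysqld.cnf /host/path/mysql.conf.d/mysqld.cnf.bak
# Comment out the old same-named variables ("# variable=..."), append the new block, then restart
docker restart mydb5721
# Confirm the new values were picked up (prompts for the root password)
docker exec -it mydb5721 mysql -uroot -p -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN ('read_buffer_size','tmp_table_size','innodb_io_capacity');"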
Until the Docker world gets the CLOSE() in place, you will still have crashes, just a little later in the day. Many of your PER CONNECTION variables were OVERPROVISIONED; some may still be, but they are likely survivable.
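To see why the per-connection variables matter, a rough back-of-envelope worst case from the posted my.cnf (actual usage is workload-dependent, since per-connection buffers are only allocated when a connection needs them):

# Server-wide allocations (MB): innodb_buffer_pool + key_buffer + query_cache
BASE=$(( 7*1024 + 1024 + 512 ))
# Worst case per connection (MB): sort_buffer + read_buffer + read_rnd_buffer + thread_stack
PER_CONN=$(( 128 + 128 + 1 + 128 ))
MAX_CONN=200
echo "base: ${BASE} MB, per-connection worst case: $(( PER_CONN * MAX_CONN )) MB"
# base: 8704 MB, per-connection worst case: 77000 MB -- far above the 14G container limit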
For additional suggestions, please view my profile and Network profile for contact information, including my Skype ID. I look forward to hearing from you.
Upvotes: 1