Alex Macedonsky

Reputation: 37

Optimizing my.cnf for 100+ WP databases and low RAM usage

I have a VDS with 6 cores and 10GB RAM running 150+ WordPress sites. All of this runs on CentOS 7 with the ISPManager web panel and MariaDB 10.3 in a Docker container. The sites are not high-load, but about 3 times per week the container shuts down because RAM runs low (the OOM Killer kills it). What my.cnf parameters can you recommend for optimal, low RAM usage? How can I stop these periodic container deaths caused by the OOM Killer? Maybe some system tweaks for CentOS 7? Thanks, and sorry for any mistakes.

Upvotes: 0

Views: 290

Answers (1)

Rick James

Reputation: 142228

There are many settings that can be decreased to help. innodb_buffer_pool_size is usually the biggest, hence the first one to tweak.

10GB of RAM -- What is in that? One MariaDB instance (with 150+ databases)? Plus WP app code?

Lower table_open_cache a little.
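As a starting point, here is a minimal sketch of low-memory overrides for those two settings, assuming the container reads extra files from /etc/my.cnf.d/ (that path and the numbers are only examples; size the buffer pool to your data and watch the effect):

    # Hypothetical low-memory overrides; use whatever config file your container actually reads.
    cat > /etc/my.cnf.d/low-mem.cnf <<'EOF'
    [mysqld]
    # Biggest single consumer of RAM; shrink it first, then watch performance.
    innodb_buffer_pool_size = 512M
    # Lower the table cache a little; the usual default is 2000.
    table_open_cache = 1000
    EOF
    # Restart the MariaDB container so the new file is picked up.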

Please provide the GLOBAL STATUS and VARIABLEs as discussed here: http://mysql.rjweb.org/doc.php/mysql_analysis#tuning
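One way to capture those two lists for posting (the output file names are just examples):

    # Capture the status counters and settings the linked page asks for.
    mysql -e "SHOW GLOBAL STATUS;"    > global_status.txt
    mysql -e "SHOW GLOBAL VARIABLES;" > global_variables.txt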

swappiness = 1 may help.

(from Comments)

MySQL/MariaDB would probably never need 100 cores; 3 cores is probably fine.

100 connections would probably never be active at the same time; max_connections is the configurable limit, 151 is the default (in some versions).
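If you do want to cap it lower, a sketch (50 is only an example value; keep it comfortably above what the WP sites actually open, since each connection also carries per-thread buffers):

    # Lower the connection cap at runtime, then persist it in my.cnf.
    mysql -e "SET GLOBAL max_connections = 50;"
    # my.cnf equivalent, under [mysqld]:  max_connections = 50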

WordPress has some inefficient indexes on its tables; this costs extra CPU and/or I/O time. That can be improved upon with this plugin: WP Index Improvements

I think you have found the only tunable (innodb_flush_log_at_trx_commit) that provides a tradeoff between speed and data loss. Even there, the potential for loss is only during a crash.
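If that trade-off is acceptable to you, a sketch of relaxing it (with a value of 2 the log is flushed about once per second, so roughly the last second of transactions can be lost in a crash):

    # Hypothetical: trade a little crash-durability for less I/O.
    mysql -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2;"
    # Persist it under [mysqld]:  innodb_flush_log_at_trx_commit = 2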

What was your setting for swappiness? If it was "0", or if there is no swap space allocated, then the system is configured to kill a process rather than crash whenever RAM is being over-committed. A value of "1" says to delay swapping until you really have to. I think "1" is optimal when a database is one of the main processes on the system.
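On CentOS 7 that looks roughly like this (the file name under /etc/sysctl.d/ is arbitrary):

    # Check the current value, then set swappiness to 1 for this boot.
    cat /proc/sys/vm/swappiness
    sysctl -w vm.swappiness=1
    # Persist it across reboots.
    echo 'vm.swappiness = 1' > /etc/sysctl.d/99-swappiness.conf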

Allocate some swap space if there is none now. Use top or htop to monitor RAM, CPU, I/O, and swap usage. I find that "normally" there is more RAM and swap being used than I can explain.
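A sketch of adding a swap file on CentOS 7, assuming none exists yet (2G is just an example size):

    # Create and enable a 2G swap file.
    dd if=/dev/zero of=/swapfile bs=1M count=2048
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    # Persist across reboots.
    echo '/swapfile none swap defaults 0 0' >> /etc/fstab
    # Then watch usage with top/htop, or:
    free -m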

If CPU usage (or the "Load Average") is high, I suggest capturing the SlowLog. Let's look at that to see which queries are consuming the most elapsed time.
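A sketch of turning the slow log on at runtime (the 2-second threshold and the log path are only examples; check slow_query_log_file for the real location):

    # Enable the slow log without a restart.
    mysql -e "SET GLOBAL slow_query_log = ON; SET GLOBAL long_query_time = 2;"
    mysql -e "SHOW GLOBAL VARIABLES LIKE 'slow_query_log_file';"
    # Summarize the worst offenders by total time, e.g.:
    mysqldumpslow -s t /var/lib/mysql/<hostname>-slow.log | head -40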

A common CPU-waster (and/or I/O-waster) is the gratuitous calculation of "total number of rows" that a query would return. The slowlog would point out which tables are big and wasting time on such calculations. Then consider whether you can live without the value. (Notice that search engines don't say "... out of [exactly] 987,654,321 matches". There's a performance reason for that!)
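For illustration, in WordPress this typically shows up as SQL_CALC_FOUND_ROWS on the main posts query (the table and column names below are the standard WP schema; "your_wp_db" is a placeholder, and the queries are a simplified sketch):

    # Page 1 of posts, plus a count of every matching row -- the count is the expensive part:
    mysql your_wp_db -e "SELECT SQL_CALC_FOUND_ROWS ID FROM wp_posts
        WHERE post_status='publish' ORDER BY post_date DESC LIMIT 10;"
    # The same page without the gratuitous total:
    mysql your_wp_db -e "SELECT ID FROM wp_posts
        WHERE post_status='publish' ORDER BY post_date DESC LIMIT 10;"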

Upvotes: 1
