Justin

Reputation: 3634

Reduce memory usage in Google Cloud SQL

We have a simple MySQL db with a few tables, managed by Google Cloud SQL. The database has a few hundred million rows, and fairly light reading/writing. We are nowhere near the limits for storage, yet memory usage is climbing linearly with storage, and will soon be an issue for scalability.

Should memory usage be rising linearly with storage? What should we consider to lower our memory usage? We haven't customized any database flags.

I'm not seeing any information about memory usage and best practices in the Google Cloud documentation. It seems way too early for us to be thinking about horizontal scaling/sharding.

Upvotes: 6

Views: 7014

Answers (1)

Royzipuff

Reputation: 507

MySQL will automatically allocate around 80% of the instance's memory to its InnoDB buffer pool, which caches data & indexes. This is in order to avoid heavy I/O operations and provide good response times.

So, by default, MySQL tries to keep as much data as possible in memory, which is why memory usage appears to scale linearly as more data is inserted.
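As a quick check (a sketch, assuming you can connect with the standard mysql client; the host and user shown are placeholders), you can see how large the buffer pool is and how much of it is actually in use:

```shell
# Connect to the Cloud SQL instance (host/user are placeholders) and
# inspect the InnoDB buffer pool. Both are standard MySQL statements.
mysql -h 10.1.2.3 -u root -p -e "
  SHOW VARIABLES LIKE 'innodb_buffer_pool_size';  -- configured size in bytes
  SHOW STATUS LIKE 'Innodb_buffer_pool_pages%';   -- data / free / total pages
"
```

If the pool is mostly full of data pages, memory tracking storage is expected behavior rather than a leak.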

This does not necessarily mean you are close to facing issues; it is really a matter of what resources your MySQL machine has. Give it 128 GB of RAM and insert ~120 GB worth of data, and it will hold ~102 GB (80% of 128 GB) in memory, maintaining performance, while a 64 GB machine with the same 120 GB of data can cache only ~51 GB and will show slower response times.

A side note:

If you store hundreds of millions of rows and are willing to compromise a little on query times (milliseconds to seconds), I would suggest checking out BigQuery. You might end up paying less than for Cloud SQL, and you won't need to worry about scale (ever..) or about DBA / DevOps maintenance (memory, CPU, indexing, etc.).
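For a sense of what that looks like, here is a hypothetical query run through the bq command-line tool that ships with the Google Cloud SDK (the dataset and table names are placeholders). There is no index or memory tuning involved; the trade-off is that even simple queries take seconds rather than milliseconds:

```shell
# Hypothetical example: aggregate a large table with the bq CLI,
# using standard SQL. Dataset/table/column names are placeholders.
bq query --use_legacy_sql=false \
  'SELECT status, COUNT(*) AS n FROM mydataset.mytable GROUP BY status'
```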

Upvotes: 13
