alasarr

Reputation: 1605

Cloud Bigtable minimum recommended table size

According to the Cloud Bigtable performance docs, I should have a certain amount of data to ensure the highest throughput.

Under "Causes of slower performance" it says:

The workload isn't appropriate for Cloud Bigtable. If you test with a small amount (< 300 GB) of data

Does this limit apply to the table's size or to the total size of the instance?

I have a table of 100 GB and another of 1 TB. I want to know whether I should merge them.

Upvotes: 3

Views: 1078

Answers (1)

Billy Jacobson

Reputation: 1703

That limit would appear to apply to the total size of the instance, but you probably don't need to worry about it much unless you are seeing performance issues.

If both tables are on the same instance, the data for each table will be distributed among the nodes you have at the instance level. The Bigtable whitepaper says: "Each table consists of a set of tablets, and each tablet contains all data associated with a row range. Initially, each table consists of just one tablet. As a table grows, it is automatically split into multiple tablets, each approximately 100-200 MB in size by default."
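As a minimal sketch of what "same instance" means in practice, here is how both tables could be created under one instance with the Python client (google-cloud-bigtable); the project, instance, table, and column family names are hypothetical. Every table created this way shares the instance's pool of nodes.

from google.cloud import bigtable
from google.cloud.bigtable import column_family

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")  # both tables share this instance's nodes

# Each table gets its own tablets, but all tablets are served by the
# same nodes configured at the instance level.
for table_id in ("events_100gb", "events_1tb"):
    table = instance.table(table_id)
    if not table.exists():
        table.create(column_families={"cf1": column_family.MaxVersionsGCRule(1)})

print([t.table_id for t in instance.list_tables()])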

The issue with a small set of data is that you are more likely to keep accessing the same rows too frequently. If you are seeing performance issues, you can use Key Visualizer to look for hotspots in your database.
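One common way to reduce that kind of hotspotting is to design row keys so traffic spreads across row ranges. This is only an illustration, not something from the answer; the project, instance, table, and column family names are hypothetical, and it assumes the Python client.

import hashlib

from google.cloud import bigtable

def salted_row_key(user_id: str, num_prefixes: int = 8) -> bytes:
    # Prefix the natural key with a stable hash bucket so sequential IDs
    # land in different row ranges instead of on one hot tablet.
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % num_prefixes
    return f"{bucket:02d}#{user_id}".encode()

client = bigtable.Client(project="my-project")
table = client.instance("my-instance").table("my-table")

row = table.direct_row(salted_row_key("user-12345"))
row.set_cell("cf1", "clicks", b"1")
row.commit()

The trade-off is that range scans over the natural key now require one scan per prefix, so a salted key only makes sense when writes or point reads dominate.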

Upvotes: 3
