Reputation: 403
I am currently "trying" to create a sound model for my data in DSE Graph, but it currently has no data, just an unfinished schema definition.
I haven't managed to check on it for a couple of days, and I have just found that my cluster is now consuming four times the disk space it was when I last looked at it.
I am not so worried about the capacity of the cluster - I have plenty to go around - and there are plenty of Q&As about reclaiming Cassandra disk space, too. But again, I have no data; there is nothing to delete and reclaim.
So my question is: how can my cluster be consuming 220 MB per node (give or take 15 MB, according to OpsCenter)? And, more worrying, two and a bit days ago it told me it was consuming 240 MB for the whole cluster (which I still thought was ridiculously too much, given I have only one user-created keyspace - the graph - with no data).
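For reference, this is roughly how I am checking which keyspaces the space is actually going to on a node (assuming the default DSE data directory; adjust the path if yours differs):

    # rough sizes of each keyspace's data directory on this node
    sudo du -sh /var/lib/cassandra/data/*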
I am worried about:
As always - Thanks!
-Gavin.
Upvotes: 0
Views: 139
Reputation: 171
You mention that you are using OpsCenter. OpsCenter collects metrics about your cluster and stores them in tables in its own keyspace on the cluster it monitors. When you start with an empty cluster, the OpsCenter metrics can quickly take up more room than your data, but on a normal production cluster the OpsCenter data becomes much less of a factor. You should first find out whether OpsCenter is the culprit by running nodetool cfstats (example below) and looking at "Space used (total)" for the OpsCenter keyspace. If it is, you have three choices:
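Something like this (a rough sketch; it assumes nodetool is on your PATH and that OpsCenter is writing to its default keyspace, named OpsCenter) will show how much space its tables take on a node:

    # space used by OpsCenter's metric tables on this node
    nodetool cfstats OpsCenter | grep -E 'Table:|Column Family:|Space used \(total\)'

    # compare with your own (empty) graph keyspace
    nodetool cfstats <your_graph_keyspace> | grep 'Space used (total)'

(Older Cassandra versions label each table as "Column Family:" rather than "Table:", hence the two patterns.)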
1- Live with it. Now that you know the source, it may not be a bad thing.
2- Reduce the data OpsCenter collects, or how long it keeps it (a rough config sketch is below the link).
3- Have OpsCenter store its data in a different cluster.
Here is a link to the latest doc that may help: https://docs.datastax.com/en/opscenter/6.0/opsc/configure/opscConfigureDataCollectionExpiration_c.html
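For option 2, the knobs are described in that doc. As a sketch only (assuming a package install where the per-cluster config lives at /etc/opscenter/clusters/<cluster_name>.conf - verify the option names against the doc for your OpsCenter version), shortening how long rolled-up metrics are kept looks something like:

    # /etc/opscenter/clusters/<cluster_name>.conf
    [cassandra_metrics]
    1min_ttl = 86400      # keep 1-minute samples for 1 day (values in seconds)
    5min_ttl = 259200     # keep 5-minute rollups for 3 days
    2hr_ttl = 604800      # keep 2-hour rollups for 7 days
    24hr_ttl = 2592000    # keep daily rollups for 30 days

Restart opscenterd after editing for the change to take effect.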
Upvotes: 1