Reputation: 171
I have a Cassandra DB modeled in such a way that the data_time is the row key. The row key is in the format (yyyy_mm_dd_hh). This has been modeled as per the application's needs.
There might be around 700K rows with the same row key, and when I try to delete the rows I get an rpc_timeout exception when I query again. When I searched, I found that it may be because the SSTable gets corrupted. I also do not want to run nodetool, because the deletion will be automated through a batch job.
I tried using the Astyanax API from Netflix, but no luck. I am trying to delete records using a plain delete query from Java.
Could anyone please help me with this?
Upvotes: 2
Views: 805
Reputation: 16393
The issue with your deletes (and reads, for that matter) is that you are performing a huge request that is not completing within the Cassandra timeout (the default is 10 seconds).
Instead, try to narrow down the number of rows that you are deleting by specifying the uuid together with the rowkey.
So instead of:
cqlsh> DELETE FROM user_events WHERE rowkey='2015_02_08_14' ;
try this:
cqlsh> DELETE FROM user_events WHERE rowkey='2015_02_08_14' AND uuid = '5ee9d850-af44-11e4-9822-12e3f512a338';
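Since you want to drive this from Java, here is a minimal sketch of the same idea using the DataStax Java driver (2.x) instead of Astyanax: it pages through the uuids in the partition and deletes them one at a time, so no single request blows past the timeout. The keyspace name my_keyspace, the contact point, and the column types (rowkey as text, uuid as a timeuuid) are assumptions; adjust them to your actual schema.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class UserEventsCleaner {
    public static void main(String[] args) {
        // Contact point and keyspace are placeholders; use your own.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("my_keyspace");

        String partition = "2015_02_08_14";

        // Page through the uuids instead of pulling all ~700K rows in one request.
        Statement select = new SimpleStatement(
                "SELECT uuid FROM user_events WHERE rowkey = ?", partition);
        select.setFetchSize(1000); // 1000 rows per page keeps each read small

        PreparedStatement delete = session.prepare(
                "DELETE FROM user_events WHERE rowkey = ? AND uuid = ?");

        // Each delete targets a single row, so it completes well within the timeout.
        ResultSet rs = session.execute(select);
        for (Row row : rs) {
            session.execute(delete.bind(partition, row.getUUID("uuid")));
        }

        cluster.close();
    }
}

The driver fetches the next page transparently as the loop iterates, so the read side stays within the timeout as well; you can also group the deletes into small batches if round trips become the bottleneck.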
Upvotes: 1