Reputation: 31
We're experiencing problems writing data to a Cassandra table.
The flow is as follows: we delete all records from XXX with a given primary key, then insert new ones in a loop.
execute("DELETE FROM XXX WHERE key = {SOME_UUID}");
for(int i = 0; i < 5; ++i) {
execute("INSERT INTO XXX (key, field1, field2) VALUES ({SOME UUID},'field1','field2')";
}
The result: sometimes not all rows are inserted. After querying the table we see that some of the new rows are missing.
The environment we have:
We use DataStax Enterprise Edition 4.5.2 (Cassandra 2.0.10).
The datacenter has 4 nodes, and the keyspace we work on has replication_factor set to 3.
The queries are executed with CONSISTENCY_LEVEL set to LOCAL_QUORUM.
The Java driver is the DataStax Enterprise driver, version 2.1.1.
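In case it is relevant, the consistency level is applied per statement, along these lines (simplified sketch; someUuid and session stand in for our actual objects):

// Classes are from com.datastax.driver.core; someUuid/session are placeholders.
Statement stmt = new SimpleStatement("DELETE FROM XXX WHERE key = ?", someUuid)
        .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
session.execute(stmt);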
Thanks in advance. Any help would be appreciated.
Upvotes: 1
Views: 1407
Reputation: 10721
I assume in your example that SOME_UUID is the same for the delete and the insert.
It's probably a race condition between the delete (tombstone) and the new inserts being propagated to all the nodes (per your replication factor). If the delete and insert are marked with the same timestamp, the delete will win. You may have a case where on some nodes the delete wins, and on others the insert wins.
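If the equal-timestamp case is what you're hitting, one workaround is to set the write timestamps explicitly so the inserts are strictly newer than the delete. A minimal sketch against the table from the question (someUuid and session are placeholders for your own objects):

// Cassandra write timestamps are in microseconds since the epoch.
long deleteTs = System.currentTimeMillis() * 1000;
session.execute("DELETE FROM XXX USING TIMESTAMP " + deleteTs + " WHERE key = ?", someUuid);

// Any value strictly greater than deleteTs guarantees the inserts win reconciliation.
long insertTs = deleteTs + 1;
for (int i = 0; i < 5; ++i) {
    session.execute("INSERT INTO XXX (key, field1, field2) VALUES (?, 'field1', 'field2')"
            + " USING TIMESTAMP " + insertTs, someUuid);
}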
You could try lowering RF to 1, as @BryceAtNetwork23 suggested.
Another test would be to insert a delay (like 500ms) in your sample code between the delete and the insert for loop. That would give time for the delete to propagate before the inserts come through.
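Something like this (same placeholders as above; note that Thread.sleep must be handled or declared):

session.execute("DELETE FROM XXX WHERE key = ?", someUuid);

// Give the tombstone time to reach all replicas before the inserts start.
Thread.sleep(500);

for (int i = 0; i < 5; ++i) {
    session.execute("INSERT INTO XXX (key, field1, field2) VALUES (?, 'field1', 'field2')", someUuid);
}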
Depending on your data model, the best solution here might be to avoid the need for the deletes.
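For instance, if the five rows can be addressed by a stable clustering column (a slot index here, purely illustrative), every INSERT in Cassandra is an upsert and will overwrite the previous values in place, with no tombstones involved:

// Illustrative schema: CREATE TABLE XXX (key uuid, slot int, field1 text, field2 text,
//                                        PRIMARY KEY (key, slot));
// Re-inserting the same (key, slot) simply replaces the old row.
for (int i = 0; i < 5; ++i) {
    session.execute("INSERT INTO XXX (key, slot, field1, field2) VALUES (?, ?, 'field1', 'field2')",
            someUuid, i);
}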
Upvotes: 1