Reputation: 123
I have a problem with Cassandra's consistency. I have 3 Cassandra nodes (version 2.0.14.352) in the cluster, and I am reading and writing with consistency level QUORUM; my replication factor is 3. If I understand this correctly, Cassandra should be consistent in my case, because 2 + 2 > 3. But I wrote a test in Java where I insert some data very quickly into Cassandra using the DataStax driver:
final Instant t1 = Instant.parse("2000-01-01T00:00:00.000Z");
final Instant t2 = Instant.parse("2000-02-01T00:00:00.000Z");
for (int i = 0; i < 100; i++) {
    dataProvider.setValue(t1, new Double(1));
    // If the next line is removed, the test will pass
    dataProvider.setValue(t2, new Double(3));
    dataProvider.saveToDB();
    dataProvider.clear();
    assertEquals("i=" + i, new Double(3), dataProvider.getValue(t2));
    assertEquals("i=" + i, new Double(1), dataProvider.getValue(t1));
    dataProvider.setValue(t1, new Double(2));
    dataProvider.saveToDB();
    dataProvider.clear();
    assertEquals("i=" + i, new Double(2), dataProvider.getValue(t1));
    dataProvider.setValue(t1, new Double(101));
    dataProvider.saveToDB();
    dataProvider.clear();
    assertEquals("i=" + i, new Double(101), dataProvider.getValue(t1));
}
with the corresponding table
CREATE TABLE keyspace.table (
    id text,
    year int,
    month int,
    time timestamp,
    value double,
    PRIMARY KEY ((id, year, month), time)
)
dataProvider.setValue() internally puts the given value into a NavigableMap. dataProvider.saveToDB() inserts the data into Cassandra. Here I tried, on the one hand, to insert the data asynchronously and wait until all ResultSetFutures finished, and on the other hand to execute the statements synchronously. But this affected only the performance. In detail, the save method looks like
final List<ResultSetFuture> sets = newLinkedList();
Batch batch = QueryBuilder.batch();
int batchsize = 0;
for (Map.Entry<Instant, Double> entry : valueMap.entrySet()) {
    final Instant instant = entry.getKey();
    final ZonedDateTime zonedDateTime = instant.atZone(ZoneId.of("UTC"));
    final Date date = Date.from(instant);
    final Insert insert = QueryBuilder.insertInto(table)
            .value(ID, id)
            .value(YEAR, zonedDateTime.getYear())
            .value(MONTH, zonedDateTime.getMonthValue())
            .value(TIME, date)
            .value(VALUE, entry.getValue());
    batch.add(insert);
    ++batchsize;
    if (batchsize % 200 == 0) {
        sets.add(cassandraConnector.executeAsync(batch));
        batch = QueryBuilder.batch();
    }
}
if (batchsize % 200 != 0) { // there are still statements that have not been sent
    sets.add(cassandraConnector.executeAsync(batch));
}
cassandraConnector.waitForFinish(sets);
cassandraConnector manages the connection. I wait until all ResultSetFutures have finished with
public boolean waitForFinish(List<ResultSetFuture> sets) {
    ResultSet result = null;
    for (final ResultSetFuture resultSetFuture : sets) {
        // Wait until finished
        try {
            result = resultSetFuture.get();
        } catch (InterruptedException e) {
            resultSetFuture.cancel(true);
            e.printStackTrace();
            return false;
        } catch (ExecutionException e) {
            e.printStackTrace();
            if (result != null) {
                ExecutionInfo executionInfo = result.getExecutionInfo();
                System.out.println("Timeout from server with IP: " + executionInfo.getTriedHosts());
            }
            return false;
        }
    }
    return true;
}
The curious thing is that if I remove the line under the comment, the test passes, no matter how often I execute it. But if I run the test without removing the line, it sometimes fails in the first loop iteration, and sometimes it runs 3 iterations before failing. Furthermore, it always fails at different lines. For example
java.lang.AssertionError: i=0
Expected :101
Actual :2
I also got
java.lang.AssertionError: i=2
Expected :2
Actual :101
So it seems that Cassandra wrote the 1, and after that, instead of writing the 2, Cassandra recovered the 101 I wrote before the 1. Does anyone have an explanation for this behavior? Why does the test pass if I remove the line? I am writing to different partitions. I tried changing the consistency level to ALL, but the behavior didn't change.
Upvotes: 3
Views: 652
Reputation: 123
I solved it. Apparently the clocks are not 100% synchronized. When I create the insert statement, I add .using(timestamp(System.nanoTime() / 1000)); and now the test passes.
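This matches Cassandra's last-write-wins semantics: without an explicit timestamp, the coordinator handling each write assigns one from its own clock, so with skewed clocks a later write can receive an earlier timestamp and lose. One caveat with the fix above: System.nanoTime() has an arbitrary origin and is only specified for measuring elapsed time, so the resulting timestamps are not epoch-based. A safer client-side source is a monotonic generator on top of the wall clock (newer DataStax drivers ship one along these lines as AtomicMonotonicTimestampGenerator). A minimal self-contained sketch, with a class name of my own choosing:

```java
import java.util.concurrent.atomic.AtomicLong;

// Produces strictly increasing epoch-microsecond timestamps, suitable for
// .using(timestamp(...)): never repeats or goes backwards, even when several
// writes happen within the same millisecond.
public class MonotonicMicros {
    private static final AtomicLong LAST = new AtomicLong(0L);

    public static long next() {
        while (true) {
            long nowMicros = System.currentTimeMillis() * 1000L; // epoch micros
            long prev = LAST.get();
            long next = Math.max(nowMicros, prev + 1); // force strict increase
            if (LAST.compareAndSet(prev, next)) {
                return next;
            }
        }
    }

    public static void main(String[] args) {
        long a = next();
        long b = next();
        System.out.println(b > a); // true, even within the same millisecond
    }
}
```

The insert would then use .using(timestamp(MonotonicMicros.next())) instead of the nanoTime variant, giving timestamps that are both monotonic per client and anchored to real epoch time.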
Upvotes: 1