JDesuv

Reputation: 1064

Cassandra LOCAL_QUORUM

I'm having trouble understanding / finding information about how the various quorums are calculated in Cassandra.

Let's say I have a 16 node cluster using Network Topology Strategy across 2 data centers. The replication factor is 2 in each datacenter (DC1: 2, DC2: 2).
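For reference, a keyspace with this replication setup would be created along the following lines (the contact point and keyspace name are just placeholders I'm using here):

```python
from cassandra.cluster import Cluster

# Placeholder contact point; any node in either DC can act as coordinator.
cluster = Cluster(['10.0.0.1'])
session = cluster.connect()

# Two replicas per data center, matching the DC1: 2, DC2: 2 example above.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo_ks
    WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'DC1': 2,
        'DC2': 2
    }
""")
```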

In this example, if I write using LOCAL_QUORUM, the data will be written to 4 nodes (2 in each data center), but when does the acknowledgement happen? After 2 nodes in one data center have been written?

In addition, to maintain strong read consistency, I need write nodes + read nodes > replication factor. In the above example, if both reads and writes were LOCAL_QUORUM, I would have 2 + 2 against a total replication factor of 4, which would not guarantee strong read consistency. Am I understanding this correctly? What level would I need to ensure strong read consistency?

The goal here is to ensure that if a data center fails, reads/writes can continue while minimizing latency.

Upvotes: 7

Views: 22172

Answers (3)

harish

Reputation: 43

The client will get the WRITE or READ acknowledgement from the coordinator node once LOCAL_QUORUM has been satisfied in the coordinator's own data center.

https://docs.datastax.com/en/archived/cassandra/3.0/cassandra/dml/dmlClientRequestsMultiDCWrites.html

If the write consistency level is LOCAL_ONE or LOCAL_QUORUM, only the nodes in the same datacenter as the coordinator node must respond to the client request in order for the request to succeed.

Use either LOCAL_ONE or LOCAL_QUORUM to reduce geographical latency and lessen the impact on client write request response times.
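As a minimal sketch (not from the linked docs), this is roughly how a client pins itself to one data center and writes at LOCAL_QUORUM with the DataStax Python driver; the contact point, keyspace, and table are placeholders:

```python
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
from cassandra.policies import DCAwareRoundRobinPolicy

# Route requests to DC1 only, so the coordinator (and its LOCAL_QUORUM) is local.
profile = ExecutionProfile(
    load_balancing_policy=DCAwareRoundRobinPolicy(local_dc='DC1'),
    consistency_level=ConsistencyLevel.LOCAL_QUORUM,
)
cluster = Cluster(['10.0.0.1'], execution_profiles={EXEC_PROFILE_DEFAULT: profile})
session = cluster.connect('demo_ks')

# The coordinator acknowledges once 2 of the 2 DC1 replicas respond;
# the DC2 replicas are still written, but the client does not wait for them.
session.execute(
    "INSERT INTO users (id, name) VALUES (%s, %s)",
    (1, 'alice'),
)
```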

Upvotes: 0

Loic

Reputation: 1248

The previous answer is correct: "The write will be successful after the coordinator has received acknowledgements from 2 nodes in the same DC as the coordinator." It is the same for reads.

The quorum is always calculated as floor(N/2) + 1 (N being the replication factor; for LOCAL_QUORUM, N is the replication factor in the local data center). Using LOCAL_QUORUM avoids the latency of the other data center.

As far as I understand, with an RF of 2 and LOCAL_QUORUM you get better local consistency but no availability in case of failure: if a single node drops, all writes and reads will fail for the token ranges owned by that node and its replica.

Therefore I recommend an RF of 3 if you intend to use quorum consistency levels. With 2 replicas you are better off using ONE.
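To make the arithmetic concrete, here is a small sketch of the quorum formula and how many replica losses each RF tolerates (plain Python, nothing Cassandra-specific):

```python
def local_quorum(rf: int) -> int:
    """Quorum within one data center: floor(RF / 2) + 1."""
    return rf // 2 + 1

for rf in (2, 3):
    q = local_quorum(rf)
    print(f"RF={rf}: LOCAL_QUORUM needs {q} replicas, "
          f"tolerates {rf - q} replica(s) down per token range")

# RF=2: LOCAL_QUORUM needs 2 replicas, tolerates 0 replica(s) down per token range
# RF=3: LOCAL_QUORUM needs 2 replicas, tolerates 1 replica(s) down per token range
```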

Upvotes: 2

Stefan Podkowinski

Reputation: 5249

The write will be successful after the coordinator has received acknowledgements from 2 nodes in the same DC as the coordinator.

Using LOCAL_QUORUM for both reads and writes will get you strong consistency, provided the same DC is used for both reads and writes, and only within that DC.
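A quick way to sanity-check this is the overlap rule applied per data center (a sketch of the arithmetic, not driver code): reads and writes are strongly consistent when read replicas + write replicas > replicas counted against.

```python
def strongly_consistent(read_replicas: int, write_replicas: int, rf: int) -> bool:
    """R + W > RF guarantees the read set overlaps the write set."""
    return read_replicas + write_replicas > rf

rf_per_dc = 2
local_quorum = rf_per_dc // 2 + 1   # = 2

# Within one DC: 2 + 2 > 2, so LOCAL_QUORUM reads see LOCAL_QUORUM writes there.
print(strongly_consistent(local_quorum, local_quorum, rf_per_dc))        # True

# Across both DCs (total RF 4): 2 + 2 > 4 is false, so a read served from the
# other DC is not guaranteed to see the write immediately.
print(strongly_consistent(local_quorum, local_quorum, rf_per_dc * 2))    # False
```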

Upvotes: 9
