Reputation: 10512
In Cassandra, is there any way to define a minimum value for a counter in a counter column? Let's say when the counter reaches 0, it should not go below that even if I run a decrement operation.
Upvotes: 1
Views: 1632
Reputation: 2996
There isn't. Your counter is initialized with a value of 0; you can increment it, decrement it, and query its value. It is a signed 64-bit integer (between -2^63 and 2^63 - 1). Addition / subtraction will overflow when you hit the min / max values.
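To illustrate the overflow behaviour, here is a small Python sketch that simulates signed 64-bit wraparound (the `wrap_int64` helper is hypothetical, just for demonstration; Python's own integers do not overflow):

```python
# Simulate signed 64-bit arithmetic, the way a Cassandra counter would wrap.
def wrap_int64(x):
    x &= (1 << 64) - 1                       # keep only the low 64 bits
    return x - (1 << 64) if x >= (1 << 63) else x

MAX_INT64 = 2**63 - 1
print(wrap_int64(MAX_INT64 + 1))             # wraps around to the minimum, -2^63
```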
If you try to handle the logic in your application, it would be easy if you only had one application writing, but you probably have more than one. It would be doable if your applications ran on the same system, where they could share a lock, but I am guessing that's not the case either, and performance would drop. In a distributed environment you would need a distributed lock, and performance would suffer.
If you really want to achieve this functionality with Cassandra, you can emulate it with the following strategy:
1. Table definition
CREATE TABLE test.counter (
    my_key tinyint,
    my_random uuid,
    my_operation int,
    my_non_key tinyint,
    PRIMARY KEY ((my_key), my_operation, my_random)
);
This table will be used to keep track of the increment / decrement operations you are running. A few notes: my_random is there to make each operation unique; a uuid can be used for that. Without this column, executing the same operation twice (such as incrementing by 10) would be stored only once. Each operation will have its own uuid.
2. Counter initialization
You can initialize your counter, say to zero, with:
INSERT INTO test.counter (my_key, my_random, my_operation, my_non_key) VALUES
(0, 419ec9cc-ef53-4767-942e-7f0bf9c63a9d, 0, 0);
3. Counter increment
Let's say you add some number, such as 10. You would do so by inserting, with the same partition key 0, a new random uuid and a value of 10:
INSERT INTO test.counter (my_key, my_random, my_operation, my_non_key) VALUES
(0, d2c68d2a-9e40-486b-bb69-42a0c1d0c506, 10, 0);
4. Counter decrement
Let's say you subtract 15 now:
INSERT INTO test.counter (my_key, my_random, my_operation, my_non_key) VALUES
(0, e7a5c52c-e1af-408f-960e-e98c48504dac, -15, 0);
5. Counter increment
Let's say you add 1 now:
INSERT INTO test.counter (my_key, my_random, my_operation, my_non_key) VALUES
(0, 980554e6-5918-4c8d-b935-dde74e02109b, 1, 0);
6. Counter query
Now, let's say you want to query your counter. You would need to run:
SELECT my_operation, writetime(my_non_key), my_random FROM test.counter WHERE my_key = 0;
which will return 0, 10, -15, and 1, each with the timestamp at which it was written. Your application now has all the information it needs to calculate the correct value, since it knows the order in which the increment / decrement operations occurred. This matters when the counter crosses zero towards negative values: here, your application should be able to calculate that the right value is 1 (0 + 10 = 10, then 10 - 15 clamps to 0, then 0 + 1 = 1).
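As a sketch of that application-side logic (in Python; the `clamped_counter` helper is hypothetical, and the rows are assumed to have already been fetched by the SELECT above as (writetime, operation) pairs):

```python
# Recompute the counter from its operation log, clamping at a minimum.
# Each entry is (writetime, operation); writetime gives the replay order.
def clamped_counter(rows, minimum=0):
    value = 0
    for _, op in sorted(rows):          # replay the operations in write order
        value = max(minimum, value + op)
    return value

rows = [(1000, 0), (2000, 10), (3000, -15), (4000, 1)]
print(clamped_counter(rows))  # 1
```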
7. Cleaning up
At regular intervals, or when you query the counter, you can combine the stored values together and delete the old rows in a batch statement to ensure atomicity, for example:
BEGIN BATCH
DELETE FROM test.counter WHERE my_key = 0 AND my_operation = -15 AND my_random = e7a5c52c-e1af-408f-960e-e98c48504dac;
DELETE FROM test.counter WHERE my_key = 0 AND my_operation = 0 AND my_random = 419ec9cc-ef53-4767-942e-7f0bf9c63a9d;
DELETE FROM test.counter WHERE my_key = 0 AND my_operation = 10 AND my_random = d2c68d2a-9e40-486b-bb69-42a0c1d0c506;
DELETE FROM test.counter WHERE my_key = 0 AND my_operation = 1 AND my_random = 980554e6-5918-4c8d-b935-dde74e02109b;
INSERT INTO test.counter (my_key, my_random, my_operation, my_non_key) VALUES (0, ca67df54-62c7-4d31-a79c-a0011439b486, 1, 0);
APPLY BATCH;
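To illustrate the clean-up step, here is a hedged Python sketch (the `compact` helper is hypothetical and not part of any driver) that folds the stored operations into one clamped value and builds the corresponding batch statement:

```python
import uuid

# Fold every stored operation into one value, then build the CQL batch
# that deletes the old rows and inserts the single combined row.
# Each entry in `rows` is (writetime, my_operation, my_random).
def compact(rows, key=0, minimum=0):
    value = 0
    for _, op, _ in sorted(rows):       # replay in write order, clamping
        value = max(minimum, value + op)
    stmts = ["BEGIN BATCH"]
    for _, op, rand in rows:
        stmts.append(
            f"DELETE FROM test.counter WHERE my_key = {key} "
            f"AND my_operation = {op} AND my_random = {rand};"
        )
    stmts.append(
        "INSERT INTO test.counter (my_key, my_random, my_operation, my_non_key) "
        f"VALUES ({key}, {uuid.uuid4()}, {value}, 0);"
    )
    stmts.append("APPLY BATCH;")
    return "\n".join(stmts)
```

In a real application you would run this through your driver as a prepared, parameterized batch rather than building strings, but the structure of the statement is the same.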
Final notes
Upvotes: 2