Reputation: 3262
I use a timestamp as one of the columns in Cassandra (I chose it as the clustering key). What is the right way to store a timestamp column in Cassandra?
That is, is it fine to use the milliseconds value (example: 1513078338560) directly, like below?
INSERT INTO testdata (nodeIp, totalCapacity, physicalUsage, readIOPS, readBW, writeIOPS, writeBW, writeLatency, flashMode, timestamp) VALUES ('172.30.56.60', 1, 1, 1, 1, 1, 1, 1, 'yes', 1513078338560);
Or should I use dateof(now())?
INSERT INTO testdata (nodeIp, totalCapacity, physicalUsage, readIOPS, readBW, writeIOPS, writeBW, writeLatency, flashMode, timestamp) VALUES ('172.30.56.60', 1, 1, 1, 1, 1, 1, 1, 'yes', dateof(now()));
Which is the faster and recommended way for timestamp-based queries in Cassandra?
NOTE: I know it is stored internally as milliseconds; I checked with 'SELECT timestamp, blobAsBigint(timestampAsBlob(timestamp)) FROM'
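Since a CQL timestamp is internally just the number of milliseconds since the Unix epoch (UTC), the literal 1513078338560 and a human-readable date are two encodings of the same value. A quick java.time sketch (standalone, outside Cassandra) shows the round trip:

```java
import java.time.Instant;

public class EpochDemo {
    public static void main(String[] args) {
        // A CQL timestamp is millis since the Unix epoch (UTC).
        long millis = 1513078338560L;                    // the literal from the INSERT above
        Instant instant = Instant.ofEpochMilli(millis);  // decode into a point in time
        System.out.println(instant);                     // 2017-12-12T11:32:18.560Z
        // And back again: the round trip is lossless at millisecond precision.
        System.out.println(instant.toEpochMilli());      // 1513078338560
    }
}
```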
Thanks, Harry
Upvotes: 4
Views: 2772
Reputation: 87154
The dateof function is deprecated in Cassandra >= 2.2. Instead, it's better to use the toTimestamp function, like this: toTimestamp(now()). When selecting, you can also use the toUnixTimestamp function if you want to get the timestamp as a long:
cqlsh:test> CREATE TABLE test_times (a int, b timestamp, PRIMARY KEY (a,b));
cqlsh:test> INSERT INTO test_times (a,b) VALUES (1, toTimestamp(now()));
cqlsh:test> SELECT toUnixTimestamp(b) FROM test_times;
system.tounixtimestamp(b)
---------------------------
1513086032267
(1 rows)
cqlsh:test> SELECT b FROM test_times;
b
---------------------------------
2017-12-12 13:40:32.267000+0000
(1 rows)
Regarding performance, there are different considerations. For repeated inserts, prepare the statement once and bind fresh values on every execution, so the server parses the query only a single time. The pseudocode looks as follows (Java-like):
// Prepare once: the query is parsed a single time.
PreparedStatement prepared = session.prepare(
    "INSERT INTO your_table (field1, field2) VALUES (?, ?)");
while (true) {
    // Bind new values on each iteration and execute.
    session.execute(prepared.bind(value1, value2));
}
Upvotes: 3