Reputation: 1
I have a table containing 8 million rows in Cassandra 3.10. I ran the following query:

    select max(glass_id) from poc.dream where time < '2017-04-01 00:00:00+0000' allow filtering;

The time column is the partition key of the table, and the data I selected amounts to 70 million rows. But read performance is bad: it took 2 minutes to get the result.
The primary key is PRIMARY KEY (time, glass_id).
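For reference, the table definition is roughly as follows (the column types here are illustrative, not the exact ones):

    CREATE TABLE poc.dream (
        time timestamp,     -- partition key
        glass_id bigint,    -- clustering key
        PRIMARY KEY (time, glass_id)
    );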
Part of the tracing log is shown below:
Preparing statement [Native-Transport-Requests-1] | 2017-04-07 14:58:55.598000 | 172.19.16.44 | 247 | 127.0.0.1
Computing ranges to query [Native-Transport-Requests-1] | 2017-04-07 14:58:55.599000 | 172.19.16.44 | 771 | 127.0.0.1
RANGE_SLICE message received from /172.19.16.44 [MessagingService-Incoming-/172.19.16.44] | 2017-04-07 14:58:55.599000 | 172.19.20.89 | 40 | 127.0.0.1
Submitting range requests on 769 ranges with a concurrency of 1 (0.0 rows per range expected) [Native-Transport-Requests-1] | 2017-04-07 14:58:55.599000 | 172.19.16.44 | 862 | 127.0.0.1
Enqueuing request to /172.19.20.89 [Native-Transport-Requests-1] | 2017-04-07 14:58:55.599000 | 172.19.16.44 | 881 | 127.0.0.1
Submitted 1 concurrent range requests [Native-Transport-Requests-1] | 2017-04-07 14:58:55.599000 | 172.19.16.44 | 899 | 127.0.0.1
Sending RANGE_SLICE message to /172.19.20.89 [MessagingService-Outgoing-/172.19.20.89-Small] | 2017-04-07 14:58:55.599000 | 172.19.16.44 | 937 | 127.0.0.1
Executing seq scan across 0 sstables for (min(-9223372036854775808), max(-9222003595370030342)] [ReadStage-1] | 2017-04-07 14:58:55.600000 | 172.19.20.89 | 369 | 127.0.0.1
Read 0 live and 0 tombstone cells [ReadStage-1] | 2017-04-07 14:58:55.600000 | 172.19.20.89 | 517 | 127.0.0.1
Enqueuing response to /172.19.16.44 [ReadStage-1] | 2017-04-07 14:58:55.600000 | 172.19.20.89 | 575 | 127.0.0.1
Sending REQUEST_RESPONSE message to /172.19.16.44 [MessagingService-Outgoing-/172.19.16.44-Small] | 2017-04-07 14:58:55.600000 | 172.19.20.89 | 977 | 127.0.0.1
REQUEST_RESPONSE message received from /172.19.20.89 [MessagingService-Incoming-/172.19.20.89] | 2017-04-07 14:58:55.601000 | 172.19.16.44 | 3018 | 127.0.0.1
Processing response from /172.19.20.89 [RequestResponseStage-2] | 2017-04-07 14:58:55.601000 | 172.19.16.44 | 3090 | 127.0.0.1
Enqueuing request to /172.19.20.5 [Native-Transport-Requests-1] | 2017-04-07 14:58:55.601000 | 172.19.16.44 | 3168 | 127.0.0.1
Enqueuing request to /172.19.20.5 [Native-Transport-Requests-1] | 2017-04-07 14:58:55.601000 | 172.19.16.44 | 3198 | 127.0.0.1
Enqueuing request to /172.19.20.89 [Native-Transport-Requests-1] | 2017-04-07 14:58:55.601000 | 172.19.16.44 | 3210 | 127.0.0.1
Sending RANGE_SLICE message to /172.19.20.5 [MessagingService-Outgoing-/172.19.20.5-Small] | 2017-04-07 14:58:55.601000 | 172.19.16.44 | 3210 | 127.0.0.1
Enqueuing request to /172.19.20.5 [Native-Transport-Requests-1] | 2017-04-07 14:58:55.601001 | 172.19.16.44 | 3222 | 127.0.0.1
Enqueuing request to /172.19.20.89 [Native-Transport-Requests-1] | 2017-04-07 14:58:55.601001 | 172.19.16.44 | 3230 | 127.0.0.1
Sending RANGE_SLICE message to /172.19.20.89 [MessagingService-Outgoing-/172.19.20.89-Small] | 2017-04-07 14:58:55.601001 | 172.19.16.44 | 3232 | 127.0.0.1
Enqueuing request to /172.19.20.89 [Native-Transport-Requests-1] | 2017-04-07 14:58:55.601001 | 172.19.16.44 | 3240 | 127.0.0.1
Sending RANGE_SLICE message to /172.19.20.5 [MessagingService-Outgoing-/172.19.20.5-Small] | 2017-04-07 14:58:55.601001 | 172.19.16.44 | 3247 | 127.0.0.1
Enqueuing request to /172.19.20.5 [Native-Transport-Requests-1] | 2017-04-07 14:58:55.601001 | 172.19.16.44 | 3249 | 127.0.0.1
Enqueuing request to /172.19.20.89 [Native-Transport-Requests-1] | 2017-04-07 14:58:55.601001 | 172.19.16.44 | 3257 | 127.0.0.1
Enqueuing request to /172.19.20.5 [Native-Transport-Requests-1] | 2017-04-07 14:58:55.601001 | 172.19.16.44 | 3264 | 127.0.0.1
Sending RANGE_SLICE message to /172.19.20.5 [MessagingService-Outgoing-/172.19.20.5-Small] | 2017-04-07 14:58:55.601002 | 172.19.16.44 | 3267 | 127.0.0.1
Executing seq scan across 0 sstables for (max(-9153807552532774465), max(-9087147317664915466)] [ReadStage-2] | 2017-04-07 14:58:55.601002 | 172.19.16.44 | 3273 | 127.0.0.1
Enqueuing request to /172.19.20.89 [Native-Transport-Requests-1] | 2017-04-07 14:58:55.601002 | 172.19.16.44 | 3273 | 127.0.0.1
Sending RANGE_SLICE message to /172.19.20.89 [MessagingService-Outgoing-/172.19.20.89-Small] | 2017-04-07 14:58:55.601002 | 172.19.16.44 | 3274 | 127.0.0.1
Enqueuing request to /172.19.20.89 [Native-Transport-Requests-1] | 2017-04-07 14:58:55.601002 | 172.19.16.44 | 3285 | 127.0.0.1
Executing seq scan across 0 sstables for (max(-9072913252686483927), max(-9071905664525634895)] [ReadStage-4] | 2017-04-07 14:58:55.601002 | 172.19.16.44 | 3284 | 127.0.0.1
Sending RANGE_SLICE message to /172.19.20.5 [MessagingService-Outgoing-/172.19.20.5-Small] | 2017-04-07 14:58:55.601002 | 172.19.16.44 | 3290 | 127.0.0.1
Enqueuing request to /172.19.20.89 [Native-Transport-Requests-1] | 2017-04-07 14:58:55.601002 | 172.19.16.44 | 3294 | 127.0.0.1
Sending RANGE_SLICE message to /172.19.20.89 [MessagingService-Outgoing-/172.19.20.89-Small] | 2017-04-07 14:58:55.601002 | 172.19.16.44 | 3296 | 127.0.0.1
Enqueuing request to /172.19.20.89 [Native-Transport-Requests-1] | 2017-04-07 14:58:55.601002 | 172.19.16.44 | 3304 | 127.0.0.1
Sending RANGE_SLICE message to /172.19.20.5 [MessagingService-Outgoing-/172.19.20.5-Small] | 2017-04-07 14:58:55.601002 | 172.19.16.44 | 3307 | 127.0.0.1
Enqueuing request to /172.19.20.89 [Native-Transport-Requests-1] | 2017-04-07 14:58:55.601002 | 172.19.16.44 | 3313 | 127.0.0.1
Sending RANGE_SLICE message to /172.19.20.89 [MessagingService-Outgoing-/172.19.20.89-Small] | 2017-04-07 14:58:55.601003 | 172.19.16.44 | 3319 | 127.0.0.1
Enqueuing request to /172.19.20.5 [Native-Transport-Requests-1] | 2017-04-07 14:58:55.601003 | 172.19.16.44 | 3321 | 127.0.0.1
Read 0 live and 0 tombstone cells [ReadStage-4] | 2017-04-07 14:58:55.601003 | 172.19.16.44 | 3323 | 127.0.0.1
Read 0 live and 0 tombstone cells [ReadStage-2] | 2017-04-07 14:58:55.601003 | 172.19.16.44 | 3323 | 127.0.0.1
Upvotes: 0
Views: 425
Reputation: 12840
You are doing two things wrong in a single query.
First: you have not specified a partition key, so Cassandra has to execute the query on each and every node.
Second: you are using the aggregate function max(), which scans all the rows (in your case, 70 million rows) just to give you the maximum value.
Instead of running such a query, change your data model so that you can specify a partition key, and make glass_id a clustering key ordered descending. A rough sketch of that remodel follows.
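Something like this (the table name poc.dream_by_day and the day bucket column are placeholders, not your actual schema):

    CREATE TABLE poc.dream_by_day (
        day date,            -- time bucket used as the partition key (placeholder)
        glass_id bigint,
        time timestamp,
        PRIMARY KEY (day, glass_id)
    ) WITH CLUSTERING ORDER BY (glass_id DESC);

    -- Because glass_id is clustered in descending order, the maximum
    -- glass_id within a partition is simply the first row:
    SELECT glass_id FROM poc.dream_by_day WHERE day = '2017-03-31' LIMIT 1;

This turns the full cluster scan into a single-partition read. To cover a whole time range, you would query one bucket per day and take the largest result on the client side.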
Upvotes: 3