seprob

Reputation: 11

Cassandra replica node fails when trying to read from a keyspace

I have 3 nodes in my Cassandra (3.0.2) cluster. My consistency level is ONE. Initially, all my keyspaces had a replication factor of 1. I changed it by altering the table and ran "nodetool repair" on all nodes. Now, when I try to select some data (not from every keyspace), I get something like this (select * from keyspace.table):

Traceback (most recent call last):
  File "/usr/bin/cqlsh.py", line 1258, in perform_simple_statement
    result = future.result()
  File "cassandra/cluster.py", line 3781, in cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:73073)
    raise self._final_exception
ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read] message="Operation failed - received 0 responses and 1 failures" info={'failures': 1, 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}

In "/var/log/cassandra/system.log" I get:

WARN [SharedPool-Worker-2] 2017-04-07 12:46:20,036 AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread Thread[SharedPool-Worker-2,5,main]: {}
java.lang.AssertionError: null
    at org.apache.cassandra.db.columniterator.AbstractSSTableIterator$IndexState.updateBlock(AbstractSSTableIterator.java:463) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.columniterator.SSTableIterator$ForwardIndexedReader.computeNext(SSTableIterator.java:268) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.columniterator.SSTableIterator$ForwardReader.hasNextInternal(SSTableIterator.java:158) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.columniterator.AbstractSSTableIterator$Reader.hasNext(AbstractSSTableIterator.java:352) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.columniterator.AbstractSSTableIterator.hasNext(AbstractSSTableIterator.java:219) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.columniterator.SSTableIterator.hasNext(SSTableIterator.java:32) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:369) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:426) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:286) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:108) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:131) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:298) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:128) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1721) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2375) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_121]
    at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-3.0.2.jar:3.0.2]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
DEBUG [SharedPool-Worker-1] 2017-04-07 12:46:20,037 ReadCallback.java:126 - Failed; received 0 of 1 responses

Also I get:

DEBUG [SharedPool-Worker-1] 2017-04-07 13:20:30,002 ReadCallback.java:126 - Timed out; received 0 of 1 responses

I checked that there is connectivity between the nodes on ports 9042 and 7000. I changed options in "/etc/cassandra/cassandra.yaml" such as "read_request_timeout_in_ms", "range_request_timeout_in_ms", "write_request_timeout_in_ms" and "truncate_request_timeout_in_ms". I also changed the file "~/.cassandra/cqlshrc", setting "client_timeout = 3600". Additionally, when I run for example "select * from keyspace.table where column1 = 'value' and column2 = value" I get:

ReadTimeout: Error from server: code=1200 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
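
For completeness, the change in ~/.cassandra/cqlshrc looks like this (I believe the option sits under the [connection] section):

[connection]
client_timeout = 3600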

Any ideas?

Upvotes: 0

Views: 1513

Answers (3)

seprob

Reputation: 11

Fixed! I upgraded Cassandra from 3.0.2 to 3.0.9 and the problem was solved.
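
Roughly the steps on each Ubuntu node, one node at a time (a sketch from memory, assuming the Apache Debian repository for the 3.0.x line is already configured):

nodetool drain
sudo service cassandra stop
sudo apt-get update
sudo apt-get install cassandra=3.0.9
sudo service cassandra start
nodetool status

I waited for each node to show up as UN in nodetool status before moving on to the next one.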

Upvotes: 0

seprob

Reputation: 11

Marko Švaljek, yes, I changed the replication factor from 1 to 3 (because I have 3 nodes in my cluster). You're right; you change the replication factor on the keyspace, and that's what I did. Here is the description of the keyspace where I usually get the errors (although of course it happens with other keyspaces too):

soi@cqlsh> desc keyspace engine;

CREATE KEYSPACE engine WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'}  AND durable_writes = true;

CREATE TABLE engine.messages (
    persistence_id text,
    partition_nr bigint,
    sequence_nr bigint,
    timestamp timeuuid,
    timebucket text,
    message blob,
    tag1 text,
    tag2 text,
    tag3 text,
    used boolean static,
    PRIMARY KEY ((persistence_id, partition_nr), sequence_nr, timestamp, timebucket)
) WITH CLUSTERING ORDER BY (sequence_nr ASC, timestamp ASC, timebucket ASC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'bucket_high': '1.5', 'bucket_low': '0.5', 'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'enabled': 'true', 'max_threshold': '32', 'min_sstable_size': '50', 'min_threshold': '4', 'tombstone_compaction_interval': '86400', 'tombstone_threshold': '0.2', 'unchecked_tombstone_compaction': 'false'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';

CREATE MATERIALIZED VIEW engine.eventsbytag1 AS
    SELECT tag1, timebucket, timestamp, persistence_id, partition_nr, sequence_nr, message
    FROM engine.messages
    WHERE persistence_id IS NOT NULL AND partition_nr IS NOT NULL AND sequence_nr IS NOT NULL AND tag1 IS NOT NULL AND timestamp IS NOT NULL AND timebucket IS NOT NULL
    PRIMARY KEY ((tag1, timebucket), timestamp, persistence_id, partition_nr, sequence_nr)
    WITH CLUSTERING ORDER BY (timestamp ASC, persistence_id ASC, partition_nr ASC, sequence_nr ASC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';

CREATE TABLE engine.config (
    property text PRIMARY KEY,
    value text
) WITH bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';

CREATE TABLE engine.metadata (
    persistence_id text PRIMARY KEY,
    deleted_to bigint,
    properties map<text, text>
) WITH bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';

Usually I get error code 1200 or 1300, as you can see in the first post. Here is my "nodetool status":

ubuntu@cassandra-db1:~$ nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens       Owns    Host ID                               Rack
UN  192.168.1.13  3.94 MB    256          ?       8ebcc3fe-9869-44c5-b7a5-e4f0f5a0beb1  rack1
UN  192.168.1.14  4.26 MB    256          ?       977831cb-98fe-4170-ab15-2b4447559003  rack1
UN  192.168.1.15  4.94 MB    256          ?       7515a967-cbdc-4d89-989b-c0a2f124173f  rack1

Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless

I don't think any other process could have manipulated data on disk. I'll add that I have a similar cluster with a lot more data, and I don't have problems like this there.

Upvotes: 0

Marko Švaljek

Reputation: 2101

This is more or less a comment, but since there is so much to say, it won't fit into a comment.

It would be very nice to say what replication factor you changed the value to. I'll just assume it was 3 because that's pretty standard; then again, since you only have a cluster of 3, people sometimes set the RF to 2. You also mentioned that you updated the replication factor on a table. As far as I know, the replication factor is set at the keyspace level.
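
For reference, the change is normally made on the keyspace and then followed by a repair on every node. A minimal sketch, assuming RF 3 and a placeholder keyspace name:

ALTER KEYSPACE my_keyspace WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

and afterwards, on each node:

nodetool repair my_keyspace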

It would be very helpful if you posted a description of the keyspaces where the error is occurring.

Take into account that a select * from something can get pretty intensive on your cluster, especially if you have a lot of data. If you run this query in cqlsh you might just get back the first 10 000 rows; then again, you only mentioned cqlsh and no application code, so I'm not sure about this one. A cheaper smoke test is sketched below.
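
Something like this in cqlsh would be enough to check whether reads work at all, without scanning the whole table (using your keyspace.table placeholder):

CONSISTENCY ONE;
SELECT * FROM keyspace.table LIMIT 100;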

Could you please provide nodetool status, just to make sure you aren't actually running the query with some nodes down? That is what the first error looks like.

Judging by the second error, where you posted a stack trace, it looks like you might be missing some SSTables on disk. Is there a chance that some other process has somehow manipulated the SSTables in some way?

You also changed a lot of properties in cassandra.yaml; basically you reduced the expected response times by almost 50%, and I guess it's no wonder the nodes don't have time to respond... a count over a whole table usually takes more than 3.6 seconds.
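
For comparison, the stock defaults for those settings in the 3.0.x line are, if I remember correctly:

read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 2000
truncate_request_timeout_in_ms: 60000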

The reasoning behind changing those values is simply missing.

Upvotes: 0
