Dimaf

Reputation: 683

sstableloader. The error: could not achieve replication factor 2 (found 1 replicas only), check your keyspace replication settings

I am trying to use the sstableloader utility for bulk loading. The SSTables were created with a script similar to https://github.com/yukim/cassandra-bulkload-example/blob/master/src/main/java/bulkload/BulkLoad.java.

When I start sstableloader on node5 [10.0.2.2], I get the following errors:

./sstableloader data/keyspace1/ -d localhost
WARN  14:56:55 Error while computing token map for datacenter dc2: could not achieve replication factor 2 (found 1 replicas only), check your keyspace replication settings.
WARN  14:56:55 Error while computing token map for datacenter dc1: could not achieve replication factor 3 (found 0 replicas only), check your keyspace replication settings.

The Keyspace info:

cqlsh> DESCRIBE keyspace1;
CREATE KEYSPACE keyspace1 WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': '3', 'dc2': '2'}  AND durable_writes = true;

The nodes info:

./nodetool status keyspace1
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens       Owns (effective)  Host ID                               Rack
UN  10.0.1.1  286.11 KB  256          100.0%            504effa7-bf46-48c6-af80-3fe7d43cea4c  r1
UN  10.0.1.2  335.55 KB  256          100.0%            95551193-344b-4672-9803-f8d192210f63  r1
UN  10.0.1.3  476.38 KB  256          100.0%            66f431cc-7843-47ea-81ec-85bc6b7adb34  r2
Datacenter: dc2
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens       Owns (effective)  Host ID                               Rack
UN  10.0.2.1   34.69 GB   256          100.0%            f9f0de46-4c75-4671-a7c4-8d6b5b8b658c  r2
UN  10.0.2.2   31.79 GB   256          49.5%             c3f30f98-bab3-43f0-97e8-d8187556f3a9  r1
UN  10.0.2.3   34.32 GB   256          50.5%             7ebdd58d-dfe5-4abe-a80e-0b63ee57d0d9  r1

./nodetool version
ReleaseVersion: 3.1.1

The node is listening only on localhost:

netstat -na | grep 9042
tcp 0 0 ::ffff:127.0.0.1:9042 :::* LISTEN

cassandra.yaml

cluster_name: 'testCluster1'
num_tokens: 256
seed_provider:
   - class_name: org.apache.cassandra.locator.SimpleSeedProvider
       parameters:
           - seeds: "10.0.1.1,10.0.2.1"
listen_address:
endpoint_snitch: GossipingPropertyFileSnitch

cassandra-rackdc.properties (node1)

dc=dc1
rack=r1

I have also tried running it on node1 (a seed node) of dc1, but got the same result:

./sstableloader -d localhost data/keyspace1/ -f /export/data/cassandra/dc1/r1/node1/conf/cassandra.yaml
WARN 19:17:01 Error while computing token map for datacenter dc2: could not achieve replication factor 2 (found 0 replicas only), check your keyspace replication settings.
WARN 19:17:01 Error while computing token map for datacenter dc1: could not achieve replication factor 3 (found 1 replicas only), check your keyspace replication settings.
WARN 19:17:01 Error while computing token map for datacenter dc2: could not achieve replication factor 1 (found 0 replicas only), check your keyspace replication settings...

Upvotes: 3

Views: 5136

Answers (3)

axel

Reputation: 1

Please also note that when you define your replication factor in cqlsh and specify your data centers, as in:

CREATE KEYSPACE keyspace1 WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': '3', 'dc2': '2'}  AND durable_writes = true;

the data center names are case-sensitive and must match what you have defined in your cassandra-rackdc.properties file:

dc=dc1
rack=rack1

If you get these mixed up, you will see the same error message.
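One way to double-check the names (a sketch, assuming cqlsh can reach the node; system.local is a standard system table) is to compare what the snitch reports with what the keyspace was created with:

```shell
# The data center/rack this node reports via its snitch:
cqlsh -e "SELECT data_center, rack FROM system.local;"

# The data center names the keyspace's replication settings use:
cqlsh -e "DESCRIBE KEYSPACE keyspace1;"
```

The strings in the first query's output must match the keys in the replication map exactly, including case.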

Upvotes: 0

Dimaf

Reputation: 683

My mistakes:

  1. The node has to listen on port 9042 on 0.0.0.0 (or the node's IP), not only on localhost. In cassandra.yaml, change rpc_address: localhost to rpc_address: (blank).
  2. sstableloader must be pointed at the table directory, not the keyspace directory: ./sstableloader data/keyspace1/tablename -d IPADDRESS
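Putting both fixes together, a sketch (the address 10.0.2.2 is the node from the question; "tablename" is a placeholder for the actual table directory):

```shell
# cassandra.yaml: stop binding the native transport to localhost only.
# Leaving rpc_address blank makes Cassandra use the node's hostname;
# alternatively bind all interfaces, which then requires a broadcast address:
#   rpc_address: 0.0.0.0
#   broadcast_rpc_address: 10.0.2.2

# Then point sstableloader at the *table* directory inside the keyspace
# directory, and at an address the node actually listens on:
./sstableloader data/keyspace1/tablename -d 10.0.2.2
```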

Upvotes: 0

Chris Lohfink

Reputation: 16410

sstableloader will kind of start up C* (not in client mode), which requires the node's cassandra.yaml etc. and appropriate seed nodes. Are you by chance running this on one of the dc2 nodes? If the node is not listening on the localhost interface, it probably won't be able to join the ring to know where to stream to, and the non-client mode probably messes up the replica computation. Try providing -d 10.0.1.1,10.0.1.2,10.0.2.1,10.0.2.2 as your seeds.
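For example, a sketch using the node addresses from the question ("tablename" is a placeholder for the actual table directory, and the -f path is an assumed location for the node's config):

```shell
# Give sstableloader several live nodes so it can discover the full ring
# across both data centers, and the node's cassandra.yaml so the snitch
# and cluster settings match.
./sstableloader data/keyspace1/tablename \
    -d 10.0.1.1,10.0.1.2,10.0.2.1,10.0.2.2 \
    -f /etc/cassandra/cassandra.yaml
```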

Upvotes: 2
