mchapala

Reputation: 101

Cassandra 2.0 multi-region nodes

We are trying to set up a multi-region Cassandra cluster on EC2. Our configuration looks like this:

5 nodes each in us-east-1a, us-east-1b, us-east-1c, and us-west-1a (20 nodes total). For this we have modified the cassandra-rackdc.properties file.

We are using GossipingPropertyFileSnitch and have modified the cassandra.yaml file accordingly.

We are using all 20 public IPs in the seeds configuration in cassandra.yaml.

We have commented out the listen_address and rpc_address properties so that Cassandra defaults to using InetAddress.getLocalHost().

We have uncommented broadcast_address and set it to the public IP.

We have modified the agent's address.yaml file to use the public IP address for the stomp_interface and local_interface properties.

We are starting the nodes one by one with a 3-minute pause in between.
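For reference, the relevant part of our cassandra.yaml ends up looking roughly like this on each node (the cluster name and addresses below are placeholders, not our real values):

    cluster_name: 'TestCluster'                   # placeholder; same on all 20 nodes
    endpoint_snitch: GossipingPropertyFileSnitch
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "54.0.0.1,54.0.0.2,54.0.0.3"   # all 20 public IPs, truncated here
    # listen_address:                             # commented out -> InetAddress.getLocalHost()
    # rpc_address:                                # commented out
    broadcast_address: 54.0.0.1                   # this node's public IP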

Issues:

  1. When using OpsCenter, it shows only one node in the cluster.

  2. The 'nodetool status' command also shows only one node.

  3. When using a CQL statement, each node does show all of its peers (query shown below).
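The CQL check in point 3 was just a query against the system tables, along these lines:

    -- run with cqlsh on any node; system.peers lists the other nodes this node knows about
    SELECT peer, data_center, rack, rpc_address FROM system.peers;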

What are we doing wrong?

Upvotes: 2

Views: 1155

Answers (1)

LHWizard

Reputation: 2379

I am doing something similar as a proof-of-concept. I have a working 2-region cluster. Here are the things that I did differently, from reading your question:

  1. I used the Ec2MultiRegionSnitch, which is designed to handle the public and private IPs in EC2. In AWS, the Elastic IP is not bound to the instance's network interface, and this causes problems with cluster communications.
  2. In cassandra.yaml, I left listen_address set to the private IP.
  3. Also set rpc_address to 0.0.0.0.
  4. Uncomment broadcast_address and set it to the public IP (like you did).
  5. I set dc_suffix in the cassandra-rackdc.properties file and uncommented prefer_local=true (inside the region, Cassandra will prefer to use private IPs). See the config sketch after this list.
  6. I opened the security groups so that TCP ports 7000 and 7001 could pass between the nodes in the two regions. OpsCenter uses ports 61620 and 61621. (Example commands below.)
  7. All nodes have the same cluster name.
  8. Seed IPs are set to the public IPs. I didn't use all the nodes as seeds; that's not recommended.
  9. Start the seeds first, followed by the other nodes.
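A rough sketch of the per-node settings described in points 1-5 and 7-8 (the IPs, cluster name, and dc_suffix value are placeholders, not my real values):

    # cassandra.yaml
    cluster_name: 'TestCluster'              # identical on every node
    endpoint_snitch: Ec2MultiRegionSnitch
    listen_address: 10.0.1.15                # this node's private IP
    rpc_address: 0.0.0.0
    broadcast_address: 54.0.0.1              # this node's public IP
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "54.0.0.1,54.1.0.1"   # a few public IPs per region, not every node

    # cassandra-rackdc.properties
    dc_suffix=_cluster1
    prefer_local=true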
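For point 6, the inbound rules can be added in the console or with the AWS CLI; something along these lines, where the security group ID and the other region's CIDR are placeholders:

    # allow Cassandra inter-node (and SSL inter-node) traffic in from the other region
    aws ec2 authorize-security-group-ingress --group-id sg-12345678 \
        --protocol tcp --port 7000 --cidr 203.0.113.0/24
    aws ec2 authorize-security-group-ingress --group-id sg-12345678 \
        --protocol tcp --port 7001 --cidr 203.0.113.0/24
    # OpsCenter / agent traffic
    aws ec2 authorize-security-group-ingress --group-id sg-12345678 \
        --protocol tcp --port 61620 --cidr 203.0.113.0/24
    aws ec2 authorize-security-group-ingress --group-id sg-12345678 \
        --protocol tcp --port 61621 --cidr 203.0.113.0/24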

This provided a working cluster. Now I am working on SSL node-to-node communication between regions.
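For anyone following along, internode encryption is controlled by the server_encryption_options block in cassandra.yaml; a minimal sketch, with placeholder keystore paths and passwords, looks like this:

    server_encryption_options:
        internode_encryption: dc          # encrypt only traffic between datacenters ('all' also works)
        keystore: conf/.keystore          # placeholder path
        keystore_password: changeit       # placeholder
        truststore: conf/.truststore      # placeholder path
        truststore_password: changeit     # placeholder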

Upvotes: 4
