Ashrak

Reputation: 13

Cassandra and defuncting connection

I've got a question about Cassandra. I haven't found an understandable answer yet. I built a cluster of 3 nodes (RackInferringSnitch) on different VMs. I'm using DataStax's Java Driver to read and update my keyspace (with CSVs). When one node is down (i.e. 10.10.6.172), I get this debug warning:

 INFO 00:47:37,195 New Cassandra host /10.10.6.172:9042 added
 INFO 00:47:37,246 New Cassandra host /10.10.6.122:9042 added
 DEBUG 00:47:37,264 [Control connection] Refreshing schema
 DEBUG 00:47:37,384 [Control connection] Successfully connected to /10.10.6.171:9042
 DEBUG 00:47:37,391 Adding /10.10.6.172:9042 to list of queried hosts
 DEBUG 00:47:37,395 Defuncting connection to /10.10.6.172:9042
 com.datastax.driver.core.TransportException: [/10.10.6.172:9042] Channel has been closed
 at com.datastax.driver.core.Connection$Dispatcher.channelClosed(Connection.java:621)
 [...]
 DEBUG 00:47:37,400 [/10.10.6.172:9042-1] Error connecting to /10.10.6.172:9042 (Connection refused: /10.10.6.172:9042)
 DEBUG 00:47:37,407 Error creating pool to /10.10.6.172:9042 ([/10.10.6.172:9042] Cannot connect)
 DEBUG 00:47:37,408 /10.10.6.172:9042 is down, scheduling connection retries
 DEBUG 00:47:37,409 First reconnection scheduled in 1000ms
 DEBUG 00:47:37,410 Adding /10.10.6.122:9042 to list of queried hosts
 DEBUG 00:47:37,423 Adding /10.10.6.171:9042 to list of queried hosts
 DEBUG 00:47:37,427 Adding /10.10.6.122:9042 to list of queried hosts
 DEBUG 00:47:37,435 Shutting down pool
 DEBUG 00:47:37,439 Adding /10.10.6.171:9042 to list of queried hosts
 DEBUG 00:47:37,443 Shutting down pool
 DEBUG 00:47:37,459 Connected to cluster: WormHole

I wanted to know whether I need to handle this exception or whether it is handled automatically (I mean, when the node comes back up, will Cassandra perform the correct write if the batch was a write?).

EDIT: Current consistency level is ONE.

Upvotes: 1

Views: 2657

Answers (1)

phact

Reputation: 7305

The DataStax driver keeps track of which nodes are available at all times and routes queries (load balancing) based on this information. How it retries downed nodes is governed by your reconnection policy.
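If you want to tune that behavior, the reconnection policy can be set when building the `Cluster`. A minimal sketch, assuming Java Driver 2.x and using the contact points from your log output:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.ExponentialReconnectionPolicy;

public class ClusterSetup {
    public static void main(String[] args) {
        // Contact points taken from the question's log; adjust for your VMs.
        Cluster cluster = Cluster.builder()
                .addContactPoints("10.10.6.171", "10.10.6.122", "10.10.6.172")
                // First retry after 1 s, doubling the delay up to a 10-minute cap.
                // This replaces the default policy behind the
                // "First reconnection scheduled in 1000ms" line you saw.
                .withReconnectionPolicy(new ExponentialReconnectionPolicy(1000L, 600000L))
                .build();
        Session session = cluster.connect();
    }
}
```

With this in place the driver keeps probing 10.10.6.172 on its own schedule; your application code never needs to react to the `Defuncting connection` messages.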

You will see debug-level messages when nodes are detected as down, etc. This is no cause for concern: the driver will re-route to other available nodes, and it will also retry the downed nodes periodically to find out if they are back up. If you had a problem and the data was not being saved to Cassandra, you would see timeout errors. No action is necessary in this case.
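If you do want to react to the failure cases that matter (as opposed to the harmless DEBUG output), those surface as exceptions from `execute()`. A hedged sketch, assuming a hypothetical keyspace/table and Java Driver 2.x:

```java
import com.datastax.driver.core.Session;
import com.datastax.driver.core.exceptions.NoHostAvailableException;
import com.datastax.driver.core.exceptions.WriteTimeoutException;

public class SafeWrite {
    // Hypothetical helper; "ks.tbl" is a placeholder, not from the question.
    static void insertRow(Session session, String id) {
        try {
            session.execute("INSERT INTO ks.tbl (id) VALUES (?)", id);
        } catch (NoHostAvailableException e) {
            // No live node could serve the query: retry later or surface the error.
        } catch (WriteTimeoutException e) {
            // Replicas did not acknowledge in time at the requested consistency level.
        }
    }
}
```

At consistency level ONE these exceptions only fire when the query genuinely cannot be satisfied; a single node being down, as in your log, is absorbed by the driver's re-routing.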

Upvotes: 2
