Reputation: 111
I'm facing an issue writing data into Cassandra.
When using ConnectionPoolType.BAG we observed a NoAvailableHostsException("No hosts to borrow from") being thrown:
Caused by: com.netflix.astyanax.connectionpool.exceptions.NoAvailableHostsException:
NoAvailableHostsException: [host=None(0.0.0.0):0, latency=0(0), attempts=1]No hosts to borrow from
at com.netflix.astyanax.connectionpool.impl.BagOfConnectionsConnectionPoolImpl.borrowConnection(BagOfConnectionsConnectionPoolImpl.java:93) ~[astyanax-1.56.24-SNAPSHOT.jar:na]
at com.netflix.astyanax.connectionpool.impl.BagOfConnectionsConnectionPoolImpl.access$000(BagOfConnectionsConnectionPoolImpl.java:31) ~[astyanax-1.56.24-SNAPSHOT.jar:na]
at com.netflix.astyanax.connectionpool.impl.BagOfConnectionsConnectionPoolImpl$BagExecuteWithFailover.borrowConnection(BagOfConnectionsConnectionPoolImpl.java:158) ~[astyanax-1.56.24-SNAPSHOT.jar:na]
at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:67) ~[astyanax-1.56.24-SNAPSHOT.jar:na]
at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:253) ~[astyanax-1.56.24-SNAPSHOT.jar:na]
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$6$3.execute(ThriftColumnFamilyQueryImpl.java:739) ~[astyanax-1.56.24-SNAPSHOT.jar:na]
We also tried ConnectionPoolType.ROUND_ROBIN, but then we observed a connection timeout error instead.
We are using the Astyanax Java client. Client configuration:
AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
    .forCluster("clustername")
    .forKeyspace("keyspace")
    .withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
        .setDiscoveryType(NodeDiscoveryType.NONE)
        .setCqlVersion("3.0.0")
        .setConnectionPoolType(ConnectionPoolType.BAG) // We also tried ConnectionPoolType.ROUND_ROBIN
    )
    .withConnectionPoolConfiguration(
        new ConnectionPoolConfigurationImpl("poolname")
            .setPort(9160)
            .setMaxConnsPerHost(20)
            .setInitConnsPerHost(10)
            .setSeeds("host1:9160,host2:9160,host3:9160")
            .setMaxTimeoutWhenExhausted(11000) // Default: 2000
            .setConnectTimeout(10000)          // Default: 2000
    )
    .withConnectionPoolMonitor(new Slf4jConnectionPoolMonitorImpl())
    .buildKeyspace(ThriftFamilyFactory.getInstance());

context.start();
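For reference, a minimal write through the started context would look roughly like this (a sketch only; the column family name, row key, and column are placeholders, and getClient() is getEntity() in older Astyanax versions):

import com.netflix.astyanax.Keyspace;
import com.netflix.astyanax.MutationBatch;
import com.netflix.astyanax.connectionpool.exceptions.ConnectionException;
import com.netflix.astyanax.model.ColumnFamily;
import com.netflix.astyanax.serializers.StringSerializer;

Keyspace keyspace = context.getClient();

// Placeholder column family with String row keys and String column names.
ColumnFamily<String, String> CF_DATA = new ColumnFamily<String, String>(
    "cfname", StringSerializer.get(), StringSerializer.get());

MutationBatch batch = keyspace.prepareMutationBatch();
batch.withRow(CF_DATA, "rowKey1")
     .putColumn("columnName", "value", null); // null = no TTL
try {
    batch.execute();
} catch (ConnectionException e) {
    // NoAvailableHostsException is a subclass of ConnectionException.
    e.printStackTrace();
}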
Update: I'm facing this issue when connecting to the Cassandra cluster from the Play framework. We are using the Astyanax Java client to connect to the cluster.
When we start the Play application using "play start" everything seems to work fine, but when we create a dist using the "play dist" command and start the service, it throws the exception shown above.
What could be the difference between "play dist" and "play start"?
Update 2: I'm testing on my machine with a single instance of Cassandra. I created the keyspace with "SimpleStrategy" as the replication strategy.
Cassandra on my box: ReleaseVersion: 1.1.7. It also failed with version 1.1.6.
Running nodetool -h localhost gave the following output.
Address    DC           Rack   Status  State   Load       Effective-Ownership  Token
127.0.0.1  datacenter1  rack1  Up      Normal  130.44 KB  100.00%              129209944818829357072522096381370300409
CLOSING THE THREAD: My problem was caused by conflicting Thrift libraries. I had added dependencies for both Hive and Cassandra, and there was probably a version mismatch between them.
Thanks for the help.
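One way to spot this kind of mismatch is to check which jar a core Thrift class is actually loaded from at runtime (a minimal sketch; the class chosen here is arbitrary):

// Prints the jar URL that org.apache.thrift.transport.TFramedTransport
// was loaded from, to spot duplicate libthrift versions on the classpath.
public class ThriftJarCheck {
    public static void main(String[] args) throws ClassNotFoundException {
        Class<?> c = Class.forName("org.apache.thrift.transport.TFramedTransport");
        // getCodeSource() can be null for classes on the bootstrap classpath.
        System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
    }
}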
Upvotes: 2
Views: 3027
Reputation: 16872
I had a similar problem trying to TRUNCATE a table (astyanax.version=1.0.6).
Truncation is a long operation because Cassandra first performs a sync, that is, it flushes all in-memory data to disk. That can be a lot of data, so a read timeout happens.
One workaround is to call cassandraClient.send_truncate() once and then repeatedly try cassandraClient.recv_truncate() until it succeeds (see the sketch below).
But if the timeout exception is thrown, we still get a "NoAvailableHostsException ... No hosts to borrow from". The reason for that exception is that deep inside Thrift the connection is closed by submitting an asynchronous job to some executor. The remedy is to increase the time between retries (1000 ms is too small; 5000 ms works for me).
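A rough sketch of that workaround, assuming a connected raw Thrift client (org.apache.cassandra.thrift.Cassandra.Client); the column family name and retry bound are placeholders:

import org.apache.cassandra.thrift.Cassandra;
import org.apache.thrift.TException;

void truncateWithRetries(Cassandra.Client cassandraClient) throws Exception {
    cassandraClient.send_truncate("cfname"); // fire the request once
    for (int attempt = 0; attempt < 10; attempt++) {
        try {
            cassandraClient.recv_truncate(); // poll for the reply
            return;                          // truncate acknowledged
        } catch (TException e) {
            // Reply not ready yet; 1000 ms between retries is too small,
            // 5000 ms works for me.
            Thread.sleep(5000);
        }
    }
}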
Upvotes: 0