Reputation: 1129
I am using Cassandra version 1.0.6. I have around ~1 million JSON objects of 5KB each to insert into Cassandra. As the inserts go on, Cassandra's memory consumption also rises until it stabilizes at a certain point. After some inserts (around 200,000-300,000), the Ruby client gives me a "`recv_batch_mutate': CassandraThrift::TimedOutException" exception.
I have also tried inserting 1KB JSON objects more than a million times, and that raises no exception. In that experiment I also plotted the time taken by each batch of 50,000 inserts against the batch number. There is a sharp rise in insert time after some iterations, followed by a sudden drop, which could be due to garbage collection in the JVM. But the same pattern does not appear when inserting the 5KB objects a million times.
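For reference, here is a minimal sketch of the insert-and-timing loop, assuming the twitter cassandra gem; the keyspace, column family, and key names are illustrative, not my real schema:

    require 'rubygems'
    require 'cassandra'
    require 'json'

    # Illustrative keyspace and column family names.
    client = Cassandra.new('MyKeyspace', '127.0.0.1:9160')

    payload = { 'data' => 'x' * 5 * 1024 }.to_json   # roughly 5KB per object

    batch = 50_000
    start = Time.now
    1_000_000.times do |i|
      client.insert(:JsonObjects, "key_#{i}", { 'body' => payload })
      if (i + 1) % batch == 0
        # Print how long this batch of 50,000 inserts took.
        puts "inserts #{i + 1 - batch}..#{i}: #{Time.now - start}s"
        start = Time.now
      end
    end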
What may be the problem? Some of the configuration options I am using:

Cassandra configuration:
- concurrent_writes: 64
- memtable_flush_writers: 4
- memtable_flush_queue_size: 8
- rpc_server_type: sync
- thrift_framed_transport_size_in_mb: 30
- in_memory_compaction_limit_in_mb: 64
- multithreaded_compaction: true
Do I need to make any changes to the configuration? Is this related to JVM heap space or to garbage collection?
Upvotes: 2
Views: 3101
Reputation: 5064
You can increase the RPC timeout to a larger value in the Cassandra config file; look for rpc_timeout_in_ms. But you should really look into the connection handling of your Ruby client.
# Time to wait for a reply from other nodes before failing the command
rpc_timeout_in_ms: 10000
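On the Ruby side, the cassandra gem passes options through to thrift_client, so the client-side timeout can be raised when the connection is created. A minimal sketch, assuming the cassandra gem; the keyspace name and option values are illustrative:

    require 'rubygems'
    require 'cassandra'

    # :timeout and :retries are thrift_client options; values are examples only.
    client = Cassandra.new('MyKeyspace', '127.0.0.1:9160',
                           :timeout => 15,   # seconds to wait for a Thrift reply
                           :retries => 3)    # retry transient failures before raising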
Upvotes: 2