Reputation: 1
We have a Couchbase instance set up on an Amazon Web Services server, and an Elasticsearch instance running on the same server.
The connection between the two is working, and replication was running fine until, out of the blue, we got the following error log from Elasticsearch:
[2013-08-29 21:27:34,947][WARN ][cluster.metadata ] [01-Thor] failed to dynamically update the mapping in cluster_state from shard
java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:343)
at org.elasticsearch.common.io.FastByteArrayOutputStream.write(FastByteArrayOutputStream.java:103)
at org.elasticsearch.common.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:1848)
at org.elasticsearch.common.jackson.core.json.UTF8JsonGenerator.writeString(UTF8JsonGenerator.java:436)
at org.elasticsearch.common.xcontent.json.JsonXContentGenerator.writeString(JsonXContentGenerator.java:84)
at org.elasticsearch.common.xcontent.XContentBuilder.field(XContentBuilder.java:314)
at org.elasticsearch.index.mapper.core.AbstractFieldMapper.doXContentBody(AbstractFieldMapper.java:601)
at org.elasticsearch.index.mapper.core.NumberFieldMapper.doXContentBody(NumberFieldMapper.java:286)
at org.elasticsearch.index.mapper.core.LongFieldMapper.doXContentBody(LongFieldMapper.java:338)
at org.elasticsearch.index.mapper.core.AbstractFieldMapper.toXContent(AbstractFieldMapper.java:595)
at org.elasticsearch.index.mapper.object.ObjectMapper.toXContent(ObjectMapper.java:920)
at org.elasticsearch.index.mapper.object.ObjectMapper.toXContent(ObjectMapper.java:852)
at org.elasticsearch.index.mapper.object.ObjectMapper.toXContent(ObjectMapper.java:920)
at org.elasticsearch.index.mapper.object.ObjectMapper.toXContent(ObjectMapper.java:852)
at org.elasticsearch.index.mapper.object.ObjectMapper.toXContent(ObjectMapper.java:920)
at org.elasticsearch.index.mapper.object.ObjectMapper.toXContent(ObjectMapper.java:852)
at org.elasticsearch.index.mapper.object.ObjectMapper.toXContent(ObjectMapper.java:920)
at org.elasticsearch.index.mapper.object.ObjectMapper.toXContent(ObjectMapper.java:852)
at org.elasticsearch.index.mapper.object.ObjectMapper.toXContent(ObjectMapper.java:920)
at org.elasticsearch.index.mapper.DocumentMapper.toXContent(DocumentMapper.java:700)
at org.elasticsearch.index.mapper.DocumentMapper.refreshSource(DocumentMapper.java:682)
at org.elasticsearch.index.mapper.DocumentMapper.<init>(DocumentMapper.java:342)
at org.elasticsearch.index.mapper.DocumentMapper$Builder.build(DocumentMapper.java:224)
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:231)
at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:380)
at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:190)
at org.elasticsearch.cluster.metadata.MetaDataMappingService$2.execute(MetaDataMappingService.java:185)
at org.elasticsearch.cluster.service.InternalClusterService$2.run(InternalClusterService.java:229)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:95)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
[2013-08-29 21:27:56,948][WARN ][indices.ttl ] [01-Thor] failed to execute ttl purge
java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.ByteBlockPool$Allocator.getByteBlock(ByteBlockPool.java:66)
at org.apache.lucene.util.ByteBlockPool.nextBuffer(ByteBlockPool.java:202)
at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:319)
at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:274)
at org.apache.lucene.search.ConstantScoreAutoRewrite$CutOffTermCollector.collect(ConstantScoreAutoRewrite.java:131)
at org.apache.lucene.search.TermCollectingRewrite.collectTerms(TermCollectingRewrite.java:79)
at org.apache.lucene.search.ConstantScoreAutoRewrite.rewrite(ConstantScoreAutoRewrite.java:95)
at org.apache.lucene.search.MultiTermQuery$ConstantScoreAutoRewrite.rewrite(MultiTermQuery.java:220)
at org.apache.lucene.search.MultiTermQuery.rewrite(MultiTermQuery.java:288)
at org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:639)
at org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:686)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:309)
at org.elasticsearch.indices.ttl.IndicesTTLService.purgeShards(IndicesTTLService.java:186)
at org.elasticsearch.indices.ttl.IndicesTTLService.access$000(IndicesTTLService.java:65)
at org.elasticsearch.indices.ttl.IndicesTTLService$PurgerThread.run(IndicesTTLService.java:122)
[2013-08-29 21:29:23,919][WARN ][indices.ttl ] [01-Thor] failed to execute ttl purge
java.lang.OutOfMemoryError: Java heap space
We have tried changing several memory settings, but we can't seem to get it right.
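For reference, the usual way to change the heap on an Elasticsearch node of this era is the ES_HEAP_SIZE environment variable, which the startup script maps to -Xms/-Xmx; the 2g value below is only an illustration, not the exact value we used:
# Illustration only: ES_HEAP_SIZE sets both -Xms and -Xmx via the startup script
export ES_HEAP_SIZE=2g
./bin/elasticsearch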
Has anyone experienced the same issue?
Upvotes: 0
Views: 1452
Reputation: 2450
A few troubleshooting tips:
It's generally smart to dedicate one AWS instance solely to Elasticsearch, for predictable performance and easier debugging.
Monitor your memory usage with the Bigdesk plugin. This will show you whether the memory bottleneck really is in Elasticsearch - it might instead come from the OS, from simultaneous heavy querying and indexing, or from something else unexpected.
Elasticsearch's Java heap should be set to around 50% of your box's total memory (see the sketch after these tips).
This gist from Shay Banon offers several approaches to solving memory problems in Elasticsearch.
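A rough sketch of the 50% heap tip, assuming (purely for illustration) an instance with about 8 GB of RAM - the 4g figure is an assumption, not something taken from the question - plus a quick way to confirm what the JVM actually received:
# Assumed sizing: on a box with ~8 GB of RAM, give Elasticsearch roughly half.
export ES_HEAP_SIZE=4g   # the startup script turns this into -Xms4g -Xmx4g
./bin/elasticsearch
# Verify the configured heap and watch current usage (0.90-era nodes stats API;
# Bigdesk charts the same JVM numbers over time):
curl 'http://localhost:9200/_nodes/stats?jvm=true&pretty=true'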
Upvotes: 1