Reputation: 5129
I am using Elasticsearch version 0.19.2, and when I start the Elasticsearch server it throws the following memory / heap size error. I have set the heap size in the elasticsearch.in.sh file, but I still get the same issue (full log below).
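Roughly, what I changed in elasticsearch.in.sh looks like the following (the values are examples; the stock 0.19.x script builds the JVM's -Xms/-Xmx flags from these variables):

# elasticsearch.in.sh (excerpt): heap sizing picked up by bin/elasticsearch
ES_MIN_MEM=512m
ES_MAX_MEM=1024m

JAVA_OPTS="$JAVA_OPTS -Xms${ES_MIN_MEM}"
JAVA_OPTS="$JAVA_OPTS -Xmx${ES_MAX_MEM}"

The error at startup: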
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid8397.hprof ...
Heap dump file created [23111032 bytes in 0.495 secs]
[2013-12-18 10:52:34,862][WARN ][transport.netty ] [Norrin Radd] Exception caught on netty layer [[id: 0x0cb9a81c, /192.168.1.193:57115 => /192.168.1.141:9300]]
java.lang.OutOfMemoryError: Java heap space
at org.elasticsearch.common.netty.buffer.HeapChannelBuffer.<init>(HeapChannelBuffer.java:42)
at org.elasticsearch.common.netty.buffer.BigEndianHeapChannelBuffer.<init>(BigEndianHeapChannelBuffer.java:34)
at org.elasticsearch.common.netty.buffer.ChannelBuffers.buffer(ChannelBuffers.java:134)
at org.elasticsearch.common.netty.buffer.HeapChannelBufferFactory.getBuffer(HeapChannelBufferFactory.java:69)
at org.elasticsearch.common.netty.buffer.DynamicChannelBuffer.<init>(DynamicChannelBuffer.java:58)
at org.elasticsearch.common.netty.buffer.ChannelBuffers.dynamicBuffer(ChannelBuffers.java:221)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:98)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:777)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:553)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:343)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:274)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:194)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
[2013-12-18 10:52:34,868][WARN ][transport.netty ] [Norrin Radd] Exception caught on netty layer [[id: 0x53f21ba0, /192.168.1.190:60296 => /192.168.1.141:9300]]
java.lang.OutOfMemoryError: Java heap space (same stack trace as above)
[2013-12-18 10:52:35,861][WARN ][transport.netty ] [Norrin Radd] Exception caught on netty layer [[id: 0x0da4791a, /192.168.1.190:60297 => /192.168.1.141:9300]]
java.lang.OutOfMemoryError: Java heap space (same stack trace as above)
I also tried running the ES server with the following command, but I still get the same heap size issue: ./elasticsearch -f -Xms512m -Xmx1024m
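For what it's worth, the wrapper script reads ES_MIN_MEM / ES_MAX_MEM from the environment when they are set, so the equivalent of the command above would look roughly like this (a sketch based on the stock elasticsearch.in.sh, not something that has fixed the error for me):

# let elasticsearch.in.sh pick the heap sizes up from the environment
ES_MIN_MEM=512m ES_MAX_MEM=1024m ./elasticsearch -f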
Upvotes: 0
Views: 6814
Reputation: 38511
Open java_pid8397.hprof in HPJmeter and analyze the results. At a minimum, the tool will tell you whether the min and max memory settings you configured were actually applied.
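If HPJmeter is not available, the stock JDK tools give you a rough equivalent (jps and jhat ship with JDK 6 and 7); this is just one way of doing the same check:

# confirm which -Xms/-Xmx values the running Elasticsearch JVM actually received
jps -lvm

# parse the heap dump and browse the object histogram at http://localhost:7000
jhat -port 7000 java_pid8397.hprof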
Upvotes: 1
Reputation: 3882
If you use plugins such as Marvel, check the number of indices and their sizes: some plugins create a large number of indices, and these can eat up all of your memory.
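For example, the indices stats API lists every index with its document count and store size, so you can spot plugin-created indices quickly (host and port below are the defaults; adjust for your cluster, and very old versions expose the similar /_status endpoint instead):

# list all indices with their document counts and on-disk sizes
curl 'http://localhost:9200/_stats?pretty'

If a plugin's indices (for example the daily .marvel-* indices) dominate the output, deleting or curating the old ones usually frees a lot of heap.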
Upvotes: 0