Reputation: 1398
When sending 100MB messages to a queue, ActiveMQ runs into an out-of-memory error. We are using a file cursor for the queue.
Queue detail: our producer sends NON-PERSISTENT messages of 100MB each, and it keeps producing the same 100MB message in a while loop.
We use the default heap size that ships with ActiveMQ, which is 1GB max.
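A stripped-down sketch of our producer looks roughly like this (the broker URL, queue name, and payload construction are placeholders for our real values):

    import org.apache.activemq.ActiveMQConnectionFactory;

    import javax.jms.Connection;
    import javax.jms.DeliveryMode;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    public class LargeMessageProducer {
        public static void main(String[] args) throws Exception {
            // Broker URL and queue name are placeholders for our real values.
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();

            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("TEST.QUEUE");
            MessageProducer producer = session.createProducer(queue);
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

            // Build a ~100MB text payload once and keep re-sending it.
            StringBuilder sb = new StringBuilder(100 * 1024 * 1024);
            for (int i = 0; i < 100 * 1024 * 1024; i++) {
                sb.append('x');
            }
            String payload = sb.toString();

            while (true) {
                TextMessage message = session.createTextMessage(payload);
                producer.send(message);
            }
        }
    }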
The relevant ActiveMQ configuration is:
<policyEntry queue=">" producerFlowControl="false" memoryLimit="512mb" maxPageSize="1000000">
    <pendingQueuePolicy>
        <fileQueueCursor />
    </pendingQueuePolicy>
</policyEntry>
On the consuming side we have an asynchronous consumer that keeps listening for incoming messages and acknowledges them automatically (AUTO_ACKNOWLEDGE).
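The consumer is roughly the following (again a sketch; the broker URL and queue name are placeholders):

    import org.apache.activemq.ActiveMQConnectionFactory;

    import javax.jms.Connection;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageListener;
    import javax.jms.Queue;
    import javax.jms.Session;

    public class LargeMessageConsumer {
        public static void main(String[] args) throws Exception {
            // Broker URL and queue name are placeholders for our real values.
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();

            // AUTO_ACKNOWLEDGE: messages are acked automatically as the listener returns.
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("TEST.QUEUE");
            MessageConsumer consumer = session.createConsumer(queue);

            consumer.setMessageListener(new MessageListener() {
                @Override
                public void onMessage(Message message) {
                    // Process the 100MB message here.
                }
            });

            // Keep the JVM alive while the listener runs (sketch only).
            Thread.currentThread().join();
        }
    }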
After this program runs for a while, ActiveMQ throws the following error:
2016-04-21 14:52:18,961 | ERROR | Error in thread 'ActiveMQ BrokerService.worker.1' | org.apache.activemq.broker.BrokerService | ActiveMQ BrokerService.worker.1
java.lang.OutOfMemoryError: Java heap space
at org.apache.activemq.util.DataByteArrayOutputStream.ensureEnoughBuffer(DataByteArrayOutputStream.java:249)[activemq-client-5.13.1.jar:5.13.1]
at org.apache.activemq.util.DataByteArrayOutputStream.writeBoolean(DataByteArrayOutputStream.java:140)[activemq-client-5.13.1.jar:5.13.1]
at org.apache.activemq.openwire.v11.BaseDataStreamMarshaller.looseMarshalByteSequence(BaseDataStreamMarshaller.java:627)[activemq-client-5.13.1.jar:5.13.1]
at org.apache.activemq.openwire.v11.MessageMarshaller.looseMarshal(MessageMarshaller.java:300)[activemq-client-5.13.1.jar:5.13.1]
at org.apache.activemq.openwire.v11.ActiveMQMessageMarshaller.looseMarshal(ActiveMQMessageMarshaller.java:111)[activemq-client-5.13.1.jar:5.13.1]
at org.apache.activemq.openwire.v11.ActiveMQTextMessageMarshaller.looseMarshal(ActiveMQTextMessageMarshaller.java:111)[activemq-client-5.13.1.jar:5.13.1]
at org.apache.activemq.openwire.OpenWireFormat.marshal(OpenWireFormat.java:161)[activemq-client-5.13.1.jar:5.13.1]
at org.apache.activemq.broker.region.cursors.FilePendingMessageCursor.getByteSequence(FilePendingMessageCursor.java:480)[activemq-broker-5.13.1.jar:5.13.1]
at org.apache.activemq.broker.region.cursors.FilePendingMessageCursor.flushToDisk(FilePendingMessageCursor.java:440)[activemq-broker-5.13.1.jar:5.13.1]
at org.apache.activemq.broker.region.cursors.FilePendingMessageCursor.onUsageChanged(FilePendingMessageCursor.java:401)[activemq-broker-5.13.1.jar:5.13.1]
at org.apache.activemq.usage.Usage$1.run(Usage.java:308)[activemq-client-5.13.1.jar:5.13.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)[:1.8.0_74]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)[:1.8.0_74]
at java.lang.Thread.run(Thread.java:745)[:1.8.0_74]
Does anyone know how to resolve this? It doesn't seem to happen when I send smaller messages, for example messages that are less than 10MB.
Upvotes: 1
Views: 5471
Reputation: 1413
Non-persistent messages are going to be stored in memory rather than persisted to a data store, as described here. So 1GB is going to disappear quickly, especially if you can't consume as fast as you produce.
Of course, you could give ActiveMQ more memory (a larger JVM heap, plus a higher memoryUsage limit in activemq.xml), but you might be better off persisting the messages, even if you don't need to recover them, and potentially expiring them after a period of time to emulate non-persistence (if necessary).
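As a rough, untested sketch (the 60-second TTL is just an example, and session/queue/payload are as in your producer), that would look something like this on the producer side:

    import javax.jms.DeliveryMode;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    public class PersistentWithTtl {
        // session, queue and payload set up as in the producer from the question
        static void sendWithTtl(Session session, Queue queue, String payload) throws JMSException {
            MessageProducer producer = session.createProducer(queue);
            producer.setDeliveryMode(DeliveryMode.PERSISTENT); // let the broker offload to the persistence store
            producer.setTimeToLive(60_000);                    // expire after 60s to emulate non-persistence
            producer.send(session.createTextMessage(payload));
        }
    }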
I'd suggest other solutions, such as breaking the messages into something more manageable, or using shared file storage for the data and sending messages that include only pointers to it. Besides the processing overhead for ActiveMQ, I believe you have a larger-than-normal network impact (for example, if you have secure communications to your AMQ instance, you're encrypting/decrypting 100MB messages, which isn't cheap).
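A rough sketch of the pointer approach (the shared directory path is just an example; both producer and consumer would need access to it):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.UUID;

    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    public class ClaimCheckProducer {
        // Shared directory visible to both producer and consumer (example path).
        private static final Path SHARED_DIR = Paths.get("/mnt/shared/payloads");

        static void sendReference(Session session, MessageProducer producer, byte[] payload)
                throws IOException, JMSException {
            // Write the large payload to shared storage...
            Path file = SHARED_DIR.resolve(UUID.randomUUID() + ".bin");
            Files.write(file, payload);

            // ...and send only a small message containing the location.
            TextMessage pointer = session.createTextMessage(file.toString());
            producer.send(pointer);
        }
    }

The consumer then reads the file from the shared location and deletes it once it's done, so the broker only ever handles tiny messages.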
Upvotes: 3