Reputation: 19700
I need to store a large PDF (120 MB) in Elasticsearch.
I ran the following commands through Cygwin:
$ curl -XPUT 'localhost:9200/samplepdfs/' -d '{
"settings": {
"index": {
"number_of_ shards": 1,
"number_of_replicas": 0
}
}
}'
{
"acknowledged": true
}
$ coded=`cat sample.pdf | perl -MMIME::Base64 -ne 'print encode_base64($_)'`
$ json="{\"file\":\"${coded}\"}"
$ echo $json > json.file
$ curl -XPOST 'localhost:9200/samplepdfs/attachment/1' -d @json.file
and the server threw an out-of-memory exception:

at org.elasticsearch.common.netty.handler.codec.http.HttpChunkAggregator.appendToCumulation(HttpChunkAggregator.java:208)
Kindly suggest a solution/configuration change to resolve the issue.
Upvotes: 2
Views: 3689
Reputation: 4489
The error is easy to understand: you are running a large job on a small machine. From your configuration I would guess you have a single node with 512 MB to 2 GB of RAM allocated, and 2 GB is not sufficient for this document. Base64 encoding inflates the 120 MB PDF to roughly 160 MB, and the node buffers the entire HTTP request body in memory before it can index it; that buffering is exactly what HttpChunkAggregator.appendToCumulation is doing when it runs out of heap.
So, what's the solution?
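Two settings are worth raising; this is a sketch assuming an Elasticsearch 1.x node (which matches the shaded-netty package in your stack trace), and the exact values are illustrative, not prescriptive. First, http.max_content_length defaults to 100mb, which is smaller than your ~160 MB base64-encoded payload, so raise it in elasticsearch.yml:

http.max_content_length: 500mb

Second, give the JVM enough heap to hold and parse that payload before starting the node:

$ export ES_HEAP_SIZE=4g
$ bin/elasticsearch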
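As an aside that will bite once the request goes through: your perl -ne loop encodes the binary PDF line by line, base64-encoding each newline-delimited chunk separately, which produces an invalid value, and the embedded newlines break the JSON. A sketch of a one-pass encoding (-0777 slurps the whole file; the empty second argument to encode_base64 suppresses line breaks):

$ coded=$(perl -MMIME::Base64 -0777 -ne 'print encode_base64($_, "")' sample.pdf)
$ json="{\"file\":\"${coded}\"}"
$ echo "$json" > json.file
$ curl -XPOST 'localhost:9200/samplepdfs/attachment/1' -d @json.file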
Hope this solves the problem. Thanks!
Upvotes: 3