Reputation: 13
When elasticdump is stopped and restarted, it tries to resume from an offset, but an error occurs.
[Execute command]
nohup ./elasticdump --input=http://host/common --output=http://host/common --type=data --limit=1000 --offset=1000 &
[error]
Error Emitted => {"error":{"root_cause":[{"type":"action_request_validation_exception","reason":"Validation Failed: 1: using [from] is not allowed in a scroll context;"}],"type":"action_request_validation_exception","reason":"Validation Failed: 1: using [from] is not allowed in a scroll context;"},"status":400}
How do I use the offset parameter?
Upvotes: 0
Views: 3689
Reputation: 1
You can query Elasticsearch for the max id that was already dumped, then use searchBody to continue the dump from there.
elasticdump --input=http://host/common --output=http://host/common --type=data --searchBody='{"query": {"range": {"xxxId": {"gt": 10000}}}}' --limit=1000
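To expand on this: a hedged sketch of how you might find that max id first, assuming `xxxId` is a numeric field in your documents (the field name and the hosts here are placeholders from the question, and the value `10000` is illustrative):

```shell
# Find the highest xxxId already present in the target index,
# using a max aggregation (size: 0 skips returning documents).
curl -X POST "http://host/common/_search" \
  -H 'Content-Type: application/json' \
  -d '{"size": 0, "aggs": {"max_id": {"max": {"field": "xxxId"}}}}'

# Suppose the response reports "max_id": {"value": 10000};
# resume the dump from just above that value:
elasticdump --input=http://host/common --output=http://host/common \
  --type=data --limit=1000 \
  --searchBody='{"query": {"range": {"xxxId": {"gt": 10000}}}}'
```

This only works cleanly if the id field is monotonic, so no documents below the cutoff are still missing.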
Upvotes: 0
Reputation: 468
You can use the --limit parameter in the command; offset is dangerous to use because it skips n records, n being the offset.
More reference: https://github.com/elasticsearch-dump/elasticsearch-dump
e.g.
elasticdump --input=domain/index --output "s3://bucket/file.json" --limit 1000
Upvotes: 0
Reputation: 217474
From the notes in the elasticdump project:
if you are using Elasticsearch version 6.0.0 or higher the offset parameter is no longer allowed in the scrollContext
What you can do to prevent this (as long as you don't cross the 10000 limit) is to not use the offset parameter (i.e. no scroll context) and instead provide a search body with from and size settings, like this:
nohup ./elasticdump --input=http://host/common --output=http://host/common --type=data --searchBody='{"from": 1000, "size": 1000, "query": { "match_all": {} }}' &
UPDATE:
If you have more than 10K records and elasticdump is prone to stopping midway, I suggest leveraging the snapshot/restore feature to move the data from one server to another.
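A rough sketch of the snapshot/restore route, using the Elasticsearch snapshot API; the hosts, repository name, and path are placeholders, and the filesystem path must be whitelisted via `path.repo` in `elasticsearch.yml` on the nodes:

```shell
# 1. Register a filesystem snapshot repository on the source cluster
curl -X PUT "http://source-host:9200/_snapshot/my_backup" \
  -H 'Content-Type: application/json' \
  -d '{"type": "fs", "settings": {"location": "/mnt/snapshots"}}'

# 2. Snapshot only the index you want to move
curl -X PUT "http://source-host:9200/_snapshot/my_backup/snap_1?wait_for_completion=true" \
  -H 'Content-Type: application/json' \
  -d '{"indices": "common"}'

# 3. On the target cluster (with the same repository registered and the
#    snapshot files copied or on shared storage), restore the index
curl -X POST "http://target-host:9200/_snapshot/my_backup/snap_1/_restore" \
  -H 'Content-Type: application/json' \
  -d '{"indices": "common"}'
```

Unlike a scroll-based dump, a snapshot is resumable and consistent, so a mid-transfer interruption doesn't leave you guessing at an offset.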
Upvotes: 0