Derrops

Reputation: 8127

Elasticsearch Snapshot Failing in AWS, preventing upgrade

My incremental snapshots in Elasticsearch are now failing. I didn't touch anything and nothing seems to have changed, so I can't figure out what is wrong.

I checked my snapshots by running GET _cat/snapshots/cs-automated?v&s=id and then looked up the details of a failed one:

GET _snapshot/cs-automated/adssd....

Which showed this stacktrace:

java.nio.file.NoSuchFileException: Blob object [YI-....] not found: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: 21...; S3 Extended Request ID: zh1C6C0eRy....)
    at org.elasticsearch.repositories.s3.S3RetryingInputStream.openStream(S3RetryingInputStream.java:92)
    at org.elasticsearch.repositories.s3.S3RetryingInputStream.<init>(S3RetryingInputStream.java:72)
    at org.elasticsearch.repositories.s3.S3BlobContainer.readBlob(S3BlobContainer.java:100)
    at org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.readBlob(ChecksumBlobStoreFormat.java:147)
    at org.elasticsearch.repositories.blobstore.ChecksumBlobStoreFormat.read(ChecksumBlobStoreFormat.java:133)
    at org.elasticsearch.repositories.blobstore.BlobStoreRepository.buildBlobStoreIndexShardSnapshots(BlobStoreRepository.java:2381)
    at org.elasticsearch.repositories.blobstore.BlobStoreRepository.snapshotShard(BlobStoreRepository.java:1851)
    at org.elasticsearch.snapshots.SnapshotShardsService.snapshot(SnapshotShardsService.java:505)
    at org.elasticsearch.snapshots.SnapshotShardsService.access$600(SnapshotShardsService.java:114)
    at org.elasticsearch.snapshots.SnapshotShardsService$1.doRun(SnapshotShardsService.java:386)
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractPrioritizedRunnable.doRun(ThreadContext.java:763)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)

I don't know how to resolve this, and I can no longer upgrade my index. I checked this page: Resolve snapshot error in .. but I'm still struggling. I've tried deleting a whole bunch of indices, and I may try restoring an old snapshot. I also deleted some .opendis.. indices used for tracking ILM, and a .lock index as well, but nothing is helping. Very annoying.
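
If I do end up restoring an old snapshot, my understanding is the call would look roughly like this (the snapshot name and index here are placeholders, and the target index must not already exist at restore time):

# Restore a single index from the automated repository
# (snapshot name and index are placeholders).
POST _snapshot/cs-automated/<snapshot-name>/_restore
{
  "indices": "my-index",
  "include_global_state": false
}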

As requested in the comments:

GET /_cat/repositories?v
id           type
cs-automated   s3

GET /_cat/snapshots/cs-automated produces heaps of snapshots, all of which have a PARTIAL status:

2020-09-08t01-12-44.ea93d140-7dba-4dcc-98b5-180e7b9efbfa PARTIAL 1599527564 01:12:44 1599527577 01:12:57 13.4s  84 177 52 229
2021-02-04t08-55-22.8691e3aa-4127-483d-8400-ce89bbbc7ea4 PARTIAL 1612428922 08:55:22 1612428957 08:55:57   35s 208 793 31 824
2021-02-04t09-55-16.53444082-a47b-4739-8ff9-f51ec038cda9 PARTIAL 1612432516 09:55:16 1612432552 09:55:52 35.6s 208 793 31 824
2021-02-04t10-55-30.6bf0472f-5a6c-4ecf-94ba-a1cf345ee5b9 PARTIAL 1612436130 10:55:30 1612436167 10:56:07 37.6s 208 793 31 824
2021-02-04t11-......
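
I believe the per-shard failure detail for one of these PARTIAL snapshots can also be pulled with the snapshot status API, e.g.:

# Per-shard state and failure reasons for one of the PARTIAL snapshots above.
GET _snapshot/cs-automated/2021-02-04t10-55-30.6bf0472f-5a6c-4ecf-94ba-a1cf345ee5b9/_status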

Upvotes: 0

Views: 2180

Answers (1)

piyush daftary

Reputation: 281

The reason the snapshots end in a PARTIAL state is that, because of some issue in the S3 repository, the YI-.... blob is missing. This is a clear case of repository corruption.

java.nio.file.NoSuchFileException: Blob object [YI-....] not found: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: 21...; S3 Extended Request ID: zh1C6C0eRy....)

This kind of repository corruption is typically observed when the cluster is heavily loaded (JVM heap usage > 80% or CPU utilization > 80%) and a few nodes drop out of the cluster.
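
You can check whether the cluster is under that kind of pressure with the cat nodes API; something like this is enough (the column list is only a suggestion):

# Heap and CPU per node; sustained heap > 80% points to the overload scenario above.
GET _cat/nodes?v&h=name,heap.percent,cpu,load_1m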

One way to fix the issue is to delete all the snapshots that refer to the index backed by the "YI-...." blob. This will clean up the S3 snapshot files of that index, and the next snapshot you take will start afresh.
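
A rough sketch of that cleanup, assuming your repository allows snapshot deletion (the AWS-managed cs-automated repository may refuse deletes, in which case see the next point); the snapshot name is a placeholder:

# List all snapshots together with the indices they contain, to find the ones
# that reference the index whose data lives under the YI-.... blob.
GET _snapshot/cs-automated/_all

# Delete every snapshot that references that index.
DELETE _snapshot/cs-automated/<snapshot-name>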

To be on the safer side, I would recommend contacting AWS support to fix this type of repository corruption.

Elasticsearch has a reference to a similar issue, fixed in Elasticsearch 7.8 and above: https://github.com/elastic/elasticsearch/issues/57198

Upvotes: 1
