NIKHIL C M

Reputation: 4236

Getting Error: getaddrinfo EAI_AGAIN using Amazon S3 SDK

I am getting the following error from the Amazon S3 SDK in my Node.js project:

{ Error: getaddrinfo EAI_AGAIN ***.s3-accelerate.amazonaws.com:443
    at Object._errnoException (util.js:992:11)
    at errnoException (dns.js:55:15)
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:92:26)
  message: 'getaddrinfo EAI_AGAIN ***.s3-accelerate.amazonaws.com:443',
  code: 'NetworkingError',
  errno: 'EAI_AGAIN',
  syscall: 'getaddrinfo',
  hostname: '***.s3-accelerate.amazonaws.com',
  host: '***.s3-accelerate.amazonaws.com',
  port: 443,
  region: 'us-east-1',
  retryable: true,
  time: 2018-12-14T05:46:18.649Z }

error: There was an error viewing your album: getaddrinfo EAI_AGAIN ***.s3-accelerate.amazonaws.com:443

I know this is a DNS resolution error, but it only happens occasionally. If I run the code again, the error may not appear.

The S3 bucket is in the us-east-1 region and I am accessing it from Asia, but as far as I know the S3 region should have no part in this.
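
Since the error object above reports retryable: true, a minimal sketch of one possible mitigation is to raise the SDK's built-in retry settings so that transient DNS failures are retried before they surface (this assumes the aws-sdk v2 client and is not a confirmed fix):

const AWS = require('aws-sdk');

// Assumption: retry transient getaddrinfo/EAI_AGAIN failures a few more
// times before the error is surfaced to the caller.
const s3 = new AWS.S3({
    useAccelerateEndpoint: true,
    maxRetries: 5,                     // retry more times than the SDK default
    retryDelayOptions: {base: 200}     // base delay (ms) for exponential backoff
});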

Part of my code is given below:

FYI: I promisified the S3 SDK.

const s3 = new AWS.S3({useAccelerateEndpoint: true});
const bucket_name = s3Storage.bucketName;


s3getImageV2: async function (albumPhotosKey) {
        albumPhotosKey = albumPhotosKey.toString();

        try {
            const s3 = new AWSP.S3({useAccelerateEndpoint: true});
            const bucket_name = s3Storage.bucketName;
            if (!albumPhotosKey) {
                return {
                    status: false,
                    message: 'Album name is not given.'
                };
            }
            const data = await listImageObjects(s3, bucket_name, albumPhotosKey);
            var photos = [];
            logger.debug('S3:data.Contents: ', data.Contents.length);
            for (let i = 0; i < data.Contents.length; i++) {
                const photo = data.Contents[i];
                if (photo.Key.endsWith("/")) continue;
                const params = {
                    Bucket: bucket_name,
                    Key: photo.Key,
                    Expires: config.cache.ttl || 86400
                };
                logger.silly(`iteration:, ${i}`);
                // skiniq:s3
                const resp = await s3.getSignedUrlProm('getObject', params);
                photos.push(resp);
            }
            logger.debug('S3:OUTPUT: ', photos);
            return photos;
        } catch (e) {
            console.error(e);
            return null;
        }
},
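
For reference, a minimal sketch of how the promisified helpers used above (getSignedUrlProm and listImageObjects) might be implemented (this is an assumption; the actual wrappers are not included here):

const util = require('util');

// Assumed wrapper: promisify getSignedUrl so it can be awaited.
s3.getSignedUrlProm = util.promisify(s3.getSignedUrl).bind(s3);

// Assumed helper: list the objects under the album prefix.
function listImageObjects(s3, bucket, prefix) {
    return s3.listObjectsV2({Bucket: bucket, Prefix: prefix}).promise();
}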

Upvotes: 11

Views: 13867

Answers (3)

Abhishek Shah

Reputation: 864

We were calling deleteObject, and the error happened while someone had the AWS console open on the directory where the file/object is located. In other words, when the SDK was deleting an S3 object and someone was viewing that same directory in the console, we got this error.

When we switched away from the console tab to another tab before deleting the object, the issue did not happen.

We also saw it after entering a search query in the S3 console that showed the file in the results: when we then made a delete-object request for one of those files, we got this error.

Upvotes: 0

byteslash

Reputation: 49

I had the same issue recently, and after many long hours of googling I found a suggestion that removing the region parameter from the client initialization should fix it. I did that and it worked. I hope this helps anyone who runs into this error.
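
For illustration, a minimal sketch of what that change looks like with the aws-sdk v2 client (the original initialization shown commented out is an assumption; only removing the region option comes from this answer):

const AWS = require('aws-sdk');

// Before (assumed): region passed explicitly.
// const s3 = new AWS.S3({region: 'us-east-1', useAccelerateEndpoint: true});

// After: omit the region and let the SDK resolve it from the environment
// (e.g. the AWS_REGION variable) or the shared config instead.
const s3 = new AWS.S3({useAccelerateEndpoint: true});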

Upvotes: 2

metaprogrammer

Reputation: 96

Yep, we started getting these too. They are not too severe in our case, since we can restart our Node process once we detect the failure, but they are very surprising. We can't see a logical explanation (in our case we start 64 identical pods and one or two get that error at start time).
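
A minimal sketch of that detect-and-restart approach (the detection condition is an assumption based on the error object in the question; this answer only says the process is restarted when the failure is detected):

// Assumed detection: the SDK reports the DNS failure as a NetworkingError
// with errno EAI_AGAIN, as in the error object shown in the question.
function isTransientDnsFailure(err) {
    return Boolean(err) && err.code === 'NetworkingError' && err.errno === 'EAI_AGAIN';
}

async function listWithRestart(s3, params) {
    try {
        return await s3.listObjectsV2(params).promise();
    } catch (err) {
        if (isTransientDnsFailure(err)) {
            // Exit non-zero so the orchestrator (e.g. Kubernetes) restarts the pod.
            console.error('EAI_AGAIN from S3, exiting so the pod restarts:', err);
            process.exit(1);
        }
        throw err;
    }
}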

Upvotes: 0
