Shailendra

Reputation: 141

Timeout waiting for connection from pool for S3 upload

I am trying to upload a huge number of files from 7 machines. On each machine I am running 6 threads that upload to S3. When I ran the upload from one machine it worked fine, but when I ran it on all 7 machines it started failing.

I am getting the error below on the rest of the machines.

ERROR - AmazonClientException com.amazonaws.SdkClientException: Unable to execute HTTP request: Timeout waiting for connection from pool

Total number of small files I am uploading to S3 = 1659328

Number of records in each thread = 276554

So do I have to close the TransferManager? If yes, how should I close it? My application is multithreaded, so if I call tm.shutdownNow(), the other threads will no longer be able to use it.
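For reference, this is roughly how I drive the threads on each machine (a simplified sketch; the UploadDriver and uploadSlice names are just for illustration, not my real code):

```java
import java.io.File;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class UploadDriver {
    // Simplified sketch of the per-machine setup: 6 worker threads,
    // each uploading its own slice of the file list.
    public static void runAll(List<List<File>> slices) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(6);
        for (List<File> slice : slices) {
            // each task currently builds its own TransferManager inside uploadSlice
            pool.submit(() -> uploadSlice(slice));
        }
        pool.shutdown();                         // stop accepting new tasks
        pool.awaitTermination(1, TimeUnit.DAYS); // wait for every thread to finish
        // Only at this point would tm.shutdownNow() be safe to call --
        // calling it from inside one worker would kill the shared
        // connection pool for all the others.
    }

    static void uploadSlice(List<File> slice) {
        // placeholder for the real uploadToToS3() call
    }
}
```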

Here is my code to upload to S3.

AWSCredentials credential = new ProfileCredentialsProvider("skjfffkjg-Prod-ServiceUser").getCredentials();
AmazonS3Client s3Client = (AmazonS3Client) AmazonS3ClientBuilder.standard().withRegion("us-east-1")
        .withCredentials(new AWSStaticCredentialsProvider(credential)).withForceGlobalBucketAccessEnabled(true)
        .build();

s3Client.getClientConfiguration().setMaxConnections(100);

The upload method:

public void uploadToToS3() {
        _logger.info("Number of records to be processed in current thread: " + records.size());


        TransferManager tm = new TransferManager(s3Client);

        MultipleFileUpload upload = tm.uploadFileList(bucketName, "", new File(fileLocation), records);

        if (!upload.isDone()) {
            System.out.println("Transfer: " + upload.getDescription());
            System.out.println("  - State: " + upload.getState());
            System.out.println("  - Progress: " + upload.getProgress().getBytesTransferred());
        }
        try {
            upload.waitForCompletion();
        } catch (AmazonServiceException e1) {
            _logger.error("AmazonServiceException " + e1.toString());
        } catch (AmazonClientException e1) {
            _logger.error("AmazonClientException " + e1.toString());
        } catch (InterruptedException e1) {
            _logger.error("InterruptedException " + e1.toString());
        }
        System.out.println("Is upload completed successfully = " + upload.isDone());

        for (File file : records) {
            try {
                Files.delete(FileSystems.getDefault().getPath(file.getAbsolutePath()));
            } catch (IOException e) {
                _logger.error("IOException in file delete: " + e.toString());
                System.exit(1);
            }
        }

        _logger.info("Calling Transfer manager shutdown");
        // tm.shutdownNow();
    }

Do I have to close anything in order to make the upload run smoothly?

Upvotes: 5

Views: 20509

Answers (1)

Shahaf Fridman

Reputation: 41

When you possess an S3 object, you are required to abort and close it, just as you would abort an open connection or close a file reader.

Therefore, you need to make sure that your object requests are closed properly. By the way, increasing the number of max connections will probably not be an optimal solution for this.

There is an AWS API that allows aborting S3 operations. I believe you need to add a 'finally' block in order to control the upload when exceptions occur.
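In general, this timeout means connections are being borrowed from the shared pool faster than they are returned. One more robust direction (just a sketch, assuming all six threads can share a single TransferManager built once at startup; the SharedTransferManager name is illustrative) would be:

```java
import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;

public class SharedTransferManager {
    // Configure the pool size up front -- mutating the configuration via
    // getClientConfiguration() after build() may not resize an existing pool.
    static final AmazonS3 s3 = AmazonS3ClientBuilder.standard()
            .withRegion("us-east-1")
            .withClientConfiguration(new ClientConfiguration().withMaxConnections(100))
            .build();

    // One TransferManager for the whole JVM; all 6 threads reuse it,
    // so connections go back to the same pool they came from.
    static final TransferManager tm =
            TransferManagerBuilder.standard().withS3Client(s3).build();

    // ... every thread calls tm.uploadFileList(...) + waitForCompletion() ...

    static void shutdownOnce() {
        // Call exactly once, after every thread has finished.
        // 'false' leaves the underlying s3 client open for further use.
        tm.shutdownNow(false);
    }
}
```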

Here's a snippet of what I would use:

public void uploadToToS3() {
        _logger.info("Number of records to be processed in current thread: " + records.size());
    TransferManager tm = new TransferManager(s3Client);

    MultipleFileUpload upload = tm.uploadFileList(bucketName, "", new File(fileLocation), records);

    if (!upload.isDone()) {
        System.out.println("Transfer: " + upload.getDescription());
        System.out.println("  - State: " + upload.getState());
        System.out.println("  - Progress: " + upload.getProgress().getBytesTransferred());
    }
    try {
        upload.waitForCompletion();
    } catch (AmazonServiceException e1) {
        _logger.error("AmazonServiceException " + e1.toString());
    } catch (AmazonClientException e1) {
        _logger.error("AmazonClientException " + e1.toString());
    } catch (InterruptedException e1) {
        _logger.error("InterruptedException " + e1.toString());
    } 

    // ************ THIS IS THE MODIFICATION ************ 
    finally {
        upload.getSubTransfers().forEach(s -> s.abort());
    }
    // *************************************************** 

    System.out.println("Is upload completed successfully = " + upload.isDone());
    for (File file : records) {
        try {
            Files.delete(FileSystems.getDefault().getPath(file.getAbsolutePath()));
        } catch (IOException e) {
            _logger.error("IOException in file delete: " + e.toString());
            System.exit(1);
        }
    }

    _logger.info("Calling Transfer manager shutdown");
    // tm.shutdownNow();
}
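One more note on the commented-out tm.shutdownNow(): since this method builds a new TransferManager on every call, each instance keeps its own worker threads and pooled connections alive until it is shut down. If the shared s3Client must stay usable by the other threads, the overload that takes a boolean helps (a sketch of the relevant part only):

```java
TransferManager tm = new TransferManager(s3Client);
try {
    MultipleFileUpload upload =
            tm.uploadFileList(bucketName, "", new File(fileLocation), records);
    upload.waitForCompletion();
} catch (InterruptedException e) {
    Thread.currentThread().interrupt(); // restore the interrupt flag
} finally {
    // Release this TransferManager's own threads; 'false' keeps the
    // caller-supplied s3Client open so other threads can keep using it.
    tm.shutdownNow(false);
}
```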

Upvotes: 4
