Valter Silva

Reputation: 16656

S3CMD timed out

I'm trying to create a good script to back up my files to the Amazon S3 storage service. With that in mind, I'm using the s3cmd tool, which seems very useful for that. But one thing is bothering me, though: sometimes when I'm uploading a file, it gives me the following error message:

s3cmd sync --skip-existing -r --check-md5 -p -H --progress ./files/ s3://company-backup/files/
./files/random-file.1 -> s3://company-backup//files/random-file.1  [1 of 11]
  3358720 of 14552064    23% in   10s   299.86 kB/s  failed
WARNING: Upload failed: ./files/random-file.1 (timed out)
WARNING: Retrying on lower speed (throttle=0.00)
WARNING: Waiting 3 sec...

Looking on the internet, I found this post, which basically says to increase the socket_timeout in the configuration file. But how can I figure out the best timeout for many different file sizes? I mean, sometimes I need to send 100 MB and other times 10 GB. And the worst thing is that when the connection is closed by the timeout, s3cmd tries to send the file again, but it doesn't resume from where it stopped; it starts all over again, which I really need to avoid. So, two questions here:

1 - How do I determine the best socket_timeout value?

2 - How do I keep my upload going from where it stopped (in cases where it times out)?
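
For reference, this is the configuration-file change the post suggests; the setting lives in `~/.s3cfg`, and the value below is illustrative, not a recommendation:

```ini
# ~/.s3cfg -- raise the per-socket timeout (in seconds) before a
# transfer is considered failed; 300 here is just an example value
socket_timeout = 300
```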

Upvotes: 2

Views: 3555

Answers (1)

Rohit

Reputation: 7629

Answering the second part of the question: the new version of s3cmd supports a --continue parameter on gets and puts.
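
For example, a resumed upload might look like this (the bucket and file names are taken from the question; check `s3cmd --help` to confirm the flag is available in your installed version):

```shell
# retry the failed transfer, picking up the partially uploaded
# object instead of restarting it from byte zero
s3cmd put --continue ./files/random-file.1 s3://company-backup/files/
```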

Upvotes: 1
