JamesArmes

Reputation: 1373

RequestTimeout uploading to S3 using PHP

I am having trouble uploading files to S3 from one of our servers. We use S3 to store our backups, and all of our servers are running Ubuntu 8.04 with PHP 5.2.4 and libcurl 7.18.0. Whenever I try to upload a file, Amazon returns a RequestTimeout error. I know there is a bug in our current version of libcurl preventing uploads over 200MB. For that reason we split our backups into smaller files.

We have servers hosted on Amazon's EC2 and servers hosted on customers' "private clouds" (a VMware ESX box behind the company firewall). The specific server I am having trouble with is hosted on a customer's private cloud.

We use the Amazon S3 PHP Class from http://undesigned.org.za/2007/10/22/amazon-s3-php-class. I have tried 200MB, 100MB and 50MB files, all with the same results. We use the following to upload the files:

$s3 = new S3($access_key, $secret_key, false);
$success = $s3->putObjectFile($local_path, $bucket_name,
    $remote_name, S3::ACL_PRIVATE);

I have tried setting curl_setopt($curl, CURLOPT_NOPROGRESS, false); to view the progress bar while the file uploads. The first time I ran it with this option set, it worked. However, every subsequent time it has failed. The file seems to upload at around 3 Mb/s for 5-10 seconds, then the rate drops to 0. After 20 seconds at 0, Amazon returns the "RequestTimeout - Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed." error.
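A minimal raw-cURL sketch for reproducing the stall outside the S3 class (the URL is a placeholder and a real S3 PUT would also need Date/Authorization headers; the point is the CURLOPT_LOW_SPEED_LIMIT/CURLOPT_LOW_SPEED_TIME pair, which makes curl abort a stalled transfer locally instead of waiting for Amazon's RequestTimeout):

<?php
// Abort the upload if the transfer rate stays below 1 KB/s for 20 seconds.
// The URL is a placeholder; a real request needs S3 authentication headers.
$path = '/path/to/backup_id.tar.gz.0000';
$fp   = fopen($path, 'r');

$curl = curl_init('https://bucket-name.s3.amazonaws.com/customer/backup_id.tar.gz.0000');
curl_setopt($curl, CURLOPT_PUT, true);
curl_setopt($curl, CURLOPT_INFILE, $fp);
curl_setopt($curl, CURLOPT_INFILESIZE, filesize($path));
curl_setopt($curl, CURLOPT_NOPROGRESS, false);      // show libcurl's progress meter
curl_setopt($curl, CURLOPT_LOW_SPEED_LIMIT, 1024);  // bytes per second
curl_setopt($curl, CURLOPT_LOW_SPEED_TIME, 20);     // seconds below the limit before aborting
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);

if (curl_exec($curl) === false) {
    echo 'cURL error: ' . curl_error($curl) . "\n";
}
curl_close($curl);
fclose($fp);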

I have tried updating the S3 class to the latest version from GitHub but it made no difference. I also found the Amazon S3 Stream Wrapper class and gave that a try using the following code:

include 'gs3.php';
define('S3_KEY', 'ACCESSKEYGOESHERE');
define('S3_PRIVATE','SECRETKEYGOESHERE');
$local = fopen('/path/to/backup_id.tar.gz.0000', 'r');
$remote = fopen('s3://bucket-name/customer/backup_id.tar.gz.0000', 'w+r');

$count = 0;
while (!feof($local))
{
    $result = fwrite($remote, fread($local, (1024 * 1024)));
    if ($result === false)
    {
        fwrite(STDOUT, $count++.': Unable to write!'."\n");
    }
    else
    {
        fwrite(STDOUT, $count++.': Wrote '.$result.' bytes'."\n");
    }
}

fclose($local);
fclose($remote);

This code reads the file one MB at a time in order to stream it to S3. For a 50MB file, I get "1: Wrote 1048576 bytes" 49 times (the first number changes each time of course) but on the last iteration of the loop I get an error that says "Notice: fputs(): send of 8192 bytes failed with errno=11 Resource temporarily unavailable in /path/to/http.php on line 230".
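errno=11 is EAGAIN: the non-blocking socket's send buffer is full because the connection has stalled, so a single failed fwrite() ends the chunk. A hedged workaround sketch (assuming the stream wrapper will accept more data once the buffer drains; it may not help if the connection itself is dead) retries each chunk a few times and handles partial writes:

// Retry a chunk a few times when the non-blocking socket reports EAGAIN
// (errno=11), handling partial writes, instead of failing on the first fwrite().
function write_with_retry($remote, $chunk, $max_attempts = 5)
{
    $total = 0;
    for ($attempt = 1; $attempt <= $max_attempts; $attempt++) {
        $written = @fwrite($remote, $chunk);
        if ($written > 0) {
            $total += $written;
            if ($written === strlen($chunk)) {
                return $total;                    // whole chunk sent
            }
            $chunk = substr($chunk, $written);    // partial write: keep the remainder
        }
        sleep(1);                                 // give the socket buffer time to drain
    }
    return false;
}

while (!feof($local)) {
    $result = write_with_retry($remote, fread($local, 1024 * 1024));
    fwrite(STDOUT, $count++ . ': ' . ($result === false ? 'Unable to write!' : "Wrote $result bytes") . "\n");
}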

My first thought was that this is a networking issue. We called up the customer and explained the issue and asked them to take a look at their firewall to see if they were dropping anything. According to their network administrator the traffic is flowing just fine.

I am at a loss as to what I can do next. I have been running the backups manually and using SCP to transfer them to another machine and upload them. This is obviously not ideal and any help would be greatly appreciated.

Update - 06/23/2011

I have tried many of the options below, but they all produced the same result. I have found that even trying to scp a file from the server in question to another server stalls immediately and eventually times out. However, I can use scp to download that same file from another machine. This makes me even more convinced that this is a networking issue on the client's end; any further suggestions would be greatly appreciated.

Upvotes: 6

Views: 10135

Answers (5)

Roma Rush

Reputation: 4167

I solved this problem in another way. My bug was that the filesize() function was returning a stale cached size value. The fix is simply to call clearstatcache().
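A minimal sketch of that fix, assuming the backup file was created or modified after an earlier stat call in the same script:

// Clear PHP's stat cache so the size reported by filesize() for the upload
// is current, not a value cached earlier in the same script.
clearstatcache();

$s3 = new S3($access_key, $secret_key, false);
$s3->putObjectFile($local_path, $bucket_name, $remote_name, S3::ACL_PRIVATE);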

Upvotes: 2

usef_ksa

Reputation: 1659

This problem occurs when you try to upload the same file again. Example:

$s3 = new S3('XXX','YYYY', false);
$s3->putObjectFile('file.jpg','bucket-name','file.jpg');
$s3->putObjectFile('file.jpg','bucket-name','newname-file.jpg');

To fix it, just copy the file, give it a new name, and then upload it normally.

Example:

$s3 = new S3('XXX','YYYY', false);
$s3->putObjectFile('file.jpg','bucket-name','file.jpg');

// copy file.jpg to newname-file.jpg, then upload the copy
copy('file.jpg', 'newname-file.jpg');
$s3->putObjectFile('newname-file.jpg','bucket-name','newname-file.jpg');

Upvotes: 5

jbrass

Reputation: 941

You should take a look at the AWS SDK for PHP. This is the AWS PHP library formerly known as Tarzan and CloudFusion.

http://aws.amazon.com/sdkforphp/

The S3 class included with it is rock solid. We use it to upload multi-GB files all the time.
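For reference, a hedged sketch of an upload with that SDK; the AmazonS3 class and create_object() method are from the 1.x releases as I recall them, so check the docs before relying on the exact calls:

require_once 'sdk.class.php';  // AWS SDK for PHP 1.x

// Credentials are normally set in config.inc.php; the constructor picks them up.
$s3 = new AmazonS3();

$response = $s3->create_object('bucket-name', 'customer/backup_id.tar.gz.0000', array(
    'fileUpload' => '/path/to/backup_id.tar.gz.0000',  // stream the file from disk
    'acl'        => AmazonS3::ACL_PRIVATE,
));

if (!$response->isOK()) {
    error_log('S3 upload failed: ' . print_r($response->body, true));
}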

Upvotes: 1

Rakesh Sankar

Reputation: 9415

There are quite a few solutions available. I had this exact problem, but I didn't want to write code to figure it out.

Initially I was searching for a way to mount an S3 bucket on the Linux machine, and I found something interesting:

s3fs - http://code.google.com/p/s3fs/wiki/InstallationNotes - this did work for me. It uses a FUSE file system + rsync to sync the files to S3. It keeps a copy of all filenames locally and makes them look like regular files and folders.

This saved us a bunch of time, with none of the headache of writing code to transfer the files.

Then, while looking for other options, I found a command-line script that can help you manage your S3 account:

s3cmd - http://s3tools.org/s3cmd - this looks pretty straightforward.

[UPDATE] Found one more CLI tool - s3sync

s3sync - https://forums.aws.amazon.com/thread.jspa?threadID=11975&start=0&tstart=0 - found in the Amazon AWS community.

I don't see much difference between the two. If you are not worried about disk space, I would choose s3fs over s3cmd; having the files on a local disk is more comfortable, and you can see them right there.

Hope it helps.

Upvotes: 1

Jason Palmer

Reputation: 731

I have experienced this exact same issue several times.

I have many scripts right now which are uploading files to S3 constantly.

The best solution that I can offer is to use the Zend libraries (either the stream wrapper or direct S3 API).

http://framework.zend.com/manual/en/zend.service.amazon.s3.html

Since the latest release of Zend Framework, I haven't seen any issues with timeouts. But if you find that you are still having problems, a simple tweak will do the trick.

Simply open the file Zend/Http/Client.php and modify the 'timeout' value in the $config array. At the time of writing, it was on line 114. Before the latest release I was running with 120 seconds, but now things are running smoothly with a 10-second timeout.
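If you would rather not edit the library file, the same 'timeout' option can probably be set at runtime on the shared HTTP client; the static getHttpClient() call below is from the ZF1 service API as I remember it, so treat this as a sketch:

require_once 'Zend/Service/Amazon/S3.php';

// Raise or lower the read timeout on the HTTP client the S3 service uses,
// instead of editing the default in Zend/Http/Client.php.
Zend_Service_Amazon_S3::getHttpClient()->setConfig(array('timeout' => 10));

$s3 = new Zend_Service_Amazon_S3($access_key, $secret_key);
$s3->putFile('/path/to/backup_id.tar.gz.0000', 'bucket-name/customer/backup_id.tar.gz.0000');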

Hope this helps!

Upvotes: 1
