Reputation: 4214
I'm writing a PHP app that will process an uploaded file: a ZIP that contains a few CSVs, images, etc. The process requires user input if a warning/error arises while processing the file, and the same file should be available to be re-processed later. On a normal server, I'd use the filesystem to store the file and keep the path in my DB.
However, on Heroku I can't do this. I'm already using AWS S3 to store other file uploads. Should I store these on S3 too and, each time I need one, download it to a temp dir, process it, upload it back, and delete the local copy? Or is there a way to process the file while it's on S3? Maybe mount the S3 bucket?
Upvotes: 0
Views: 631
Reputation: 3005
You can use the AWS SDK's S3 stream wrapper; once it is set up, your PHP script can access the files in your bucket as if they were part of the local filesystem. You could then copy the ZIP from S3 to a local directory, unzip and process it normally, and clean up afterwards; the original file will still be in S3.
http://docs.aws.amazon.com/aws-sdk-php/guide/latest/feature-s3-stream-wrapper.html
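A minimal sketch of that approach, following the SDK v2 style used in that guide. The bucket name and object key are placeholders, and it assumes credentials come from environment variables (e.g. Heroku config vars):

    <?php
    require 'vendor/autoload.php'; // AWS SDK for PHP installed via Composer

    use Aws\S3\S3Client;

    // Hypothetical credentials setup; adjust to your own configuration.
    $client = S3Client::factory(array(
        'key'    => getenv('AWS_ACCESS_KEY_ID'),
        'secret' => getenv('AWS_SECRET_ACCESS_KEY'),
    ));

    // Register the s3:// protocol so S3 objects look like local files.
    $client->registerStreamWrapper();

    // Copy the uploaded ZIP from S3 to a local temp file.
    $local = tempnam(sys_get_temp_dir(), 'upload_');
    copy('s3://my-bucket/uploads/archive.zip', $local);

    // Extract and process normally; the original object stays in S3.
    $zip = new ZipArchive();
    if ($zip->open($local) === true) {
        $zip->extractTo(sys_get_temp_dir() . '/extracted');
        $zip->close();
    }

    unlink($local); // clean up the local copy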
Alternatively...
I'm not sure about the Heroku specifics, but on Linux you can mount a bucket using s3fs (FUSE over Amazon): https://code.google.com/p/s3fs/wiki/FuseOverAmazon
That will allow you to mount a bucket like a local file system and interact with it in that way.
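For example, if the bucket were mounted at /mnt/s3 (a hypothetical mount point), the PHP side would just use ordinary filesystem calls:

    <?php
    // Assumes the bucket is already mounted at /mnt/s3 via s3fs before PHP runs.
    $zip = new ZipArchive();
    if ($zip->open('/mnt/s3/uploads/archive.zip') === true) {
        $zip->extractTo(sys_get_temp_dir() . '/extracted');
        $zip->close();
    }
    // Anything written back under /mnt/s3 is persisted to the bucket by s3fs.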
Upvotes: 1