Reputation: 115
As part of the process, my site creates a CSV file from a pandas DataFrame:
df.to_csv()
As the application is deployed on Heroku, where there is no persistent file storage, I want to save this file to Amazon S3. I have this set up for uploaded files and static files, but I can't get it to work for this. Any help much appreciated.
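Note that `to_csv` called without a path returns the CSV as a string rather than writing to disk, so no filesystem is needed at all. A minimal illustration (the DataFrame contents here are made up):

```python
import pandas as pd

df = pd.DataFrame({"colm1": [1, 2], "colm2": [3, 4]})
csv_text = df.to_csv(index=False)  # no path given -> returns the CSV as a str
```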
Upvotes: 4
Views: 2317
Reputation: 601
Take a look at Boto. The code is really easy, and you don't have to worry about writing the file out; just post it directly.
heroku config:add AWS_ACCESS_KEY_ID='your_key_here'
heroku config:add AWS_SECRET_ACCESS_KEY='your_secret_here'
Boto will use the config automatically:
import boto
from boto.s3.key import Key

filecontents = "colm1, colm2, etc"

c = boto.connect_s3()          # picks up the AWS_* config vars automatically
b = c.get_bucket('mybucket')   # substitute your bucket name here
k = Key(b)
k.key = 'myfile.csv'           # the object name inside the bucket
k.set_contents_from_string(filecontents)

k.get_contents_as_string()
'colm1, colm2, etc'
Read more: http://boto.readthedocs.org/en/latest/s3_tut.html
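Combining this with pandas: the CSV can be rendered entirely in memory and handed to boto as a string. A sketch, with `'mybucket'` and the key name as placeholders; the upload lines are commented out because they need live AWS credentials:

```python
import io

import pandas as pd

# Render the DataFrame to CSV in memory; no temporary file is needed.
df = pd.DataFrame({"colm1": [1, 2], "colm2": [3, 4]})
buf = io.StringIO()
df.to_csv(buf, index=False)
csv_body = buf.getvalue()

# Upload step (requires credentials; bucket and key names are placeholders):
# import boto
# from boto.s3.key import Key
# k = Key(boto.connect_s3().get_bucket('mybucket'))
# k.key = 'report.csv'
# k.set_contents_from_string(csv_body)
```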
Upvotes: 1
Reputation: 16644
Maybe you can use django-storages to save your files to Amazon S3. Add your AWS credentials to your config like this:
heroku config:add AWS_ACCESS_KEY_ID='your_key_here'
heroku config:add AWS_SECRET_ACCESS_KEY='your_secret_here'
heroku config:add AWS_STORAGE_BUCKET_NAME='your_bucket_name_here'
and retrieve them in your settings.py like this:
import os

AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = os.environ.get('AWS_STORAGE_BUCKET_NAME')
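With the credentials in place, file saves only go to S3 once Django's default storage backend is switched to django-storages. A minimal settings fragment (the dotted path is the boto backend from django-storages; newer releases use the boto3 variant instead):

```python
# settings.py (continued): route Django's default file storage to S3.
# Newer django-storages versions: 'storages.backends.s3boto3.S3Boto3Storage'
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
```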
Update: You can use StringIO to build the CSV in memory and save it through a model field, something like this:
from django.core.files.base import ContentFile
from StringIO import StringIO  # on Python 3: from io import StringIO

output = StringIO()
df.to_csv(output)
mymodel.myfield.save('django_test.csv', ContentFile(output.getvalue()))
Upvotes: 1