Gyanendra Singh

Reputation: 1005

Data "eaten up" on file upload to Amazon S3 server

I am trying to upload an XML file to an Amazon S3 server. My Ruby code looks like this:

AWS::S3::S3Object.store("dir/data.xml",
                        "#{xml.target!}",
                        "bucket",
                        :access       => :private,
                        :content_type => 'text/xml')

The XML is an RSS feed file. When I download the uploaded file from the server, its last line is missing:

</rss>

Removing the optional :content_type parameter has no effect on the output. However, appending extra characters to the data makes the previously missing characters appear correctly:

AWS::S3::S3Object.store("dir/data.xml",
                        "#{xml.target!}         ",
                        "bucket",
                        :access       => :private,
                        :content_type => 'text/xml')

Even though this works around my problem, I am reluctant to use this code in production. I would also like to know what is going wrong.

In fact, when I write the same data to a file on my local machine, it comes out correctly:

file = File.new("/path/feed.xml", "w")
file.write(xml.target!)
file.close

Update: I am facing the same problem when uploading a CSV file as well. I notice that for very large files, even more data is truncated: in the XML file, not only the closing rss tag but several other tags are missing, and similarly for the CSV file. How do I resolve this?

Upvotes: 1

Views: 386

Answers (1)

Gyanendra Singh

Reputation: 1005

Sending xml.target! directly as the data was somehow causing this problem. To solve it, I wrapped the data in an in-memory IO object using StringIO.new.

For the CSV, I used something like this:

csv_string = StringIO.new(csv_string)
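
For the XML upload, a minimal sketch of the same workaround could look like the following; the key "dir/data.xml", the bucket name, and the Builder xml object are taken from the question and are placeholders here:

require 'aws/s3'
require 'stringio'

# Wrap the generated markup in an in-memory IO object instead of
# passing the raw string; store will then read the data from it.
xml_data = StringIO.new(xml.target!)

AWS::S3::S3Object.store("dir/data.xml",
                        xml_data,
                        "bucket",
                        :access       => :private,
                        :content_type => 'text/xml')

Presumably, handing the gem an IO object makes it read the data out as a stream rather than rely on the raw string, which avoids the truncation seen when passing xml.target! directly.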

Upvotes: 2
