Reputation: 565
I'm having an issue downloading complete data from a website. I am doing the following
var request = require('request');

request({url: 'http://somehost/somefile.txt'}, function (error, response, body) {
  if (!error && response.statusCode == 200) {
    console.log(response.headers);
    console.log(body.length);
  }
});
With the code above, the downloaded body is 64,472 bytes, but the Content-Length header reports 65,536, and the resulting file is malformed.
If I use wget to obtain the file, the resulting length is 65,536 bytes and the file is valid.
Any ideas how to get Node to duplicate the results of wget? I tried changing the User-Agent to wget in case that was it.
Thanks!
Upvotes: 0
Views: 899
Reputation: 106736
UPDATE: request has had an encoding option for a while now and it's easier to just use that instead of manually buffering. For binary data you can set encoding: null as mentioned in the request readme to get back a single Buffer instance containing the binary data instead of a (utf8) string. Any non-null encoding value will be passed directly to the internal Buffer's .toString() method.
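A minimal sketch of that approach, using the URL from the question, might look like this:

var request = require('request');

// With encoding: null, request does not decode the response body,
// so `body` is a Buffer containing the raw bytes.
request({url: 'http://somehost/somefile.txt', encoding: null}, function (error, response, body) {
  if (!error && response.statusCode == 200) {
    console.log(body.length); // should now match the Content-Length header
  }
});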
The problem is that the request module buffers response data as a utf8 string when you pass in a callback as the second argument. So for binary data (or for textual data in an encoding that node does not support out of the box), you need to buffer the data manually. For example:
var request = require('request');

request({url: 'http://somehost/somefile.txt'}).on('response', function(res) {
  // res === http.IncomingMessage object
  var buffer = [],
      bufsize = 0;
  res.on('data', function(data) {
    // collect each raw Buffer chunk as it arrives
    buffer.push(data);
    bufsize += data.length;
  }).on('end', function() {
    var body = Buffer.concat(buffer, bufsize);
    // body now contains the raw binary data
  });
});
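If the goal is to reproduce what wget saves to disk, one way to sanity-check the buffered result is to write it out with the core fs module and compare its size to the Content-Length header (a sketch; the output filename is arbitrary):

var fs = require('fs');

// Inside the 'end' handler above, once `body` has been assembled:
fs.writeFile('somefile.txt', body, function (err) {
  if (err) throw err;
  console.log('wrote', body.length, 'bytes'); // should match Content-Length (65,536)
});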
Upvotes: 4