I ran into unexpected behavior with fs.createReadStream and fs.createWriteStream, and I hope somebody can point out where I am making the wrong assumption.
I create a readable and a writable stream like this:
let readableStream = fs.createReadStream('./lorem ipsum.doc');
let writableStream = fs.createWriteStream('./output');
Why, if I send the read stream to the write stream like this,
let data, chunk;
readableStream
  .on('readable', () => {
    while ((chunk = readableStream.read()) !== null) {
      data += chunk;
    }
  })
  .on('end', () => {
    writableStream.write(data);
    console.log("done");
  });
do I end up with a discrepancy (extra bytes) in the output file,
while, if I stream like this,
let chunk;
readableStream
  .on('readable', () => {
    while ((chunk = readableStream.read()) !== null) {
      writableStream.write(chunk);
    }
  })
  .on('end', () => {
    console.log("done");
  });
everything is fine and as expected?
In other words: in the first example, when and where is the additional overhead in bytes added? Why is it added? What goes wrong?
Thanks for enlightening me!
Note: I am aware of pipe (which gives me the correct output file); these examples are just for my understanding.
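For reference, the pipe version I mean is just something like:
// pipe() forwards the raw Buffer chunks as-is, so the output
// file is a byte-identical copy of the input
readableStream.pipe(writableStream);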
I am guessing the point is that in the first demo you use data +=, which implicitly converts each binary Buffer chunk to a character string; that conversion is what adds the extra bytes. Could you please try the same conversion in the second demo, i.e.:
var s = '' + chunk; // string concatenation converts the Buffer to a string, just like data += does
writableStream.write(s);
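A quick way to see where the extra bytes come from (a self-contained sketch, not tied to your files): bytes that are not valid UTF-8 get replaced by U+FFFD during the Buffer-to-string conversion, and U+FFFD encodes to three bytes.
var b = Buffer.from([0xff, 0xd8]);         // two raw bytes that are not valid UTF-8
var s = '' + b;                            // implicit Buffer -> string conversion
console.log(b.length);                     // 2
console.log(Buffer.byteLength(s, 'utf8')); // 6: each invalid byte became U+FFFD (3 bytes)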
Update: the correct way to combine stream buffers is as in your comment:
var iconv = require('iconv-lite'); // assuming the iconv-lite module, which provides iconv.decode(buf, encoding)

var chunks = [];
var size = 0;
readableStream
  .on('data', function (chunk) {
    chunks.push(chunk); // keep the raw Buffers; no string conversion
    size += chunk.length;
  })
  .on('end', function () {
    var buf = Buffer.concat(chunks, size); // use buf to write to the write stream
    var str = iconv.decode(buf, 'utf8');   // use str to console.log the text; supports all languages, e.g. Asian scripts
  });
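With that in place, writableStream.write(buf) produces a byte-identical output file; str is only needed if you want to log the content as text.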