Reputation: 67
I'm building a chunked upload function. Locally it works properly, but in production the file that appendFile assembles from the pieces is incomplete.
The code works as follows: on the frontend I split the file into 1 MB pieces, generate a unique name for it with a helper function, send each piece to the backend, and the backend runs appendFile on the file with that name.
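Roughly, the frontend side is a sketch like this (the endpoint, header names, and the generateUniqueName helper here are illustrative placeholders, not my actual code):

const CHUNK_SIZE = 1024 * 1024; // 1 MB pieces

async function uploadFile(file) {
  const fileName = generateUniqueName(file.name); // placeholder for the helper that builds the unique name
  const chunksQuantity = Math.ceil(file.size / CHUNK_SIZE);

  for (let chunkId = 0; chunkId < chunksQuantity; chunkId++) {
    const piece = file.slice(chunkId * CHUNK_SIZE, (chunkId + 1) * CHUNK_SIZE);

    // One request per piece; the backend appends each piece to the file named fileName.
    await fetch("/upload", {
      method: "POST",
      headers: {
        "X-File-Name": fileName,
        "X-Chunk-Id": String(chunkId),
        "X-Chunks-Quantity": String(chunksQuantity),
      },
      body: piece,
    });
  }
}

On the backend, the request handler appends each piece to the file: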
request.on("data", (part) => {
chunk.push(part);
}).on("end", async () => {
const firstChunk = chunkId === 0;
const lastChunk = (chunkId) === (chunksQuantity) -1;
const completedChunk = Buffer.concat(chunk);
if (firstChunk && existsSync(`${__dirname}/../../uploads/videos/${fileName}`)) {
unlinkSync(`${__dirname}/../../uploads/videos/${fileName}`);
}
appendFile(`${__dirname}/../../uploads/videos/${fileName}`, completedChunk, async (err)=> {
if(err) throw err;
if(lastChunk) {
//I upload the video to s3 and follow our internal flow
}
});
appendFile function
Upvotes: 1
Views: 274
Reputation: 67
I finally figured out what was happening: our production environment runs on Kubernetes, which load-balances the requests. Since we were sending the file as separate buffer pieces of at most 5 MB, the pieces were routed to different pods, so when the file was assembled each pod only had some of them.
We changed the upload to send the file as a single streaming request (Content-Type: 'application/octet-stream') and everything worked.
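For anyone hitting the same problem, here is a simplified sketch of the streaming approach (illustrative endpoint, header, and file names, not our exact code): the frontend sends the whole file in one request, and the backend pipes the request body straight to disk, so a single pod receives everything.

// Frontend: send the whole file in one streaming request.
await fetch("/upload", {
  method: "POST",
  headers: {
    "Content-Type": "application/octet-stream",
    "X-File-Name": fileName,
  },
  body: file, // the File/Blob is sent as the raw request body
});

// Backend: pipe the request body directly into the destination file.
const { createWriteStream } = require("fs");

const writeStream = createWriteStream(`${__dirname}/../../uploads/videos/${fileName}`);
request.pipe(writeStream);
writeStream.on("finish", () => {
  // upload the video to S3 and continue the internal flow
});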
Upvotes: 2