root

Reputation: 1

Getting Error to download specific parts of a video file stored in GridFS as chunks

I'm currently developing a video streaming server for a college project, using MongoDB GridFS with the official MongoDB Node.js driver. I'm trying to download specific parts of a video file stored in GridFS as chunks, but I've run into a 'FileNotFound' error in the process.

app.get('/download-video', async (req, res) => {
    if (db === null) {
        console.error('Error connecting to MongoDB');
        res.status(500).send('Error connecting to MongoDB');
        return;
    }
    const chunkSizeBytes = 50 * 1024; // 50kb chunk size as an example

    // Create a GridFSBucket with the desired chunk size
    const bucket = new mongodb.GridFSBucket(db, { chunkSizeBytes });

    // Specify the file ID you want to retrieve
    const fileId = '514a65328a1e0fef0c936c1109bbc946.mp4';

    // Get the file information
    const fileInfo = await db.collection('fs.files').findOne({ filename: fileId });
    if (!fileInfo) {
        console.error('File not found');
        res.status(400).json({ message: "file not found" });
        return;
    }
    // Calculate the total number of chunks based on the file size and chunk size
    const totalChunks = Math.ceil(fileInfo.length / chunkSizeBytes);
    // Array to store downloaded chunks
    const chunksArray = [];
    try {
        // Download each chunk one by one
        for (let i = 0; i < totalChunks; i++) {
            const downloadStream = bucket.openDownloadStream(fileId, { start: i * chunkSizeBytes });
            let chunkData = Buffer.from([]);

            downloadStream.on('data', (chunk) => {
                // Append the chunk to the existing data buffer
                chunkData = Buffer.concat([chunkData, chunk]);

            });

            downloadStream.on('end', () => {
                // Process the downloaded chunk, e.g., save to a file, send to the client, etc.
                console.log(`Downloaded chunk ${i + 1} of ${totalChunks}`);

                // Add the chunk to the array
                chunksArray.push(chunkData);

                // If this is the last chunk, concatenate and process the complete video
                if (i === totalChunks - 1) {
                    // Concatenate all chunks into a single Buffer
                    const completeVideo = Buffer.concat(chunksArray);

                    // Specify the path where you want to save the complete video
                    const outputPath = 'path/to/output/video.mp4';

                    // Save the complete video to a file
                    fs.writeFileSync(outputPath, completeVideo);

                    console.log('Video saved:', outputPath);
                }
            });

            downloadStream.on('error', (error) => {
                console.error(`Error downloading chunk ${i + 1}:`, error);
                // res.send(400).json({message:"oops error"});
                return;

            });
        }

    } catch (error) {
        console.log(error.message);
        res.json({ message: error.message });
        //Error downloading chunk 102: Error: FileNotFound: file 514a65328a1e0fef0c936c1109bbc946.mp4 was not found
    }
});


I'm seeking assistance to understand and resolve this issue. Any guidance or insights would be greatly appreciated.

Upvotes: 0

Views: 92

Answers (1)

VC.One

Reputation: 15936

I don't use MongoDB, but from general coding experience, here are the things that look wrong in the code you've shown:

(1) Use an actual (decimal) bytes length to get the required amount of bytes

Try replacing:

const chunkSizeBytes = 50 * 1024; // 50kb of chunk size as an example

With this new calculation version:

const chunkSizeBytes = 50 * 1000; // 50 000 bytes of chunk size as an example

Don't worry about file-storage technicalities like 1 KiB == 1024 bytes; work with the decimal count of the bytes you actually need. This avoids "over-shooting" the true byte range of the file.

(2) Double-check your reading position.

Your code (example where i is 101):

const downloadStream = bucket.openDownloadStream(fileId, { start: i * chunkSizeBytes });

Check (on a calculator) whether 101 * chunkSizeBytes is still within the file's byte length.

Also try setting an end position alongside your start position in the options object.

For example:

{ start: (i * chunkSizeBytes), end: ((i * chunkSizeBytes) + chunkSizeBytes) }

Note: the last chunk can be given a smaller size, to pick up the remaining bytes that didn't fill a complete chunk in the previous reads.

Without an end position, it's possible that each read downloads from its start position all the way to the end of the file, only for the next iteration to re-download overlapping data (again up to the end of the file), repeating those steps until an error occurs later on.
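As a rough sketch of that range arithmetic (plain Node, no MongoDB involved; `chunkRanges` is a hypothetical helper, not part of the driver), the start/end pairs for each chunk, including the shorter final one, could be computed like this:

```javascript
// Compute { start, end } byte ranges for a file of `fileLength` bytes,
// read in pieces of `chunkSizeBytes`. The final range is shorter whenever
// the file length is not an exact multiple of the chunk size.
function chunkRanges(fileLength, chunkSizeBytes) {
    const ranges = [];
    for (let start = 0; start < fileLength; start += chunkSizeBytes) {
        // Clamp the end so the last chunk never overshoots the file length
        const end = Math.min(start + chunkSizeBytes, fileLength);
        ranges.push({ start, end });
    }
    return ranges;
}

// Example: a 125 000-byte file in 50 000-byte chunks
// gives two full chunks plus a final 25 000-byte chunk.
console.log(chunkRanges(125000, 50000));
```

Each `{ start, end }` pair can then be passed straight into the second argument of `openDownloadStream`, so no read runs past the end of its own chunk.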

(3) Return after saving the file.

In your if (i === totalChunks - 1) block you should exit the function with a return;.

console.log('Video saved:', outputPath);
return;

This guarantees an exit and prevents any further attempts to read chunks from the downloaded file.

Upvotes: 0

Related Questions