Or Smith

Reputation: 3606

upload and get chunks from GridFS (mongodb)

I'm new to MongoDB, and I'm trying to upload a file to GridFS in chunks. That is, I receive a file from my client divided into chunks, and I want to upload it to MongoDB so that the file is stored as a collection of chunks.

In other words, I want to implement random access in gridFS.

How can I do this? And how can I retrieve the file afterwards (which, again, is divided into parts in my MongoDB; would the ID be the same)?

The application is written in Node.js.

Upvotes: 1

Views: 4752

Answers (1)

Farid Nouri Neshat

Reputation: 30430

Generally in Node, sequential chunks of data are handled using streams. If you are dealing with streams in this case, then the gridfs-stream module is your best bet. Check the example and also the stream handbook. You can write a file by piping a stream into it, or read a file by piping it into another stream.
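For example, if your chunks are arriving as a stream, you can pipe them straight into GridFS and back out. A minimal sketch, using the gfs instance set up below and a hypothetical local path:

var fs = require('fs');

// Pipe a readable stream (here a local file) into GridFS...
fs.createReadStream('/some/path/my_file.txt')
    .pipe(gfs.createWriteStream({ filename: 'my_file.txt' }));

// ...and later pipe the stored file back out into any writable stream.
gfs.createReadStream({ filename: 'my_file.txt' })
    .pipe(fs.createWriteStream('/some/path/copy_of_my_file.txt'));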

If you don't already have a stream, you can also use it manually. For writing data, just use Node's .write and .end methods:

var mongo = require('mongodb');
var Grid = require('gridfs-stream');

// create or use an existing mongodb-native db instance
var db = new mongo.Db('yourDatabaseName', new mongo.Server("127.0.0.1", 27017));
var gfs = Grid(db, mongo);

// streaming to gridfs
var writestream = gfs.createWriteStream({
    filename: 'my_file.txt'
});

writestream.write('this is a line\n');
writestream.write('second line coming some time after\n');
writestream.write('last line');
writestream.end(); // Add the EOF marker and finish writing the file.

Or read from the file:

var readstream = gfs.createReadStream({
  filename: 'my_file.txt'
});

readstream.on('data', function (data) {
    // We got a buffer of data...
});
readstream.on('end', function () {
    // File finished reading...
});
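As for the ID part of your question: the write stream emits a 'close' event carrying the stored file document, and its _id is what identifies the file from then on. A sketch, using the gfs and writestream from above (attach the listener before calling .end()):

writestream.on('close', function (file) {
    // file is the GridFS file document; file._id is the identifier
    gfs.createReadStream({ _id: file._id }).pipe(process.stdout);
});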

Note that only the old streams API works here. For the new API you have to wrap the stream.
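One way to do that wrapping, as a sketch using Node's built-in Readable.wrap():

var Readable = require('stream').Readable;

// Wrap the old-style (streams1) stream so it exposes the streams2 interface
var wrapped = new Readable().wrap(gfs.createReadStream({ filename: 'my_file.txt' }));

wrapped.on('readable', function () {
    var chunk;
    while ((chunk = wrapped.read()) !== null) {
        // consume each buffered chunk here...
    }
});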

You can pass a chunkSize option if you want to change the amount of data handled each time.
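For example (a sketch; chunkSize is the GridFS storage chunk size in bytes, so it governs how much data each stored chunk holds):

// Store the file in 1 MB chunks instead of the driver's default
var writestream = gfs.createWriteStream({
    filename: 'my_file.txt',
    chunkSize: 1024 * 1024
});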

Now, if your data is not coming in or going out sequentially, or you want to get closer to the metal (more low-level), you can use the mongodb driver's GridStore API.

For writing files, use the write method of a GridStore instance:

var Db = require('mongodb').Db,
    Server = require('mongodb').Server,
    GridStore = require('mongodb').GridStore,
    ObjectID = require('mongodb').ObjectID;

var db = new Db('test', new Server('localhost', 27017));
// Establish connection to db
db.open(function(err, db) {
  // Our file ID
  var fileId = new ObjectID();

  // Create a handle for a new file in write ('w') mode
  var gridStore = new GridStore(db, fileId, 'w');

  // Open the file for writing
  gridStore.open(function(err, gridStore) {

    // Write a text string
    gridStore.write('Hello world', function(err, gridStore) {

      // Write a buffer
      gridStore.write(new Buffer('Buffer Hello world'), function(err, gridStore) {

        // Close the file...
        gridStore.close(function(err, result) {});
      });
    });
  });
});

If you want random writes, you can change the write position by calling the .seek method.
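For example, a sketch only: this reuses db and fileId from the snippet above and assumes 'w+' mode, which opens the existing file for writing without truncating it:

// Re-open the existing file for writing without truncating it
var gridStore = new GridStore(db, fileId, 'w+');
gridStore.open(function(err, gridStore) {
  // Move the write head to byte 6, then overwrite from there
  gridStore.seek(6, function(err, gridStore) {
    gridStore.write('mongo', function(err, gridStore) {
      gridStore.close(function(err, result) {});
    });
  });
});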

For reading, you can use the .read method, which is nearly the counterpart of the .write method (here gs is a GridStore instance opened in read mode, 'r'):

gs.seek(0, function() {
  // Read 1024 bytes...
  gs.read(1024, function(err, data) {
    // Now go to position 2048
    gs.seek(2048, function () {
      // Read another 1024 bytes...
      gs.read(1024, function(err, data) {
      });      
    });
  });
});

There's also the static GridStore.read method, which is a higher-level shortcut for the functions above:

// Read 1024 bytes starting from the byte 2048.
GridStore.read(db, 'file1', 1024, 2048, function (err, data) {

});

I hope that helps. That's all there is to reading and writing in chunks.

Upvotes: 3
