Reputation: 1080
I was saving my files on the filesystem of my server, and now I want to store them in MongoDB (for easier backups and such). I want to store files of 4-5 MB maximum, and I tried saving them with mongoose using the Buffer type. I saved and retrieved them successfully, but I noticed significantly slower performance when saving and retrieving files of 4 or 5 MB.
My schema:
const mongoose = require('mongoose');
const Schema = mongoose.Schema;

// File documents store the raw bytes inline as a Buffer.
let fileSchema = new Schema({
    name: {type: String, required: true},
    _announcement: {type: Schema.Types.ObjectId, ref: 'Announcements'},
    data: Buffer,
    contentType: String
});
How I retrieve them from the expressjs server:
let name = encodeURIComponent(file.name);
res.writeHead(200, {
    'Content-Type': file.contentType,
    'Content-Disposition': 'attachment; filename*=UTF-8\'\'' + name
});
// Buffer.from() replaces the deprecated new Buffer() constructor;
// res.end() also closes the response, which res.write() alone does not.
res.end(Buffer.from(file.data));
My question is: should I use zlib functions like deflate to compress the buffer before saving it in MongoDB, and then decompress the binary before sending it to the client? Would this make the whole process faster? Am I missing something?
Upvotes: 16
Views: 5107
Reputation: 434
It seems that you are trying to save a really large amount of information with MongoDB. GridFS is designed for this case; from the MongoDB documentation:
Instead of storing a file in a single document, GridFS divides the file into parts, or chunks, and stores each chunk as a separate document. By default, GridFS uses a chunk size of 255 kB; that is, GridFS divides a file into chunks of 255 kB, with the exception of the last chunk.
The documentation then lists the situations in which you may want to store information this way:
In some situations, storing large files may be more efficient in a MongoDB database than on a system-level filesystem.
- If your filesystem limits the number of files in a directory, you can use GridFS to store as many files as needed.
- When you want to access information from portions of large files without having to load whole files into memory, you can use GridFS to recall sections of files without reading the entire file into memory.
- When you want to keep your files and metadata automatically synced and deployed across a number of systems and facilities, you can use GridFS. When using geographically distributed replica sets, MongoDB can distribute files and their metadata automatically to a number of mongod instances and facilities.
Hope it was useful :)
Upvotes: 8
Reputation: 2465
If you absolutely feel that you must store the images in your database and not on the filesystem or another cloud service, I won't comment on that choice.
With respect to your specific question, GridFS is a respectable option that people use in production as well, and it has served its purpose quite well. I personally used it a couple of years back, but my use case changed, so I moved to another medium. (Please check the SO link where people discuss its performance.)
What is of concern is that you have 4 MB images; unless you are serving images that depend heavily on quality and high resolution, that should not happen. Please compress your images before storing them, on either the frontend or the backend (your choice); if you compress them on the frontend, it will also reduce the transmission time.
Module for node.js side compression
Upvotes: 0
Reputation: 1342
I suggest you use GridFS;
it's faster and very easy to use.
For more info, please check this URL: https://docs.mongodb.com/manual/core/gridfs/.
If you have any questions about GridFS,
let me know.
Upvotes: 0