Reputation: 1401
I want a database to insert simple data (1 table with 10 columns). The problem is that I need to insert 200,000 records per second. My machine is an HP server with 96 GB of RAM and a 300 TB NAS drive. I have no idea how to achieve this insertion rate.
I have heard about MongoDB. Could you please point me in the right direction?
Upvotes: 0
Views: 1222
Reputation: 2002
MongoDB is potentially a good choice here, but 200k inserts per second is a big number, so careful design will be necessary with whatever database you use.
200k docs/sec × 300 bytes/document = 60 MB/s, or roughly 600 Mbit/s on the wire once protocol overhead is included, which is close to saturating a gigabit Ethernet adapter, so that is something to keep an eye on.
In a case like this, keep your field names short, since the field names are stored in the BSON of every document.
The _id field should be (approximately) monotonically increasing so that the entire _id index does not need to stay in RAM. This is true by default, since BSON ObjectIds increase that way. If you add more indexes, you likely won't be able to hit 200k per second with one server; perhaps it could be done with SSDs.
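For illustration, here is a minimal pymongo sketch of that style of insert; the connection string and the mydb/events names are assumptions, not part of your setup:

```python
from pymongo import MongoClient

# Hypothetical connection string and namespace; adjust for your environment.
coll = MongoClient("mongodb://localhost:27017")["mydb"]["events"]

# Short field names ("t", "s", "v" instead of "timestamp", "sensor_id", "value")
# keep the BSON overhead of each document small.
docs = [{"t": i, "s": i % 10, "v": i * 0.5} for i in range(10_000)]

# Let the driver generate _id: ObjectIds embed a timestamp, so new inserts land
# at the "right edge" of the _id index and only that hot part needs to be in RAM.
# ordered=False lets the server continue past individual failures, which helps
# sustain a high insert rate.
result = coll.insert_many(docs, ordered=False)
print(len(result.inserted_ids), "documents inserted")

# Avoid extra calls like coll.create_index("s") unless queries truly need them;
# every secondary index adds work to each insert.
```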
I'm not sure you will get to 200k per second on a single server with MongoDB; 100k should be pretty easy, depending on the speed of the box. If you shard, you can of course reach 200k per second across multiple machines.
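If you do go the sharded route, enabling it from pymongo looks roughly like the sketch below. This assumes an already-deployed sharded cluster reached through a mongos router and the same hypothetical mydb/events names; a hashed _id key is one common way to spread inserts evenly across the shards.

```python
from pymongo import MongoClient

# Connect to the mongos router of an existing sharded cluster (hypothetical host).
client = MongoClient("mongodb://mongos-host:27017")

# Enable sharding on the database, then shard the collection on a hashed _id.
client.admin.command("enableSharding", "mydb")
client.admin.command("shardCollection", "mydb.events", key={"_id": "hashed"})
```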
Is there any way to bundle these small rows into larger documents that make sense for your application? If so, you will get higher throughput: for example, one document holding the equivalent of ten of these rows becomes a single insert and a single key in the _id index.
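As a sketch of what that bundling could look like (the field names and the ten-row batch size are just placeholders):

```python
from datetime import datetime, timezone
from pymongo import MongoClient

# Hypothetical connection string and namespace; adjust for your environment.
coll = MongoClient("mongodb://localhost:27017")["mydb"]["events"]

# Ten small "rows" that arrive together, again with short field names.
rows = [{"s": i, "v": i * 0.5} for i in range(10)]

# One bundled document means one insert and one new _id index entry
# instead of ten of each.
coll.insert_one({"t": datetime.now(timezone.utc), "rows": rows})
```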
Upvotes: 3