Michelle

Reputation: 53

Node.js log service design

I need to write a log service using Node.js and MongoDB. It receives at least 10,000 HTTP requests per second, and I can't lose data. I'm running multiple servers behind a load balancer, and each server uses the cluster module to scale out. Does anyone have an idea for not losing data? I thought maybe I could save the requests in a queue (SQS) and, at a fixed interval, read the messages and insert them into the DB in bulk; if the insert fails, the messages stay in the queue, so I won't lose data. Can all servers read from and write to the same queue? Does anyone have a better idea? Thanks.

Upvotes: 2

Views: 78

Answers (1)

rdegges

Reputation: 33824

This is a pretty vague question, but you're thinking along the right lines.

If your main goal is to NOT lose log data while maintaining high throughput, you should take any incoming data, dump it into a queue, and return a successful response as fast as possible.

This will minimize the amount of time you spend processing each request, increase your application throughput, and generally improve reliability.
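For example, a minimal sketch of that ingest endpoint with Express and the AWS SDK might look like this (the queue URL, region, and route are all placeholders, not anything from your setup):

```javascript
// Hypothetical ingest endpoint: accept the log entry, enqueue it,
// and respond immediately -- no DB work on the request path.
const express = require('express');
const AWS = require('aws-sdk');

const sqs = new AWS.SQS({ region: 'us-east-1' }); // assumed region
const QUEUE_URL = process.env.LOG_QUEUE_URL;      // assumed env var

const app = express();
app.use(express.json());

app.post('/logs', (req, res) => {
  sqs.sendMessage(
    { QueueUrl: QUEUE_URL, MessageBody: JSON.stringify(req.body) },
    (err) => {
      if (err) return res.status(500).end(); // enqueue failed: let the client retry
      res.status(202).end();                 // accepted: processing happens later
    }
  );
});

app.listen(3000);
```

Returning 202 here keeps the request path free of any database work; the only thing a request ever waits on is the enqueue.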

Once you've got your data into a queueing system like SQS, you can most definitely run a background process that simply does bulk DB inserts. (And yes, all of your servers can safely read from and write to the same SQS queue; it's built for many concurrent producers and consumers.)
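A sketch of that background worker, assuming a `logs` collection in MongoDB and the same queue as above (connection strings and names are placeholders), could be:

```javascript
// Hypothetical drain loop: read up to 10 messages, bulk-insert them,
// and delete them from the queue only after the insert succeeds.
const AWS = require('aws-sdk');
const { MongoClient } = require('mongodb');

const sqs = new AWS.SQS({ region: 'us-east-1' });
const QUEUE_URL = process.env.LOG_QUEUE_URL;

async function drain(logs) {
  const { Messages } = await sqs
    .receiveMessage({ QueueUrl: QUEUE_URL, MaxNumberOfMessages: 10, WaitTimeSeconds: 20 })
    .promise();
  if (!Messages || Messages.length === 0) return;

  // If insertMany throws, we skip the delete below and the messages
  // reappear after the visibility timeout -- nothing is lost.
  await logs.insertMany(Messages.map((m) => JSON.parse(m.Body)));

  await sqs
    .deleteMessageBatch({
      QueueUrl: QUEUE_URL,
      Entries: Messages.map((m) => ({ Id: m.MessageId, ReceiptHandle: m.ReceiptHandle })),
    })
    .promise();
}

async function main() {
  const client = await MongoClient.connect(process.env.MONGO_URL);
  const logs = client.db('logging').collection('logs');
  for (;;) await drain(logs).catch(console.error); // keep looping; failed batches retry
}

main();
```

One caveat with this pattern: standard SQS queues deliver at least once, so the worker may occasionally see a message twice. Either tolerate duplicate log rows or make the insert idempotent (for example, by keying on the SQS MessageId).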

If your log entries are under 64 KB, you might want to consider a solution like DynamoDB for storing your resulting log data. It's incredibly quick at inserts, has very low latency (since it runs on AWS, just like SQS), and can be scaled to handle the throughput easily.
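If you go that route, the same worker can write to DynamoDB instead of Mongo; a sketch using the DocumentClient (the table name and key are assumptions, and each entry is assumed to already carry the table's `id` key):

```javascript
// Hypothetical DynamoDB variant: batch-write log entries instead of
// inserting into Mongo. A table named 'logs' with partition key 'id'
// is assumed here.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

async function writeLogs(entries) {
  // BatchWriteItem accepts at most 25 items per call.
  await docClient
    .batchWrite({
      RequestItems: {
        logs: entries.map((entry) => ({ PutRequest: { Item: entry } })),
      },
    })
    .promise();
}
```

Note that `batchWrite` can return `UnprocessedItems` under throttling; for the no-data-loss goal, those need to be retried (or the corresponding SQS messages left undeleted until everything lands).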

Upvotes: 2
