Reputation: 3590
I'm quite new to Node.js and I need to build an application that receives a payload of UDP packets and processes them.
I'm talking about more than 400 messages per second, which works out to around 200,000 messages per minute.
I wrote code to set up a UDP server (adapted from the docs here: http://nodejs.org/api/all.html#all_udp_datagram_sockets), but it's losing around 5% of the packets.
What I really need is a server that gets each packet and hands it off to another worker to process the message. But threading in Node.js looks like a nightmare.
This is my code as it stands:
var dgram = require("dgram");
var fs = require("fs");

// Write every received datagram to a file on disk.
var stream = fs.createWriteStream("received.json", {
  flags: "w",
  encoding: "utf8",
  mode: 0o666 // file permissions (0666 in the legacy octal syntax)
});

var server = dgram.createSocket("udp4");

server.on("message", function (msg, rinfo) {
  console.log("server got: " + msg + " from " +
              rinfo.address + ":" + rinfo.port);
  stream.write(msg);
});

server.on("listening", function () {
  var address = server.address();
  console.log("server listening " +
              address.address + ":" + address.port);
});

server.bind(41234);
// server listening 0.0.0.0:41234
Upvotes: 13
Views: 24118
Reputation: 8188
A subtle tip here. Why did you use UDP for a stream? You need TCP for streams. The UDP protocol sends datagrams: discrete, self-contained messages. It will not break them apart on you behind the scenes; what you send is what you receive with UDP. (IP fragmentation is a different issue that I'm not addressing here.) You don't have to worry about re-assembling a stream on the other side, and that's one of the main advantages of using UDP instead of TCP.

Also, if you are going localhost to localhost, you don't have to worry about losing packets to network hiccups. You can still lose packets by overflowing the network stack's buffers, though, so give yourself big ones if you are doing high-speed data transfer.

So forget about the stream and just use UDP send:
var udp_server = dgram.createSocket({
  type: 'udp4',
  reuseAddr: true,
  recvBufferSize: 20000000 // <<== mighty big buffers
});

udp_server.send("Badabing, badaboom", remote_port, remote_address);
Go was developed by Google to deal with the proliferation of languages that occurs in modern tech shops (it's crazy, I agree). I can't use it because its culture and design prohibit exceptions, which are the most important feature modern languages have for removing the huge amount of clutter that old-fashioned error handling adds. Other than that it's fine, but that's a show-stopper for me.
Upvotes: 0
Reputation: 1455
I wrote a SOAP/XML forwarding service with a similar structure and found that the data would come in two packets. I had to update my code to detect the two halves of a message and put them back together. This payload-size issue may be more of an HTTP problem than a UDP one, but my suggestion is to add logging that writes out everything you receive, then go over it with a fine-tooth comb. It looks like you are logging what you get now, but you may have to dig into the 5% you are losing.

How do you know it's 5%? If you send that traffic again, will it always be 5%? Are the same messages always lost?

I built a UDP server for VoIP/SIP call data using Ruby and EventMachine, and so far things have been working well. (I'm curious about your test approach, though; I did everything over netcat or a small Ruby client, and I never did 10k messages.)
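To put a hard number on "are the same messages always lost", one option (a sketch of my own, not code from the original service; `makeLossCounter` and the sequence-number scheme are hypothetical) is to have the test sender stamp each datagram with a sequence number and have the receiver record which numbers arrived:

```javascript
// Loss-measurement sketch: each test datagram carries a sequence number;
// the receiver records the numbers it sees so gaps (lost packets) and
// duplicates can be counted afterwards.
function makeLossCounter(total) {
  var seen = new Set();
  return {
    record: function (msg) {
      var seq = parseInt(msg.toString(), 10);
      if (!Number.isNaN(seq)) seen.add(seq);
    },
    report: function () {
      var lost = total - seen.size;
      return { received: seen.size, lost: lost, lossPct: (100 * lost) / total };
    }
  };
}

// Example: pretend 10 datagrams were sent and two went missing.
var counter = makeLossCounter(10);
[0, 1, 2, 4, 5, 6, 8, 9].forEach(function (seq) {
  counter.record(Buffer.from(String(seq)));
});
console.log(counter.report()); // { received: 8, lost: 2, lossPct: 20 }
```

Rerun the same traffic a few times: a stable percentage with the *same* missing sequence numbers points at the sender or a size limit, while random gaps point at buffer overflow on the receiver.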
Upvotes: 0
Reputation: 10413
You are missing some concepts: Node.js is not multi-threaded in the sense you mean; requests are handled on a single event loop. No other thread exists, so no context switches happen. In a multi-core environment, you can create a cluster via Node's cluster module; I have a blog post about this here.
You set the parent process to fork child processes, and ONLY the child processes should bind to a port. Your parent process will handle the load balancing between the children.
Note: in my blog post I wrote i < os.cpus().length / 2;
but it should be i < os.cpus().length;
Upvotes: 3