daphshez

Reputation: 9638

Monitor UDP packet loss in Windows Server, Java

My Java application receives data through UDP. It uses the data for an online data mining task, which means it is not critical to receive each and every packet; that is what makes the choice of UDP reasonable in the first place. Also, the data is transferred over a LAN, so the physical network should be reasonably reliable. In any case, I have no control over the choice of protocol or the data included.

Still, I am concerned about packet loss that may arise from overload and long processing time of the application itself. I would like to know how often these things happen and how much data is lost.

Ideally I am looking for a way to monitor packet loss continuously in the production system. But a partial solution would also be welcome.

I realize it is impossible to always know about UDP packet loss (without control over the packet contents). I was thinking of something along the lines of counting packets received by the OS but never delivered to the application; or maybe some clever solution inside the application, like a fast reading thread that drops data when its consumer is busy (see the sketch below).
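Here is a minimal sketch of that second idea: a dedicated reader thread handing packets off through a bounded queue and counting every drop. The class name, queue capacity, and buffer size are placeholders, not part of my actual application:

    import java.io.IOException;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.util.Arrays;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.atomic.AtomicLong;

    public class DroppingUdpReader implements Runnable {
        private final DatagramSocket socket;
        // Bounded hand-off queue; the capacity is a placeholder to tune.
        private final BlockingQueue<byte[]> queue =
                new ArrayBlockingQueue<byte[]>(10000);
        private final AtomicLong dropped = new AtomicLong();

        public DroppingUdpReader(int port) throws IOException {
            this.socket = new DatagramSocket(port);
        }

        public void run() {
            byte[] buf = new byte[65535]; // large enough for any UDP payload
            while (!Thread.currentThread().isInterrupted()) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                try {
                    socket.receive(packet); // keep this loop as fast as possible
                } catch (IOException e) {
                    break;
                }
                byte[] copy = Arrays.copyOf(packet.getData(), packet.getLength());
                if (!queue.offer(copy)) {      // non-blocking: never stall the reader
                    dropped.incrementAndGet(); // consumer was busy; count the loss
                }
            }
        }

        public BlockingQueue<byte[]> getQueue() { return queue; }
        public long getDroppedCount() { return dropped.get(); }
    }

The ratio of getDroppedCount() to packets received would then give a continuous, application-level loss figure, though only for drops inside my own process.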

We are deploying under Windows Server 2003 or 2008.
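For the OS-level side, one partial measure I considered is polling the system-wide UDP "Receive Errors" counter that netstat -s -p udp reports on Windows. A minimal sketch, assuming English-locale output (the label text is localized) and keeping in mind that the counter covers the whole machine, not just my socket:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class UdpReceiveErrors {
        // Returns the system-wide UDP "Receive Errors" counter,
        // or -1 if the line was not found in the netstat output.
        public static long readReceiveErrors() throws IOException {
            Process p = new ProcessBuilder("netstat", "-s", "-p", "udp").start();
            BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()));
            try {
                String line;
                while ((line = r.readLine()) != null) {
                    line = line.trim();
                    if (line.startsWith("Receive Errors")) {
                        // Expected shape: "Receive Errors        = 34"
                        return Long.parseLong(
                                line.substring(line.indexOf('=') + 1).trim());
                    }
                }
            } finally {
                r.close();
            }
            return -1;
        }

        public static void main(String[] args) throws IOException {
            System.out.println("UDP receive errors: " + readReceiveErrors());
        }
    }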

Upvotes: 1

Views: 3253

Answers (2)

Shane Powell

Reputation: 14148

The problem is that there is no way to tell that you have lost packets if you rely on UDP alone.

If you need to know this type of information, you need to build it into the format that you layer on top of UDP (like the TCP Sequence Number). If you do that and the format is simple, then you can easily create filters in Microsoft's NetMon or Wireshark to log and track that information.

Also note that the TCP Sequence Number approach helps to detect the out-of-order packets that may occur when using UDP.
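As a concrete illustration, here is a minimal receiver-side sketch, assuming a hypothetical format in which the sender prefixes each payload with an 8-byte big-endian sequence number (not something your existing data provides):

    import java.nio.ByteBuffer;

    public class SequenceTracker {
        private long expected = -1; // next expected sequence number; -1 = not started
        private long lost = 0;
        private long outOfOrder = 0;

        // Feed every received datagram. Assumes the (hypothetical) sender
        // prefixes each payload with an 8-byte big-endian sequence number.
        public void onDatagram(byte[] data) {
            long seq = ByteBuffer.wrap(data).getLong(); // reads the first 8 bytes
            if (expected == -1) {
                expected = seq + 1;       // first packet seen
            } else if (seq == expected) {
                expected++;               // in order
            } else if (seq > expected) {
                lost += seq - expected;   // gap: count the missing packets
                expected = seq + 1;
            } else {
                outOfOrder++;             // late arrival, previously counted as lost
                if (lost > 0) {
                    lost--;               // correct the loss estimate
                }
            }
        }

        public long getLost() { return lost; }
        public long getOutOfOrder() { return outOfOrder; }
    }

A filter on that same sequence field in NetMon or Wireshark could then cross-check these counters against what actually arrived on the wire.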

Upvotes: 2

Kekoa

Reputation: 28250

If you are concerned about packet loss, use TCP.

That's one reason why it was invented.

Upvotes: 0
