LPalmer

Reputation: 256

How can I speed up Java DatagramSocket performance?

I'm using the Java DatagramSocket class to send UDP datagrams to an endpoint. The datagrams must arrive at the endpoint at 60 ms intervals.

I'm finding that DatagramSocket.send() can often take more than 1 ms (close to 2) to package and send packets no greater than 56 bytes. This causes my packets to be delivered at 62 ms intervals rather than 60 ms intervals.

This is on a Windows Vista machine. Here is how I'm measuring the time:

    DatagramPacket d = new DatagramPacket(out, out.length, address, port);
    long nanoTime = System.nanoTime();
    socket.send(d);
    long diff = System.nanoTime() - nanoTime;
    System.out.println( out.length + " in " + diff + "ms." );

Does anyone have tips or tricks to speed up this process?

Upvotes: 4

Views: 9036

Answers (9)

asdf

Reputation: 1

You are measuring with System.nanoTime(), so the result is in nanoseconds, not milliseconds, even though the printout says "ms":

    long diff = System.nanoTime() - nanoTime;
    System.out.println( out.length + " in " + diff + "ms." );
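
For example, a small sketch of a corrected printout, reusing the socket, d and out variables from the question and dividing the nanosecond difference by 1,000,000 to get milliseconds:

    long start = System.nanoTime();
    socket.send(d);
    long diffNanos = System.nanoTime() - start;
    // nanoTime() returns nanoseconds; divide by 1e6 to print milliseconds with a fraction
    System.out.println(out.length + " bytes in " + (diffNanos / 1000000.0) + " ms");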

Upvotes: 0

Tchakabam

Reputation: 505

I had the same problem as you and found a solution. To make a long story short, here it is:

Look for the Java class ScheduledThreadPoolExecutor; it is in the Java 5 JDK/JRE. (As I just found out, I can only post one link, otherwise I would have pointed to the Oracle Javadoc as well.)

ScheduledThreadPoolExecutor schedThPoolExec = new ScheduledThreadPoolExecutor(1);   

/*
 * Queue a byte buffer for sending in equal segments on the UDP port,
 * with an inter-packet delay between segments.
 */
public void send(byte[] data, String destinationHost, int destinationPort, double interPacketDelayMs) {

        long interDelayNanos = (long) ( interPacketDelayMs * 1000000.0 );

        schedThPoolExec.scheduleAtFixedRate( new SendPacketsTimerTask(data, destinationHost, destinationPort), 0, interDelayNanos , TimeUnit.NANOSECONDS);

}

/*
 * Runnable that sends the queued buffer, one PKT_SIZE segment per scheduled run.
 */

class SendPacketsTimerTask implements Runnable {

    int offset = 0;
    byte[] buffer;
    String host;
    int port;

    public SendPacketsTimerTask(byte[] buffer, String destinationHost, int destinationPort) {

        this.buffer = buffer;
        host = destinationHost;
        port = destinationPort;
    }

    @Override
    public void run() {

        // PKT_SIZE and socket are fields of the enclosing class (not shown in this snippet)
        if (offset + PKT_SIZE < buffer.length) {

            // copy the next segment of the queued buffer into a packet-sized array
            byte[] tmp_pkt_buffer = new byte[PKT_SIZE];
            System.arraycopy(buffer, offset, tmp_pkt_buffer, 0, PKT_SIZE);

            try {
                //send packet       
                socket.send( new DatagramPacket(tmp_pkt_buffer, tmp_pkt_buffer.length, InetAddress.getByName(host), port) );
                //increment offset
                offset += tmp_pkt_buffer.length;



            } catch (UnknownHostException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            } 

        }
    }
}

Along with the TimerTask class (as already mentioned above) you can schedule an event periodically. Now you just need to write the task that sends your messages, or, as in my case, a buffer of data.

My problem was actually that I am handling real-time media streams and need a throughput of 15 Mbit/s and more (video data), which brings you down to inter-packet delays of about 0.5 ms. That is where the granularity of Thread.sleep becomes the problem: it takes nanosecond arguments, true, but it still has millisecond granularity in practice, so I was stuck at a sending rate of about 6 Mbit/s. When I checked out the Timer class I thought I had found my solution, only to find that it could not handle my short execution periods either. While searching for people with similar problems I found an article that was very helpful: instead of the Timer class you can use the ScheduledThreadPoolExecutor shown above, which is backed by native code and uses the full performance of your system to run the send method periodically with the highest resolution possible.

Note: The general opinion (also at my employer) that Java is "too slow" and "has imprecise timing" for timing-critical applications and high network data throughput has finally been proven WRONG. It WAS true once. Finally, with Java 5, we can achieve the full timing capabilities of the system and thus full application performance. :)

Upvotes: 1

KarlP

Reputation: 5191

Use a Timer, as mentioned by James Van Huis. That way, you will at least get the average frequency correct.

Quote from the javadoc :

If an execution is delayed for any reason (such as garbage collection or other background activity), two or more executions will occur in rapid succession to "catch up." In the long run, the frequency of execution will be exactly the reciprocal of the specified period (assuming the system clock underlying Object.wait(long) is accurate).

Also, to answer your actual, though perhaps slightly misguided, question: reusing a DatagramPacket instance and just setting a new output buffer shaves off a "massive" microsecond on average, on my machine...

    datagram.setData(out);
    socket.send(datagram);

It reduces the load on the GC slightly, so it might be a good idea if you are sending at a high rate.

Upvotes: 2

Javamann

Reputation: 2922

Since you are not using a real-time Java runtime, there is no way to make sure you will always send a packet every 60 ms. I would set up a timer thread that does a 'notify' on two other waiting threads that actually send the packet. You could get by with only one sender thread, but I am sort of anal about having a backup in case there is a problem.
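
A rough sketch of what that scheme could look like (hypothetical names; only one sender thread is shown, a backup thread would simply wait on the same monitor):

    final Object tick = new Object();

    // sender thread: waits for a notify, then does the actual send,
    // so a slow send() never delays the next timer tick
    Thread sender = new Thread(new Runnable() {
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                synchronized (tick) {
                    try {
                        tick.wait();          // wait for the timer's notify
                    } catch (InterruptedException e) {
                        return;
                    }
                }
                // socket.send(...) goes here
            }
        }
    });
    sender.start();

    // timer thread: just wakes the sender(s) every 60 ms
    Timer timer = new Timer(true);
    timer.scheduleAtFixedRate(new TimerTask() {
        public void run() {
            synchronized (tick) {
                tick.notifyAll();             // wake the sender (and any backup)
            }
        }
    }, 0, 60);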

Upvotes: 0

erickson

Reputation: 269857

How about sending your packets at 58 millisecond intervals?

No matter how you optimize (and there really aren't many opportunities to do so; using channel-oriented NIO will do the same work), some time will be required to send data, and there is likely to be some variability in it. If precise timing is required, you need a strategy that accounts for the transmission time.

Also, a note about the measurement: be sure not to measure the delay until several thousand iterations have been performed. This gives the optimizer a chance to do its work and gives a more representative timing.


At one time, the timer resolution on Windows was poor. However, 1 millisecond resolution is now common. Try the following test to see how precise your machine is.

  public static void main(String... argv)
    throws InterruptedException
  {
    final int COUNT = 1000;
    long time = System.nanoTime();
    for (int i = 0; i < COUNT; ++i) {
      Thread.sleep(57);
    }
    time = System.nanoTime() - time;
    System.out.println("Average wait: " + (time / (COUNT * 1000000F)) + " ms");
  }

On my Windows XP machine, the average wait time is 57.7 ms.

Upvotes: 0

Mr. Will

Reputation: 2308

If you send the packets out at 60 ms intervals then, theoretically, they would arrive at 60 ms intervals at the destination, but this is not guaranteed. Once the packets hit the link they are at the mercy of the network, which may delay them behind other traffic or even drop them along the routed path.

Is there a reason the packets must be received exactly 60ms apart? If so, there are other protocols that could help you achieve this.

Upvotes: 0

James Van Huis

Reputation: 5571

You can use the Timer class to schedule an event.

    Timer timer = new Timer();
    TimerTask task = new TimerTask() {
        public void run() {
            //send packet here
        }};
    timer.scheduleAtFixedRate(task, 0, 60);

This will create a recurring event every 60 ms that executes the "run" method. All things remaining equal, the packet should hit the wire every 60 ms (although the first packet will be delayed by some amount, and garbage collection, other tasks, etc. may slightly delay individual runs).

Upvotes: 6

James

Reputation: 2066

Besides the obvious and smart-alecky response of "wait only 59 ms," there isn't a whole lot you can actually do. Any operation you perform will take some amount of time, and that time is not likely to be consistent. As such, there is no way to guarantee that your packets will be delivered at precisely 60 ms intervals.

Remember that it takes time to wrap your tiny little 56 byte message in the headers needed for the UDP and IP layers and still more time to shunt it out to your network card and send it on its way. This adds another 8 bytes for the UDP layer, 20 for the IP layer, and still more for whatever the link layer needs. There is nothing you can do to avoid this.
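
For the 56-byte payload in the question, that works out to roughly 56 + 8 (UDP header) + 20 (IPv4 header) = 84 bytes before framing; on Ethernet, the 14-byte header and 4-byte frame check sequence bring it to around 102 bytes on the wire.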

Also, since you are using UDP, there is no way to guarantee that your packets actually arrive, or, if they do, that they arrive in order. TCP can make those guarantees, but neither protocol can guarantee that packets arrive on time. In particular, network congestion may slow down your data en route to the destination, making it late even relative to the rest of your data. It is therefore unreasonable to try to use one application to control another remotely at precise intervals; consider yourself lucky if your signals actually arrive within 2 ms of when you want them to.

Upvotes: 1

Nat

Reputation: 9951

You're seeing the time taken to copy the data from user-space into kernel space. It takes even longer to send through the UDP, IP and Ethernet layers and it can take a variable amount of time for a datagram to cross the physical network to its destination.

Assuming you have a network that exhibits no jitter (variance in per-packet transmission time) and your process is running at real-time priority, and nothing else is competing with it for the CPU...

You need to call send() every 60 ms, no matter how long the send() method takes to execute. You cannot simply wait 60 ms between calls: measure how long the body of your loop takes (send() and whatever else) and subtract that from 60 ms to get the wait time.
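
A sketch of that approach, assuming a 60 ms period and the socket and packet from the question (the sendLoop name and the endless loop are just for illustration):

    // Keep the deadline fixed and sleep only for whatever is left of each 60 ms period.
    void sendLoop(DatagramSocket socket, DatagramPacket packet)
            throws IOException, InterruptedException {
        final long periodNanos = 60L * 1000000L;
        long next = System.nanoTime();
        while (true) {
            socket.send(packet);                     // however long send() takes...
            next += periodNanos;                     // ...the next deadline does not move
            long sleepMillis = (next - System.nanoTime()) / 1000000L;
            if (sleepMillis > 0) {
                Thread.sleep(sleepMillis);           // wait only for the remaining time
            }
        }
    }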

Upvotes: 4
