Neha Sharma

Reputation: 153

Optimizing incoming UDP broadcast in Linux

Environment

Application

Architecture

Problem

Despite a tolerable application latency of around 4 microseconds, we are not achieving the desired performance. We believe this is due to network latency.

Tuning steps used

Question:

  1. We observed that application performance deteriorated after we increased the values of the above-mentioned parameters. Which parameters should be optimized, and what are the recommended values? A guide to optimizing incoming broadcast traffic would be appreciated.
  2. Is there a way to measure network latency more accurately in an environment like this? Note that the UDP sender is an external entity (an exchange).
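As a baseline for latency measurement, one option (a sketch, not exchange-accurate) is to stamp a send time into the packet payload and compare it against the receive time. The loopback probe below demonstrates the mechanics; measuring one-way latency from a real external sender additionally requires synchronized clocks (e.g. PTP) or hardware timestamping.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class UdpLatencyProbe {
    public static void main(String[] args) throws Exception {
        // Receiver bound to an ephemeral port on loopback.
        DatagramSocket receiver = new DatagramSocket(0, InetAddress.getLoopbackAddress());
        DatagramSocket sender = new DatagramSocket();

        // Stamp the send time (System.nanoTime) into the payload.
        long sendNanos = System.nanoTime();
        byte[] payload = ByteBuffer.allocate(Long.BYTES).putLong(sendNanos).array();
        sender.send(new DatagramPacket(payload, payload.length,
                receiver.getLocalAddress(), receiver.getLocalPort()));

        byte[] buf = new byte[Long.BYTES];
        DatagramPacket pkt = new DatagramPacket(buf, buf.length);
        receiver.receive(pkt);
        long recvNanos = System.nanoTime();

        // Recover the stamped send time and compute the loopback delay.
        long stamped = ByteBuffer.wrap(pkt.getData(), 0, pkt.getLength()).getLong();
        long oneWayNanos = recvNanos - stamped;
        System.out.println("loopback latency (ns): " + oneWayNanos);

        sender.close();
        receiver.close();
    }
}
```

On loopback both timestamps come from the same clock, so the measurement is consistent; across hosts the same payload-stamping scheme only works once the clocks are disciplined.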

Thanks in advance

Upvotes: 1

Views: 1752

Answers (1)

Maxim Egorushkin

Reputation: 136296

It is not clear what and how you measure.

You mention that you are receiving UDP, so why are you tuning the TCP buffer size?

Generally, increasing incoming socket buffer sizes may help you with packet loss on a slow receiver, but it will not reduce latency.
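For illustration, requesting a larger UDP receive buffer from Java might look like the sketch below. Note that the kernel caps the granted size at `net.core.rmem_max`, so it is worth reading back what was actually granted; and again, this only helps against drops, not latency.

```java
import java.net.DatagramSocket;

public class RcvBufDemo {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            // Request a 4 MiB receive buffer (SO_RCVBUF).
            socket.setReceiveBufferSize(4 * 1024 * 1024);
            // The kernel may grant less, capped by net.core.rmem_max.
            int granted = socket.getReceiveBufferSize();
            System.out.println("granted receive buffer: " + granted + " bytes");
        }
    }
}
```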

You may like to find out more about bufferbloat:

Bufferbloat is a phenomenon in packet-switched networks, in which excess buffering of packets causes high latency and packet delay variation (also known as jitter), as well as reducing the overall network throughput. When a router device is configured to use excessively large buffers, even very high-speed networks can become practically unusable for many interactive applications like voice calls, chat, and even web surfing.


You also use Java for a low-latency application. People normally fail to achieve these latencies with Java, one of the major reasons being the garbage collector. See Quantifying the Performance of Garbage Collection vs. Explicit Memory Management for more details:

Comparing runtime, space consumption, and virtual memory footprints over a range of benchmarks, we show that the runtime performance of the best-performing garbage collector is competitive with explicit memory management when given enough memory. In particular, when garbage collection has five times as much memory as required, its runtime performance matches or slightly exceeds that of explicit memory management. However, garbage collection’s performance degrades substantially when it must use smaller heaps. With three times as much memory, it runs 17% slower on average, and with twice as much memory, it runs 70% slower. Garbage collection also is more susceptible to paging when physical memory is scarce. In such conditions, all of the garbage collectors we examine here suffer order-of-magnitude performance penalties relative to explicit memory management.

People doing HFT using Java often turn off garbage collection completely and restart their systems daily.
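One way to make "no GC on the hot path" concrete (a sketch, assuming all buffers can be sized up front) is to pre-allocate everything before the receive loop and then run allocation-free. To turn the collector off entirely, the JVM can additionally be started with the no-op Epsilon collector (JDK 11+): `java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC -Xmx16g App` — the process must then be restarted before the heap fills, which matches the daily-restart practice above.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;

// Allocation-free receive loop sketch: the buffer is created once,
// before the hot path, and reused for every datagram, so steady-state
// operation produces no garbage for the collector to chase.
public class NoGcReceiveLoop {
    public static void main(String[] args) throws Exception {
        try (DatagramChannel ch = DatagramChannel.open()) {
            ch.bind(new InetSocketAddress(0));
            ch.configureBlocking(false);
            ByteBuffer buf = ByteBuffer.allocateDirect(2048); // reused, never reallocated

            // Hot path: poll without allocating; receive() returns null
            // when no datagram is pending on a non-blocking channel.
            for (int i = 0; i < 3; i++) {
                buf.clear();
                ch.receive(buf);
            }
            System.out.println("polled without allocating");
        }
    }
}
```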

Upvotes: 1
