user2104560

increasing UDP buffer programmatically - does it make sense?

Good day everybody!
I'm developing a Java UDP server. I send a lot of data to my server and lose some of the incoming packets due to a small receive buffer size. Now I'm setting the receive buffer like this:

DatagramChannel serverChannel = DatagramChannel.open(); // channel must be opened before options can be set
serverChannel.setOption(StandardSocketOptions.SO_RCVBUF, 1024 * 1024 * 10); // request a 10 MB receive buffer


Does it make sense? This code didn't solve the packet loss problem. My operating system is 64-bit Windows 7.

One more question - my network controller's receive buffer property is 512 by default - is it bytes or megabytes? Is it necessary to increase this property manually if it's 512 bytes? I think increasing the buffer programmatically actually increases the operating system's buffer, but this doesn't make sense, because initially UDP packets "come" to my network card.

Upvotes: 1

Views: 2833

Answers (3)

Stephen C

Reputation: 719239

Does it make sense?

Kind of. However, nothing you can do can guarantee that UDP packets won't get lost. UDP is an inherently unreliable protocol. Packets will get lost if your application can't pull them from the socket fast enough to keep up, OR if there is a network error.

If you need data to be sent to your server reliably, you should probably use TCP ...

My network controller's receive buffer property is 512 by default - is it bytes or megabytes?

I don't know, but I suspect neither. It is more likely to be kilobytes.


I tried to increase the controller's buffer size - the OS says 512 is the maximum possible.

Well then, you won't be able to increase it to 10MB.

So, if there is no network error, my packets are lost only when I handle them too slowly.

Not exactly. Packets can get lost / dropped in many places for a variety of reasons. Some of these are "normal behaviour", not "network errors"; e.g. packet dropping when a router / bridge / gateway gets congested.

Increasing the buffer size looked like it would be enough to solve the problem, but surprisingly it fails.

Actually, it is not surprising. Loss of packets can be normal behaviour; see above.

Maybe I should receive packets in one thread and handle them in another, even though I use Java NIO, which is devoted to developing "one thread" servers.

It might help reduce packet loss in the OS and networking interface. However, you end up buffering the data in application memory, and that can cause worse problems ... if the application can't keep up in the long term.

And besides, it does not deal with packets that get lost / dropped for other reasons.
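
That said, if you want to experiment with the two-thread design, here is a minimal sketch of one way to do it (the port number, queue capacity, and class name are illustrative, not from the question). The receiver thread drains the socket as fast as it can into a bounded queue, so that if the handler falls behind, packets are dropped at a point the application controls rather than by exhausting memory:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class TwoThreadUdpServer {
    public static void main(String[] args) throws Exception {
        // Bounded queue: if the handler falls behind, offer() fails and we
        // drop deliberately instead of letting memory grow without bound.
        BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(10_000);

        DatagramChannel channel = DatagramChannel.open();
        channel.bind(new InetSocketAddress(9999)); // illustrative port

        Thread handler = new Thread(() -> {
            try {
                while (true) {
                    process(queue.take()); // blocks until a packet is available
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        handler.start();

        // Receiver loop (main thread): drain the socket as fast as possible.
        ByteBuffer buf = ByteBuffer.allocate(65536); // big enough for any UDP payload
        while (true) {
            buf.clear();
            channel.receive(buf); // blocking receive
            buf.flip();
            byte[] copy = new byte[buf.remaining()];
            buf.get(copy);
            if (!queue.offer(copy)) {
                // queue full: the drop happens here, where it can at least be counted
            }
        }
    }

    private static void process(byte[] packet) {
        // application-specific handling goes here
    }
}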


The bottom line is that you will NOT be able to achieve reliable lossless transmission using UDP. Packet loss is inevitable, and you are WASTING YOUR TIME trying to avoid it.

You either have to design your application to cope with lost datagrams, or switch to using TCP or some other protocol which offers reliable data transfer.

Upvotes: 0

user207421

Reputation: 310980

I send a lot of data to my server and lose some of the incoming packets due to a small receive buffer size.

You don't know that. There are many possible causes.

Now I'm setting the receive buffer like this:

DatagramChannel serverChannel = DatagramChannel.open(); // channel must be opened before options can be set
serverChannel.setOption(StandardSocketOptions.SO_RCVBUF, 1024 * 1024 * 10); // request a 10 MB receive buffer

Does it make sense?

Not really. That's an enormous buffer, and the operating system may truncate it to a more reasonable size. Try getting the option after you set it and see what the actual size is.
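
For example, a quick check along these lines (a minimal, self-contained sketch; the requested size is the 10 MB figure from the question) prints what the OS actually granted:

import java.net.StandardSocketOptions;
import java.nio.channels.DatagramChannel;

public class RcvbufCheck {
    public static void main(String[] args) throws Exception {
        try (DatagramChannel channel = DatagramChannel.open()) {
            channel.setOption(StandardSocketOptions.SO_RCVBUF, 1024 * 1024 * 10); // ask for 10 MB
            // The OS may silently clamp the request; read the option back to see what was granted.
            int actual = channel.getOption(StandardSocketOptions.SO_RCVBUF);
            System.out.println("Requested " + (1024 * 1024 * 10) + " bytes, got " + actual);
        }
    }
}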

This code didn't solve the packet loss problem.

Nobody said it would. You just assumed it.

One more question - my network controller's receive buffer property is 512 by default - is it bytes or megabytes?

Nobody can answer that until you tell us what NIC you are using, but you should be able to discover that for yourself from the manufacturer's data.

Is it necessary to increase this property manually if it's 512 bytes?

Not only unnecessary but impossible.

I think increasing the buffer programmatically actually increases the operating system's buffer

If you mean that SO_RCVBUF controls the socket receive buffer size, that is correct.

but this doesn't make sense

Yes it does.

because initially UDP packets "come" to my network card.

And from there into the operating system, and from there into the IP stack, and from there to the UDP stack, and from there to the socket receive buffer. It doesn't seem likely that a NIC would have a 512-byte buffer when most path MTUs are up to 1500 bytes, but it does make sense for it to be 512 KB, for example. You can assume it's enough; after all, it does work.

Your basic assumption is flawed. UDP packets can be lost for a variety of reasons: they never got sent; they got dropped by an intermediate router; they got fragmented and not all the fragments made it to the receiver; or the receiver's socket receive buffer was full, which in turn can be because it is too small or because the receiver can't keep up with the sender. Using a gigantic socket receive buffer only addresses one of these issues. If you need reliability, use TCP, or else implement an ACK or NACK mechanism with retries.
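
For illustration, here is a minimal stop-and-wait sketch of the ACK-with-retry idea, written against the blocking java.net API for brevity; the 4-byte sequence header, the timeout, and the retry cap are illustrative assumptions, not part of any standard protocol:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;
import java.net.SocketTimeoutException;
import java.nio.ByteBuffer;

public class StopAndWaitSender {
    private static final int ACK_TIMEOUT_MS = 200; // illustrative timeout
    private static final int MAX_RETRIES = 5;      // illustrative retry cap

    // Sends one payload prefixed with a 4-byte sequence number, resending
    // until the receiver echoes that sequence number back, or gives up.
    static boolean sendReliably(DatagramSocket socket, InetSocketAddress peer,
                                int seq, byte[] payload) throws Exception {
        ByteBuffer out = ByteBuffer.allocate(4 + payload.length);
        out.putInt(seq).put(payload);
        DatagramPacket packet = new DatagramPacket(out.array(), out.capacity(), peer);

        byte[] ackBuf = new byte[4];
        DatagramPacket ack = new DatagramPacket(ackBuf, ackBuf.length);
        socket.setSoTimeout(ACK_TIMEOUT_MS);

        for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
            socket.send(packet);
            try {
                ack.setLength(ackBuf.length); // reset before reuse
                socket.receive(ack);          // wait for the 4-byte ACK
                if (ByteBuffer.wrap(ackBuf).getInt() == seq) {
                    return true; // receiver confirmed this sequence number
                }
            } catch (SocketTimeoutException e) {
                // no ACK within the timeout: fall through and resend
            }
        }
        return false; // could not deliver within MAX_RETRIES
    }
}

The receiving side would echo back the sequence number of each datagram it accepts, and would use the same numbers to discard duplicates caused by retries.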

Upvotes: 2

seand

Reputation: 5296

UDP sends/receives discrete packets; the maximum size is around 65K. If you have more data to send than that, you need to break it into multiple packets. Also, as others have noted, UDP is an unreliable protocol. Packets may disappear, get duplicated, or be received out of order.
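
A minimal sketch of that splitting step (the chunk size and helper are illustrative): each send() call below emits one discrete datagram, and keeping chunks near the typical 1500-byte path MTU also avoids IP fragmentation:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;

public class ChunkedSender {
    // Staying near a typical 1500-byte path MTU avoids IP fragmentation;
    // the exact value here is an illustrative choice.
    private static final int CHUNK_SIZE = 1400;

    static void sendInChunks(DatagramChannel channel, InetSocketAddress target,
                             byte[] data) throws Exception {
        for (int offset = 0; offset < data.length; offset += CHUNK_SIZE) {
            int len = Math.min(CHUNK_SIZE, data.length - offset);
            channel.send(ByteBuffer.wrap(data, offset, len), target); // one datagram per send()
        }
    }
}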

Upvotes: 0
