Syaiful Nizam Yahya

Reputation: 4305

UdpClient rate control in C#

I'm sending multiple UDP packets consecutively to a remote PC. The problem is that if the amount of data is too high, some device somewhere along the channel experiences buffer overflow. I intend to limit/throttle/control the sending rate of the UDP packets. Can somebody give me some guidance on how to find the optimal sending rate/interval?
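For the throttling itself, here is a minimal sketch that paces sends to a fixed target rate. The endpoint, packet size and rate are assumptions for illustration; finding the right rate adaptively is what the answers below deal with.

```csharp
using System;
using System.Diagnostics;
using System.Net.Sockets;
using System.Threading;

class ThrottledSender
{
    static void Main()
    {
        const int packetsPerSecond = 10_000;   // assumed target rate
        const int packetSize = 1024;           // assumed payload size
        var payload = new byte[packetSize];

        using var client = new UdpClient();
        client.Connect("192.168.0.10", 9000);  // hypothetical receiver

        double intervalMs = 1000.0 / packetsPerSecond;
        var sw = Stopwatch.StartNew();
        long sent = 0;

        while (true)
        {
            client.Send(payload, payload.Length);
            sent++;

            // Sleep only when we are ahead of schedule; Thread.Sleep has
            // coarse resolution, so very small intervals get batched implicitly.
            double aheadMs = sent * intervalMs - sw.Elapsed.TotalMilliseconds;
            if (aheadMs > 1)
                Thread.Sleep((int)aheadMs);
        }
    }
}
```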

By the way, please stop suggesting TCP instead of UDP. The objective is not to send data reliably, but to measure maximum throughput.

Upvotes: 2

Views: 3778

Answers (4)

Sam Ginrich

Reputation: 841

Assume the receiver processes data faster than it ever comes in; then only the channel bandwidth is relevant.

Non-adaptive approach to estimating the channel bandwidth:
Reserve a time window, e.g. 500 ms, and send packets carrying a message counter at maximum speed, i.e. the sender simply blocks to push the packets in sequence through the UDP stack. Then count the data sent and collect the packet loss from the receiver. Under the assumption that all packets take the same route and that only the traffic-limiting strategies of gateways cause packet loss, the fraction of packets that arrive should give a good estimate.
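A minimal sender-side sketch of such a burst, assuming a hypothetical receiver at 192.168.0.10:9000 and 1000-byte packets; the receiver's count of distinct counters (not shown here) supplies the loss figure.

```csharp
using System;
using System.Diagnostics;
using System.Net.Sockets;

class BurstProbe
{
    static void Main()
    {
        const int packetSize = 1000;            // assumed payload size
        var packet = new byte[packetSize];

        using var client = new UdpClient();
        client.Connect("192.168.0.10", 9000);   // hypothetical receiver

        var window = TimeSpan.FromMilliseconds(500);
        var sw = Stopwatch.StartNew();
        uint counter = 0;

        while (sw.Elapsed < window)
        {
            // The first 4 bytes carry the message counter so the receiver
            // can count arrivals and detect loss.
            BitConverter.GetBytes(counter).CopyTo(packet, 0);
            client.Send(packet, packet.Length);  // blocks if the local stack is full
            counter++;
        }

        double seconds = sw.Elapsed.TotalSeconds;
        Console.WriteLine($"Sent {counter} packets ({counter * (long)packetSize} bytes) " +
                          $"in {seconds:F3} s = {counter * packetSize / seconds / 1e6:F1} MB/s offered");
        // Compare with the receiver's count of distinct counters to estimate
        // how much of this actually got through.
    }
}
```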

In case the bottleneck is a gateway with a small MTU, which produces additional fragmentation and loss, you can detect this by repeating the test with typical MTU/packet sizes of 500, 1000 and 1500 bytes.

Upvotes: 0

Daniel Mošmondor

Reputation: 19956

Try this then:

  • Start with packets of 1 KB size (for example).
  • For that size, calculate how many packets per second it is OK to send - for example, 1 Gbit Ethernet ≈ 100 MBytes/s of raw bandwidth -> 100,000 packets per second.
  • Create a packet so that the first 4 bytes are a serial number; the rest can be anything - if you are just testing, fill it with zeroes or noise (random data).
  • On the sending side, create packets and push them at the calculated RATE for one second; calculate the time spent and Sleep() the rest of the time, waiting for the next time slot.
  • On the receiving end, gather the packets and look at their serial numbers; if packets are missing, send some info about it back to the sender (over another connection).
  • The sender, on info about lost packets, should do something like RATE = RATE * 0.9 - reduce the sending rate to 90% of the previous one.
  • The sender should gradually increase the rate (say by 1%) every few seconds if it doesn't get any 'lost packets' messages.
  • After some time your RATE will converge to what you wanted in the first place (a sender-side sketch follows this list).
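A minimal sender-side sketch of those steps, assuming a hypothetical receiver at 192.168.0.10:9000, a made-up feedback port 9001 for loss reports, and a starting RATE of 100,000 packets/second:

```csharp
using System;
using System.Diagnostics;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class AdaptiveSender
{
    static void Main()
    {
        const int packetSize = 1024;               // ~1 KB packets as above
        double rate = 100_000;                     // starting packets/second (assumed)
        var packet = new byte[packetSize];

        using var data = new UdpClient();
        data.Connect("192.168.0.10", 9000);        // hypothetical receiver

        using var feedback = new UdpClient(9001);  // back channel for loss reports (assumed port)

        uint serial = 0;
        var sinceIncrease = Stopwatch.StartNew();

        while (true)
        {
            // Send 'rate' packets spread over a one-second slot.
            var slot = Stopwatch.StartNew();
            int toSend = (int)rate;
            for (int i = 0; i < toSend; i++)
            {
                BitConverter.GetBytes(serial++).CopyTo(packet, 0);  // serial in first 4 bytes
                data.Send(packet, packet.Length);
            }

            // Sleep whatever remains of the slot.
            int rest = 1000 - (int)slot.ElapsedMilliseconds;
            if (rest > 0) Thread.Sleep(rest);

            // Any loss report waiting? Back off to 90% of the current rate.
            bool lossReported = false;
            IPEndPoint from = null;
            while (feedback.Available > 0)
            {
                feedback.Receive(ref from);
                lossReported = true;
            }

            if (lossReported)
            {
                rate *= 0.9;
                sinceIncrease.Restart();
            }
            else if (sinceIncrease.Elapsed > TimeSpan.FromSeconds(5))
            {
                rate *= 1.01;                      // creep back up ~1% every few seconds
                sinceIncrease.Restart();
            }
        }
    }
}
```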

Some considerations:

  • If the back connection is TCP, you'll have some overhead there.
  • If the back connection is UDP, packets can be dropped there as well (because you are flooding the channel), and the sender might never learn that packets were dropped.
  • The algorithm above won't solve the missing-data or out-of-order-data issues; it will just measure the throughput.
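As a counterpart to that feedback path, here is a minimal receiver-side sketch, assuming the same 4-byte serial layout and a plain UDP back channel to a hypothetical sender address; as noted above, such reports can themselves be dropped.

```csharp
using System;
using System.Net;
using System.Net.Sockets;

class LossReportingReceiver
{
    static void Main()
    {
        using var data = new UdpClient(9000);     // data port (assumed)
        using var feedback = new UdpClient();
        feedback.Connect("192.168.0.20", 9001);   // hypothetical sender's feedback port

        var any = new IPEndPoint(IPAddress.Any, 0);
        uint expected = 0;
        bool first = true;

        while (true)
        {
            byte[] packet = data.Receive(ref any);
            if (packet.Length < 4) continue;

            uint serial = BitConverter.ToUInt32(packet, 0);

            if (first)
            {
                expected = serial + 1;
                first = false;
                continue;
            }

            if (serial != expected)
            {
                // Gap (or reordering) detected: tell the sender how many
                // serials were skipped so it can reduce its rate.
                long missing = (long)serial - expected;
                byte[] report = BitConverter.GetBytes(missing);
                feedback.Send(report, report.Length);
            }
            expected = serial + 1;
        }
    }
}
```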

Upvotes: 3

Daniel Mošmondor

Reputation: 19956

Despite your request that I not suggest TCP instead of UDP, I have to. In the same paragraph you state that the main purpose of your test is to measure throughput - that is, bandwidth - and the only way to do that properly, without re-inventing the whole TCP stack, is to actually USE the TCP stack.

Large parts of TCP are designed to deal with flow control, and when TCP streams are used you'll get exactly what you need - the maximum bandwidth for the given connection, with ease and without reinventing the wheel.

If this answer doesn't suit you, that probably means you have to re-state your requirements; as they stand, they are in conflict.
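If you take that route, the measurement itself is short. A minimal sketch, assuming a hypothetical sink at 192.168.0.10:9000 that simply reads and discards, and a 5-second run:

```csharp
using System;
using System.Diagnostics;
using System.Net.Sockets;

class TcpThroughputProbe
{
    static void Main()
    {
        var buffer = new byte[64 * 1024];                         // 64 KB writes

        using var client = new TcpClient("192.168.0.10", 9000);   // hypothetical sink
        using NetworkStream stream = client.GetStream();

        var duration = TimeSpan.FromSeconds(5);
        var sw = Stopwatch.StartNew();
        long bytes = 0;

        // TCP's own flow and congestion control pace these writes, so the
        // achieved rate approximates the path's usable bandwidth.
        while (sw.Elapsed < duration)
        {
            stream.Write(buffer, 0, buffer.Length);
            bytes += buffer.Length;
        }

        Console.WriteLine($"{bytes / sw.Elapsed.TotalSeconds / 1e6:F1} MB/s");
    }
}
```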

Upvotes: 1

TomTom

Reputation: 62093

Trial and error. Period.

  • Set up a second connection (UDP or TCP based) that you use ONLY to send control commands.
  • Send stats there about missing packets etc. Then the two sides can decide whether the data rate is too high.
  • Possibly start low, then raise the data rate until you see missing packets.

NEVER (!) assume all packets will arrive. That means you need (!) a way to re-request missing packets. Even under perfect conditions, packets will sometimes get lost.

If loss is OK and only needs to be minimized, a statistical approach is pretty much the only way I see to handle this.
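A sketch of that ramp-up, assuming a made-up protocol on a separate TCP control connection: after each one-second slot the sender writes one request byte and the receiver answers with a 4-byte count of packets it saw. Endpoints, ports and the doubling step are illustrative only.

```csharp
using System;
using System.Diagnostics;
using System.Net.Sockets;
using System.Threading;

class RampProbe
{
    static void Main()
    {
        var payload = new byte[1024];

        using var data = new UdpClient();
        data.Connect("192.168.0.10", 9000);                        // hypothetical data sink

        using var control = new TcpClient("192.168.0.10", 9100);   // hypothetical control channel
        NetworkStream ctl = control.GetStream();

        // Start low and double the rate until the receiver reports loss.
        for (int rate = 1_000; ; rate *= 2)
        {
            var slot = Stopwatch.StartNew();
            for (int i = 0; i < rate; i++)
                data.Send(payload, payload.Length);
            int rest = 1000 - (int)slot.ElapsedMilliseconds;
            if (rest > 0) Thread.Sleep(rest);

            // Ask the receiver how many packets it saw in this slot.
            // (A robust version would loop until all 4 reply bytes arrive.)
            ctl.WriteByte(1);
            var reply = new byte[4];
            ctl.Read(reply, 0, 4);
            int received = BitConverter.ToInt32(reply, 0);

            Console.WriteLine($"offered {rate}/s, received {received}/s");
            if (received < rate)
                break;  // loss appeared: the usable rate is roughly the previous step
        }
    }
}
```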

Upvotes: 2
