Reputation: 1418
This is not a question about how to deal with a stream that returns less data than requested; I will show below how I dealt with that, and comments on it are very welcome.
But first things first:
I use the UWP-based StreamSocket for TCP communication.
StreamSocket strs = GetConnection();
Stream str = strs.InputStream.AsStreamForRead();
Let's say we expect an incoming transmission. The first few bytes tell me, amongst other things, how long the transmission is going to be. I then pass the stream to a method that reads the specified amount of data with a specified buffer size.
// commencing data read:
int ret = 0;
int total = 0;
for(int k = 0; k < size; k += bufferSize) {
int len = Math.Min(bufferSize, size - k);
ret = stream.Read(data, k, len);
if(ret != len)
Debug.WriteLine("Congestion: {0}/{1}", ret, len); // throw?
total += ret;
}
For small transmissions I get no warnings at all. For larger transmissions, however, once the transmission is into about 40 kB, I consistently get less data than requested. So with this implementation I lose data.
So I have to take into account that a blocking read on a network stream may return less than requested at some point. So I implement this solution:
int bs = 4096;
int rpos = 0;
while(rpos < size) {
int len = Math.Min(bs, size - rpos);
int read = stream.Read(data, rpos, len);
if(read < len) {
Debug.WriteLine("Congestion: {0}/{1}.", read, len);
bs /= 2;
}
rpos += read;
}
Again, after about 40 kB I get congestion, but this time I account for it (and as a bonus reduce the buffer size: a poor man's flow control).
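To make the loop robust regardless of buffer size, the usual pattern is to keep calling Read until the requested count has accumulated, treating a zero-byte return as end of stream. A minimal sketch (the ReadExactly name here is mine; newer .NET versions ship an equivalent Stream.ReadExactly):

```csharp
using System;
using System.IO;

static class StreamExtensions
{
    // Keeps calling Read until `count` bytes have accumulated in `buffer`.
    // A zero-byte return from Read signals end of stream, so we throw
    // rather than spin forever.
    public static void ReadExactly(Stream stream, byte[] buffer, int offset, int count)
    {
        int total = 0;
        while (total < count)
        {
            int read = stream.Read(buffer, offset + total, count - total);
            if (read == 0)
                throw new EndOfStreamException(
                    $"Stream ended after {total} of {count} bytes.");
            total += read;
        }
    }
}
```

This also avoids a subtle risk in the loop above: if Read ever returns 0 because the connection was closed, rpos stops advancing and the while loop never terminates.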
This brings me to the actual question:
Why do I need to expect a stream that is based on a TCP socket to return less data than requested on a blocking read?
I was under the impression that TCP has its own flow control already. That would explain why the traffic (bytes/second) increases and then (suddenly) degrades.
But it does not explain why the read does not wait until enough data has arrived. I thought this should be handled one abstraction layer below.
Upvotes: 1
Views: 434
Reputation: 1418
It is noteworthy that the BinaryReader class provides a method, ReadBytes(int count), that supplies the abstraction layer I had a misunderstanding about.
Namely, it will return the requested number of bytes as long as it has not reached the end of the stream:
Return Value Type: System.Byte[]
A byte array containing data read from the underlying stream. This might be less than the number of bytes requested if the end of the stream is reached.
So there is a much simpler solution:
var bReader = new BinaryReader(stream);
byte[] data = bReader.ReadBytes(size);
One should still check for IOException.
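Since ReadBytes still returns a shorter array if the stream ends before count bytes arrive, a defensive wrapper can turn that into an explicit error. A sketch (the ReadTransmission name and the choice of EndOfStreamException are mine, not from the post):

```csharp
using System;
using System.IO;

static class Transmission
{
    // Wraps BinaryReader.ReadBytes with an end-of-stream check.
    public static byte[] ReadTransmission(Stream stream, int size)
    {
        var bReader = new BinaryReader(stream);
        byte[] data = bReader.ReadBytes(size);

        // ReadBytes blocks until `size` bytes arrive or the stream ends,
        // so a short array means the peer closed the connection early.
        if (data.Length < size)
            throw new EndOfStreamException(
                $"Expected {size} bytes, got {data.Length}.");
        return data;
    }
}
```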
Upvotes: 1
Reputation: 239664
I thought this should be handled one abstraction layer below.
Why? The abstraction that TCP offers is a (potentially endless) stream of bytes in both directions. If you want messages, it's up to you to either implement that yourself (as you're doing here) or move to a higher-level abstraction.
I meant the abstraction that a Stream provides. But I seem to have a misunderstanding, as I expected a stream to give me n bytes if I request n bytes.
Unfortunately, that's a misapprehension as well. Whilst some Stream implementations may be able to offer such a guarantee, the abstract Stream class defines the Read method:
The total number of bytes read into the buffer. This can be less than the number of bytes requested if that many bytes are not currently available,
And so NetworkStream takes advantage of this flexibility: it gives you at least one byte, and as much as it conveniently can, once it has some data available.
Whilst it doesn't fit your use case, it may be that some consumers are able to work with whatever data is available - so they may supply a large buffer (in case lots of data is available) but can do something useful with just a single byte. Those consumers will be calling exactly the same API as you are, and its behaviour suits their needs.
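For illustration, such a consumer might look like this generic copy loop (a sketch of mine, not from the answer): it hands each Read a large buffer but processes exactly as many bytes as arrived, so a partial read is not an error at all:

```csharp
using System;
using System.IO;
using System.Text;

static class ChunkConsumer
{
    // Forwards whatever each Read returns, whether that is
    // a single byte or the full buffer.
    public static void Pump(Stream input, Stream output)
    {
        var buffer = new byte[8192];
        int read;
        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            output.Write(buffer, 0, read); // process exactly what arrived
        }
    }

    static void Main()
    {
        // MemoryStream stands in for the network stream here.
        var src = new MemoryStream(Encoding.ASCII.GetBytes("hello"));
        var dst = new MemoryStream();
        Pump(src, dst);
        Console.WriteLine(dst.Length); // prints 5
    }
}
```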
Upvotes: 3