Reputation: 11
I'm trying to make an HTTP server that handles very large requests. To avoid high memory usage I think that reading the connection stream in chunks is the best option.
The problem is that reading from the BufferedReader blocks until it has read 128 bytes. When the stream closes, it finally returns what is in the buffer and works.
How can I read the data that is currently available instead of waiting for the buffer to fill?
My code:
const std = @import("std");

const BUFFER_SIZE = 128;

fn process_chunk(chunk: []u8, size: usize) bool {
    std.log.debug("Readed size: {d}", .{size});
    // Stop once the chunk contains "\r\n" or the read came back short.
    if (std.mem.containsAtLeast(u8, chunk, 1, "\r\n") or size < BUFFER_SIZE) {
        return true;
    }
    return false;
}

fn handleConnection(connection: std.net.Server.Connection) !void {
    var buffer: [BUFFER_SIZE]u8 = undefined;
    const stream_reader = connection.stream.reader().any();
    var reader = std.io.BufferedReader(BUFFER_SIZE, @TypeOf(stream_reader)){ .unbuffered_reader = stream_reader };
    while (true) {
        const bbis = try reader.read(&buffer);
        std.log.debug("Readed size: {d}", .{bbis});
        if (process_chunk(&buffer, bbis)) {
            connection.stream.close();
            return;
        }
    }
}
Complete Output:
debug: Readed size: 128
debug: Readed size: 128
debug: Readed size: 128
debug: Readed size: 128
debug: Readed size: 128
debug: Readed size: 128
debug: Readed size: 128
debug: Readed size: 128
Upvotes: 1
Views: 179
Reputation: 2613
Passing a [128]u8 to .read() is basically saying you want to wait until 128 bytes are available or until the end of stream.
I would instead recommend reading whatever amount of data is useful, based on the protocol you are implementing. For example, if you were reading a stream of 64-bit integers, you would read in chunks of 8 (or better yet, readInt(u64, ...)).
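A minimal sketch of that idea (assuming a Zig 0.13-style std.io; readIntegers is a hypothetical helper, not part of your code):

const std = @import("std");

fn readIntegers(stream: std.net.Stream) !void {
    // The BufferedReader still does chunked reads from the socket;
    // we only ask it for the next 8 bytes, not a full 128.
    var buffered = std.io.bufferedReader(stream.reader());
    const reader = buffered.reader();
    while (true) {
        const value = reader.readInt(u64, .little) catch |err| switch (err) {
            error.EndOfStream => return, // peer closed the connection
            else => return err,
        };
        std.log.debug("value: {d}", .{value});
    }
}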
The actual performance boost from chunking will be handled by the BufferedReader, so your code should just concern itself with how many bytes the next step of processing can handle: the next character or token, some lookahead number of bytes, the next field, etc. The readUntilDelimiter class of functions might be good here.
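For example, a minimal sketch of your handler (assuming a Zig std.io version where readUntilDelimiter is available, e.g. 0.13) that reads one "\r\n"-terminated header line at a time instead of fixed 128-byte chunks:

const std = @import("std");

fn handleConnection(connection: std.net.Server.Connection) !void {
    defer connection.stream.close();

    // The BufferedReader handles the chunked socket reads;
    // the handler only asks for one line at a time.
    var buffered = std.io.bufferedReader(connection.stream.reader());
    const reader = buffered.reader();

    var line_buf: [8192]u8 = undefined;
    while (true) {
        // Returns as soon as a '\n' arrives in the stream
        // (or fails with StreamTooLong if the line does not fit in line_buf).
        const line = reader.readUntilDelimiter(&line_buf, '\n') catch |err| switch (err) {
            error.EndOfStream => return,
            else => return err,
        };
        const trimmed = std.mem.trimRight(u8, line, "\r");
        if (trimmed.len == 0) break; // empty line ends the HTTP header section
        std.log.debug("header line: {s}", .{trimmed});
    }
}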
Upvotes: 0