Reputation: 852
I'm new to Winsock. I am writing an HTTP server and I am aiming to send a large file by reading and sending in chunks. To optimize this process, I attempt to use a nonblocking socket where I read the next chunk from the disk while I send the current one.
My problem now is that I get an FD_WRITE message even when the buffer should be full, so my function deletes the associated data from memory prematurely. I believe this causes my responses to contain less data than they should: send() stops sending prematurely, and the client (which is a well-known one) receives about 70% of the data. When I use blocking sockets it works fine, it's just slower.
I tried using Wget, a simple HTTP client, to get a better idea of what's going on. From what I can see, the data stream thins out whenever I detect WSAEWOULDBLOCK after calling send(). It looks like during those sends, not all the data gets sent.
When I set the sleep time to over 2000 ms after checking for the FD_WRITE message, everything works, since that essentially amounts to using a blocking socket. I also tried times around 100-200 ms, but those fail as well. As it is, the condition checking for FD_WRITE always returns valid before entering the loop.
WSAEVENT event = WSACreateEvent();
const int sendBufferSize = 1024 * 64;
int connectionSpeed = 5; // estimated, in MBytes/s
const int sleepTime = sendBufferSize / (connectionSpeed * 1024 * 1024);
size = 0;
const int bufSize = 1024 * 1024 * 35;
int lowerByteIndex = 0;
int upperByteIndex = bufSize;
size = bufSize;
int totalSIZE = 0;
unsigned char* p;
unsigned char* pt;
clock_t t = std::clock();
p = getFileBytesC(resolveLocalPath(path), size, lowerByteIndex, upperByteIndex);
lowerByteIndex += bufSize;
upperByteIndex += bufSize;
totalSIZE += size;
while (upperByteIndex <= fileSize + bufSize)
{
    int ret = send(socket, (char*)p, size, 0);
    pt = getFileBytesC(resolveLocalPath(path), size, lowerByteIndex, upperByteIndex);
    totalSIZE += size;
    lowerByteIndex += bufSize;
    upperByteIndex += bufSize;
    if (ret == SOCKET_ERROR && WSAGetLastError() == WSAEWOULDBLOCK)
    {
        while (SOCKET_ERROR == WSAEventSelect(socket, event, FD_WRITE))
        {
            Sleep(50);
        }
    }
    Sleep(sleepTime); // should be around 30-50 ms; wait for the buffer to empty
    delete[] p;
    p = pt;
    std::cout << std::endl << (std::clock() - t) / (double)CLOCKS_PER_SEC;
}
send(socket, (char*)p, size, 0);
delete[] p;
std::cout << std::endl << (std::clock() - t) / (double)CLOCKS_PER_SEC;
if (totalSIZE == fileSize) std::cout << std::endl << "finished transfer. UBI=" << upperByteIndex;
else
{
    std::cout << std::endl << "ERROR: finished transfer\r\nBytes read=" << totalSIZE;
}
Sleep(2000);
closeSocket(socket);
Upvotes: 2
Views: 1696
Reputation: 18972
The TransmitFile function exists to solve exactly this problem for you. It does the whole thing entirely in kernel mode, so it's always going to beat a hand-crafted version.
https://msdn.microsoft.com/en-us/library/windows/desktop/ms740565(v=vs.85).aspx
Edit: On Linux there's the somewhat similar sendfile call.
http://linux.die.net/man/2/sendfile
(The ubiquity of web servers helped to motivate OS designers to solve this problem.)
Upvotes: 2
Reputation: 311039
You can't write correct non-blocking send() code without storing the value it returns in a variable: it is the number of bytes actually sent. You can't assume the entire buffer was sent in non-blocking mode.
If send() returns -1 with WSAGetLastError() == WSAEWOULDBLOCK, that is the time to call select(), or WSAEventSelect() if you must, but with a timeout. Otherwise, i.e. if it returns a positive count, you should just advance your offset and decrement your length by the amount sent, and repeat until there is nothing left to send.
Your sleeps are just literally a waste of time.
But I would question the whole approach. Sending on a blocking-mode socket is asynchronous anyway. There is little to be gained by your present approach. Just read the file in chunks and send the chunks in blocking mode.
Upvotes: 2