user2103287

Reputation: 11

Browser delays between individual HTTP GET file requests

I am debugging a web server running on a Microchip embedded platform. The embedded part should not be relevant, with the exception that the firmware source allows me full control of coding all TCP/IP communication.

Especially on Internet Explorer, there are delays of anywhere from 3 to 10 seconds between the individual GET requests needed to render the server's content. On a first visit, when nothing is cached, there are typically about 5 files to retrieve (htm, css, js), so it takes longer than 15 seconds before the user sees the page.

A Wireshark capture shows that it is definitely the client introducing the delays, because the web server responds immediately upon receiving each connection request. After a connection completes and both sides have exchanged their FIN/ACKs, there is a minimum 3-second pause before the client sends the next SYN to connect for the next GET. Each complete connection, from SYN to FIN/ACK, has no issues and takes under half a second.

I verified that each side is ACKing the other side's FIN, because the acknowledgment number of its final ACK packet is incremented accordingly. I even broadened the capture to show all traffic involving the client's MAC address, and there is none of any kind during the delay.

Anybody have an idea what is happening? Would anything server-side such as HTTP headers cause this? Thanks for any help.

Upvotes: 0

Views: 1214

Answers (1)

user2103287

Reputation: 11

I have determined that the issue is that the web server was running only one listening TCP socket.

Clients such as Internet Explorer clearly expect to be able to make multiple concurrent requests and retrieve files in parallel, most probably using independent threads. When one thread has the single listening socket occupied, a second thread attempting to GET another file must wait until the first thread releases it. Because the second thread's initial connection attempt gets no response, it apparently must wait out a 3-second timeout before trying again, which matches TCP's typical initial SYN retransmission interval.

Because the algorithm is to time out and retry, it doesn't matter that the socket is occupied for only half a second before being returned to the listening state. The second thread won't try again until its timeout expires, hence the delay between file requests.

The client isn't sophisticated enough to hand the second file request off to the first thread, which is in the best position to notice that the socket is available again. Such a feature could improve throughput in cases like mine.

Upvotes: 1
