Warty

Reputation: 7395

Long delay [~1s] between browser attempting to connect and Socket.Accept()

Overview of the problem: I've been playing with writing custom HTTP server apps for a while now. I found that whenever a web browser connected to my server app, there would be a 0.5-1 second "latency" (according to Google Chrome) before the request was handled [the handling itself takes only milliseconds].

I eventually tried to make a dummy program to pinpoint the problem:

using System.Text;
using System.Net;
using System.Net.Sockets;

namespace SlowHTTPServer 
{ 
    class FailServer 
    {
        static void Main() 
        {
            //Create socket object, bind it, listen to 80
            Socket listenerSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            listenerSocket.Bind(new IPEndPoint(IPAddress.Any, 80));
            listenerSocket.Listen(64);

            while (true)
            {
                Socket clientConn = listenerSocket.Accept();            //Accept
                System.DateTime startTime = System.DateTime.Now;

                byte[] buffer = new byte[1024];                         //Request header buffer
                int received = clientConn.Receive(buffer);              //Receive the request header; Receive returns the byte count

                string reqHeader = Encoding.ASCII.GetString(buffer, 0, received);   //Decode only the bytes actually received
                                                                        //We completely ignore most of the request header lol

                //Normally I'd send a response header, but...
                if (reqHeader.IndexOf("script1") != -1)                 //script1.js - document.title='hai dere'
                    clientConn.Send(Encoding.ASCII.GetBytes("document.title='hai dere';"));
                else if (reqHeader.IndexOf("script2") != -1)            //script2.js - Get a pretty background color onload
                    clientConn.Send(Encoding.ASCII.GetBytes("window.onload=function(){document.body.style.backgroundColor='#FF99FF';};"));
                else if (reqHeader.IndexOf("iframe") != -1)             //Noob iframe that just has text.
                    clientConn.Send(Encoding.ASCII.GetBytes("blargh zargh nargh dargh pikachu tangerine zombie destroy annihilate"));
                else                                                    //hai dere is the body.innerHTML, load script1.js and script2.js
                    clientConn.Send(Encoding.ASCII.GetBytes("<html><head><script src='script1.js'></script><script src='script2.js'></script></head><body>mainPage<iframe src='iframe.html'>u no haz iframe</iframe></body></html>"));

                clientConn.Close();                                     //Close the connection to client.  We've done such a good job!

                System.Console.WriteLine((System.DateTime.Now - startTime).TotalMilliseconds);
            }
        }
    }
}

... And I am now totally confused, because the above program serves the page + script + iframe to the web browser over a roughly 2-second span [connecting from localhost to localhost; Windows Firewall and antivirus are off]. A server like Apache would handle the same requests [using pre-created files in a file system, of course] in well under 100 milliseconds.

From what I can see, the code I've posted is an extremely stripped-down HTTP server. Performance does not change whether or not I send response headers.

The problem becomes extremely annoying when I have projects with 30+ JavaScript files, taking 20+ seconds for the page to fully load [which is unacceptable when the scripts are all under 10 KB].

If you look at the code, you'll notice that I track when the socket is accepted and when it closes, and the handling time is usually just 1-10 milliseconds, meaning that the lag happens between the HTTP server and the web browser [before the connection is accepted?].

I'm stumped, and in need of help. Thanks.

Other notes:
- The "real" server I wrote accepts client connections and passes them off to another thread to do the work, so the task of sending bytes and closing the connection isn't the reason for the lag.
- The server binds to port 80; to test the code, run the program and navigate to http://127.0.0.1/ or http://localhost/
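For reference, that hand-off pattern can be sketched roughly as below. This is my own sketch, not the real server's code; `Route` is a hypothetical helper that just mirrors the if/else chain in the program above, kept pure so the worker threads share no state:

```csharp
using System;
using System.Net.Sockets;
using System.Text;
using System.Threading;

class ThreadedServerSketch
{
    // Pure routing helper mirroring the if/else chain above.
    public static string Route(string reqHeader)
    {
        if (reqHeader.IndexOf("script1") != -1)
            return "document.title='hai dere';";
        if (reqHeader.IndexOf("script2") != -1)
            return "window.onload=function(){document.body.style.backgroundColor='#FF99FF';};";
        if (reqHeader.IndexOf("iframe") != -1)
            return "blargh zargh nargh dargh pikachu tangerine zombie destroy annihilate";
        return "<html><head><script src='script1.js'></script><script src='script2.js'></script></head><body>mainPage<iframe src='iframe.html'>u no haz iframe</iframe></body></html>";
    }

    static void HandleClient(object state)
    {
        Socket client = (Socket)state;
        byte[] buffer = new byte[1024];
        int received = client.Receive(buffer);
        client.Send(Encoding.ASCII.GetBytes(Route(Encoding.ASCII.GetString(buffer, 0, received))));
        client.Close();
    }

    static void Run(Socket listenerSocket)
    {
        while (true)
        {
            // Accept on this thread only; hand the socket to the pool so
            // the loop gets back to Accept() immediately.
            Socket clientConn = listenerSocket.Accept();
            ThreadPool.QueueUserWorkItem(HandleClient, clientConn);
        }
    }
}
```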

Images:
Chrome's developer tools showing the load times


Additional program:
I attempted to reproduce the problem by creating a very simple server/client program... The problem was NOT reproduced; the connection time was 1 millisecond. I am further stumped.

using System;
using System.Text;
using System.Net;
using System.Net.Sockets;
using System.Threading;

namespace SlowSocketAccept
{
    class Program
    {
        static DateTime connectionBeginTime;
        static bool listening = false;
        static void Main(string[] args)
        {

            new Thread(ClientThread).Start();
            Socket listenerSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            listenerSocket.Bind(new IPEndPoint(IPAddress.Any, 80));
            listenerSocket.Listen(80);

            listening = true;
            Socket newConn = listenerSocket.Accept();
            byte[] reqHeader = new byte[1024];
            newConn.Receive(reqHeader);
            newConn.Send(Encoding.ASCII.GetBytes("Response Header\r\n\r\nContent"));
            newConn.Close();

            Console.WriteLine("Elapsed time: {0} ms", (DateTime.Now - connectionBeginTime).TotalMilliseconds);

        }
        static void ClientThread()
        {
            while (listening == false) ; //Busy-wait; it's just an example
            System.Threading.Thread.Sleep(10); //Give the listener time to reach Accept() =/

            Socket s = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            connectionBeginTime = DateTime.Now;
            s.Connect(new IPEndPoint(IPAddress.Parse("127.0.0.1"), 80));
            s.Send(Encoding.ASCII.GetBytes("Request Header"));
            byte[] response = new byte[1024];
            s.Receive(response);
        }
    }
}

[a and b are threads]
a) Launch thread 'b'
b) Busy-wait until 'a' sets a flag saying it's okay to connect
a) Create socket, begin listening on 80, set the flag telling 'b' it's okay to connect
b) Create socket, store current time, and connect to 127.0.0.1:80 [localhost:80]
a) Accept the connection and begin receiving
b) Send a dummy string [our request header]
a) Receive the dummy string, then send a dummy string back [response header + content]
a) Close the connection

The elapsed time is about 1 millisecond =/

Upvotes: 3

Views: 1345

Answers (1)

cHao

Reputation: 86524

Unless you include a Content-Length header -- and specify either that you're using HTTP 1.0, or that the connection should close -- the browser will keep reading content until it notices the connection has been closed (which can take a while).

Make sure your app is sending a Content-Length header, and "Connection: close".

Also, you should be shutting down the client socket before you close it. This tells .NET (and perhaps the other side, I forget) that you're done sending data, and that it should be flushed out to the network. Close is a code-side thing; Shutdown is what actually closes the connection from the client's point of view.
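To make that concrete, here's a minimal sketch of a response that carries both headers, followed by a shutdown before the close. The helper names `BuildResponse` and `SendAndClose` are mine, not part of the question's code:

```csharp
using System;
using System.Net.Sockets;
using System.Text;

class ResponseHelper
{
    // Build a minimal HTTP response whose headers tell the browser exactly
    // how many content bytes to expect and that the connection will close.
    public static byte[] BuildResponse(string body)
    {
        byte[] content = Encoding.ASCII.GetBytes(body);
        string header =
            "HTTP/1.1 200 OK\r\n" +
            "Content-Type: text/html\r\n" +
            "Content-Length: " + content.Length + "\r\n" +
            "Connection: close\r\n" +
            "\r\n";
        byte[] headerBytes = Encoding.ASCII.GetBytes(header);
        byte[] response = new byte[headerBytes.Length + content.Length];
        Buffer.BlockCopy(headerBytes, 0, response, 0, headerBytes.Length);
        Buffer.BlockCopy(content, 0, response, headerBytes.Length, content.Length);
        return response;
    }

    public static void SendAndClose(Socket client, string body)
    {
        client.Send(BuildResponse(body));
        client.Shutdown(SocketShutdown.Both); // signal end-of-stream to the peer
        client.Close();                       // then release the handle
    }
}
```

With headers like these, the browser knows the response is complete as soon as the last content byte arrives, instead of waiting for the connection to die.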

Upvotes: 2
