Subby

Reputation: 2036

Go http server bad performance on request forwarding

Hi, I developed a little Go server that does (at the moment) nothing but forward requests to a local service on the machine it is running on. So it is nearly the same as nginx as a reverse proxy.

But I observe really bad performance that even uses up all the resources of the server and runs into timeouts on further requests.

I know this cannot be as performant as nginx, but I don't think it should be this slow.

Here is the server I use for forwarding the request:

package main

import (
    "github.com/gorilla/mux"
    "net/http"
    "github.com/sirupsen/logrus"
    "bytes"
    "io/ioutil"
)

func main() {
    router := mux.NewRouter()
    router.HandleFunc("/", forwarder).Methods("POST")

    server := http.Server{
        Handler: router,
        Addr:    ":8443",
    }

    logrus.Fatal(server.ListenAndServeTLS("cert.pem", "key.pem"))
}

var client = &http.Client{}

func forwarder(w http.ResponseWriter, r *http.Request) {
    // read request
    body, err := ioutil.ReadAll(r.Body)
    if err != nil {
        logrus.Error(err.Error())
        ServerError(w, nil)
        return
    }

    // create forwarding request
    req, err := http.NewRequest("POST", "http://localhost:8000", bytes.NewReader(body))
    if err != nil {
        logrus.Error(err.Error())
        ServerError(w, nil)
        return
    }

    resp, err := client.Do(req)
    if err != nil {
        logrus.Error(err.Error())
        ServerError(w, nil)
        return
    }
    // always close the body so the underlying connection can be reused
    defer resp.Body.Close()

    // read response
    respBody, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        logrus.Error(err.Error())
        ServerError(w, nil)
        return
    }

    // return response
    w.Header().Set("Content-Type", "application/json; charset=utf-8")
    w.WriteHeader(resp.StatusCode)
    w.Write(respBody)
}

From the client side I just measure the round-trip time. When I fire 100 requests per second, the response time goes up quite fast.

It starts with a response time of about 50 ms. After 10 seconds the response time is at 500 ms; after 10 more seconds it is at 8000 ms, and so on, until I get timeouts.

When I use nginx instead of my server, there is no problem running 100 requests per second; with nginx it stays at about 40 ms per request.

Some observations: with nginx, lsof -i | grep nginx shows no more than 2 open connections.

With my server, the number of connections increases up to 500, then connections in state SYN_SENT accumulate, and the requests run into timeouts.

Another finding: I measured the delay of this code line:

resp, err := client.Do(req)

That is where most of the time is spent, but that could also just be because the goroutines are starving!?

I don't know why I get such bad performance. My guess is that Go runs into problems because of the huge number of goroutines; maybe most of the time is used up scheduling these goroutines, and so the latency goes up?

What I also tried: the included httputil.NewSingleHostReverseProxy(). Performance is a little better, but the problem is the same.
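For reference, a minimal sketch of the httputil.NewSingleHostReverseProxy() variant, self-contained with httptest servers standing in for the real setup (in the question the target would be http://localhost:8000):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
	"strings"
)

// forwardOnce wires httputil.NewSingleHostReverseProxy in front of a fake
// backend, sends one POST through it, and returns the body that comes back.
func forwardOnce() string {
	// Stand-in for the local service that would listen on :8000.
	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		body, _ := io.ReadAll(r.Body)
		fmt.Fprintf(w, "echo:%s", body)
	}))
	defer backend.Close()

	target, _ := url.Parse(backend.URL)
	front := httptest.NewServer(httputil.NewSingleHostReverseProxy(target))
	defer front.Close()

	resp, err := http.Post(front.URL, "application/json", strings.NewReader(`{"a":1}`))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	return string(out)
}

func main() {
	fmt.Println(forwardOnce()) // echo:{"a":1}
}
```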

UPDATE:

Now I tried fasthttp:

package main

import (
    "github.com/sirupsen/logrus"
    "github.com/valyala/fasthttp"
)

func StartNodeManager() {
    fasthttp.ListenAndServeTLS(":8443", "cert.pem", "key.pem", forwarder)
}

var client = fasthttp.Client{}

func forwarder(ctx *fasthttp.RequestCtx) {
    req := fasthttp.AcquireRequest()
    resp := fasthttp.AcquireResponse()
    // release on every path, including errors
    defer fasthttp.ReleaseRequest(req)
    defer fasthttp.ReleaseResponse(resp)

    req.Header.SetMethod("POST")
    req.SetRequestURI("http://127.0.0.1:8000")
    req.SetBody(ctx.Request.Body())

    err := client.Do(req, resp)
    if err != nil {
        logrus.Error(err.Error())
        ctx.Response.SetStatusCode(500)
        return
    }

    ctx.Response.SetStatusCode(resp.StatusCode())
    ctx.Response.SetBody(resp.Body())
}

A little bit better, but after 30 seconds the first timeouts arrive and the response time goes up to 5 seconds.

Upvotes: 4

Views: 5146

Answers (2)

rajni kant

Reputation: 114

The root cause of the problem is that Go's http Transport does not manage connections to the upstream the way you need by default: MaxIdleConnsPerHost defaults to 2, so under load almost every request has to open a fresh connection, the old ones pile up in TIME_WAIT, and with the growing number of connections you get a decrease in performance.

You just have to set

// 1000 is what I am using
http.DefaultTransport.(*http.Transport).MaxIdleConns = 1000
http.DefaultTransport.(*http.Transport).MaxIdleConnsPerHost = 1000

before starting your forwarder, and this will solve your problem.

By the way, use the Go standard library's reverse proxy; this will take away a lot of headache. But even for the reverse proxy you need to set MaxIdleConns and MaxIdleConnsPerHost on its Transport.


Upvotes: 1

Alexander Trakhimenok

Reputation: 6278

First of all, you should profile your app and find out where the bottleneck is.

Second, I would look for ways to write the code with fewer memory allocations on the heap and more on the stack.

A few ideas:

  1. Do you need to read the request body for every request?
  2. Do you always need to read the response body?
  3. Can you pass the body of the client request straight through to the server request? func NewRequest(method, url string, body io.Reader) (*Request, error) accepts any io.Reader.
  4. Use sync.Pool.
  5. Consider using fasthttp, as it creates less pressure on the garbage collector.
  6. Check whether your server uses the same optimisations as Nginx, e.g. Keep-Alive, caching, etc.
  7. Again, profile and compare against Nginx.

It seems there is a lot of room for optimization.
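Ideas 1–3 can be sketched together: instead of buffering everything with ioutil.ReadAll on both legs, hand the incoming r.Body (an io.Reader) straight to http.NewRequest and stream the upstream response back with io.Copy. The httptest backend and the streamForward name here are only for a self-contained example, not the asker's actual code:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

// streamForward returns a handler that streams the request body to the
// upstream and the upstream response back to the caller, with no full
// in-memory buffering on either leg.
func streamForward(upstream string, client *http.Client) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		req, err := http.NewRequest(http.MethodPost, upstream, r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		resp, err := client.Do(req)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		defer resp.Body.Close() // always close so the connection is reused
		w.WriteHeader(resp.StatusCode)
		io.Copy(w, resp.Body) // stream instead of ReadAll + Write
	}
}

// demo sends one request through the streaming forwarder and returns the reply.
func demo() string {
	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.Copy(w, r.Body) // echo the body back
	}))
	defer backend.Close()

	front := httptest.NewServer(streamForward(backend.URL, http.DefaultClient))
	defer front.Close()

	resp, err := http.Post(front.URL, "text/plain", strings.NewReader("hi"))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	return string(out)
}

func main() {
	fmt.Println(demo()) // hi
}
```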

Upvotes: 0
