Reputation: 8389
With the client code below (and a listening web server on port 8088 on this box), I am rarely able to get more than 23,000 hits before this error pops up from client.Get():
panic: Get http://localhost:8088/: dial tcp 127.0.0.1:8088: can't assign requested address
Oddly, if I decrease the timer delay (i.e. from a millisecond to a microsecond), it takes far more hits to trigger the error: 170,000 or even more.
Looking at the network traffic, each client connection is used only a handful of times before the client disconnects (i.e. the client side sends a FIN). So it is clearly opening many TCP connections and overflowing the socket table. Since the Go HTTP docs say keep-alives are enabled by default, I wouldn't expect this. A kernel trace shows no errors from the underlying socket before the close (other than EAGAIN, which is expected and doesn't always precede a socket close).
This is with Go 1.4.2 on OS X (Darwin 14.4.0). Why are the client connections not being reused the whole time?
package main

import (
    "io/ioutil"
    "net/http"
    "runtime"
    "sync"
    "time"
)

var reqnum = 0

func hit(client *http.Client) {
    resp, err := client.Get("http://localhost:8088/")
    if err != nil {
        println(reqnum)
        panic(err)
    }
    defer resp.Body.Close()
    // Drain the body so the connection can go back into the idle pool.
    _, err = ioutil.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    reqnum++ // not thread safe, but shouldn't cause errors
}

func main() {
    var wg sync.WaitGroup
    runtime.GOMAXPROCS(runtime.NumCPU())
    client := &http.Client{}
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            ticker := time.NewTicker(time.Microsecond * 1000) // 1ms between requests per goroutine
            for j := 0; j < 120000; j++ {
                <-ticker.C
                hit(client)
            }
            ticker.Stop()
        }()
    }
    wg.Wait()
}
Upvotes: 5
Views: 7882
Reputation: 109404
The error can't assign requested address during a Dial is caused by running out of local ephemeral ports for the client connection. You're running out of ports simply because you're making too many connections, too fast. When you speed up the request rate, you start to catch idle connections as they go back into the pool, before they are closed: there's a code path that hands these newly idle connections to a pending Dial so a connection can be returned more quickly, but there's no way to deterministically catch them every time.
Since you're connecting to only one host (as discussed in the comments), what you need to do is set Transport.MaxIdleConnsPerHost a lot higher. You'll need to experiment to see where it balances out between holding too many open connections and recycling them too quickly.
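A minimal sketch of a client configured that way, replacing the client := &http.Client{} line in the question (the value 100 is only an illustrative starting point to tune from, not a recommendation):

    tr := &http.Transport{
        // The default, http.DefaultMaxIdleConnsPerHost, is only 2 (as of Go 1.4),
        // so under 10 concurrent goroutines most connections get closed instead
        // of pooled. 100 is an assumed starting value; tune it for your workload.
        MaxIdleConnsPerHost: 100,
    }
    client := &http.Client{Transport: tr}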
It may even be advantageous to put a semaphore around the client to cap the number of simultaneous requests, since too many in flight at once would again cause connections to recycle too quickly; see the sketch below.
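One way that might look, using a buffered channel as a counting semaphore around the question's hit function (the sem channel and its capacity of 10 are assumptions, not part of the original code):

    // sem caps the number of in-flight requests; 10 is an assumed tunable.
    var sem = make(chan struct{}, 10)

    func hit(client *http.Client) {
        sem <- struct{}{}        // acquire a slot, blocking if 10 requests are in flight
        defer func() { <-sem }() // release the slot when this request finishes
        resp, err := client.Get("http://localhost:8088/")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        // Drain the body so the connection can go back into the idle pool.
        if _, err := ioutil.ReadAll(resp.Body); err != nil {
            panic(err)
        }
    }

With both the idle pool and the in-flight count bounded, connections stay busy and get reused instead of churning through the dialer.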
Upvotes: 12