Reputation: 63
I am creating a simple HTTP server using Go. I have two questions: one is more theoretical and the other is about the actual program.
I create a server and use s.ListenAndServe() to handle the requests. As far as I understand, the requests are served concurrently. I use a simple handler to check it:
func ServeHTTP(rw http.ResponseWriter, request *http.Request) {
    fmt.Println("1")
    time.Sleep(1 * time.Second) // Phase 2: delete this line
    fmt.Fprintln(rw, "Hello, world.")
    fmt.Println("2")
}
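For context, the server is wired up roughly like this (the address here is just a placeholder):
s := &http.Server{
    Addr:    ":8080",                     // placeholder address
    Handler: http.HandlerFunc(ServeHTTP), // the handler shown above
}
s.ListenAndServe()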
I see that if I send several requests, all the "1"s appear first and only after a second do all the "2"s appear. But if I delete the Sleep line, the program never starts a request before it finishes the previous one (the output is 1 2 1 2 1 2 ...). So I don't understand whether they are really concurrent or not. If they were, I would expect the prints to be interleaved...
In the real handler, I send the request to another server and return the answer to the user (with some changes to the request and the answer, but in essence it is a kind of proxy). All this of course takes time, and from what I can see (by adding some prints to the handler), the requests are handled one by one, with no concurrency between them (my prints show that a request starts, goes through all the steps, ends, and only then do I see a new one start...).
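Roughly, the real handler does something like this (the backend URL and the details of the changes are just placeholders; it uses the io package in addition to net/http):
func realHandler(rw http.ResponseWriter, req *http.Request) {
    // forward the (slightly modified) request to the other server
    backendReq, err := http.NewRequest(req.Method, "http://backend.example.com"+req.URL.Path, req.Body)
    if err != nil {
        http.Error(rw, err.Error(), http.StatusInternalServerError)
        return
    }
    backendReq.Header = req.Header // sketch: real code copies and modifies the headers
    resp, err := http.DefaultClient.Do(backendReq)
    if err != nil {
        http.Error(rw, err.Error(), http.StatusBadGateway)
        return
    }
    defer resp.Body.Close()
    // copy the (slightly modified) answer back to the user
    rw.WriteHeader(resp.StatusCode)
    io.Copy(rw, resp.Body)
}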
What can I do to make them really concurrent?
Running the handler as a goroutine gives an error that the body of the request is already closed. Also, if handling is already concurrent, adding more goroutines would only make things worse.
Thank you!
Upvotes: 3
Views: 13885
Reputation: 611
The HTTP server in the Go standard library is highly concurrent. If you look inside its code, you'll see something like this (a simplified version):
l, _ := net.Listen("tcp", addr)
for {
    rw, _ := l.Accept()
    c := &conn{
        server: srv,
        rwc:    rw,
    }
    go c.serve()
}
In short: the server listens for connections in a simple for loop, but after it accepts a connection, it serves it in a separate goroutine.
There's really no problem handling tens of thousands of concurrent connections.
In your handler, feel free to access the database, do long-running calculations, etc. The only limit is whatever timeouts you choose for the HTTP server (and load balancers, etc., when deploying to production).
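A minimal sketch of setting those timeouts on the server itself (the values here are only illustrative):
package main

import (
    "log"
    "net/http"
    "time"
)

func main() {
    srv := &http.Server{
        Addr:         ":8080",
        Handler:      nil,              // nil means http.DefaultServeMux
        ReadTimeout:  5 * time.Second,  // limit for reading a request
        WriteTimeout: 10 * time.Second, // limit for writing a response
        IdleTimeout:  60 * time.Second, // keep-alive idle limit
    }
    log.Fatal(srv.ListenAndServe())
}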
Upvotes: 3
Reputation: 581
While Go serves requests concurrently, the client might actually be blocking (waiting for the first request to complete before sending the second), and then one will see exactly the behavior reported initially.
I was having the same problem, and my client code was sending a "GET" request via XMLHttpRequest to a slow handler (I used handler code similar to the one posted above, with a 10-second sleep). It turns out that such requests block each other. Example JavaScript client code:
for (var i = 0; i < 3; i++) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/slowhandler");
    xhr.send();
}
Note that xhr.send() will return immediately, since this is an asynchronous call, but that doesn't guarantee that the browser will send the actual "GET" request right away.
GET requests are subject to caching, and if one tries to GET the same URL, caching might (in fact, will) affect how the requests to the server are made. POST requests are not cached, so if one changes "GET" to "POST" in the example above, the Go server code will show that /slowhandler is fired concurrently (you will see "1 1 1 [...pause...] 2 2 2" printed).
Upvotes: 0
Reputation: 46
It does serve requests concurrently, as can be seen here in the source: https://golang.org/src/net/http/server.go#L2293.
Here is a contrived example:
package main

import (
    "fmt"
    "log"
    "net/http"
    "sync"
    "time"
)

func main() {
    go startServer()

    sendRequest := func() {
        resp, err := http.Get("http://localhost:8000/")
        if err != nil {
            log.Fatal(err)
        }
        resp.Body.Close()
    }

    start := time.Now()

    var wg sync.WaitGroup
    ch := make(chan int, 10)
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            sendRequest()
            ch <- n
        }(i)
    }

    go func() {
        wg.Wait()
        close(ch)
    }()

    fmt.Printf("completion sequence :")
    for routineNumber := range ch {
        fmt.Printf("%d ", routineNumber)
    }
    fmt.Println()
    fmt.Println("time:", time.Since(start))
}

func startServer() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        time.Sleep(1 * time.Second)
    })
    if err := http.ListenAndServe(":8000", nil); err != nil {
        log.Fatal(err)
    }
}
Over several runs it is easy to see that the completion order of the goroutines which send the requests is completely random, and given that channels are FIFO, we can conclude that the server handled the requests concurrently, regardless of whether HandleFunc sleeps or not (the assumption being that all the requests start at about the same time).
In addition, if you do sleep for a second in HandleFunc, the time it takes to complete all 10 goroutines is consistently 1.xxx seconds, which further shows that the server handled the requests concurrently; otherwise the total time to complete all requests would have been 10+ seconds.
Example:
completion sequence :3 0 6 2 9 4 5 1 7 8
time: 1.002279359s
completion sequence :7 2 3 0 6 4 1 9 5 8
time: 1.001573873s
completion sequence :6 1 0 8 5 4 2 7 9 3
time: 1.002026465s
Analyzing concurrency by printing without synchronization is almost always indeterminate.
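If you do want to observe the overlap directly, one option (not part of the original example; the names here are illustrative) is to count in-flight requests in the handler with sync/atomic instead of relying on print ordering:
package main

import (
    "log"
    "net/http"
    "sync/atomic"
    "time"
)

var inFlight int64 // number of requests currently inside the handler

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        n := atomic.AddInt64(&inFlight, 1)
        defer atomic.AddInt64(&inFlight, -1)
        if n > 1 {
            log.Printf("%d requests in flight", n) // any value > 1 proves overlap
        }
        time.Sleep(1 * time.Second)
    })
    log.Fatal(http.ListenAndServe(":8000", nil))
}
Any logged value greater than 1 means two requests were inside the handler at the same time, which is a deterministic signal of concurrency.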
Upvotes: 1
Reputation: 12320
Your example makes it very hard to tell what is happening.
The example below clearly illustrates that the requests are handled in parallel.
package main

import (
    "fmt"
    "log"
    "net/http"
    "time"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        if len(r.FormValue("case-two")) > 0 {
            fmt.Println("case two")
        } else {
            fmt.Println("case one start")
            time.Sleep(time.Second * 5)
            fmt.Println("case one end")
        }
    })
    if err := http.ListenAndServe(":8000", nil); err != nil {
        log.Fatal(err)
    }
}
Make one request to http://localhost:8000
Make another request to http://localhost:8000?case-two=true within 5 seconds
The console output will be:
case one start
case two
case one end
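If you'd rather drive the two requests from code than from a browser, a minimal client sketch (not part of the original answer; it assumes the server above is running on :8000) could look like this:
package main

import (
    "net/http"
    "time"
)

func main() {
    go func() {
        // case one: the handler sleeps for 5 seconds before answering
        if resp, err := http.Get("http://localhost:8000/"); err == nil {
            resp.Body.Close()
        }
    }()
    time.Sleep(100 * time.Millisecond) // let case one reach the handler first
    // case two: answered immediately, even though case one is still in progress
    if resp, err := http.Get("http://localhost:8000/?case-two=true"); err == nil {
        resp.Body.Close()
    }
    time.Sleep(6 * time.Second) // keep the program alive until case one finishes
}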
Upvotes: 7