Reputation: 13241
I'm new to goroutines, channels and the likes so apologies if this seems trivial.
I've written the following code:
for _, h := range hosts {
    go func() {
        httpClient := cleanhttp.DefaultPooledClient()
        // format the URL with the passed host and port
        url := fmt.Sprintf("https://%s:%v", h.Name, h.Port)
        // create a vault client
        client, err := api.NewClient(&api.Config{Address: url, HttpClient: httpClient})
        if err != nil {
            panic(err)
        }
        // get the current status
        status := v.VaultStatus(client)
        // send the status to a channel
        s <- strconv.FormatBool(status.Ready)
    }()
    // assign the value of channel to a var
    cs := <-s
    // print it
    fmt.Printf("Host: %s Status: %s\n", h.Name, cs)
}
The idea is simple: it takes a list of hosts and then uses the Go Vault API to determine the current status of each. I'm happy enough that it works.
What I'd like to do is ensure these operations happen in parallel. When I run the code above, I get results like this:
host: Host1: status: true
host: Host2: status: false
host: Host3: status: true
host: Host4: status: true
The issue is that the hosts always come back in the same order. I don't think the goroutines are executing in parallel at all; they appear to run one after the other, and the results are printed in the same order every time.
Is the code doing what I think it should? How can I tell whether these goroutines are actually running in parallel?
Upvotes: 0
Views: 877
Reputation: 2755
Assuming you have a:
type Status struct {
    URL   string
    Ready bool
}
and s initialized as:
s := make(chan Status)
then you could write:
var wg sync.WaitGroup
for _, h := range hosts {
    h := h
    wg.Add(1)
    go func() {
        defer wg.Done()
        httpClient := cleanhttp.DefaultPooledClient()
        // format the URL with the passed host and port
        url := fmt.Sprintf("https://%s:%v", h.Name, h.Port)
        // create a vault client
        client, err := api.NewClient(&api.Config{Address: url, HttpClient: httpClient})
        if err != nil {
            panic(err)
        }
        // get the current status
        status := v.VaultStatus(client)
        // send the status to the channel
        s <- Status{url, status.Ready}
    }()
}
// this goroutine's job is to close s after all the goroutines above have finished
go func() {
    wg.Wait()
    close(s) // so the following loop does not block after reading all statuses
}()
for st := range s {
    // here you could collect all statuses in a []Status or something;
    // for simplicity, just print them as you did
    fmt.Printf("Host: %s Status: %v\n", st.URL, st.Ready)
}
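If you would rather collect the results than print them as they arrive (as the comment in the loop suggests), a minimal sketch of that variant, reusing the same Status type and s channel, could replace the final loop:
// gather every Status first, then work with the complete slice
var statuses []Status
for st := range s {
    statuses = append(statuses, st)
}
// at this point all worker goroutines have finished and s has been closed
for _, st := range statuses {
    fmt.Printf("Host: %s Status: %v\n", st.URL, st.Ready)
}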
Upvotes: 1
Reputation: 6749
You are only running one goroutine at a time, because the main goroutine is waiting on the channel before continuing with the next iteration of the loop. Instead, you should wait for the results on the channel outside the for loop after all the goroutines have been started. By the way, you'll need to send something identifying the host on the channel as well.
You also have a potential problem in your goroutine function. You're using the variable h, which is changed by the main goroutine on each iteration of the loop, so you don't really know which value you'll see in the other goroutines (assuming you take care of the problem mentioned above so that the goroutines do run in parallel). Instead of referencing that variable directly, you should pass it as an argument to the goroutine function, or create a different variable inside the for loop, assign it the value of h, and use that variable inside the function.
Try doing it like this:
var wg sync.WaitGroup
for _, h := range hosts {
    h := h // create local copy of loop var
    wg.Add(1)
    go func() {
        defer wg.Done()
        httpClient := cleanhttp.DefaultPooledClient()
        // format the URL with the passed host and port
        url := fmt.Sprintf("https://%s:%v", h.Name, h.Port)
        // create a vault client
        client, err := api.NewClient(&api.Config{Address: url, HttpClient: httpClient})
        if err != nil {
            panic(err)
        }
        // get the current status
        status := v.VaultStatus(client)
        // print it
        fmt.Printf("Host: %s Status: %v\n", h.Name, status.Ready)
    }()
}
wg.Wait()
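For completeness, the argument-passing variant mentioned above only changes the closure's signature and the call that starts it. Here's a minimal sketch, where Host is a stand-in for whatever type the elements of hosts actually have and checkStatus is a hypothetical helper wrapping the Vault-checking body shown above:
var wg sync.WaitGroup
for _, h := range hosts {
    wg.Add(1)
    go func(h Host) { // each goroutine receives its own copy of h
        defer wg.Done()
        checkStatus(h) // hypothetical helper wrapping the body above
    }(h) // pass the current value of h explicitly
}
wg.Wait()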
Upvotes: 3
Reputation: 3559
Generally speaking, if you want to know whether goroutines are operating in parallel, you should trace the scheduler, for example with Go's execution tracer (the runtime/trace package and go tool trace).
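A minimal sketch of enabling the execution tracer around the code in question: write the trace to a file, then open it with go tool trace trace.out and look at the goroutine and proc timelines to see whether the goroutines actually overlapped.
package main

import (
    "os"
    "runtime/trace"
)

func main() {
    // write the execution trace to trace.out
    f, err := os.Create("trace.out")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    if err := trace.Start(f); err != nil {
        panic(err)
    }
    defer trace.Stop()

    // ... launch the goroutines you want to observe here ...
}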
Upvotes: 1