Reputation: 53
After many additions/deletions in a map, ranging over the map is slower than expected.
Consider the following code
package main

import (
	"fmt"
	"time"
)

func main() {
	test := make(map[int]bool)

	// Insert a million entries, then delete every one of them.
	for i := 0; i < 1000000; i++ {
		test[i] = true
	}
	for k := range test {
		delete(test, k)
	}

	// The map now holds a single entry again.
	test[0] = true
	fmt.Printf("test len(): %d\n", len(test))

	// Time a full iteration over that one remaining entry.
	now := time.Now()
	for range test {
	}
	fmt.Printf("range took %v\n", time.Since(now))
}
On my system, ranging over the test map, which at that point contains a single entry, takes about 3.5ms. This is a problem because we run similar code in production, and after a while each map has accumulated many millions of deleted entries.
The GC doesn't seem to help, and the only fix I can think of is to set the entire map to nil and then remake it. Are there any other ways?
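Concretely, the workaround I mean is replacing the map wholesale, roughly like this (shrink is just my name for it):

// shrink copies the live entries into a freshly allocated map so the
// oversized old one can be collected.
func shrink(m map[int]bool) map[int]bool {
	fresh := make(map[int]bool, len(m))
	for k, v := range m {
		fresh[k] = v
	}
	return fresh
}

and then test = shrink(test) after a batch of deletions.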
Upvotes: 0
Views: 1844
Reputation: 106
I'd expect performance to scale roughly with the maximum size the map has ever had, but I wouldn't expect it to get worse with additional writes-and-deletes over time. There are two components: the time to check all the buckets, which scales with the largest size the map has ever reached, and the time to yield the results, which scales with the number of entries currently present.
But adding and deleting things after that shouldn't change it further.
https://play.golang.org/p/Ouj-3MuZvVt
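If you want a self-contained local version of that experiment (I haven't checked that it matches the playground snippet exactly, and timings are machine-dependent; timeRange is just a helper I'm naming here), something like this separates the two costs:

package main

import (
	"fmt"
	"time"
)

// timeRange measures one full iteration over m.
func timeRange(m map[int]bool) time.Duration {
	start := time.Now()
	for range m {
	}
	return time.Since(start)
}

func main() {
	m := make(map[int]bool)
	for i := 0; i < 1000000; i++ {
		m[i] = true
	}
	fmt.Println("1e6 live entries:", timeRange(m)) // scales with current entries

	// Delete everything; the grown bucket array is kept.
	for k := range m {
		delete(m, k)
	}
	m[0] = true
	fmt.Println("1 live entry:", timeRange(m)) // still scans all buckets
}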
See also https://github.com/golang/go/issues/20135, the issue tracking "maps don't shrink after delete".
If you really need to iterate over a very small live portion of a map, you might want to make a new map containing only the items you're keeping. But it's not the insertions/deletions themselves costing you time; it's the maximum size the map has ever had. Note how much smaller the time consumption is if you add-and-delete 1000 items, 1000 times, rather than adding a million items and then deleting them all (see the sketch below).
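A rough sketch of that comparison (numbers will vary by machine):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Pattern A: grow to a million entries once, then delete them all.
	big := make(map[int]bool)
	for i := 0; i < 1000000; i++ {
		big[i] = true
	}
	for k := range big {
		delete(big, k)
	}
	big[0] = true

	// Pattern B: the same million insert+delete operations, but the
	// map never holds more than 1000 entries at any one time.
	small := make(map[int]bool)
	for round := 0; round < 1000; round++ {
		for i := 0; i < 1000; i++ {
			small[i] = true
		}
		for k := range small {
			delete(small, k)
		}
	}
	small[0] = true

	start := time.Now()
	for range big {
	}
	fmt.Println("peak size 1000000:", time.Since(start))

	start = time.Now()
	for range small {
	}
	fmt.Println("peak size 1000:", time.Since(start))
}

Both maps end up with one live entry, but ranging over big stays slow because its bucket array was sized for a million entries.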
Upvotes: 1