Reputation: 5580
Consider these code snippets.
1.
vector<int> v;
f1(v.begin(), v.end());
f2(v.begin(), v.end());
2.
vector<int> v;
auto begin = v.begin();
auto end = v.end();
f1(begin, end);
f2(begin, end);
With today's compilers, is there any performance benefit to the second version? Imagine that it's not just f1 and f2, but f1 through fN.
Upvotes: 1
Views: 55
Reputation: 391
I'd say probably. The only way the function calls can be automatically optimized out is if the compiler can guarantee, beyond doubt, that the result will not have changed and that the calls have no other side effects.
Of course, the only reason to do that optimization manually is if you yourself know this to be true. So the question becomes: is the compiler smarter than you? It's possible, but I wouldn't always count on it. There are situations where the compiler can't make any guarantees about a function, such as when it resides in an external library; it would be impossible for the compiler to perform such an optimization in that case.
In the example you showed with iterator getters, the functions are probably inlined and const-qualified, which makes things a lot easier. In situations like that, you can probably trust the compiler to make the right decision. But if you're really worried about it, either use the second approach when you know it to be safe, or disassemble the output to make sure your compiler does what you want.
Upvotes: 0
Reputation: 126418
There is the issue that they don't necessarily do the same thing, if f1 modifies v in some way. If v is a local variable (and so cannot be modified by f1), then the code generated for both is likely to be the same. If f1 DOES modify v in some way, then (2) is likely to have undefined behavior, as the iterators were invalidated before f2 was called.
So in general, (1) is likely to be just as fast and safer...
Upvotes: 3