Reputation: 384
I have a program that involves a lot of calls to a handful of functions, each of which locally allocates fixed-size arrays (a few hundred bytes total). Is it correct to assume that moving all the allocations to main and then passing pointers will be faster? In other words, does subtracting from the stack pointer take linear or constant time, and, if it takes constant time, what is the cost compared to passing a pointer to a function?
I did a small speed test. Example #1 runs a little faster.
Example #1
#include <iostream>
using namespace std;

int f(int* a){
    // do stuff
    return 0;
}

int main(){
    int a[1000];
    int x;
    for (int i = 0; i < 50000; ++i){
        x = f(a);
    }
    return 0;
}
Example #2
#include <iostream>
using namespace std;

int f(){
    int a[1000];
    // do stuff...
    return 0;
}

int main(){
    int x;
    for (int i = 0; i < 50000; ++i){
        x = f();
    }
    return 0;
}
Upvotes: 0
Views: 174
Reputation: 21607
There is no difference between the two as you have written them.
On some systems large allocations on the stack can cause problems, but int[1000] is a relatively small array and you are never allocating more than one of them at a time.
Consider the case where f() is a recursive function: then it would be possible to have large, repeated allocations, as in the sketch below.
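A minimal sketch of that situation (the function name g and the depth are made up for illustration): each active recursive call keeps its own copy of the array alive, so stack usage grows with recursion depth.

#include <cstdio>

// Each active call to g() holds its own ~4 KB array on the stack,
// so at the deepest point roughly depth * 4 KB of stack is in use.
int g(int depth){
    int a[1000];       // ~4 KB per stack frame
    a[0] = depth;      // touch the array so it is actually used
    if (depth == 0)
        return a[0];
    return a[0] + g(depth - 1);
}

int main(){
    // 1000 nested calls -> roughly 4 MB of stack, which can overflow
    // the default stack limit on platforms where it is small (e.g. 1 MB).
    std::printf("%d\n", g(1000));
    return 0;
}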
Upvotes: 0
Reputation: 3335
You seem to think of the allocation of locals' space as expensive, when in fact it isn't: it is just a subtraction from the stack pointer, a single constant-time operation no matter how large the locals are.
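To make that concrete, here is a sketch; the exact generated code depends on the compiler, target, and flags, but typical unoptimized x86-64 output reserves the entire array with one stack-pointer adjustment in the function prologue (something like "sub rsp, 4000"), regardless of the array's size.

// The whole 4000-byte array below is typically allocated by a single
// stack-pointer subtraction when f() is entered.
int f(){
    int a[1000];   // reserved by one constant-time adjustment
    a[0] = 42;     // use it so the compiler keeps the allocation
    return a[0];
}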
Considering the mess you'd probably make with pointers back-referencing "semi-global" local variables in main(), I can't see any real value in what you propose, although it's certainly possible to come up with a special example that proves me wrong.
In general, trying to optimize in the early stages of coding is a bad idea, especially if you trade simplicity and readability for (questionable) efficiency.
Try to code as simply and straightforwardly as possible. Optimize at a later stage if necessary, and not before you have clearly identified the bottlenecks (which is not easy; a minimal timing sketch follows).
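For example, here is one hypothetical way to time the question's own loop with <chrono>. The assumptions are mine, not the original poster's: f() must do real work and its result must be used, otherwise the optimizer may delete the loop entirely and you end up measuring nothing.

#include <chrono>
#include <iostream>

int f(int* a){
    a[0] += 1;   // stand-in for "do stuff"
    return a[0];
}

int main(){
    int a[1000] = {};
    int x = 0;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < 50000; ++i){
        x = f(a);
    }
    auto stop = std::chrono::steady_clock::now();
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
    std::cout << "x=" << x << ", " << us << " us\n";
    return 0;
}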
Upvotes: 5