Fernando

Reputation: 655

C++11 pass by value of POD type worse than const reference

This question is NOT about passing large objects by value or by reference, and it is not about move semantics, unlike many other questions on this topic.

I wanted to know how small a POD type has to be so that it is a better idea to pass it by value rather than by const reference. I wrote the following code:

#include <ctime>
#include <iostream>
#include <complex>

using namespace std;

using Number = double; //complex<double>;

struct acc {
  Number a;
  void f(const Number& x) { a += x; }
  void g(Number x) { a += x; }
};

int main()
{
  int n = 1000000000;
  Number *v = new Number[n];

  for (int i = 0; i < n; i++) {
    v[i] = Number(i);
  }

  clock_t b, e;
  acc foo;

#ifdef _const
  b = clock();
  for (int i = 0; i < n; i++)
    foo.f(v[i]);

  e = clock();

  cout << ((double) e - b) / CLOCKS_PER_SEC << endl;
#else
  b = clock();
  for (int i = 0; i < n; i++)
    foo.g(v[i]);

  e = clock();

  cout << ((double) e - b) / CLOCKS_PER_SEC << endl;
#endif

  cout << foo.a << endl;

  delete[] v;  // release the buffer allocated with new[]
  return 0;
}

I compiled with gcc without optimization.

When using Number = complex&lt;double&gt;, const reference was faster, which I expected to some degree. But pass by const reference was also faster when using Number = double, which completely surprised me (on my machine, it was 3.5 s for pass-by-value and 2.9 s for const reference).

Why is this? Isn't the amount of work, including memory access, the same in such a simple example? I have to write a template library, and I wanted to be careful to use const references or pass-by-value depending on the size of the template arguments, but now I think it is rather useless to worry about this. Does anyone have an idea what is going on?

If I compile with optimization, then both varieties run equally fast.

Upvotes: 0

Views: 791

Answers (1)

Yakk - Adam Nevraumont

Reputation: 275760

The compiler writers do not care if your unoptimized toy code is 20% slower in one similar case than in another. That is why.

Neither should you, unless you are in an extreme corner case where you need your debug build to be fast enough to meet some soft real-time requirement (say, finishing a render every X Hz, or processing data before the other end of a network connection times out) and that 20% slowdown is on a critical path.

Upvotes: 3
