Reputation: 102464
-Og is a relatively new optimization option intended to improve the debugging experience while applying optimizations. If a user selects -Og, I'd like my source files to activate alternate code paths that enhance the debugging experience. GCC offers the __OPTIMIZE__ preprocessor macro, but it's only set to 1 when optimizations are in effect.
Is there a way to learn the optimization level, like -O1, -O3, or -Og, for use with the preprocessor?
Upvotes: 31
Views: 7885
Reputation: 755
Some system-specific preprocessor macros exist, depending on your target. For example, the Microchip-specific XC16 variant of gcc (currently based on gcc 4.5.1) has the preprocessor macro __OPTIMIZATION_LEVEL__, which takes on the values 0, 1, 2, s, or 3.
Note that overriding optimization for a specific routine, e.g. with __attribute__((optimize(0))), does not change the value of __OPTIMIZE__ or __OPTIMIZATION_LEVEL__ within that routine.
Upvotes: 1
Reputation: 10445
I don't know if this is a clever hack, but it is a hack.
$ gcc -Xpreprocessor -dM -E - < /dev/null > 1
$ gcc -Xpreprocessor -dM -O -E - < /dev/null > 2
$ diff 1 2
53a54
> #define __OPTIMIZE__ 1
68a70
> #define _FORTIFY_SOURCE 2
154d155
< #define __NO_INLINE__ 1
clang didn't produce the FORTIFY one.
Upvotes: 28
Reputation: 5542
I believe it is not possible to directly learn the optimization level used to compile the software, as it is not in the list of predefined preprocessor symbols.
You could rely on -DNDEBUG
(no debug), which is conventionally used to disable assertions in release code, and enable your "debug" code path when it is absent.
However, I believe a better approach is to define a project-wide set of symbols local to your project and let the user choose what to use explicitly:
MYPROJECT_DNDEBUG
MYPROJECT_OPTIMIZE
MYPROJECT_OPTIMIZE_AGGRESSIVELY
This makes debugging the differences in behavior between release and debug builds much easier, as you can incrementally turn the different behaviors on and off.
Upvotes: 11