Reputation: 32001
How do I tell if gcc (more specifically, g++) is optimizing tail recursion in a particular function? (Because it's come up a few times: I don't want to test if gcc can optimize tail recursion in general. I want to know if it optimizes my tail recursive function.)
If your answer is "look at the generated assembler", I'd like to know exactly what I'm looking for, and whether or not I could write a simple program that examines the assembler to see if there's optimization.
PS. I know this appears as part of the question Which, if any, C++ compilers do tail-recursion optimization? from 5 months ago. However, I don't think this part of that question was answered satisfactorily. (The answer there was "The easiest way to check if the compiler did the optimization (that I know of) is perform a call that would otherwise result in a stack overflow – or looking at the assembly output.")
Upvotes: 72
Views: 31787
Reputation: 79820
In recent versions, GCC added support for the [[musttail]] function call annotation, which had been introduced by Clang. As described in this article:
Since 2021, Clang has supported a [[clang::musttail]] attribute on function calls, which instructs the compiler that the call must be an optimized tail call. If the compiler can't comply for some reason, it will raise an error. GCC added support for the same attribute in 2024.
If this annotation is added to a function call, then it guarantees that either the call is optimized as a tail call or the compilation fails:
int foo(int x) {
    return x + 42;
}

int bar(int x) {
    [[musttail]] return foo(x);
}
Upvotes: 0
Reputation: 609
These days you could check the DWARF for the DW_AT_call_tail_call and DW_AT_call_all_calls attributes.
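For example, take a source file with an obvious tail call (a minimal foo.c of my own; the names are purely illustrative, and the noinline attribute just keeps the call from being inlined away at -O2):

// foo.c -- bar() ends in a tail call to work(); with -g -O2 the compiler
// can emit a DW_TAG_call_site carrying DW_AT_call_tail_call for that call.
__attribute__((noinline)) int work(int x)
{
    return x * 2;
}

int bar(int x)
{
    return work(x + 1); // tail call
}

int main(void)
{
    return bar(20);
}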
gcc -g -O2 foo.c
dwarfdump a.out
You will get a DW_TAG_call_site where the tail call happened, something like this:
< 2><0x00000174>  DW_TAG_call_site
                    DW_AT_call_return_pc   0x000011cc
                    DW_AT_call_tail_call   yes(1)
                    DW_AT_call_origin      <0x000001a6>
                    DW_AT_sibling          <0x00000192>
The DWARF5 standard has the details for this.
Upvotes: 1
Reputation: 2518
We can use objdump for this. Here is an example. The code computes the n-th triangular number = 1 + 2 + ... + n:
// File tri.c
// See "-O1" vs "-O1 -foptimize-sibling-calls" to see the tail recursion optimization
#include <stdio.h>

int tri(int n, int result)
{
    if (0 == n)
    {
        return result;
    }
    result += n;
    --n;
    return tri(n, result);
}

int main()
{
    printf("%d\n", tri(36, 0)); // Prints 666 for the 36th triangular number.
    return 0;
}
For gcc 11.3.1 with the -O1 flag you get NO tail recursion optimization:
; gcc tri.c -O1
; objdump -d a.out
...
0000000000401126 <tri>:
401126: 89 f0 mov %esi,%eax
401128: 85 ff test %edi,%edi
40112a: 75 01 jne 40112d <tri+0x7>
40112c: c3 ret
40112d: 48 83 ec 08 sub $0x8,%rsp
401131: 8d 34 37 lea (%rdi,%rsi,1),%esi
401134: 83 ef 01 sub $0x1,%edi
401137: e8 ea ff ff ff call 401126 <tri> <--- The recursion call
40113c: 48 83 c4 08 add $0x8,%rsp
401140: c3 ret
...
For gcc 11.3.1 with the -O1 -foptimize-sibling-calls flags you GET the tail recursion optimization:
; gcc tri.c -O1 -foptimize-sibling-calls
; objdump -d a.out
...
0000000000401126 <tri>:
401126: 89 f0 mov %esi,%eax
401128: 85 ff test %edi,%edi
40112a: 74 07 je 401133 <tri+0xd>
40112c: 01 f8 add %edi,%eax
40112e: 83 ef 01 sub $0x1,%edi
401131: 75 f9 jne 40112c <tri+0x6>
401133: c3 ret
...
Upvotes: 0
Reputation: 170559
You could craft input data that, without the optimization, would lead to a stack overflow from recursing too deeply in that function, and see whether it happens. Of course, this is not trivial, and sometimes a big enough input will just make the function run for an intolerably long time.
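For instance (a sketch of that approach; the function deep and the depth of 100 million are arbitrary choices of mine, sized to exhaust a typical 8 MB stack if each call keeps its own frame):

#include <stdio.h>

// Tail-recursive sum 1 + 2 + ... + n, driven deep enough that a build
// without tail-call optimization exhausts a typical 8 MB stack, while an
// optimized build finishes almost instantly.
long long deep(long long n, long long acc)
{
    if (n == 0)
        return acc;
    return deep(n - 1, acc + n);
}

int main(void)
{
    printf("%lld\n", deep(100000000LL, 0));
    return 0;
}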
Upvotes: 0
Reputation:
Another way I checked this is:
Upvotes: 2
Reputation: 1503
EDIT: My original post also prevented GCC from actually doing tail call elimination. I've added some additional trickiness below that fools GCC into doing tail call elimination anyway.
Expanding on Steven's answer, you can programmatically check to see if you have the same stack frame:
#include <stdio.h>

// We need to get a reference to the stack without spooking GCC into turning
// off tail-call elimination
int oracle2(void) {
    char oracle;
    int oracle2 = (int)&oracle;
    return oracle2;
}

void myCoolFunction(params, ..., int tailRecursionCheck) {
    int oracle = oracle2();
    if( tailRecursionCheck && tailRecursionCheck != oracle ) {
        printf("GCC did not optimize this call.\n");
    }
    // ... more code ...
    // The return is significant... GCC won't eliminate the call otherwise
    return myCoolFunction( ..., oracle);
}

int main(int argc, char *argv[]) {
    myCoolFunction(..., 0);
    return 0;
}
int main(int argc, char *argv[]) {
myCoolFunction(..., 0);
return 0;
}
When calling the function non-recursively, pass in 0 for the check parameter. Otherwise, pass in oracle. If a tail-recursive call that should have been eliminated was not, you'll be informed at runtime.
When testing this out, it looks like my version of GCC does not optimize the first tail call, but the remaining tail calls are optimized. Interesting.
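For a concrete, runnable version of the same idea (a sketch only; the names frame_marker and countdown, the uintptr_t type, and the noinline attribute are my additions for illustration):

#include <stdint.h>
#include <stdio.h>

// Address of a local, used as a proxy for the current stack frame.
// Kept in its own noinline function so that taking the address doesn't
// stop GCC from tail-call-optimizing the caller.
__attribute__((noinline)) static uintptr_t frame_marker(void)
{
    char local;
    return (uintptr_t)&local;
}

// Tail-recursive countdown. 'expected' is 0 on the outermost call and the
// caller's frame marker on every recursive call.
static long countdown(long n, uintptr_t expected)
{
    uintptr_t here = frame_marker();
    if (expected && expected != here)
        printf("call at n = %ld was NOT tail-call optimized\n", n);
    if (n == 0)
        return 0;
    return countdown(n - 1, here); // the tail call under test
}

int main(void)
{
    countdown(100000, 0);
    return 0;
}

Built with -O2 (or -O1 -foptimize-sibling-calls) the check should stay silent, apart from possibly the very first recursive call as described above; built without optimization it reports every call.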
Upvotes: 17
Reputation: 61252
I am way too lazy to look at a disassembly. Try this:
void so(long l)
{
    ++l;
    so(l);
}

int main(int argc, char ** argv)
{
    so(0);
    return 0;
}
Compile and run this program. If it runs forever, the tail recursion was optimized away. If it blows the stack, it wasn't.
EDIT: sorry, read too quickly, the OP wants to know if his particular function has its tail-recursion optimized away. OK...
...the principle is still the same - if the tail-recursion is being optimized away, then the stack frame will remain the same. You should be able to use the backtrace function to capture the stack frames from within your function, and determine if they are growing or not. If tail recursion is being optimized away, you will have only one return pointer in the buffer.
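For instance, a rough sketch using glibc's backtrace() (glibc-specific; the helper current_depth and the buffer size of 64 are my own choices, and the helper lives in its own noinline function so that taking the buffer's address doesn't itself scare GCC out of the optimization):

#include <execinfo.h>
#include <stdio.h>

// Count how many frames backtrace() can see right now.
__attribute__((noinline)) static int current_depth(void)
{
    void *frames[64];
    return backtrace(frames, 64);
}

// Tail-recursive function under test: if GCC eliminates the tail call,
// the reported depth stays constant instead of growing with each call.
long so(long l, int remaining)
{
    printf("l = %ld, stack depth = %d\n", l, current_depth());
    if (remaining == 0)
        return l;
    return so(l + 1, remaining - 1);
}

int main(void)
{
    so(0, 10);
    return 0;
}

If the printed depth stays constant as the recursion proceeds, the tail call is being eliminated; if it grows by one per call, it isn't.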
Upvotes: 3
Reputation: 205014
Expanding on PolyThinker's answer, here's a concrete example.
int foo(int a, int b) {
    if (a && b)
        return foo(a - 1, b - 1);
    return a + b;
}
i686-pc-linux-gnu-gcc-4.3.2 -Os -fno-optimize-sibling-calls
output:
00000000 <foo>:
   0:   55                      push   %ebp
   1:   89 e5                   mov    %esp,%ebp
   3:   8b 55 08                mov    0x8(%ebp),%edx
   6:   8b 45 0c                mov    0xc(%ebp),%eax
   9:   85 d2                   test   %edx,%edx
   b:   74 16                   je     23 <foo+0x23>
   d:   85 c0                   test   %eax,%eax
   f:   74 12                   je     23 <foo+0x23>
  11:   51                      push   %ecx
  12:   48                      dec    %eax
  13:   51                      push   %ecx
  14:   50                      push   %eax
  15:   8d 42 ff                lea    -0x1(%edx),%eax
  18:   50                      push   %eax
  19:   e8 fc ff ff ff          call   1a <foo+0x1a>
  1e:   83 c4 10                add    $0x10,%esp
  21:   eb 02                   jmp    25 <foo+0x25>
  23:   01 d0                   add    %edx,%eax
  25:   c9                      leave
  26:   c3                      ret
i686-pc-linux-gnu-gcc-4.3.2 -Os
output:
00000000 <foo>:
   0:   55                      push   %ebp
   1:   89 e5                   mov    %esp,%ebp
   3:   8b 55 08                mov    0x8(%ebp),%edx
   6:   8b 45 0c                mov    0xc(%ebp),%eax
   9:   85 d2                   test   %edx,%edx
   b:   74 08                   je     15 <foo+0x15>
   d:   85 c0                   test   %eax,%eax
   f:   74 04                   je     15 <foo+0x15>
  11:   48                      dec    %eax
  12:   4a                      dec    %edx
  13:   eb f4                   jmp    9 <foo+0x9>
  15:   5d                      pop    %ebp
  16:   01 d0                   add    %edx,%eax
  18:   c3                      ret
In the first case, <foo+0x11>-<foo+0x1d> pushes arguments for a function call, while in the second case, <foo+0x11>-<foo+0x14> modifies the variables and jmps to the same function, somewhere after the preamble. That's what you want to look for.
I don't think you can do this programmatically; there's too much possible variation. The "meat" of the function may be closer to or further away from the start, and you can't distinguish that jmp from a loop or conditional without looking at it. It might be a conditional jump instead of a jmp. gcc might leave a call in for some cases but apply sibling call optimization to other cases.
FYI, gcc's "sibling calls" are slightly more general than tail-recursive calls -- effectively, any function call where reusing the same stack frame is okay is potentially a sibling call.
[edit]
As an example of when just looking for a self-recursive call will mislead you:
int bar(int n) {
    if (n == 0)
        return bar(bar(1));
    if (n % 2)
        return n;
    return bar(n / 2);
}
GCC will apply sibling call optimization to two out of the three bar calls. I'd still call it tail-call-optimized, since that single unoptimized call never goes further than a single level, even though you'll find a call <bar+..> in the generated assembly.
Upvotes: 6
Reputation: 400642
Look at the generated assembly code and see if it uses a call or jmp instruction for the recursive call on x86 (for other architectures, look up the corresponding instructions). You can use nm and objdump to get just the assembly corresponding to your function. Consider the following function:
int fact(int n)
{
    return n <= 1 ? 1 : n * fact(n-1);
}
Compile as
gcc fact.c -c -o fact.o -O2
Then, to test if it's using tail recursion:
# get starting address and size of function fact from nm
ADDR=$(nm --print-size --radix=d fact.o | grep ' fact$' | cut -d ' ' -f 1,2)
# strip leading 0's to avoid being interpreted by objdump as octal addresses
STARTADDR=$(echo $ADDR | cut -d ' ' -f 1 | sed 's/^0*\(.\)/\1/')
SIZE=$(echo $ADDR | cut -d ' ' -f 2 | sed 's/^0*//')
STOPADDR=$(( $STARTADDR + $SIZE ))
# now disassemble the function and look for an instruction of the form
# call addr <fact+offset>
if objdump --disassemble fact.o --start-address=$STARTADDR --stop-address=$STOPADDR | \
grep -qE 'call +[0-9a-f]+ <fact\+'
then
echo "fact is NOT tail recursive"
else
echo "fact is tail recursive"
fi
When run on the above function, this script prints "fact is tail recursive". When compiled with -O3 instead of -O2, it curiously prints "fact is NOT tail recursive".
Note that this might yield false negatives, as ehemient pointed out in his comment. This script only yields the right answer if the function contains no recursive calls to itself at all, and it also doesn't detect sibling recursion (e.g. where A() calls B() which calls A()). I can't think of a more robust method at the moment that doesn't involve having a human look at the generated assembly, but at least you can use this script to easily grab the assembly corresponding to a particular function within an object file.
Upvotes: 8
Reputation: 163357
Let's use the example code from the other question. Compile it, but tell gcc not to assemble:
gcc -std=c99 -S -O2 test.c
Now let's look at the _atoi function in the resultant test.s file (gcc 4.0.1 on Mac OS 10.5):
        .text
        .align 4,0x90
_atoi:
        pushl   %ebp
        testl   %eax, %eax
        movl    %esp, %ebp
        movl    %eax, %ecx
        je      L3
        .align 4,0x90
L5:
        movzbl  (%ecx), %eax
        testb   %al, %al
        je      L3
        leal    (%edx,%edx,4), %edx
        movsbl  %al,%eax
        incl    %ecx
        leal    -48(%eax,%edx,2), %edx
        jne     L5
        .align 4,0x90
L3:
        leave
        movl    %edx, %eax
        ret
The compiler has performed tail-call optimization on this function. We can tell because there is no call instruction in that code, whereas the original C code clearly had a function call. Furthermore, we can see the jne L5 instruction, which jumps backward in the function, indicating a loop when there was clearly no loop in the C code. If you recompile with optimization turned off, you'll see a line that says call _atoi, and you also won't see any backward jumps.
Whether you can automate this is another matter. The specifics of the assembler code will depend on the code you're compiling.
You could discover it programmatically, I think. Make the function print out the current value of the stack pointer (register ESP on x86). If the function prints the same value for the first call as it does for the recursive call, then the compiler has performed the tail-call optimization. This idea requires modifying the function you hope to observe, though, and that might affect how the compiler chooses to optimize the function. If the test succeeds (prints the same ESP value both times), then I think it's reasonable to assume that the optimization would also be performed without your instrumentation, but if the test fails, we won't know whether the failure was due to the addition of the instrumentation code.
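For instance, a rough sketch of that instrumentation using GCC's __builtin_frame_address as a stand-in for reading ESP directly (the accumulator-style factorial here is just an example function of my own):

#include <stdio.h>

// Tail-recursive factorial with an accumulator. With tail-call
// optimization every recursive activation should report the same
// frame address; without it, the addresses march down the stack.
long fact(long n, long acc)
{
    printf("n = %ld, frame = %p\n", n, __builtin_frame_address(0));
    if (n <= 1)
        return acc;
    return fact(n - 1, n * acc); // tail call
}

int main(void)
{
    printf("%ld\n", fact(10, 1));
    return 0;
}

If every line prints the same frame address, the calls are reusing one frame; if the addresses keep dropping, each call is getting its own frame.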
Upvotes: 72
Reputation: 5218
A simple method: build a simple tail recursion program, compile it, and disassemble it to see if it is optimized.
Just realized that you already had that in your question. If you know how to read assembly, it's quite easy to tell. Recursive functions will call themselves (with "call label") from within the function body, and a loop will be just "jmp label".
Upvotes: 2