Reputation: 2435
I ran a small experiment to see whether Clang would produce better code if I compiled a set of dummy C source files into a single LLVM bitcode file (first using -emit-llvm to compile each file to a .bc file, then using llvm-link to merge them into one .bc file) before building a dummy library from it, as opposed to the usual route of compiling to individual object files and linking those. Clang did indeed perform some whole-program optimizations (WPO), such as inlining functions across translation units, that it would not have done otherwise. I am aware of link-time optimization (LTO) via -flto, so this was mostly an experiment to see how differently Clang behaves in this particular case.
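Concretely, the workflow was roughly the following (the file and library names are just placeholders for the dummy sources):

    # compile each translation unit to LLVM bitcode instead of a native object
    clang -O2 -emit-llvm -c a.c -o a.bc
    clang -O2 -emit-llvm -c b.c -o b.bc

    # merge all bitcode files into a single module
    llvm-link a.bc b.bc -o merged.bc

    # compile the merged module to a native object; the -O2 pipeline now sees
    # all translation units at once, so it can inline across them
    clang -O2 -c merged.bc -o merged.o

    # archive it into the dummy library
    ar rcs libdummy.a merged.o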
My question is: is it advisable to build binaries this way at all? Is the end result any different from simply using -flto? If so, what would differ, whether in the process or in the end result? If not, is this just a more contrived way of invoking LTO?
Upvotes: 1
Views: 363
Reputation: 34391
If not, is this just a more contrived way of invoking LTO?
Basically, yes.
Is the end result any different from simply having used -flto?
Well, I think there will be some differences, but they shouldn't be of any significance. When an LTO-aware linker links the bitcode and runs optimization passes, it uses the PassManagerBuilder::addLTOOptimizationPasses pipeline from lib/Transforms/IPO/PassManagerBuilder.cpp. When you optimize the code produced by llvm-link instead, the opt tool uses PassManagerBuilder::populateModulePassManager, which is clearly different. It is hard to say exactly what the differences will be, but most likely it comes down to some passes being run twice in the llvm-link + opt case.
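For comparison, a minimal sketch of the -flto route (using the same placeholder file names as in the question):

    # -flto makes clang emit LLVM bitcode into the .o files
    clang -O2 -flto -c a.c -o a.o
    clang -O2 -flto -c b.c -o b.o

    # at link time an LTO-aware linker (e.g. lld, or gold with the LLVM plugin)
    # merges the bitcode and runs the LTO optimization pipeline
    clang -O2 -flto a.o b.o -o program

The difference described above is which pipeline is applied to the combined bitcode afterwards: the LTO pipeline inside the linker, versus the regular module pipeline in opt (or clang) when you merge by hand with llvm-link.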
Upvotes: 1