Andre

Reputation: 141

Using GCC with new glibc and binutils to build software for system with older sysroot

I have had this question for some months now and haven't been able to find an answer with Google.

Background: I am cross-compiling software for ARM-based controllers which run the Linux distribution ptxdist. The complete Linux image is built with a cross GCC (4.5.2) that was built against glibc 2.13 and binutils 2.21. The C++ standard supported by that compiler is quite old, so I built a new toolchain which supports C++11 (GCC 4.8.5). It is built against glibc 2.20 and binutils 2.24. I want to use that new compiler for my application software on the controller (not the complete image, just this one "main" binary), which is updated through a package management system.

The software seems to run. I just need to set LD_LIBRARY_PATH so that the binary picks up libstdc++.so.6.0.19 instead of libstdc++.so.6.0.14. It does not accept the new libc (libc 2.20 instead of libc 2.13), though.

So the binary uses libstdc++.so.6.0.19 and the rest of the system stays unchanged.
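Roughly, the setup on the target looks like this (the paths are placeholders, not my real ones):

# put the newer libstdc++ from the cross toolchain into its own directory
mkdir -p /opt/newlibs
cp libstdc++.so.6.0.19 /opt/newlibs/
ln -sf libstdc++.so.6.0.19 /opt/newlibs/libstdc++.so.6

# let only this one binary resolve libstdc++.so.6 from there
LD_LIBRARY_PATH=/opt/newlibs /usr/bin/main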

Question: Why does this work? What risks should I expect when running this software, and should I do it at all? For example, will the binary at some point miss functions from glibc 2.20 because it only gets glibc 2.13 on the target machine? Building GCC 4.8.5 against glibc 2.13 is not possible (at least, I could not manage it).

What I have read so far is that it depends on changes in the ABI: Impact on upgrade gcc or binutils

There it is said that C code is compatible if built with GCC 4.1 through GCC 4.8.

Thank you!

Upvotes: 2

Views: 2226

Answers (2)

Andre

Reputation: 141

Good material on this can be found here:

Multiple glibc libraries on a single host

Glibc vs GCC vs binutils compatibility

My final solution is this:

I built GCC 4.8.5 as a cross compiler. I could not manage to build it against the older glibc 2.13, only against version 2.20. It may be possible, but in my case it did not work. That is not a problem, though, because I also built it with the --with-sysroot flag, so software compiled with it depends entirely on my old system, including the C runtime. This alone does not give me a newer C++ standard, but with compiler optimizations switched on I got smaller and faster binaries.

To actually use a newer C++ standard, I can link the newer libstdc++ that ships with my cross compiler by adding -l:libstdc++.so.6.0.19 to LDFLAGS. I then only have to copy that additional libstdc++ onto my target next to the old one. Looking at the symbols used by the new library with

strings libstdc++.so.6.0.19 | grep GLIBC_

you can see that it does not depend on any symbol version newer than GLIBC_2.4. It looks like I will never run into the problem of missing symbols, so in my case I am lucky and get C++11 without any changes to the rest of the system. If newer symbols were required, you would have to follow the links above, which are quite informative; I would not try that myself, though. For comparison, with GCC 4.9.4 the bundled libstdc++.so.6.0.20 has symbols referencing GLIBC_2.17, which could give me trouble since I am cross compiling against glibc 2.13.
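For reference, the link step then looks roughly like this (the cross compiler prefix and the sysroot path are placeholders, not my real ones):

arm-linux-gnueabi-g++ --sysroot=/path/to/old-sysroot -std=c++11 -O2 \
    -o main main.cpp -l:libstdc++.so.6.0.19

At runtime the binary still asks for the SONAME libstdc++.so.6, so the LD_LIBRARY_PATH trick from the question is what makes it pick the copied 6.0.19 instead of the system 6.0.14.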

Upvotes: 0

Florian Weimer

Reputation: 33717

glibc 2.14 introduced the memcpy@GLIBC_2.14 symbol, so pretty much all software compiled against glibc 2.20 will not work on glibc 2.13 because that symbol is missing there. Ideally, you would build your new GCC against glibc 2.13, not glibc 2.20. You claim that building GCC 4.8.5 against glibc 2.13 is not possible, but this is clearly not true in general.
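To see which glibc symbol versions a given binary actually requires, you can list the version references in its dynamic symbol table; a quick sketch (the cross objdump prefix and the binary name are placeholders):

arm-linux-gnueabi-objdump -T main | grep -o 'GLIBC_[0-9.]*' | sort -Vu

If anything newer than GLIBC_2.13 shows up, the dynamic linker on the old target will refuse to start the binary with a "version not found" error.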

Some newer C++ features will work with the old system libstdc++ because they depend exclusively on templates (from header files) and none of the new code in libstdc++.
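One way to check whether the features you use stay in that template-only category is to build a small C++11 test with the new compiler and compare the GLIBCXX versions it references against the ones the old system libstdc++ exports; a sketch with placeholder paths and names:

# versions the test binary requires
arm-linux-gnueabi-g++ --sysroot=/path/to/old-sysroot -std=c++11 -o test test.cpp
arm-linux-gnueabi-objdump -T test | grep -o 'GLIBCXX_[0-9.]*' | sort -Vu

# versions the old system libstdc++ provides
strings /path/to/old-sysroot/usr/lib/libstdc++.so.6 | grep -o 'GLIBCXX_[0-9.]*' | sort -Vu

If the first list is a subset of the second, the old runtime is sufficient.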

You could also investigate how the hybrid linkage model works in Red Hat Developer Toolset. It links the newer parts of libstdc++ statically, while relying on the system libstdc++ for the common, older parts. This way, you get proper interoperability for things like exceptions and you do not have to install a newer libstdc++ on the target.
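If installing a newer libstdc++ on the target is not an option at all, a rough approximation of that idea (not the actual Developer Toolset linker-script mechanism) is to link libstdc++ statically into the one binary while still linking glibc dynamically from the old sysroot; a sketch with placeholder names:

arm-linux-gnueabi-g++ --sysroot=/path/to/old-sysroot -std=c++11 \
    -static-libstdc++ -o main main.cpp

The price is a larger binary and no shared libstdc++ across processes, but nothing new has to be deployed on the target.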

Upvotes: 0
