Yixing Liu

Reputation: 2429

How can I specify the GLIBC version in cargo build for Rust?

I use Rust 1.34 and 1.35. Currently it links to GLIBC_2.18.

How can I limit cargo build to link GLIBC up to version 2.14?
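To see which GLIBC symbol versions a binary currently requires, the dynamic symbol table can be checked with objdump (the path below is just an example):

objdump -T target/release/mybinary | grep GLIBC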

Upvotes: 26

Views: 12698

Answers (3)

Cosimos Cendo

Reputation: 61

crosstool-ng is always my go-to when dealing with compiling for anything but the defaults on my native PC. This includes just using a different GLIBC.

First, you'll need to download and install ct-ng.
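If your distribution doesn't package it, building from a release tarball is roughly the following (the version number, URL, and install prefix are only examples; check the crosstool-ng site for the current release):

wget https://crosstool-ng.org/download/crosstool-ng/crosstool-ng-1.25.0.tar.xz
tar xf crosstool-ng-1.25.0.tar.xz
cd crosstool-ng-1.25.0
./configure --prefix=$HOME/.local
make && make install
export PATH="$HOME/.local/bin:$PATH"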

Once installed, find a target triple to work from:

ct-ng list-samples

This will output a list of target triples that are already set up and ready to edit for your specific use case:

...
[G...]   x86_64-ol8u7-linux-gnu
[G..X]   x86_64-w64-mingw32,x86_64-pc-linux-gnu
[G...]   x86_64-ubuntu14.04-linux-gnu
[G...]   x86_64-ubuntu16.04-linux-gnu
[G...]   x86_64-unknown-linux-gnu
[G...]   x86_64-unknown-linux-uclibc
[G..X]   x86_64-w64-mingw32
[G..X]   xtensa-fsf-elf
...

Find one that matches the architecture, operating system, and compiler toolchain that you'll be using. In my case, it's x86_64-unknown-linux-gnu.

Next, activate that target:

ct-ng x86_64-unknown-linux-gnu

At this point we could start building the toolchain for that target, but we need to edit the version of GLIBC it's using first. Open the configuration for that target:

ct-ng menuconfig

There are a lot of options to mess with here. Most of them should already be correct since we're working from a preconfigured sample. We'll just change the version of GLIBC. Navigate to C-library and change Version of glibc to your desired version. Exit the configuration and build the toolchain:

ct-ng build

This will take some time. About 15 minutes on my PC. When finished, you'll have a cross compilation toolchain installed in ~/x-tools/<target>.

In your Rust project, create a file called .cargo/config.toml and point it at your cross-compiler. Here is what mine looks like:

[target.x86_64-unknown-linux-gnu]
linker = "/home/cendo/x-tools/x86_64-unknown-linux-gnu/bin/x86_64-unknown-linux-gnu-gcc"

Finally, build your project using that target:

cargo build --target x86_64-unknown-linux-gnu

Your binaries will be located in target/x86_64-unknown-linux-gnu/debug/.

Upvotes: 0

owndampu

Reputation: 123

I just found out about cargo-zigbuild. This project lets you do this by appending a glibc version to the target triple:
cargo zigbuild --target aarch64-unknown-linux-gnu.2.14 (or whatever target you are aiming for)

Note that you also need to have zig installed for this method.
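Roughly, the setup looks like this (installing zig through pip is just one option; installing zig from your package manager works too):

cargo install cargo-zigbuild
pip install ziglang
rustup target add aarch64-unknown-linux-gnu
cargo zigbuild --target aarch64-unknown-linux-gnu.2.14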

Upvotes: 4

rburmorrison

Reputation: 730

Unfortunately, you can't. Not really, and not consistently. This is a problem with any binary that dynamically links to GLIBC. You can try setting up multiple GLIBC versions and linking to one, or you can try patching the resulting binary, but it's inconsistent and impractical.

So what are some practical options?

  1. Compile Statically

By using MUSL instead of GLIBC we can compile statically.

To install the MUSL target with rustup (assuming x86_64 architecture):

$ rustup component add rust-std-x86_64-unknown-linux-musl

And to use it when compiling:

$ cargo build --target x86_64-unknown-linux-musl

This is the easiest method by far, but won't always work, especially when using native libraries, unless they can also be compiled statically.
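To confirm the result really is static, you can run ldd on it; it should report the binary as statically linked (or "not a dynamic executable"). The path below assumes a debug build and a placeholder binary name:

$ ldd target/x86_64-unknown-linux-musl/debug/mybinary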

  2. Make a VM That Has an Older Version

This is a common approach. By building on an OS with an outdated GLIBC, the binary will only reference GLIBC symbols that are compatible with it.

  3. Use a Docker Container

This is probably the most convenient method, in my opinion. If you have Docker, you can just compile your project with a container that contains an old GLIBC. View the Rust container's README for compilation instructions. The command below will compile a project using Rust 1.67 and GLIBC 2.28 (which ships with Buster):

$ docker run --rm --user "$(id -u)":"$(id -g)" -v "$PWD":/usr/src/myapp -w /usr/src/myapp rust:1.67-buster cargo build --release

I compiled this on Ubuntu 22.04 and tested it on Ubuntu 20.04.

To test further, I made sure the binary relied on another dynamic library (OpenSSL) and here's the result of ldd ./mybinary after compiling with the Docker container:

$ ldd ./mybinary 
    linux-vdso.so.1 (0x00007ffd98fdf000)
    libcrypto.so.1.1 => /lib/x86_64-linux-gnu/libcrypto.so.1.1 (0x00007fe49e248000)
    libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fe49e22d000)
    librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fe49e223000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fe49e200000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fe49e0b1000)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fe49e0ab000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fe49deb7000)
    /lib64/ld-linux-x86-64.so.2 (0x00007fe49ea30000)

And this is what it looks like without the container:

$ ldd ./mybinary
    linux-vdso.so.1 (0x00007ffd5d7b7000)
    libcrypto.so.3 => /lib/x86_64-linux-gnu/libcrypto.so.3 (0x00007fe85564c000)
    libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fe85562c000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fe855545000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fe85531d000)
    /lib64/ld-linux-x86-64.so.2 (0x00007fe855f98000)
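If you'd rather bake this into a reusable image instead of running a one-off container, a minimal Dockerfile along these lines should also work (names and paths are just examples):

FROM rust:1.67-buster
WORKDIR /usr/src/myapp
COPY . .
RUN cargo build --release

You can then copy the binary out with docker create followed by docker cp, or use this as the first stage of a multi-stage build.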

Upvotes: 10
