Reputation: 1176
I'm trying to build a program using cmake. For several reasons, the program must be built using static libraries rather than dynamic libraries, and I need to use PyTorch, so this is what I've done:

1. Placed the static library libtorch.a in the proper path, /home/me/pytorch/torch/lib.
2. Wrote CMakeLists.txt with the following contents:

cmake_minimum_required(VERSION 3.5.1 FATAL_ERROR)
project(example-app LANGUAGES CXX)
find_package(Torch REQUIRED)
add_executable(example-app example-app.cpp argparse/argparse.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}" -static -fopenmp)
set_property(TARGET example-app PROPERTY CXX_STANDARD 14)
FYI, example-app.cpp is the file with the main function, and argparse/ is a directory with some source code for functions called in example-app.cpp.
It works up to cmake -DCMAKE_PREFIX_PATH=/home/me/pytorch/torch .., but the following build step fails with errors saying it cannot find references to some functions, namely functions starting with fbgemm::. fbgemm is (as far as I know) some sort of GEMM library used in implementing PyTorch.
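For reference, the failing sequence is roughly the following (the exact build command depends on the generator; cmake --build . here is only an assumption about how the build is driven):

$ mkdir build && cd build
$ cmake -DCMAKE_PREFIX_PATH=/home/me/pytorch/torch ..   # configures fine
$ cmake --build .                                       # fails with undefined references to fbgemm::...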
It seems to me that while linking the static PyTorch library, its internal dependencies such as fbgemm have not been linked properly, but I'm not an expert on cmake and honestly not entirely sure.
Am I doing something wrong, or is there a workaround for this problem? Any help or push in the right direction would be greatly appreciated.
P.S. The exact error has not been posted because it is way too long, but it consists mostly of undefined reference to ~ errors. If looking at the error message might be helpful to some people, I'd be happy to edit the question and post it.
Building and running the file works fine if I remove the parts that require the library's functions from the code, without commenting out #include <torch/torch.h> from example-app.cpp.
Upvotes: 6
Views: 5819
Reputation: 24815
I lately went through a similar process of statically linking PyTorch and, to be honest, it wasn't too pretty.
I will outline the steps I have undertaken (you can find the exact source code in torchlambda: here is the CMakeLists.txt (it also includes AWS SDK and AWS Lambda static builds), and here is a script building pytorch from source by cloning it and building via /scripts/build_mobile.sh). It is CPU-only, though similar steps should be fine if you need CUDA; it will get you started at least.
First of all, you need pre-built static library files (all of them need to be static, hence no .so; only files with the .a extension are suitable).
To be honest, I've been looking for those on PyTorch's installation page, yet only the shared version is provided there.
In one GitHub issue I found a way to download them as follows: instead of downloading (here via wget) the shared libraries:

$ wget https://download.pytorch.org/libtorch/cu101/libtorch-shared-with-deps-1.4.0.zip

you rename shared to static (as described in this issue), so it would become:

$ wget https://download.pytorch.org/libtorch/cu101/libtorch-static-with-deps-1.4.0.zip
Yet, when you download it there is no libtorch.a under the lib folder (I didn't find libcaffe2.a either, as indicated by this issue), so what I was left with was building explicitly from source.
If you somehow have those files (if so, please share where you got them from), you can skip the next step.
For the CPU version I used the /pytorch/scripts/build_mobile.sh file; you can base your version off of it if GPU support is needed (maybe you only have to pass -DUSE_CUDA=ON to this script, though I'm not sure).
What matters most is cmake's -DBUILD_SHARED_LIBS=OFF, so that everything is built as static libraries. You can also check the script from my tool, which passes arguments to build_mobile.sh as well.
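As a rough sketch (the clone location and the assumption that no extra flags are needed for a CPU-only static build are mine; check the script itself if in doubt):

$ git clone --recursive https://github.com/pytorch/pytorch
$ cd pytorch
$ ./scripts/build_mobile.sh   # CPU-only build of libtorch as static (.a) libraries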
Running the above will give you the static files in /pytorch/build_mobile/install by default, where there is everything you need.
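For example, you should end up with roughly the usual install layout (the exact contents are an assumption and may differ between PyTorch versions):

$ ls /pytorch/build_mobile/install
include  lib  share   # lib/ holds the .a archives, share/cmake/Torch/ the config used by find_package(Torch)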
Now you can copy those build files to /usr/local (better not to, unless you are using Docker as torchlambda does) or set the path to them from within your CMakeLists.txt like this:
set(LIBTORCH "/path/to/pytorch/build_mobile/install")
# Below will append libtorch to path so CMake can see files
set(CMAKE_PREFIX_PATH "${CMAKE_PREFIX_PATH};${LIBTORCH}")
Now the rest is fine, except for target_link_libraries, which (as indicated by this issue; see the related issues listed there for additional reference) should be used with the -Wl,--whole-archive linker flag. This forces the linker to keep all object files from the static archives instead of only the ones it considers referenced, and it brought me to this:
target_link_libraries(example-app PRIVATE -lm
-Wl,--whole-archive "${TORCH_LIBRARIES}"
-Wl,--no-whole-archive
-lpthread
${CMAKE_DL_LIBS})
You may not need -lm, -lpthread or ${CMAKE_DL_LIBS}, though I needed them when building on Amazon Linux AMI.
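Putting the pieces above together, a minimal sketch of the whole CMakeLists.txt could look like this (the LIBTORCH path is a placeholder you have to adjust):

cmake_minimum_required(VERSION 3.5.1 FATAL_ERROR)
project(example-app LANGUAGES CXX)

# Point CMake at the static libtorch install produced by build_mobile.sh
set(LIBTORCH "/path/to/pytorch/build_mobile/install")
set(CMAKE_PREFIX_PATH "${CMAKE_PREFIX_PATH};${LIBTORCH}")

find_package(Torch REQUIRED)

add_executable(example-app example-app.cpp argparse/argparse.cpp)

# --whole-archive keeps every object file from the Torch archives so that
# symbols pulled in only via registration (e.g. fbgemm kernels) are not dropped
target_link_libraries(example-app PRIVATE -lm
    -Wl,--whole-archive "${TORCH_LIBRARIES}"
    -Wl,--no-whole-archive
    -lpthread
    ${CMAKE_DL_LIBS})

set_property(TARGET example-app PROPERTY CXX_STANDARD 14)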
Now you are off to building your application. The standard libtorch way should be fine, but here is another set of commands I used:
mkdir build && \
cd build && \
cmake .. && \
cmake --build . --config Release
The above will create a build folder where the example-app binary should now be safely located.
Finally, use ldd build/example-app to verify that everything from PyTorch was statically linked (see point 5 of the aforementioned issue); your output should look similar.
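For example (the grep pattern is only an assumption about the relevant library names), you could check that no PyTorch shared objects show up as runtime dependencies:

$ ldd build/example-app | grep -iE "torch|c10|fbgemm"
# no output means none of the PyTorch pieces are dynamically linked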
Upvotes: 7