Reputation: 21
I'm encountering an issue when trying to load a TorchScript model in my C++ application using LibTorch. The model loads and works fine in debug mode, but I get an exception when switching to release mode. Here is a minimal reproducible example of the code:
#include <torch/script.h> // One-stop header for loading TorchScript modules
#include <iostream>

int main() {
    try {
        std::string model_path = "path/to/your/model.pt"; // make sure this path is correct
        torch::jit::script::Module model;
        model = torch::jit::load(model_path); // This is where the error occurs
        std::cout << "Model loaded successfully!" << std::endl;
    } catch (const c10::Error& e) {
        std::cerr << "Error loading the model: " << e.what() << std::endl;
        return -1;
    }
    return 0;
}
This code is supposed to load a PyTorch model via torch::jit::load(model_path). However, in release mode I encounter the following error:

Unhandled exception at 0x00007FFA25F5B699 in myApp.exe: Microsoft C++ exception: c10::Error at memory location 0x000000D4C29DE3F0.
I am using LibTorch (Stable 2.5.1, Windows, C++/Java, CPU) with Visual Studio on Windows 10 Pro.
Sanitized stack trace:
KernelBase.dll!00007ffcc7e7b699() Unknown
00007ffcc7e7b699()
00007ffc3a79bbf1()
c10::detail::torchCheckFail(const char *, const char *, unsigned int, const std::string &)
caffe2::serialize::FileAdapter::RAIIFile::RAIIFile(const std::string &)
caffe2::serialize::FileAdapter::FileAdapter(const std::string &)
std::make_unique<caffe2::serialize::FileAdapter,std::string const &,0>(const std::string &)
caffe2::serialize::PyTorchStreamReader::PyTorchStreamReader(const std::string &)
std::make_unique<caffe2::serialize::PyTorchStreamReader,std::string const &,0>(const std::string &)
torch::jit::import_ir_module(std::shared_ptr<torch::jit::CompilationUnit>, const std::string &, std::optional<c10::Device>, std::unordered_map<std::string,std::string,std::hash<std::string>,std::equal_to<std::string>,std::allocator<std::pair<std::string const ,std::string>>> &, bool, bool)
torch::jit::import_ir_module(std::shared_ptr<torch::jit::CompilationUnit>, const std::string &, std::optional<c10::Device>, bool)
torch::jit::load(const std::string &, std::optional<c10::Device>, bool)
processImage(int, const std::string &, const std::string &, const std::string &)
main()
The model was provided to me by another company, and I suspect it was built in debug mode. Since I cannot modify or rebuild the model, I need to determine whether this debug-vs-release discrepancy is causing the issue. My task is to check the model's compatibility with our company's application in both debug and release modes, and I want to be sure before asking them for a release build.
I have checked that the model file path is correct and accessible, that there are no linker errors, and that all necessary libraries are correctly linked in both the debug and release configurations. I have also verified the file access permissions.
Upvotes: 2
Views: 76
Reputation: 3321
I have noticed that this document says:
On Windows, debug and release builds are not ABI-compatible. If you plan to build your project in debug mode, please try the debug version of LibTorch. Also, make sure you specify the correct configuration in the cmake --build . line above.
Based on my understanding, if you plan to build your project in debug mode, you should use the debug version of LibTorch; for release mode, you should use the release version of LibTorch.
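For example, with the CMake workflow that the quoted document refers to, the configuration must be passed explicitly at build time on multi-config generators such as Visual Studio. A sketch (the LibTorch path is a placeholder; point it at the unzipped release distribution):

```shell
# Configure, pointing CMake at the unzipped Release LibTorch (placeholder path)
cmake -DCMAKE_PREFIX_PATH=C:/libtorch-release ..

# With Visual Studio's multi-config generator, the configuration
# is selected here, not at configure time
cmake --build . --config Release
```

Building with --config Debug against the release LibTorch (or vice versa) is exactly the mismatch the documentation warns about.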
Try downloading the Release version:
Download here (Release version):
https://download.pytorch.org/libtorch/nightly/cu118/libtorch-win-shared-with-deps-latest.zip
Download here (Debug version):
https://download.pytorch.org/libtorch/nightly/cu118/libtorch-win-shared-with-deps-debug-latest.zip
Upvotes: 0