Reputation: 382
I am trying to run model inference in C++.
I successfully traced the model in Python with torch.jit.trace.
I am able to load the model in C++ using torch::jit::load().
I was able to perform inference both on CPU and GPU; however, the starting point was always the torch::from_blob method, which seems to create a CPU-side tensor.
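For context, this is roughly my current pipeline; a minimal sketch, assuming a traced module and a float image in [0, 1] (the paths "traced_model.pt" and "input.png" are placeholders):

#include <torch/script.h>
#include <opencv2/opencv.hpp>

int main() {
    // Load the traced module and move it to the GPU.
    torch::jit::script::Module module = torch::jit::load("traced_model.pt");
    module.to(torch::kCUDA);

    // Read the image and convert it to float32 in [0, 1].
    cv::Mat image = cv::imread("input.png");
    image.convertTo(image, CV_32FC3, 1.0f / 255.0f);

    // from_blob wraps the host memory, so the tensor starts on the CPU ...
    torch::Tensor input = torch::from_blob(
        image.data, {1, image.rows, image.cols, 3}, torch::kFloat32);
    // ... and has to be reordered to NCHW and copied to the GPU afterwards.
    input = input.permute({0, 3, 1, 2}).to(torch::kCUDA);

    auto output = module.forward({input}).toTensor();
}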
For efficiency, I would like to cast/copy a cv::cuda::GpuMat directly to a CUDA tensor. I have been digging through the PyTorch tests and docs in search of a convenient example, but was unable to find one.
Question: how do I create a CUDA tensor from a cv::cuda::GpuMat?
Upvotes: 1
Views: 1922
Reputation: 1072
Here is an example:
#include <torch/torch.h>
#include <opencv2/core/cuda.hpp>
#include <iostream>

// Define the deleter: a no-op, so the tensor does not own the memory
// and gImage must stay alive for as long as the tensor is used.
void deleter(void* arg) {}

// Your convert function
cv::cuda::GpuMat gImage;
// build or load your image here (assumed CV_32F, i.e. float elements) ...
std::vector<int64_t> sizes = {1, static_cast<int64_t>(gImage.channels()),
                              static_cast<int64_t>(gImage.rows),
                              static_cast<int64_t>(gImage.cols)};
// GpuMat rows may be padded: step is the row pitch in bytes,
// converted here into a stride counted in float elements.
int64_t step = gImage.step / sizeof(float);
// These strides view the packed HWC GpuMat data as an NCHW tensor.
std::vector<int64_t> strides = {1, 1, step, static_cast<int64_t>(gImage.channels())};
auto tensor_image = torch::from_blob(gImage.data, sizes, strides, deleter, torch::kCUDA);
std::cout << "output tensor image : " << tensor_image << std::endl;
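Because the deleter above is a no-op, tensor_image only borrows the GpuMat's device memory. If the tensor has to outlive gImage, one option is to clone it, which makes a device-side deep copy into storage the tensor owns:

// Deep copy on the GPU; stays valid after gImage is released.
auto owned_tensor = tensor_image.clone();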
Upvotes: 4