Reputation: 4031
I'm trying to run inference on an Intel Neural Compute Stick 2 (Myriad X chip) connected to a Raspberry Pi 4B, using ONNX Runtime with OpenVINO. I have everything set up: the OpenVINO provider is recognized by ONNX Runtime and I can see the Myriad in the list of available devices.
However, I always get some kind of memory corruption when trying to run inference on the Myriad.
I'm not sure where this is coming from. If I use the default CPU inference instead of OpenVINO, everything works fine. Maybe the way I'm creating the Ort::MemoryInfo object is incorrect.
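The two MemoryInfo variants in question are the commented-out CreateCpu call and the manually constructed OrtMemoryInfo in the full listing below. For the default CPU provider the tensor setup would look roughly like this (same headers and constants as in the full code):

// CPU-backed memory info: the variant I would use with the default CPU provider.
const auto memoryInfo = Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPUInput);

// Input shape and batch buffer (filled from the MNIST pixel data in the full code).
const std::array<int64_t, 4> inputShape{batchSize, 28, 28, 1};
std::array<float, batchSize*28*28> batch{};

// Wrap the existing buffer in an Ort::Value without copying.
Ort::Value input = Ort::Value::CreateTensor<float>(memoryInfo, batch.data(), batch.size(),
                                                   inputShape.data(), inputShape.size());

For the OpenVINO run, the only difference is that memoryInfo is built from a manually constructed OrtMemoryInfo("OpenVINO", OrtDeviceAllocator), which is what I suspect is wrong.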
Output:
Available execution providers:
CPUExecutionProvider
OpenVINOExecutionProvider
Available OpenVINO devices:
MYRIAD
Starting Session
[...]
2020-12-11 13:43:13.962093843 [I:onnxruntime:, openvino_execution_provider.h:124 OpenVINOExecutionProviderInfo] [OpenVINO-EP]Choosing Device: MYRIAD , Precision: FP16
[...]
2020-12-11 13:43:13.972813082 [I:onnxruntime:, capability_2021_1.cc:854 GetCapability_2021_1] [OpenVINO-EP] Model is fully supported by OpenVINO
[...]
Loading data
Running Inference
2020-12-11 13:43:21.838737814 [I:onnxruntime:, sequential_executor.cc:157 Execute] Begin execution
2020-12-11 13:43:21.838892108 [I:onnxruntime:, backend_manager.cc:253 Compute] [OpenVINO-EP] Creating concrete backend for key: MYRIAD|50,28,28,1,|10,|84,10,|84,|120,84,|6,1,5,5,|16,|6,|400,120,|16,6,5,5,|120,|
2020-12-11 13:43:21.838926959 [I:onnxruntime:, backend_manager.cc:255 Compute] [OpenVINO-EP] Backend created for graph OpenVINOExecutionProvider_OpenVINO-EP-subgraph_1_0
2020-12-11 13:43:21.845913973 [I:onnxruntime:, backend_utils.cc:65 CreateCNNNetwork] ONNX Import Done
malloc(): unsorted double linked list corrupted
Aborted
Here is the full code I'm using:
#include <iostream>
#include <iomanip>
#include <chrono>
#include <array>
#include <cmath>
#include <MNIST-Loader/MNIST.h>
#include <onnxruntime_cxx_api.h>
#include <core/framework/allocator.h>
#include <ie_core.hpp> //openvino inference_engine
int main()
{
    constexpr const char* modelPath = "/home/pi/data/lenet_mnist.onnx";
    constexpr const char* mnistPath = "/home/pi/data/mnist/";
    constexpr size_t batchSize = 50;

    std::cout << "Available execution providers:\n";
    for(const auto& s : Ort::GetAvailableProviders()) std::cout << '\t' << s << '\n';

    std::cout << "Available OpenVINO devices:\n";
    { // new scope so the core gets destroyed when leaving
        InferenceEngine::Core ieCore;
        for(const auto& d : ieCore.GetAvailableDevices()) std::cout << '\t' << d << '\n';
    }

    // ----------- create session -----------
    std::cout << "Starting Session\n";
    Ort::Env env(ORT_LOGGING_LEVEL_INFO);

    OrtOpenVINOProviderOptions ovOptions;
    ovOptions.device_type = "MYRIAD_FP16";

    Ort::SessionOptions sessionOptions;
    sessionOptions.SetExecutionMode(ORT_SEQUENTIAL);
    sessionOptions.SetGraphOptimizationLevel(ORT_DISABLE_ALL);
    sessionOptions.AppendExecutionProvider_OpenVINO(ovOptions);

    Ort::Session session(env, modelPath, sessionOptions);

    // ----------- load data -----------
    std::cout << "Loading data\n";
    MNIST data(mnistPath);
    const std::array<int64_t, 4> inputShape{batchSize, 28, 28, 1};

    //const auto memoryInfo = Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPUInput);
    auto openvinoMemInfo = new OrtMemoryInfo("OpenVINO", OrtDeviceAllocator);
    const Ort::MemoryInfo memoryInfo(openvinoMemInfo);

    std::array<float, batchSize*28*28> batch;
    for(size_t i = 0; i < batchSize; ++i)
    {
        const auto pixels = data.trainingData.at(i).pixelData;
        for(size_t k = 0; k < 28*28; ++k)
        {
            batch[k + (i*28*28)] = (pixels[k] == 0) ? 0.f : 1.f;
        }
    }

    const Ort::Value inputValues[] = {Ort::Value::CreateTensor<float>(memoryInfo, batch.data(), batch.size(), inputShape.data(), inputShape.size())};

    // ----------- run inference -----------
    std::cout << "Running Inference\n";
    Ort::RunOptions runOptions;
    Ort::AllocatorWithDefaultOptions alloc;
    const char* inputNames [] = { session.GetInputName (0, alloc) };
    const char* outputNames[] = { session.GetOutputName(0, alloc) };

    const auto start = std::chrono::steady_clock::now();
    auto results = session.Run(runOptions, inputNames, inputValues, 1, outputNames, 1);
    const auto end = std::chrono::steady_clock::now();
    std::cout << "\nRuntime: " << std::chrono::duration_cast<std::chrono::milliseconds>(end-start).count() << "ms\n";

    // ----------- print results -----------
    std::cout << "Results:" << std::endl;
    for(Ort::Value& r : results)
    {
        const auto dims = r.GetTensorTypeAndShapeInfo().GetShape();
        for(size_t i = 0; i < dims[0]; ++i)
        {
            std::cout << "Label: " << data.trainingData.at(i).label << "\tprediction: [ " << std::fixed << std::setprecision(3);
            for(size_t k = 0; k < dims[1]; ++k) std::cout << r.At<float>({i, k}) << ' ';
            std::cout << "]\n";
        }
    }
    std::cout.flush();
}
Upvotes: 0
Views: 1577
Reputation: 1413
This component (the OpenVINO Execution Provider) is not part of the OpenVINO toolkit, so we ask that you post such questions on the ONNX Runtime GitHub; this helps us track issues with the OpenVINO Execution Provider separately from issues with the main OpenVINO toolkit.
We have opened an issue on GitHub on your behalf, and a reply should follow soon in that thread: https://github.com/microsoft/onnxruntime/issues/6304
Upvotes: -1