Reputation: 11
Traceback (most recent call last):
  File "yolov4.py", line 342, in <module>
    main()
  File "yolov4.py", line 286, in main
    sess = rt.InferenceSession(args.model, so, providers=['OpenVINOExecutionProvider'], provider_options=[{'device_type' : device}])
  File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 335, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 370, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from yolov4.onnx failed:Protobuf parsing failed.
I am trying to test my Docker container with the GitHub repository: YOLOv4 onnx inference example
The model I am using is from: YOLOv4 Model ONNX
I am using XLaunch on Windows to provide display functionality for the Docker container. What could be causing the protobuf error?
In addition, is there another way to display GUI applications from a Docker container?
Thank you.
I have tried changing the protobuf version, thinking that might fix it. I also restarted the whole project from scratch in case I had downloaded something incorrectly, but as far as I can tell everything is configured properly after multiple tries. I am not sure whether XLaunch could be interfering.
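One thing I have been checking is whether the `.onnx` file on disk is the real model at all, since an `INVALID_PROTOBUF` error can mean the file is actually something else (for example a Git LFS pointer file or an HTML error page saved by a failed download). This is just a rough sanity-check sketch; the filename `yolov4.onnx` and the size threshold are my assumptions:

```python
import os

def looks_like_lfs_pointer(path):
    """Return True if the file is a Git LFS pointer instead of real model bytes."""
    with open(path, "rb") as f:
        head = f.read(64)
    # LFS pointer files are tiny text files starting with this line.
    return head.startswith(b"version https://git-lfs")

def check_model_file(path):
    """Report obvious reasons a .onnx file might fail protobuf parsing."""
    size = os.path.getsize(path)
    if looks_like_lfs_pointer(path):
        return "git-lfs pointer -- run 'git lfs pull' to fetch the real model"
    if size < 1_000_000:  # the YOLOv4 ONNX model is on the order of hundreds of MB
        return f"suspiciously small ({size} bytes) -- likely an incomplete download"
    return "file size looks plausible"

if __name__ == "__main__":
    print(check_model_file("yolov4.onnx"))  # assumed path to the model
```

If the size looks right, `onnx.load()` followed by `onnx.checker.check_model()` from the `onnx` package would be the next step to confirm the file parses outside of onnxruntime, but the quick byte check above already catches the most common download problems.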
Upvotes: 1
Views: 289