Josh Brown Kramer

Reputation: 293

How can I get YOLOv4 inference times with OpenVINO that are as fast as OpenCV?

If I run a YOLOv4 model with leaky ReLU activations on my CPU with 256x256 RGB images in OpenCV with an OpenVINO backend, inference time plus non-max suppression is about 80 ms. If, on the other hand, I convert my model to an IR following https://github.com/TNTWEN/OpenVINO-YOLOV4 (which is linked from https://github.com/AlexeyAB/darknet), inference time using the OpenVINO inference engine directly is roughly 130 ms. That figure does not even include non-max suppression, which is quite slow when implemented naively in Python.
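For reference, the OpenCV path I am timing looks roughly like this (a minimal sketch; the .cfg/.weights file names and the 0.25/0.45 thresholds are just placeholders):

```python
import cv2

# Load the darknet model; file names here are placeholders.
net = cv2.dnn.readNetFromDarknet("yolov4-leaky.cfg", "yolov4-leaky.weights")

# Ask OpenCV to run inference through the OpenVINO (Inference Engine) backend on CPU.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

image = cv2.imread("frame.jpg")

# 256x256 input, scaled to [0, 1], channels swapped from BGR to RGB.
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (256, 256), swapRB=True, crop=False)
net.setInput(blob)

# Forward pass through all YOLO output layers.
outputs = net.forward(net.getUnconnectedOutLayersNames())

# Collect candidate boxes and run OpenCV's NMS; the ~80 ms figure includes this step.
boxes, confidences = [], []
for out in outputs:
    for det in out:
        conf = float(det[5:].max())
        if conf > 0.25:
            cx, cy, w, h = det[:4]  # normalized box center and size
            boxes.append([int((cx - w / 2) * 256), int((cy - h / 2) * 256),
                          int(w * 256), int(h * 256)])
            confidences.append(conf)
indices = cv2.dnn.NMSBoxes(boxes, confidences, 0.25, 0.45)
```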

Unfortunately, OpenCV does not offer all of the control I would like for the models and inference schemes I want to try (e.g. changing the batch size, importing models from YOLO repositories other than darknet, etc.).

What is the magic that allows OpenCV with OpenVINO backend to be so much faster?

Upvotes: 3

Views: 1012

Answers (1)

Rommel_Intel

Reputation: 1413

Inference performance is application-dependent and subject to many variables such as model size, model architecture, processor, etc.

This benchmark result shows the performance of yolo-v4-tf on a range of Intel® CPUs, GPUs, and VPUs.

For example, running yolo-v4-tf on an 11th Gen Intel® Core™ i7-11850HE @ 2.60 GHz CPU gives an inference time of 80.4 ms.

yolo-v4-tf and yolo-v4-tiny-tf are public pre-trained models that you can use for learning and demo purposes or for developing deep learning software. You may download these models using Model Downloader.
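For instance, after fetching and converting the model with Model Downloader (e.g. `omz_downloader --name yolo-v4-tf` followed by `omz_converter --name yolo-v4-tf` in recent openvino-dev releases), a minimal sketch of timing raw inference with the OpenVINO Runtime Python API (2022.x naming; the IR path and input shape below are placeholders) could look like this:

```python
import time
import numpy as np
from openvino.runtime import Core  # OpenVINO 2022.x Python API

core = Core()
# Path to the IR produced by omz_downloader / omz_converter (placeholder path).
model = core.read_model("public/yolo-v4-tf/FP16/yolo-v4-tf.xml")
compiled = core.compile_model(model, "CPU")

# Dummy input matching the model's expected layout (placeholder shape).
input_tensor = np.random.rand(1, 608, 608, 3).astype(np.float32)

# Warm up once, then average over a number of runs.
compiled([input_tensor])
runs = 50
start = time.perf_counter()
for _ in range(runs):
    results = compiled([input_tensor])
elapsed_ms = (time.perf_counter() - start) / runs * 1000
print(f"average inference time: {elapsed_ms:.1f} ms")
```

The published numbers are typically measured with the benchmark_app tool that ships with OpenVINO, which is also a convenient way to get a like-for-like comparison on your own hardware.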

Upvotes: 2
