Reputation: 11
I recently trained a YOLOv8 nano model on 1.5k images of size 640x640, and I decided to run it live with this code:
from ultralytics import YOLO

# Load the trained weights and run live inference on the default webcam
model = YOLO("Model/best.pt")
model.predict(source="0", show=True, conf=0.4)
The model predicts the objects accurately, but it is very slow: it barely achieves 2 FPS, and I have to hold an object in front of the camera for a long time before it is detected.
My PC specs are: i9-13950H (24 cores, up to 5.6 GHz), 32 GB of RAM, Windows 11 Home.
I do not have CUDA support for detection, but my CPU and RAM are barely used. Is there a way to reach at least 30-60 FPS on CPU?
Please drop comments and ideas, you are welcome!
I also tried reading frames with OpenCV and saving each frame as an image before running inference on it, but that was even slower. Please help!
Upvotes: 0
Views: 2240
Reputation: 1
Convert the model to ONNX format using this code:
jupyter notebook:
! yolo export model=path_to_model.pt format=onnx
terminal:
yolo export model=path_to_model.pt format=onnx
You will get another model in the same folder as the original, but with a .onnx extension. ONNX Runtime inference is generally faster than PyTorch on CPU.
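As a sketch of the full workflow (assuming the `ultralytics` package is installed and using the `Model/best.pt` path from the question), you can also run the export from Python and then load the resulting `.onnx` file directly with the same `YOLO` class, so the prediction call stays unchanged:

```python
from pathlib import Path


def exported_onnx_path(pt_path: str) -> Path:
    """Return the path where the ONNX export lands: same folder,
    same name, .onnx extension instead of .pt."""
    return Path(pt_path).with_suffix(".onnx")


def export_and_run(pt_path: str = "Model/best.pt") -> None:
    # Imported here so the path helper above stays importable
    # without ultralytics installed.
    from ultralytics import YOLO

    model = YOLO(pt_path)
    model.export(format="onnx")  # writes best.onnx next to best.pt

    # Load the ONNX model and run the same webcam prediction as before
    onnx_model = YOLO(str(exported_onnx_path(pt_path)))
    onnx_model.predict(source="0", show=True, conf=0.4)


if __name__ == "__main__":
    export_and_run()
```

Note that `YOLO(...)` accepts the `.onnx` file directly, so no other code in the question needs to change.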
Upvotes: 0