Varyag

Reputation: 696

How to use OpenCV with camera on Jetson Nano with Yocto/poky

I've created a minimal Xfce image with Yocto/poky on a Jetson Nano using the warrior branches (poky warrior, meta-tegra warrior-l4t-r32.2, openembedded warrior) and CUDA 10.

The image boots and runs perfectly, and the camera test:

$ gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3820, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e

works like a charm.

Now I would like to use OpenCV on the camera feed, but I can't get it to work.

I've added these packages to IMAGE_INSTALL:

...
opencv \
libopencv-core \
libopencv-imgproc \
opencv-samples \
gstreamer1.0-omx-tegra \
python3 \
python3-modules \
python3-dev \
python-numpy \
...

That gets OpenCV installed: /usr/bin/opencv_version reports 3.4.5, the Python version is 3.7.2, and the GCC version is 7.2.1.
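Since it's the GStreamer backend that matters here, it's worth confirming that this OpenCV build actually has GStreamer support compiled in. A quick check from a Python shell on the target (a sketch; look for a "GStreamer: YES" line in the output):

import cv2

# Print the OpenCV version and the GStreamer line(s) from the build report.
print(cv2.__version__)
for line in cv2.getBuildInformation().splitlines():
    if 'GStreamer' in line:
        print(line)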

When I try to run this OpenCV test code, it returns:

[ WARN:0] VIDEOIO(createGStreamerCapture(filename)): trying ...

(python3.7:5163): GStreamer-CRITICAL **: ..._: gst_element_get_state: assertion 'GST_IS_ELEMENT (element)' failed
[ WARN:0] VIDEOIO(createGStreamerCapture(filename)): result=(nil) isOpened=-1 ...

Unable to open camera
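The relevant part of the test boils down to opening a GStreamer pipeline string with the CAP_GSTREAMER backend, along these lines (a sketch, not the exact linked script; the pipeline string is the one from the script, quoted in full in the answer below):

import cv2

# Pipeline from the test script: capture NV12 from the CSI camera, scale,
# convert to BGR, and hand frames to OpenCV through appsink.
pipeline = ('nvarguscamerasrc ! '
            'video/x-raw(memory:NVMM), width=3280, height=2464, framerate=21/1, format=NV12 ! '
            'nvvidconv flip-method=0 ! '
            'video/x-raw, width=820, height=616, format=BGRx ! '
            'videoconvert ! video/x-raw, format=BGR ! appsink')

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    print('Unable to open camera')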

I've tried looking around online for solutions but they don't seem to work.

EDIT: There does appear to be a problem with using CAP_GSTREAMER in the VideoCapture function: running the same program with CAP_FFMPEG instead works just fine on an mp4 video.
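That is (the file name here is just a placeholder):

import cv2

# With the FFmpeg backend, an ordinary video file opens and reads fine.
cap = cv2.VideoCapture('some_video.mp4', cv2.CAP_FFMPEG)
print(cap.isOpened())  # True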

Using cv2.VideoCapture("/dev/video0", cv2.CAP_FFMPEG) just returns with isOpened=-1, though. How do I get the camera to open in Python?

Upvotes: 2

Views: 5527

Answers (2)

Anchal Gupta

Reputation: 337

Use the following GStreamer pipeline:

import cv2

stream = ('nvarguscamerasrc ! '
          'video/x-raw(memory:NVMM), width=%d, height=%d, format=(string)NV12, framerate=(fraction)%d/1 ! '
          'nvvidconv flip-method=%d ! nvvidconv ! '
          'video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! '
          'videoconvert ! appsink' % (1280, 720, 30, 0, 640, 480))

cap = cv2.VideoCapture(stream, cv2.CAP_GSTREAMER)
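Once the capture opens, frames come out as ordinary BGR numpy arrays, so the usual read loop applies (a sketch; cv2.imshow assumes a display is available):

while True:
    ret, frame = cap.read()  # frame is a BGR numpy array
    if not ret:
        break
    cv2.imshow('camera', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()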

This should solve the problem.

Upvotes: 0

seldak

Reputation: 291

This is the pipeline that you said works for you:

gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3820, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e

This is the pipeline that is mentioned in the script:

gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3280, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw, width=820, height=616, format=BGRx' ! videoconvert ! video/x-raw, format=BGR ! appsink

The difference between the working and non-working pipelines is the addition of videoconvert and appsink. The error

GStreamer-CRITICAL **: ..._: gst_element_get_state: assertion 'GST_IS_ELEMENT (element)' failed

indicates that some GStreamer element is missing from your system. You can try adding the missing plugins by adding the following package group to your image:

gstreamer1.0-plugins-base
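In Yocto terms that means appending it to your image, in the same style as the IMAGE_INSTALL list in the question (a sketch):

IMAGE_INSTALL += " \
    gstreamer1.0-plugins-base \
"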

Alternatively, you can replace the pipeline in face_detect.py with your working pipeline, but keep in mind that the script probably needs the video converted to BGR before it is fed to appsink for the algorithm to work; see the sketch below. You might need to check the documentation for the nvvidconv element to see whether this is supported.
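A sketch of what that might look like, using the values from your working pipeline and swapping the display sink for the BGR conversion and appsink (width=3820 is copied verbatim from your command; the script's pipeline uses 3280):

import cv2

# Asker's working pipeline, re-terminated for OpenCV: nvvidconv flips, scales,
# and converts to BGRx; videoconvert produces the BGR that appsink hands to OpenCV.
pipeline = ('nvarguscamerasrc ! '
            'video/x-raw(memory:NVMM), width=3820, height=2464, framerate=21/1, format=NV12 ! '
            'nvvidconv flip-method=0 ! '
            'video/x-raw, width=960, height=616, format=BGRx ! '
            'videoconvert ! video/x-raw, format=BGR ! appsink')

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
print(cap.isOpened())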

EDIT: Judging by your comment, you may have been missing gstreamer1.0-python as well.

Upvotes: 2
