Yunus Temurlenk

Reputation: 4367

OpenCV: how to read a webcam stream on the GPU?

I can use the VideoReader class of OpenCV to decode an IP camera stream, or any video file, given its path. The decoding runs on the GPU as expected, so no problem so far. Here is a simple program that works fine and uses the GPU for decoding:

#include <string>
#include <opencv2/core.hpp>
#include <opencv2/cudacodec.hpp>
#include <opencv2/highgui.hpp>

int main()
{
    const std::string fname("rtsp://user:password@<camera-ip>"); // credentials and IP are placeholders
    // const std::string fname("/path/to/video/file.mp4"); // this also works
    cv::cuda::GpuMat d_frame;
    cv::Ptr<cv::cudacodec::VideoReader> d_reader = cv::cudacodec::createVideoReader(fname);

    for (;;)
    {
        if (!d_reader->nextFrame(d_frame))
            break;
        cv::Mat myMat;
        d_frame.download(myMat); // copy the decoded frame back to host memory for display
        cv::imshow("GPU", myMat);

        if (cv::waitKey(3) > 0)
            break;
    }

    return 0;
}

I want to use the GPU to capture the stream from my webcam, like VideoCapture(0) does. I know, as @berak mentioned here, that there is no way to do that with VideoCapture.
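For context, the only workaround I know of is to grab frames on the CPU and upload them to the GPU for processing. A minimal sketch (assuming OpenCV was built with the cudaimgproc module, and camera index 0):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/cudaimgproc.hpp>
#include <opencv2/highgui.hpp>

int main()
{
    cv::VideoCapture cap(0);   // capture and decode still happen on the CPU
    if (!cap.isOpened())
        return -1;

    cv::Mat frame, result;
    cv::cuda::GpuMat d_frame, d_gray;
    for (;;)
    {
        if (!cap.read(frame))
            break;
        d_frame.upload(frame);                                    // host -> device copy
        cv::cuda::cvtColor(d_frame, d_gray, cv::COLOR_BGR2GRAY);  // processing on the GPU
        d_gray.download(result);                                  // device -> host for display
        cv::imshow("processed", result);
        if (cv::waitKey(3) > 0)
            break;
    }
    return 0;
}
```

This only moves the processing to the GPU; the capture/decode itself is still CPU-side, which is exactly what I would like to avoid.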

My questions are:

1 - Is it possible to stream from the webcam using the GPU with the VideoReader class? It seems VideoReader only accepts strings (paths or URLs), not device indexes.

2 - What other ways are there to stream using the GPU?

Upvotes: 3

Views: 4016

Answers (1)

Alex

Reputation: 36

1) Yes, it seems so! I found the following code in the OpenCV GPU samples here. You could give it a try. You need to have OpenCV built with OpenGL support, though; currently that's where I'm stuck.
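To check whether your build has what the sample needs, you can inspect the build configuration at runtime; a small sketch (the exact strings to look for in the output are "OpenGL: YES" and the CUDA section):

```cpp
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/core/cuda.hpp>

int main()
{
    // Number of CUDA devices OpenCV can use (0 if built without CUDA support)
    std::cout << "CUDA devices: " << cv::cuda::getCudaEnabledDeviceCount() << "\n";
    // Full build configuration; search this output for the OpenGL and CUDA entries
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}
```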

2) I'm not sure about other options, but here is the code from GitHub.

#include <iostream>

#include "opencv2/opencv_modules.hpp"

#if defined(HAVE_OPENCV_CUDACODEC)

#include <string>
#include <vector>
#include <algorithm>
#include <numeric>

#include <opencv2/core.hpp>
#include <opencv2/core/opengl.hpp>
#include <opencv2/cudacodec.hpp>
#include <opencv2/highgui.hpp>

int main(int argc, const char* argv[])
{
    std::cout << "Starting...\n";
    const std::string fname = "0";
    
    cv::namedWindow("CPU", cv::WINDOW_NORMAL);
    cv::namedWindow("GPU", cv::WINDOW_OPENGL);
    cv::cuda::setGlDevice();

    cv::Mat frame;
    cv::VideoCapture reader(fname);

    cv::cuda::GpuMat d_frame;
    cv::Ptr<cv::cudacodec::VideoReader> d_reader = cv::cudacodec::createVideoReader(fname);

    cv::TickMeter tm;
    std::vector<double> cpu_times;
    std::vector<double> gpu_times;

    int gpu_frame_count=0, cpu_frame_count=0;

    for (;;)
    {
        tm.reset(); tm.start();
        if (!reader.read(frame))
            break;
        tm.stop();
        cpu_times.push_back(tm.getTimeMilli());
        cpu_frame_count++;

        cv::imshow("CPU", frame);

        if (cv::waitKey(3) > 0)
            break;
    }

    for (;;)
    {
        tm.reset(); tm.start();
        if (!d_reader->nextFrame(d_frame))
            break;
        tm.stop();
        gpu_times.push_back(tm.getTimeMilli());
        gpu_frame_count++;

        cv::imshow("GPU", d_frame);

        if (cv::waitKey(3) > 0)
            break;
    }

    if (!cpu_times.empty() && !gpu_times.empty())
    {
        std::cout << std::endl << "Results:" << std::endl;

        std::sort(cpu_times.begin(), cpu_times.end());
        std::sort(gpu_times.begin(), gpu_times.end());

        double cpu_avg = std::accumulate(cpu_times.begin(), cpu_times.end(), 0.0) / cpu_times.size();
        double gpu_avg = std::accumulate(gpu_times.begin(), gpu_times.end(), 0.0) / gpu_times.size();

        std::cout << "CPU : Avg : " << cpu_avg << " ms FPS : " << 1000.0 / cpu_avg << " Frames " << cpu_frame_count << std::endl;
        std::cout << "GPU : Avg : " << gpu_avg << " ms FPS : " << 1000.0 / gpu_avg << " Frames " << gpu_frame_count << std::endl;
    }

    return 0;
}

#else

int main()
{
    std::cout << "OpenCV was built without CUDA Video decoding support" << std::endl;
    return 0;
}

#endif

Upvotes: 2
