Chris

Reputation: 407

How to optimize video decoding across CPU / GPU?

I have a video processing pipeline that reads live RTSP streams and runs a CV filter over each frame. The CV filter itself only uses a small fraction of the GPU. My bottleneck seems to be the number of video feeds I can push through the GPU decoder.

Right now I am using GstVideoDecoder, which maxes out the GPU's decoder chip. This is convenient, as all my frames are in CUDA memory and run quickly through my CV filter.
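For reference, the general shape of what I'm running looks roughly like the Python sketch below. The element names (e.g. nvh264dec) and the RTSP URL are just illustrative, not my exact pipeline:

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)

    # One pipeline per camera: depay/parse on the CPU, decode on the GPU's
    # hardware decoder, so frames land in CUDA memory for the CV filter.
    pipeline = Gst.parse_launch(
        "rtspsrc location=rtsp://camera.example/stream ! "  # placeholder URL
        "rtph264depay ! h264parse ! nvh264dec ! "
        "fakesink"  # placeholder for the CV filter that consumes the frames
    )
    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()  # keep the pipeline running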

What strategies could I use to increase the number of streams I can decode? Offloading some of the decoding to the CPU? Dedicated decoder hardware?

Upvotes: 0

Views: 1026

Answers (1)

Christoph

Reputation: 1068

Normally there are several options to increase decoding throughput just by tweaking some parameters on the streams:

  1. Decrease the FPS.
  2. Decrease the bitrate.
  3. Decrease the resolution (a sketch covering points 1-3 follows this list).
  4. Use a codec that is less GPU/CPU intensive.
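For points 1-3, these parameters are usually set on the camera itself (web UI or ONVIF), and many IP cameras already expose a lower-resolution, lower-FPS "substream" next to the main stream. Decoding that substream instead of the main one directly reduces the load on the decoder chip. A rough Python/GStreamer sketch, with placeholder URLs and element names:

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)

    # Placeholder URLs: the actual paths depend on the camera vendor.
    MAIN_STREAM = "rtsp://camera.example/main"  # e.g. 1080p @ 30 fps, high bitrate
    SUB_STREAM = "rtsp://camera.example/sub"    # e.g. 720p @ 15 fps, lower bitrate

    # Same decode chain, just pointed at the cheaper substream.
    pipeline = Gst.parse_launch(
        f"rtspsrc location={SUB_STREAM} ! "
        "rtph264depay ! h264parse ! nvh264dec ! fakesink"
    )
    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()

Lower FPS and resolution mean fewer pixels per second for the hardware decoder to chew through, so the same chip can handle more streams.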

Of course, a GPU with better decoding performance will handle more streams. CPU decoding is not a very performant way to decode video, so you won't gain much there. If you have a lot of CPU cores, you could dedicate one core per stream (e.g. via multiprocessing) and maybe handle a few more streams that way (sketched below), but in my opinion it's a waste of time to go in that direction.
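If you do want to try it anyway, a rough sketch of that direction: keep as many streams as the GPU decoder can sustain on nvh264dec, and push the overflow through a software decoder (avdec_h264), one process per stream. The URLs, the stream count, and the GPU limit below are made-up placeholders:

    import multiprocessing as mp

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    GPU_DESC = "rtspsrc location={url} ! rtph264depay ! h264parse ! nvh264dec ! fakesink"
    CPU_DESC = "rtspsrc location={url} ! rtph264depay ! h264parse ! avdec_h264 ! fakesink"

    def run(desc, url):
        Gst.init(None)
        pipeline = Gst.parse_launch(desc.format(url=url))
        pipeline.set_state(Gst.State.PLAYING)
        GLib.MainLoop().run()  # block so the pipeline keeps running

    if __name__ == "__main__":
        mp.set_start_method("spawn")  # each stream gets its own fresh interpreter
        urls = [f"rtsp://camera{i}.example/stream" for i in range(12)]  # placeholders
        max_gpu_streams = 8  # placeholder: whatever your GPU decoder can sustain
        procs = []
        for i, url in enumerate(urls):
            desc = GPU_DESC if i < max_gpu_streams else CPU_DESC
            procs.append(mp.Process(target=run, args=(desc, url)))
            procs[-1].start()
        for p in procs:
            p.join()

Note that avdec_h264 is a software decoder, so those streams burn CPU cores and their frames end up in system memory rather than CUDA memory, which means an extra upload before your CV filter.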

Hope that gives you some helpful ideas.

Upvotes: 2
