chronosynclastic

Reputation: 1675

FFmpeg GPU 10-bit HEVC encoding using NVENC

I'm encoding some 4K 10-bit YUV test sequences with the NVENC HEVC encoder. For an example sequence and configuration, I use the following command:

ffmpeg -hide_banner -benchmark -loglevel debug -y -f rawvideo -s:v 3840x2160 -r 50 -pix_fmt yuv420p10le -i ParkRunning3_3840x2160_50fps_10bit_420.yuv -c:v hevc_nvenc -preset hp -rc cbr -profile:v main10 -b:v 10M output.mp4

My goal is to achieve as low a latency as possible, so I chose a high-performance preset (hp). However, I only get around 15 fps encoding speed with this command. A logfile of the ffmpeg output for the above command is here.

I also tried different presets and different sequences. The results are similar for all the 10-bit sequences I encoded. For 1920x1080 10-bit sequences, I get around 50-60 fps with the HEVC encoder. But for 8-bit sequences I get a much higher throughput of around 450-500 fps with similar preset and rate-control settings. In the example I'm using CBR as the rate-control mode, but I obtained similar encoding throughput with VBR and constant-QP modes as well.

Is there anything I'm missing in my command for 10-bit HEVC encoding? I understand that, because of the increased bit depth, 10-bit encoding will take longer. But a reduction in throughput on this scale makes me think I'm doing something wrong. It seems that FFmpeg is inserting an auto_scaler before the encoder that converts from yuv420p10le (my input format) to p010le (the 10-bit format accepted by NVENC). Could this conversion reduce the encoder speed so drastically?
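One way I thought of to test whether the auto_scaler is the bottleneck (a sketch, not a confirmed fix; the intermediate filename is my own) is to pre-convert the sequence to p010le once, then feed NVENC its native format so no pixel-format conversion happens during the timed encode:

```shell
# Step 1: one-off conversion of the raw sequence from yuv420p10le to p010le.
# The output filename below is just an example name I made up.
ffmpeg -y -f rawvideo -s:v 3840x2160 -r 50 -pix_fmt yuv420p10le \
  -i ParkRunning3_3840x2160_50fps_10bit_420.yuv \
  -f rawvideo -pix_fmt p010le ParkRunning3_p010le.yuv

# Step 2: encode the pre-converted file; the encoder now receives p010le
# directly, so FFmpeg should not insert an auto_scaler in this run.
ffmpeg -hide_banner -benchmark -y -f rawvideo -s:v 3840x2160 -r 50 \
  -pix_fmt p010le -i ParkRunning3_p010le.yuv \
  -c:v hevc_nvenc -preset hp -rc cbr -profile:v main10 -b:v 10M output.mp4
```

If the second run is substantially faster, that would point to the software format conversion rather than the NVENC hardware itself.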

Upvotes: 1

Views: 6119

Answers (0)
