Reputation: 91830
I have a potential job that will require me to do some video encoding with FFmpeg and x264. I'll have a series of files to encode once, after which I can bring down the instances. Since I'm not really sure about the resource utilization of x264 and FFmpeg, what kind of instances should I get? I'm thinking either a
High-CPU Extra Large Instance
7 GB of memory
20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
1690 GB of instance storage
64-bit platform
I/O Performance: High
API name: c1.xlarge
or, alternatively a
Cluster GPU Quadruple Extra Large Instance
22 GB of memory
33.5 EC2 Compute Units (2 x Intel Xeon X5570, quad-core “Nehalem” architecture)
2 x NVIDIA Tesla “Fermi” M2050 GPUs
1690 GB of instance storage
64-bit platform
I/O Performance: Very High (10 Gigabit Ethernet)
API name: cg1.4xlarge
What should I use? Do x264/FFmpeg perform better with faster/more CPUs, or do they really pound the GPU more? In any case, the Cluster GPU appears to be the higher-performance instance. Which should I prefer?
Upvotes: 6
Views: 4781
Reputation: 151
The short answer to the CPU/GPU question is: it depends on how you use ffmpeg to do the H264 encoding.
Also, you are confusing H264 and x264. H264 is the video codec standard, and x264 is one implementation of that standard. x264 is so popular that it has sometimes become synonymous with (and confused with) H264. The reason I point that out is that x264 is a software-based implementation of H264, which means it uses only the CPU cores for all its processing. There will be no GPU usage in your use case when you use x264 for video encoding.
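As a minimal sketch of the CPU-only path described above (assuming an ffmpeg build with libx264 enabled; the generated test clip, CRF value, and preset are illustrative placeholders, not values from the question):

```shell
# Software (CPU-only) H.264 encode with x264 through ffmpeg.
# The 1-second lavfi test clip keeps the example self-contained; in
# practice you would replace it with "-i input.mov". -crf 23 and
# -preset medium are common illustrative defaults.
if command -v ffmpeg >/dev/null 2>&1; then
  if ffmpeg -y -loglevel error \
       -f lavfi -i testsrc=duration=1:size=320x240:rate=24 \
       -c:v libx264 -preset medium -crf 23 output.mp4; then
    STATUS="x264 encode ok"
  else
    STATUS="libx264 missing from this ffmpeg build"
  fi
else
  STATUS="ffmpeg not installed"
fi
echo "$STATUS"
```

A slower preset trades more CPU time for better compression, which is exactly why core count matters for this path.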
That being said, maybe what you are trying to ask is whether to go for a software (CPU) or a hardware (GPU) implementation of H264. There are several implementations available for each; FFmpeg already has a nice page on this. If you are planning to use the NVIDIA GPU instances, then you would need to compile FFmpeg with NVENC support to get the hardware implementation. Using GPUs/CPUs efficiently for your whole transcoding process is an art in itself.
So in short: x264 will not use the GPU. If you want to use the GPU, you need to use hardware implementations of the encoders. Which implementation is better largely depends on your use case and what you care about (quality, cost, turnaround time, etc.).
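For completeness, a hedged sketch of the hardware path: assuming an NVIDIA GPU is present and ffmpeg was built with NVENC support, a hardware H.264 encode looks roughly like this (the lavfi test source and bitrate are placeholders):

```shell
# Hardware H.264 encode via NVENC. Requires an NVIDIA GPU and an ffmpeg
# build that includes the h264_nvenc encoder; both are checked first.
# The lavfi test source stands in for your real "-i input" file.
if command -v ffmpeg >/dev/null 2>&1 \
    && ffmpeg -hide_banner -encoders 2>/dev/null | grep -q h264_nvenc; then
  if ffmpeg -y -loglevel error \
       -f lavfi -i testsrc=duration=1:size=320x240:rate=24 \
       -c:v h264_nvenc -b:v 1M nvenc_out.mp4; then
    STATUS="nvenc encode ok"
  else
    STATUS="nvenc encode failed (no usable GPU?)"
  fi
else
  STATUS="h264_nvenc not available"
fi
echo "$STATUS"
```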
My background / disclaimer: I work as a Senior Engineer at Bitmovin. We solve this cluster/resource allocation engineering problem, among many other problems, to extract the best possible video quality out of a given bitrate, and in the end we offer APIs that you can simply plug into your workflow. The views expressed here are my own.
Upvotes: 3
Reputation: 1699
Amazon EC2 now offers some GPU-accelerated instances with modern NVIDIA GPUs, which means you can take advantage of NVENC on them.
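To see whether a given ffmpeg build on such an instance was compiled with NVENC, you can list its encoders; this only inspects the build, so no GPU is needed for the check itself:

```shell
# List the encoders this ffmpeg build knows about and look for NVENC.
if command -v ffmpeg >/dev/null 2>&1; then
  if ffmpeg -hide_banner -encoders 2>/dev/null | grep -q h264_nvenc; then
    STATUS="NVENC encoder present"
  else
    STATUS="NVENC encoder absent"
  fi
else
  STATUS="ffmpeg not installed"
fi
echo "$STATUS"
```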
Upvotes: 1
Reputation: 15
You are probably better off using a service like zencoder.com; they have an excellent API, and the quality you get out of it will most probably be better than hours of fiddling with FFmpeg parameter optimisation.
Upvotes: -4
Reputation: 79685
FFmpeg recently added support for VAAPI and VDPAU, but this allows it to use the GPU only for decoding H.264 video. For encoding, it still uses the CPU.
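A sketch of that split (GPU decode, CPU encode) using VAAPI: the render-node path /dev/dri/renderD128 is a common Linux default but an assumption here, and the sample clip is generated so the example is self-contained:

```shell
# Decode on the GPU via VAAPI, then re-encode on the CPU with libx264.
# /dev/dri/renderD128 is a typical Linux render node; adjust if needed.
if command -v ffmpeg >/dev/null 2>&1 && [ -e /dev/dri/renderD128 ]; then
  # Generate a small H.264 sample to decode.
  ffmpeg -y -loglevel error \
    -f lavfi -i testsrc=duration=1:size=320x240:rate=24 \
    -c:v libx264 sample.mp4
  if ffmpeg -y -loglevel error \
       -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
       -i sample.mp4 -c:v libx264 -crf 23 out_cpu.mp4; then
    STATUS="vaapi decode + cpu encode ok"
  else
    STATUS="vaapi path failed"
  fi
else
  STATUS="vaapi not available here"
fi
echo "$STATUS"
```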
Upvotes: 3