Reputation: 889
I'm trying to test how much information is lost with different video codecs. I've got a Python script which uses PyPNG to write a series of 8-bit RGB images (the writing step is sketched below). I then encode the frames using avconv, for instance:
avconv -r 1 -i ../frames/data%03d.png -c:v ffv1 -qscale:v 0 -r 1 outffv1.avi
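For context, the frame-writing step is essentially the following (a minimal sketch using PyPNG's png.Writer; my real script writes test patterns rather than random noise, but the PyPNG calls are the same):

import numpy as np
import png  # PyPNG

# write a few synthetic 8-bit RGB frames into ../frames/ (directory must exist)
for i in range(1, 4):
    frame = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    writer = png.Writer(width=64, height=64, greyscale=False, bitdepth=8)
    with open('../frames/data%03d.png' % i, 'wb') as f:
        # PyPNG wants an iterable of rows, each row a flat R,G,B,R,G,B,... sequence
        writer.write(f, frame.reshape(64, -1))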
I then decode this back into PNGs like so:
avconv -r 1 -i outffv1.avi -r 1 ./outffv1/frame%03d.png
But when I compare the images before and after the video compression, they are different (mean absolute error of ~15%). What confuses me is that this is true (give or take) independent of the codec.
For instance, I get similar answers for libtheora for a range of qscale values.
The PNG encoding itself (i.e. writing to PNG and immediately loading back in, with no video compression step) is lossless.
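For reference, the before/after comparison is essentially this (a sketch with PyPNG and NumPy; paths match the commands above):

import numpy as np
import png  # PyPNG

def load_png(path):
    # read_flat() returns (width, height, flat pixel sequence, metadata)
    w, h, pixels, meta = png.Reader(filename=path).read_flat()
    return np.asarray(pixels, dtype=np.float64).reshape(h, w * meta['planes'])

before = load_png('../frames/data001.png')
after = load_png('./outffv1/frame001.png')

# mean absolute error as a percentage of the 8-bit range
mae = np.abs(before - after).mean() / 255 * 100
print('MAE: %.2f%%' % mae)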
UPDATE - more precise worked example:
Single input frame here: https://www.dropbox.com/s/2utk1xs2t8heai9/data001.png?dl=0
Encoded to video like this: avconv -r 1 -i ./frames/data%03d.png -c:v ffv1 -qscale:v 0 -r 1 outffv1.avi
resultant video here: https://www.dropbox.com/s/g1babae2a41v914/outffv1.avi?dl=0
decoded back to a PNG here: https://www.dropbox.com/s/8i8zg1qn7dxsgat/out001.png?dl=0
using this command: avconv -r 1 -i outffv1.avi -qscale:v 31 -r 1 out%03d.png
and differenced with ImageMagick like this:
compare out001.png ./frames/data001.png diff.png
to give this (non-zero) diff
https://www.dropbox.com/s/vpouk54p0dieqif/diff.png?dl=0
Upvotes: 2
Views: 150
Reputation: 31110
Your video file most likely uses a YUV color format, whereas PNG uses RGB. The conversion between the two color spaces is not a lossless process.
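You can see the size of the effect with a quick NumPy round trip (a sketch assuming full-range BT.601 coefficients with 8-bit rounding; the exact matrix and range avconv applies may differ):

import numpy as np

rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)

# RGB -> YCbCr, full-range BT.601
y  = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
cb = -0.168736 * rgb[..., 0] - 0.331264 * rgb[..., 1] + 0.5 * rgb[..., 2] + 128
cr = 0.5 * rgb[..., 0] - 0.418688 * rgb[..., 1] - 0.081312 * rgb[..., 2] + 128

# quantise to 8 bits, as the video pipeline does between the two conversions
ycbcr = np.clip(np.rint(np.stack([y, cb, cr], axis=-1)), 0, 255)

# YCbCr -> RGB
y, cb, cr = ycbcr[..., 0], ycbcr[..., 1] - 128, ycbcr[..., 2] - 128
back = np.stack([y + 1.402 * cr,
                 y - 0.344136 * cb - 0.714136 * cr,
                 y + 1.772 * cb], axis=-1)
back = np.clip(np.rint(back), 0, 255)

print('max abs error:', np.abs(back - rgb).max())  # non-zero: the round trip is lossy

The 8-bit rounding alone costs a couple of levels per channel; if the output pixel format is also chroma-subsampled (e.g. yuv420p), even more information is discarded.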
Upvotes: 2