Reputation: 183
What are the state-of-art algorithms when it comes to compressing digital images (say for instance color photos, maybe 800x480 pixels)?
Upvotes: 3
Views: 3729
Reputation: 11
It depends, of course, on what you need to do with the encoded images. For web pages and small sizes, lossy compression may be suitable, but for satellite imagery, medical images, etc., lossless compression may be required.
None of the formats mentioned here satisfies both situations, and not all of them support every pixel format, so they cannot be compared like for like.
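To make the lossy/lossless distinction concrete, here is a minimal round-trip sketch (not part of the original answer) using Pillow and NumPy; the file name photo.png is just a placeholder:

```python
# Minimal sketch: lossless formats decode back to the exact pixels, lossy ones don't.
import numpy as np
from PIL import Image

original = np.asarray(Image.open("photo.png").convert("RGB"))

# PNG is lossless: decoding gives back exactly the pixels that went in.
Image.fromarray(original).save("copy.png")
assert np.array_equal(original, np.asarray(Image.open("copy.png")))

# JPEG is lossy: decoded pixels generally differ from the source.
Image.fromarray(original).save("copy.jpg", quality=85)
decoded = np.asarray(Image.open("copy.jpg").convert("RGB"))
print("max per-pixel error after JPEG round trip:",
      np.abs(original.astype(int) - decoded.astype(int)).max())
```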
I've been doing my own research into lossless compression for high-bit-depth images, and what I've found so far is that a Huffman coder with suitable reversible pre-compression filtering beats JPEG 2000 and JPEG XR on cinematic real-world footage by 56% in file size on average (i.e. it makes files less than half the size), and it encodes faster. It also beats FFV1 in the limited tests I've conducted, producing files under half the size even after FFV1 has truncated the source pixel depth from 16 bits to 10 bits, which was a real surprise.
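The answer doesn't include code, but the general idea of "reversible pre-compression filtering plus Huffman coding" can be sketched roughly like this; the left-neighbour predictor and the toy scanline are illustrative assumptions, not the answerer's actual filter:

```python
# Rough sketch: a reversible prediction filter followed by Huffman coding.
# Residuals of natural images cluster near zero, so the Huffman code assigns
# them shorter lengths than coding the raw pixel values would.
import heapq
from collections import Counter

def left_predict(row):
    # Reversible filter: store each pixel as the difference from its left neighbour (mod 256).
    return [row[0]] + [(row[i] - row[i - 1]) % 256 for i in range(1, len(row))]

def unpredict(residuals):
    out = [residuals[0]]
    for r in residuals[1:]:
        out.append((out[-1] + r) % 256)
    return out

def huffman_code_lengths(symbols):
    # Build a Huffman tree over symbol frequencies; return the code length per symbol.
    heap = [(freq, i, {sym: 0}) for i, (sym, freq) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    tie_break = len(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (f1 + f2, tie_break, merged))
        tie_break += 1
    return heap[0][2]

row = [100, 101, 103, 103, 104, 110, 111, 111]      # toy 8-pixel scanline
residuals = left_predict(row)
assert unpredict(residuals) == row                  # the filter is exactly reversible
lengths = huffman_code_lengths(residuals)
bits = sum(lengths[s] for s in residuals)
print(f"{bits} bits after filtering + Huffman vs {8 * len(row)} bits raw")
```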
For lossless compression ratios, FLIF ranks number one for me, but its encoding times are astronomical; I've never produced a smaller file than FLIF in any comparison, so good things come to those who wait. FLIF uses machine learning to achieve its compression ratios. Applying a lossy pre-compression filter before FLIF compression (something the encoder supports) creates visually lossless images that compete with the best lossy encoders, with the advantage that repeatedly re-encoding the output files won't further reduce quality (since the encoder itself is lossless).
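FLIF itself isn't shown here, but the "lossy pre-filter in front of a lossless encoder" idea can be illustrated with zlib as a stand-in lossless back end; the synthetic image, noise level and quantisation step are arbitrary assumptions:

```python
# Hedged illustration: discarding noisy low-order bits (lossy pre-filter) lets a
# lossless coder compress further, and the lossless stage stays bit-exact on
# repeated re-encoding. zlib stands in for the lossless back end (FLIF is not used).
import zlib
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "photo": a smooth gradient plus mild sensor-like noise.
image = (np.linspace(0, 255, 800 * 480) + rng.normal(0, 3, 800 * 480)).clip(0, 255).astype(np.uint8)

quantized = image & 0xF8          # lossy pre-filter: drop the 3 least significant bits

print("lossless only:        ", len(zlib.compress(image.tobytes(), 9)))
print("pre-filter + lossless:", len(zlib.compress(quantized.tobytes(), 9)))

# The lossless stage is exact, so re-encoding the filtered data loses nothing further.
assert zlib.decompress(zlib.compress(quantized.tobytes(), 9)) == quantized.tobytes()
```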
One thing that is obvious to me: nothing is really state of the art currently. Most formats use old technology, designed at a time when memory and processing power were at a premium. As far as lossless compression goes, FLIF is a big jump forward, but it's an area of research that is wide open; most research seems to go into lossy compression systems.
Upvotes: 1
Reputation: 18962
Some of the formats that are frequently discussed as possible JPEG successors are:
JPEG XR (aka HD Photo, Windows Media Photo). According to a study by the Graphics and Media Lab at Moscow State University (MSU), image quality is comparable to JPEG 2000 and significantly better than JPEG, while compression efficiency is comparable to JPEG 2000.
WebP is already tested in the wild, mainly on Google properties, where the format is served exclusively to Chrome users (if you connect with a different browser, you get PNG or JPG images instead). It's very web-oriented.
HEVC-MSP. In a study by Mozilla Corporation (October 2013), HEVC-MSP performed best in most tests, and in the tests where it was not best it came in second to the original JPEG format (but the study only looked at image compression efficiency, not at other metrics that matter: feature sets, run-time performance, licensing...).
JPEG 2000. The most computationally intensive to encode/decode. Compared with the regular JPEG format, it offers advantages such as support for higher bit depths, more advanced compression, and a lossless compression option. It is the standard point of comparison for the others, but it has been a bit slow to gain acceptance. (A rough file-size comparison of a few of these formats is sketched just after this list.)
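As referenced in the JPEG 2000 item above, here is a hedged sketch of comparing on-disk sizes for some of these formats with Pillow. It assumes a photo.png exists and that Pillow was built with WebP and OpenJPEG support; JPEG XR and HEVC-MSP need external tools or plugins, so they are omitted, and the quality settings are arbitrary:

```python
# Rough size comparison of a few of the formats above, written with Pillow.
import os
from PIL import Image

image = Image.open("photo.png").convert("RGB")

targets = {
    "baseline.jpg": dict(format="JPEG", quality=85),
    "photo.webp":   dict(format="WEBP", quality=85),
    "photo.jp2":    dict(format="JPEG2000", quality_mode="rates", quality_layers=[20]),
}

for name, kwargs in targets.items():
    image.save(name, **kwargs)
    print(f"{name:15s} {os.path.getsize(name):>10,d} bytes")
```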
Anyway, JPEG encoders still haven't reached the format's full compression potential after 20+ years. Even within the constraints of strong compatibility requirements, there are projects (e.g. Mozilla's mozjpeg or Google's Guetzli) that produce smaller JPG files without sacrificing quality.
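As a hedged illustration, both encoders can be driven from a script; this assumes the mozjpeg cjpeg and Guetzli binaries are installed and on PATH, and the input/output file names are placeholders:

```python
# Sketch: invoking the two JPEG encoders mentioned above from Python.
import subprocess

# mozjpeg's cjpeg: a drop-in JPEG encoder that squeezes more out of the standard format.
subprocess.run(["cjpeg", "-quality", "85", "-outfile", "mozjpeg_out.jpg", "input.ppm"],
               check=True)

# Guetzli: much slower, but tuned for perceptual quality at a given file size.
subprocess.run(["guetzli", "--quality", "90", "input.png", "guetzli_out.jpg"],
               check=True)
```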
Upvotes: 11