Mat Jones

Reputation: 976

Is Extreme Image Optimization Possible in Java?

I am attempting to write image optimization software in Java. The first and most obvious step was to strip EXIF metadata, which I have done successfully. I also tried to compress the images using ImageIO and the compression quality parameter, as below:

import java.io.*;
import java.util.Iterator;
import javax.imageio.*;
import javax.imageio.stream.ImageOutputStream;

// chosen is the source File; optimized is the BufferedImage to re-encode
String filepath = chosen.getCanonicalPath() + "-temp.jpg";
File file = new File(filepath);

Iterator<ImageWriter> writers = ImageIO.getImageWritersByFormatName("jpg");
if (!writers.hasNext()) {
    throw new IllegalStateException("No writers found");
}
ImageWriter writer = writers.next(); // the iterator is already typed, no cast needed

try (OutputStream os = new FileOutputStream(file);
     ImageOutputStream ios = ImageIO.createImageOutputStream(os)) {
    writer.setOutput(ios);
    ImageWriteParam param = writer.getDefaultWriteParam();
    param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
    param.setCompressionQuality(0.9f); // 1.0f = best quality, 0.0f = smallest file
    writer.write(null, new IIOImage(optimized, null, null), param);
} finally {
    writer.dispose();
}

However, this doesn't work all that well. It does a passable job when the source image is in a different format (i.e. not JPEG), but when recompressing from JPEG to JPEG, it sometimes even makes the file size larger.
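For now I work around that with a small guard: after re-encoding, I compare the two file sizes and only keep the temp file when it is actually smaller (keepSmaller and the class around it are just illustrative):

import java.io.File;

class JpegGuard {
    // Keep whichever file is smaller after re-encoding.
    static File keepSmaller(File source, File reencoded) {
        if (reencoded.length() < source.length()) {
            return reencoded;   // re-encoding actually helped
        }
        reencoded.delete();     // discard the larger temp file
        return source;          // the original was already smaller
    }
}

But that only avoids making things worse; it doesn't actually optimize anything.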

Sites like www.tinyjpg.com claim to (and do) reduce the file size of JPEG images by 40%-60% with no quality loss. How on earth do they do this (both procedurally and programmatically)? What types of data are they removing, and how is it possible to remove this data with no quality loss? Is this something I could possibly achieve in Java?

Any guidance and/or resources you can give me are greatly appreciated!

Upvotes: 3

Views: 1203

Answers (3)

Roc Boronat

Reputation: 12171

There's an open-source library for lossless JPEG optimization. It's called SlimJPG, and you can find more information here: https://github.com/Fewlaps/slim-jpg

To use it, just call it like this:

SlimJpg.file(yourImage).optimize();

yourImage can be a byte[], an InputStream or a File.
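For example, to optimize a file on disk and write the result back out, a quick sketch (check the project README for the exact result API; getPicture() is assumed here to return the optimized bytes):

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

File input = new File("photo.jpg");
// optimize() returns a result object; getPicture() yields the optimized bytes
byte[] optimizedBytes = SlimJpg.file(input).optimize().getPicture();
Files.write(Paths.get("photo-optimized.jpg"), optimizedBytes);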

Disclaimer: I'm one of the committers... :·)

Upvotes: 4

user3344003

Reputation: 21647

In JPEG there are three steps that can be optimized:

  1. Relative sampling of components. One samples the Cb and Cr components at a lower rate than the Y component: you smooth out the Cb and Cr values so that 1, 2, 4, 8, or 16 pixels share the same Cb or Cr value (officially, JPEG also supports fractional sampling rates, but most decoders do not support them). If you use a 4:1:1 sampling ratio, that cuts the amount of data to compress in half.

  2. (The big one, and the most difficult to do) selecting an optimized quantization table.

  3. Generating optimized Huffman tables (easy to do, but many encoders don't bother; see the sketch after this list).
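Number 3 is the one you can reach directly from Java's ImageIO: with the JDK's built-in JPEG writer, the default write param is a JPEGImageWriteParam, which can be told to generate per-image Huffman tables. A sketch (image is assumed to be the BufferedImage you want to save, and the cast holds for the standard JDK writer):

import java.io.File;
import javax.imageio.*;
import javax.imageio.plugins.jpeg.JPEGImageWriteParam;
import javax.imageio.stream.ImageOutputStream;

ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
JPEGImageWriteParam param = (JPEGImageWriteParam) writer.getDefaultWriteParam();
param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
param.setCompressionQuality(0.9f);
param.setOptimizeHuffmanTables(true); // step 3: tables computed for this image

try (ImageOutputStream ios = ImageIO.createImageOutputStream(new File("out.jpg"))) {
    writer.setOutput(ios);
    writer.write(null, new IIOImage(image, null, null), param);
} finally {
    writer.dispose();
}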

Upvotes: 2

M.P. Korstanje

Reputation: 12059

Based on Wikipedia's JPEG article (section "Lossless further compression"), I would guess that Tiny JPG is using improved algorithms that were developed after most of the standard tools were written. The improved algorithms are implemented in PackJPG, which is conveniently open source. There doesn't appear to be a Java implementation.
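Since there's no Java port, the pragmatic route is to shell out to the packJPG binary. A sketch (the packjpg executable name and its default behaviour of writing a .pjg file next to the input are assumptions to verify against its documentation):

Process p = new ProcessBuilder("packjpg", "photo.jpg")
        .inheritIO()  // forward packJPG's console output
        .start();
if (p.waitFor() != 0) {
    throw new IllegalStateException("packJPG failed");
}

Note that packJPG produces its own .pjg container rather than a standards-compliant JPEG, so the result has to be unpacked with packJPG again before it can be displayed.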

As a side note: PackJPG claims a 20% improvement, while Tiny JPG claims 70%. The latter might be an overstatement, but you might want to test both claims anyway.

Upvotes: 2
