Vebjorn Ljosa

Reputation: 18008

Rescale intensities of a PIL Image

What is the simplest/cleanest way to rescale the intensities of a PIL Image?

Suppose that I have a 16-bit image from a 12-bit camera, so only the values 0–4095 are in use. I would like to rescale the intensities so that the entire range 0–65535 is used. What is the simplest/cleanest way to do this when the image is represented as PIL's Image type?

The best solution I have come up with so far is:

pixels = img.getdata()
img.putdata(pixels, 16)

That works, but always leaves the four least significant bits blank. Ideally, I would like to shift each value four bits to the left, then copy the four most significant bits to the four least significant bits. I don't know how to do that fast.
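For concreteness, this is the straightforward (but presumably slow) per-pixel version of that mapping; stretch_12_to_16 is just a placeholder name:

def stretch_12_to_16(value):
    # Shift left 4 bits, then replicate the 4 most significant bits
    # into the 4 least significant bits, so 0 -> 0 and 4095 -> 65535.
    return (value << 4) | (value >> 8)

img.putdata([stretch_12_to_16(value) for value in img.getdata()])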

Upvotes: 5

Views: 5286

Answers (5)

tzot

Reputation: 96001

Since you know that the pixel values are 0-4095, I can't find a faster way than this:

new_image = image.point(lambda value: (value << 4) | (value >> 8))

According to the documentation, the lambda function will be called at most 4096 times, whatever the size of your image.

EDIT: Since the function given to point must be of the form argument * scale + offset for an "I" (32-bit integer) mode image, this is the best that can be done with the point function:

new_image = image.point(lambda argument: argument * 16)

The maximum output pixel value will be 65520.
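For comparison, at the largest 12-bit value the two mappings give:

value = 4095                         # largest 12-bit pixel value
print(value * 16)                    # 65520 (point with scale 16)
print((value << 4) | (value >> 8))   # 65535 (shift, then copy the top 4 bits)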

A second take:

A modified version of your own solution, using itertools for improved efficiency:

import itertools as it  # for brevity
import operator

def scale_12to16(image):
    new_image = image.copy()
    # itertools.imap (Python 2) lazily maps each pixel value to
    # (value << 4) | (value >> 8), i.e. shift left and copy the top 4 bits.
    new_image.putdata(
        it.imap(operator.or_,
            it.imap(operator.lshift, image.getdata(), it.repeat(4)),
            it.imap(operator.rshift, image.getdata(), it.repeat(8))
        )
    )
    return new_image

This avoids the restriction on the function passed to point.
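For reference, a rough Python 3 sketch of the same idea (map is already lazy there, so itertools.imap is gone; this version builds a plain list, since whether putdata accepts an arbitrary iterable depends on the PIL/Pillow version):

def scale_12to16(image):
    new_image = image.copy()
    # Apply the same shift-and-copy mapping to every pixel value.
    stretched = [(value << 4) | (value >> 8) for value in image.getdata()]
    new_image.putdata(stretched)
    return new_image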

Upvotes: 3

Paul

Reputation: 3112

Maybe you should pass 16.0 (a float) instead of 16 (an int). I was trying to test it, but for some reason putdata did not multiply at all for me, so I hope it just works for you.
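Untested, but what I mean is something along these lines:

pixels = img.getdata()
img.putdata(pixels, 16.0)   # the second argument is a scale applied to each value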

Upvotes: 0

Ivan

Reputation: 7506

You need to do a histogram stretch (link to a similar question I answered), not histogram equalization:

[Illustration of a linear histogram stretch: http://cct.rncan.gc.ca/resource/tutor/fundam/images/linstre.gif]

In your case you only need to multiply all the pixel values by 16, which is the ratio between the two dynamic ranges (65536/4096).
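As a sketch (linear_stretch is just an illustrative name, and the lambda is kept in the value * scale + offset form that point requires for "I" mode images, as noted in an earlier answer):

def linear_stretch(image, lo=0, hi=4095):
    # Map the observed range [lo, hi] onto the full 16-bit range 0-65535.
    scale = 65535.0 / (hi - lo)
    offset = -lo * scale
    return image.point(lambda value: value * scale + offset)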

Upvotes: 2

Pratik Deoghare

Reputation: 37172

What you need to do is histogram equalization. For how to do it with Python and PIL:

EDIT: Code to shift each value four bits to the left, then copy the four most significant bits to the four least significant bits...

def f(n):
    # Shift left 4 bits, then copy the 4 most significant bits
    # into the 4 least significant bits.
    return (n << 4) | (n >> 8)

print(f(0))           # 0
print(f(2**12 - 1))   # 65535: the largest 12-bit value maps to the top of the 16-bit range

Upvotes: 1

user162756

Reputation:

Why would you want to copy the 4 msb back into the 4 lsb? You only have 12 significant bits of information per pixel. Nothing you do will improve that. If you are OK with only having 4K intensity levels, which is fine for most applications, then your solution is correct and probably optimal. If you need more levels of shading, then as David posted, recompute using a histogram. But this will be significantly slower.

But, copying the 4 msb into the 4 lsb is NOT the way to go :)

Upvotes: 2
