Stas Buzuluk

Reputation: 865

Inverting a sepia filter

I'm stuck inverting a sepia filter. The result of inverting the filter is not what I expected.

My logic is the following: processed_pixel = np.dot(sepia_filter, original_pixel)

It means that: original_pixel = np.dot(np.linalg.inv(sepia_filter), processed_pixel)

Here's the code I've tried. I've also tried a couple of other approaches, such as reversing the colours individually and solving a system of linear equations, but I get the same result, so I assume I'm missing something crucial.

Requirements:

import numpy as np
from PIL import Image, ImageDraw

Sepia filter code:

def get_pixel_after_sepia(source, pixel, sepia_filter):
    colors = np.array(source.getpixel(pixel))
    # apply the filter, clip to 0..255, and convert the results to ints
    colors_new = tuple(map(int, np.clip(np.dot(sepia_filter, colors), 0, 255))) + (255,)
    return colors_new
                       
def sepia(source, result_name):
    result = Image.new('RGB', source.size)
    sepia_filter = np.array([[0.393, 0.769, 0.189],
                             [0.349, 0.686, 0.168],
                             [0.272, 0.534, 0.131]])

    # for every pixel
    for x in range(source.size[0]):
        for y in range(source.size[1]):
            new_pixel = get_pixel_after_sepia(source, (x, y), sepia_filter)
            result.putpixel((x, y), new_pixel)

    result.save(result_name, "JPEG")
    return result

Inverted sepia code:

def get_pixel_before_sepia(source, pixel, inversed_sepia_filter):
    colors = np.array(source.getpixel(pixel))
    # apply the inverse filter, clip to 0..255, and convert the results to ints
    colors_new = tuple(map(int, np.clip(np.dot(inversed_sepia_filter, colors), 0, 255))) + (255,)
    return colors_new

def inverse_sepia(image_with_sepia, result_file):
    result = Image.new('RGB', image_with_sepia.size)
    sepia_filter = np.array([[0.393, 0.769, 0.189],
                             [0.349, 0.686, 0.168],
                             [0.272, 0.534, 0.131]])
    inverse_sepia_filter = np.linalg.inv(sepia_filter)

    for x in range(image_with_sepia.size[0]):
        for y in range(image_with_sepia.size[1]):
            new_pixel = get_pixel_before_sepia(image_with_sepia, (x, y), inverse_sepia_filter)
            result.putpixel((x, y), new_pixel)

    result.save(result_file, "JPEG")
    return result

Running the functions:

image = Image.open("original_image.jpg")
filtered_image = sepia(image, "filtered.jpg")  # result_pixel = dot_product(Filter, origin_pixel)
image_after_filter_reversing = inverse_sepia(filtered_image, 'restored.jpg')  # result_pixel = dot_product(Filter^(-1), filtering_result_pixel)
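
The "system of linear equations" variant I mentioned looks roughly like this (hypothetical sepia pixel values, same filter) and gives the same result:

import numpy as np

# Sketch: recover the original pixel by solving F @ x = processed_pixel
# instead of multiplying by the explicit inverse of F.
sepia_filter = np.array([[0.393, 0.769, 0.189],
                         [0.349, 0.686, 0.168],
                         [0.272, 0.534, 0.131]])
processed_pixel = np.array([200, 178, 138])  # hypothetical sepia pixel
original_estimate = np.linalg.solve(sepia_filter, processed_pixel)
print(original_estimate)  # equivalent to np.dot(np.linalg.inv(sepia_filter), processed_pixel)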

Original image

Filtered image

Image after filter reversing


I understand that a perfect reversal is impossible, since the calculation results are clipped and rounded to ints. But I expected the image after reversing to be quite close to the original. I'm a novice in image processing, but mathematically the problem looks perfectly valid to me.

Upvotes: 3

Views: 1061

Answers (2)

user3386109

Reputation: 34829

An interesting question. The answer may be a little disappointing: a sepia filter is not reversible, either in theory or in practice.

Theory

The numeric matrix used in the code is:

0.393   0.769   0.189
0.349   0.686   0.168
0.272   0.534   0.131

The corresponding symbolic matrix is:

 x       y       z
mx      my      mz
nx      ny      nz

where x=0.393, y=0.769, z=0.189, m=0.89, n=0.69.
When you compute the determinant of the symbolic matrix, it is zero, because the second and third rows are just multiples (m and n) of the first row. Hence, the matrix is not invertible, and neither is the sepia filter.

The fact that the numeric matrix has a non-zero determinant is simply due to the limited precision (3 digits) of the numbers. Computing the determinant of the numeric matrix gives 0.000000121, which is essentially 0, plus/minus a few rounding errors.
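
A minimal numpy check of both determinants, using the 3-digit matrix from the question and the exact matrix rebuilt from its first row:

import numpy as np

# 3-digit matrix from the question
sepia_filter = np.array([[0.393, 0.769, 0.189],
                         [0.349, 0.686, 0.168],
                         [0.272, 0.534, 0.131]])
print(np.linalg.det(sepia_filter))   # ~1.21e-07, essentially zero

# "exact" symbolic matrix: rows 2 and 3 are multiples m and n of row 1
row = np.array([0.393, 0.769, 0.189])
m, n = 0.89, 0.69
exact = np.vstack([row, m * row, n * row])
print(np.linalg.det(exact))          # 0, up to floating-point noise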

As a side note, multiplying a pixel value by the numeric matrix is equivalent to the following calculations

R = 0.393*r + 0.769*g + 0.189*b
G = 0.89*R
B = 0.69*R

where 'rgb' is the original pixel value, and 'RGB' is the sepia pixel value.
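
A quick check of that equivalence with a made-up sample pixel (the 3-digit matrix is only approximately proportional, so the two forms agree to within about a percent):

import numpy as np

sepia_filter = np.array([[0.393, 0.769, 0.189],
                         [0.349, 0.686, 0.168],
                         [0.272, 0.534, 0.131]])
r, g, b = 120, 200, 80                   # sample pixel, made up for the demo
R = 0.393 * r + 0.769 * g + 0.189 * b    # apparent brightness
print(np.dot(sepia_filter, [r, g, b]))   # matrix form
print([R, 0.89 * R, 0.69 * R])           # scalar form, nearly identical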

Practice

The New Oxford American Dictionary defines sepia as "a reddish-brown color associated particularly with monochrome photographs of the 19th and early 20th centuries".

The key word is monochrome. What sepia encodes is the apparent brightness of every pixel in the image. It retains none of the color information in the original image. This may seem counterintuitive since the sepia image appears to have some color in it.

To get a better understanding, try the following matrix in your code

0.291  0.569  0.140
0.291  0.569  0.140
0.291  0.569  0.140

That will convert the image into a grayscale image, aka a black and white image. Like sepia, the grayscale image encodes only the apparent brightness of each pixel. It retains none of the color information. The difference is that grayscale uses a gray hue as the base color, whereas sepia uses a brown hue as the base color. The other colors you see in a sepia image (orange, peach, yellow, and black) are just different brightness levels of brown in the RGB colorspace.
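
For illustration, a one-pixel sketch (sample pixel made up) showing why that matrix produces gray: all three output channels come out identical.

import numpy as np

grayscale_filter = np.array([[0.291, 0.569, 0.140]] * 3)  # all three rows identical
pixel = np.array([120, 200, 80])                          # made-up sample pixel
print(np.dot(grayscale_filter, pixel))                    # R == G == B: a single gray level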

Another difference between grayscale and sepia is that sepia is able to encode more levels of brightness. Grayscale has a palette of 256 shades. Sepia has 346 different pixel values. The reason is clipping. Given an input pixel of (255, 255, 255), the corresponding sepia pixel is (345, 307, 239) before clipping, and (255, 255, 239) after clipping. The red component of a sepia pixel has 346 possible values before clipping. For each red value, the green and blue values are proportional (G=0.89*R and B=0.69*R).
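
A small numpy check of those numbers (rounding, then clipping to the 0..255 range):

import numpy as np

sepia_filter = np.array([[0.393, 0.769, 0.189],
                         [0.349, 0.686, 0.168],
                         [0.272, 0.534, 0.131]])
white = np.array([255, 255, 255])
before = np.dot(sepia_filter, white)
print(before)                                       # ~[344.5 306.8 238.9], i.e. (345, 307, 239) rounded
print(np.clip(np.rint(before), 0, 255).astype(int)) # [255 255 239] after clipping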

Here's the sepia palette (minus the four darkest shades):


Hence, the practical problem you face is that the original color image has a palette of 16 million colors, whereas the sepia image has a palette of only 346 colors. There's no way to recreate the original image, since a pixel in the sepia image corresponds to any of roughly 48000 possible colors in the original.

Upvotes: 4

David Eisenstat

Reputation: 65458

Now that you fixed the code, the lamb is sort of recognizable, but there's a lot of clipping going on. I think the problem is that the sepia filter, while invertible, is almost singular. You can see in the SVD how one of the singular values is much larger than the others. Thus small changes, such as the truncation to an integer value (you might try rounding instead; it might be a little better), get magnified a lot by the inverse operation, which results in the inaccurate-looking reconstruction.
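
For reference, a minimal numpy sketch of that check (same 3x3 matrix as in the question):

import numpy as np

sepia_filter = np.array([[0.393, 0.769, 0.189],
                         [0.349, 0.686, 0.168],
                         [0.272, 0.534, 0.131]])
singular_values = np.linalg.svd(sepia_filter, compute_uv=False)
print(singular_values)  # one value around 1.3, the other two orders of magnitude smaller
print(singular_values[0] / singular_values[-1])  # large condition number: small errors get amplified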

Upvotes: 3
