Reputation: 167
I'm using Python and PIL (or Pillow) and want to run code on files that contain two pixels of a given intensity and RGB code (0,0,255).
The pixels may also be close to (0,0,255) but slightly adjusted ie (0,1,255). I'd like to overwrite the two pixels closest to (0,0,255) with (0,0,255).
Is this possible? If so, how?
Here's an example image, and here it is zoomed in on the pixels I want to make "more blue".
The code I'm attempting comes from here:
# import the necessary packages
import numpy as np
import scipy.spatial as sp
import matplotlib.pyplot as plt
import cv2
from PIL import Image, ImageDraw, ImageFont

# Store the RGB values of the target colours in an array
# main_colors = [(0,0,0),
#                (255,255,255),
#                (255,0,0),
#                (0,255,0),
#                (0,0,255),
#                (255,255,0),
#                (0,255,255),
#                (255,0,255),
#               ]
main_colors = [(0,0,0),
               (0,0,255),
               (255,255,255)
              ]

background = Image.open("test-small.tiff").convert('RGBA')
background.save("test-small.png")

retina = cv2.imread("test-small.png")
# convert BGR to RGB image
retina = cv2.cvtColor(retina, cv2.COLOR_BGR2RGB)
h, w, bpp = np.shape(retina)

# Build the k-d tree once, outside the loop, for nearest-colour lookup
# reference: https://stackoverflow.com/a/22478139/9799700
tree = sp.KDTree(main_colors)

# Change the colour of each pixel to its nearest main colour
# reference: https://stackoverflow.com/a/48884514/9799700
for py in range(0, h):
    for px in range(0, w):
        input_color = (retina[py][px][0], retina[py][px][1], retina[py][px][2])
        distance, result = tree.query(input_color)
        nearest_color = main_colors[result]
        retina[py][px][0] = nearest_color[0]
        retina[py][px][1] = nearest_color[1]
        retina[py][px][2] = nearest_color[2]
        print(str(px), str(py))

# show image
plt.figure()
plt.axis("off")
plt.imshow(retina)
plt.savefig('color_adjusted.png')
My logic is to reduce the array of candidate colours to only (0,0,255) (my desired blue) and perhaps (255,255,255) for white, so that only pixels that are black, white, or blue come through.
I've run the code on a smaller image, and it converts the input to the output as desired.
However, the code loops over every pixel, which is slow for larger images (I'm working with 4000 x 4000 pixel images). I would also like to save the output at the same dimensions as the original file (which I expect to be an option when using plt.savefig).
If this could be optimized, that would be ideal. Alternatively, picking the two "most blue" pixels (i.e. closest to (0,0,255)) and rewriting them with (0,0,255) would be quicker and just as effective for me.
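For reference, the per-pixel loop above can be collapsed into a single vectorized pass over the whole image. Below is a minimal sketch, assuming SciPy is available; the helper name `quantize_to_palette` is my own, not from the original code:

```python
import numpy as np
from scipy.spatial.distance import cdist

def quantize_to_palette(img, palette):
    """Map every pixel of an (h, w, 3) image to its nearest palette colour."""
    h, w, _ = img.shape
    flat = img.reshape(-1, 3).astype(np.float64)
    pal = np.asarray(palette, dtype=np.float64)
    # Distance from every pixel to every palette colour: shape (h*w, len(palette))
    d = cdist(flat, pal, metric='euclidean')
    nearest = np.argmin(d, axis=1)  # index of the closest palette colour per pixel
    return pal[nearest].reshape(h, w, 3).astype(np.uint8)
```

Note that for a 4000 x 4000 image the distance matrix is large (16M rows), so processing the flattened array in chunks may be worthwhile if memory is tight.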
Upvotes: 0
Views: 1179
Reputation: 207465
Here's a different way to do it. Use SciPy's cdist() to work out the Euclidean distance from each pixel to Blue, then pick the nearest two:
#!/usr/bin/env python3
import cv2
import numpy as np
from scipy.spatial.distance import cdist

# Load image, save shape, reshape as tall column of 3 BGR values
im = cv2.imread('eye.png', cv2.IMREAD_COLOR)
origShape = im.shape
im = im.reshape(-1, 3)

# Work out distance to pure Blue (BGR order) for each pixel
blue = np.full((1, 3), [255, 0, 0])
d = cdist(im, blue, metric='euclidean')  # THIS LINE DOES ALL THE WORK

indexNearest = np.argmin(d)    # get index of pixel nearest to blue
im[indexNearest] = [0, 0, 255] # make it red so it stands out
d[indexNearest] = 99999        # make it appear further so we don't find it again

indexNearest = np.argmin(d)    # get index of pixel second nearest to blue
im[indexNearest] = [0, 0, 255] # make it red too

# Reshape back to original shape and save result
im = im.reshape(origShape)
cv2.imwrite('result.png', im)
Upvotes: 0
Reputation: 207465
As your image is largely unsaturated greys with just a few blue pixels, it will be miles faster to convert to HLS colourspace and look for saturated pixels. You can do further tests easily enough on the identified pixels if you want to narrow it down to just two:
#!/usr/bin/env python3
import cv2
import numpy as np
# Load image
im = cv2.imread('eye.png', cv2.IMREAD_COLOR)
# Convert to HLS, so we can find saturated blue pixels
HLS = cv2.cvtColor(im,cv2.COLOR_BGR2HLS)
# Get row,column (y,x) coordinates of pixels that have high saturation
# (in OpenCV's HLS, channel 2 is saturation)
SatPix = np.where(HLS[:,:,2]>60)
print(SatPix)
# Make them pure blue and save result
im[SatPix] = [255,0,0]
cv2.imwrite('result.png',im)
Output
(array([157, 158, 158, 272, 272, 273, 273, 273]), array([55, 55, 56, 64, 65, 64, 65, 66]))
That means pixels 157,55 and 158,55, and 158,56 and so on are blue. The conversion to HLS colourspace, identification of saturated pixels and setting them to solid blue takes 758 microseconds on my Mac.
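To narrow the saturated candidates down to exactly two, you can rank just those pixels by their distance to pure blue and keep the two nearest. A minimal sketch, assuming the same BGR channel order as the code above; the helper name `two_most_blue` is my own:

```python
import numpy as np
from scipy.spatial.distance import cdist

def two_most_blue(im, sat_pix):
    """Among candidate (rows, cols) coordinates, keep the two pixels closest to pure blue (BGR)."""
    ys, xs = sat_pix
    candidates = im[ys, xs].astype(np.float64)            # shape (n, 3), BGR order
    d = cdist(candidates, np.array([[255.0, 0.0, 0.0]]))  # distance to pure blue in BGR
    keep = np.argsort(d.ravel())[:2]                      # positions of the two nearest candidates
    return ys[keep], xs[keep]
```

This only computes distances for the handful of saturated pixels found by the HLS test, rather than for every pixel in the image.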
You can achieve the same type of thing without writing any Python just using ImageMagick on the command line:
magick eye.png -colorspace hsl -channel g -separate -auto-level result.png
Upvotes: 1