How do I get the color tones of one image and apply them to another?
I have these two images and want to render the Ashley Benson photo in the Mona Lisa's color tones.
Upvotes: 0
Views: 1064
Reputation: 397
I don't think there is a ready-made "filter" to do it.
Using classical computer vision, you could take the Fast Fourier Transform of each image and then replace the low-frequency components of the Ashley Benson image with the Mona Lisa's. In this case, however, you only change the color domain of the image. Code example:
import cv2
import numpy as np
from matplotlib import pyplot as plt

lisa = cv2.imread(r"path/to/monalisa")
ashley = cv2.imread(r"path/to/ashley")

def domain_adaptation(src, trg, freq):
    """
    Parameters:
        src  - source image, whose style has to be changed
        trg  - target image, whose low-frequency components will be adopted
        freq - number of frequencies (half-width of the central window) to copy
    Returns:
        result - np.array based on the src image (shape and high frequencies)
                 with the low frequencies of the target image
    """
    result = np.zeros((src.shape[0], src.shape[1], src.shape[2]))
    for i in range(src.shape[2]):
        # 2D FFT of each channel, shifted so the low frequencies sit in the centre
        trg_fft = np.fft.fft2(trg[:, :, i])
        src_fft = np.fft.fft2(src[:, :, i])
        trg_fft_shift = np.fft.fftshift(trg_fft)
        src_fft_shift = np.fft.fftshift(src_fft)
        # Replace the central (low-frequency) window of the source spectrum
        # with the corresponding window of the target spectrum
        src_fft_shift[src.shape[0]//2-freq:src.shape[0]//2+freq,
                      src.shape[1]//2-freq:src.shape[1]//2+freq] = \
            trg_fft_shift[trg.shape[0]//2-freq:trg.shape[0]//2+freq,
                          trg.shape[1]//2-freq:trg.shape[1]//2+freq]
        # Back to the spatial domain; keep the magnitude of the complex result
        src_ifft_shift = np.fft.ifftshift(src_fft_shift)
        result[:, :, i] = np.abs(np.fft.ifft2(src_ifft_shift))
    # Convert BGR (OpenCV order) to RGB for matplotlib and rescale to [0, 1]
    result = np.float32(result)
    result = cv2.cvtColor(result, cv2.COLOR_BGR2RGB)
    result = cv2.normalize(result, None, 0, 1, cv2.NORM_MINMAX)
    return result

image = domain_adaptation(src=ashley, trg=lisa, freq=1)
plt.imshow(image)
plt.show()
And there's a GIF:
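If you want to generate a GIF like that yourself, here is a minimal sketch. It assumes the animation simply sweeps the freq parameter; imageio (not used in the answer above) writes the frames:

import imageio

frames = []
for f in range(1, 40):
    # Re-run the adaptation with a progressively larger low-frequency window
    stylized = domain_adaptation(src=ashley, trg=lisa, freq=f)
    # The function returns a float image in [0, 1]; GIF writers expect uint8
    frames.append((stylized * 255).astype(np.uint8))

imageio.mimsave("ashley_monalisa.gif", frames, duration=0.1)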
If you want better results, you can take a look at a somewhat older deep learning method called "Fast Neural Style". To do this with the Mona Lisa's style, you need to train your own model using the examples above. You can check pretrained models in this colab. Trained models (one model per style) give these results:
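For reference, this is roughly what inference with such a model looks like in PyTorch. It is only a sketch: the TransformerNet class, the checkpoint path, and the 0-255 input scaling are assumptions to be replaced with whatever your fast-neural-style implementation actually uses.

import torch
from PIL import Image
from torchvision import transforms

from transformer_net import TransformerNet  # assumed: model class from your implementation

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the content image and scale it to [0, 255]
# (this scaling follows the common fast-neural-style convention; adjust if needed)
content = Image.open(r"path/to/ashley").convert("RGB")
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Lambda(lambda x: x.mul(255)),
])
content = preprocess(content).unsqueeze(0).to(device)

# Load the style network trained on the Mona Lisa (one model per style)
model = TransformerNet()
model.load_state_dict(torch.load(r"path/to/monalisa_style.pth", map_location=device))
model.to(device).eval()

with torch.no_grad():
    stylized = model(content).cpu()

# Clamp to valid pixel values and save the stylized image
out = stylized[0].clamp(0, 255).permute(1, 2, 0).numpy().astype("uint8")
Image.fromarray(out).save("ashley_as_monalisa.jpg")

Unlike the FFT trick, the style network has to be trained once per style image, but applying it afterwards is a single forward pass.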
Of course, there are many modern state-of-the-art approaches to style transfer; see here.
Upvotes: 3