Reputation: 83
I need to use the SSIM from Sewar as a loss function in order to compare images for my model.
I am getting errors when I try to compile my model. I import the function and compile the model like this:
from sewar.full_ref import ssim
...
model.compile('ssim', optimizer=my_optimizer, metrics=[ssim])
and I get this:
File "/media/merry/merry32/train.py", line 19, in train
model.compile(loss='ssim', optimizer=opt, metrics=[ssim])
File "/home/merry/anaconda3/envs/merry_env/lib/python3.7/site-packages/keras/engine/training.py", line 451, in compile
handle_metrics(output_metrics)
File "/home/merry/anaconda3/envs/merry_env/lib/python3.7/site-packages/keras/engine/training.py", line 420, in handle_metrics
mask=masks[i])
File "/home/merry/anaconda3/envs/merry_env/lib/python3.7/site-packages/keras/engine/training_utils.py", line 404, in weighted
score_array = fn(y_true, y_pred)
File "/home/merry/anaconda3/envs/merry_env/lib/python3.7/site-packages/sewar/full_ref.py", line 143, in ssim
MAX = np.iinfo(GT.dtype).max
File "/home/merry/anaconda3/envs/merry_env/lib/python3.7/site-packages/numpy/core/getlimits.py", line 506, in __init__
raise ValueError("Invalid integer data type %r." % (self.kind,))
ValueError: Invalid integer data type 'O'.
I could also write something like this:
model.compile(ssim(), optimizer=my_optimizer, metrics=[ssim()])
But then I get this error (obviously):
TypeError: ssim() missing 2 required positional arguments: 'GT' and 'P'
I just wanted to do the same thing I was doing with mean_squared_error, but with SSIM, like this (which works perfectly with no need to pass any parameters to it):
model.compile('mean_squared_error', optimizer=my_optimizer, metrics=['mse'])
Any idea how I should use this function when compiling?
Upvotes: 4
Views: 16474
Reputation: 41
Use the TensorFlow implementation of SSIM. The correct way to use SSIM as a training loss is as follows. SSIM is defined for positive pixel values only. To be able to compute SSIM on your network's prediction and the (positive-only, and preferably normalized) input tensors, you should restrict the network's top layer to output values only in the range [0, inf] by using a "softplus" activation function.
Because SSIM is meant to be maximized, invert it to use it as a training loss:
ssim_loss = 1 - tf.reduce_mean(tf.image.ssim(y_true, y_pred, 1.0))
Adapting the example of mujjiga and implementing all of the changes mentioned above:
from keras.models import Sequential
from keras.layers import Conv2D
import numpy as np
import tensorflow as tf
# normalize the data into the range [0, 2]
def normalize(data):
    normalized_data = data / np.max(np.abs(data))
    normalized_data += 1
    return normalized_data

# loss function: 1 - mean SSIM (max_val=2.0 because the data lies in [0, 2])
def ssim_loss(y_true, y_pred):
    return 1 - tf.reduce_mean(tf.image.ssim(y_true, y_pred, 2.0))
# dummy input data
input_data = np.random.randn(100, 32, 32, 1)
target_data = np.random.randn(100, 28, 28, 1)
normalized_input_data = normalize(input_data)
normalized_target_data = normalize(target_data)
# Model: Input Image size: 32X32X1 output Image size: 28X28X1
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(32, 32, 1)))
model.add(Conv2D(1, kernel_size=(3, 3), activation='softplus'))
model.compile(optimizer='adam', loss=ssim_loss)
# Train
model.fit(normalized_input_data, normalized_target_data, epochs=100)
Now you see a positive loss decreasing:
Epoch 1/100
4/4 [==============================] - 3s 65ms/step - loss: 0.9300
Epoch 2/100
4/4 [==============================] - 0s 7ms/step - loss: 0.9269
[...]
Epoch 99/100
4/4 [==============================] - 0s 7ms/step - loss: 0.9089
Epoch 100/100
4/4 [==============================] - 0s 6ms/step - loss: 0.9093
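If you also want to report the raw SSIM value during training (as the question attempts with metrics=[ssim]), you can pass a second function through metrics. A minimal sketch, reusing the ssim_loss and model setup from the example above; ssim_metric is just an illustrative name:
# reports the (positive) mean SSIM while ssim_loss drives the optimization;
# max_val=2.0 matches the [0, 2] normalization used above
def ssim_metric(y_true, y_pred):
    return tf.reduce_mean(tf.image.ssim(y_true, y_pred, 2.0))

model.compile(optimizer='adam', loss=ssim_loss, metrics=[ssim_metric])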
Upvotes: 0
Reputation: 323
TensorFlow ships an implementation of SSIM (tf.image.ssim) that you can wrap in a custom Keras loss like this:
import tensorflow as tf

def SSIMLoss(y_true, y_pred):
    # max_val=1.0 assumes images scaled to [0, 1]
    return 1 - tf.reduce_mean(tf.image.ssim(y_true, y_pred, 1.0))

self.model.compile(optimizer=sgd, loss=SSIMLoss)
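For completeness, here is a minimal end-to-end sketch of how that loss plugs into a model; the small convolutional network, the 'adam' optimizer and the [0, 1]-scaled dummy data are only assumptions for illustration:
import numpy as np
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Conv2D

def SSIMLoss(y_true, y_pred):
    # max_val=1.0 assumes images scaled to [0, 1]
    return 1 - tf.reduce_mean(tf.image.ssim(y_true, y_pred, 1.0))

# hypothetical model, only to demonstrate the compile/fit calls
model = Sequential([
    Conv2D(16, (3, 3), padding='same', activation='relu', input_shape=(32, 32, 1)),
    Conv2D(1, (3, 3), padding='same', activation='sigmoid'),  # keeps outputs in [0, 1]
])
model.compile(optimizer='adam', loss=SSIMLoss)

# dummy data in [0, 1]; the model is trained to reconstruct its input
x = np.random.rand(8, 32, 32, 1).astype('float32')
model.fit(x, x, epochs=1)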
Upvotes: 9
Reputation: 16876
You can use tf.image.ssim to compute the SSIM index between two images.
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten
import numpy as np
import tensorflow as tf
# Loss function
def ssim_loss(y_true, y_pred):
    return tf.reduce_mean(tf.image.ssim(y_true, y_pred, 2.0))
# Model: Input Image size: 32X32X1 output Image size: 28X28X1
# check model.summary
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=(32, 32, 1)))
model.add(Conv2D(1, kernel_size=(3, 3),
                 activation='relu'))
model.compile(optimizer='adam', loss=ssim_loss, metrics=[ssim_loss, 'accuracy'])
# Train
model.fit(np.random.randn(10,32,32,1), np.random.randn(10,28,28,1))
Upvotes: 6
Reputation: 737
You need to create your own custom loss function in order to use external losses. However, these losses must be adapted to work on TensorFlow tensors rather than numerical values or NumPy matrices, so it is not that simple.
I suggest you look at how to write a custom loss function; there are plenty of good tutorials on this, like this one.
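As a rough illustration of the shape such a custom loss takes (a sketch only; the DSSIM-style formulation mirrors the other answers and assumes images scaled to [0, 1]):
import tensorflow as tf

# a custom loss receives y_true and y_pred as tensors and must return a tensor,
# built only from TensorFlow ops so Keras can differentiate through it
def dssim_loss(y_true, y_pred):
    return 1.0 - tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=1.0))

# passed by reference (not as a string) when compiling:
# model.compile(optimizer='adam', loss=dssim_loss)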
Upvotes: 2