Reputation: 77
I am running a regression-type CNN which takes images as input and outputs images of different dimensions (so it is not an image-segmentation problem), trained on a dataset of samples and corresponding labels. As a result, the last dense layer of my network has a size equal to the height and width of the labels multiplied together. I have been training the network for a while and now want to see what the predicted images look like, to judge how good or bad my model is. Is there a function that provides this, or do I have to hard-code it? How do I do it? The code of my network and the network summary are attached below.
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 54, 1755, 4)       20
activation_1 (Activation)    (None, 54, 1755, 4)       0
max_pooling2d_1 (MaxPooling2 (None, 18, 585, 4)        0
batch_normalization_1 (Batch (None, 18, 585, 4)        16
conv2d_2 (Conv2D)            (None, 17, 584, 8)        136
activation_2 (Activation)    (None, 17, 584, 8)        0
max_pooling2d_2 (MaxPooling2 (None, 8, 292, 8)         0
batch_normalization_2 (Batch (None, 8, 292, 8)         32
conv2d_3 (Conv2D)            (None, 7, 291, 16)        528
activation_3 (Activation)    (None, 7, 291, 16)        0
max_pooling2d_3 (MaxPooling2 (None, 3, 145, 16)        0
batch_normalization_3 (Batch (None, 3, 145, 16)        64
conv2d_4 (Conv2D)            (None, 2, 144, 32)        2080
activation_4 (Activation)    (None, 2, 144, 32)        0
max_pooling2d_4 (MaxPooling2 (None, 1, 72, 32)         0
batch_normalization_4 (Batch (None, 1, 72, 32)         128
flatten_1 (Flatten)          (None, 2304)              0
dropout_1 (Dropout)          (None, 2304)              0
dense_1 (Dense)              (None, 19316)             44523380
activation_5 (Activation)    (None, 19316)             0
=================================================================
Total params: 44,526,384
Trainable params: 44,526,264
Non-trainable params: 120
Thanks in advance!
import numpy as np
from keras import backend
from keras.models import Sequential
from keras.layers import (Conv2D, Activation, MaxPooling2D,
                          BatchNormalization, Flatten, Dropout, Dense)

def generator(data_arr, batch_size = 10):
    num = len(data_arr)
    num = int(num / batch_size)
    # Loop forever so the generator never terminates
    while True:
        for offset in range(0, num):
            batch_samples = data_arr[offset * batch_size:(offset + 1) * batch_size]
            samples = []
            labels = []
            for batch_sample in batch_samples:
                samples.append(batch_sample[0])
                # Flatten the 2-D label into a single vector
                labels.append(np.array(batch_sample[1].flatten()).transpose())
            X_ = np.array(samples)
            Y_ = np.array(labels)
            # Add the single channel dimension expected by Conv2D
            X_ = X_[:, :, :, np.newaxis]
            yield (X_, Y_)

# compile and train the model using the generator function
train_generator = generator(training_data, batch_size = 10)
validation_generator = generator(val_data, batch_size = 10)

model = Sequential()
model.add(Conv2D(4, (2, 2), input_shape = (55, 1756, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size = (3, 3)))
model.add(BatchNormalization())
model.add(Conv2D(8, (2, 2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size = (2, 2)))
model.add(BatchNormalization())
model.add(Conv2D(16, (2, 2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size = (2, 2)))
model.add(BatchNormalization())
model.add(Conv2D(32, (2, 2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size = (2, 2)))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dropout(0.3))
model.add(Dense(19316))
model.add(Activation('softmax'))

def nrmse(y_true, y_pred):
    return backend.sqrt(backend.mean(backend.square(y_pred - y_true))) / 2

def rmse(y_true, y_pred):
    return backend.sqrt(backend.mean(backend.square(y_pred - y_true), axis = -1))

model.compile(loss = 'mean_squared_error',
              optimizer = 'adam',
              metrics = [rmse, nrmse])
model.summary()
Upvotes: 0
Views: 119
Reputation: 1694
From what I understand, the output of your model should represent the grayscale values of the pixels of an image with the dimensions (11,1756).
There is no need to hard-code a special function; you can simply use the standard reshape() function on the output of the model.
images = y_pred.reshape((-1, 11, 1756))
You are probably already doing the inverse of this when you create the y_true vectors used during training (I assume the ground-truth y_true originally has the shape (11, 1756) and you flatten it into a single vector of length 19316).
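For example, to inspect a few predictions visually, you could run the model on one validation batch, reshape the outputs, and plot them. This is only a minimal sketch: it reuses the validation_generator from your question and uses matplotlib purely for illustration.
import matplotlib.pyplot as plt

# Take one batch from the validation generator and predict on it
X_batch, Y_batch = next(validation_generator)
y_pred = model.predict(X_batch)

# Reshape the flat (batch, 19316) outputs back into (batch, 11, 1756) images
pred_images = y_pred.reshape((-1, 11, 1756))
true_images = Y_batch.reshape((-1, 11, 1756))

# Show the first prediction next to its ground truth
fig, (ax1, ax2) = plt.subplots(2, 1)
ax1.imshow(pred_images[0], cmap = 'gray')
ax1.set_title('prediction')
ax2.imshow(true_images[0], cmap = 'gray')
ax2.set_title('ground truth')
plt.show()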
Upvotes: 1