Reputation: 1163
I want to extract the CNN activations of the first fully connected layer using Keras. Caffe has a function for this, but I cannot use that framework because I'm running into installation problems. I'm reading a research paper that uses those CNN activations, but the author works with Caffe.
Is there a way to extract those CNN activations so that I can use them as items in transactions for mining association rules with the Apriori algorithm?
Of course, first I have to extract the k largest magnitudes of the CNN activations. Each image will then be a transaction, and each activation will be an item.
I have the following code so far:
from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.layers import Dense, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.models import Sequential
import matplotlib.pylab as plt

# MNIST images are 28x28 grayscale, 10 classes
input_shape = (28, 28, 1)
num_classes = 10

model = Sequential()
model.add(Conv2D(32, kernel_size=(5, 5), strides=(1, 1),
                 activation='relu',
                 input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Conv2D(64, (5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(1000, activation='relu'))   # first fully connected layer
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adam(),
              metrics=['accuracy'])
Upvotes: 1
Views: 699
Mentioning the solution below, using TensorFlow Keras.
In order to access the activations, we first have to pass one or more images through the model; the activations then correspond to those images.
The code for loading an input image and preprocessing it is shown below:
import os
import tensorflow as tf
from tensorflow.keras.preprocessing import image

Test_Dir = '/Deep_Learning_With_Python_Book/Dogs_Vs_Cats_Small/test/cats'
Image_File = os.path.join(Test_Dir, 'cat.1545.jpg')

# Load the image, add a batch axis, and scale pixel values to [0, 1]
Image = image.load_img(Image_File, target_size=(150, 150))
Image_Tensor = image.img_to_array(Image)
print(Image_Tensor.shape)
Image_Tensor = tf.expand_dims(Image_Tensor, axis=0)
Image_Tensor = Image_Tensor / 255.0
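If you want activations for several images at once (so that each image becomes one transaction, as described in the question), you can preprocess each file the same way and stack the tensors into a single batch. A minimal sketch, assuming the file names cat.1545.jpg and cat.1546.jpg (hypothetical examples) exist in Test_Dir:
import numpy as np

def preprocess(path):
    # Load one image and scale it to [0, 1]; no batch axis yet
    img = image.load_img(path, target_size=(150, 150))
    return image.img_to_array(img) / 255.0

# Hypothetical file names; replace with your own test images
files = ['cat.1545.jpg', 'cat.1546.jpg']
Batch_Tensor = np.stack([preprocess(os.path.join(Test_Dir, f)) for f in files], axis=0)
print(Batch_Tensor.shape)  # (2, 150, 150, 3)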
Once the model is defined, we can access the activations of any layer using the code shown below (with respect to the Cats-vs-Dogs dataset):
from tensorflow.keras.models import Model

# Extract the model outputs for all the layers
Model_Outputs = [layer.output for layer in model.layers]

# Create a model with the original input as input and all layer outputs as outputs
Activation_Model = Model(model.input, Model_Outputs)
Activations = Activation_Model.predict(Image_Tensor)
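If you only care about the first fully connected layer, you don't need to collect every layer's output; you can build an intermediate model for that single layer. A minimal sketch, assuming the model was built with tensorflow.keras layers, that picks the first Dense layer by type:
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model

# Pick the first Dense (fully connected) layer by type
fc1 = next(layer for layer in model.layers if isinstance(layer, Dense))

# Intermediate model that outputs only that layer's activations
FC1_Model = Model(model.input, fc1.output)
FC1_Activations = FC1_Model.predict(Image_Tensor)
print(FC1_Activations.shape)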
The output of the first fully connected layer (with respect to the Cats-vs-Dogs data) can be printed as follows:
print('Shape of Activation of First Fully Connected Layer is', Activations[-2].shape)
print('------------------------------------------------------------------------------------------')
print('Activation of First Fully Connected Layer is', Activations[-2])
Its output is shown below:
Shape of Activation of First Fully Connected Layer is (1, 512)
------------------------------------------------------------------------------------------
Activation of First Fully Connected Layer is [[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0.02759874 0. 0. 0. 0.
0. 0. 0.00079661 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0.04887392 0. 0.
0.04422646 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.01124999
0. 0. 0. 0. 0. 0.
0. 0. 0. 0.00286965 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0.00027195 0.
0. 0.02132209 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0.00511147 0. 0. 0.02347952 0.
0. 0. 0. 0. 0. 0.
0.02570331 0. 0. 0. 0. 0.03443285
0. 0. 0. 0. 0. 0.
0. 0.0068848 0. 0. 0. 0.
0. 0. 0. 0. 0.00936454 0.
0.00389365 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0.00152553 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0.09215052 0. 0. 0.0284613 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0.00198757 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0.02395868 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0.01150922 0.0119792
0. 0. 0. 0. 0. 0.
0.00775307 0. 0. 0. 0. 0.
0. 0. 0. 0.01026413 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0.01522083 0. 0.00377031 0. 0.
0. 0. 0. 0. 0. 0.
0. 0.02235368 0. 0. 0. 0.
0. 0. 0. 0. 0.00317057 0.
0. 0. 0. 0. 0. 0.
0.03029975 0. 0. 0. 0. 0.
0. 0. 0.03843511 0. 0. 0.
0. 0. 0. 0. 0. 0.02327696
0.00557329 0. 0.02251234 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0.01655817 0. 0.
0. 0. 0. 0. 0.00221658 0.
0. 0. 0. 0.02087847 0. 0.
0. 0. 0.02594821 0. 0. 0.
0. 0. 0.01515464 0. 0. 0.
0. 0. 0. 0. 0.00019883 0.
0. 0. 0. 0. 0. 0.00213376
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.00237587
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0.02521542 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0.00490679 0. 0.04504126 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. ]]
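Coming back to the original goal of mining association rules: once you have the activation vector of the first fully connected layer for each image, one possible way to turn the k largest activation magnitudes into the items of a transaction is sketched below (assuming k = 10 and NumPy; the item identifiers are simply the indices of the selected units):
import numpy as np

k = 10  # assumed number of items per transaction
fc1_activations = Activations[-2]  # shape (num_images, 512), first fully connected layer

transactions = []
for activation_vector in fc1_activations:
    # Indices of the k activations with the largest magnitude
    top_k_items = np.argsort(-np.abs(activation_vector))[:k]
    # Each image becomes one transaction; each selected unit index is an item
    transactions.append(set(int(i) for i in top_k_items))

print(transactions)
These transactions can then be fed to any Apriori implementation.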
For more information, please refer to Section 5.4.1, Visualizing intermediate activations, of the book Deep Learning with Python by Francois Chollet, the creator of Keras.
Hope this helps. Happy Learning!
Upvotes: 2