Reputation: 2789
I am trying to extract feature vectors from a Dense layer I added after fine-tuning the Inception V3 CNN in Keras on new data. Basically, I load the network structure and its ImageNet weights, add two Dense layers (my data is a two-class problem), and update the weights of only part of the network, as the code below shows:
from keras.applications.inception_v3 import InceptionV3
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras.preprocessing.image import ImageDataGenerator

# create the base pre-trained model
base_model = InceptionV3(weights='imagenet', include_top=False)

# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(64, activation='relu')(x)
# and a logistic layer -- I have 2 classes only
predictions = Dense(2, activation='softmax')(x)

# this is the model to train
model = Model(inputs=base_model.input, outputs=predictions)

# first: train only the top layers (which were randomly initialized),
# i.e. freeze all convolutional InceptionV3 layers
for layer in base_model.layers:
    layer.trainable = False

# compile the model (should be done *after* setting layers to non-trainable)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
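# (optional sanity check, not in the original post): after freezing, only the
# newly added top layers should still report trainable=True
print('trainable layers:', sum(layer.trainable for layer in model.layers))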
# load new training data
x_train, x_test, y_train, y_test = load_data(train_data, test_data, train_labels, test_labels)

datagen = ImageDataGenerator()
datagen.fit(x_train)

epochs = 1
batch_size = 32

# train the model on the new data for a few epochs
model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
                    steps_per_epoch=x_train.shape[0] // batch_size,
                    epochs=epochs,
                    validation_data=(x_test, y_test))
# at this point, the top layers are well trained and
#I can start fine-tuning convolutional layers from inception V3.
#I will freeze the bottom N layers and train the remaining top layers.
#I chose to train the top 2 inception blocks, i.e. I will freeze the
#first 249 layers and unfreeze the rest:
for layer in model.layers[:249]:
    layer.trainable = False
for layer in model.layers[249:]:
    layer.trainable = True
# I need to recompile the model for these modifications to take effect
# I use SGD with a low learning rate
from keras.optimizers import SGD
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy', metrics=['binary_accuracy'])
# I train our model again (this time fine-tuning the top 2 inception blocks
# alongside the top Dense layers)
model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
                    steps_per_epoch=x_train.shape[0] // batch_size,
                    epochs=epochs,
                    validation_data=(x_test, y_test))
This code runs perfectly well, and it is not my problem.
My problem is that, after fine-tuning this network, I want the output of the last-but-one layer on my train and test data, because I want to use this new network as a feature extractor. I want the output from this part of the network, which you can see in the code above:
x = Dense(64, activation='relu')(x)
I tried the following code but it does not work:
from keras import backend as K
inputs = [K.learning_phase()] + model.inputs
_convout1_f = K.function(inputs, model.get_layer(dense_1).output)
The error is the following:
_convout1_f = K.function(inputs, model.get_layer(dense_1).output)
NameError: global name 'dense_1' is not defined
How can I extract features from the new layer I added after fine-tuning a pre-trained network on my new data? What did I do wrong here?
Upvotes: 1
Views: 1575
Reputation: 2789
I solved my own problem with the code below. I hope it works for you too.
First, the K.function that extracts the features is this:
_convout1_f = K.function([model.layers[0].input, K.learning_phase()], [model.layers[312].output])
where 312 is the index of the layer whose output I want to extract (the Dense(64) layer I added on top of the base network).
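If you are not sure which index your layer has, a quick way (not part of my original solution) is to list every layer with its index; the added Dense layers show up at the end:
# enumerate the fine-tuned model's layers to find the right index
for i, layer in enumerate(model.layers):
    print(i, layer.name)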
Then I pass this _convout1_f function to a feature-generation routine like this:
features_train, features_test=feature_vectors_generator(x_train,x_test,_convout1_f)
The function that extracts these features looks like this:
import numpy

def feature_vectors_generator(x_train, x_test, _convout1_f):
    print('Generating Training Feature Vectors...')
    batch_size = 100
    index = 0
    # number of batches, rounding up to cover a final partial batch
    if x_train.shape[0] % batch_size == 0:
        max_iterations = x_train.shape[0] // batch_size
    else:
        max_iterations = (x_train.shape[0] // batch_size) + 1
    for i in range(max_iterations):
        if i == 0:
            # second list element is the learning phase flag: 1 = training
            features = _convout1_f([x_train[index:index + batch_size], 1])[0]
            index = index + batch_size
            features = numpy.squeeze(features)
            features_train = features
        elif i == max_iterations - 1:
            # last (possibly partial) batch
            features = _convout1_f([x_train[index:x_train.shape[0]], 1])[0]
            features = numpy.squeeze(features)
            features_train = numpy.append(features_train, features, axis=0)
        else:
            features = _convout1_f([x_train[index:index + batch_size], 1])[0]
            index = index + batch_size
            features = numpy.squeeze(features)
            features_train = numpy.append(features_train, features, axis=0)

    print('Generating Testing Feature Vectors...')
    batch_size = 100
    index = 0
    if x_test.shape[0] % batch_size == 0:
        max_iterations = x_test.shape[0] // batch_size
    else:
        max_iterations = (x_test.shape[0] // batch_size) + 1
    for i in range(max_iterations):
        if i == 0:
            # learning phase flag 0 = test
            features = _convout1_f([x_test[index:index + batch_size], 0])[0]
            index = index + batch_size
            features = numpy.squeeze(features)
            features_test = features
        elif i == max_iterations - 1:
            features = _convout1_f([x_test[index:x_test.shape[0]], 0])[0]
            features = numpy.squeeze(features)
            features_test = numpy.append(features_test, features, axis=0)
        else:
            features = _convout1_f([x_test[index:index + batch_size], 0])[0]
            index = index + batch_size
            features = numpy.squeeze(features)
            features_test = numpy.append(features_test, features, axis=0)

    return (features_train, features_test)
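As a side note, the same features can usually be obtained with less bookkeeping by wrapping the intermediate layer in its own Model and letting predict handle the batching. This is only a minimal sketch under the same assumption that model.layers[312] is the added Dense(64) layer, not the code I actually used:
from keras.models import Model

# build a second model that shares weights with the fine-tuned one and
# outputs the intermediate layer's activations directly
feature_extractor = Model(inputs=model.input, outputs=model.layers[312].output)
features_train = feature_extractor.predict(x_train, batch_size=100)
features_test = feature_extractor.predict(x_test, batch_size=100)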
Upvotes: 1