Syzygy

Reputation: 402

Unable to freeze Keras layers in a Tensorflow workflow

I'm trying to freeze Keras layers in a TensorFlow workflow. This is how I define the graph:

import tensorflow as tf
from keras.layers import Dropout, Dense, Embedding, Flatten, concatenate
from keras import backend as K
from keras.objectives import binary_crossentropy

sess = tf.Session()
K.set_session(sess)

labels = tf.placeholder(tf.float32, shape=(None, 1))
user_id_input = tf.placeholder(tf.float32, shape=(None, 1))
item_id_input = tf.placeholder(tf.float32, shape=(None, 1))



max_user_id = all_ratings['user_id'].max()
max_item_id = all_ratings['item_id'].max()

embedding_size = 30
all_trainable = True  # setting this to False is what fails to freeze the layers

user_embedding = Embedding(output_dim=embedding_size, input_dim=max_user_id+1,
                           input_length=1, name='user_embedding', trainable=all_trainable)(user_id_input)
item_embedding = Embedding(output_dim=embedding_size, input_dim=max_item_id+1,
                           input_length=1, name='item_embedding', trainable=all_trainable)(item_id_input)



user_vecs = Flatten()(user_embedding)
item_vecs = Flatten()(item_embedding)


input_vecs = concatenate([user_vecs, item_vecs])

x = Dense(30, activation='relu')(input_vecs)
x1 = Dropout(0.5)(x)
x2 = Dense(30, activation='relu')(x1)
y = Dense(1, activation='sigmoid')(x2)

loss = tf.reduce_mean(binary_crossentropy(labels, y))

train_step = tf.train.AdamOptimizer(0.004).minimize(loss)

Then I just train the model:

with sess.as_default():
    train_step.run(..)

Everything works fine when the trainable flag is set to True. When I set it to False, however, the layers are not frozen.

I also tried minimizing only over the variables I want to train, using train_step_freeze = tf.train.AdamOptimizer(0.004).minimize(loss, var_list=[user_embedding]), and I get:

('Trying to optimize unsupported type ', <tf.Tensor 'Placeholder_33:0' shape=(?, 1) dtype=float32>)

Is it possible to use Keras layers in TensorFlow and freeze them?

EDIT

To make things clear: I want to train the model using TensorFlow, not model.fit(). The way to do it in TensorFlow seems to be to pass var_list=[...] to the minimize() method. But I get an error when doing this:

('Trying to optimize unsupported type ', <tf.Tensor 'Placeholder_33:0' shape=(?, 1) dtype=float32>)

Upvotes: 2

Views: 1877

Answers (2)

Rohan Saxena

Reputation: 3328

I finally found a way to do this.

Instead of explicitly freezing the Keras model, TensorFlow gives you the option of specifying which variables you want to train.

In the following example, I instantiate a pretrained VGG16 model from Keras, define a few layers over that model, and freeze this model (that is, train only the layers following the Keras model):

import tensorflow as tf
from tensorflow.python.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.python.keras import backend as K
import numpy as np

inputs = tf.placeholder(dtype=tf.float32, shape=(None, 224, 224, 3))
labels = tf.placeholder(dtype=tf.float32, shape=(None, 1))
model = VGG16(include_top=False, weights='imagenet')

features = model(preprocess_input(inputs))

# Define the further layers

conv = tf.layers.Conv2D(filters=1, kernel_size=(3, 3), strides=(2, 2), activation=tf.nn.relu, use_bias=True)
conv_output = conv(features)
flat = tf.layers.Flatten()
flat_output = flat(conv_output)
dense = tf.layers.Dense(1, activation=tf.nn.tanh)
dense_output = dense(flat_output)

# Define the loss and training ops

loss = tf.losses.mean_squared_error(labels, dense_output)
optimizer = tf.train.AdamOptimizer()

# Specify which variables you want to train in `var_list`
train_op = optimizer.minimize(loss, var_list=conv.variables + flat.variables + dense.variables)

To use this method, you will have to instantiate an object for each layer, as that will allow you to explicitly access the variables of that layer using layer_name.variables. Alternatively, you can use the low level API and define your own tf.Variable objects and make layers using them.

You can easily verify that the above method works:

sess = K.get_session()
K.set_session(sess)

image = np.random.randint(0, 255, size=(1, 224, 224, 3))

for _ in range(100):

    old_features = sess.run(features, feed_dict={inputs: image})
    sess.run(train_op, feed_dict={inputs: np.random.randint(0, 255, size=(2, 224, 224, 3)), labels: np.random.randint(0, 10, size=(2, 1))})
    new_features = sess.run(features, feed_dict={inputs: image})

    print(np.all(old_features == new_features))

This will print True one hundred times, meaning that the VGG16 model's weights do not change when the training op runs.

Upvotes: 5

Daniel Möller

Reputation: 86610

Keras will only make layers truly untrainable if you compile the model again before training.

Now, I don't see you compile your model anywhere, and you're mixing Keras with TensorFlow commands.

If you want Keras to work properly, you must use Keras commands.

Creating a model in Keras:

You did the right thing up to y, except for defining an input layer. Before the first embedding layer, you need:

from keras.layers import Input

labels = Input((1,)) #is this really an input?????
user_id_input = Input((1,))
item_id_input = Input((1,))

Then you create a model in Keras:

from keras.models import Model

#supposing you want it to start with two inputs and the output being y
model = Model([user_id_input, item_id_input], y)

Then you compile your model, with the optimizer and loss you want (you must make layers untrainable before this step, or compile again whenever you change that attribute):

model.compile(optimizer='adam', loss='binary_crossentropy')

And for training, you also train with Keras commands:

model.fit([Xuser, Xitem], Y, epochs=..., batch_size=..., ...)
# where Xuser and Xitem are the actual training input data and Y contains the actual labels
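Putting the freezing itself together, here is a self-contained sketch of the question's graph, simplified to a single Dense layer (the vocabulary size of 100 is made up). The key point is that `trainable = False` must be set *before* compile():

```python
from keras.layers import Input, Embedding, Flatten, concatenate, Dense
from keras.models import Model

user_id_input = Input((1,))
item_id_input = Input((1,))
user_vecs = Flatten()(Embedding(101, 30, input_length=1,
                                name='user_embedding')(user_id_input))
item_vecs = Flatten()(Embedding(101, 30, input_length=1,
                                name='item_embedding')(item_id_input))
y = Dense(1, activation='sigmoid')(concatenate([user_vecs, item_vecs]))
model = Model([user_id_input, item_id_input], y)

# Freeze the embeddings, then compile -- compiling *after* setting
# trainable is what makes the freeze take effect
model.get_layer('user_embedding').trainable = False
model.get_layer('item_embedding').trainable = False
model.compile(optimizer='adam', loss='binary_crossentropy')

# Only the Dense layer's kernel and bias remain trainable
print(len(model.trainable_weights))  # 2
```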

Upvotes: 0
