I have an embedding matrix e defined as follows:

e = tf.get_variable(name="embedding", shape=[n_e, d],
                    initializer=tf.contrib.layers.xavier_initializer(uniform=False))

where n_e refers to the number of entities and d is the number of latent dimensions. For this example, say d = 10.
Training:
optimizer = tf.train.GradientDescentOptimizer(0.01)
grads_and_vars = optimizer.compute_gradients(loss)
train_op = optimizer.apply_gradients(grads_and_vars, global_step=global_step)
The model is saved after training.
At some point later, new entities (e.g., 2) are added, resulting in n_e_new entities in total. Now I would like to retrain the model while retaining the embeddings for the already-trained entities, i.e., retraining only the delta (the 2 new entities).
I load the saved e and initialize the new embedding matrix as follows:
init_e = np.zeros((n_e_new, d), dtype=np.float32)
r = list(range(n_e_new - 2))
init_e[r, :] = # load e from saved model
e = tf.get_variable(name="embedding", initializer=init_e)
gather_e = tf.nn.embedding_lookup(e, [n_e, n_e+1])
Training:
optimizer = tf.train.GradientDescentOptimizer(0.01)
grads_and_vars = optimizer.compute_gradients(loss, gather_e)
train_op = optimizer.apply_gradients(grads_and_vars, global_step=global_step)
I get an error at compute_gradients:
NotImplementedError: ('Trying to optimize unsupported type ', )
I understand that the second parameter gather_e to compute_gradients is not a variable, but I cannot figure out how to achieve this partial training/update.
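For reference, compute_gradients only accepts tf.Variable objects in its var_list; the output of an embedding_lookup is a plain tensor, which triggers exactly this error. A minimal illustration of the distinction:

optimizer = tf.train.GradientDescentOptimizer(0.01)
# This works, but it would update *all* rows of e, not just the two new ones:
grads_and_vars = optimizer.compute_gradients(loss, var_list=[e])
# This fails, because gather_e is a Tensor produced by embedding_lookup,
# not a Variable the optimizer can update:
# grads_and_vars = optimizer.compute_gradients(loss, var_list=[gather_e])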
P.S. I also had a look at this post, but cannot seem to find a solution there either.
EDIT: Code sample (as per the approach suggested by @meruf):
if new_data_available:
    # e holds only the new rows; e_old holds the pretrained rows
    e = tf.get_variable(name="embedding", shape=[n_e_new - n_e, d],
                        initializer=tf.contrib.layers.xavier_initializer(uniform=False))
    e_old = tf.get_variable(name="embedding_old", initializer=<load e from saved model>,
                            trainable=False)
    e_new = tf.concat([e_old, e], 0)
else:
    e = tf.get_variable(name="embedding", shape=[n_e, d],
                        initializer=tf.contrib.layers.xavier_initializer(uniform=False))
Lookup is as follows:
if new_data_available:
    var_p = tf.nn.embedding_lookup(e_new, indices)
else:
    var_p = tf.nn.embedding_lookup(e, indices)

loss = # some operations on var_p and other variables that result from the lookup above
The issue is that when new_data_available is true, neither e nor e_new changes from one epoch to the next; they remain the same.
Answer:
You don't need to change anything at the optimizer level. You can easily tell TensorFlow which variables are trainable and which are not.
Let's take a look at the definition of tf.get_variable():
tf.get_variable(
name,
shape=None,
dtype=None,
initializer=None,
regularizer=None,
trainable=True,
collections=None,
caching_device=None,
partitioner=None,
validate_shape=True,
use_resource=None,
custom_getter=None,
constraint=None
)
Here the trainable parameter indicates whether the variable is trainable or not. When you do not want to train a variable, set it to False.
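For example, a minimal sketch of freezing a pretrained variable (pretrained_values here is an assumed NumPy array loaded from your checkpoint):

frozen = tf.get_variable(name="frozen_embedding",
                         initializer=pretrained_values,  # assumed: NumPy array from the saved model
                         trainable=False)
# With trainable=False the variable is left out of
# tf.GraphKeys.TRAINABLE_VARIABLES, so optimizers never update it.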
For your case, make two sets of variables: one with trainable=True, the other with trainable=False.
Assume you have 100 pretrained embeddings and 10 new ones to train. Load the pretrained embeddings into A and the new ones into B.
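A minimal sketch of that split, assuming old_values is a NumPy array of shape [100, d] loaded from the saved model:

d = 10
A = tf.get_variable(name="A", initializer=old_values,  # 100 pretrained rows
                    trainable=False)                    # frozen
B = tf.get_variable(name="B", shape=[10, d],            # 10 new rows to train
                    initializer=tf.contrib.layers.xavier_initializer(uniform=False))
# Only B shows up in tf.trainable_variables(), so the optimizer updates B alone.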
Note: For implementation details, you should take a look at the tf.cond function for runtime decisions, mostly for the lookup, because your new B embeddings are indexed starting from 0, while in your dataset or program you may have assigned them indices starting after the pretrained embeddings. So in TensorFlow you can take the runtime decision sketched in this pseudocode:
if index_number >= number_of_pretrained_embeddings:
    index_number = index_number - number_of_pretrained_embeddings
    look up index_number in the B matrix
else:
    look up index_number in the A matrix
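One way to make this concrete without tf.cond (so a batch may even mix old and new ids) is an element-wise remap with tf.where and tf.gather; indices and n_pretrained below are assumed names for the int32 id tensor and the number of pretrained rows:

# is_new marks ids that refer to the new (trainable) table B
is_new = indices >= n_pretrained
# clamp out-of-range ids before gathering so both lookups stay valid
safe_old = tf.where(is_new, tf.zeros_like(indices), indices)
safe_new = tf.where(is_new, indices - n_pretrained, tf.zeros_like(indices))
# pick, row by row, the embedding from B for new ids and from A for old ids
var_p = tf.where(is_new, tf.gather(B, safe_new), tf.gather(A, safe_old))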
An IPython notebook of the example (slightly different from the example given here).
Update: Let's take a look at an example of what I meant:
import tensorflow as tf
y_ = tf.placeholder(tf.float32, [None, 2])
x = tf.placeholder(tf.int32, [None])
z = tf.placeholder(tf.bool, [])  # whether the examples in x contain new data or not

e = tf.get_variable(name="embedding", shape=[5, 10],
                    initializer=tf.contrib.layers.xavier_initializer(uniform=False))  # new, trainable
e_old = tf.get_variable(name="embedding1", shape=[5, 10],
                        initializer=tf.contrib.layers.xavier_initializer(uniform=False),
                        trainable=False)  # pretrained, frozen
out = tf.cond(z,lambda : e, lambda : e_old)
lookup = tf.nn.embedding_lookup(out,x)
W = tf.get_variable(name="weight", shape=[10,2],initializer=tf.contrib.layers.xavier_initializer(uniform=False))
l = tf.nn.relu(tf.matmul(lookup,W))
y = tf.nn.softmax(l)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
We print the values so that we can check later whether they change.
e_out_tf,e_out_old_tf = sess.run([e,e_old])
print("New Data ", e_out_tf)
print("Old Data", e_out_old_tf)
('New Data ', array([[-0.38952214, -0.37217963, 0.11370762, -0.13024905, 0.11420489,
-0.09138191, 0.13781562, -0.1624797 , -0.27410012, -0.5404499 ],
[-0.0065698 , 0.04728106, 0.53637034, -0.13864517, -0.36171854,
0.40325132, 0.7172644 , -0.28067762, -0.0258827 , -0.5615116 ],
[-0.17240004, 0.3765518 , 0.4658525 , 0.16545495, -0.37515178,
-0.39557686, -0.50662124, -0.06570222, -0.3605038 , 0.13746035],
[ 0.19647208, -0.16588202, 0.5739292 , 0.43803877, -0.05350745,
0.71350956, 0.39937392, -0.45939735, 0.09050641, -0.18077391],
[-0.05588558, 0.7295865 , 0.42288807, 0.57227516, 0.7268311 ,
-0.1194113 , 0.28589466, 0.09422033, -0.10094754, 0.3942643 ]],
dtype=float32))
('Old Data', array([[ 0.5308224 , -0.14003026, -0.7685277 , 0.06644323, -0.02585996,
-0.1713268 , 0.04987739, 0.01220775, 0.33571896, 0.19891626],
[ 0.3288728 , -0.09298109, 0.14795913, 0.21343362, 0.14123142,
-0.19770677, 0.7366793 , 0.38711038, 0.37526497, 0.440099 ],
[-0.29200613, 0.4852043 , 0.55407804, -0.13675605, -0.2815263 ,
-0.00703347, 0.31396288, -0.7152872 , 0.0844975 , 0.4210107 ],
[ 0.5046112 , 0.3085646 , 0.19497707, -0.5193338 , -0.0429871 ,
-0.5231836 , -0.38976955, -0.2300536 , -0.00906788, -0.1689194 ],
[-0.1231837 , 0.54029703, 0.45702592, -0.07886257, -0.6420077 ,
-0.24090563, -0.02165782, -0.44103763, -0.20914222, 0.40911582]],
dtype=float32))
Now we will test our theory:
1. Does the non-trainable variable change?
2. Does the trainable variable change?
We declared an additional placeholder z to indicate whether our input contains new data or old data. Here, index 0 refers to new data, which is trainable when z is True.
feed_dict={x: [0],z:True}
lookup_tf = sess.run([lookup], feed_dict=feed_dict)
print(lookup_tf)
[array([[-0.38952214, -0.37217963, 0.11370762, -0.13024905, 0.11420489,
-0.09138191, 0.13781562, -0.1624797 , -0.27410012, -0.5404499 ]],
dtype=float32)]
So when you send a batch, make sure that it contains only old data or only new data.
feed_dict={x: [0], y_: [[0,1]], z:True}
_, = sess.run([train_step], feed_dict=feed_dict)
lookup_tf = sess.run([lookup], feed_dict=feed_dict)
print(lookup_tf)
[array([[-0.559212 , -0.362611 , 0.06011545, -0.02056453, 0.26133284,
-0.24933788, 0.18598196, -0.00602196, -0.12775017, -0.6666256 ]],
dtype=float32)]
See, index 0 contains new data that is trainable, and its value has changed from the previous value because of the SGD update.
feed_dict={x: [0], y_: [[0,1]], z:False}
lookup_tf = sess.run([lookup], feed_dict=feed_dict)
print(lookup_tf)
_, = sess.run([train_step], feed_dict=feed_dict)
lookup_tf = sess.run([lookup], feed_dict=feed_dict)
print(lookup_tf)
[array([[ 0.5308224 , -0.14003026, -0.7685277 , 0.06644323, -0.02585996,
-0.1713268 , 0.04987739, 0.01220775, 0.33571896, 0.19891626]],
dtype=float32)]
[array([[ 0.5308224 , -0.14003026, -0.7685277 , 0.06644323, -0.02585996,
-0.1713268 , 0.04987739, 0.01220775, 0.33571896, 0.19891626]],
dtype=float32)]
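Here z is False, so the lookup reads the non-trainable e_old, and its values are identical before and after the training step. To follow the one-kind-per-batch rule above, a simple sketch (ids and labels are assumed NumPy arrays and n_pretrained the assumed number of pretrained rows, with new ids remapped to start at 0 as in the pseudocode):

import numpy as np

new_mask = ids >= n_pretrained  # assumed: ids is a NumPy array of entity ids
old_ids, old_labels = ids[~new_mask], labels[~new_mask]
new_ids, new_labels = ids[new_mask] - n_pretrained, labels[new_mask]

if len(old_ids) > 0:
    sess.run(train_step, feed_dict={x: old_ids, y_: old_labels, z: False})
if len(new_ids) > 0:
    sess.run(train_step, feed_dict={x: new_ids, y_: new_labels, z: True})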