piccolo

Reputation: 2217

Converting short TensorFlow 1.13 script into TensorFlow 2.0

I am trying to learn the dynamics of TensorFlow 2.0 by converting my TensorFlow 1.13 script (below) into a TensorFlow 2.0 script. However, I am struggling to do this.

I think the main reason I am struggling is that the TensorFlow 2.0 examples I have seen all train neural networks, so they have a model which they compile and fit. However, in my simple example below I am not using a neural network, so I can't see how to adapt this code to TensorFlow 2.0 (for example, how do I replace the session?). Help is much appreciated, and thanks in advance.

data = tf.placeholder(tf.int32)
theta = tf.Variable(np.zeros(100))
p_s = tf.nn.softmax(theta)

loss = tf.reduce_mean(-tf.log(tf.gather(p_s, data)))
train_step = tf.train.AdamOptimizer().minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(10):
        for datum in sample_data(): #sample_data() is a list of integer datapoints
            _ = sess.run([train_step], feed_dict={data:datum})
    print(sess.run(p_s))

I have looked at this (which is most relevant) and so far I have come up with the below:

#data = tf.placeholder(tf.int32)
theta = tf.Variable(np.zeros(100))
p_s = tf.nn.softmax(theta)

loss = tf.reduce_mean(-tf.math.log(tf.gather(p_s, data)))  # <-- `data` no longer exists
optimizer = tf.keras.optimizers.Adam()

for epoch in range(10):
    for datum in sample_data(): 
        optimizer.apply_gradients(loss)

print(p_s)

However, the above obviously does not run, because the placeholder `data` inside the loss function does not exist anymore, and I am not sure what to replace it with. :S

Anyone? Note that I don't have a `def forward(x)` because my input datum isn't transformed; it is used directly to calculate the loss.

Upvotes: 0

Views: 679

Answers (1)

nessuno

Reputation: 27042

Instead of pointing you to the conversion tool (it exists, but I don't like it, since it mostly just prefixes the API calls with tf.compat.v1 and keeps the old TensorFlow 1.x API), I'll help you convert your code to the new version.

Sessions are gone, and so are placeholders. The reason? Code is now executed line by line: that is TensorFlow eager mode.
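To make "executed line by line" concrete, here is a NumPy-only sketch (an illustration of what the TF graph computes, not TF API) of your loss: every value exists immediately, so there is nothing left to `sess.run` later.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # subtract max for numerical stability
    return e / e.sum()

theta = np.zeros(100)
p_s = softmax(theta)          # uniform distribution: every entry is 1/100
datum = 3                     # hypothetical datapoint, stands in for sample_data()
loss = -np.log(p_s[datum])    # -log(1/100) = log(100) ≈ 4.605
```

In eager TensorFlow the same thing happens: `tf.nn.softmax(theta)` returns a concrete tensor the moment the line runs.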

To train a model you correctly have to use an optimizer. If you want to use the minimize method, in TensorFlow 2.0 you have to define the function to minimize (the loss) as a Python callable.

import numpy as np
import tensorflow as tf

# This is your "model": a vector of logits
theta = tf.Variable(np.zeros(100))

# Define the optimizer
optimizer = tf.keras.optimizers.Adam()

trainable_variables = [theta]

for epoch in range(10):
    for datum in sample_data():
        # The loss must be a callable with no arguments that returns the
        # value to minimize. The softmax must be computed *inside* the
        # callable, so the gradient can flow back to theta; if p_s were
        # computed once outside the loop, it would be a constant in eager mode.
        def loss_fn():
            p_s = tf.nn.softmax(theta)
            return tf.reduce_mean(-tf.math.log(tf.gather(p_s, datum)))
        optimizer.minimize(loss_fn, var_list=trainable_variables)
    tf.print("epoch ", epoch, " finished. p_s: ", tf.nn.softmax(theta))

Disclaimer: I haven't tested the code, but it should work (or at least give you an idea of how to implement what you're trying to achieve in TF 2).
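If you want to see what each `minimize` call boils down to, here is a NumPy-only sketch of a single plain gradient-descent step on the same loss (Adam adds per-parameter scaling and momentum on top of this; `datum` and the learning rate are illustrative assumptions):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

theta = np.zeros(100)
datum = 3                  # hypothetical datapoint, stands in for sample_data()
lr = 0.1                   # arbitrary learning rate for illustration

# For loss = -log(softmax(theta)[datum]), the gradient w.r.t. theta is the
# well-known softmax/cross-entropy gradient: softmax(theta) - one_hot(datum).
p = softmax(theta)
grad = p.copy()
grad[datum] -= 1.0

theta = theta - lr * grad  # one plain gradient-descent step
```

After the step, the probability assigned to the observed datum has increased, which is exactly the behaviour the training loop above produces over many steps.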

Upvotes: 2
