VansFannel

Reputation: 45921

Migrate Tensorflow 1.x code to Tensorflow 2.x using Keras' model class

I've just started to learn Tensorflow 2.1.0 with Keras 2.3.1 and Python 3.7.7.

I have found this "Omniglot Character Set Classification Using Prototypical Network" Jupyter Notebook on GitHub, which I think works with Tensorflow 1.x.

My problem is with this piece of code:

for epoch in range(num_epochs):

    for episode in range(num_episodes):

        # select 60 classes
        episodic_classes = np.random.permutation(no_of_classes)[:num_way]

        support = np.zeros([num_way, num_shot, img_height, img_width], dtype=np.float32)

        query = np.zeros([num_way, num_query, img_height, img_width], dtype=np.float32)


        for index, class_ in enumerate(episodic_classes):
            selected = np.random.permutation(num_examples)[:num_shot + num_query]
            support[index] = train_dataset[class_, selected[:num_shot]]

            # 5 query points per class
            query[index] = train_dataset[class_, selected[num_shot:]]

        support = np.expand_dims(support, axis=-1)
        query = np.expand_dims(query, axis=-1)
        labels = np.tile(np.arange(num_way)[:, np.newaxis], (1, num_query)).astype(np.uint8)
        _, loss_, accuracy_ = sess.run([train, loss, accuracy], feed_dict={support_set: support, query_set: query, y:labels})

        if (episode+1) % 10 == 0:
            print('Epoch {} : Episode {} : Loss: {}, Accuracy: {}'.format(epoch+1, episode+1, loss_, accuracy_))

Is there any tutorial, book, or article to help me migrate this code to Tensorflow 2.x and Keras, using Keras' Model class?

I want to write the code from the link above like this one:

import numpy as np
import os
import skimage.io as io
import skimage.transform as trans
from keras.models import *
from keras.layers import *
from keras.optimizers import *
from keras.callbacks import ModelCheckpoint, LearningRateScheduler
from keras import backend as keras

def unet(pretrained_weights = None,input_size = (256,256,1)):
    inputs = Input(input_size)
    conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs)
    conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
    conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2)
    conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
    conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3)
    conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv4)
    drop4 = Dropout(0.5)(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)

    conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4)
    conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5)
    drop5 = Dropout(0.5)(conv5)

    up6 = Conv2D(512, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop5))
    merge6 = concatenate([drop4,up6], axis = 3)
    conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6)
    conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv6)

    up7 = Conv2D(256, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv6))
    merge7 = concatenate([conv3,up7], axis = 3)
    conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7)
    conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv7)

    up8 = Conv2D(128, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv7))
    merge8 = concatenate([conv2,up8], axis = 3)
    conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge8)
    conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv8)

    up9 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv8))
    merge9 = concatenate([conv1,up9], axis = 3)
    conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge9)
    conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
    conv9 = Conv2D(2, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
    conv10 = Conv2D(1, 1, activation = 'sigmoid')(conv9)

    model = Model(inputs=inputs, outputs=conv10)

    model.compile(optimizer = Adam(lr = 1e-4), loss = 'binary_crossentropy', metrics = ['accuracy'])

    #model.summary()

    if(pretrained_weights):
        model.load_weights(pretrained_weights)

    return model

and in train.py:

model = unet(...)
model.compile(...)
model.fit(...)

Upvotes: 1

Views: 1283

Answers (1)

Orphee Faucoz

Reputation: 1290

There is this migration tutorial from the TensorFlow docs that sums everything up.

The most important things are that Sessions don't exist anymore, and that models should be created using tensorflow.keras.layers.
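
For example, the standalone keras imports from the question's U-Net would become tensorflow.keras imports (a sketch listing only what that snippet actually uses):

import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Dropout, UpSampling2D, concatenate
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint, LearningRateScheduler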

Now, when training a model, you have two choices: either you use the Keras way (compile/fit), or you use a GradientTape (which is kind of the old way).

That means you have two options: one that won't change much in your code (GradientTape), and one that will make you change a few things (Keras).

Gradient Tape

GradientTape is used when you want to write your own training loop and compute the gradients yourself; it is quite similar to Tensorflow 1.X.

  • Build your model using the Keras API:
import tensorflow as tf

def unet(...):
    inputs = tf.keras.layers.Input(shape_images)
    ...
    model = tf.keras.Model(inputs=inputs, outputs=conv10)

    # compile() is optional here: with GradientTape you apply the gradients yourself
    model.compile(...)

    return model

...

model = unet(...)
  • Define a loss (see the note after this list):
mse = tf.keras.losses.MeanSquaredError()
  • Define optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
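
Note that MeanSquaredError above is only an example loss; since the question's code is a classification task with integer labels, something like sparse categorical crossentropy is probably a better fit (an assumption on my part, depending on what the model outputs):

# assumes the model outputs per-class probabilities and labels are integer class indices
scce = tf.keras.losses.SparseCategoricalCrossentropy()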

Then, you do the training as you usually would, except that you replace the old Session mechanism with a GradientTape:


for epoch in range(num_epochs):

    for episode in range(num_episodes):

        # select 60 classes
        episodic_classes = np.random.permutation(no_of_classes)[:num_way]

        support = np.zeros([num_way, num_shot, img_height, img_width], dtype=np.float32)

        query = np.zeros([num_way, num_query, img_height, img_width], dtype=np.float32)


        for index, class_ in enumerate(episodic_classes):
            selected = np.random.permutation(num_examples)[:num_shot + num_query]
            support[index] = train_dataset[class_, selected[:num_shot]]

            # 5 query points per class
            query[index] = train_dataset[class_, selected[num_shot:]]

        support = np.expand_dims(support, axis=-1)
        query = np.expand_dims(query, axis=-1)
        labels = np.tile(np.arange(num_way)[:, np.newaxis], (1, num_query)).astype(np.uint8)

        # No session here but a Gradient computing

        with tf.GradientTape() as tape:
            prediction = model(support) # or whatever you need as input of your model
            loss = mse(labels, prediction)
        # apply gradient descent
        grads = tape.gradient(loss, model.trainable_weights)
        optimizer.apply_gradients(zip(grads, model.trainable_weights))
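
As a side note (not strictly required), you can wrap the training step in tf.function so TensorFlow compiles it to a graph, which usually recovers the speed of the old Session-based execution. A minimal sketch, reusing the model, mse, and optimizer defined above:

@tf.function
def train_step(inputs, targets):
    # forward pass, recorded on the tape
    with tf.GradientTape() as tape:
        prediction = model(inputs, training=True)
        loss = mse(targets, prediction)
    # backward pass: compute and apply gradients
    grads = tape.gradient(loss, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
    return loss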

Keras

For Keras you need to change the way you feed the data: you won't have a for loop, since you use fit; instead you need to implement a Generator, or any data structure that can be iterated through. That means you basically need a list of (X, y) pairs, where data_struct[0] gives you the first (X, y) pair; a sketch follows below.
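
A minimal sketch of such a structure, using tf.keras.utils.Sequence (the class name EpisodeSequence and the X_list/y_list arguments are hypothetical, just to illustrate the contract that fit expects):

import tensorflow as tf

class EpisodeSequence(tf.keras.utils.Sequence):
    """Iterable of (X, y) pairs, as model.fit expects."""
    def __init__(self, X_list, y_list):
        self.X_list = X_list  # list of input batches
        self.y_list = y_list  # list of matching label batches

    def __len__(self):
        # number of batches per epoch
        return len(self.X_list)

    def __getitem__(self, idx):
        # data_struct[0] gives the first (X, y) pair, as described above
        return self.X_list[idx], self.y_list[idx]

With something like this in place, you can pass an EpisodeSequence instance wherever data_struct appears below.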

Once you have this data structure it's easy.

  • Define the model just like for GradientTape

  • Define the optimizer just like for GradientTape

  • Compile the model:


model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy']) # Or whatever you need as loss/metrics

  • Fit the model using your data_struct
model.fit(data_struct, epochs=500) # Add validation_data if you want, callbacks, ...

Upvotes: 1
