ixeption

Reputation: 2050

TF2: Add preprocessing to a pretrained SavedModel for TensorFlow Serving (extending the graph of a SavedModel)

I upgraded to TensorFlow 2 and now I am facing a problem when extending a pre-trained model with some additional preprocessing.

I have a pre-trained object detection model (SSD ResNet50 FPN) that I want to deploy to TensorFlow Serving. I want to load the SavedModel and add the necessary preprocessing so it accepts base64-encoded JPEGs directly. I did this before with TF 1.x and another Keras model, and it works:

import tensorflow as tf  # TF 1.x
from tensorflow.keras import backend as K
from tensorflow.keras.models import load_model

string_inp = tf.placeholder(tf.string, shape=(None,), name='base64_in')
imgs_map = tf.map_fn(
    tf.image.decode_image,
    string_inp,
    dtype=tf.uint8
)
imgs_map.set_shape((None, None, None, 3))
imgs = tf.image.resize_images(imgs_map, [456, 456], method=tf.image.ResizeMethod.BILINEAR)
imgs = tf.reshape(imgs, (-1, 456, 456, 3))
img_uint8 = tf.image.convert_image_dtype(imgs, dtype=tf.uint8, saturate=False)

pretrained_model = load_model('my-keras-model.h5', compile=False)
output_tensor = pretrained_model(img_uint8)

signature = tf.saved_model.signature_def_utils.predict_signature_def(
    inputs={'jpegbase64': string_inp}, outputs={'probabilities': output_tensor})

builder = tf.saved_model.builder.SavedModelBuilder('export_dir')  # target directory
builder.add_meta_graph_and_variables(
    sess=K.get_session(),
    tags=[tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            signature
    })
builder.save()

But once I try to get it working with a SavedModel loaded via model = tf.keras.models.load_model("my_saved_model"), it throws: TypeError: 'AutoTrackable' object is not callable

I guess stacking the model on top of my custom input tensor is not supported this way, but I couldn't find any other working solution. I also experimented with connecting the input tensor from the SavedModel directly to the img_uint8 tensor, but I don't know how to wire them up correctly. Any ideas?

Upvotes: 1

Views: 1162

Answers (1)

ixeption

Reputation: 2050

Ok, I found a solution, here we go:

import tensorflow as tf  # TF 1.x API; on TF 2 use:
# import tensorflow.compat.v1 as tf
# tf.disable_eager_execution()

graph_model = tf.Graph()
sess = tf.Session(graph=graph_model)
sess.as_default()
graph_model.as_default()
model = tf.saved_model.load(sess, export_dir="myModel", tags=['serve'])
graph_model_def = graph_model.as_graph_def()

# here is the important step: create a new graph and DON'T create a new session explicitly
graph_base64 = tf.Graph()
graph_base64.as_default()

string_inp = tf.placeholder(tf.string, shape=(None,), name='base64_in')
imgs_map = tf.map_fn(
    tf.image.decode_image,  # decodes each byte string into a uint8 image tensor
    string_inp,
    dtype=tf.uint8
)
imgs_map.set_shape((None, None, None, 3))
imgs = tf.image.resize_images(imgs_map, [300, 300], method=tf.image.ResizeMethod.BILINEAR)
imgs = tf.reshape(imgs, (-1, 300, 300, 3))
img_uint8 = tf.image.convert_image_dtype(imgs, dtype=tf.uint8, saturate=False)

# import the model graph, rewiring its input to the new preprocessing output
tf.import_graph_def(graph_model_def, name='', input_map={"image_tensor:0": img_uint8})
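The combined graph still has to be exported before TensorFlow Serving can load it. A minimal, self-contained sketch of that export step, where a toy graph stands in for the combined detection graph above (the tensor names, signature keys, and directory are examples, not taken from the question's model):

```python
import os
import tempfile
import tensorflow.compat.v1 as tf  # TF1-style graph/session API under TF 2

tf.disable_eager_execution()

# Toy stand-in for the combined graph built above.
sess = tf.Session()
string_inp = tf.placeholder(tf.string, shape=(None,), name='base64_in')
weight = tf.Variable(1.0)  # gives the builder at least one variable to save
output = tf.cast(tf.strings.length(string_inp), tf.float32) * weight
sess.run(tf.global_variables_initializer())

# Write a SavedModel with a predict signature mapping the string input
# to the output tensor, exactly as in the TF 1.x snippet from the question.
export_dir = os.path.join(tempfile.mkdtemp(), 'export_b64')
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
signature = tf.saved_model.signature_def_utils.predict_signature_def(
    inputs={'jpegbase64': string_inp}, outputs={'probabilities': output})
builder.add_meta_graph_and_variables(
    sess=sess,
    tags=[tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            signature
    })
builder.save()
```

For the real model you would pass the session holding the combined graph and pick the actual detection output tensors instead of the toy `output`.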

The important part is to NOT create a new session. If you do so, it won't work anymore. Here is a more detailed description.
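Once such a model is served, clients can send JPEG bytes through the REST API using the `{"b64": ...}` convention, which TensorFlow Serving base64-decodes before the bytes reach the string placeholder (`base64_in` above). A small sketch of building that request body; the model name and URL in the comment are examples:

```python
import base64
import json

def make_predict_request(jpeg_bytes):
    """Build the JSON body for POST /v1/models/<name>:predict."""
    encoded = base64.b64encode(jpeg_bytes).decode("ascii")
    # TF Serving decodes {"b64": ...} values back to raw bytes.
    return json.dumps({"instances": [{"b64": encoded}]})

body = make_predict_request(b"<raw jpeg bytes>")
# e.g. requests.post("http://localhost:8501/v1/models/detector:predict", data=body)
```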

Upvotes: 1
