Marti Colominas

Reputation: 33

How to add 'instance keys' to a keras model input for batch prediction in gcloud ai-platform?

I'm trying to add 'keys' so I can match instances to the batch prediction output from Google AI Platform, but my model only accepts a single input.

It looks like this:

input = tf.keras.layers.Input(shape=(max_len,))

x = tf.keras.layers.Embedding(max_words, embed_size, weights=[embedding_matrix], trainable=False)(input)
x = tf.keras.layers.Bidirectional(tf.keras.layers.GRU(128, return_sequences=True, dropout=0.3, recurrent_dropout=0.1))(x)
x = tf.keras.layers.Conv1D(128, kernel_size=3, padding="valid", kernel_initializer="glorot_uniform")(x)

avg_pool = tf.keras.layers.GlobalAveragePooling1D()(x)
max_pool = tf.keras.layers.GlobalMaxPooling1D()(x)

x = tf.keras.layers.concatenate([avg_pool, max_pool])

preds = tf.keras.layers.Dense(2, activation="sigmoid")(x)
model = tf.keras.Model(input, preds)
model.summary()
model.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), metrics=['accuracy','binary_crossentropy'])

I came across this article, but can't figure out how to apply this to my code.

Any ideas? Thanks!

Upvotes: 3

Views: 784

Answers (2)

dhodun

Reputation: 1

Alternatively, you can change the serving function that is serialized into the SavedModel (or add a second serving function). This is handy because one serving infrastructure (e.g. TF Serving, Google Cloud AI Platform online/batch) can then serve both keyed and unkeyed predictions. It also lets you add keys to a SavedModel when you don't have access to the underlying Keras code that generated it.

# Export the unkeyed model, then reload it as a Keras model
tf.saved_model.save(model, MODEL_EXPORT_PATH)
loaded_model = tf.keras.models.load_model(MODEL_EXPORT_PATH)

# Wrap the model in a keyed serving function: the key tensor is
# returned untouched alongside the predictions
@tf.function(input_signature=[tf.TensorSpec([None], dtype=tf.string),
                              tf.TensorSpec([None, 28, 28], dtype=tf.float32)])
def keyed_prediction(key, image):
    pred = loaded_model(image, training=False)
    return {
        'preds': pred,
        'key': key
    }

# Re-export with the keyed function as the default serving signature
loaded_model.save(KEYED_EXPORT_PATH, signatures={'serving_default': keyed_prediction})

More examples here and the notebook here.
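To see the pass-through behavior without exporting anything, here is a minimal self-contained sketch of the same wrapper; the tiny `Sequential` model is a hypothetical stand-in for a real one with the same `(None, 28, 28)` image input:

```python
import tensorflow as tf

# Hypothetical stand-in model with a (None, 28, 28) image input
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="sigmoid"),
])

# Keyed wrapper: keys flow straight through, predictions come from the model
@tf.function(input_signature=[
    tf.TensorSpec([None], dtype=tf.string),
    tf.TensorSpec([None, 28, 28], dtype=tf.float32),
])
def keyed_prediction(key, image):
    pred = model(image, training=False)
    return {"preds": pred, "key": key}

# Call it the way a serving signature would be called
out = keyed_prediction(
    key=tf.constant(["id-1", "id-2"]),
    image=tf.zeros([2, 28, 28], dtype=tf.float32),
)
print(out["key"].numpy())   # keys come back unchanged
print(out["preds"].shape)   # (2, 2)
```

Because the key never touches the model graph, it works with any dtype you can put in a `TensorSpec`, and the unkeyed model is left untouched.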

Upvotes: 0

Tlaquetzal

Reputation: 2850

Following your code, you could do something like this:

First, get the key value from the input:

input = tf.keras.layers.Input(shape=(max_len,))
key_raw = tf.keras.layers.Input(shape=(), name='key')

Reshape it for the later concatenation:

key = tf.keras.layers.Reshape((1,))(key_raw)

Concatenate the key with the final result:

preds = tf.keras.layers.Dense(2, activation="sigmoid")(x)
preds = tf.keras.layers.concatenate([preds, key])

Add the key to the inputs of your model:

model = tf.keras.Model([input, key_raw], preds)

Example input JSON file:

{"input_1": [1.2,1.1,3.3,4.3], "key":1}
{"input_1": [0.3, 0.4, 1.5, 1], "key":2}

Now you can read the key back as the last element of each prediction. Example output:

[0.48686566948890686, 0.5113844275474548, 1.0]
[0.505149781703949, 0.5156428813934326, 2.0]
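Putting the pieces together, here is a minimal runnable sketch of the whole pattern; a single `Dense` layer stands in for the embedding/GRU/conv stack, and `max_len = 4` matches the JSON example above:

```python
import numpy as np
import tensorflow as tf

max_len = 4  # stand-in value matching the 4-element inputs above

# Two inputs: the features and a pass-through key
features = tf.keras.layers.Input(shape=(max_len,), name="input_1")
key_raw = tf.keras.layers.Input(shape=(), name="key")
key = tf.keras.layers.Reshape((1,))(key_raw)

# Hypothetical stand-in for the real embedding/GRU/conv stack
x = tf.keras.layers.Dense(8, activation="relu")(features)
preds = tf.keras.layers.Dense(2, activation="sigmoid")(x)

# Append the key as the last column of the output
preds = tf.keras.layers.concatenate([preds, key])
model = tf.keras.Model([features, key_raw], preds)

out = model.predict({
    "input_1": np.array([[1.2, 1.1, 3.3, 4.3],
                         [0.3, 0.4, 1.5, 1.0]], dtype="float32"),
    "key": np.array([1.0, 2.0], dtype="float32"),
})
print(out)  # last column of each row is the key: 1.0 and 2.0
```

One trade-off of this approach versus a custom serving function: the key becomes part of the model graph, so you also have to feed a (dummy) key column during training and slice it off the labels, whereas a serving-signature wrapper leaves training untouched.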

Upvotes: 3
