I am training a simple CNN based on a custom Estimator with TFRecords, and I am trying to export the best model in terms of validation loss during the train_and_evaluate phase.
According to the documentation of tf.estimator.BestExporter, I should pass it a function that returns a ServingInputReceiver, but after doing so, the train_and_evaluate phase crashes with a NotFoundError: model/m01/eval; No such file or directory.
It seems as if the BestExporter prevents the evaluation results from being saved the way they are without an exporter. I tried different ServingInputReceivers, but I keep getting the same error.
As defined here:
feature_spec = {
    'shape': tf.VarLenFeature(tf.int64),
    'image_raw': tf.FixedLenFeature((), tf.string),
    'label_raw': tf.FixedLenFeature((43), tf.int64)
}

def serving_input_receiver_fn():
    serialized_tf_example = tf.placeholder(dtype=tf.string,
                                           shape=[120, 120, 3],
                                           name='input_example_tensor')
    receiver_tensors = {'image': serialized_tf_example}
    features = tf.parse_example(serialized_tf_example, feature_spec)
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
and here:
def serving_input_receiver_fn():
    feature_spec = {
        'image': tf.FixedLenFeature((), tf.string)
    }
    return tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
Here are my exporter and training procedure:
exporter = tf.estimator.BestExporter(
    name="best_exporter",
    serving_input_receiver_fn=serving_input_receiver_fn,
    exports_to_keep=5)

train_spec = tf.estimator.TrainSpec(
    input_fn=lambda: imgs_input_fn(train_path, True, epochs, batch_size))

eval_spec = tf.estimator.EvalSpec(
    input_fn=lambda: imgs_input_fn(eval_path, perform_shuffle=False, batch_size=1),
    exporters=exporter)

tf.estimator.train_and_evaluate(ben_classifier, train_spec, eval_spec)
This is a gist with the output.
What's the correct way to define a ServingInputReceiver for the BestExporter?
Upvotes: 0
Views: 2221
Can you try the code shown below?
def serving_input_receiver_fn():
    """
    This is used to define inputs to serve the model.
    :return: ServingInputReceiver
    """
    receiver_tensors = {
        # The size of the input image is flexible.
        INPUT_FEATURE: tf.placeholder(tf.float32, [None, None, None, 1]),
    }
    # Convert the given inputs to match what the model expects
    # (INPUT_FEATURE is the feature name, e.g. 'image', and INPUT_SHAPE is the
    # model's expected input size; both are defined in the linked example).
    features = {
        # Reshape given images.
        INPUT_FEATURE: tf.reshape(receiver_tensors[INPUT_FEATURE], [-1, INPUT_SHAPE])
    }
    return tf.estimator.export.ServingInputReceiver(receiver_tensors=receiver_tensors,
                                                    features=features)
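For context, BestExporter decides whether to export by comparing successive evaluation results; by default it keeps the checkpoint with the smallest loss. A minimal pure-Python sketch of that default comparison (the real compare_fn receives the full eval-metrics dicts from the Estimator):

```python
def loss_smaller(best_eval_result, current_eval_result):
    """Sketch of BestExporter's default compare_fn: return True only when the
    current evaluation has a strictly lower 'loss' than the best seen so far,
    which is when a new export is written."""
    if not best_eval_result or 'loss' not in best_eval_result:
        raise ValueError("best_eval_result must contain a 'loss' metric.")
    if not current_eval_result or 'loss' not in current_eval_result:
        raise ValueError("current_eval_result must contain a 'loss' metric.")
    return current_eval_result['loss'] < best_eval_result['loss']
```

A custom function with this signature can be passed as the compare_fn argument of BestExporter to rank models by a different metric, e.g. validation accuracy.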
Then use tf.estimator.BestExporter as shown below:
best_exporter = tf.estimator.BestExporter(
    serving_input_receiver_fn=serving_input_receiver_fn,
    exports_to_keep=1)
exporters = [best_exporter]

eval_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={input_name: eval_data},
    y=eval_labels,
    num_epochs=1,
    shuffle=False)

eval_spec = tf.estimator.EvalSpec(
    input_fn=eval_input_fn,
    throttle_secs=10,
    start_delay_secs=10,
    steps=None,
    exporters=exporters)

# Train and evaluate the model.
tf.estimator.train_and_evaluate(classifier, train_spec=train_spec, eval_spec=eval_spec)
For more info, refer to: https://github.com/yu-iskw/tensorflow-serving-example/blob/master/python/train/mnist_keras_estimator.py
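Once the export succeeds, BestExporter writes each SavedModel into a timestamp-named subdirectory under model_dir/export/<exporter_name>/. A small helper (a sketch, not part of the Estimator API) can pick the most recent export for serving:

```python
import os

def latest_export_dir(export_base):
    """Return the subdirectory of `export_base` whose name is the largest
    numeric timestamp, i.e. the most recently exported SavedModel.
    Non-numeric entries (such as temporary directories) are ignored."""
    candidates = [d for d in os.listdir(export_base)
                  if d.isdigit() and os.path.isdir(os.path.join(export_base, d))]
    if not candidates:
        raise ValueError('no exported models found under %s' % export_base)
    return os.path.join(export_base, max(candidates, key=int))
```

The returned path is what you would point a SavedModel loader (or TensorFlow Serving) at.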
Upvotes: 3