I'm pretty new to Tensorflow and have been running experiments with SSDs using the Tensorflow Object Detection API. I can successfully train a model, but by default it only saves the last n checkpoints. I'd like to instead keep the n checkpoints with the lowest loss (I'm assuming that's the best metric to use).
I found tf.estimator.BestExporter, and it exports a saved_model.pb along with variables. However, I have yet to figure out how to load that saved model and run inference on it. With the usual workflow, after running models/research/object_detection/export_inference_graph.py on a checkpoint, I can easily load the exported model and run inference on it using the object detection jupyter notebook: https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb
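For context, I'm attaching BestExporter in essentially the standard train_and_evaluate pattern, something like the sketch below (simplified; estimator, train_spec, eval_input_fn, and serving_input_receiver_fn stand in for my actual setup):
import tensorflow as tf

# Simplified sketch; estimator, train_spec, eval_input_fn and
# serving_input_receiver_fn stand in for my actual setup.
# BestExporter's default compare_fn keeps the exports with the lowest eval loss.
exporter = tf.estimator.BestExporter(
    name='best_exporter',
    serving_input_receiver_fn=serving_input_receiver_fn,
    exports_to_keep=5)

eval_spec = tf.estimator.EvalSpec(
    input_fn=eval_input_fn,
    exporters=exporter)

tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)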
I've found documentation on loading saved models, and can load a graph like this:
import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

with tf.Session(graph=tf.Graph()) as sess:
    tags = [tag_constants.SERVING]
    meta_graph = tf.saved_model.loader.load(sess, tags, PATH_TO_SAVED_MODEL)
    detection_graph = tf.get_default_graph()
However, when I use that graph with the above jupyter notebook, I get errors:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-17-9e48f0d04df2> in <module>
7 image_np_expanded = np.expand_dims(image_np, axis=0)
8 # Actual detection.
----> 9 output_dict = run_inference_for_single_image(image_np, detection_graph)
10 # Visualization of the results of a detection.
11 vis_util.visualize_boxes_and_labels_on_image_array(
<ipython-input-16-0df86999596e> in run_inference_for_single_image(image, graph)
31 detection_masks_reframed, 0)
32
---> 33 image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
34 # image_tensor = tf.get_default_graph().get_tensor_by_name('serialized_example')
35
~/anaconda3/envs/sb/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in get_tensor_by_name(self, name)
3664 raise TypeError("Tensor names are strings (or similar), not %s." %
3665 type(name).__name__)
-> 3666 return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
3667
3668 def _get_tensor_by_tf_output(self, tf_output):
~/anaconda3/envs/sb/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in as_graph_element(self, obj, allow_tensor, allow_operation)
3488
3489 with self._lock:
-> 3490 return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
3491
3492 def _as_graph_element_locked(self, obj, allow_tensor, allow_operation):
~/anaconda3/envs/sb/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in _as_graph_element_locked(self, obj, allow_tensor, allow_operation)
3530 raise KeyError("The name %s refers to a Tensor which does not "
3531 "exist. The operation, %s, does not exist in the "
-> 3532 "graph." % (repr(name), repr(op_name)))
3533 try:
3534 return op.outputs[out_n]
KeyError: "The name 'image_tensor:0' refers to a Tensor which does not exist. The operation, 'image_tensor', does not exist in the graph."
Is there a better way to load the saved model or convert it to an inference graph?
Thanks!
The Tensorflow Object Detection API supports different input formats during export, as described in the documentation of export_inference_graph.py:
- image_tensor: Accepts a uint8 4-D tensor of shape [None, None, None, 3].
- encoded_image_string_tensor: Accepts a 1-D string tensor of shape [None] containing encoded PNG or JPEG images. Image resolutions are expected to be the same if more than one image is provided.
- tf_example: Accepts a 1-D string tensor of shape [None] containing serialized TFExample protos. Image resolutions are expected to be the same if more than one image is provided.
So you should check that you exported with the image_tensor input_type. The chosen input node will be named "inputs" in the exported model, so I suppose that replacing image_tensor:0 with inputs (or maybe inputs:0) will solve your problem.
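If you want to verify the exact names rather than guess, you can print the SavedModel's serving signature after loading it (a small sketch reusing your loading code; the printed names depend on how the model was exported):
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.saved_model.loader.load(
        sess, [tag_constants.SERVING], PATH_TO_SAVED_MODEL)
    # The default serving signature maps friendly keys to actual tensor names.
    signature = meta_graph.signature_def['serving_default']
    print({key: val.name for key, val in signature.inputs.items()})   # input tensor names
    print({key: val.name for key, val in signature.outputs.items()})  # output tensor names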
Also, I would like to recommend a useful tool that runs exported models with just a few lines of code: tf.contrib.predictor.from_saved_model. Here is an example of how to use it:
import tensorflow as tf
import numpy as np
import cv2

# Read the test image and convert OpenCV's BGR channel order to RGB.
img = cv2.imread("test.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# Add a batch dimension: shape becomes [1, height, width, 3].
img_rgb = np.expand_dims(img, 0)

# Build a prediction function directly from the exported SavedModel directory.
predict_fn = tf.contrib.predictor.from_saved_model("./saved_model")
output_data = predict_fn({"inputs": img_rgb})
print(output_data)  # detector output dictionary
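The returned dictionary uses the standard detection output keys, so (assuming the usual detection_* outputs and a single image in the batch) you can pull the results out like this:
# Standard Object Detection API output keys; index 0 is the single image in the batch.
boxes = output_data["detection_boxes"][0]       # [N, 4] normalized (ymin, xmin, ymax, xmax)
scores = output_data["detection_scores"][0]     # [N] confidence scores
classes = output_data["detection_classes"][0]   # [N] class ids
num_detections = int(output_data["num_detections"][0])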