Matheus Correia

Reputation: 101

Loading TensorFlow Object Detection Model After Export

I have trained an object detection model with the TensorFlow Object Detection API by following the steps in this official tutorial. At the end of the process, as described in the exporting step, my model is saved in the following format.

my_model/
├─ checkpoint/
├─ saved_model/
└─ pipeline.config

My question is, once the model has been saved to such a format, how can I load it and use it to make detections?

I can do that successfully with the training checkpoints using the code below. It is after that point (where I load the checkpoint that produced the best results) that I export the model.

import tensorflow as tf
from object_detection.builders import model_builder
from object_detection.utils import config_util

# Load pipeline config and build a detection model
configs = config_util.get_configs_from_pipeline_file(PATH_TO_PIPELINE_CONFIG)
model_config = configs['model']
detection_model = model_builder.build(model_config=model_config, is_training=False)

# Restore checkpoint
ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
ckpt.restore(PATH_TO_CKPT).expect_partial()
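
(For reference, detections with the checkpoint-restored model are run through the model object's preprocess/predict/postprocess methods, as in the official tutorial. A minimal sketch, assuming detection_model is the object built above and image_np is an HxWx3 numpy array:)

@tf.function
def detect_fn(image):
    """Run detection on a float32 image batch."""
    image, shapes = detection_model.preprocess(image)
    prediction_dict = detection_model.predict(image, shapes)
    detections = detection_model.postprocess(prediction_dict, shapes)
    return detections

input_tensor = tf.convert_to_tensor(image_np, dtype=tf.float32)[tf.newaxis, ...]
detections = detect_fn(input_tensor)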

However, in production, I am not looking to use those checkpoints. I am looking to load the model from the exported format.

I have tried the following command to load the exported model, but I have had no luck. It returns no errors and I can use the model variable below to make detections, but the output (bounding boxes, classes, scores) is incorrect, which leads me to believe there are some steps missing in the loading process.

model = tf.saved_model.load(path_to_exported_model)
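
(For reference, here is a quick way to inspect what the loaded object exposes; a sketch, assuming the exporter's default 'serving_default' signature:)

print(list(model.signatures.keys()))     # typically ['serving_default']
infer = model.signatures['serving_default']
print(infer.structured_input_signature)  # expected input dtype/shape
print(infer.structured_outputs)          # detection_boxes, detection_classes, detection_scores, ...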

Any tips?

Upvotes: 1

Views: 1429

Answers (3)

Eddie Tang

Reputation: 133

Looking at what @Matheus Correia posted, I slightly modified his answer to suit what I was doing in Google Colab (in 2022).

import numpy as np
import tensorflow as tf

category_index = your_generated_label_map
# e.g. category_index = {1: {'id': 1, 'name': 'tomato'}, 2: {'id': 2, 'name': 'egg'}, 3: {'id': 3, 'name': 'potato'}, 4: {'id': 4, 'name': 'broccoli'}, 5: {'id': 5, 'name': 'beef'}, 6: {'id': 6, 'name': 'chicken'}}

# set your own threshold here
Threshold = 0.5

def ExtractBBoxes(bboxes, bclasses, bscores, im_width, im_height):
    bbox = []
    class_labels = []
    for idx in range(len(bboxes)):
        if bscores[idx] >= Threshold:
            y_min = int(bboxes[idx][0] * im_height)
            x_min = int(bboxes[idx][1] * im_width)
            y_max = int(bboxes[idx][2] * im_height)
            x_max = int(bboxes[idx][3] * im_width)
            class_label = category_index[int(bclasses[idx])]['name']
            class_labels.append(class_label)
            bbox.append([x_min, y_min, x_max, y_max, class_label, float(bscores[idx])])
    return (bbox, class_labels)

# @Matheus Correia's code but modified

# Loading saved model.
detect_fn = tf.saved_model.load("--saved model folder's path--")

# Pre-processing image.
image = tf.image.decode_image(open(IMAGE_PATH, 'rb').read(), channels=3)
# width/height: the input size your model expects.
image = tf.image.resize(image, (width, height))
im_height, im_width, _ = image.shape
# Model expects a tf.uint8 tensor, but tf.image.resize returns tf.float32, so cast back.
image = tf.cast(image, tf.uint8)
input_tensor = np.expand_dims(image, 0)
detections = detect_fn(input_tensor)

bboxes = detections['detection_boxes'][0].numpy()
bclasses = detections['detection_classes'][0].numpy().astype(np.int32)
bscores = detections['detection_scores'][0].numpy()
det_boxes, class_labels = ExtractBBoxes(bboxes, bclasses, bscores, im_width, im_height)
print(class_labels)
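
If you also want to draw the results, one option is Pillow (a sketch; note the coordinates returned by ExtractBBoxes are relative to the resized image, not the original one):

from PIL import Image, ImageDraw

# Draw the boxes returned by ExtractBBoxes on the resized image.
img = Image.fromarray(image.numpy())
draw = ImageDraw.Draw(img)
for x_min, y_min, x_max, y_max, label, score in det_boxes:
    draw.rectangle([x_min, y_min, x_max, y_max], outline='red', width=2)
    draw.text((x_min, y_min), f'{label}: {score:.2f}', fill='red')
img.save('detections.png')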

Hope this helps.

Upvotes: 1

Rama

Reputation: 23

Check this link. Abdul Rehman has a few Python scripts for running detection with saved_models, both on images and on videos. I use them extensively to check detections from saved_models in the TF2 Model Zoo, as well as from models trained on custom datasets:

https://github.com/abdelrahman-gaber/tf2-object-detection-api-tutorial

Upvotes: 1

Matheus Correia

Reputation: 101

Ok, as it turns out, the code is correct. I ran a test with another model (which is also an EfficientDet) and the code worked. It seems something went wrong when the original model was exported, which I am still trying to figure out.

To those looking for an answer, here's the full code for loading and using a saved model.

import numpy as np
import tensorflow as tf

# Loading saved model.
model = tf.saved_model.load(path_to_exported_model)

# Pre-processing image.
image = tf.image.decode_image(open(path_to_image, 'rb').read(), channels=3)
image = tf.expand_dims(image, 0)
image = tf.image.resize(image, (size_expected_by_model, size_expected_by_model))

# Model expects a tf.uint8 tensor. tf.image.resize returns tf.float32 values still in the
# 0-255 range, so a plain cast is the right conversion here (tf.image.convert_image_dtype
# would rescale the values, since it assumes floats in [0, 1]).
image = tf.cast(image, tf.uint8)

# Executing object detection.
detections = model(image)

# Formatting returned detections.
num_detections = int(detections.pop('num_detections'))
detections = {key: value[0, :num_detections].numpy()
              for key, value in detections.items()}

detections['num_detections'] = num_detections
detections['detection_classes'] = detections['detection_classes'].astype(np.int64)
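
To map the integer class IDs back to label names, you can build a category index from the label map used in training with the TF OD API's label_map_util (a sketch; path_to_label_map is a placeholder for your label_map.pbtxt):

from object_detection.utils import label_map_util

# Build {id: {'id': ..., 'name': ...}} from the label map used during training.
category_index = label_map_util.create_category_index_from_labelmap(
    path_to_label_map, use_display_name=True)

# Print detections above a confidence threshold.
for box, cls, score in zip(detections['detection_boxes'],
                           detections['detection_classes'],
                           detections['detection_scores']):
    if score >= 0.5:
        print(category_index[cls]['name'], float(score), box)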

Upvotes: 1
