COROSCOP

Reputation: 31

Tensorflow object_detection correct way to save and load fine tune model

I'm using this example from the colab tutorial to fine-tune a model. After training, I want to save the model and load it on my local computer using:

ckpt_manager = tf.train.CheckpointManager(ckpt, directory="test_data/checkpoint/", max_to_keep=5)
...
...
print('Done fine-tuning!')

ckpt_manager.save()
print('Checkpoint saved!')

But after restoring from the checkpoint files on my local computer, the model doesn't detect any objects (the scores are too low).

I have also tried saving with:

tf.saved_model.save(detection_model, '/content/new_model/')

and loading with:

detection_model = tf.saved_model.load('/saved_model_20201226/')

input_tensor = tf.convert_to_tensor(image, dtype=tf.float32)
detections = detection_model(input_tensor)

This gives me the following error: TypeError: '_UserObject' object is not callable
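
As far as I understand, the object returned by tf.saved_model.load is only directly callable if the model was saved with a traced __call__ or an explicit serving signature; otherwise it has to be invoked through its signatures dict. A quick sketch to check what actually got exported:

loaded = tf.saved_model.load('/saved_model_20201226/')
# An empty list here means no callable signature was exported with the model.
print(list(loaded.signatures.keys()))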

What is the correct way to save and load a fine tuned model?

EDIT 1: I still needed to save the new pipeline config; after adding that, it finally worked! This is my answer:

# Save new pipeline config
new_pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(new_pipeline_proto, '/content/new_config')
exported_ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
ckpt_manager = tf.train.CheckpointManager(
    exported_ckpt, directory="test_data/checkpoint/", max_to_keep=5)
...
...
print('Done fine-tuning!')

ckpt_manager.save()
print('Checkpoint saved!')

Upvotes: 0

Views: 753

Answers (2)

Chaitanya S

Reputation: 1

But the performance of a model rebuilt from the trained checkpoints is not the same as the original performance of the model in the local notebook. Instead, prefer saving the model in the following way:

# A tf.function with an input_signature is needed so that
# detect.get_concrete_function() below can be exported as a signature.
@tf.function(input_signature=[tf.TensorSpec(shape=[1, None, None, 3], dtype=tf.float32)])
def detect(input_tensor):
  """Run detection on an input image.

  Args:
    input_tensor: A [1, height, width, 3] Tensor of type tf.float32.
      Note that height and width can be anything since the image will be
      immediately resized according to the needs of the model within this
      function.

  Returns:
    A dict containing 3 Tensors (`detection_boxes`, `detection_classes`,
      and `detection_scores`).
  """
  preprocessed_image, shapes = detection_model.preprocess(input_tensor)
  prediction_dict = detection_model.predict(preprocessed_image, shapes)
  return detection_model.postprocess(prediction_dict, shapes)

tf.saved_model.save(
    detection_model, 'trained_model',
    signatures={
      'detect': detect.get_concrete_function()
    })

With this method, you don't have to worry about the Object Detection API dependencies; you only need the TensorFlow library for inference.

INFERENCE CODE:

new_model = tf.saved_model.load('trained_model')
detect_fn = new_model.signatures['detect']
detections = detect_fn(image_tensor)

Keep the expected image_tensor shape in mind.
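
For example, here is a minimal sketch of building such a tensor from an image file (the file name and the use of PIL/NumPy are illustrative assumptions):

import numpy as np
import tensorflow as tf
from PIL import Image

# Read the image as an HxWx3 uint8 array, add a batch dimension, and cast to
# float32 to match the [1, height, width, 3] signature exported above.
image_np = np.array(Image.open('test.jpg').convert('RGB'))
image_tensor = tf.convert_to_tensor(image_np[None, ...], dtype=tf.float32)

detections = detect_fn(image_tensor)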

Solution from GitHub: https://github.com/tensorflow/models/issues/8862

Upvotes: -1

COROSCOP

Reputation: 31

I still needed to save the new pipeline config; after adding that, it finally worked! This is my answer:

# Save new pipeline config
new_pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(new_pipeline_proto, '/content/new_config')

exported_ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
ckpt_manager = tf.train.CheckpointManager(
    exported_ckpt, directory="test_data/checkpoint/", max_to_keep=5)
...
...
print('Done fine-tuning!')

ckpt_manager.save()
print('Checkpoint saved!')
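
For completeness, a minimal sketch of the matching load on the local machine, assuming the config and checkpoint directories above have been copied over (expect_partial() just silences warnings about training-only variables that are not restored):

from object_detection.utils import config_util
from object_detection.builders import model_builder
import tensorflow as tf

# Rebuild the model architecture from the saved pipeline config...
configs = config_util.get_configs_from_pipeline_file('/content/new_config/pipeline.config')
detection_model = model_builder.build(model_config=configs['model'], is_training=False)

# ...and restore the fine-tuned weights from the checkpoint saved above.
ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
ckpt.restore(tf.train.latest_checkpoint('test_data/checkpoint/')).expect_partial()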

Upvotes: 1
