wheresmycookie

Reputation: 773

Object detection running very slowly on laptop

I stepped through the following tutorial: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_pets.md

I then tried to run inference using: https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb

This ran very quickly (in ~1s for each image). I noticed that the code is pulling down a pre-trained model (not the one I tried in the first tutorial) from http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2017_11_17.

When I load and perform inference using the graph that I exported in the tutorial, it takes around 10s to finish. I exported the graph using:

python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path object_detection/samples/configs/faster_rcnn_resnet101_pets.config \
    --trained_checkpoint_prefix model.ckpt-${CHECKPOINT_NUMBER} \
    --output_directory exported_graphs

I'm running inference using this code:

import os
import tensorflow as tf

# Load the exported SavedModel and grab its default serving signature
model = tf.saved_model.load(
        os.path.join(os.getcwd(), 'exported_graphs', 'saved_model'))
model = model.signatures['serving_default']

output_dict = model(input_tensor)
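To make the comparison fair, it helps to exclude one-off startup cost from the measurement: the first call to a SavedModel signature includes graph/tracing overhead, so a warm-up call should run before timing. Below is a minimal, self-contained timing sketch; `model` and `input_tensor` are the (hypothetical) names from the snippet above, and `time_inference` is a helper introduced here for illustration:

```python
import time

def time_inference(fn, *args, warmup=1, runs=5):
    """Return mean wall-clock seconds per call of fn(*args),
    after `warmup` untimed calls."""
    for _ in range(warmup):
        fn(*args)  # warm-up: excluded from the measurement
    start = time.perf_counter()
    for _ in range(runs):
        fn(*args)
    return (time.perf_counter() - start) / runs

# Usage with the names from the question (assumed to be defined):
# mean_s = time_inference(model, input_tensor)
# print(f"mean inference time: {mean_s:.2f}s per image")
```

If the per-call time stays around 10 s even after the warm-up, the cost is in the model itself rather than in graph-loading overhead.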

I'm running macOS High Sierra. Have I exported the graph in a sub-optimal way? Is my machine just not fast enough? I'm still very much a beginner, so any direction would be really helpful - I might be missing something really basic.

Upvotes: 0

Views: 132

Answers (1)

Vedanshu

Reputation: 2296

The graph you ran first was ssd_mobilenet_v1, which took around 1 sec per image. The graph you exported is faster_rcnn_resnet101, which took around 10 sec. These are two completely different architectures: Faster R-CNN with a ResNet-101 backbone is a much heavier model and will run considerably slower than SSD with a MobileNet backbone, especially on a laptop CPU.

Upvotes: 1
