Chew Kok Wah

Inference on TPU for model trained on GPU (Tensorflow Object Detection API)

I am trying to follow the guide below on exporting an object detection model (based on the Tensorflow Object Detection API) that was trained on a GPU, so that it can be used on a TPU for inference:

https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tpu_exporters.md

  1. One of the requirements states:
    "Users are assumed to have: PIPELINE_CONFIG: A pipeline_pb2.TrainEvalPipelineConfig config file", but I am unable to find the file pipeline_pb2.TrainEvalPipelineConfig anywhere online or in any repository. How do I obtain this file?

  2. What is "INPUT_PLACEHOLDER: Name of input placeholder in model's signature_def_map", and where can I find it?

  3. What is "INPUT_TYPE: Type of input node, which can be one of 'image_tensor', 'encoded_image_string_tensor', or 'tf_example'", and where can I find it?

  4. Where can I find an example of performing inference on a TPU using an object detection model trained on a GPU?
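Regarding point 1, the closest thing I have found is the pipeline.config file that the Object Detection API writes next to the training checkpoints, which I believe is a text-format protobuf of type pipeline_pb2.TrainEvalPipelineConfig. A minimal sketch of what such a file looks like is below (field names are taken from the API's sample configs; the values and PATH_TO_BE_CONFIGURED placeholders are illustrative, not from my model):

```proto
model {
  ssd {
    num_classes: 90
    image_resizer {
      fixed_shape_resizer { height: 300 width: 300 }
    }
    # ... feature extractor, box predictor, etc.
  }
}
train_config {
  batch_size: 32
  fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
  # ... optimizer, data augmentation, etc.
}
train_input_reader {
  tf_record_input_reader { input_path: "PATH_TO_BE_CONFIGURED/train.record" }
  label_map_path: "PATH_TO_BE_CONFIGURED/label_map.pbtxt"
}
eval_config {
  num_examples: 8000
}
eval_input_reader {
  tf_record_input_reader { input_path: "PATH_TO_BE_CONFIGURED/eval.record" }
  label_map_path: "PATH_TO_BE_CONFIGURED/label_map.pbtxt"
  shuffle: false
}
```

Is this the file the guide is referring to, or is a different TrainEvalPipelineConfig file required?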

Best regards, Chew Kok Wah
