David

Reputation: 43

OpenVINO Convert TF Model to IR file Issue

I'm trying to convert a TensorFlow model to OpenVINO IR files. I downloaded a pre-trained model from the following address:

http://download.tensorflow.org/models/object_detection/mask_rcnn_inception_v2_coco_2018_01_28.tar.gz

Then I extracted the archive to get a .pb file named "frozen_inference_graph.pb", and ran the conversion command from the OpenVINO folder

"IntelSWTools\openvino_2019.2.275\deployment_tools\model_optimizer\"

as follows:

python mo_tf.py --input_model frozen_inference_graph.pb

but I got the following error message. What can I change to solve this issue?

Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:  <my folder>\frozen_inference_graph.pb
    - Path for generated IR:    <my OpenVINO folder>\IntelSWTools\openvino_2019.2.275\deployment_tools\model_optimizer\.
    - IR output name:   frozen_inference_graph
    - Log level:    ERROR
    - Batch:    Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:    Not specified, inherited from the model
    - Input shapes:     Not specified, inherited from the model
    - Mean values:  Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:  FP32
    - Enable fusing:    True
    - Enable grouped convolutions fusing:   True
    - Move mean values to preprocess section:   False
    - Reverse input channels:   False
TensorFlow specific parameters:
    - Input model in text protobuf format:  False
    - Path to model dump for TensorBoard:   None
    - List of shared libraries with TensorFlow custom layers implementation:    None
    - Update the configuration file with input/output node names:   None
    - Use configuration file used to generate the model with Object Detection API:  None
    - Operations to offload:    None
    - Patterns to offload:  None
    - Use the config file:  None
Model Optimizer version:    2019.2.0-436-gf5827d4

C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\tensorflow\python\framework\dtypes.py:458: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])

[ ERROR ]  Shape [-1 -1 -1  3] is not fully defined for output 0 of "image_tensor". Use --input_shape with positive integers to override model input shapes.
[ ERROR ]  Cannot infer shapes or values for node "image_tensor".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "image_tensor".  For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #40. 
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function Parameter.__init__.<locals>.<lambda> at 0x000002032A17D378>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "image_tensor" node. 
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38. 

Process finished with exit code 1

I have tried many other TensorFlow models, but they all have the same issue. I also used different TensorFlow versions, from 1.2.0 to 1.14.0, with the same result.

The error about the shape seems to be the main cause, but what should I add to avoid this issue?

Shape [-1 -1 -1  3] is not fully defined for output 0 of "image_tensor". Use --input_shape with positive integers to override model input shapes.

I hope the IR file can be generated correctly.
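For reference, the error message itself points at the `--input_shape` flag. A minimal sketch of such an invocation (the shape `[1,800,800,3]` is an illustrative assumption, not taken from the model, and for Object Detection API models like Mask R-CNN this flag alone is generally not sufficient, as the answer below shows):

```python
# Illustrative only: the shape values below are assumptions (N, H, W, C for
# an "image_tensor" input), not values read from the model.
input_shape = [1, 800, 800, 3]

cmd = [
    "python", "mo_tf.py",
    "--input_model", "frozen_inference_graph.pb",
    "--input_shape", "[{}]".format(",".join(str(d) for d in input_shape)),
]
command = " ".join(cmd)
print(command)
# → python mo_tf.py --input_model frozen_inference_graph.pb --input_shape [1,800,800,3]
```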

Upvotes: 4

Views: 1486

Answers (1)

Aswathy - Intel

Reputation: 648

The OpenVINO Model Optimizer (mo_tf.py) expects more arguments for this model. Please pass the ones below as well:

python mo_tf.py --output_dir <\PATH> --input_model <\PATH>\mask_rcnn_inception_v2_coco_2018_01_28\frozen_inference_graph.pb --tensorflow_use_custom_operations_config extensions\front\tf\mask_rcnn_support.json --tensorflow_object_detection_api_pipeline_config <\PATH>\mask_rcnn_inception_v2_coco_2018_01_28\pipeline.config

mask_rcnn_inception_v2_coco model can be downloaded from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md

For more details, refer to: https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html
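The command above references three input files. A small pre-flight sketch (the paths are assumptions mirroring the command's placeholders; substitute your real locations) to confirm they exist before running Model Optimizer:

```python
from pathlib import Path

# Hypothetical locations -- substitute the real paths from your command.
model_dir = Path("mask_rcnn_inception_v2_coco_2018_01_28")
required = [
    model_dir / "frozen_inference_graph.pb",  # --input_model
    model_dir / "pipeline.config",            # --tensorflow_object_detection_api_pipeline_config
    # Shipped with Model Optimizer, relative to the model_optimizer folder:
    Path("extensions/front/tf/mask_rcnn_support.json"),  # --tensorflow_use_custom_operations_config
]

missing = [str(p) for p in required if not p.exists()]
status = "all inputs present" if not missing else "missing: " + ", ".join(missing)
print(status)
```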

Upvotes: 4
