tuscasp

Reputation: 49

OpenVino: how to add support to FusedBatchNormV3 in model optimizer?

I am trying to understand how to add support for the TensorFlow layer FusedBatchNormV3 to the OpenVINO model optimizer. I am running on Ubuntu 18.04 and using TensorFlow 1.15.

My goal is to run several tests with some standard pre-trained networks on the Neural Compute Stick 2; for now I am working with ResNet50. I have downloaded the network as follows:

import tensorflow as tf
keras = tf.keras

# ResNet50 pre-trained on ImageNet, without the top classification layers
input_shape = (200, 200, 3)
model = keras.applications.resnet50.ResNet50(input_shape=input_shape,
                                             include_top=False,
                                             weights='imagenet')

Afterwards, I froze the model as described in this post.
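In case the linked post is not at hand, a common TF 1.x freezing recipe uses `tf.compat.v1.graph_util.convert_variables_to_constants`. A minimal sketch is below; the tiny `Sequential` model is only a self-contained stand-in, and the same calls apply to the ResNet50 model above:

```python
import tensorflow as tf

# TF 1.x-style freezing; under TF 2.x the tf.compat.v1 shims below are required.
tf.compat.v1.disable_eager_execution()

# Small stand-in model (illustrative); substitute the ResNet50 model from above.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, input_shape=(8,), name='fc')
])

sess = tf.compat.v1.keras.backend.get_session()
output_names = [out.op.name for out in model.outputs]

# Replace all variables with constants and serialize the frozen GraphDef.
frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_names)
with open('model.pb', 'wb') as f:
    f.write(frozen.SerializeToString())
```

The resulting `model.pb` is what gets passed to the model optimizer via `--input_model`.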

I am running the model optimizer with the command:

sudo python3 mo.py \
--input_model ~<PATH_TO_MODEL>/model.pb \
--output_dir ~<PATH_TO_MODEL> \
--data_type FP16 -b 1

But I am getting this error message:

[ ERROR ]  1 elements of 64 were clipped to infinity while converting a blob for node [['conv1_bn_1/cond/FusedBatchNormV3_1/ReadVariableOp_1/Output_0/Data__const']] to <class 'numpy.float16'>. 
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #76. 
[ ERROR ]  List of operations that cannot be converted to Inference Engine IR:
[ ERROR ]      FusedBatchNormV3 (53)
[ ERROR ]          conv1_bn_1/cond/FusedBatchNormV3_1
[ ERROR ]          conv2_block1_0_bn_1/cond/FusedBatchNormV3_1
[ ERROR ]          conv2_block1_1_bn_2/cond/FusedBatchNormV3_1
...
[ ERROR ]          conv5_block3_3_bn_1/cond/FusedBatchNormV3_1
[ ERROR ]  Part of the nodes was not converted to IR. Stopped.
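The first error above is about FP16 range, not about the unsupported op: `numpy.float16` can only represent magnitudes up to roughly 65504, so any larger float32 weight overflows to infinity when the blob is converted with `--data_type FP16`. A small numpy check (values are illustrative):

```python
import numpy as np

# float16 maxes out near 65504 (np.finfo(np.float16).max); larger float32
# values are clipped to infinity on conversion, which is what the model
# optimizer warns about for 1 of the 64 values in that blob.
weights = np.array([70000.0, 1.0, -3.5], dtype=np.float32)
as_fp16 = weights.astype(np.float16)
print(as_fp16)  # the first element overflows to inf
```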

I have found this forum post suggesting to downgrade TensorFlow to version 1.13, but after doing so I ran into another error with the same layer:

[ ERROR ]  Cannot infer shapes or values for node "conv1_bn_1/cond/FusedBatchNormV3_1".
[ ERROR ]  Op type not registered 'FusedBatchNormV3' in binary running on <USER>. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.

My current idea is to add support for FusedBatchNormV3 via the sub-graph replacement mechanism of the model optimizer (described in this official page). I would like to replace FusedBatchNormV3 with the ScaleShift operation, since here FusedBatchNorm is said to map to it, but I do not know how to find this ScaleShift object. Can someone please help me?
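For what it's worth, the reason a ScaleShift replacement is plausible at all is algebraic: in inference mode, batch normalization with frozen statistics is an affine transform per channel, so it folds into `y = scale * x + shift`. A minimal numpy check of that folding (all names and values are illustrative):

```python
import numpy as np

# Batch norm at inference: y = gamma * (x - mean) / sqrt(var + eps) + beta
# Folds into a ScaleShift-style affine op: y = scale * x + shift, with
#   scale = gamma / sqrt(var + eps)
#   shift = beta - mean * scale
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4)).astype(np.float32)
gamma, beta = np.float32(1.5), np.float32(0.2)
mean, var, eps = np.float32(0.3), np.float32(2.0), np.float32(1e-3)

bn = gamma * (x - mean) / np.sqrt(var + eps) + beta

scale = gamma / np.sqrt(var + eps)
shift = beta - mean * scale
scale_shift = scale * x + shift

print(np.max(np.abs(bn - scale_shift)))  # numerically negligible difference
```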

Upvotes: 2

Views: 959

Answers (1)

Artemy Skrebkov

Reputation: 345

Unfortunately, I cannot help with the replacement mechanism, but I have something else which should help.

According to a comment in https://github.com/opencv/dldt/issues/352, you can pretend that FusedBatchNormV3 behaves the same way as FusedBatchNorm, and it does not lead to an accuracy drop.

I added a patch to the model optimizer which implements the behaviour described above. Please check it out: https://github.com/ArtemSkrebkov/dldt/tree/askrebko/treat_bnv3_as_bn

I checked inference results on the generated IR (using one picture) and I got the same top-3 predictions as the Keras model gives.

The model optimizer command I used (not sure about the preprocessing parameters):

python3 ./mo_tf.py \
--input_model ~/workspace/reps/keras_to_tensorflow/resnet-50.pb \
--input_shape [1,224,224,3] \
--mean_values [103.939,116.779,123.68]

Is that solution OK for you?

Upvotes: 1
