gopal raghavan

Reputation: 41

Extracting Fused Activation Type from TFLite File

I am parsing TFLite files (TensorFlow 1.15, schema version 3) using Python. Everything else (strides and so on) works fine, but the fused activation type always returns 0, which means no activation layer, even though we know there is a Relu6. What am I doing wrong in this snippet of code?

from tflite import DepthwiseConv2DOptions  # flatbuffers-generated schema module

# Re-interpret the operator's BuiltinOptions union as DepthwiseConv2DOptions
conv2d_opt = DepthwiseConv2DOptions.DepthwiseConv2DOptions()
builtin_options = graph.Operators(operator_index).BuiltinOptions()
conv2d_opt.Init(builtin_options.Bytes, builtin_options.Pos)

stride_w = conv2d_opt.StrideW()  # stride along the width (horizontal)
stride_h = conv2d_opt.StrideH()  # stride along the height (vertical)
fused_activation_function = conv2d_opt.FusedActivationFunction()  # always 0
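
For reference, here is how I decode the returned integer against the ActivationFunctionType enum from the same generated schema (a minimal sketch; the tflite module name is whatever package the flatbuffers compiler generated for you):

from tflite import ActivationFunctionType

# Build a reverse map {0: 'NONE', 1: 'RELU', ..., 3: 'RELU6'} from the
# generated enum class so the integer can be printed by name.
act_names = {
    value: name
    for name, value in vars(ActivationFunctionType.ActivationFunctionType).items()
    if not name.startswith("_")
}
print(act_names[conv2d_opt.FusedActivationFunction()])  # prints "NONE" here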

Upvotes: 1

Views: 312

Answers (1)

gopal raghavan

Reputation: 41

Found the answer with help from Daniel Situnayake. Basically, the quantized Conv2D / DepthwiseConv2D kernels always clamp the output to the layer's output_activation_min and output_activation_max values, so you get Relu/Relu6/None behaviour by default without an extra activation layer, and the FusedActivationFunction field in the flatbuffer can legitimately read 0. An example from the depthwise_conv kernel at https://github.com/tensorflow/tensorflow/blob/88bd10e84273f558a72714890ab7d04789ebbe37/tensorflow/lite/kernels/internal/reference/depthwiseconv_uint8.h#L266

acc = DepthwiseConvRound<output_rounding>(
    acc, output_multiplier[output_channel],
    output_shift[output_channel]);
acc += output_offset;
acc = std::max(acc, output_activation_min);
acc = std::min(acc, output_activation_max);
output_data[Offset(output_shape, batch, out_y, out_x,
                   output_channel)] = static_cast<int8_t>(acc);
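
To illustrate (this is my own sketch, not code from the TFLite kernels): for a quantized tensor, real = scale * (q - zero_point), so the int8 clamp range [-128, 127] maps back to a real-valued range, and you can infer which activation the clamp realizes from the output tensor's scale and zero_point. The function name and tolerance below are hypothetical illustrations:

def infer_fused_activation(scale, zero_point, qmin=-128, qmax=127):
    """Guess the activation baked into a quantized op's output clamp.

    The kernel clamps the quantized accumulator to [qmin, qmax]; once
    dequantized (real = scale * (q - zero_point)), that range reveals
    whether a Relu/Relu6 was fused into the layer.
    """
    real_min = scale * (qmin - zero_point)  # lower edge of representable range
    real_max = scale * (qmax - zero_point)  # upper edge of representable range
    if abs(real_min) <= scale and abs(real_max - 6.0) <= scale:
        return "RELU6"   # output range is approximately [0, 6]
    if abs(real_min) <= scale:
        return "RELU"    # output range starts at approximately 0
    return "NONE"

# Example: a typical Relu6 output tensor quantized as int8 has
# scale ~= 6/255 and zero_point = -128.
print(infer_fused_activation(6.0 / 255.0, -128))  # -> "RELU6"

The scale and zero_point can be read from the same flatbuffer via the tensor's Quantization() accessors (Scale(0) and ZeroPoint(0) in the generated Python schema).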

Upvotes: 1
