Reputation: 67
I re-trained a float ssd_mobilenet_v2_coco model on my own dataset to detect a single object. After freezing the graph and running inference with this notebook, the model performed well and as expected. Afterwards, I exported the tflite_graph.pb using this command:
python3 object_detection/export_tflite_ssd_graph.py \
--pipeline_config_path=/PATH/pipeline.config \
--trained_checkpoint_prefix=/PATH/model.ckpt-50000 \
--output_directory=/PATH/tflite \
--add_postprocessing_op=True
Then I converted the tflite_graph.pb into detect.tflite using this command:
bazel run -c opt tensorflow/lite/toco:toco -- \
--input_file=/PATH/tflite_graph.pb \
--output_file=/PATH/detect.tflite \
--input_shapes=1,300,300,3 \
--input_arrays=normalized_input_image_tensor \
--output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
--inference_type=FLOAT \
--allow_custom_ops
The labelmap.txt contains the following labels, using the COCO dataset labelmap style:
???
Object_1
I ran the inference using this script, which is a custom version of this script; roughly, its core looks like the sketch below.
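For reference, here is a minimal sketch of what such a TFLite inference loop typically does, assuming a float model with a 300x300 input (the model path, the image path and the [-1, 1] normalization are assumptions; the actual script may differ):

import cv2
import numpy as np
import tensorflow as tf

# Load the converted model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path='detect.tflite')  # placeholder path
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Preprocess: BGR -> RGB, resize to 300x300, add a batch dimension, scale to [-1, 1].
image = cv2.cvtColor(cv2.imread('test_images/image15.jpg'), cv2.COLOR_BGR2RGB)
input_data = cv2.resize(image, (300, 300)).astype(np.float32)
input_data = np.expand_dims((input_data - 127.5) / 127.5, axis=0)

# Run inference and read the four outputs of the TFLite_Detection_PostProcess op.
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
boxes = interpreter.get_tensor(output_details[0]['index'])    # [1, 10, 4], normalized ymin/xmin/ymax/xmax
classes = interpreter.get_tensor(output_details[1]['index'])  # [1, 10]
scores = interpreter.get_tensor(output_details[2]['index'])   # [1, 10]
num = interpreter.get_tensor(output_details[3]['index'])      # number of valid detections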
The output is quite odd: the values are negative or out of range, so no detections can be visualized:
image: /PATH/Inference_Notebooks/test_images/image15.jpg
boxes:
[[ 3.1171997 4.9266524 -15.893956 7.4959326 ]
[ -1.904324 1.0337602 -7.818109 -7.9575577 ]
[ 1.4761205 2.4604938 -14.553953 8.159015 ]
[ 3.4024968 2.7483184 -9.744125 6.332321 ]
[ -4.447262 -2.6578145 -1.9118335 -12.579478 ]
[ 1.5781038 -2.980986 -15.902752 5.9894757 ]
[ -0.4003443 -12.641836 -5.6216025 -0.9522631 ]
[ -1.3472033 -5.514964 -4.7609305 -11.9099045 ]
[ 2.6661258 -4.2592344 -13.687805 -4.15193 ]
[ -0.49181542 9.271766 -3.5316777 -3.233222 ]]
classes: [ 2 0 -10 9 -4 3 -6 -7 0 2]
scores: [ -5.54325 1.9508497 -6.1604195 -4.2281013 -0.02703065 0.707987 -11.534006 7.781439 -2.5938861 -2.5299745 ]
number of detections: 0.0
I tested the inference files on default SSD MobileNet models using their default weights trained on the COCO dataset, and I was able to visualize the boxes: the models detected cars, people, etc. I also converted a QUANTIZED tflite_graph.pb of default models, such as the quantized SSD MobileNet v1, into detect.tflite, and it was able to output boxes and the COCO dataset labels.
I don't understand where these values are coming from, or why the TFLite model isn't detecting anything while the TensorFlow one does.
Additionally, a quantized SSD MobileNet v2 model that I trained and converted following the same steps, but using this command:
bazel run -c opt tensorflow/lite/toco:toco -- \
--input_file=/PATH/tflite_graph.pb \
--output_file=/PATH/detect.tflite \
--input_shapes=1,300,300,3 \
--input_arrays=normalized_input_image_tensor \
--output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
--inference_type=QUANTIZED_UINT8 \
--mean_values=128 \
--std_values=128 \
--change_concat_input_ranges=false \
--allow_custom_ops
outputs these values:
image: /home/neox/HistorIAR_Detection/Inference_Notebooks/test_images/image15.jpg
boxes:
[[1.0170084e+32 1.6485735e+30 2.5740889e+31 2.6057175e+31]
[2.4791379e+31 6.3874716e+33 1.0232719e+32 1.0043315e+32]
[6.4686013e+33 4.0425799e+32 1.0107439e+32 2.5712148e+34]
[1.6069700e+33 4.0430743e+32 2.5712782e+34 1.0106698e+32]
[2.5426435e+31 1.0233461e+32 1.0232968e+32 1.0170082e+32]
[1.6272522e+33 4.0426789e+32 1.0234205e+32 1.6272126e+33]
[2.5266129e+31 6.5147562e+30 2.5740879e+31 2.5742122e+31]
[1.0423612e+32 1.0296598e+32 6.5144491e+30 6.3561451e+30]
[1.0170081e+32 1.6372740e+33 6.1586925e+30 1.6170719e+33]
[4.0172261e+32 1.0170823e+32 6.5090083e+33 1.0106451e+32]]
classes: [-2147483648 -2147483648 -2147483648 -2147483648 -2147483648 -2147483648
-2147483648 -2147483648 -2147483648 -2147483648]
scores: [6.3787720e+33 1.0090909e+35 6.3066602e+33 1.6144633e+36 1.6146259e+36 1.6042820e+36 1.6145852e+36 6.4585042e+36 1.6042415e+36 4.0624248e+35]
num: 0.0
Upvotes: 4
Views: 1844
Reputation: 131
The output of the TFLite model requires post-processing. The model returns a fixed number of detections (here, 10) by default. Use the output tensor at index 3 to get the number of valid boxes, num_det (i.e. the top num_det detections are valid; ignore the rest).
# TFLite_Detection_PostProcess output order: 0 = boxes, 1 = classes, 2 = scores, 3 = count
num_det = int(interpreter.get_tensor(output_details[3]['index']))
boxes = interpreter.get_tensor(output_details[0]['index'])[0][:num_det]
classes = interpreter.get_tensor(output_details[1]['index'])[0][:num_det]
scores = interpreter.get_tensor(output_details[2]['index'])[0][:num_det]
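Once num_det is available, the valid detections can be drawn back onto the original image, for example like this (a sketch: the 0.5 score threshold and the image variable holding the original frame are assumptions, and the boxes are normalized [ymin, xmin, ymax, xmax]):

import cv2

SCORE_THRESHOLD = 0.5           # assumed threshold, tune as needed
height, width, _ = image.shape  # the original image the detection ran on

for i in range(num_det):
    if scores[i] < SCORE_THRESHOLD:
        continue
    ymin, xmin, ymax, xmax = boxes[i]  # normalized coordinates
    top_left = (int(xmin * width), int(ymin * height))
    bottom_right = (int(xmax * width), int(ymax * height))
    cv2.rectangle(image, top_left, bottom_right, (0, 255, 0), 2)
    label = 'class %d: %.2f' % (int(classes[i]), scores[i])
    cv2.putText(image, label, (top_left[0], max(top_left[1] - 5, 0)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)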
As for your question: there are no valid detections (num: 0.0), so the output tensors contain garbage values.
Here's a link to an inference script with input preprocessing, output post-processing and mAP evaluation.
Upvotes: 2
Reputation: 1
import cv2
import numpy as np

# Resize to the model's 300x300 input, add a batch dimension, and rescale to [-1, 1].
frame = cv2.resize(frame, (300, 300))
frame = np.expand_dims(frame, axis=0)
frame = frame / 128.0
frame = np.interp(frame, (frame.min(), frame.max()), (-1, +1))
frame = frame.astype('float32')  # TFLite float models expect float32 input
Upvotes: 0