L.Lukas

Reputation: 98

How to compile an embedding extractor with the Edge TPU Model Compiler?

Error when compiling embedding extractor on Coral

I am trying to retrain an image classifier on my Coral Edge TPU device. To do so, I followed the steps explained in Coral's "Retrain an image classification model on-device" tutorial:

Embedding extractor creation

Following the given example, I created an embedding extractor .tflite file:

tflite_convert \
--output_file=mobilenet_v1_embedding_extractor.tflite \
--graph_def_file=mobilenet_v1_1.0_224_quant_frozen.pb \
--input_arrays=input \
--output_arrays=MobilenetV1/Logits/AvgPool_1a/AvgPool

Edge TPU Model Compiler upload

I took the resulting file mobilenet_v1_embedding_extractor.tflite and uploaded it to the Edge TPU Model Compiler. Unfortunately, the compilation fails and I get the following error message:


ERROR: Something went wrong. Couldn't compile model.

More details
--------------
Start Time     2019-05-02T14:14:53.309219Z
State          FAILED
Duration       5.963912978s
Type           type.googleapis.com/google.cloud.iot.edgeml.v1beta1.CompileOperationMetadata
Name           operations/compile/16259636989695619987

As I understand it, the procedure above has to be completed before on-device learning with the classification_transfer_learning.py script can be run on the Raspberry Pi + Edge TPU / Dev Board.

I hope you can give me a hint to solve the problem. Thanks in advance.

Update May 3, 2019

The compilation works without any errors when I use the unmodified mobilenet_v1_1.0_224_quant.tflite model.

I used the quantized model from TensorFlow.
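
For reference, here is a small sketch (not part of the original tutorial; it assumes TensorFlow is installed locally) that inspects the converted .tflite file with the TF Lite interpreter. A fully quantized model, which is what the Edge TPU Model Compiler expects, should report uint8 input and output tensors:

import tensorflow as tf

# Load the converted embedding extractor into the TF Lite interpreter.
interpreter = tf.lite.Interpreter(
    model_path="mobilenet_v1_embedding_extractor.tflite")
interpreter.allocate_tensors()

# A fully quantized model should report numpy.uint8 for both tensors.
print(interpreter.get_input_details()[0]["dtype"])
print(interpreter.get_output_details()[0]["dtype"])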

Upvotes: 0

Views: 568

Answers (1)

qiqix

Reputation: 46

It seems that some flags for tflite_convert are missing. We will fix this on the website ASAP. Please try:

tflite_convert \
--output_file=mobilenet_v1_embedding_extractor.tflite \
--graph_def_file=mobilenet_v1_1.0_224_quant_frozen.pb \
--inference_type=QUANTIZED_UINT8 \
--mean_values=128 \
--std_dev_values=128 \
--input_arrays=input \
--output_arrays=MobilenetV1/Logits/AvgPool_1a/AvgPool

These flags indicate that you'd like to convert to a quantized model, which is currently the only format the Edge TPU compiler accepts. With these flags, it should work fine.
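
For completeness, a rough Python-API equivalent of the command above (a sketch assuming TensorFlow 1.x, where tf.lite.TFLiteConverter.from_frozen_graph and tf.lite.constants.QUANTIZED_UINT8 are available; the file names simply mirror the CLI flags):

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="mobilenet_v1_1.0_224_quant_frozen.pb",
    input_arrays=["input"],
    output_arrays=["MobilenetV1/Logits/AvgPool_1a/AvgPool"])

# Same effect as --inference_type=QUANTIZED_UINT8
converter.inference_type = tf.lite.constants.QUANTIZED_UINT8
# Same effect as --mean_values=128 --std_dev_values=128 (per input array)
converter.quantized_input_stats = {"input": (128.0, 128.0)}

with open("mobilenet_v1_embedding_extractor.tflite", "wb") as f:
    f.write(converter.convert())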

Upvotes: 3
