Reputation: 1
I have developed a model for custom speech recognition by following this tutorial: https://www.tensorflow.org/tutorials/sequences/audio_recognition I customized the model with my own parameters and froze the graph. Now I would like to deploy this model on a Coral Dev Board, so I performed 8-bit quantization-aware training. However, I'm having trouble converting the frozen graph into a TensorFlow Lite model using the tflite_convert tool. The command:
tflite_convert --output_file=model.tflite --graph_def_file=frozen.pb --input_arrays=wav_data --output_arrays=labels_softmax --inference_type=QUANTIZED_UINT8
returns the following error:
ValueError: Provide an input shape for input array 'wav_data'.
How can I find the correct values for the requested parameters? Any idea? Thanks.
Upvotes: 0
Views: 468
Reputation: 2878
I think you might be using the wrong input node. 'wav_data' is a DecodeWav op that takes in the contents of a .wav file, but you'll probably want to pass in raw sample data captured from a microphone, which goes into 'decoded_sample_data' instead. Here are the arguments I typically pass to toco in this case:
--input_shapes=16000,1:1 --input_arrays=decoded_sample_data,decoded_sample_data:1
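For reference, here is a sketch of those arguments merged into the command from the question. This assumes the file names and output node from the question, and that the graph follows the tutorial's defaults (one second of 16 kHz audio); depending on your quantization setup, tflite_convert may also require additional flags such as --mean_values/--std_dev_values, whose values depend on your training and are not shown here:

```shell
# Sketch only: file names and node names taken from the question above.
tflite_convert \
  --output_file=model.tflite \
  --graph_def_file=frozen.pb \
  --input_arrays=decoded_sample_data,decoded_sample_data:1 \
  --input_shapes=16000,1:1 \
  --output_arrays=labels_softmax \
  --inference_type=QUANTIZED_UINT8
```

The shapes are matched to the input arrays positionally: 'decoded_sample_data' is the 16000x1 buffer of audio samples, and 'decoded_sample_data:1' is the scalar sample-rate input, which is why its shape is just 1.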
Upvotes: 1