jk78346

Reputation: 41

tf converter: all ops mapped, but still encountering an unresolved custom op

In order to deploy a dummy model from TF to an Edge TPU board, I made a model whose operations all pass and are mapped to the Edge TPU. However, when I use the tflite interpreter to run inference, it shows this:

Traceback (most recent call last):
  File "run_model.py", line 6, in <module>
    interpreter.allocate_tensors()
...
RuntimeError: Encountered unresolved custom op: edgetpu-custom-op.Node number 0 (edgetpu-custom-op) failed to prepare.
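
run_model.py is essentially the following (a minimal sketch; the model path is a placeholder):

from tflite_runtime.interpreter import Interpreter

# Sketch of run_model.py; note that no Edge TPU delegate is passed here.
interpreter = Interpreter(model_path='add_model_edgetpu.tflite')
interpreter.allocate_tensors()  # raises the RuntimeError above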

And this is my simple addition model as shown by the edgetpu_compiler -s option:

Operator                       Count      Status

ADD                            1          Mapped to Edge TPU
QUANTIZE                       2          Mapped to Edge TPU

I checked: tf.add should be able to execute on either the CPU or the Edge TPU.
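
For reference, the model was built and fully quantized along these lines (a sketch; the exact names and converter settings may differ from my actual code):

import numpy as np
import tensorflow as tf

# A simple addition model matching the UINT8 [2, 3] tensors shown below.
class AddModel(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([2, 3], tf.float32)])
    def add(self, x):
        return tf.add(x, x)

model = AddModel()
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [model.add.get_concrete_function()])
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Representative data drives the full-integer quantization.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(2, 3).astype(np.float32)]

converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
with open('add_model.tflite', 'wb') as f:
    f.write(converter.convert())
# Then compiled for the Edge TPU with: edgetpu_compiler -s add_model.tflite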

But one weird thing is that when I inspect the model with visualize.py, it shows the following:

Tensors
index  name   type   shape   buffer  quantization
0      input  UINT8  [2, 3]  0       {'quantized_dimension': 0, 'scale': [0.003921], 'details_type': 'NONE', 'zero_point': [0]}
1      out    UINT8  [2, 3]  0       {'quantized_dimension': 0, 'scale': [0.027404], 'details_type': 'NONE', 'zero_point': [0]}

Ops
index  inputs  outputs  builtin_options  opcode_index
0      [0]     [1]      None             CUSTOM (0)

So my question is: why is the addition operation still listed as a CUSTOM op here? And is this why allocate_tensors() fails to recognize it?

Upvotes: 2

Views: 2884

Answers (1)

lutybr

Reputation: 55

I had the same issue. Then I re-read the instructions at https://coral.withgoogle.com/docs/edgetpu/tflite-python/ and saw I was missing two things:

  1. Importing the load_delegate function:

from tflite_runtime.interpreter import load_delegate

  2. Passing the Edge TPU delegate to the interpreter:

interpreter = Interpreter(model_path,
    experimental_delegates=[load_delegate('libedgetpu.so.1.0')])

After these changes my model runs on the Coral board. The edgetpu_compiler fuses the ops it maps into a single edgetpu-custom-op node, which only the Edge TPU delegate knows how to execute; without the delegate, the plain interpreter cannot resolve that node and fails at allocate_tensors().
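
Putting it together, a complete run script looks roughly like this (the model path and input values are placeholders):

import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load the Edge TPU delegate so the interpreter can resolve edgetpu-custom-op.
interpreter = Interpreter(
    model_path='add_model_edgetpu.tflite',
    experimental_delegates=[load_delegate('libedgetpu.so.1.0')])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# The question's model takes a UINT8 tensor of shape [2, 3].
input_data = np.random.randint(0, 256, size=(2, 3), dtype=np.uint8)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))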

Upvotes: 3
