bobharris

Reputation: 51

Illegal Instruction when invoking Tensorflow Lite .tflite CNN model from C++

I am getting an Illegal Instruction when using the following line of code to invoke a Tensorflow Lite .tflite model.

The platform is Raspbian Stretch running on a BeagleBone Black.

if (interpreter->Invoke() != kTfLiteOk) {
  std::cout << "Failed to invoke tflite!\n";
}

I have successfully used the same code with a converted pure ANN model; however, with CNN-type models I hit this problem.

Attached is a gdb backtrace.

I have also tried to invoke several other Tensorflow Lite hosted models (mobilenet and squeezenet) and am hit with the same thing. The structure of the converted model is also displayed above the backtrace.

The backtrace is:

input(0) name: images
0: ArgMax, 8, 4, 0, 0
1: ArgMax/dimension, 4, 2, 0, 0
2: ConvNet/Reshape, 45120, 1, 0, 0
3: ConvNet/Reshape/shape, 16, 2, 0, 0
4: ConvNet/conv2d/Conv2D_bias, 64, 1, 0, 0
5: ConvNet/conv2d/Relu, 674880, 1, 0, 0
6: ConvNet/conv2d/kernel, 1024, 1, 0, 0
7: ConvNet/conv2d_1/Conv2D_bias, 128, 1, 0, 0
8: ConvNet/conv2d_1/Relu, 299520, 1, 0, 0
9: ConvNet/conv2d_1/kernel, 18432, 1, 0, 0
10: ConvNet/dense/BiasAdd, 1024, 1, 0, 0
11: ConvNet/dense/MatMul_bias, 1024, 1, 0, 0
12: ConvNet/dense/kernel/transpose, 19169280, 1, 0, 0
13: ConvNet/dense_1/BiasAdd, 8, 1, 0, 0
14: ConvNet/dense_1/MatMul_bias, 8, 1, 0, 0
15: ConvNet/dense_1/kernel/transpose, 2048, 1, 0, 0
16: ConvNet/max_pooling2d/MaxPool, 164864, 1, 0, 0
17: ConvNet/max_pooling2d_1/MaxPool, 74880, 1, 0, 0
18: images, 45120, 1, 0, 0
input: 18
About to memcpy
About to invoke mod!

Thread 1 "minimal" received signal SIGILL, Illegal instruction.
0x0007de64 in EigenForTFLite::TensorCostModel<EigenForTFLite::Threanst&, int) ()
(gdb) bt
#0  0x0007de64 in EigenForTFLite::TensorCostModel<EigenForTFLite::Tt const&, int) ()
#1  0x000901aa in void EigenForTFLite::TensorEvaluator<EigenForTFLi>, 1u> const, EigenForTFLite::TensorReshapingOp<EigenForTFLite::DSigenForTFLite::TensorMap<EigenForTFLite::Tensor<float const, 4, 1, iForTFLite::TensorReshapingOp<EigenForTFLite::DSizes<int, 2> const,  1, int>, 16, EigenForTFLite::MakePointer> const> const, EigenForTFevice>::evalProduct<0>(float*) const ()
#2  0x00090bae in tflite::multithreaded_ops::EigenTensorConvFunctorat const*, float*, int, int, int, int, float const*, int, int, int,
#3  0x00091200 in void tflite::ops::builtin::conv::EvalFloat<(tflit, TfLiteConvParams*, tflite::ops::builtin::conv::OpData*, TfLiteTen, TfLiteTensor*) ()
#4  0x0009134e in TfLiteStatus tflite::ops::builtin::conv::Eval<(tfde*) ()
#5  0x00047c2e in tflite::Subgraph::Invoke() ()
#6  0x00013b70 in tflite::Interpreter::Invoke() ()
#7  0x00012fc4 in main ()
(gdb)

Initially I thought I was including some type of Tensorflow operation not supported by Tensorflow Lite, but since the other hosted models don't invoke either, I'm unsure.
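To check that theory, one thing I can do (just a sketch, not part of the failing program above) is dump the builtin op code registered for each node before invoking, to see which TFLite ops the converted model actually resolves to:

// Sketch: list the builtin op code registered for every node in the graph.
// (If an op were truly unresolvable, InterpreterBuilder would already fail,
// so this mainly shows what the graph contains.)
for (int i = 0; i < static_cast<int>(interpreter->nodes_size()); ++i) {
  const auto* node_and_reg = interpreter->node_and_registration(i);
  if (node_and_reg != nullptr) {
    std::cout << "node " << i
              << " builtin_code=" << node_and_reg->second.builtin_code << "\n";
  }
}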

Tensorflow Git tag/version is 1.13.1.

I am compiling the demos out of the source tree with commands like:

CC_PREFIX=arm-linux-gnueabihf- make -j 3 -f tensorflow/lite/tools/make/Makefile TARGET=rpi TARGET_ARCH=armv7l minimal

where minimal is a new makefile target created in

/tensorflow/tensorflow/lite/tools/make/Makefile

More of the code, modified from the minimal and label_image tflite demos:

// Load the model.
std::unique_ptr<tflite::FlatBufferModel> model =
    tflite::FlatBufferModel::BuildFromFile(filename);
TFLITE_MINIMAL_CHECK(model != nullptr);

// Build the interpreter.
tflite::ops::builtin::BuiltinOpResolver resolver;
tflite::InterpreterBuilder builder(*model, resolver);
std::unique_ptr<tflite::Interpreter> interpreter;
builder(&interpreter);
TFLITE_MINIMAL_CHECK(interpreter != nullptr);

// Allocate tensor buffers.
TFLITE_MINIMAL_CHECK(interpreter->AllocateTensors() == kTfLiteOk);
printf("=== Pre-invoke Interpreter State ===\n");
tflite::PrintInterpreterState(interpreter.get());

int input = interpreter->inputs()[0];
LOG(INFO) << "input: " << input << "\n";

std::cout << "About to memcpy\n";

// Copy the input data into the input tensor.
float* input_ptr = interpreter->typed_tensor<float>(input);
memcpy(input_ptr, float_buf, tf_input_size * sizeof(float));

if (interpreter->Invoke() != kTfLiteOk) {
  std::cout << "Failed to invoke tflite!\n";
}
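For completeness (a sketch only, not part of the failing run): once Invoke() succeeds I read the result back the same way. Here I assume a single float output at index 0; the real output index and type depend on the model:

if (interpreter->Invoke() == kTfLiteOk) {
  // Read the first output tensor back out (assumes a float tensor).
  int output = interpreter->outputs()[0];
  float* output_ptr = interpreter->typed_tensor<float>(output);
  std::cout << "first output value: " << output_ptr[0] << "\n";
}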

Any directions appreciated.

::EDIT::

Wow. Running the exact same executable and .tflite (with the same data) on a Raspberry Pi works 100%.

Upvotes: 3

Views: 974

Answers (1)

bobharris

Reputation: 51

I was compiling for the wrong FPU, which only showed up when invoking a CNN and not an ANN. The BeagleBone Black's Cortex-A8 supports NEON/VFPv3 but not VFPv4, so the rpi target's default -mfpu setting presumably emits floating-point instructions the A8 can't execute.

Change the -mfpu flag in rpi_makefile.inc to target NEON (VFPv3):

-mfpu=neon \

TFLite now appears to work much better on the BeagleBone Black.
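If it helps anyone else: before picking an -mfpu value you can check which FPU features the target CPU actually advertises by reading the Features line from /proc/cpuinfo (equivalent to grep Features /proc/cpuinfo; on the BeagleBone Black's Cortex-A8 it should list neon and vfpv3 but not vfpv4). A minimal sketch:

#include <fstream>
#include <iostream>
#include <string>

int main() {
  // Print the ARM "Features" line so you can see whether the CPU
  // advertises neon / vfpv3 / vfpv4 before choosing -mfpu.
  std::ifstream cpuinfo("/proc/cpuinfo");
  std::string line;
  while (std::getline(cpuinfo, line)) {
    if (line.compare(0, 8, "Features") == 0) {
      std::cout << line << "\n";
      break;
    }
  }
  return 0;
}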

Upvotes: 2
