Farynx

Reputation: 113

TensorFlow Lite model always gives the same output no matter the input

My goal is to run a Keras model I have made on my ESP32 microcontroller. I have all the libraries working correctly.

I have created a Keras model using Google Colab that seems to work fine when I give it random test data within Colab. The model has two input features and four different outputs (a multiple-output regression model).

However, when I export the model and load it into my C++ application on the ESP32, it does not matter what the inputs are: it always predicts the same output.

I based my code for loading and running the model in C++ on this example: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/magic_wand/main_functions.cc

And this is my version of the code:

namespace {
    tflite::ErrorReporter* error_reporter = nullptr;
    const tflite::Model* model = nullptr;
    tflite::MicroInterpreter* interpreter = nullptr;
    TfLiteTensor* input = nullptr;
    TfLiteTensor* output = nullptr;
    int inference_count = 0;

    // Create an area of memory to use for input, output, and intermediate arrays.
    // Finding the minimum value for your model may require some trial and error.
    constexpr int kTensorArenaSize = 2 * 2048;
    uint8_t tensor_arena[kTensorArenaSize];
}  // namespace 
static void setup(){
    static tflite::MicroErrorReporter micro_error_reporter;
    error_reporter = &micro_error_reporter;

    model = tflite::GetModel(venti_model);
    if (model->version() != TFLITE_SCHEMA_VERSION) {
        error_reporter->Report(
            "Model provided is schema version %d not equal "
            "to supported version %d.",
            model->version(), TFLITE_SCHEMA_VERSION);
        return;
    }

    // This pulls in all the operation implementations we need.
    // NOLINTNEXTLINE(runtime-global-variables)
    static tflite::ops::micro::AllOpsResolver resolver;

    // Build an interpreter to run the model with.
    static tflite::MicroInterpreter static_interpreter(
            model, resolver, tensor_arena, kTensorArenaSize, error_reporter);
    interpreter = &static_interpreter;

    // Allocate memory from the tensor_arena for the model's tensors.
    TfLiteStatus allocate_status = interpreter->AllocateTensors();
    if (allocate_status != kTfLiteOk) {
        error_reporter->Report("AllocateTensors() failed");
        return;
    }

    // Obtain pointers to the model's input and output tensors.
    input = interpreter->input(0);

    ESP_LOGI("TENSOR SETUP", "input size = %d", input->dims->size);
    ESP_LOGI("TENSOR SETUP", "input size in bytes = %d", input->bytes);
    ESP_LOGI("TENSOR SETUP", "Is input float32? = %s", (input->type == kTfLiteFloat32) ? "true" : "false");
    ESP_LOGI("TENSOR SETUP", "Input data dimentions = %d",input->dims->data[1]);

    output = interpreter->output(0);

    ESP_LOGI("TENSOR SETUP", "output size = %d", output->dims->size);
    ESP_LOGI("TENSOR SETUP", "output size in bytes = %d", output->bytes);
    ESP_LOGI("TENSOR SETUP", "Is input float32? = %s", (output->type == kTfLiteFloat32) ? "true" : "false");
    ESP_LOGI("TENSOR SETUP", "Output data dimentions = %d",output->dims->data[1]);

}

static bool setupDone = false;  /* false so setup() runs on the first task invocation */

static void the_ai_algorithm_task(){

    /* On the first invocation of the task, set up the AI model */
    if (setupDone == false) {
        setup();
        setupDone = true;
    }

    /* Load the input data, i.e. deltaT1 and deltaT2 */
    input->data.f[0] = 2.0f;   /* Different values don't change the output */
    input->data.f[1] = 3.2f;


    // Run inference, and report any error
    TfLiteStatus invoke_status = interpreter->Invoke();
    if (invoke_status != kTfLiteOk) {
        error_reporter->Report("Invoke failed");
        // return;
    }

    /* Retrieve outputs Fan , AC , Vent 1 , Vent 2 */
    double fan = output->data.f[0];
    double ac = output->data.f[1];
    double vent1 = output->data.f[2];
    double vent2 = output->data.f[3];


    ESP_LOGI("TENSOR SETUP", "fan = %lf", fan);
    ESP_LOGI("TENSOR SETUP", "ac = %lf", ac);
    ESP_LOGI("TENSOR SETUP", "vent1 = %lf", vent1);
    ESP_LOGI("TENSOR SETUP", "vent2 = %lf", vent2);
    
}

The model seems to load OK, as the dimensions and sizes are correct, but the output is always the same four values:

fan = 0.0087
ac = 0.54
vent1 = 0.73
vent2 = 0.32

Any idea what could be going wrong? Is it something about my model, or am I just not using the model correctly in my C++ application?

Upvotes: 2

Views: 3403

Answers (2)

Farynx

Reputation: 113

I have found the issue and the answer.

It was not the C++ code; it was the model. Originally I built the model with three hidden layers of 64, 20, and 8 units (I am new to ML, so I was just playing with random values), and that version was giving me the issue.

To solve it, I just changed the hidden layers to 32, 16, and 8 units, and the C++ code started outputting the right values.
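
For reference, here is a minimal sketch of the fixed architecture in Keras. The layer sizes (32, 16, 8) and the two-input/four-output shape are from my posts above; the activations, optimizer, and loss are assumptions, since I never posted my training code.

import tensorflow as tf

# Minimal sketch of the revised model: 2 input features, 4 regression outputs.
# Hidden sizes 32/16/8 are the values that worked for me; the activations,
# optimizer, and loss are assumptions (my training code isn't shown here).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(2,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(4)  # linear outputs for multi-output regression
])
model.compile(optimizer='adam', loss='mse')

# Convert to a TFLite flatbuffer for the ESP32.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open('venti_model.tflite', 'wb') as f:
    f.write(tflite_model)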

Upvotes: 0

Meghna Natraj

Reputation: 691

Could you refer to the "Test the model" section of this notebook, https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/train/train_hello_world_model.ipynb#scrollTo=f86dWOyZKmN9, and verify that the TFLite model produces the correct results?

You can locate the issue by testing, in order: 1) the TF model (which you have done already), 2) the TFLite model, and then 3) the TFLite Micro model (the C source file). A sketch of step 2 is shown below.
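
For step 2, something like the following (the file name and test input are illustrative, not taken from your project) lets you compare the TFLite output against the Keras output for the same input:

import numpy as np
import tensorflow as tf

# Load the converted TFLite model and run a single inference.
# 'venti_model.tflite' is an illustrative name; use your converted file.
interpreter = tf.lite.Interpreter(model_path='venti_model.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Match the dtype and shape the model expects (here: float32, shape [1, 2]).
x = np.array([[2.0, 3.2]], dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], x)
interpreter.invoke()

print('TFLite output:', interpreter.get_tensor(output_details[0]['index']))
# Compare this against model.predict(x) from the Keras model. If the TFLite
# output is already constant for varying inputs, the problem is in the model
# or the conversion, not in the C++ code.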

You also need to verify that the inputs you pass to the model have the same type and distribution as the training data. For example, if your TF model was trained on images with values in the range 0-255, you need to pass the same range to the TFLite and TFLite Micro models. If, instead, you trained on preprocessed data (0-255 normalized to 0-1 during training), then you need to apply the same preprocessing before feeding the TFLite and TFLite Micro models.
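
As a sketch of what "same preprocessing" means in practice (the normalization constants below are placeholders, not values from your model):

import numpy as np

# Placeholder normalization statistics; substitute whatever scaling you
# applied to the training data (these are NOT values from your model).
FEATURE_MEAN = np.array([0.0, 0.0], dtype=np.float32)
FEATURE_STD = np.array([1.0, 1.0], dtype=np.float32)

def preprocess(raw_features):
    """Apply the same scaling at inference time as during training."""
    x = np.asarray(raw_features, dtype=np.float32)
    return (x - FEATURE_MEAN) / FEATURE_STD

x = preprocess([2.0, 3.2]).reshape(1, 2)
# The identical transformation must also run on the ESP32 before the values
# are written into input->data.f[...]; otherwise the model sees inputs from
# a distribution it was never trained on.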

Upvotes: 1
