Tensor Allocation Issue in TinyML Deployment on Arduino Nano 33 BLE Sense

I'm working on a TinyML project using an Arduino Nano 33 BLE Sense microcontroller with a built-in 9-axis inertial measurement unit (IMU) sensor (LSM9DS1). My goal is to deploy a gesture recognition model using TensorFlow Lite for Microcontrollers. I have followed the steps to collect data, train the model, and deploy it onto the microcontroller, but I am encountering an error during inference. Here's my code:
#include <Arduino.h>
#include <TensorFlowLite.h>
#include "model.h"
#include <Wire.h>
#include <Arduino_L

const int kTensorArenaSize = 10 * 1024;
uint8_t tensor_arena[kTensorArenaSize];

const tflite::Model* model = tflite::GetModel(g_model);
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
TfLiteTensor* output = nullptr;

void setup() {
  Serial.begin(115200);
  while (!Serial);

  if (!IMU.begin()) {
    Serial.println("Failed to initialize IMU!");
    while (1);
  }

  static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);
  interpreter = &static_interpreter;

  if (interpreter->AllocateTensors() != kTfLiteOk) {
    Serial.println("Error allocating tensors!");
    return;
  }

  input = interpreter->input(0);
  output = interpreter->output(0);
}

void loop() {
  float x, y, z;

  if (IMU.accelerationAvailable()) {
    IMU.readAcceleration(x, y, z);

    input->data.f[0] = x;
    input->data.f[1] = y;
    input->data.f[2] = z;

    if (interpreter->Invoke() != kTfLiteOk) {
      Serial.println("Error during inference!");
      return;
    }

    float gesture = output->data.f[0];
    Serial.print("Detected gesture: ");
    Serial.println(gesture);
  }

  delay(100);
}
7 Replies
Enthernet Code · 3mo ago
How can I resolve this tensor allocation issue, and what further steps should I take to ensure successful deployment of TinyML models on memory-constrained microcontrollers?
Crratite - Ludoplex
What size is the ML model binary?
Marvee Amasi · 3mo ago
Hey @Enthernet Code, in your source code the `#include <Arduino_L` line is incomplete. You should replace it with the correct library for the IMU sensor. Since you're using the Arduino Nano 33 BLE Sense, include the library for the onboard IMU:
#include <Arduino_LSM9DS1.h>
Solution
Marvee Amasi · 3mo ago
You would also notice that static_interpreter is being used but never declared or initialized. Use tflite::MicroInterpreter the right way: instead of the bare static_interpreter call, actually declare and construct the interpreter, e.g.:
static tflite::MicroInterpreter static_interpreter(model, resolver, tensor_arena, kTensorArenaSize);
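For context, here is a minimal sketch of what the corrected setup() could look like. This assumes the all-ops resolver for brevity (tflite::AllOpsResolver from all_ops_resolver.h; on newer TFLM versions you would register only the ops your model needs via MicroMutableOpResolver instead), and it assumes g_model from model.h as in the original code:

```
#include <TensorFlowLite.h>
#include "model.h"  // defines g_model

constexpr int kTensorArenaSize = 10 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];

const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;

void setup() {
  model = tflite::GetModel(g_model);

  // The resolver must be declared before constructing the interpreter;
  // it was missing entirely in the original sketch.
  static tflite::AllOpsResolver resolver;

  // Function-local static: constructed once, and stays alive after
  // setup() returns so the interpreter pointer remains valid in loop().
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);
  interpreter = &static_interpreter;

  if (interpreter->AllocateTensors() != kTfLiteOk) {
    // If allocation fails here, the usual fix is to grow kTensorArenaSize.
    while (1);
  }
}
```

If AllocateTensors() still fails after this fix, the arena is likely too small for the model's activations; increasing kTensorArenaSize in steps (e.g. 10 KB at a time) until allocation succeeds is a common approach on this board.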
Enthernet Code · 3mo ago
30 KB
Enthernet Code · 3mo ago
Ooh sorry, that must have been a typographical error while copying my code here.
Enthernet Code · 3mo ago
Would try this out