Tensor Allocation Issue in TinyML Deployment on Arduino Nano 33 BLE Sense
I'm working on a TinyML project using an Arduino Nano 33 BLE Sense microcontroller with a built-in 9-axis inertial measurement unit (IMU) sensor (LSM9DS1). My goal is to deploy a gesture recognition model using TensorFlow Lite for Microcontrollers. I have followed the steps to collect data, train the model, and deploy it onto the microcontroller, but I am encountering an error during inference.
Here's my code:

How can I resolve this tensor allocation issue, and what further steps should I take to ensure successful deployment of TinyML models on memory-constrained microcontrollers?

Solution
You'll notice that `static_interpreter` is being used but never properly initialized. Use `tflite::MicroInterpreter` the right way: correct the initialization of the interpreter so that, instead of a bare `static_interpreter`, you construct it with something like

```
static tflite::MicroInterpreter static_interpreter(model, resolver, tensor_arena, kTensorArenaSize);
```
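For context, a minimal sketch of the full interpreter setup, following the pattern of the TFLite Micro hello_world example. The arena size, the op list, and the `g_model` array name are assumptions here; adapt them to your exported model, and note that API details vary between TFLM versions:

```
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Assumption: 8 KB arena -- tune this for your model.
constexpr int kTensorArenaSize = 8 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];

void SetupInterpreter() {
  // g_model is the byte array exported from your trained .tflite file.
  const tflite::Model* model = tflite::GetModel(g_model);

  // Register only the ops the gesture model uses (hypothetical list).
  static tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddFullyConnected();
  resolver.AddRelu();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);

  // Tensor allocation errors surface here: if the arena is too small,
  // AllocateTensors() does not return kTfLiteOk.
  if (static_interpreter.AllocateTensors() != kTfLiteOk) {
    // Increase kTensorArenaSize and rebuild.
  }
}
```

Checking the return value of `AllocateTensors()` is what turns a silent crash during inference into a diagnosable arena-sizing problem.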
What size is the ML model binary?
Hey @Enthernet Code, in your source code the `#include <Arduino_L` line is incomplete. You should replace it with the correct library for the IMU sensor. Since you're using the Arduino Nano 33 BLE Sense, include the library for the onboard IMU.
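For reference, a minimal sketch assuming the truncated include was meant to be the official Arduino_LSM9DS1 library for the Nano 33 BLE Sense's onboard IMU:

```
// Assumption: the intended library is Arduino_LSM9DS1, which drives the
// onboard LSM9DS1 IMU on the Nano 33 BLE Sense.
#include <Arduino_LSM9DS1.h>

void setup() {
  Serial.begin(9600);
  if (!IMU.begin()) {  // initialize the LSM9DS1
    Serial.println("Failed to initialize IMU!");
    while (1);
  }
}

void loop() {
  float ax, ay, az;
  if (IMU.accelerationAvailable()) {
    IMU.readAcceleration(ax, ay, az);  // acceleration in g on each axis
  }
}
```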
30 KB
Ooh sorry, that must have been a typographical error while copying my code here. I'll try this out.