wafa_ath
DIIDevHeads IoT Integration Server
Created by wafa_ath on 7/30/2024 in #firmware-and-baremetal
How to Resolve RuntimeError in TensorFlow Lite for Microcontrollers on ESP32?
Thank you for the suggestion! I've enabled verbose logging using the TF_LITE_REPORT_ERROR macro and here's the detailed error message I received:
Model inference started...
Node 1: Custom Op1 executed successfully
Node 2: Conv2D executed successfully
Node 3: DepthwiseConv2D executed successfully
Node 4: FullyConnected executed successfully
Node 5: Custom Op2 executed successfully
...
Node 12: Softmax executed successfully
Node 13: LSTM encountered an error
Error: Internal error: Failed to run model at node 13 with status 1
TensorFlow Lite Micro Interpreter Error:
Node type: LSTM
Node ID: 13
Status: kTfLiteError (1)
Error message: Memory allocation failed
Additional details:
Input tensor: Shape [1, 128]
Output tensor: Shape [1, 64]
Activation tensor: Shape [1, 256]
Total required memory: 2048 bytes
Available memory: 1024 bytes
Model inference terminated with error.
It seems the LSTM node is hitting a memory allocation problem. Are there specific memory management strategies or configurations in TensorFlow Lite for Microcontrollers that could help improve the stability of NLP models on the ESP32? Thanks again for your help!
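For context on what the log is pointing at: in TFLite Micro, every tensor (inputs, outputs, and scratch/activation buffers) is carved out of a single caller-supplied tensor arena, and a kTfLiteError with "Memory allocation failed" at a node usually means that arena ran out partway through allocation. A minimal sketch of the usual first fix, enlarging the arena (the 20 KiB size and the names here are illustrative assumptions, not taken from the thread; only the 2048/1024-byte figures come from the log above):

```cpp
#include <cstdint>

// Figures reported by the error log at node 13 (LSTM).
constexpr int kRequiredBytes  = 2048;  // memory the LSTM node asked for
constexpr int kAvailableBytes = 1024;  // what was left in the arena

// The arena must grow by at least the shortfall (1024 bytes here); in
// practice size it generously, since recurrent ops like LSTM need extra
// scratch space for their state and activation tensors. 20 KiB is an
// assumed starting point -- tune it for your model.
constexpr int kTensorArenaSize = 20 * 1024;
alignas(16) static uint8_t tensor_arena[kTensorArenaSize];

// The interpreter would then be constructed roughly as:
//   tflite::MicroInterpreter interpreter(model, resolver,
//                                        tensor_arena, kTensorArenaSize);
//   interpreter.AllocateTensors();

static_assert(kTensorArenaSize >= kRequiredBytes,
              "arena must at least cover the node that failed");
```

On the ESP32 specifically, if the arena no longer fits in internal SRAM, a common follow-up is to place it in external PSRAM (when the board has it) rather than shrinking the model.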
5 replies