DIIDevHeads IoT Integration Server
•Created by wafa_ath on 10/18/2024 in #middleware-and-os
Segmentation Fault with Flatten() in TensorFlow Lite for Microcontrollers on STM32F746NG
I'm building a CNN model to run on an STM32F746NG Discovery board, following the "TensorFlow Lite for Microcontrollers" tutorials and the TinyML book. I know that the supported operators are listed in the all_ops_resolver.cc file (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/all_ops_resolver.cc), but I'm having trouble with the Flatten() layer. It doesn't seem to be listed, even though it's such a basic operation. I'm using other operators from that list, and my model looks like this:
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential()
model.add(layers.Conv2D(16, (3, 3), activation='relu', input_shape=(36, 36, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(32, (3, 3), activation='relu'))  # input_shape only needed on the first layer
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())  # Potential issue here
model.add(layers.Dense(8, activation='softmax'))
model.add(layers.Dense(2))
model.summary()
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
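For what it's worth, here is a minimal sketch of how I've been checking whether the Flatten() path survives conversion at all (on the desktop reference interpreter, not the board). The tiny model shapes here are made up for the test; the idea is that the converter lowers Flatten() to a reshape, so if the converted model runs end to end, the layer itself produced a valid op:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Tiny stand-in model with a Flatten() layer (shapes are arbitrary).
model = models.Sequential([
    layers.Input(shape=(4, 4, 1)),
    layers.Conv2D(2, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(2),
])

# Convert to a TFLite flatbuffer, as the TinyML workflow does.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Run it on the reference interpreter to confirm the converted graph
# (including whatever op Flatten() was lowered to) executes cleanly.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp['index'], np.zeros(inp['shape'], dtype=np.float32))
interpreter.invoke()
print(out['shape'])  # output is the flattened-and-densified shape, e.g. [1 2]
```

If this runs on the desktop but still faults on the STM32, that would point at the on-device setup (arena size, op resolver registrations) rather than the layer itself.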
I get a segmentation fault when running the model on the STM32F746NG board, no matter how much memory I allocate. I suspect the issue is related to the Flatten() layer, so I wanted to ask: is Flatten() supported by TensorFlow Lite for Microcontrollers? If not, does it go by a different name, or is there an alternative I should use? I'm trying to figure out whether the fault is specific to this operation or to something else in my setup.
Has anyone encountered this issue or found a workaround for Flatten() on the STM32F746NG with TensorFlow Lite?