Segmentation Fault with Flatten() in TensorFlow Lite for Microcontrollers on STM32F746NG

I'm building a CNN model to run on an STM32F746NG Discovery board, following the "TensorFlow Lite for Microcontrollers" tutorials and the TinyML book. I know that the supported operations are listed in the all_ops_resolver.cc file (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/all_ops_resolver.cc), but I'm having trouble with the Flatten() function. It doesn't seem to be listed, even though it's such a basic operation. I'm using other functions from the list, and my model looks like this:

model = models.Sequential()
model.add(layers.Conv2D(16, (3, 3), activation='relu', input_shape=(36, 36, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(36, 36, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu', input_shape=(36, 36, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())  # Potential issue here
model.add(layers.Dense(8, activation='softmax'))
model.add(layers.Dense(2))
model.summary()
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

I get a segmentation fault when running the model on the STM32F746NG board, no matter how much memory I allocate. I suspect the issue might be related to the Flatten() function, so I wanted to ask: is Flatten() supported by TensorFlow Lite for Microcontrollers? If not, does it go by a different name, or is there an alternative I should use? I'm trying to figure out whether the fault is specific to this function or something else in my setup. Has anyone encountered this issue or found a workaround for Flatten() on the STM32F746NG with TensorFlow Lite?
5 Replies
Marvee Amasi
Marvee Amasi3mo ago
Since Flatten() is a simple operation that reshapes data from a multi-dimensional array into a single-dimensional array, you can replace it with a Reshape layer, which is supported in TensorFlow Lite for Microcontrollers.
Marvee Amasi
Marvee Amasi3mo ago
You can reshape the data without relying on Flatten() by adding a Reshape layer.
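For intuition, Flatten() and a per-example Reshape to (-1,) produce identical values; here is a quick NumPy sketch (the shapes are an assumption chosen to match the 2x2x64 output of the model's last pooling layer):

```python
import numpy as np

# Simulated batch of feature maps after the last MaxPooling2D layer:
# 4 examples, each 2x2 spatial with 64 channels.
x = np.arange(4 * 2 * 2 * 64, dtype=np.float32).reshape(4, 2, 2, 64)

# What Flatten() does: keep the batch axis, collapse everything else.
flattened = x.reshape(x.shape[0], -1)

# What Reshape((-1,)) does in Keras: the same collapse, per example.
reshaped = x.reshape(x.shape[0], -1)

print(flattened.shape)                      # (4, 256)
print(np.array_equal(flattened, reshaped))  # True
```

So swapping one layer for the other should not change the numbers flowing into the Dense layers, only which op the converter emits.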
Marvee Amasi
Marvee Amasi3mo ago
This might be helpful
Renuel Roberts
Renuel Roberts3mo ago
@wafa_ath You're encountering a segmentation fault when running your CNN model on the STM32F746NG board using TensorFlow Lite, specifically around the Flatten() function. Based on your description, the issue might stem from the fact that Flatten() is not explicitly listed in the all_ops_resolver.cc file. Although Flatten() is a common operation in standard TensorFlow models, it isn't exposed as its own op in TensorFlow Lite for Microcontrollers. However, you can achieve the same functionality by replacing the Flatten() layer with a Reshape layer, which is supported and performs the equivalent operation: flattening a multi-dimensional tensor into a 1D array. Here's how you can modify your model:
model = models.Sequential()
model.add(layers.Conv2D(16, (3, 3), activation='relu', input_shape=(36, 36, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(32, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))

# Replace Flatten() with Reshape
model.add(layers.Reshape((-1,))) # This layer automatically flattens the tensor

model.add(layers.Dense(8, activation='softmax'))
model.add(layers.Dense(2))

model.summary()

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
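As a sanity check on what that Reshape has to produce, you can trace the feature-map shapes by hand. A small pure-Python sketch (assuming 'valid' padding for the 3x3 convs and floor division for the 2x2 pools, which are the Keras defaults):

```python
# Trace output shapes through the model's conv/pool stack
# (Conv2D with 3x3 kernel, 'valid' padding; MaxPooling2D 2x2).
h = w = 36
channels = 1
for filters in (16, 32, 64):
    h, w = h - 2, w - 2    # 3x3 'valid' conv shrinks each side by 2
    channels = filters
    h, w = h // 2, w // 2  # 2x2 max pool halves (floor) each side
    print(f"after conv+pool: {h}x{w}x{channels}")

flat = h * w * channels
print("flattened size:", flat)  # 2 * 2 * 64 = 256 values per example
```

If the shape going into the first Dense layer on-device doesn't match this 256-element vector, that mismatch (rather than the op itself) could be what's corrupting memory.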
wafa_ath
wafa_ath3mo ago
Thanks so much for the suggestion! I tried replacing Flatten() with Reshape((-1,)), but I’m still getting the same segmentation fault after that layer on the STM32F746NG. I also checked and adjusted memory allocation, but no luck so far. I’m wondering if the issue might be related to the overall model size or how the layers are handled during deployment. If anyone has other ideas or suggestions on how to fix this, I’d love to hear them! Thanks again for the help—I really appreciate it!
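Since the fault persists after swapping in Reshape, another suspect is the tensor arena being too small for the model's activations. The arena must hold at least the largest live tensors plus interpreter overhead, so the figures below are only a lower bound; the layer shapes are an assumption based on 'valid' 3x3 convs and 2x2 pools from a 36x36x1 input:

```python
# Rough per-layer activation sizes for the model, to gauge how big the
# TFLM tensor arena needs to be at minimum.
shapes = [
    (36, 36, 1),   # input
    (34, 34, 16),  # conv1
    (17, 17, 16),  # pool1
    (15, 15, 32),  # conv2
    (7, 7, 32),    # pool2
    (5, 5, 64),    # conv3
    (2, 2, 64),    # pool3 / flattened (256)
]
for dtype, bytes_per in (("float32", 4), ("int8", 1)):
    peak = max(h * w * c for h, w, c in shapes) * bytes_per
    print(f"{dtype}: largest single activation ~{peak} bytes "
          f"({peak / 1024:.1f} KiB)")
```

The first conv output alone is about 72 KiB in float32, so an arena sized in the tens of kilobytes would fail allocation; full int8 quantization cuts that by 4x and is usually the practical choice on an STM32F7.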