How to Build ARM Firmware Using Bazel?

Hey @araki, we need some help from you please; we are in the same boat you were in. We are trying to get ARM firmware built using Bazel and can't seem to get it going. @undefined.elf is working on this now, could we get some help from you please?
112 Replies
araki
araki2mo ago
I didn't use Bazel. Since I'm using Cube IDE, I used this generation script: https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/docs/new_platform_support.md#step-1-build-tflm-static-library-with-reference-kernels It outputs a directory that I included in my project, and I added the necessary directories as "include" folders in the build settings. I still needed to paste further directories from the tflite-micro project itself, as I was getting errors about specific header files not being found when building.
araki
araki2mo ago
But once I successfully integrated it into my project, I was getting a lot of errors from the tensorflow code itself. So I have temporarily set my project aside for now. I'm away without my laptop, but I can answer any questions.
undefined.elf
undefined.elf2mo ago
I saw that
undefined.elf
undefined.elf2mo ago
after adding tflite with X-Cube-AI my project won't even build
undefined.elf
undefined.elf2mo ago
so I left the cubeide
undefined.elf
undefined.elf2mo ago
now trying some other Baaazeeel way
araki
araki2mo ago
GitHub
tflite-micro/tensorflow/lite/micro/examples/hello_world at main · tensorflow/tflite-micro
Infrastructure to enable deployment of ML models to low-power resource-constrained embedded targets (including microcontrollers and digital signal processors). - tensorflow/tflite-micro
araki
araki2mo ago
I didn't use X-cube-AI. I tried to just integrate the whole micro directory into my build. For me, it seems like they want us to follow their own directory tree and build system instead of using it as a library in another project
ZacckOsiemo
ZacckOsiemo2mo ago
Ah so Bazel is the way
undefined.elf
undefined.elf2mo ago
we can probably change it, but we would have to change quite a lot
undefined.elf
undefined.elf2mo ago
I would say I'm most probably at the last stage before success
undefined.elf
undefined.elf2mo ago
one more error fixed and I am good to go
ZacckOsiemo
ZacckOsiemo2mo ago
and such his epitaph read
undefined.elf
undefined.elf2mo ago
/usr/bin/arm-none-eabi-gcc -MD -MF bazel-out/k8-fastbuild/bin/_objs/firmware/main.d '-frandom-seed=bazel-out/k8-fastbuild/bin/_objs/firmware/main.o' -iquote . -iquote bazel-out/k8-fastbuild/bin -iquote external/bazel_tools -iquote bazel-out/k8-fastbuild/bin/external/bazel_tools '-mcpu=cortex-m4' -mthumb '-mfloat-abi=hard' '-mfpu=fpv4-sp-d16' '-mcpu=cortex-m4' -mthumb -g -O2 -ffunction-sections -fdata-sections -DSTM32F446xx -c Sources/main.c -o bazel-out/k8-fastbuild/bin/_objs/firmware/main.o)
# Configuration: f484d465c2bae92f555f3b567bbd4c38bbdaf165e5a9c796aa82359aca108a23
# Execution platform: @@local_config_platform//:host
undefined.elf
undefined.elf2mo ago
see my toolchain is now changed
undefined.elf
undefined.elf2mo ago
Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
Sources/main.c:1:10: fatal error: stm32f446xx.h: No such file or directory
1 | #include "stm32f446xx.h"
| ^~~~~~~~~~~~~~~
compilation terminated.
Target //:firmware failed to build
INFO: Elapsed time: 0.180s, Critical Path: 0.04s
bad news, I was unable to handle this include; Bazel isn't finding the CMSIS device header on the include path
undefined.elf
undefined.elf2mo ago
I tried many ways but sadly it didn't work
undefined.elf
undefined.elf2mo ago
I hope it will work by tomorrow morning
ZacckOsiemo
ZacckOsiemo2mo ago
don't be weary, issues are finite
daleonpz
daleonpz2mo ago
GitHub
cmake-template/src at example/hello_world_stm32f746G_disc · daleonpz/cmake-template
Reusable project skeleton for embedded C & C++ projects using CMake. - daleonpz/cmake-template
daleonpz
daleonpz2mo ago
GitHub
cmake-buildsystem/toolchains/cross/cortex-m7_hardfloat.cmake at 04d...
Helper scripts, cross-compilation-files, makefile shims, and other helpful tools for working with CMake - daleonpz/cmake-buildsystem
undefined.elf
undefined.elf2mo ago
with cmake we have no issue man
undefined.elf
undefined.elf2mo ago
but we have to use bazel
ZacckOsiemo
ZacckOsiemo2mo ago
Hmmm @undefined.elf try finding a guide to migrate from cmake to Bazel
daleonpz
daleonpz2mo ago
GitHub
GitHub - bazelembedded/bazel-embedded: Tools for embedded/bare-metal development using bazel
Tools for embedded/bare-metal development using bazel - bazelembedded/bazel-embedded
wafa_ath
wafa_ath2mo ago
@ZacckOsiemo, did you try to convert the model to a C array?
ZacckOsiemo
ZacckOsiemo2mo ago
the model is converted; it's running the inference that's the issue
araki
araki2mo ago
@undefined.elf I think I got it working. Run make -f tensorflow/lite/micro/tools/make/Makefile TARGET=cortex_m_generic TARGET_ARCH=cortex-m4+fp microlite to create a libtensorflow-microlite.a library file. Then add it to your stm project and add it as a library to link in build settings. Include the necessary header files as well.
araki
araki2mo ago
I just made an inference with their hello world example. I keep getting an output of 52, which I don't know is correct or not, but at least it is working.
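A hedged aside: if that hello_world model is the int8-quantized one, a raw 52 still needs to be dequantized with the output tensor's scale and zero point before comparing it against sin(x). A minimal sketch, assuming interpreter is the tflite::MicroInterpreter that just ran Invoke():

// Hypothetical: 'interpreter' is the MicroInterpreter after a successful Invoke().
TfLiteTensor* output = interpreter.output(0);
// For an int8-quantized model the raw output (e.g. 52) is a quantized integer;
// map it back to a float with the tensor's quantization parameters.
int8_t y_q = output->data.int8[0];
float y = (y_q - output->params.zero_point) * output->params.scale;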
undefined.elf
undefined.elf2mo ago
let me look into it
undefined.elf
undefined.elf2mo ago
bazel is killing me man
undefined.elf
undefined.elf2mo ago
trying it since last week
undefined.elf
undefined.elf2mo ago
non stop everyday
undefined.elf
undefined.elf2mo ago
@araki how have you tried to implement it in your project?
undefined.elf
undefined.elf2mo ago
can you please share your github?
araki
araki2mo ago
I'm not making any actual project at the moment. I'm just learning and wanted to get TFLM working inside an STM generated project. Here's a gist on steps I followed:
https://gist.github.com/arkreddy21/427a97d4cd1431ebc766dd70b5dc8104
Gist
Setup TFLM in STM project.md
araki
araki2mo ago
I just solved many errors that happened while trying to make it work. So ask me if there is any specific issue
undefined.elf
undefined.elf2mo ago
I am getting an error related to flatbuffers
undefined.elf
undefined.elf2mo ago
include headers
araki
araki2mo ago
add core/Inc/third_party/flatbuffers/include to the includes directory in build settings
undefined.elf
undefined.elf2mo ago
there is nothing in it
undefined.elf
undefined.elf2mo ago
include is empty
undefined.elf
undefined.elf2mo ago
I compiled the library
undefined.elf
undefined.elf2mo ago
now after that I have to use it in my project, I guess
undefined.elf
undefined.elf2mo ago
I have a libtensorflow-microlite.a
araki
araki2mo ago
you also have to generate a file structure and copy it to your project; see my gist
undefined.elf
undefined.elf2mo ago
Generate TFLM tree
Refer: new_platform_support readme

python3 tensorflow/lite/micro/tools/project_generation/create_tflm_tree.py \
-e hello_world \
-e micro_speech \
-e person_detection \
/your/desired/path/tflm-tree
Run this command to generate tflm-tree directory in your desired path. Copy the tensorflow, third_party and signal folders into your stm32 project includes directory (eg: /core/Inc).

Additionally, you have to copy the fixedpoint and internal directories from the tflite-micro repository that are generated while building the library. You can find them in the tflite-micro/tensorflow/lite/micro/tools/make/downloads/gemmlowp/ directory. You need to add these to your includes directory (core/Inc) as well.

Note: Only header files are needed. You can run the command find . -type f -name "*.cc" -exec rm -f {} + to remove all .cc files in the current directory.

Additionally, you might also need to add the following as includes directories under Project properties -> C/C++ build -> settings -> MCU/MPU g++ compiler -> include paths

"core/Inc/third_party/flatbuffers/include"
"core/Inc/third_party/kissfft"
undefined.elf
undefined.elf2mo ago
this portion?
araki
araki2mo ago
yep
undefined.elf
undefined.elf2mo ago
I want to use my custom model, what should I do?
undefined.elf
undefined.elf2mo ago
any idea
araki
araki2mo ago
This just sets up everything. To run inference, first convert your model into a C array. Then you need to invoke it. I added an example main file to the gist; refer to it.
araki
araki2mo ago
You can also look at the generated examples in /tflm-tree
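Not araki's gist verbatim, but roughly what such an example main file looks like with TFLM; a minimal sketch assuming a hypothetical model_data.h exposing g_model_data[] (the .tflite model converted to a C array) and a model that only needs fully-connected and ReLU ops. Swap in whatever ops and arena size your model actually requires.

#include <cstddef>
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

#include "model_data.h"  // hypothetical: .tflite converted to a C array, exposes g_model_data

namespace {
constexpr size_t kTensorArenaSize = 20 * 1024;  // model-dependent placeholder
alignas(16) uint8_t tensor_arena[kTensorArenaSize];
tflite::MicroInterpreter* interpreter = nullptr;
}  // namespace

// Call once at startup.
bool tflm_setup() {
  const tflite::Model* model = tflite::GetModel(g_model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) return false;

  // Register only the operations the model actually uses.
  static tflite::MicroMutableOpResolver<2> resolver;
  resolver.AddFullyConnected();
  resolver.AddRelu();

  // All tensors are allocated out of the static arena above.
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);
  interpreter = &static_interpreter;
  return interpreter->AllocateTensors() == kTfLiteOk;
}

// Call per sample (float I/O assumed; a quantized model would use data.int8
// plus the tensor's scale/zero_point instead).
bool tflm_predict(float x, float* y_out) {
  interpreter->input(0)->data.f[0] = x;
  if (interpreter->Invoke() != kTfLiteOk) return false;
  *y_out = interpreter->output(0)->data.f[0];
  return true;
}

The C array itself is typically produced with xxd -i on the .tflite file; the symbol name is derived from the file name, so rename it to match whatever header your code includes.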
undefined.elf
undefined.elf2mo ago
I did it to an extent, but have you used the error reporter?
undefined.elf
undefined.elf2mo ago
in my case the error reporter is missing
araki
araki2mo ago
I did not use error reporter but it seems to be present in tflm-tree.
attachment 0
araki
araki2mo ago
Also, be sure to include all the operations your model needs in the OpResolver, and allocate enough arena size
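One hedged way to size that arena, reusing the names from the sketch above: start generously, then ask the interpreter how much it actually consumed after AllocateTensors() and trim with some headroom (printf assumes stdio is retargeted, e.g. to a UART).

// After a successful AllocateTensors(), report how much of the arena was used
// so kTensorArenaSize can be trimmed (leave headroom for runtime variation).
size_t used = interpreter->arena_used_bytes();
printf("tensor arena: %u of %u bytes used\r\n", (unsigned)used, (unsigned)kTensorArenaSize);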
undefined.elf
undefined.elf2mo ago
my bad
undefined.elf
undefined.elf2mo ago
in my case it is also there
undefined.elf
undefined.elf2mo ago
just the path was a bit different
undefined.elf
undefined.elf2mo ago
let me fix it too
undefined.elf
undefined.elf2mo ago
thanks for all the help
undefined.elf
undefined.elf2mo ago
I will write a detailed guide since I am not doing it with CubeIDE
undefined.elf
undefined.elf2mo ago
@araki here comes the nightmare
attachment 0
araki
araki2mo ago
Ah, I guess RTOS and TFLM together are too much. You're doing a release build, right? What board/MCU are you using?
undefined.elf
undefined.elf2mo ago
I have only 128KB ram
undefined.elf
undefined.elf2mo ago
I think too much
32bitSaviour
32bitSaviour2mo ago
I wonder what optimization you will employ now
undefined.elf
undefined.elf2mo ago
I can make release but that won't do much
undefined.elf
undefined.elf2mo ago
-Os
araki
araki2mo ago
Idk about your build system, but STM Cube gave me a 10x smaller size on a release build. Also, the STM32F407 has 128 KB RAM plus an additional 64 KB of core-coupled RAM (CCM), which could be used with some tinkering I think.
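A hedged aside on the CCM point (it applies to the F407 araki mentioned, not the F446 below, which has no CCM): since the tensor arena never needs DMA, it can be pushed into CCM with a section attribute, assuming the linker script provides a section mapped to CCMRAM and the tensor_arena/kTensorArenaSize names from the earlier sketch.

// Hypothetical: place the TFLM tensor arena in the STM32F407's 64 KB CCM RAM.
// CCM is not DMA-accessible, which is fine for the arena; requires a linker
// section mapped to CCMRAM (often named .ccmram in CubeIDE-generated scripts).
__attribute__((section(".ccmram"), aligned(16)))
static uint8_t tensor_arena[kTensorArenaSize];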
undefined.elf
undefined.elf2mo ago
I have 128KB ram
undefined.elf
undefined.elf2mo ago
using stm32f446re
undefined.elf
undefined.elf2mo ago
no extra ram
undefined.elf
undefined.elf2mo ago
I am using optimization though
undefined.elf
undefined.elf2mo ago
but the issue is with my data sample's array and also the model
undefined.elf
undefined.elf2mo ago
model was approx 58kb
araki
araki2mo ago
Well, that itself takes half the RAM. Maybe you need to retrain a smaller network, with quantization and all
undefined.elf
undefined.elf2mo ago
also rtos taking some memory itself
araki
araki2mo ago
It's just 9% over, so I think you could optimize it
undefined.elf
undefined.elf2mo ago
well, the whole firmware is not complete yet
undefined.elf
undefined.elf2mo ago
I haven't created the tasks and assigned them stacks
undefined.elf
undefined.elf2mo ago
I was just trying to do a test compile
araki
araki2mo ago
Oh. I don't really know about your project so can't help much. I don't have experience with rtos either
undefined.elf
undefined.elf2mo ago
yeah, but now I am happy; at least I added tflite-micro to it
araki
araki2mo ago
Yeah that's good. I'm still learning. I'm taking a tinyml course and wanted to use tflite-micro on my board. There's still a long way before I can make any decent project
undefined.elf
undefined.elf2mo ago
same here too
undefined.elf
undefined.elf2mo ago
maybe I should move to some nucleo 144 board
undefined.elf
undefined.elf2mo ago
one that has approximately 1 MB+ of RAM
araki
araki2mo ago
Yeah, a bigger mcu is best when tinkering. You can focus on functionality first and optimization later
ZacckOsiemo
ZacckOsiemo2mo ago
So what is happening, did you minimize enough and get inference running?
undefined.elf
undefined.elf2mo ago
nope, it seems there is not much to minimize; the code I was compiling was without assigning a stack to each task
undefined.elf
undefined.elf2mo ago
so there is more code coming
undefined.elf
undefined.elf2mo ago
also, I have enabled the linker's garbage collection
undefined.elf
undefined.elf2mo ago
that also didn't help much
ZacckOsiemo
ZacckOsiemo2mo ago
ah, so you need to optimize your stack sizes for the tasks? So it's a FreeRTOS issue, not an ML issue
undefined.elf
undefined.elf2mo ago
no no, what I am saying is I haven't even assigned memory for each task yet; before that, it is already overflowing
ZacckOsiemo
ZacckOsiemo2mo ago
Ah then where is your memory going?
undefined.elf
undefined.elf2mo ago
58kb for the model
undefined.elf
undefined.elf2mo ago
and the rest to freeRTOS
undefined.elf
undefined.elf2mo ago
plain FreeRTOS used to consume around 50 KB of my RAM, and the rest of the increase wasn't proportional
ZacckOsiemo
ZacckOsiemo2mo ago
how much RAM is on the st you are using
undefined.elf
undefined.elf2mo ago
128KByte
ZacckOsiemo
ZacckOsiemo2mo ago
so your tasks are taking 70kb?
undefined.elf
undefined.elf2mo ago
I'm a fool
undefined.elf
undefined.elf2mo ago
My stack
ZacckOsiemo
ZacckOsiemo2mo ago
No no, I didn't say that, I am asking if this is indeed the case
undefined.elf
undefined.elf2mo ago
I was using default configuration of freeRTOS
undefined.elf
undefined.elf2mo ago
and that assigned 75kb for heap
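For reference, that heap is a compile-time constant, so it can be dialed down once real task needs are known; a sketch of the relevant FreeRTOSConfig.h lines with illustrative values only (assuming the stock heap_4 allocator):

/* FreeRTOSConfig.h (excerpt) -- values are illustrative, not recommendations. */

/* Total bytes handed to the FreeRTOS heap (stacks created by xTaskCreate,
   queues, etc. come out of this). The ~75 KB default mentioned above can be
   shrunk to what the tasks actually need. */
#define configTOTAL_HEAP_SIZE      ((size_t)(32 * 1024))

/* Default stack depth in words (not bytes), used by the idle task and as a
   baseline for task stacks. */
#define configMINIMAL_STACK_SIZE   ((uint16_t)128)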
ZacckOsiemo
ZacckOsiemo2mo ago
nuance is key
araki
araki2mo ago
David Civera
ST life.augmented Blog
STM32N6: Our very own NPU in the most powerful STM32 to inaugurate ...
The STM32N6 is our newest and most powerful STM32 and the first to come with our Neural-ART Accelerator, a custom neural processing unit (NPU) capable of 600 GOPS, thus allowing machine learning applications that demanded an accelerated microprocessor to now run on an MCU. It’s also our first Cortex-M55 MCU and one of the few in the industry to ...
undefined.elf
undefined.elf2mo ago
it's an interesting MCU but let's see when we can use it
