DevHeads IoT Integration Server
•Created by wafa_ath on 8/8/2024 in #firmware-and-baremetal
How Does STM32 Handle Mixed Precision Weight Transfers in AI Models?
Hi, I am working on AI for embedded systems and I am curious to understand something more hardware-related. I am currently researching mixed precision models, which use different precisions for their weights. My question is about how these weights are moved within the microcontroller. If I understand correctly, each RAM word (and each CPU register) is 32 bits wide, meaning that with an 8-bit representation I can pack four weights into one word, as in the sketch below.
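For concreteness, here is a minimal C sketch of what I mean by packing; the weight values and the function name are just made up for illustration:

```c
#include <stdint.h>
#include <string.h>

/* Minimal sketch (hypothetical values): four int8_t weights occupy one
 * 32-bit word, the natural width of the Cortex-M registers and data bus. */
uint32_t pack_weights(const int8_t w[4])
{
    uint32_t word;
    memcpy(&word, w, sizeof word);   /* the four bytes sit side by side in RAM */
    return word;
}
```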
My question is: when these weights are moved, does the microcontroller (STM32) transfer each bit one by one, or are all the bits of a word moved together? I am asking this to understand the energy consumption, because I want to determine whether it scales with the number of 32-bit word transfers or with the total number of bits moved. This matters to me since moving data is one of the most power-consuming operations when running a neural network; a small example of what I picture as a word-wise transfer follows.
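To make the question concrete, this is roughly what I imagine a word-wise transfer looks like, assuming the weights are stored contiguously in RAM (again, just a sketch, not code from any library):

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch: copying n_words packed int8 weights word by word. With optimization
 * enabled, each iteration typically compiles to one 32-bit LDR/STR pair on a
 * Cortex-M core, so four 8-bit weights move per bus transaction rather than
 * one bit (or one byte) at a time. */
void copy_weights_wordwise(uint32_t *dst, const uint32_t *src, size_t n_words)
{
    for (size_t i = 0; i < n_words; i++) {
        dst[i] = src[i];   /* one 32-bit transfer = four 8-bit weights */
    }
}
```

Is this the right mental model, i.e. is the energy cost better counted per 32-bit transfer than per bit?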