Enthernet Code
DevHeads IoT Integration Server
Created by Enthernet Code on 8/13/2024 in #middleware-and-os
Efficiently Converting and Quantizing a Trained Model to TensorFlow Lite
Hey guys, in continuation of my project Disease Detection from X-Ray Scans Using TinyML, I have finished training my model and would like to know the easiest and most efficient method for converting the trained model to TensorFlow Lite for deployment on a microcontroller. I have already used TensorFlow Lite's converter to produce a .tflite file, but I don't know if that's the best approach. Also, how can I quantize the model to reduce its size and improve inference speed?
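For reference, this is roughly what my conversion step looks like right now, a minimal sketch with no optimization applied (the model name and file paths are placeholders, not my actual files):

```python
import tensorflow as tf

# Load the trained Keras model (path is a placeholder)
model = tf.keras.models.load_model("xray_disease_model.h5")

# Convert to TensorFlow Lite using the standard converter, no quantization yet
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the .tflite file for deployment on the microcontroller
with open("xray_disease_model.tflite", "wb") as f:
    f.write(tflite_model)
```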
5 replies