How do I fix a tensor dimension mismatch in TinyML disease detection?
Hello guys, I'm working on disease detection from X-ray scans using TinyML. I've gathered a diverse dataset of X-ray images from public medical databases, labeled with specific diseases or conditions such as pneumonia, tuberculosis, or normal/healthy cases. I've also prepared my training script, but I keep getting an error while training the model:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "tensorflow/lite/python/interpreter.py", line 42, in set_tensor
self._interpreter.SetTensor(self._tensor_index_map[tensor_index], value)
ValueError: Cannot set tensor: Dimension mismatch. Got [1, 128, 128, 3], expected [1, 64, 64, 1]
here's my code
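It boils down to something like this (simplified; "disease_model.tflite" and the load_xray() helper are placeholders for my actual files):

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="disease_model.tflite")  # placeholder path
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()

image = load_xray("chest_001.png")        # placeholder helper; returns a (128, 128, 3) RGB array
input_data = np.expand_dims(image, 0)     # -> shape (1, 128, 128, 3)
interpreter.set_tensor(input_details[0]["index"], input_data)  # raises the ValueError above
```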
Hello @Enthernet Code, the error you're encountering is due to a mismatch between the input image dimensions your model expects and the actual dimensions of the images being fed into it. Your model expects images with a shape of (64, 64, 1) (grayscale), but the images you're providing have a shape of (128, 128, 3) (colored).
To resolve this, you have two options:
1. Preprocess your images: resize them to (64, 64) and convert them to grayscale before feeding them into the model (see the first sketch below).
2. If you prefer to keep the images at their original size and color, adjust the model's input shape accordingly and reconvert it to TFLite (see the second sketch below).
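For option 1, a minimal sketch of the preprocessing (I'm assuming a float32 model and placeholder file/variable names; check input_details[0]["dtype"] for your actual model):

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="disease_model.tflite")  # placeholder path
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()

def preprocess(image):
    """Convert a (H, W, 3) RGB array into the (1, 64, 64, 1) input the model expects."""
    image = tf.image.rgb_to_grayscale(image)      # (H, W, 3) -> (H, W, 1)
    image = tf.image.resize(image, (64, 64))      # -> (64, 64, 1), float32
    image = image / 255.0                         # scale to [0, 1]; match your training pipeline
    return np.expand_dims(image.numpy(), axis=0)  # -> (1, 64, 64, 1)

input_data = preprocess(x_ray_image)              # x_ray_image: your (128, 128, 3) array
interpreter.set_tensor(input_details[0]["index"], input_data.astype(np.float32))
interpreter.invoke()
prediction = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
```

For option 2, the change happens at training time: rebuild the model with a (128, 128, 3) input and reconvert it to TFLite. A sketch (the layers here are just an example, not your actual architecture):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),      # match the real image shape
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g. pneumonia / tuberculosis / normal
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# ... train as before ...

converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("disease_model.tflite", "wb") as f:
    f.write(converter.convert())
```

Keep in mind that option 2 makes the input tensor roughly 12× larger (128·128·3 vs. 64·64·1), which matters on a memory-constrained TinyML target.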
@RED HAT Thanks for the detailed response, I see the issue now. I'll try the first option since memory usage is a big concern for this project; resizing the images and converting them to grayscale should keep things efficient. If I run into any other issues, I'll reach out. I appreciate the help 👍