Issue with 4-Dimensional Input Error in PyTorch Model Inference

Hello everyone, I'm working on an object recognition project using PyTorch in Python, and I'm encountering an issue with model inference. After successfully training my model, I get the following error during inference:

RuntimeError: Expected 4-dimensional input for 4-dimensional weight [32, 3, 3, 3], but got 3-dimensional input of size [3, 224, 224] instead.

I'm using a pre-trained ResNet model and passing in images of size 224x224 pixels. Any ideas on why this error is occurring and how I can resolve it? Your insights would be much appreciated. @Middleware & OS
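For context, here is a minimal sketch of inference code that reproduces this kind of shape mismatch. The original code wasn't shared, so the torchvision resnet18 and the random tensor below are illustrative stand-ins, and the exact wording of the error can vary with the PyTorch version:

```python
import torch
from torchvision import models

# Illustrative stand-in for the trained model from the post.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# A single preprocessed RGB image: [channels, height, width], with no batch dimension.
image = torch.rand(3, 224, 224)

with torch.no_grad():
    output = model(image)  # fails: the model expects a batched [N, 3, 224, 224] input
```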
4 Replies
Solution
Marvee Amasi (4mo ago)
Hi @wafa_ath, here's what's happening: your image is probably coming in as a single tensor of shape [3, 224, 224] (channels, height, width), but the model expects an extra dimension at the front for the batch size. Add a batch dimension of size 1 (or reshape the image) so the input becomes [1, 3, 224, 224].
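In code, the fix is simply to add a leading batch dimension before the forward pass. A minimal sketch, assuming a torchvision ResNet and a random tensor standing in for the preprocessed image (the names here are illustrative, not from the original post):

```python
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Stand-in for a single preprocessed image tensor of shape [3, 224, 224].
image = torch.rand(3, 224, 224)

# Add a batch dimension of size 1: [3, 224, 224] -> [1, 3, 224, 224].
batch = image.unsqueeze(0)  # image[None, ...] or image.reshape(1, 3, 224, 224) also work

with torch.no_grad():
    output = model(batch)              # 4-D input, as the conv layers expect
    predicted_class = output.argmax(dim=1)
```

unsqueeze(0) is the usual choice when predicting one image at a time; if you iterate over many images, wrapping them in a DataLoader with a batch_size adds that dimension for you.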
wafa_ath (4mo ago)
Oh, now I can see it
Marvee Amasi (4mo ago)
Fixed? Does it work now?
wafa_ath (4mo ago)
Thanks!