Normalizing Input Data for CNN Model in Image Recognition System on ESP32

Still based on my project: an image recognition system that can analyze images of tissue samples, identify malignancies, and predict possible symptoms and causes. How do I train a CNN to accurately identify malignant tissues? My aim is to train a convolutional neural network (CNN) model for image recognition, but I keep encountering this error:
ValueError: Input data not properly normalized

Here's my code:
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(1, activation='sigmoid')
])

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

history = model.fit(train_images, train_labels, epochs=10, validation_data=(test_images, test_labels))

Enthernet Code (3mo ago)
It looks like you're encountering a ValueError: Input data not properly normalized while working on your CNN for image recognition. This error means your input data needs to be preprocessed before it is fed into the model.

- CNNs train better when the input data is normalized. For images with pixel values ranging from 0 to 255, scaling them to [0, 1] is standard practice. Normalize your images like this:
train_images = train_images / 255.0
test_images = test_images / 255.0

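If you'd rather keep the preprocessing inside the model itself, a Rescaling layer does the same scaling as the first step of the network (a minimal sketch, assuming a TensorFlow version recent enough to provide layers.Rescaling):

model = models.Sequential([
    layers.Rescaling(1./255, input_shape=(224, 224, 1)),  # scales raw 0-255 pixels to [0, 1] inside the model
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    # ... remaining layers as in the rest of the model
])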

- Check that the dimensions of your images match the expected input shape of your model. If your first convolutional layer expects grayscale images of shape (224, 224, 1), but your images are RGB, adjust the input_shape accordingly:
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(1, activation='sigmoid')
])

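Alternatively, if your images are loaded as RGB but you want to keep the single-channel model, you could convert them to grayscale first (a sketch, assuming train_images and test_images have shape (N, 224, 224, 3)):

import tensorflow as tf

# Convert 3-channel RGB images to single-channel grayscale; shape becomes (N, 224, 224, 1)
train_images = tf.image.rgb_to_grayscale(train_images)
test_images = tf.image.rgb_to_grayscale(test_images)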

- Ensure your data (train_images and test_images) is in a compatible format for TensorFlow, such as numpy arrays or TensorFlow tensors. Incompatible data types can cause issues.
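A quick way to check this is to cast everything to float32 numpy arrays and print the shapes and dtypes before calling fit (a minimal sketch; the expected shapes are assumptions based on the model above):

import numpy as np

# Make sure the data is a float32 numpy array rather than a Python list or an object array
train_images = np.asarray(train_images, dtype="float32")
test_images = np.asarray(test_images, dtype="float32")
train_labels = np.asarray(train_labels)
test_labels = np.asarray(test_labels)

print(train_images.shape, train_images.dtype)  # expect something like (N, 224, 224, 1), float32
print(train_labels.shape, train_labels.dtype)  # expect (N,) with 0/1 labels for binary_crossentropy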
Solution
Enthernet Code (3mo ago)
Your code should look like this:
import tensorflow as tf
from tensorflow.keras import layers, models

# Normalize the images
train_images = train_images / 255.0
test_images = test_images / 255.0

# Define the model
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 1)),  # Adjust input_shape if using RGB
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(1, activation='sigmoid')
])

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Train the model
history = model.fit(train_images, train_labels, epochs=10, validation_data=(test_images, test_labels))
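Once training finishes, it's worth sanity-checking the model on the test set (a small usage sketch using the variables from the code above):

# Evaluate on the held-out test set
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f"Test accuracy: {test_acc:.3f}")

# Predict on a single image (slicing with [:1] keeps the batch dimension)
probs = model.predict(test_images[:1])
print("Malignancy probability:", float(probs[0][0]))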
Boss lady (3mo ago)
Thanks, it did work out.