import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras import regularizers
from sklearn.metrics import confusion_matrix, classification_report

train_datagen = ImageDataGenerator(rescale=1./255)  # Rescale pixel values to [0, 1]
test_datagen = ImageDataGenerator(rescale=1./255)   # Rescale pixel values to [0, 1]

train_generator = train_datagen.flow_from_directory(
    train_data_path,
    target_size=(150, 150),
    batch_size=32,
    class_mode='categorical'
)
test_generator = test_datagen.flow_from_directory(
    test_data_path,
    target_size=(150, 150),
    batch_size=32,
    class_mode='categorical'
)
# Set the seed for NumPy
np.random.seed(42)
# Set the seed for TensorFlow
tf.random.set_seed(42)
# Define the model architecture
L1 = 32
L2 = 64
L3 = 128
L4 = 128
L5 = 512
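# Build a sequential CNN: four Conv2D + MaxPooling2D blocks, then a dense head with dropout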
model1 = Sequential()
model1.add(Conv2D(L1, (3, 3), activation='relu', input_shape=(150, 150, 3),
                  kernel_initializer='he_normal', kernel_regularizer=regularizers.l2(0.01)))
model1.add(MaxPooling2D(2, 2))
model1.add(Conv2D(L2, (3, 3), activation='relu', padding='same'))
model1.add(MaxPooling2D(2, 2))
model1.add(Conv2D(L3, (3, 3), activation='relu', padding='same'))
model1.add(MaxPooling2D(2, 2))
model1.add(Conv2D(L4, (3, 3), activation='relu', padding='same'))
model1.add(MaxPooling2D(2, 2))
model1.add(Flatten())
model1.add(Dense(L5, activation='relu'))
model1.add(Dropout(0.3))
model1.add(Dense(2, activation='sigmoid'))
# Compile the model with binary cross-entropy loss and accuracy as the metric
model1.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Train the model for 10 epochs
history1 = model1.fit(
    train_generator,
    epochs=10,
    steps_per_epoch=len(train_generator),
    validation_data=test_generator,
    validation_steps=len(test_generator)
)
Hello,
I'm doing image classification for gender. In total, I have 3,500 male images and 3,500 female images in the training dataset, and 700 male images and 700 female images in the testing dataset. When I run the code above, this is the model performance that I got:
It seems like the model is performing quite well, but when I run the classification report below:
# Make predictions on the test dataset
y_pred = model1.predict(test_generator)
y_pred_classes = np.argmax(y_pred, axis=1) # Convert probabilities to class labels
# Get true labels
y_true = test_generator.classes
# Define class labels
class_labels = {0: 'female', 1: 'male'}
# Compute confusion matrix
conf_matrix = confusion_matrix(y_true, y_pred_classes)
# Compute classification report
class_report = classification_report(y_true, y_pred_classes)
print("\nClassification Report:")
print(class_report)
This is the result that I got from the report:
The accuracy from the classification report is only 0.5. May I know why the accuracy in the classification report looks as if the model is predicting the test dataset at random rather than using the trained model? Where did I go wrong? I would appreciate any help or advice, thank you very much.
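If it helps, here is a small, purely illustrative sanity check I can run right after the prediction code above (it only reuses the test_generator, y_true, and y_pred_classes already defined there) to show exactly what the classification report is comparing:
# Illustrative sanity check, reusing the objects defined above
print(test_generator.class_indices)           # folder-to-index mapping, e.g. {'female': 0, 'male': 1}
print(np.bincount(test_generator.classes))    # number of test images per class
print(y_true[:10])                            # first few true labels
print(y_pred_classes[:10])                    # first few predicted labels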