This repository hosts a TensorFlow-based custom image classification model. The model performs binary classification, labeling each image as either 'Happy' or 'Sad'.
- The images are sourced from a directory named 'data'.
- The dataset is split into training (70%), validation (20%), and testing (10%) sets.
- Images are resized to 256x256 pixels and normalized to have pixel values in the range [0, 1].
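The loading, scaling, and 70/20/10 split described above can be sketched as follows. In the real pipeline the dataset comes from `tf.keras.utils.image_dataset_from_directory('data')`; here a synthetic batched dataset stands in so the sketch is self-contained, and the batch count (10) is an illustrative assumption.

```python
import numpy as np
import tensorflow as tf

# Stand-in for tf.keras.utils.image_dataset_from_directory('data'):
# 40 random 256x256 RGB images with binary labels, batched into 10 batches.
images = np.random.randint(0, 256, size=(40, 256, 256, 3)).astype('float32')
labels = np.random.randint(0, 2, size=(40,))
data = tf.data.Dataset.from_tensor_slices((images, labels)).batch(4)

# Scale pixel values from [0, 255] into [0, 1].
data = data.map(lambda x, y: (x / 255.0, y))

# 70% train / 20% validation / 10% test, split by batch count.
n = int(data.cardinality())          # 10 batches here
train = data.take(int(n * 0.7))
val = data.skip(int(n * 0.7)).take(int(n * 0.2))
test = data.skip(int(n * 0.7) + int(n * 0.2))
```

Note that `take`/`skip` split by *batches*, so the proportions are approximate for datasets whose batch count is not a multiple of ten.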
The Sequential model consists of:
- Convolutional layers (Conv2D) with ReLU activation for feature extraction.
- MaxPooling layers for downsampling.
- A Flatten layer to convert 2D features into a 1D vector.
- Dense layers with ReLU and sigmoid activations for classification.
- The model is compiled using the Adam optimizer and binary crossentropy loss function.
- Accuracy is used as a metric.
- Training occurs over 20 epochs with validation data for performance monitoring.
- TensorBoard is used for tracking and visualizing metrics.
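A minimal sketch of a Sequential model matching the layers listed above. The specific filter counts and dense-layer width are assumptions for illustration, not values taken from the repository.

```python
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Conv/pool stacks for feature extraction, then dense layers for classification.
model = Sequential([
    Conv2D(16, (3, 3), activation='relu', input_shape=(256, 256, 3)),
    MaxPooling2D(),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D(),
    Conv2D(16, (3, 3), activation='relu'),
    MaxPooling2D(),
    Flatten(),                          # 2D feature maps -> 1D vector
    Dense(256, activation='relu'),
    Dense(1, activation='sigmoid'),     # single unit for binary output
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=['accuracy'])

# TensorBoard callback for metric logging; pass it to fit, e.g.
# model.fit(train, epochs=20, validation_data=val, callbacks=[tb])
tb = tf.keras.callbacks.TensorBoard(log_dir='logs')
```

The single sigmoid output pairs with binary crossentropy: values above 0.5 map to one class, below to the other.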
After training, the model's performance is evaluated using:
- Precision
- Recall
- Binary accuracy

These metrics are calculated on the test dataset.
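The evaluation loop accumulates these metrics with `tf.keras.metrics.Precision`, `Recall`, and `BinaryAccuracy`. In the real pipeline `update_state` is called per test batch with the model's predictions; the labels and scores below are synthetic stand-ins.

```python
import numpy as np
import tensorflow as tf

pre = tf.keras.metrics.Precision()
rec = tf.keras.metrics.Recall()
acc = tf.keras.metrics.BinaryAccuracy()

# Stand-ins for one test batch: true labels and sigmoid outputs.
# In practice: for x, y in test: update_state(y, model.predict(x))
y_true = np.array([1, 0, 1, 1, 0])
y_pred = np.array([0.9, 0.2, 0.4, 0.8, 0.1])

pre.update_state(y_true, y_pred)   # thresholded at 0.5 by default
rec.update_state(y_true, y_pred)
acc.update_state(y_true, y_pred)

print(pre.result().numpy(), rec.result().numpy(), acc.result().numpy())
```

With these stand-in values the predictions binarize to [1, 0, 0, 1, 0], giving a precision of 1.0, a recall of 2/3, and a binary accuracy of 0.8.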
- The model predicts the class of a new image (e.g., 'cat.jpg').
- The image is resized to 256x256 pixels, normalized, and fed into the model for prediction.
- The output classifies the image as either 'Happy' or 'Sad'.
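The prediction path can be sketched as below. A random array stands in for the image that `cv2.imread('cat.jpg')` would return, and the class-index mapping (alphabetical: 'Happy' = 0, 'Sad' = 1, as `image_dataset_from_directory` sorts labels) is an assumption.

```python
import numpy as np
import tensorflow as tf

# Stand-in for cv2.imread('cat.jpg'): an arbitrary-size RGB image.
img = np.random.randint(0, 256, size=(480, 640, 3)).astype('float32')

# Resize to the model's 256x256 input and scale to [0, 1].
resized = tf.image.resize(img, (256, 256))
batch = np.expand_dims(resized / 255.0, 0)   # shape (1, 256, 256, 3)

# With a trained model:
# yhat = model.predict(batch)
# label = 'Sad' if yhat[0][0] > 0.5 else 'Happy'
```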
- Loss and validation loss over epochs are plotted using Matplotlib.
- The original and resized images are displayed using Matplotlib.
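A sketch of the loss plot, assuming `hist = model.fit(...)` was captured; the dictionary below stands in for `hist.history` with made-up numbers.

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

# Stand-in for hist.history from model.fit(...).
history = {'loss': [0.7, 0.5, 0.35, 0.2],
           'val_loss': [0.72, 0.55, 0.4, 0.3]}

fig = plt.figure()
plt.plot(history['loss'], label='loss')
plt.plot(history['val_loss'], label='val_loss')
plt.xlabel('epoch')
plt.ylabel('binary crossentropy')
plt.legend()
fig.savefig('loss.png')
```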
To use the model:
- Prepare a dataset in a directory and load it with `tf.keras.utils.image_dataset_from_directory`.
- Split the dataset into training, validation, and testing sets.
- Define and compile the Sequential model.
- Train the model using the training data and validate it.
- Evaluate the model using precision, recall, and accuracy metrics.
- Predict the class of new images.
Dependencies:
- numpy
- tensorflow
- matplotlib
- os (for file handling)
- cv2 (OpenCV, for image loading and processing)
This code is designed for binary image classification and can be adapted for other similar tasks. The model's architecture, hyperparameters, and training duration can be modified to suit different datasets and requirements.