This repository contains my project on Sign Language Detection using two different approaches:
- CNN-Based Approach
- Mediapipe + Random Forest Approach
Sign language detection plays an important role in making communication more inclusive. This project explores two distinct methods for recognizing hand gestures and classifying alphabet letters.
In this approach, I trained a Convolutional Neural Network (CNN) on preprocessed sign language images. Preprocessing included:
- Gaussian Blur Filter: to minimize background influence.
- Data Augmentation: zooming, flipping, and rotation to expand the training set.
- Histogram Equalization: to enhance image contrast.
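The preprocessing steps above can be sketched in plain NumPy. This is a minimal illustration, not the project's actual pipeline (which presumably uses library utilities such as OpenCV or Keras image generators):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return kernel / kernel.sum()

def gaussian_blur(img, size=5, sigma=1.0):
    """Blur a grayscale image via direct 2-D convolution (edge-padded)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge").astype(float)
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out

def equalize_hist(img):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # rescale to [0, 1]
    return (cdf[img] * 255).astype(np.uint8)

def augment(img):
    """Two simple augmentations: horizontal flip and 90-degree rotation."""
    return [np.fliplr(img), np.rot90(img)]

# Demo on a random 32x32 grayscale "image"
img = np.random.default_rng(0).integers(0, 256, size=(32, 32), dtype=np.uint8)
blurred = gaussian_blur(img)
equalized = equalize_hist(img)
augmented = augment(img)
```

Equalization stretches the cumulative intensity distribution over the full 0-255 range, which is why low-contrast sign images benefit from it before training.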
*(Sample CNN predictions: Prediction 1, Prediction 2, Prediction 3; images omitted.)*
This approach uses Mediapipe to extract hand landmarks and a Random Forest classifier for gesture recognition. The pipeline consists of:
- Landmark Extraction: using Mediapipe to extract hand and finger positions.
- Feature Storage: saving the extracted landmarks as feature vectors.
- Random Forest Classifier: training the model on the stored features.
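The landmark-to-classifier pipeline can be sketched as follows. Since real extraction requires Mediapipe and camera frames, the Mediapipe step is replaced here with synthetic landmark data; the feature layout (21 landmarks, wrist-relative, flattened) and the scikit-learn classifier are assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_LANDMARKS = 21  # MediaPipe Hands reports 21 landmarks per detected hand

def landmarks_to_features(landmarks):
    """Flatten 21 (x, y) landmark coordinates into a 42-dim feature vector,
    translated so the wrist (landmark 0) sits at the origin. This makes the
    features invariant to where the hand appears in the frame."""
    pts = np.asarray(landmarks, dtype=float)  # shape (21, 2)
    return (pts - pts[0]).ravel()

# Synthetic stand-in for the Mediapipe extraction step: a fixed "hand shape"
# scaled differently for two fake gesture classes (spread vs. clenched).
rng = np.random.default_rng(0)
base = np.stack([np.linspace(0, 1, N_LANDMARKS),
                 np.linspace(0, 1, N_LANDMARKS) ** 2], axis=1)
X, y = [], []
for label, scale in [(0, 1.0), (1, 0.3)]:
    for _ in range(50):
        landmarks = base * scale + rng.normal(0, 0.02, size=(N_LANDMARKS, 2))
        X.append(landmarks_to_features(landmarks))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
accuracy = clf.score(X, y)  # training accuracy on the synthetic set
```

Making features wrist-relative is one common normalization choice; it discards absolute hand position so the forest learns hand shape rather than location in the frame.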
*(Sample Mediapipe + Random Forest predictions: Prediction 1, Prediction 2, Prediction 3; images omitted.)*