A deep learning-based Sign Language Detection project that classifies American Sign Language (ASL) hand gestures.
- 🏗 CNN Model trained on ASL dataset
- 📷 Real-time gesture recognition
- 🖼 Image-based classification
- 📊 Data preprocessing and augmentation
- 🌍 Deployed using Streamlit
```bash
# Clone the repository
git clone https://github.com/AryanDhanuka10/Sign_Language_Detection.git
cd Sign_Language_Detection

# Create and activate a virtual environment
pip install virtualenv
virtualenv venv
source venv/bin/activate  # On Windows, use: venv\Scripts\activate

# Install dependencies and launch the app
pip install -r requirements.txt
streamlit run app.py
```

The dataset used is an American Sign Language (ASL) alphabet dataset, excluding the letters 'J' and 'Z' because they require motion.
📌 Dataset Source: Available on Kaggle
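As listed in the features above, the images are preprocessed and augmented before training. Below is a minimal sketch of that step using Keras' ImageDataGenerator; the dataset path, image size, and the exact transforms are assumptions, not confirmed by this repository.

```python
# Minimal preprocessing/augmentation sketch (assumed parameters).
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # normalize pixel values to [0, 1]
    rotation_range=10,        # small random rotations
    width_shift_range=0.1,    # random horizontal shifts
    height_shift_range=0.1,   # random vertical shifts
    zoom_range=0.1,           # random zoom in/out
    validation_split=0.2,     # hold out 20% of the data for validation
)

train_gen = datagen.flow_from_directory(
    "data/asl_alphabet",      # hypothetical dataset directory
    target_size=(28, 28),     # assumed input size
    color_mode="grayscale",
    class_mode="categorical",
    subset="training",
)
```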
The project is built around a Convolutional Neural Network (CNN) that classifies ASL hand gestures. The architecture stacks the following layer types (a minimal sketch follows the list):
- Conv2D: Extracts spatial features from images.
- BatchNormalization: Normalizes activations to improve training.
- MaxPooling2D: Reduces feature dimensions while preserving important information.
- Dropout: Prevents overfitting by randomly deactivating neurons.
- Flatten: Converts multidimensional tensors into vectors.
- Dense: Fully connected layers for classification.
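Putting those layers together, a minimal Keras sketch of such a CNN might look like the following. The filter counts, the 28x28 grayscale input shape, and the 24 output classes (the ASL alphabet minus the motion-based 'J' and 'Z') are assumptions, not the repository's exact architecture.

```python
# Minimal CNN sketch using the layer types described above (assumed sizes).
from tensorflow.keras import layers, models

def build_model(input_shape=(28, 28, 1), num_classes=24):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),    # extract spatial features
        layers.BatchNormalization(),                     # normalize activations
        layers.MaxPooling2D((2, 2)),                     # downsample feature maps
        layers.Dropout(0.25),                            # deactivate neurons at random
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),
        layers.Flatten(),                                # tensor -> vector
        layers.Dense(128, activation="relu"),            # fully connected
        layers.Dense(num_classes, activation="softmax"), # class probabilities
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```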
The project is deployed on Streamlit. You can access the live demo here:
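Independent of the hosted demo, here is a minimal sketch of how a Streamlit app like app.py could load the trained model and classify an uploaded image. The model filename, input size, preprocessing, and label order are assumptions, not the repository's actual code.

```python
# Minimal Streamlit inference sketch (assumed model file and input format).
import numpy as np
import streamlit as st
from PIL import Image
from tensorflow.keras.models import load_model

LABELS = list("ABCDEFGHIKLMNOPQRSTUVWXY")  # 24 letters: ASL alphabet minus J and Z

st.title("Sign Language Detection")
uploaded = st.file_uploader("Upload a hand-gesture image", type=["png", "jpg", "jpeg"])

if uploaded is not None:
    # Convert to grayscale and resize to the (assumed) training input size.
    img = Image.open(uploaded).convert("L").resize((28, 28))
    st.image(img, caption="Input", width=150)

    x = np.asarray(img, dtype="float32")[None, ..., None] / 255.0  # shape (1, 28, 28, 1)
    model = load_model("model.h5")  # hypothetical saved-model filename
    probs = model.predict(x)[0]
    st.write(f"Prediction: {LABELS[int(np.argmax(probs))]} ({probs.max():.2%} confidence)")
```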
- 🎥 Add real-time video detection
- 🤖 Enhance model accuracy with more training data
- 📊 Implement Transfer Learning using pre-trained models
- 📱 Deploy as a mobile app
Contributions are welcome! If you find a bug or want to improve the model, feel free to submit a Pull Request.
This project is open-source and available under the MIT License.

