This project tries to bridge the gap between a mute and/or deaf person and people who do not know sign language, using supervised machine learning. At this point, the system can translate individual signs into single pieces of text; constructing whole words and sentences is work in the pipeline. This is NOT an original project but a simulation of it: a simple and easy simulation using "TensorFlow for Poets" and an Inception V3 checkpoint for a demo execution of SHABD.
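As a rough illustration of the "TensorFlow for Poets" demo path, the sketch below classifies a single gesture image with a retrained Inception V3 graph. It is written against the TF 1.x API the tutorial uses; `retrained_graph.pb` and `retrained_labels.txt` are the tutorial's default output names, not files guaranteed by this repo.

```python
# Minimal sketch: classify one gesture image with a TensorFlow for Poets
# retrained Inception V3 graph (TF 1.x API, as used by the tutorial).
# retrained_graph.pb / retrained_labels.txt are the tutorial's default outputs.
import tensorflow as tf

def load_labels(path="retrained_labels.txt"):
    with tf.gfile.GFile(path) as f:
        return [line.strip() for line in f]

def classify_gesture(jpeg_path, graph_path="retrained_graph.pb"):
    labels = load_labels()
    with tf.gfile.GFile(graph_path, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Session() as sess:
        tf.import_graph_def(graph_def, name="")
        image_data = tf.gfile.GFile(jpeg_path, "rb").read()
        # Default tensor names produced by the tutorial's retrain.py
        preds = sess.run("final_result:0",
                         {"DecodeJpeg/contents:0": image_data})[0]
    best = preds.argmax()
    return labels[best], float(preds[best])

if __name__ == "__main__":
    print(classify_gesture("gesture_frame.jpg"))
```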
General Overview:
- Capture the live feed frame by frame
- Detect the boundary between two hand gestures in the live feed, i.e. the hand's spatial location (see the first sketch after this list)
- Feed each segmented hand gesture to the trained model (as sketched above)
- Using word vectors, project the recognized gestures into a multidimensional "word space" (see the second sketch after this list)
- Use Google TalkBack or another voice-processing model to read the result aloud (see the last sketch after this list)
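The project does not document how the gesture boundary is found. One common stand-in, sketched below with OpenCV, is to locate the hand as the largest skin-coloured blob in each frame; the HSV skin range is an assumed, lighting-dependent tuning constant.

```python
# Minimal sketch of the capture + boundary-detection steps: grab live frames
# and locate the hand region by skin-colour thresholding. This is an assumed
# stand-in; the original project does not specify its method.
import cv2
import numpy as np

SKIN_LOW = np.array([0, 40, 60], dtype=np.uint8)      # assumed HSV lower bound
SKIN_HIGH = np.array([25, 255, 255], dtype=np.uint8)  # assumed HSV upper bound

def hand_regions(camera_index=0):
    """Yield (frame, bounding box) pairs for the largest skin-coloured blob."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, SKIN_LOW, SKIN_HIGH)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if contours:
                hand = max(contours, key=cv2.contourArea)
                yield frame, cv2.boundingRect(hand)  # x, y, w, h of the hand
    finally:
        cap.release()
```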
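The word-vector step is only loosely specified here. One plausible reading, sketched below with gensim and a pretrained word2vec file (the file path is an assumption, and the project does not name its embeddings), is to look up each recognized label's embedding and its nearest neighbours in the word space.

```python
# Minimal sketch of the word-vector step: place recognized gesture labels into
# a word-embedding space and query neighbours. The vector file is assumed,
# e.g. any word2vec-format file such as GoogleNews-vectors-negative300.bin.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("word2vec.bin", binary=True)

def project_gesture(label):
    """Return the label's embedding and its nearest words, if in vocabulary."""
    if label not in vectors:
        return None, []
    return vectors[label], vectors.most_similar(label, topn=5)

embedding, neighbors = project_gesture("hello")
print(neighbors)
```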
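Google TalkBack is an Android screen reader; for a desktop demo, an offline TTS library such as pyttsx3 can stand in for the read-aloud step, as in the sketch below.

```python
# Minimal sketch of the read-aloud step using pyttsx3 as a desktop stand-in
# for Google TalkBack (which is Android-only).
import pyttsx3

def speak(text):
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()  # blocks until the utterance finishes

speak("hello")
```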