
Supervised Machine Learning for Hand Gesture Analysis Enabling Bi-Directional Communication - SHABD

This project tries to bridge the gap between a mute and/or deaf person and a person who does not know sign language, using supervised machine learning. At this point, the system can translate individual sign-language gestures into text; work is in the pipeline for constructing whole words and sentences. This is NOT the original project but a simulation of it: a simple and easy simulation using "Tensorflow for Poets" and an Inception V3 checkpoint for a demo execution of SHABD.

General Overview (a rough code sketch of these steps is given after the list):

  1. Capture the live feed frame by frame
  2. Detect the boundary condition between two hand gestures in the live feed, i.e. their spatial location
  3. Feed each hand gesture to the trained model
  4. Using word vectors, project the gestures into a multidimensional "word space"
  5. Use Google TalkBack or another text-to-speech engine to read the result aloud
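
The following is a minimal sketch of how such a demo loop could glue those steps together. It assumes a `retrained_graph.pb` and `retrained_labels.txt` produced by the "TensorFlow for Poets" retraining script with the Inception V3 architecture, TensorFlow 1.x, OpenCV for the webcam feed, and pyttsx3 as a simple stand-in for Google TalkBack. The tensor names, file names, and the frame-differencing boundary check are illustrative assumptions, not the project's actual implementation.

```python
# Minimal sketch of the demo loop (TensorFlow 1.x style, as used by
# "TensorFlow for Poets"). File names, tensor names, and the frame-differencing
# "gesture boundary" check are assumptions for illustration only.
import cv2
import numpy as np
import pyttsx3
import tensorflow as tf

GRAPH_PATH = "retrained_graph.pb"     # assumed output of the retraining script
LABELS_PATH = "retrained_labels.txt"  # assumed output of the retraining script

# Load the retrained Inception V3 graph produced by TensorFlow for Poets.
with tf.gfile.GFile(GRAPH_PATH, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

labels = [line.strip() for line in open(LABELS_PATH)]
engine = pyttsx3.init()  # simple text-to-speech stand-in for Google TalkBack


def classify(frame, sess):
    """Step 3: run one frame through the trained model and return the top label."""
    # The classic Inception V3 retrained graph takes JPEG bytes on
    # DecodeJpeg/contents:0 and outputs class scores on final_result:0.
    _, jpeg = cv2.imencode(".jpg", frame)
    scores = sess.run(graph.get_tensor_by_name("final_result:0"),
                      {"DecodeJpeg/contents:0": jpeg.tobytes()})
    return labels[int(np.argmax(scores))]


cap = cv2.VideoCapture(0)  # step 1: take the live feed frame by frame
prev_gray, last_label = None, None
with tf.Session(graph=graph) as sess:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Step 2 (rough stand-in): treat a nearly-still frame as the boundary
        # between two gestures, detected by simple frame differencing.
        still = prev_gray is not None and cv2.absdiff(gray, prev_gray).mean() < 2.0
        prev_gray = gray
        if still:
            label = classify(frame, sess)
            if label != last_label:
                last_label = label
                engine.say(label)        # step 5: read the recognised sign aloud
                engine.runAndWait()
        cv2.imshow("SHABD demo", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```

Step 4 (projecting recognised signs into a word-vector "word space" to build whole words and sentences) is the part still in the pipeline, so it is not shown in this sketch.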

Or you can run this simulation to experience what happens here. :-)
