The proposed model produces textual output upon detecting Indian Sign Language (ISL) gestures, providing a medium that makes communication with deaf and hard-of-hearing people easier. The model can therefore serve as a practical aid for the people affected.
The model was built using Python and Dart, with the open-source framework Flutter used to develop the accompanying application. It was trained on a dataset we built and curated ourselves, which improved its accuracy.
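The report does not specify the model's architecture or label set, but the final step of any such pipeline is mapping the model's per-class scores to a sign label. A minimal sketch of that step, using a hypothetical subset of ISL labels:

```python
# Minimal sketch of the classification step: mapping per-class scores
# to an ISL sign label. The label list and scores are illustrative
# assumptions; the actual model and label set are not specified here.

ISL_LABELS = ["hello", "thank_you", "yes", "no", "help"]  # hypothetical subset

def predict_label(scores):
    """Return the ISL label with the highest model score."""
    if len(scores) != len(ISL_LABELS):
        raise ValueError("expected one score per label")
    best_index = max(range(len(scores)), key=lambda i: scores[i])
    return ISL_LABELS[best_index]

# Example: scores as a softmax layer might produce them.
print(predict_label([0.05, 0.80, 0.05, 0.05, 0.05]))  # → thank_you
```

In the deployed application this mapping would run on the output of the trained model after each captured frame, with the resulting label shown as text in the Flutter UI.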
Building the dataset demanded considerable patience and perseverance, and integrating the trained model into the Flutter application to perform sign detection was an equally demanding task.
The model achieved a reasonable level of accuracy and successfully distinguished between several signs. We were also able to form full sentences from sequences of detected signs, which exceeded our expectations.
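Forming sentences from a live detection stream typically requires collapsing repeated per-frame predictions into distinct words. The report does not describe the exact mechanism used, but one common approach is a debounce: accept a sign only after it has been detected for several consecutive frames. A hedged sketch, with the threshold and labels as illustrative assumptions:

```python
# Sketch of sentence assembly from a per-frame prediction stream.
# The debounce threshold (min_run) and the labels are assumptions.

def assemble_sentence(frame_labels, min_run=3):
    """Collapse a per-frame label stream into a space-joined word sequence.

    A sign is accepted only once it persists for at least `min_run`
    consecutive frames, filtering out transient misdetections.
    """
    words = []
    run_label, run_len = None, 0
    for label in frame_labels:
        if label == run_label:
            run_len += 1
        else:
            run_label, run_len = label, 1
        if run_len == min_run and (not words or words[-1] != label):
            words.append(label)
    return " ".join(words)

# A one-frame "no" blip between stable signs is filtered out.
frames = ["hello"] * 5 + ["no"] + ["thank_you"] * 4
print(assemble_sentence(frames))  # → hello thank_you
```

Raising `min_run` trades responsiveness for robustness: a higher threshold rejects more flicker from the detector but makes the app feel slower to the signer.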
The hackathon served as a test of our abilities and taught us how to complete a set of tasks under a tight time constraint. It also gave us a deeper awareness of the challenges faced by the deaf and hard-of-hearing community. Ideating solutions to these situations sharpened our problem-solving skills and gave us experience with real-world implementation. Working with the development of society as our goal fostered empathy and a sense of responsibility to do better.
The journey of Vocalize does not end here; this is just the beginning. The proposed application has the potential to enable a far easier form of communication for people with hearing loss of all degrees. By expanding the number of detectable signs and improving the model's accuracy and its integration with the Flutter application, the presented prototype could finally provide a voice to everyone.