VirtuEye

VirtuEye is an AI-powered accessibility solution that provides an immersive experience for visually impaired individuals. Leveraging computer vision, machine learning, and natural language processing, it translates visual information into descriptive audio, helping users understand and navigate their surroundings. The system also integrates Google Maps for real-time navigation and uses haptic feedback to enhance spatial awareness.

This project was recognized in the Google Solution Challenge 2024 and ranked among the Global Top 100.

Table of Contents

  • Project Overview
  • Features
  • Technologies Used
  • Usage
  • System Architecture
  • Future Enhancements
  • Contributors

Project Overview

VirtuEye is designed to aid visually impaired individuals by converting visual data from their environment into meaningful, descriptive audio. The system uses image recognition to identify objects, text detection (OCR) to read signage or documents, and Google Maps for real-time navigation. Haptic feedback provides spatial cues during navigation, deepening the user's tactile understanding of the surroundings.

The project implements AI-based image recognition, NLP-driven descriptions, and audio generation, ensuring smooth and intuitive interaction for users.
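As a rough illustration of how detector output can become an NLP-style spoken description, here is a minimal sketch. The function name, label format, and phrasing are assumptions for illustration, not the project's actual implementation; a real pipeline would feed an object-detection model's class labels into this step and pass the result to a text-to-speech engine.

```python
from collections import Counter

def describe_scene(labels):
    """Turn raw detector class labels into a short spoken description.

    `labels` is assumed to be a list of class names returned by an
    object detector for one frame, e.g. ["person", "car", "car"].
    """
    if not labels:
        return "No objects detected."
    # Count duplicates so "car, car" becomes "2 cars".
    counts = Counter(labels)
    parts = []
    for name, n in counts.most_common():
        parts.append(f"a {name}" if n == 1 else f"{n} {name}s")
    # Join with commas and a final "and" for natural phrasing.
    if len(parts) == 1:
        listing = parts[0]
    else:
        listing = ", ".join(parts[:-1]) + " and " + parts[-1]
    return f"I can see {listing} ahead."
```

For example, `describe_scene(["person", "car", "car"])` yields "I can see 2 cars and a person ahead."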

Features

  • Real-time Object Detection: Recognizes objects and provides detailed audio descriptions to help users understand their environment.
  • Text Recognition (OCR): Extracts and reads text from images, documents, or signs.
  • Scene Description: Offers an overview of the entire scene, converting visual context into auditory information.
  • Navigation with Google Maps: Provides turn-by-turn navigation using Google Maps, with audio guidance and haptic feedback to signal direction changes.
  • Haptic Feedback: Delivers tactile feedback to assist with spatial awareness during navigation.
  • User-Friendly Interface: Simple and intuitive controls tailored for visually impaired users.
  • Portable and Efficient: Runs efficiently on mobile devices, making it accessible on the go.
  • Voice Commands: Supports voice commands for hands-free interaction.

Technologies Used

Repository language breakdown:

  • C++: 32.3%
  • CMake: 26.3%
  • Dart: 21.6%
  • Python: 11.5%
  • HTML: 2.5%
  • Swift: 2.3%

Usage

  1. Launch the VirtuEye application.
  2. Use the camera to capture images or navigate using Google Maps.
  3. The system will process the image, identify objects or text, and generate a descriptive audio output.
  4. During navigation, follow the turn-by-turn instructions and feel haptic feedback signals for guidance.
  5. Use voice commands to control the app hands-free.
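The hands-free control in step 5 can be sketched as a simple keyword router that maps a transcribed voice command to an app mode. The keywords and mode names below are illustrative assumptions; the app's real vocabulary and speech-recognition backend may differ.

```python
def route_command(command):
    """Map a transcribed voice command to an app mode.

    Keyword and mode names here are hypothetical examples;
    the actual app vocabulary may differ.
    """
    routes = {
        "describe": "scene_description",
        "read": "text_recognition",
        "navigate": "navigation",
        "stop": "idle",
    }
    # Match on the first recognized keyword in the utterance.
    for word in command.lower().split():
        if word in routes:
            return routes[word]
    return "unknown"
```

For instance, "Please read this sign" routes to text recognition, while an unrecognized phrase falls through to "unknown" so the app can prompt the user again.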

System Architecture

VirtuEye follows a modular architecture:

  • Image Input: Captures images via a mobile or web camera.
  • Object Detection: Uses pre-trained models to detect and classify objects.
  • Text Recognition: Applies OCR to extract text from images.
  • NLP Processing: Converts visual data into a meaningful description.
  • Navigation Module: Uses the Google Maps API for real-time navigation.
  • Haptic Feedback: Provides tactile signals based on the user’s current location and direction.
  • Audio Output: Generates real-time audio feedback for both object detection and navigation.
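To make the haptic feedback stage concrete, here is a minimal sketch of how a turn cue might be encoded as a vibration pattern (alternating on/off durations in milliseconds, as consumed by typical mobile vibration APIs). The specific pattern shapes are assumptions for illustration, not the project's actual encoding.

```python
def haptic_pattern(direction):
    """Return a vibration pattern (on/off durations in ms) for a turn cue.

    Pattern shapes are illustrative: one long pulse for left, two short
    pulses for right, three quick pulses on arrival.
    """
    patterns = {
        "left": [400],
        "right": [150, 100, 150],
        "arrived": [80, 60, 80, 60, 80],
    }
    if direction not in patterns:
        raise ValueError(f"unknown direction: {direction}")
    return patterns[direction]
```

Distinct pulse shapes let the user distinguish left from right by touch alone, without waiting for the audio instruction.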

Future Enhancements

  • Improved Navigation: Enhance navigation with dynamic obstacle detection and avoidance.
  • Multi-language Support: Expand language capabilities to cater to a broader user base.
  • Wearable Device Integration: Support for AR/VR glasses or smartwatches to offer more immersive interaction.
  • Offline Functionality: Add an offline mode for areas with limited connectivity.

Contributors

Demo Images

(Demo slides and screenshots captured 2024-01-20; the image files are available in the repository.)
