Voice Assistant with an ever-growing list of available commands!


kristenprescott/VirtualAssistant


Virtual Assistant


Virtual Assistant is a React web app that uses the react-speech-recognition and react-speech-kit hooks to access the Web Speech API for speech recognition and voice synthesis. It combines these libraries to enable voice command recognition: users can speak commands to add or remove items on a to-do list, get current weather data, and set a timer.


Technologies Used:

  • React
  • Web Speech API

Dependencies:

Frontend dependencies:
  • react-speech-recognition
  • react-speech-kit
Backend dependencies:
  • express
  • mongoose

APIs:
  • weather API (see Unsolved Problems)

Getting Started with Create React App

This project was bootstrapped with Create React App.

Available Scripts

In the project directory, you can run:

yarn start

Runs the app in the development mode.
Open http://localhost:3000 to view it in the browser.

The page will reload if you make edits.
You will also see any lint errors in the console.

yarn test

Launches the test runner in the interactive watch mode.
See the section about running tests for more information.

yarn build

Builds the app for production to the build folder.
It correctly bundles React in production mode and optimizes the build for the best performance.

The build is minified and the filenames include the hashes.
Your app is ready to be deployed!

See the section about deployment for more information.

yarn eject

Note: this is a one-way operation. Once you eject, you can’t go back!

If you aren’t satisfied with the build tool and configuration choices, you can eject at any time. This command will remove the single build dependency from your project.

Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except eject will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.

You don’t have to ever use eject. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.

Learn More

You can learn more in the Create React App documentation.

To learn React, check out the React documentation.

Code Splitting

This section has moved here: https://facebook.github.io/create-react-app/docs/code-splitting

Analyzing the Bundle Size

This section has moved here: https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size

Making a Progressive Web App

This section has moved here: https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app

Advanced Configuration

This section has moved here: https://facebook.github.io/create-react-app/docs/advanced-configuration

Deployment

This section has moved here: https://facebook.github.io/create-react-app/docs/deployment

yarn build fails to minify

This section has moved here: https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify

Planning Stages:


My original plans for the app were somewhat vague - I wasn't quite sure yet what the API could do (or what I could do with it). You can see from the wireframes (below) that I had something different in mind. That said, wireframing was still an important step in the process. Figuring out how to represent voice-controlled components was an interesting challenge, and one I hope to continue to expand on in the future.

Original wireframes

Getting Started:


Making new commands:

  • Commands are held in an array called "commands", located inside the VirtualAssistant component. Each command is an object with two properties:

    1. command: a string for Virtual Assistant to listen for
    2. callback: a callback function to execute in response; react-speech-recognition provides one built-in function, resetTranscript(), which clears the transcript (I bind it to the commands "clear" and "reset")
  • A common pattern I've used here for action commands is to:

    1. Set some state: const [state, setState] = useState(default)
    2. Write a doFunction() that calls setState(newState)
    3. Call doFunction() in the callback of the new command
  • To set a verbal response in a command:

    speak({ text: "these words will be spoken" });
  • To set a written response (displayed in the message display):

    setMessage("these words will be displayed in text");
Example command:
    {
      command: "thank you",
      callback: () => {
        setMessage("You're welcome.");
        speak({ text: "you're welcome" });
      },
    },
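The command pattern above can be sketched as plain JavaScript, with the React pieces stubbed out so the matching logic runs standalone. In the real app, setMessage comes from useState and speak from react-speech-kit, and react-speech-recognition does the phrase matching; the dispatch function here is only a hypothetical stand-in for that matching so the example is self-contained.

```javascript
// Stubs recording what the real hooks would do.
let message = "";
const spoken = [];
const setMessage = (text) => { message = text; };
const speak = ({ text }) => { spoken.push(text); };

// Commands array, mirroring the shape used in the VirtualAssistant
// component: each entry pairs a trigger phrase with a callback.
const commands = [
  {
    command: "thank you",
    callback: () => {
      setMessage("You're welcome.");
      speak({ text: "you're welcome" });
    },
  },
  {
    command: "clear",
    callback: () => setMessage(""),
  },
];

// Simplified stand-in for react-speech-recognition's matching:
// run the callback of every command whose phrase appears in the transcript.
function dispatch(transcript) {
  for (const { command, callback } of commands) {
    if (transcript.toLowerCase().includes(command)) callback();
  }
}

dispatch("okay thank you assistant");
console.log(message); // "You're welcome."
```

Adding a new command is then just pushing another { command, callback } object onto the array.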
More:

For more information browse these docs:

Unsolved Problems:


  • Make Voice Assistant global to the app - currently, components are rendered conditionally right on the Voice Assistant page
  • When fetching from the weather API, an initial request must be made (once) before any subsequent requests succeed
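One possible workaround for the warm-up issue above (hypothetical - the README doesn't identify the root cause) is a small retry wrapper that re-issues a request if the first attempt fails. The fetcher is injected as a parameter so the wrapper can be exercised without a network:

```javascript
// Hypothetical retry wrapper: retries a failed request up to `retries`
// times, which papers over APIs that reject the very first call.
async function fetchWithRetry(fetcher, url, retries = 1) {
  try {
    return await fetcher(url);
  } catch (err) {
    if (retries <= 0) throw err;
    return fetchWithRetry(fetcher, url, retries - 1);
  }
}

// Example: a fake fetcher that fails on its first call only,
// imitating the cold-start behavior described above.
let calls = 0;
const flakyFetcher = async (url) => {
  calls += 1;
  if (calls === 1) throw new Error("cold start");
  return { url, temp: 72 };
};

fetchWithRetry(flakyFetcher, "/weather").then((data) => {
  console.log(data.temp); // 72, after one retry
});
```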

Future Enhancements:


  • Login/registration for user-specific todo lists
  • User restricted data access
  • User variable memory (for example, when you say "My name is :name", it is stored to memory)
  • Delete/update todo list tasks by voice command
  • Voice memo recorder
  • Further utilization of command options for better command recognition
