Virtual Assistant is a React web app that uses the react-speech-recognition and react-speech-kit hooks to access the Web Speech API for speech recognition and speech synthesis. Together, these libraries enable voice command recognition: users can speak commands to add or remove items on a to-do list, get current weather data, and set a timer.
- express
- mongoose
- OpenWeatherMap API
- Web Speech API (via linked libraries)
This project was bootstrapped with Create React App.
In the project directory, you can run:
Runs the app in the development mode.
Open http://localhost:3000 to view it in the browser.
The page will reload if you make edits.
You will also see any lint errors in the console.
Launches the test runner in the interactive watch mode.
See the section about running tests for more information.
Builds the app for production to the `build` folder.
It correctly bundles React in production mode and optimizes the build for the best performance.
The build is minified and the filenames include the hashes.
Your app is ready to be deployed!
See the section about deployment for more information.
Note: this is a one-way operation. Once you `eject`, you can't go back!

If you aren't satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project.

Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you're on your own.

You don't have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn't feel obligated to use this feature. However we understand that this tool wouldn't be useful if you couldn't customize it when you are ready for it.
You can learn more in the Create React App documentation.
To learn React, check out the React documentation.
This section has moved here: https://facebook.github.io/create-react-app/docs/code-splitting
This section has moved here: https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size
This section has moved here: https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app
This section has moved here: https://facebook.github.io/create-react-app/docs/advanced-configuration
This section has moved here: https://facebook.github.io/create-react-app/docs/deployment
This section has moved here: https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify
My original plans for the app were somewhat vague - I wasn't quite sure yet what the API could do (or what I could do with it). You can see from the wireframes (below) I had something different in mind. That being said, it was actually a really important step in the process. Figuring out how to represent voice controlled components was an interesting challenge, and one I hope to continue to expand upon in the future.
- Commands are held in an array called `commands`, located inside the VirtualAssistant component. Each command is an object with two properties:
- command: a string for Virtual Assistant to listen for
- callback: a callback function to execute in response; the react-speech-recognition hook provides one built-in function, resetTranscript(), which resets the transcript. (I use it for the commands "clear" and "reset".)
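As a minimal sketch of that shape (the "clear" phrase is one of the commands mentioned above; `resetTranscript` is the built-in provided by react-speech-recognition):

```javascript
// Sketch of the commands array consumed by the useSpeechRecognition hook.
const commands = [
  {
    // the phrase Virtual Assistant listens for
    command: "clear",
    // the built-in resetTranscript clears the current transcript
    callback: ({ resetTranscript }) => resetTranscript(),
  },
];
```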
- A common pattern I've used here for action commands is to:
  - Set some state: `const [state, setState] = useState(default)`
  - Write a `doFunction()` that calls `setState(newState)`
  - Call `doFunction()` in the callback function of the new command
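The steps above can be sketched in plain JavaScript (names are hypothetical; `setMessage` stands in for a `useState` setter so the shape of the pattern is visible outside React):

```javascript
// Stand-in for React state: in the component this would be
// const [message, setMessage] = useState("");
let message = "";
const setMessage = (next) => { message = next; };

// Step 2: doFunction wraps the state update
const greet = () => setMessage("Hello!");

// Step 3: the command's callback calls doFunction
const commands = [{ command: "hello", callback: greet }];

// When the hook matches "hello", it invokes the callback,
// which updates the state
commands[0].callback();
```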
- To set a verbal response in a command: `speak({ text: "these words will be spoken" })`
- To set a written response (displayed in the message display): `setMessage("these words will be displayed in text");`
```javascript
{
  command: "thank you",
  callback: () => {
    setMessage("You're welcome.");
    speak({ text: "you're welcome" });
  },
},
```
For more information, browse these docs:
- react-speech-recognition Docs (extensive and informative, highly recommended)
- react-speech-kit Docs (has great usage examples)
- Web Speech API Docs
- Make Voice Assistant global to the app - currently, components are rendered conditionally right on the Voice Assistant page
- Fix the weather fetch flow - currently, an initial fetch must be made (once) before any further requests to the weather API succeed
- Login/registration for user-specific todo lists
- User restricted data access
- User variable memory (for example, saying "My name is :name" stores the name to memory)
- Delete/update todo list tasks by voice command
- Voice memo recorder
- Further utilization of command options for better command recognition
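For reference, the weather lookup mentioned above reduces to building a request URL for OpenWeatherMap's current-weather endpoint. A hedged sketch (the helper name is hypothetical; the endpoint path and `q`/`appid` parameters follow OpenWeatherMap's current-weather API):

```javascript
// Hypothetical helper for OpenWeatherMap's current-weather endpoint;
// the city is URL-encoded so multi-word names work.
function buildWeatherUrl(city, apiKey) {
  const base = "https://api.openweathermap.org/data/2.5/weather";
  return `${base}?q=${encodeURIComponent(city)}&appid=${apiKey}`;
}

// In the app, this URL would be passed to fetch():
// fetch(buildWeatherUrl("New York", API_KEY)).then((res) => res.json())
```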