simonnagel/VRED-voiceRecognition

Use Python voice recognition to control your VRED 2021

Virtual reality allows us to jump over the barrier into the digital world with ease. Access to complex digital data can be simplified by using VR, as no software know-how is needed at all. That is why VR and VR collaboration already play an important role in the current design review process, and not only there.

VRED offers a standard toolset for collaboration and for interacting with the scene: changing variants and viewpoints, measuring, teleporting and so on. With the new integration of Python 3, almost everything can be customized to your needs. Python 3 also gives you easy access to a wide range of libraries, which makes creating interactive experiences with VRED even more straightforward than before.

We have seen great usage among our customers and gathered our own learnings while hosting guests in our Munich VR Center of Excellence. One thing we noticed: no matter how well those actions work in VR, they still rely on the user taking a physical action, even if that action is just pressing a button on a controller or making a hand gesture.

On some occasions this is not helpful. What if I would like to document a piece of information? How can I write or type in VR? What if I just want to change a variant in the scene while staying focused on the current view to understand the differences between option A and option B? Of course, somebody else could do that for us; it is great to have an assistant at your side. Unfortunately, that cannot always be the case, and it obviously creates a dependency, which is a challenge in current times, where virtual coworking takes place everywhere.

Could this assistant be virtual? We are working virtually anyway, so could that be an option?

Virtual assistants are based on speech recognition, which is already part of our lives. Most of you have probably used it while driving a car, dictating text messages or giving commands to your smart home. There are multiple technologies on the market that already work very well. There is also a Python library for speech recognition that can be accessed from VRED 2021. Wouldn’t it be great to just say a command and see the result in VR? Wouldn’t it be great to leave a comment in VR and document it in a text-based format? Wouldn’t it be great to compare multiple design options without losing sight of the small differences on screen? Wouldn’t it be great to make precise adjustments?

Wouldn’t it be great to make changes in your scene just by the sound of your voice?

Well, this is actually possible using Python 3 in VRED 2021, and we would like to show you how it can be used and how it works.
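To give a first impression, here is a minimal sketch (independent of the scripts in this repository) of how the SpeechRecognition package can capture a spoken phrase from the microphone and turn it into text. It assumes the SpeechRecognition and PyAudio packages are installed and uses Google’s free web recognizer:

```python
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    # Reduce the influence of background noise before listening.
    recognizer.adjust_for_ambient_noise(source, duration=1)
    print("Say a command...")
    audio = recognizer.listen(source)

try:
    # recognize_google() sends the audio to Google's free web API for transcription.
    command = recognizer.recognize_google(audio)
    print("You said:", command)
except sr.UnknownValueError:
    print("Sorry, the speech could not be understood.")
except sr.RequestError as error:
    print("Speech service request failed:", error)
```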

Take a look at the reference video: https://youtu.be/aYafEdaQeQc

Make sure to follow the Initial Setup.
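The details are covered in the Initial Setup; as a rough sketch, making externally installed packages such as SpeechRecognition and PyAudio visible to VRED’s bundled Python 3 usually comes down to installing them into a folder of your choice and appending that folder to sys.path in the Script Editor. The folder used below is only an example:

```python
# Hedged sketch: make locally installed packages importable from VRED's
# Python 3 Script Editor. Install them into the example folder first, e.g.:
#   pip install --target "C:/vred-site-packages" SpeechRecognition PyAudio
import sys

EXTRA_PACKAGES = r"C:/vred-site-packages"  # example path, adjust to your setup

if EXTRA_PACKAGES not in sys.path:
    sys.path.append(EXTRA_PACKAGES)

import speech_recognition as sr
print("SpeechRecognition version:", sr.__version__)
```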

Then copy and paste one of the Python files (VRED-voiceRecogControlTemplate.py, VRED-voiceRecogAnnotation.py, VRED-voiceRecogControlTemplateAnnotation.py) into your Script Editor.
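To illustrate the basic idea behind the control template, the following hedged sketch maps a few spoken keywords to variant sets. The variant set names are placeholders for sets in your own scene, selectVariantSet() is assumed to be available as a VRED scripting command, and the repository’s actual scripts will differ in detail:

```python
import speech_recognition as sr

# Spoken keyword -> variant set to activate (placeholders, adapt to your scene).
COMMANDS = {
    "option a": "design_a",
    "option b": "design_b",
}

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source, duration=1)
    print("Listening for a command...")
    audio = recognizer.listen(source)

try:
    phrase = recognizer.recognize_google(audio).lower()
    print("Recognized:", phrase)
    for keyword, variant_set in COMMANDS.items():
        if keyword in phrase:
            # Assumed VRED command; run this inside VRED's Script Editor.
            selectVariantSet(variant_set)
            break
    else:
        print("No matching command found.")
except sr.UnknownValueError:
    print("Sorry, the speech could not be understood.")
except sr.RequestError as error:
    print("Speech service request failed:", error)
```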
