
Add testing/evaluation/calibration functionality #15

Open
Kaljurand opened this issue Apr 9, 2015 · 0 comments

Original issue 11 created by Kaljurand on 2011-11-03T12:10:04.000Z:

There should be a built-in tool that guides the user through a list of written utterances (with their corresponding normalizations), asks the user to speak each of them, and continuously shows how often the speech recognizer produces the matching transcription.

The purpose is to test/evaluate the speech recognizer, or to train it (for the latter, the API needs to provide a way to communicate the existing written utterance to the server, e.g. with every query).
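The matching step described above could be a simple normalization-based comparison: a recognizer hypothesis counts as correct if its normalized form equals the normalized written utterance, or any of its known normalizations. A minimal sketch in Python, where `normalize` is a hypothetical placeholder for whatever normalization the app actually applies:

```python
import re

def normalize(text):
    # Hypothetical normalization: lowercase, strip punctuation,
    # collapse runs of whitespace. The real app may do more
    # (e.g. number rewriting, grammar-specific rules).
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", text.lower())).strip()

def is_match(hypothesis, reference, reference_normalizations=None):
    # A hypothesis is correct if it normalizes to the reference
    # or to any of the reference's listed normalizations.
    targets = {normalize(reference)}
    targets.update(normalize(n) for n in (reference_normalizations or []))
    return normalize(hypothesis) in targets
```

For example, `is_match("Call John!", "call john")` would count as a correct recognition despite the punctuation and case differences.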

In terms of the UI:

  • it could be something similar to the current Repeater demo (and it could actually replace it as a demo);
  • it should be able to pull existing written utterances from a webservice (which possibly generates them randomly using a given grammar);
  • it should display a final report about the recognition accuracy/speed and share it via other apps.
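The continuously updated performance display and the final report could both be driven by one session object that accumulates per-utterance results. A sketch of that bookkeeping in Python (class and method names are hypothetical; on Android the report string would presumably be handed to a share intent):

```python
class EvalSession:
    """Accumulates per-utterance results and produces a shareable summary."""

    def __init__(self):
        # Each entry: (matched: bool, recognition_seconds: float)
        self.results = []

    def record(self, matched, seconds):
        # Call once per spoken utterance, as soon as the
        # transcription has been compared against the reference.
        self.results.append((matched, seconds))

    def accuracy(self):
        # Running accuracy, suitable for the continuous display.
        if not self.results:
            return 0.0
        return sum(1 for m, _ in self.results if m) / len(self.results)

    def report(self):
        # Final plain-text report for sharing via other apps.
        n = len(self.results)
        avg = sum(s for _, s in self.results) / n if n else 0.0
        return (f"Utterances: {n}\n"
                f"Accuracy: {self.accuracy():.0%}\n"
                f"Avg recognition time: {avg:.1f} s")
```

After each utterance the UI would call `record(...)` and redraw `accuracy()`; at the end of the list, `report()` yields the text to share.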

All this functionality could also be packaged as an independent app, but it's probably easier to start building it as part of RecognizerIntent.
