Recording best practices to get the best results? #50

Open
allan-simon opened this issue Dec 12, 2021 · 9 comments

@allan-simon

Hello,
First, thanks a lot for making your work so easily available.

I'm trying to build a small piece of software to help my friends improve their French pronunciation by doing the following:

  1. show them a French sentence to read (for which I have the IPA and a native recording)
  2. let them read this sentence aloud
  3. transcribe their recording to IPA using allosaurus
  4. compare with the expected IPA and point out mistakes (see the sketch below)

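To make steps 3 and 4 concrete, here is roughly what I have in mind (a minimal sketch assuming the read_recognizer API; the difflib-based comparison and the bonjour example are just illustrations, not something allosaurus provides):

```python
# Minimal sketch of steps 3 and 4: transcribe the learner's recording with
# allosaurus, then diff the phone sequence against the expected IPA.
# The difflib comparison and the file/IPA values are only illustrations.
import difflib

from allosaurus.app import read_recognizer

model = read_recognizer()  # default pretrained model


def check_pronunciation(wav_path, expected_ipa):
    # "fra" restricts the output to the French phone inventory
    predicted = model.recognize(wav_path, 'fra').split()
    expected = expected_ipa.split()

    # Align the two phone sequences and report every mismatch
    matcher = difflib.SequenceMatcher(None, expected, predicted)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != 'equal':
            print(f'{op}: expected {expected[i1:i2]}, got {predicted[j1:j2]}')


check_pronunciation('bonjour.wav', 'b ɔ̃ ʒ u ʁ')
```
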
I started by playing with allosaurus to check whether it can correctly transcribe me (a native French speaker) pronouncing some simple words, but it seems to have some trouble doing so (the result is quite approximate). Adding -l fra seems to improve the accuracy slightly, but not by much.

Is there a best practice regarding recording to get the best results? Is there some other way to improve the accuracy for French? (I'm a software engineer with good knowledge of Python but not much in machine learning.)

Thanks a lot for any pointers you can give me.

@allan-simon
Author

allan-simon commented Dec 12, 2021

For example, I have worked with people from https://shtooka.net to make recordings of words. Could I use them (as I can get the IPA of these recordings quite easily) to create a French model?
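
If that is a reasonable direction, I imagine the data preparation would look roughly like the sketch below (the wave/text file names and their format are placeholders that I would adapt to whatever the fine-tuning instructions actually expect):

```python
# Sketch: turn a folder of shtooka word recordings plus a word -> IPA lookup
# into "utterance id -> wav path" and "utterance id -> phones" listings.
# The output file names (wave, text) and their layout are placeholders; the
# real format expected by allosaurus fine-tuning is described in its README.
from pathlib import Path

ipa_by_word = {'bonjour': 'b ɔ̃ ʒ u ʁ', 'merci': 'm ɛ ʁ s i'}  # toy lookup

wav_dir = Path('shtooka_fra')
with open('wave', 'w') as wave_f, open('text', 'w') as text_f:
    for wav in sorted(wav_dir.glob('*.wav')):
        word = wav.stem
        if word not in ipa_by_word:
            continue
        utt_id = f'fra_{word}'
        wave_f.write(f'{utt_id} {wav.resolve()}\n')
        text_f.write(f'{utt_id} {ipa_by_word[word]}\n')
```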

@allan-simon
Author

Sorry, I had not read the part about fine-tuning; I will try to follow the instructions to create a model for French.

@xinjli
Owner

xinjli commented Dec 12, 2021

Hi, thanks for your question!

There is another model, called interspeech21, which I have not yet included in the main branch.
It is in a pull request and should work once you merge that branch.
You can download that model and run inference by specifying it; it should give you better results than the current default one.

Another thing you might do is fine-tuning, as you mentioned. It can significantly boost your results even with only a small dataset.
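
In case it helps, once the new model is available locally, loading it should just be a matter of passing its name to read_recognizer, roughly like this (a sketch; the model name below is the one mentioned above and may differ slightly in the pull request):

```python
from allosaurus.app import read_recognizer

# Load the non-default model by name instead of the default one.
# "interspeech21" and "bonjour.wav" are just examples to adapt.
model = read_recognizer('interspeech21')
print(model.recognize('bonjour.wav', 'fra'))
```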

@allan-simon
Author

OK, thanks. So you would advise using your branch + creating a fine-tuned model?

I kept thinking about it today, and I was wondering something about my use case above:

If I create a model trained only on French words pronounced by native speakers, wouldn't that introduce a bias?

I.e., if one of my Chinese friends wants to correct their accent, allosaurus with that model will not recognize some Mandarin-specific phones and will instead try hard to match them to the closest French phone (which is not good in my case, because I want to point out that their pronunciation is off). Shouldn't I instead train a model with both Mandarin and French, so that if they pronounce a French word with a strong Chinese accent I will still be able to transcribe it correctly?
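
To illustrate what I mean, here is a rough sketch comparing a French-restricted pass with the default call, which as far as I understand uses the full universal inventory (the file name is just an example):

```python
from allosaurus.app import read_recognizer

model = read_recognizer()

# Restricted to the French inventory: every phone gets mapped to a French one.
french_only = model.recognize('learner.wav', 'fra')

# No language id: the full universal inventory (assuming that is the default),
# so phones outside French can still appear in the output.
universal = model.recognize('learner.wav')

print('fra:', french_only)
print('uni:', universal)
```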

@xinjli
Owner

xinjli commented Dec 15, 2021

Yeah, I think you can try the new branch first and see whether it works or not.
The new branch is trained on both Mandarin and French, so it should be able to distinguish phones in both languages.

If you fine-tune the model using only native speakers, it might not work very well for Mandarin speakers, as you point out. Training with both languages might be one option.

@allan-simon
Author

I tried your branch, but I'm running into the following issues:

  1. torchaudio was missing as a dependency in setup.py (adding it to setup.py seems to be enough)

  2. allospeech cannot be found:

Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/asimon/git/allosaurus/allosaurus/run.py", line 1, in <module>
    from allosaurus.app import read_recognizer
  File "/home/asimon/git/allosaurus/allosaurus/app.py", line 4, in <module>
    from allosaurus.pm.factory import read_pm
  File "/home/asimon/git/allosaurus/allosaurus/pm/factory.py", line 4, in <module>
    from allosaurus.utils.config import dotdict
  File "/home/asimon/git/allosaurus/allosaurus/utils/config.py", line 4, in <module>
    from allospeech.config import allospeech_config
ModuleNotFoundError: No module named 'allospeech'
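
For issue 1, the workaround I mentioned amounts to something like this in setup.py (abbreviated; the other entries just stand in for whatever the branch already lists):

```python
# setup.py (abbreviated): the only change is adding torchaudio to the
# dependency list; the other entries are placeholders for the existing ones.
from setuptools import setup, find_packages

setup(
    name='allosaurus',
    packages=find_packages(),
    install_requires=[
        'torch',
        'torchaudio',  # was missing on the branch
        # ... existing dependencies unchanged ...
    ],
)
```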

@xinjli
Owner

xinjli commented Dec 16, 2021

I see. allospeech is a private library of mine that I use to develop models.
I will fix this. Sorry for the inconvenience.

@allan-simon
Author

OK, no problem :) It's already very nice of you to both open-source the library and take the time to answer my questions.

@xinjli
Owner

xinjli commented Dec 16, 2021

Yeah, thanks!
