Hi,
For fine-tuning the current model on other languages, is it better to use the existing trained model and prompt tokenizer "parler-tts/parler_tts_mini_v0.1", or is it better to train from scratch with a custom tokenizer? Any suggestions for a multilingual tokenizer if using espeak-ng? Thank you for your insights.
Hey @taalua, it depends on the languages you want to fine-tune on!
If the Flan-T5 tokenizer covers your language (say Spanish or French), you can fine-tune the existing model. Otherwise, you probably need a custom tokenizer, or one suited for multilinguality (say mT5), and to train your model from scratch!
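As a quick sanity check, you can tokenize a sample sentence in your target language and count unknown tokens: lots of `<unk>` means poor coverage. A minimal sketch below, assuming the `transformers` library; the checkpoint names `google/flan-t5-base` and `google/mt5-base` and the helper `coverage_report` are illustrative, so swap in whatever tokenizer your checkpoint actually uses.

```python
# Sketch: estimate how well a prompt tokenizer covers a target language
# by counting how many tokens of a sample sentence map to <unk>.
from transformers import AutoTokenizer

def coverage_report(tokenizer_name: str, sample: str) -> None:
    tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
    ids = tokenizer(sample, add_special_tokens=False)["input_ids"]
    n_unk = sum(1 for i in ids if i == tokenizer.unk_token_id)
    print(f"{tokenizer_name}: {len(ids)} tokens, {n_unk} unknown")

sample_de = "Der schnelle braune Fuchs springt über den faulen Hund."
coverage_report("google/flan-t5-base", sample_de)  # English-centric tokenizer
coverage_report("google/mt5-base", sample_de)      # multilingual alternative
```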
Right now I'm taking my first steps with "dataspeech" and wondering whether I can simply adjust this code, or whether I have to switch to another phonemizer like "phonemizer", to support my work on a purely German single-speaker voice dataset.
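For context, here's a rough sketch of what I imagine the switch would look like: phonemizing German text with the `phonemizer` package on top of espeak-ng. This is just my guess at the usage (sample text and options included), not the dataspeech code itself; it assumes both `phonemizer` and the espeak-ng `de` voice are installed.

```python
# Sketch: phonemize a German sentence via the phonemizer package,
# using the espeak-ng backend.
from phonemizer import phonemize

text = "Guten Morgen, wie geht es dir?"
phones = phonemize(
    text,
    language="de",            # espeak-ng language code for German
    backend="espeak",         # uses espeak-ng under the hood
    strip=True,               # drop trailing separators/whitespace
    preserve_punctuation=True,
)
print(phones)  # IPA transcription of the German sentence
```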