About reduction_factor_schedule #79
Your intuition is right!
You need your model layers to be shape-static, so the projection output is initialized with the largest value of the reduction factor schedule (TransformerTTS/model/models.py, Line 83 in e4ded5b).
When you reduce the value of the reduction factor during training, only the corresponding part of that fixed-size output is used (TransformerTTS/model/models.py, Lines 148 to 151 in e4ded5b).
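For illustration, here is a minimal sketch of that idea: a projection whose weights are sized once for the largest reduction factor, while the current reduction factor only selects a slice of the output. The layer name and the exact reshape/slice details are assumptions for this example, not the repo's code at the referenced lines.

```python
import tensorflow as tf


class ShapeStaticMelProjection(tf.keras.layers.Layer):
    """Hypothetical sketch of a shape-static mel projection.

    The Dense weights are created once for the largest reduction factor
    (max_r), so the layer shape never changes; the current reduction
    factor r only decides how many of the predicted frames are kept.
    """

    def __init__(self, mel_channels: int, max_r: int, **kwargs):
        super().__init__(**kwargs)
        self.mel_channels = mel_channels
        self.max_r = max_r
        # Sized for max_r mel frames per decoder step; never re-created.
        self.proj = tf.keras.layers.Dense(mel_channels * max_r)

    def call(self, decoder_out, r: int):
        # decoder_out: (batch, dec_steps, d_model)
        batch = tf.shape(decoder_out)[0]
        steps = tf.shape(decoder_out)[1]
        out = self.proj(decoder_out)  # (batch, dec_steps, max_r * mel_channels)
        out = tf.reshape(out, [batch, steps, self.max_r, self.mel_channels])
        out = out[:, :, :r, :]        # keep only the first r frames per decoder step
        return tf.reshape(out, [batch, steps * r, self.mel_channels])


# Usage: with max_r = 10 and r = 5, 7 decoder steps yield 35 mel frames.
layer = ShapeStaticMelProjection(mel_channels=80, max_r=10)
x = tf.random.normal([2, 7, 256])
print(layer(x, r=5).shape)  # (2, 35, 80)
```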
Thanks for your elaboration.
Thank you @myagues, excellent explanation.
Hi, thanks for sharing this great work.
I want to ask about the training trick: why do we need a dynamic input length in the decoder module? The relevant variables self.max_r and self.r can be found in models.py. The purpose seems to be to make training harder at the beginning, since we use less input to predict the whole mel sequence, and then easier as the reduction_factor_schedule decreases, which implies a larger input length. It looks a bit like a simulated annealing algorithm; does it really work as I described? What will happen when self.max_r and self.r are not the same?