Training improvement #168
binhphamthanh asked this question in Q&A (unanswered)
Hello everyone,
I'm training a single-speaker model from scratch with Piper in a Colab environment.
My approach is to use multiple small datasets (about 200 sentences per training run).
After each run, I fine-tune the latest checkpoint on the next small dataset.
So far, everything is going well.
However, I have some concerns:
I set a maximum of 2000 epochs per training run, but that seems excessive: I don't see much difference between the test results of the 1000-epoch and 2000-epoch checkpoints. Can you suggest some tips to avoid wasting training time?
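One idea I've been considering is early stopping on validation loss. Since Piper's training code is built on PyTorch Lightning, something along these lines might work; this is only a rough sketch assuming direct access to the Lightning `Trainer` (I'm not sure Piper's CLI exposes these callbacks), and `model` and the dataloaders are placeholders for the real Piper objects:

```python
# Rough sketch: cap wasted epochs by stopping when validation loss plateaus.
# Assumes direct access to a PyTorch Lightning Trainer; `model` and the
# dataloaders below stand in for the actual Piper training objects.
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping, ModelCheckpoint

early_stop = EarlyStopping(
    monitor="val_loss",  # whatever metric the model logs during validation
    patience=50,         # epochs without improvement before stopping
    mode="min",
)
best_ckpt = ModelCheckpoint(
    monitor="val_loss",
    save_top_k=1,        # keep only the best checkpoint, not every epoch
    mode="min",
)

trainer = pl.Trainer(
    max_epochs=2000,     # hard ceiling; early stopping usually ends sooner
    callbacks=[early_stop, best_ckpt],
)
# trainer.fit(model, train_dataloader, val_dataloader)
```

With `save_top_k=1` the best-scoring checkpoint is kept automatically, so a run that plateaus around epoch 1000 wouldn't burn another 1000 epochs of Colab time.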
Is it also possible to run multiple training sessions (e.g. one per small dataset) and then merge the resulting checkpoints into a single model?
Something like this (a rough weight-averaging sketch; I'm assuming Lightning-style `.ckpt` files with a `state_dict` key, and the file names are just placeholders):
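```python
# Rough sketch of weight averaging across checkpoints of the SAME architecture.
# File names are placeholders; the "state_dict" key matches Lightning's layout.
import torch

paths = ["session_a.ckpt", "session_b.ckpt"]
ckpts = [torch.load(p, map_location="cpu") for p in paths]
states = [c["state_dict"] for c in ckpts]

avg = {}
for key in states[0]:
    if states[0][key].is_floating_point():
        # Average float tensors (weights, biases) element-wise.
        avg[key] = sum(s[key] for s in states) / len(states)
    else:
        # Integer buffers (e.g. step counters) can't be averaged meaningfully.
        avg[key] = states[0][key]

ckpts[0]["state_dict"] = avg  # reuse the first checkpoint's other metadata
torch.save(ckpts[0], "merged_avg.ckpt")
```

Though from what I've read, plain weight averaging only behaves well when the checkpoints are close in weight space (e.g. late snapshots of the same run), so maybe sequential fine-tuning is still the safer route?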
Appreciate your help!