Describe the bug

I am training a YourTTS model using multiple internal datasets that all use the same custom formatter. I have hit two problems:
1. The embedding computation, `compute_embeddings()`, does not accept a custom formatter as an input. Solving this seems trivial: just add a `formatter` kwarg to the method and pass it on to `load_tts_samples()`.
2. `load_tts_samples()` only uses the given custom formatter for the first dataset, because `formatter` is set to `None` at the end of each loop iteration. As a result, only the first dataset uses the custom formatter and every later dataset falls back to a formatter looked up from the config (see the sketch below).
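For reference, a minimal paraphrase of the problematic pattern (not the actual upstream source; the real loop lives in `TTS/tts/datasets/__init__.py` and uses different names):

```python
from typing import Callable, Optional

def get_builtin_formatter(name: str) -> Callable:
    # Stand-in for the library's internal lookup of built-in formatters by name.
    raise KeyError(f"no built-in formatter named {name!r}")

def load_tts_samples(datasets: list, formatter: Optional[Callable] = None) -> list:
    samples = []
    for dataset in datasets:
        if formatter is None:
            # Fall back to a built-in formatter named in the dataset config.
            formatter = get_builtin_formatter(dataset["formatter"])
        samples.extend(formatter(dataset["path"], dataset["meta_file_train"]))
        # Problem 2: the caller-supplied formatter is discarded here, so
        # every dataset after the first falls back to the config lookup.
        formatter = None
    return samples
```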
To Reproduce
Define multiple datasets with a custom formatter in your training script (for me, the YourTTS recipe from the VCTK directory); a sketch of such a setup follows below. Then run the script, e.g.

uv run python recipes/vctk/yourtts/train_yourtts_jackson_finetune.py

and the embedding computation will fail with a missing formatter. I patched `compute_embeddings()` to support a custom formatter, but the second dataset's formatting still fails.
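A hedged sketch of the setup (paths, dataset names, and the metadata layout are placeholders; the formatter signature follows the built-in formatters in `TTS/tts/datasets/formatters.py`):

```python
from TTS.config.shared_configs import BaseDatasetConfig
from TTS.tts.datasets import load_tts_samples

# Hypothetical custom formatter for our internal metadata layout.
def internal_formatter(root_path, meta_file, **kwargs):
    items = []
    with open(f"{root_path}/{meta_file}", encoding="utf-8") as f:
        for line in f:
            wav, text, speaker = line.strip().split("|")
            items.append({
                "text": text,
                "audio_file": f"{root_path}/wavs/{wav}.wav",
                "speaker_name": speaker,
                "root_path": root_path,
            })
    return items

# Two internal datasets (placeholder paths) sharing the same custom formatter.
dataset_a = BaseDatasetConfig(dataset_name="internal_a", path="/data/a", meta_file_train="metadata.csv")
dataset_b = BaseDatasetConfig(dataset_name="internal_b", path="/data/b", meta_file_train="metadata.csv")

# Problem 2: only dataset_a is actually parsed with internal_formatter.
train_samples, eval_samples = load_tts_samples(
    [dataset_a, dataset_b], eval_split=True, formatter=internal_formatter
)
```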
Additional context

Problem 1, as said, should be trivial, and adding the kwarg defaulting to `None` should not cause any side effects.
Problem 2 might require some discussion. For our usage I made a fix where, if the `formatter` argument is given, it is used for all the datasets (see the sketch below), but I started wondering whether this is actually the best solution. It would not work in a situation where a set of different formatters is needed: for example, fine-tuning a model base-trained on the VCTK dataset with a custom-formatted dataset, while including some of the original VCTK data to avoid overfitting to the new speaker.
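The fix I made amounts to this (again a paraphrased sketch reusing the helper names from the loop above, not the actual patch):

```python
from typing import Callable, Optional

def load_tts_samples(datasets: list, formatter: Optional[Callable] = None) -> list:
    samples = []
    custom_formatter = formatter  # remember the caller-supplied formatter
    for dataset in datasets:
        # Use the custom formatter for every dataset; only fall back to the
        # config lookup when the caller supplied none.
        formatter = custom_formatter or get_builtin_formatter(dataset["formatter"])
        samples.extend(formatter(dataset["path"], dataset["meta_file_train"]))
    return samples
```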
Another way to solve this would be to change the custom dataset formatter documentation to suggest monkey patching the dataset object, but to my understanding that is not considered best practice in the Python community. Alternatively, an API for registering and getting formatters could be added (sketched below).
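For the latter, something like a small registry could work (purely hypothetical; no such API exists in the library today):

```python
_CUSTOM_FORMATTERS: dict = {}

def register_formatter(name: str):
    # Decorator that makes a custom formatter addressable by name, so a
    # dataset config could reference it via formatter="internal".
    def wrap(fn):
        _CUSTOM_FORMATTERS[name] = fn
        return fn
    return wrap

def get_formatter(name: str):
    return _CUSTOM_FORMATTERS[name]

@register_formatter("internal")
def internal_formatter(root_path, meta_file, **kwargs):
    ...  # same formatter as in the reproduction sketch above
```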
I would love to contribute to this, but I think having input from others would help reach a desired outcome. Any takes on this? Would it be easier to discuss if I open a pull request with an example fix?
Yes, we have also encountered this. PRs in that direction would be very welcome, maybe separately for these 2 issues.
For the second issue, it's been a while since I looked into it, but maybe optionally accepting a list of custom formatters is feasible, matching the length of datasets to allow setting a different formatter for each:
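Something along these lines, perhaps (an untested sketch reusing the helper names from the loop paraphrase above):

```python
from typing import Callable, Optional, Union

def load_tts_samples(
    datasets: list,
    formatter: Optional[Union[Callable, list]] = None,
) -> list:
    # Normalize to one formatter slot per dataset: a single callable is
    # broadcast to all datasets, a list is used as-is.
    if formatter is None or callable(formatter):
        formatters = [formatter] * len(datasets)
    else:
        assert len(formatter) == len(datasets), "need one formatter per dataset"
        formatters = list(formatter)

    samples = []
    for dataset, fmt in zip(datasets, formatters):
        if fmt is None:
            fmt = get_builtin_formatter(dataset["formatter"])
        samples.extend(fmt(dataset["path"], dataset["meta_file_train"]))
    return samples
```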