Fine-tuning the pre-trained model #28
Comments
Hi, you can fine-tune the model with the FunASR AutoModel pipeline.
Is this what you mean by fine-tuning?

```python
model = AutoModel(model="iic/emotion2vec_base_finetuned")
# Alternative: iic/emotion2vec_plus_seed, iic/emotion2vec_plus_base,
# iic/emotion2vec_plus_large and iic/emotion2vec_base_finetuned
wav_file = f"{model.model_path}/example/test.wav"
```
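For context, that snippet on its own only loads the model. A minimal runnable version of the FunASR pipeline (a sketch assuming funasr is installed and the bundled example clip exists; adjust parameters as needed) would also include the import and a generate() call:

```python
from funasr import AutoModel

# Load a released checkpoint through the FunASR AutoModel pipeline.
model = AutoModel(model="iic/emotion2vec_base_finetuned")

# The bundled example clip; replace with your own 16 kHz wav file.
wav_file = f"{model.model_path}/example/test.wav"

# Run utterance-level emotion recognition on the clip.
rec_result = model.generate(wav_file, granularity="utterance", extract_embedding=False)
print(rec_result)
```

Note that this only runs inference with an already fine-tuned checkpoint; it does not update any weights.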
I want to load the model and fine-tune it myself to get better accuracy on my voice and the voices of my colleagues. I see that you provide the code for the upstream model, and I downloaded the checkpoint emotion2vec_base.pt. However, the checkpoint does not fully match the model; specifically, I get a RuntimeError when loading the state dict.
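For what it's worth, a quick way to see where the mismatch comes from is to inspect the checkpoint keys and load them non-strictly. This is only a sketch: the checkpoint path, the nesting of the weights under a "model" key, and the `upstream_model` variable are assumptions, not this repo's API.

```python
import torch

# Load the released checkpoint (path is an assumption; adjust to where you saved it).
ckpt = torch.load("emotion2vec_base.pt", map_location="cpu")

# Fairseq-style checkpoints often nest the weights under a "model" key;
# fall back to the top-level dict if that key is absent.
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt

# Print a few parameter names and shapes to compare against your model definition.
for name, value in list(state_dict.items())[:20]:
    print(name, tuple(getattr(value, "shape", ())))

# `upstream_model` is a placeholder for however you instantiate the upstream
# network from this repo; strict=False reports mismatches instead of raising.
# result = upstream_model.load_state_dict(state_dict, strict=False)
# print("missing keys:", result.missing_keys)
# print("unexpected keys:", result.unexpected_keys)
```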
I suspect there will be more such mismatches once I resolve this one, so to save some time, could you provide the corresponding configs? Thanks in advance.
Hi, thank you very much for your work.
I would like to build some further work on top of yours.
I have not found any fine-tuning recipe on ModelScope or GitHub.
Could you please guide me on how to fine-tune or retrain your model?
Many thanks.
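In case it helps while waiting for an official recipe, one common route is to freeze emotion2vec as a feature extractor and train a small classifier on utterance-level embeddings of your own labelled recordings. The sketch below is an assumption-laden illustration, not this repo's fine-tuning code: it assumes funasr is installed, that generate(..., extract_embedding=True) returns the embedding under a "feats" key, and that the wav paths and labels are hypothetical placeholders. Verify the key name and embedding dimension against the actual output before relying on it.

```python
import numpy as np
import torch
import torch.nn as nn
from funasr import AutoModel

# Frozen pre-trained model used purely as a feature extractor.
model = AutoModel(model="iic/emotion2vec_base")

def embed(wav_path: str) -> np.ndarray:
    """Return one utterance-level embedding for a wav file."""
    res = model.generate(wav_path, granularity="utterance", extract_embedding=True)
    return np.asarray(res[0]["feats"])  # key name "feats" is an assumption

# Hypothetical labelled clips from you and your colleagues.
wavs = ["colleague_angry.wav", "colleague_happy.wav"]
labels = torch.tensor([0, 1])
feats = torch.tensor(np.stack([embed(w) for w in wavs]), dtype=torch.float32)

# Lightweight classifier head trained on the frozen embeddings.
clf = nn.Linear(feats.shape[-1], 2)
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(clf(feats), labels)
    loss.backward()
    opt.step()
```

This keeps the upstream weights untouched, so it sidesteps the checkpoint-mismatch problem above, at the cost of not adapting the representation itself.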