self-supervised pretraining(wav2vec 2.0/data2vec) for wenet #1003
base: main
Conversation
cool!
nice
Looking forward to the latest developments
I tried to reproduce this example using the pretrained model at https://huggingface.co/emiyasstar/ch-w2v-conformer and got the following error:
ch-w2v-conformer uses a 6x subsampling model, and the pretraining-only training parameters were removed for compatibility with the master branch code. You can load the model with the configuration file provided in the openasr recipe we released.
Thanks for your reply. Has the w2v-conformer model trained with fbank features as input, mentioned in https://github.com/wenet-e2e/wenet/blob/1269a6e5bbec440302e934f243f623baeebf2758/examples/aishell/s0_ssl/README.md, been open-sourced?
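A minimal sketch of loading such a stripped checkpoint with plain PyTorch, using a hypothetical checkpoint path and a toy model in place of the Conformer that the recipe config would build; the point is that `strict=False` tolerates the removed pretraining-only parameters:

```python
import torch
import torch.nn as nn

# Toy stand-in for the fine-tuning model; in practice the Conformer is built
# from the config file shipped with the openasr recipe.
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(80, 256)     # weights present in the released checkpoint
        self.ctc_head = nn.Linear(256, 4233)  # fine-tuning-only, randomly initialized

model = ToyModel()
# Hypothetical path for the released ch-w2v-conformer weights.
state = torch.load("ch-w2v-conformer.pt", map_location="cpu")
# strict=False skips keys that do not match on either side: the removed
# pretraining-only parameters and the newly added fine-tuning head.
missing, unexpected = model.load_state_dict(state, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)
```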
1. Support self-supervised pretraining with the wav2vec 2.0/data2vec methods (see the sketch after this list).
2. Add an SSL recipe in librispeech/ssl.
3. Add an SSL recipe in aishell/ssl.
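As a rough illustration of the wav2vec 2.0 objective this PR builds on, here is a minimal sketch of the contrastive loss in PyTorch. The function name `contrastive_loss`, the tensor shapes, and the `temperature` default are illustrative assumptions, not the PR's actual implementation:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(context, targets, negatives, temperature=0.1):
    """wav2vec 2.0-style contrastive loss over masked positions.

    context:   (B, T, D) context-network outputs at masked frames
    targets:   (B, T, D) quantized/teacher targets for the same frames
    negatives: (B, T, K, D) K distractor targets sampled from other frames
    """
    # Candidates = the positive target followed by K negatives: (B, T, K+1, D)
    candidates = torch.cat([targets.unsqueeze(2), negatives], dim=2)
    # Cosine similarity of each context vector with every candidate: (B, T, K+1)
    logits = F.cosine_similarity(context.unsqueeze(2), candidates, dim=-1) / temperature
    # The positive is always candidate 0, so the label is 0 everywhere.
    labels = torch.zeros(logits.shape[:2], dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits.flatten(0, 1), labels.flatten())

# Example with random tensors: batch of 2, 50 masked frames, 256-dim, 10 negatives.
B, T, D, K = 2, 50, 256, 10
loss = contrastive_loss(torch.randn(B, T, D), torch.randn(B, T, D), torch.randn(B, T, K, D))
print(loss.item())
```

In the data2vec variant, the quantized targets are replaced by the teacher network's averaged layer representations and the contrastive term above is replaced by a regression loss on the masked frames.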