Error in loading WavLLM model #78
I've tried to download the checkpoint and load it directly using …
Hi, thanks for the prompt response. I checked, and the model download link is not working. I also tried the link with wget, but it is not reachable. Could you please provide an alternate link?
Hi, the link has been updated. Are you using the new link or the old one?
Hi, thanks for responding. I tried the new link, but the download still fails for me. The error is …
We have uploaded the checkpoint to Hugging Face. You can download it from https://huggingface.co/v-sjhu/WavLLM
@rishabh004-ai Were you able to run inference successfully? @XiaoshanHsj Would you consider releasing the inference framework under transformers instead of fairseq?
The Hugging Face link does not seem to work. Could you upload it again? Thanks!
I made it work.
You also have to change the content in asr.csv; otherwise it will lead to a not-found error.
@BinWang28 Thanks for your reply.
I have installed all the libraries, and whenever I run
bash examples/wavllm/scripts/inference_sft.sh $model_path $data_name
the code throws _pickle.UnpicklingError: invalid load key, '\xef'. The error originates from the line models, saved_cfg = checkpoint_utils.load_model_ensemble() at line 454 of SpeechT5/WavLLM/fairseq/examples/wavllm/inference/generate.py
File "/workspace/SpeechT5/WavLLM/fairseq/examples/wavllm/inference/generate.py", line 454, in <module>
    cli_main()
File "/workspace/SpeechT5/WavLLM/fairseq/examples/wavllm/inference/generate.py", line 450, in cli_main
    main(args)
File "/workspace/SpeechT5/WavLLM/fairseq/examples/wavllm/inference/generate.py", line 50, in main
    return _main(cfg, h)
File "/workspace/SpeechT5/WavLLM/fairseq/examples/wavllm/inference/generate.py", line 122, in _main
    models, saved_cfg = checkpoint_utils.load_model_ensemble(
File "/workspace/SpeechT5/WavLLM/fairseq/fairseq/checkpoint_utils.py", line 363, in load_model_ensemble
    ensemble, args, _task = load_model_ensemble_and_task(
File "/workspace/SpeechT5/WavLLM/fairseq/fairseq/checkpoint_utils.py", line 421, in load_model_ensemble_and_task
    state = load_checkpoint_to_cpu(filename, arg_overrides)
File "/workspace/SpeechT5/WavLLM/fairseq/fairseq/checkpoint_utils.py", line 315, in load_checkpoint_to_cpu
    state = torch.load(f, map_location=torch.device("cpu"))
File "/root/miniconda3/envs/wavllm/lib/python3.10/site-packages/torch/serialization.py", line 1040, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/root/miniconda3/envs/wavllm/lib/python3.10/site-packages/torch/serialization.py", line 1258, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '\xef'.
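A note on that error: '\xef' is the first byte of a UTF-8 byte-order mark, so the "checkpoint" on disk is very likely a saved HTML or text error page rather than a real torch checkpoint. A minimal sketch of a sanity check you could run before loading (the function name and any paths here are illustrative, not from the repo):

```python
def looks_like_torch_checkpoint(path):
    """Heuristic: inspect the first bytes of a downloaded checkpoint file.

    A UTF-8 BOM (0xEF 0xBB 0xBF) or a leading '<' (HTML tag) means the
    downloader saved an error page instead of the model. Real torch
    checkpoints start with either a zip header (b'PK', the torch >= 1.6
    format) or a pickle frame opcode (b'\x80', the legacy format).
    """
    with open(path, "rb") as f:
        head = f.read(4)
    if head.startswith(b"\xef\xbb\xbf") or head.lstrip().startswith(b"<"):
        return False  # text/HTML page, not a checkpoint
    return head.startswith(b"PK") or head.startswith(b"\x80")
```

If this returns False for your file, re-download the checkpoint (for example via git-lfs or the huggingface_hub library) and compare the file size against the one shown on the Hugging Face repo page before retrying the inference script.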