Why is the inference BLEU low in the translation task (en-de)? #5
Comments
Can you provide your running commands and log? Without these, I cannot locate the bug. Thanks.
Hello, thanks for your reply! I first ran inference.sh with the following parameters to generate the translation results:
Thanks.
Thanks for your response, I will try it again. Your efforts in addressing my questions have genuinely helped me gain a deeper understanding of the workflow. I sincerely hope you produce more solid work!
Hello, I'm sorry to bother you again. When loading the model, I get:

```
Loading checkpoint shards: 0%| | 0/33 [00:00<?, ?it/s]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
```

The llama-7b weights above seem to fail to load.
It seems that the llama model weights have not been downloaded correctly. Can you compare the weights you downloaded with those at https://huggingface.co/wxjiao/llama-7b? I'd recommend downloading the weights directly using
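If it helps with debugging, a quick sanity check is to verify that every shard listed in the checkpoint's index file is actually present on disk and non-empty. The sketch below assumes the standard sharded PyTorch layout produced by `save_pretrained` (a `pytorch_model.bin.index.json` plus `pytorch_model-000xx-of-000yy.bin` files); the helper name `check_checkpoint_shards` is just for illustration, not part of ParroT.

```python
import json
import os

def check_checkpoint_shards(model_dir):
    """Return the list of shard files from the HF index that are missing or empty.

    Assumes a sharded PyTorch checkpoint as written by `save_pretrained`:
    `pytorch_model.bin.index.json` maps each tensor name to its shard file.
    """
    index_path = os.path.join(model_dir, "pytorch_model.bin.index.json")
    with open(index_path) as f:
        index = json.load(f)

    missing = []
    # The index's weight_map maps tensor names -> shard filenames; dedupe them.
    for shard in sorted(set(index["weight_map"].values())):
        path = os.path.join(model_dir, shard)
        if not os.path.isfile(path) or os.path.getsize(path) == 0:
            missing.append(shard)
    return missing
```

If this returns a non-empty list (or truncated files show the wrong sizes compared to the Hub's file listing), re-downloading those shards should fix the loading error.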
Hello, I have a problem while using ParroT inference.
On the WMT22 test set (en-de) translation task, I loaded the llama-7b parameters for inference and the BLEU was only 6.9808. After loading the fine-tuned ParroT-Hint-7b-lora parameters you provided, without adding a hint, the BLEU did not improve. How can I improve inference performance? Thank you!
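One thing worth checking when BLEU is unexpectedly low is whether the decoded outputs still contain prompt or instruction text, since BLEU's n-gram precisions penalize every extra token. For real evaluation a standard tool such as sacrebleu should be used; the sketch below is only an illustrative stdlib reimplementation of sentence-level BLEU (no smoothing), with hypothetical helpers `ngrams` and `bleu`, to show how untrimmed prefix tokens drag the score down.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hyp, ref, max_n=4):
    """Unsmoothed BLEU for one hypothesis/reference pair of token lists."""
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = ngrams(hyp, n)
        ref_ngrams = ngrams(ref, n)
        # Clipped (modified) n-gram precision.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = max(sum(hyp_ngrams.values()), 1)
        if overlap == 0:
            return 0.0  # geometric mean collapses with any zero precision
        precisions.append(overlap / total)
    # Brevity penalty: punish hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

For example, prepending leftover prompt tokens to an otherwise perfect hypothesis strictly lowers the score, so stripping any instruction template from the generated text before scoring is a cheap first check.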