Dear Author,
Hello, I encountered the following issues while reproducing the NLU DeBERTa-v2-xxlarge experiment in the examples folder:
After running the deberta_v2_xxlarge_mnli.sh script, I found that it does not generate a LoRA-only checkpoint such as deberta_v2_xxlarge_lora_mnli.bin; instead, it saves the full model parameters under the mnli/model path. Is there something wrong with my setup?
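In case it helps, here is a minimal sketch of how one might extract just the LoRA weights from the full checkpoint after the fact. It assumes the script saved a standard PyTorch state dict at mnli/model/pytorch_model.bin and that the LoRA parameters follow loralib's "lora_" key naming; both the path and the key prefix are assumptions, not confirmed against this repo's saving logic:

```python
# Sketch: extract LoRA-only weights from a full model checkpoint.
# Assumes CKPT_PATH is a plain PyTorch state dict and that LoRA parameters
# follow loralib's "lora_" naming convention (e.g. "...lora_A", "...lora_B").
# Both the path and the prefix are assumptions, not confirmed for this repo.
import torch

CKPT_PATH = "mnli/model/pytorch_model.bin"      # assumed output location
LORA_PATH = "deberta_v2_xxlarge_lora_mnli.bin"  # desired LoRA-only file

state_dict = torch.load(CKPT_PATH, map_location="cpu")

# Keep only the LoRA parameters; everything else is the frozen backbone.
lora_state_dict = {k: v for k, v in state_dict.items() if "lora_" in k}

print(f"kept {len(lora_state_dict)} of {len(state_dict)} tensors")
torch.save(lora_state_dict, LORA_PATH)
```

For what it's worth, the main LoRA README saves checkpoints with torch.save(lora.lora_state_dict(model), checkpoint_path), so I expected a similar LoRA-only file to be produced here.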
In the NLU readme.md, the "Evaluate the checkpoints" section uses the do_eval parameter; however, should we use do_predict instead to generate test-set predictions?
After submitting the result file to the Kaggle MNLI challenge, the LoRA fine-tuned model only achieves a public score of 0.33156, which is close to random chance on the three-class MNLI task and far below the results reported in the paper. I wonder whether this is due to the hyperparameter selection (I did not change the provided hyperparameters) or to the first problem mentioned above?
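As a quick sanity check before blaming hyperparameters, here is a sketch that inspects the label distribution of the prediction file, since a near-chance score often points to a label-mapping or checkpoint-loading problem rather than tuning. It assumes the predict step writes predict_results_mnli.txt in the tab-separated index/prediction format used by the Hugging Face run_glue.py example; the path and format are assumptions and may need adjusting for this repo's scripts:

```python
# Sketch: sanity-check predictions before a Kaggle submission.
# Assumes PRED_PATH is a tab-separated file with an "index\tprediction"
# header, as written by the Hugging Face run_glue.py example. The path
# and format are assumptions for this repo's NLU scripts.
from collections import Counter

PRED_PATH = "mnli/model/predict_results_mnli.txt"  # assumed output file

with open(PRED_PATH) as f:
    next(f)  # skip the "index\tprediction" header line
    labels = [line.rstrip("\n").split("\t")[1] for line in f if line.strip()]

# MNLI is roughly balanced across entailment/neutral/contradiction, so a
# heavily skewed distribution suggests a label-mapping or loading issue.
for label, count in Counter(labels).most_common():
    print(f"{label}: {count} ({count / len(labels):.1%})")
```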