
issues on bert-base-cased-qa-evaluator #22

Open
chaozz98 opened this issue May 23, 2023 · 0 comments

Comments

@chaozz98

Hello, I trained the qa-evaluator model with qa_eval_train.py on your training and validation datasets, but the validation accuracy stays at around 0.5, and every example then receives the same final score at output[0][0][1]. May I ask what the reason for this is? I really could not find the problem.
I did not change any of the training code. Because the validation accuracy is always around 0.5, the final score, output[0][0][1], is identical for every input, for example:
tensor(-0.4724, device='cuda:0')
tensor(-0.4724, device='cuda:0')
tensor(-0.4724, device='cuda:0')
tensor(-0.4724, device='cuda:0')
tensor(-0.4724, device='cuda:0')
tensor(-0.4724, device='cuda:0')
tensor(-0.4724, device='cuda:0')
tensor(-0.4724, device='cuda:0')
tensor(-0.4724, device='cuda:0')
tensor(-0.4724, device='cuda:0')
Please help me. Thank you.

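For context, here is a minimal sketch of how a score like output[0][0][1] is usually read out of a Hugging Face sequence-classification checkpoint. The checkpoint name iarfmoose/bert-base-cased-qa-evaluator and the example question/answer strings are assumptions for illustration and are not taken from the repo's training script.

```python
# Minimal sketch (assumed checkpoint name and inputs), not the repo's own code.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "iarfmoose/bert-base-cased-qa-evaluator"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

question = "What is the capital of France?"  # illustrative input
answer = "Paris"                             # illustrative input

# The question and answer are encoded together as one sequence pair.
inputs = tokenizer(question, answer, return_tensors="pt")

with torch.no_grad():
    output = model(**inputs)

# output[0] is the logits tensor of shape (batch, num_labels);
# output[0][0][1] is the logit the issue refers to as the final score.
# If the classifier never learned (validation accuracy ~0.5), this logit
# will be nearly identical for every input, as shown above.
score = output[0][0][1]
print(score)
```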