Dear all,
I have extracted the scores for the test list using a pretrained model with the --eval option enabled. I am confused about why the same model gives me different score values on each run, since I am not training the model, only testing it.
Additionally, I am concerned that this may change the False Negative Rate (FNR) and False Positive Rate (FPR) values, leading to a different tuneThreshold result on each run.
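To make the concern concrete, here is a rough sketch of how I understand the EER threshold is derived from the raw scores (using sklearn's roc_curve as a stand-in; this is not the repository's actual tuneThreshold code), which is why even small score differences seem like they could move the threshold:

```python
# Sketch of how an EER-style threshold falls out of the raw scores.
# Conceptual only -- NOT the repository's tuneThreshold implementation.
import numpy as np
from sklearn.metrics import roc_curve

def eer_from_scores(scores, labels):
    """Return (eer, threshold) for binary trial labels (1 = target pair)."""
    fpr, tpr, thresholds = roc_curve(labels, scores, pos_label=1)
    fnr = 1.0 - tpr
    # EER is where the false positive and false negative rates cross.
    idx = np.nanargmin(np.abs(fnr - fpr))
    eer = (fpr[idx] + fnr[idx]) / 2.0
    return eer, thresholds[idx]

# Toy trials: small perturbations of the scores can shift the threshold.
labels = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.82, 0.75, 0.60, 0.55, 0.40, 0.30])
print(eer_from_scores(scores, labels))
```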
Could anyone explain why the score values differ between runs?
Here is an example of the different score values I obtained:
1st test run: [scores attached]
2nd test run: [scores attached]
3rd test run: [scores attached]
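For what it's worth, one thing I plan to try is fixing every random seed and disabling cuDNN autotuning before evaluation, to rule out framework-level non-determinism. A minimal sketch, assuming a standard PyTorch setup (the function name set_deterministic is mine, not from the repository):

```python
# Rough sketch of forcing deterministic evaluation in PyTorch before
# extracting scores; names here are generic, not the repo's actual code.
import random
import numpy as np
import torch

def set_deterministic(seed=1234):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # cuDNN autotuning can pick different kernels per run; disable it.
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True

set_deterministic()
# model.eval() also matters: dropout and batch norm behave differently
# (and dropout randomly) if the model is accidentally left in train mode.
```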
Thank you.