Describe the bug
In reviewing the code for calculate_result for QuestionAnsweringModel, I saw this line:
simpletransformers/simpletransformers/question_answering/question_answering_model.py, line 1416 (commit 76d9801)
which made me think that for evaluation we're only considering the first correct answer. However, according to the docs here
https://simpletransformers.ai/docs/qa-data-formats/#evaluation-data-format
it looks like there can be multiple correct answers for a given question. For example:
{
    "id": "00001",
    "is_impossible": False,
    "question": "Where does the series take place?",
    "answers": [
        {
            "text": "region called the Final Empire",
            "answer_start": 38,
        },
        {
            "text": "world called Scadrial",
            "answer_start": 74,
        },
    ],
}
Am I interpreting the code correctly? Is this a bug?
To Reproduce
N/A
Expected behavior
Take "maximum" across right answers -- i.e., compare given answer to each of right answers, and if any one is correct, then correct; else, if any one is similar, then similar; else incorrect.
Screenshots
N/A
Desktop (please complete the following information):
N/A
Additional context
N/A