
RE metric #14

Open
btaille opened this issue Apr 25, 2019 · 2 comments

Comments

btaille commented Apr 25, 2019

Hello Victor,

Concerning the RE metric, do you consider a relation correct when the full boundaries of both heads are correctly detected, or only the last token of each head? And do the predicted entity types of both heads need to be correct for the relation to be correct?

VictorSanh commented Apr 25, 2019

Hello @btaille,
For RE metrics, a relation is considered correct when 3 conditions are met:

  • head 1 is correctly detected (last token)
  • head 2 is correctly detected (last token)
  • relation between these heads is correctly predicted

We don't predict the entity types of the heads involved in the relation, even though doing so might be interesting: the model might be able to learn the constraints/high correlation between some relation types and entity types.
Victor
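
For concreteness, here is a minimal sketch of the scoring scheme described above, assuming each relation is encoded as a (last token of head 1, last token of head 2, relation type) triple; the representation and the example labels are hypothetical, not the repository's actual evaluation code:

```python
# Hypothetical sketch of the RE scoring described above (not the repository's code).
# A relation is a (last token index of head 1, last token index of head 2, relation type) triple.

def relation_f1(gold_relations, pred_relations):
    """Micro-averaged precision / recall / F1 over relation triples."""
    gold, pred = set(gold_relations), set(pred_relations)
    # A predicted relation counts as correct only if both last tokens and the relation type match.
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: one gold relation is recovered exactly; the other has the wrong last token for head 1.
gold = [(3, 7, "Work_For"), (3, 10, "Live_In")]
pred = [(3, 7, "Work_For"), (4, 10, "Live_In")]
print(relation_f1(gold, pred))  # (0.5, 0.5, 0.5)
```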

btaille commented Apr 25, 2019

Thanks for your answer.

In this case, I think you cannot directly compare your RE results to (Li and Ji 2014), (Miwa and Bansal 2016), and (Katiyar and Cardie 2017).

(Bekoulis 2018) defines 3 evaluation types:

  1. Strict: an entity is considered correct if the boundaries and the type of the entity are both correct; a relation is correct when the type of the relation and the argument entities are both correct,
  2. Boundaries: an entity is considered correct if only the boundaries of the entity are correct (entity type is not considered); a relation is correct when the type of the relation and the argument entities are both correct and
  3. Relaxed: we score a multi-token entity as correct if at least one of its comprising token types is correct assuming that the boundaries are given; a relation is correct when the type of the relation and the argument entities are both correct.

As far as I understand, (Miwa and Bansal 2016) and (Katiyar and Cardie 2017) use Strict evaluation, and (Li and Ji 2014) uses Boundaries. Your evaluation would lie somewhere between Boundaries and Relaxed.
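
To make the difference concrete, here is a minimal sketch of Strict vs. Boundaries entity matching as defined above; the spans and types are invented for illustration and this is not taken from either codebase:

```python
# Illustrative comparison of the "Strict" and "Boundaries" settings (made-up spans and types).

def entity_match(gold_ent, pred_ent, mode="strict"):
    """Each entity is ((start, end), type) with token offsets; end is exclusive."""
    (g_span, g_type), (p_span, p_type) = gold_ent, pred_ent
    if mode == "strict":       # boundaries AND entity type must both be correct
        return g_span == p_span and g_type == p_type
    if mode == "boundaries":   # only the boundaries are checked; the entity type is ignored
        return g_span == p_span
    raise ValueError(f"unknown mode: {mode}")

gold = [((0, 2), "PER"), ((5, 6), "ORG")]
pred = [((0, 2), "LOC"), ((5, 6), "ORG")]  # first span is right, but its type is wrong

print([entity_match(g, p, "strict") for g, p in zip(gold, pred)])      # [False, True]
print([entity_match(g, p, "boundaries") for g, p in zip(gold, pred)])  # [True, True]
```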
