
Entity Linking Benchmark for QA (R2)


Our second group of contributions consists of two resources:

  • R2.1 A QALD-based Benchmark Component for EL Tasks: To evaluate different EL strategies in QA, we created a benchmark based on QALD and provide a corresponding benchmark component. This enables QA researchers to rapidly compare new NER/NED approaches. The benchmark component can be found at: https://github.com/WDAqua/Qanary/tree/master/qald-evaluator

If researchers have a new EL tool, they can integrate it as a component in the Qanary pipeline and evaluate its performance and applicability w.r.t. the question answering domain using our QALD-based benchmark component; a rough sketch of this workflow follows below.
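
The following Python sketch illustrates the idea of running a single QALD-style question through a locally running Qanary pipeline with a chosen list of EL components. It is not the official evaluator: the endpoint path, parameter names, port, and component name are assumptions; please consult the Qanary pipeline and qald-evaluator documentation for the actual interface.

```python
# Minimal sketch (not the official qald-evaluator): send one QALD-style
# question to a locally running Qanary pipeline together with the EL
# components to be benchmarked.
# NOTE: the endpoint path, parameter names, port, and component name below
# are assumptions; check the Qanary documentation for the exact interface.
import requests

PIPELINE = "http://localhost:8080/startquestionansweringwithtextquestion"  # assumed endpoint
COMPONENTS = ["NED-DBpediaSpotlight"]  # assumed name of a component registered in the pipeline

response = requests.post(
    PIPELINE,
    data={
        "question": "Who is the mayor of Paris?",  # example QALD-style question
        "componentlist[]": COMPONENTS,             # assumed parameter name
    },
)
response.raise_for_status()

# The pipeline is expected to return a reference to the question's graph in
# the triplestore, from which the produced annotations can then be queried
# and compared against the QALD gold standard.
print(response.json())
```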

  • R2.2 Dataset of Annotated Questions for Processing in QA Systems: Here, we present a new dataset consisting of questions from the QALD-6 benchmark that are completely annotated with disambiguated named entities (DBpedia resource URIs), computed by applying our benchmark to the EL configuration (we performed the evaluation by integrating R1.2-R1.8 into the QA pipeline R1.9). The dataset contains 267 of the 350 questions in the QALD-6 training set, because the components were not able to annotate the remaining ones. A Turtle file representing the results in terms of the qa vocabulary can be found at: https://github.com/WDAqua/Qanary/tree/master/SWJ-results/completly_annotated_questions (a loading sketch follows below).
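
The Turtle file can be processed with any RDF library. The snippet below is a minimal sketch using rdflib that parses the file and collects the DBpedia resource URIs it refers to; the local file name is an assumption (download the file from the SWJ-results directory first), and the exact qa-vocabulary annotation structure should be inspected in the file itself.

```python
# Minimal sketch: load the annotated-questions Turtle file and list the
# DBpedia resources it mentions. The local file name is an assumption;
# inspect the file itself for the exact qa-vocabulary annotation structure.
from rdflib import Graph, URIRef

g = Graph()
g.parse("completly_annotated_questions.ttl", format="turtle")  # assumed local file name

# Collect every object URI from the DBpedia resource namespace.
dbpedia_resources = {
    str(o)
    for _, _, o in g
    if isinstance(o, URIRef) and str(o).startswith("http://dbpedia.org/resource/")
}

print(f"{len(g)} triples, {len(dbpedia_resources)} distinct DBpedia resources")
for uri in sorted(dbpedia_resources):
    print(uri)
```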