
QALD-9 Plus: a benchmark for KGQA in multiple languages

GitHub

https://github.com/Perevalov/qald_9_plus

YouTube presentation

https://www.youtube.com/watch?v=W1w7CJTV48c

Paper

https://arxiv.org/pdf/2202.00120v2.pdf

16th IEEE International Conference on SEMANTIC COMPUTING - ICSC 2022
January 26-28, 2022
Virtual

QALD-9-plus: A Multilingual Dataset for Question Answering over DBpedia and Wikidata Translated by Native Speakers

The paper describes the construction of a KGQA/KBQA benchmark dataset based on the original QALD-9 dataset, whose English questions had been automatically translated into other languages with language models.
The authors employed "workers" (native-speaker crowdworkers) to translate the questions and answers into other languages.

The original QALD-9 targets DBpedia, and "51 of the DBpedia queries were not transferable to Wikidata due to lack of corresponding data."
KGs are incomplete; or rather, each covers different aspects of the world.

On other benchmark datasets


QALD-9 contains 558 questions incorporating information of the DBpedia knowledge base where for each question the following is given: a textual representation in multiple languages, the corresponding SPARQL query (over DBpedia), the answer entity URI, and the answer type.

[Most widely used]

RuBQ 2.0 is a KGQA dataset over Wikidata that contains 2910 questions. The questions are represented in native Russian language and machine-translated to English language. Additionally, it contains a list of entities, relations, answer entities, SPARQL queries, answer-bearing paragraphs, and query type tags.

[Originally in Russian, machine-translated into English]

CWQ is a recently published KGQA dataset over Wikidata that is based on CFQ data [14]. CWQ contains questions in Hebrew, Kannada, Chinese, and English languages.

[Look into this one: Compositional Freebase Questions (CFQ)]
[[14] D. Keysers, N. Schärli, N. Scales, H. Buisman, D. Furrer, S. Kashubin, N. Momchev, D. Sinopalnikov, L. Stafiniak, T. Tihon et al., “Measuring compositional generalization: A comprehensive method on realistic data,” arXiv preprint arXiv:1912.09713, 2019.]

Native-speaker workers had difficulty understanding the machine-translated questions in their own language.
Some questions in the dataset were ambiguous for human interpretation (e.g., "How often" vs. "How many times").

Although the considered benchmarks contain questions in multiple languages, these multilingual representations were either machine-translated (RuBQ 2.0, CWQ) or of doubtful quality (QALD-9, see Section III-A).

Presentation slides

https://drive.google.com/file/d/1cDphq4DeSiZr-WBvdwu34rcxQ0aP4q95/view

 

Related Work — Multilingual Knowledge Graph Question Answering

[Examples of systems that propose to do KGQA / KBQA in multiple languages]

Multilingual KGQA systems from past decade:
- Low number of systems;
- Not all of them are available;
- No systematic review.

[1] A. Freitas, J. G. Oliveira, S. O’Riain, E. Curry, and J. C. P. Da Silva, “Querying linked data using semantic relatedness: A vocabulary independent approach”, 2011.
[2] O. Ferrandez, C. Spurk, M. Kouylekov, I. Dornescu, S. Ferrandez, M. Negri, R. Izquierdo, D. Tomas, C. Orasan, G. Neumann, B. Magnini, and J. L. Vicedo, “The QALL-ME Framework: A specifiable-domain multilingual question answering architecture”, 2011.
[3] N. Aggarwal, “Cross lingual semantic search by improving semantic similarity and relatedness measures”, 2012.
[4] L. Zhang, M. Farber, and A. Rettinger, “Xknowsearch! Exploiting knowledge bases for entity-based cross-lingual information retrieval”, 2016.
[5] A. Pouran Ben Veyseh, “Cross-lingual question answering using common semantic space”, 2016.
[6] L. Zhang, M. Acosta, M. Farber, S. Thoma, and A. Rettinger, “Brexearch: Exploring brexit data using cross-lingual and cross-media semantic search”, 2017.
[7] M. Burtsev, A. Seliverstov, R. Airapetyan, M. Arkhipov, D. Baymurzina, N. Bushkov, O. Gureenkova, T. Khakhulin, Y. Kuratov, D. Kuznetsov, A. Litinsky, V. Logacheva, A. Lymar, V. Malykh, M. Petrov, V. Polulyakh, L. Pugachev, A. Sorokin, M. Vikhreva, and M. Zaynutdinov, “DeepPavlov: Open-source library for dialogue systems”, 2018.
[8] T. P. Tanon, M. D. de Assuncao, E. Caron, and F. M. Suchanek, “Demoing platypus – a multilingual question answering platform for Wikidata”, 2018.
[9] D. Diefenbach, A. Both, K. Singh, and P. Maret, “Towards a question answering system over the semantic web”, 2020.
[10] Y. Zhou, X. Geng, T. Shen, W. Zhang, and D. Jiang, “Improving zero-shot cross-lingual transfer for multilingual question answering over knowledge graph”, 2021.

How to analyze the benchmarks (QALD-9 and CWQ) for my research:

1) Which questions involve contextual dimensions?
When? Where? Currently valid?

2) Is the context present explicitly or implicitly?
Verb tense: present tense vs. TODAY, CURRENTLY

3) Is there a level of granularity that matches the exact context?

“Where was Nelson Mandela born?” vs. “Where was Nelson Mandela born in South Africa?”
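Questions 1) and 2) above could be bootstrapped with a crude keyword heuristic: flag questions that carry an overt temporal marker ("currently", "now", ...) as explicit context, and treat bare present-tense questions as implicit. A minimal sketch; the marker list is my own assumption, not something from the paper:

```python
import re

# Heuristic split between explicit and implicit temporal context.
# The marker list below is an assumption for illustration only.
EXPLICIT_MARKERS = {"currently", "now", "today", "nowadays"}

def temporal_context(question: str) -> str:
    """Label a question 'explicit' if it carries an overt temporal marker."""
    tokens = set(re.findall(r"[a-z]+", question.lower()))
    return "explicit" if tokens & EXPLICIT_MARKERS else "implicit"

print(temporal_context("Who is currently the mayor of Berlin?"))  # explicit
print(temporal_context("Who is the mayor of Berlin?"))            # implicit
```

A real analysis would need POS tagging or tense detection per language, but this is enough to triage a benchmark file quickly.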

Note: every question in these benchmarks has an answer. The answers are exact and unordered; there is no notion of relevance.
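Because the gold answers are exact, unordered sets, KGQA evaluation typically reduces to set comparison per question (precision/recall/F1), with no rank-based metrics. A minimal sketch of that metric (the empty-vs-empty convention varies between QALD editions):

```python
def precision_recall_f1(predicted, gold):
    """Set-based scores for one question: answers are unordered and exact."""
    predicted, gold = set(predicted), set(gold)
    if not predicted and not gold:
        return 1.0, 1.0, 1.0          # one common convention: empty matches empty
    if not predicted or not gold:
        return 0.0, 0.0, 0.0
    hits = len(predicted & gold)
    p = hits / len(predicted)
    r = hits / len(gold)
    f1 = 2 * p * r / (p + r) if hits else 0.0
    return p, r, f1

print(precision_recall_f1({"Q8023"}, {"Q8023"}))          # (1.0, 1.0, 1.0)
print(precision_recall_f1({"Q8023", "Q64"}, {"Q8023"}))   # (0.5, 1.0, ...)
```

This is exactly why the benchmarks carry no notion of relevance: a predicted entity either is in the gold set or it is not.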

QALD-9

https://raw.githubusercontent.com/Perevalov/qald_9_plus/main/data/qald_9_plus_train_wikidata.json
https://raw.githubusercontent.com/Perevalov/qald_9_plus/main/data/qald_9_plus_test_wikidata.json
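A sketch of how these JSON files can be inspected locally, assuming they follow the usual QALD schema (a top-level `questions` list; each entry with multilingual `question` strings, a `query.sparql` over Wikidata, and gold `answers`). The inline sample below is illustrative, not copied from the files:

```python
import json

# Illustrative sample, assuming the QALD JSON schema described above.
# wd:Q8023 is Nelson Mandela; wdt:P19 is "place of birth".
sample = json.loads("""
{
  "questions": [
    {
      "id": "1",
      "question": [
        {"language": "en", "string": "Where was Nelson Mandela born?"},
        {"language": "ru", "string": "Где родился Нельсон Мандела?"}
      ],
      "query": {"sparql": "SELECT ?o WHERE { wd:Q8023 wdt:P19 ?o }"},
      "answers": []
    }
  ]
}
""")

def questions_in(entry, language="en"):
    """Return the textual representations of one entry in the given language."""
    return [q["string"] for q in entry["question"] if q["language"] == language]

entries = sample["questions"]
print(len(entries))                      # number of benchmark questions
print(questions_in(entries[0], "en"))    # ['Where was Nelson Mandela born?']
print(questions_in(entries[0], "ru"))
```

The same two functions work unchanged on the downloaded train/test files once they are loaded with `json.load`.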

Do text-based QA approaches return ranked answers, or only the KwS (keyword search) ones?
