
QALD-9 Plus: A benchmark for KGQA in multiple languages

GitHub

https://github.com/Perevalov/qald_9_plus

Presentation on YouTube

https://www.youtube.com/watch?v=W1w7CJTV48c

Paper

https://arxiv.org/pdf/2202.00120v2.pdf

16th IEEE International Conference on SEMANTIC COMPUTING - ICSC 2022
January 26-28, 2022
Virtual

QALD-9-plus: A Multilingual Dataset for Question Answering over DBpedia and Wikidata Translated by Native Speakers

The paper describes the creation of a benchmark dataset for KGQA/KBQA, built on the original QALD-9 dataset, whose English questions had been automatically translated into other languages with language models.
For QALD-9-plus, crowdworkers (native speakers) translated the questions and answers into other languages.

The original QALD-9 targets DBpedia, and "51 of the DBpedia queries were not transferable to Wikidata due to lack of corresponding data."
The KGs are incomplete, or rather, they cover different aspects of the world.
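
As an illustration (my own sketch, not from the paper) of what porting a query between the two graphs involves: the example question used further below, "Where was Nelson Mandela born?", has to be expressed in each graph's own vocabulary. The identifiers are the standard ones (dbr:Nelson_Mandela / dbo:birthPlace on DBpedia, Q8023 / P19 on Wikidata), and the public SPARQL endpoints are assumed to be reachable.

import requests

# DBpedia and Wikidata model the same fact with different vocabularies,
# so the "same" gold query must be rewritten when moving between KGs.
DBPEDIA_QUERY = """
PREFIX dbr: <http://dbpedia.org/resource/>
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?place WHERE { dbr:Nelson_Mandela dbo:birthPlace ?place . }
"""

# Q8023 = Nelson Mandela, P19 = place of birth (standard Wikidata identifiers).
WIKIDATA_QUERY = """
PREFIX wd:  <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
SELECT ?place WHERE { wd:Q8023 wdt:P19 ?place . }
"""

def select(endpoint: str, query: str) -> list:
    """Run a SELECT query against a public SPARQL endpoint and return its bindings."""
    resp = requests.get(
        endpoint,
        params={"query": query},
        headers={"Accept": "application/sparql-results+json",
                 "User-Agent": "qald-notes-sketch/0.1"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["results"]["bindings"]

for row in select("https://dbpedia.org/sparql", DBPEDIA_QUERY):
    print("DBpedia :", row["place"]["value"])
for row in select("https://query.wikidata.org/sparql", WIKIDATA_QUERY):
    print("Wikidata:", row["place"]["value"])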

About other benchmark datasets


QALD-9 contains 558 questions incorporating information of the DBpedia knowledge base where for each question the following is given: a textual representation in multiple languages, the corresponding SPARQL query (over DBpedia), the answer entity URI, and the answer type.

[Most widely used]
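
For reference, the general shape of one entry, following the usual QALD JSON conventions (a sketch with illustrative values; the exact fields should be checked against the data files linked further below):

example_entry = {
    "id": "1",
    "answertype": "resource",                 # expected answer type
    "question": [                             # textual representation per language
        {"language": "en", "string": "Where was Nelson Mandela born?"},
        {"language": "de", "string": "Wo wurde Nelson Mandela geboren?"},
    ],
    "query": {                                # gold SPARQL query over the target KG
        "sparql": "SELECT ?place WHERE { ... }"
    },
    "answers": [{                             # gold answers as SPARQL JSON results (entity URIs/literals)
        "head": {"vars": ["place"]},
        "results": {"bindings": [
            {"place": {"type": "uri", "value": "http://dbpedia.org/resource/Mvezo"}}
        ]},
    }],
}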

RuBQ 2.0 is a KGQA dataset over Wikidata that contains 2910 questions. The questions are represented in native Russian language and machine-translated to English language. Additionally, it contains a list of entities, relations, answer entities, SPARQL queries, answer-bearing paragraphs, and query type tags.

[Originally in Russian and machine-translated into English]

CWQ is a recently published KGQA dataset over Wikidata that is based on CFQ data [14]. CWQ contains questions in Hebrew, Kannada, Chinese, and English languages.

[Look into this one: Compositional Freebase Questions (CFQ)]
[[14] D. Keysers, N. Schärli, N. Scales, H. Buisman, D. Furrer, S. Kashubin, N. Momchev, D. Sinopalnikov, L. Stafiniak, T. Tihon et al., "Measuring compositional generalization: A comprehensive method on realistic data," arXiv preprint arXiv:1912.09713, 2019.]

Native-speaker crowdworkers had difficulty understanding the machine-translated questions in their own language.
Some questions in the dataset were ambiguous even for human interpretation ("How often" vs. "How many times").

Despite the considered benchmarks contain questions in multiple languages, these multilingual representations were either machine-translated (RuBQ 2.0, CWQ) or have doubtful quality (QALD-9, see Section III-A).

Presentation slides

https://drive.google.com/file/d/1cDphq4DeSiZr-WBvdwu34rcxQ0aP4q95/view

 

Related Work — Multilingual Knowledge Graph Question Answering

[Examples of systems that aim to do KGQA / KBQA in multiple languages]

Multilingual KGQA systems from the past decade:
- Low number of systems;
- Not all of them are available;
- No systematic review.

[1] A. Freitas, J. G. Oliveira, S. O’Riain, E. Curry, and J. C. P. Da Silva, “Querying linked data using semantic relatedness: A vocabulary independent approach”, 2011.
[2] O. Ferrandez, C. Spurk, M. Kouylekov, I. Dornescu, S. Ferrandez, M. Negri, R. Izquierdo, D. Tomas, C. Orasan, G. Neumann, B. Magnini, and J. L. Vicedo, “The QALL-ME Framework: A specifiable-domain multilingual question answering architecture”, 2011.
[3] N. Aggarwal, “Cross lingual semantic search by improving semantic similarity and relatedness measures”, 2012.
[4] L. Zhang, M. Farber, and A. Rettinger, “Xknowsearch! Exploiting knowledge bases for entity-based cross-lingual information retrieval”, 2016.
[5] A. Pouran Ben Veyseh, “Cross-lingual question answering using common semantic space”, 2016.
[6] L. Zhang, M. Acosta, M. Farber, S. Thoma, and A. Rettinger, “Brexearch: Exploring brexit data using cross-lingual and cross-media semantic search”, 2017.
[7] M. Burtsev, A. Seliverstov, R. Airapetyan, M. Arkhipov, D. Baymurzina, N. Bushkov, O. Gureenkova, T. Khakhulin, Y. Kuratov, D. Kuznetsov, A. Litinsky, V. Logacheva, A. Lymar, V. Malykh, M. Petrov, V. Polulyakh, L. Pugachev, A. Sorokin, M. Vikhreva, and M. Zaynutdinov, “DeepPavlov: Open-source library for dialogue systems”, 2018.
[8] T. P. Tanon, M. D. de Assuncao, E. Caron, and F. M. Suchanek, “Demoing platypus – a multilingual question answering platform for Wikidata”, 2018.
[9] D. Diefenbach, A. Both, K. Singh, and P. Maret, “Towards a question answering system over the semantic web”, 2020.
[10] Y. Zhou, X. Geng, T. Shen, W. Zhang, and D. Jiang, “Improving zero-shot cross-lingual transfer for multilingual question answering over knowledge graph”, 2021.

How to analyze the benchmarks (QALD-9 and CWQ) for my research (a rough marker-scanning sketch follows the list below):

1) Which questions involve contextual dimensions?
When? Where? Currently in force?

2) Is the context present explicitly or implicitly?
Verb tense: present tense vs. TODAY, CURRENTLY

3) Is there a level of granularity that matches the exact context?

“Where was Nelson Mandela born?” X “Where was Nelson Mandela born in South Africa?”

Note: all questions in the benchmarks have answers. The answers are exact and unranked; there is no notion of relevance.
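
A rough sketch of the scan intended above (my own idea, not from the paper): classify questions by whether their temporal context is explicit (TODAY, CURRENTLY, a year) or only implicit in the verb tense. The marker list is a hypothetical starting point.

import re

# Hypothetical starting list of explicit temporal-context markers; to be refined.
EXPLICIT_MARKERS = re.compile(
    r"\b(currently|today|now|as of|since|until|in \d{4})\b",
    re.IGNORECASE,
)

def has_explicit_context(question: str) -> bool:
    """True if the question names its temporal context explicitly."""
    return bool(EXPLICIT_MARKERS.search(question))

samples = [
    "Who is the mayor of Berlin?",              # implicit: present tense only
    "Who is currently the mayor of Berlin?",    # explicit marker: "currently"
    "Who was the mayor of Berlin in 2015?",     # explicit marker: a year
]
for s in samples:
    print(has_explicit_context(s), "-", s)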

QALD-9-plus (Wikidata data files)

https://raw.githubusercontent.com/Perevalov/qald_9_plus/main/data/qald_9_plus_train_wikidata.json
https://raw.githubusercontent.com/Perevalov/qald_9_plus/main/data/qald_9_plus_test_wikidata.json
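
A hedged loader sketch for these files, assuming the usual QALD JSON layout (a top-level "questions" list, per-language question strings, and a gold SPARQL query per entry). The same pass could feed the context-marker scan sketched earlier.

import json
import urllib.request

TRAIN_URL = ("https://raw.githubusercontent.com/Perevalov/qald_9_plus/"
             "main/data/qald_9_plus_train_wikidata.json")

with urllib.request.urlopen(TRAIN_URL) as fh:
    data = json.load(fh)

print(len(data["questions"]), "questions in the train split")
for q in data["questions"][:3]:
    # pick the English textual representation, if present
    english = [t["string"] for t in q["question"] if t.get("language") == "en"]
    print("Q:", english[0] if english else "(no English text)")
    print("SPARQL:", " ".join(q["query"]["sparql"].split())[:100], "...")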

Do text-based QA approaches return ranked answers? Or only the KwS ones?
