
Title: "Text Representation, Retrieval, and Understanding with Knowledge Graphs" - Ai2

Video -> https://youtu.be/ZyYec3X4pkY

Abstract: Search engines and other information systems have started to evolve from retrieving documents to providing more intelligent information access. However, the evolution is still in its infancy due to computers’ limited ability in representing and understanding human language. This talk will present my work addressing these challenges with knowledge graphs. 

The first part is about utilizing entities from knowledge graphs to improve search. I will discuss how we build better text representations with entities and how the entity-based text representations improve text retrieval. 

The second part is about better text understanding through modeling entity salience (importance), as well as how the improved text understanding helps search under both feature-based and neural ranking settings. This talk concludes with future directions towards the next generation of intelligent information systems.

 Bag of Words

 

Limitations of bag-of-words: vocabulary mismatch, shallow understanding, and writing good queries requires knowledge (content-oriented).
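A minimal sketch of the vocabulary-mismatch problem (the example texts are my own): two texts about the same topic share no words, so their bag-of-words similarity is zero.

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words representation: term -> frequency."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency bags."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

query = bow("heart attack symptoms")
doc = bow("signs of myocardial infarction")
print(cosine(query, doc))  # 0.0: same topic, zero word overlap
```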

  • KG (Structured Semantics) and Semantics: Entity-oriented search (Entity retrieval) + Semantic Search

node = entity (concrete or abstract), with attributes

Entity linking between Query and KG entities. 

Queries and documents are represented as bags-of-entities, and matching occurs in the entity space.
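A sketch of bag-of-entities matching, assuming a toy dictionary-based entity linker with made-up entity ids: once both surface forms link to the same KG entity, the texts that shared no words now match exactly in entity space.

```python
# Hypothetical entity linker: surface form -> KG entity id (illustrative ids).
LINKER = {
    "heart attack": "E_MI",
    "myocardial infarction": "E_MI",
    "symptoms": "E_SYMPTOM",
    "signs": "E_SYMPTOM",
}

def bag_of_entities(text):
    """Link surface forms to KG entities; represent the text in entity space."""
    text = text.lower()
    return {eid for surface, eid in LINKER.items() if surface in text}

q = bag_of_entities("heart attack symptoms")
d = bag_of_entities("signs of myocardial infarction")
print(q & d)  # exact match now succeeds in the entity space
```

A real linker (e.g. TagMe or a dictionary over Wikipedia anchors) also disambiguates; the lookup table here only illustrates the representation shift.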

Move from words to entities. 

Sparse graphs: similar entities may not be connected in the KG.

Connect all entities by their similarity in the embedding space.

Soft Match (embeddings) vs. Exact Match (words or entities)

Ranking Performance

Search is usually for one or more concepts; search for relations is more common in Q&A.

 

 Lessons learned: combined approaches

  • Large Scale Text Understanding: Entity salience (ranking)

Bag-of-Words vs. Bag-of-Entities: both are still sets of individual items (a bag of things), i.e., shallow understanding.


 

More than counting the frequency of "things": identify their importance, the central entities (centrality).
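A toy illustration of frequency vs. centrality (document and entities invented for the example): the most mentioned entity is not the one that ties the document together.

```python
from collections import Counter, defaultdict

# Hypothetical document: entity mentions plus an entity co-occurrence graph.
mentions = ["Hawaii", "Hawaii", "Hawaii", "Obama", "White House", "Congress"]
edges = [("Obama", "White House"), ("Obama", "Congress"), ("Obama", "Hawaii")]

freq = Counter(mentions)
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

most_frequent = max(freq, key=freq.get)     # "Hawaii": mentioned the most
most_central = max(degree, key=degree.get)  # "Obama": connected to everything
print(most_frequent, most_central)
```

Degree centrality is the crudest proxy; the salience models in the talk learn importance from richer features, but the intuition is the same.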

 

Hubness problem of semantic similarity; see paper: [Xu et al. 2015]

In a very high-dimensional embedding space, anything that is not similar sits at approximately the same distance. (To reduce information overload, this may not be a problem.)
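The distance concentration behind this effect can be checked with a quick experiment (my own sketch, random Gaussian vectors standing in for embeddings): in 1000 dimensions, the distances from one point to many random others all cluster tightly around the same value.

```python
import math
import random

random.seed(0)
DIM = 1000

def rand_vec():
    """Random Gaussian vector, a stand-in for a high-dimensional embedding."""
    return [random.gauss(0, 1) for _ in range(DIM)]

base = rand_vec()
dists = [
    math.sqrt(sum((a - b) ** 2 for a, b in zip(base, rand_vec())))
    for _ in range(50)
]

# Relative spread of distances: small, i.e. everything not similar to `base`
# is nearly equidistant from it.
spread = (max(dists) - min(dists)) / (sum(dists) / len(dists))
print(spread < 0.25)  # True
```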

 

Separate similarity into several different ranges to model how entities are connected (not only similar): related, unrelated, conflicting.
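The idea of similarity ranges as a minimal sketch (the thresholds are illustrative, not from the talk):

```python
def relation_range(sim):
    """Map a similarity score into a coarse relation type.
    Thresholds are illustrative only."""
    if sim >= 0.5:
        return "related"
    if sim > -0.5:
        return "unrelated"
    return "conflict"

print([relation_range(s) for s in (0.9, 0.0, -0.8)])
# ['related', 'unrelated', 'conflict']
```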

 

NOTE: I did not understand the calculation formula.

Similarity != Relevance


 

  • Deep Learning
Match word embeddings over bigrams and trigrams, and learn the ranking score.
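A simplified sketch of this matching-then-ranking idea, loosely in the spirit of kernel pooling over embedding similarities (the bins, weights, and toy vectors are all my own illustrations, not the talk's actual model): pool the query-document similarity matrix into per-range counts, then score with learned weights.

```python
import math

def cos(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def match_features(query_vecs, doc_vecs, bins=(0.9, 0.5, 0.0)):
    """Pool the query-document similarity matrix into per-range counts.
    Bins are illustrative; kernel pooling uses soft Gaussian kernels instead."""
    feats = [0] * (len(bins) + 1)
    for qv in query_vecs:
        for dv in doc_vecs:
            s = cos(qv, dv)
            for i, lo in enumerate(bins):
                if s >= lo:
                    feats[i] += 1
                    break
            else:
                feats[-1] += 1
    return feats

# A learned ranker maps these features to a score; fixed weights stand in here.
WEIGHTS = [3.0, 1.0, 0.1, -0.5]

def score(feats):
    return sum(w * f for w, f in zip(WEIGHTS, feats))

q = [(1.0, 0.0)]
d = [(1.0, 0.0), (0.0, 1.0)]
print(match_features(q, d))         # [1, 0, 1, 0]
print(score(match_features(q, d)))  # 3.1
```

In the neural models the weights (and, with n-grams, the convolution filters that produce the vectors) are learned end-to-end from relevance labels.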

 

The more a term contributes to the match, the more relevant it is.

Ranking is learned from Document, Query and Entity embeddings

 

Combination of Entity Retrieval and Information Retrieval

Semantics: structured (KG) and distributed (embeddings)
