
Context-Aware Temporal Knowledge Graph Embedding - Paper Reading Notes

Yu Liu, Wen Hua, Kexuan Xin, and Xiaofang Zhou. 2020. Context-Aware Temporal Knowledge Graph Embedding. In Web Information Systems Engineering – WISE 2019: 20th International Conference, Hong Kong, China, January 19–22, 2020, Proceedings. Springer-Verlag, Berlin, Heidelberg, 583–598. https://doi.org/10.1007/978-3-030-34223-4_37

Abstract

Knowledge graph embedding (KGE) is an important technique used for knowledge graph completion (KGC). However, knowledge in practice is time-variant and many relations are only valid for a certain period of time. This phenomenon highlights the importance of temporal knowledge graph embeddings.

[Consider the temporal context when judging the validity of relationships]

Currently, existing temporal KGE methods only focus on one aspect of facts, i.e., the factual plausibility, while ignoring the other aspect, i.e., the temporal consistency. Temporal consistency models the interactions between a fact and its contexts, and thus is able to capture fine-grained temporal relationships, such as temporal order, temporal distance, and overlap.

[Additional relationships between the statements as a function of temporal context]

In order to determine the useful contexts for the fact to be predicted, we propose a two-way strategy for context selection. In particular, we decompose the target fact into two parts, relation and entities, and measure the usefulness of a context for each part respectively. Furthermore, we adopt deep neural networks to encode contexts and score the temporal consistency. This consistency is used together with factual plausibility to model a fact. Due to the incorporation of temporal information and the interactions between facts and contexts, our model learns more representative embeddings for temporal KGs.

[The task is to complete the graph by predicting relationships based on temporal context, and also to predict the time period when it is missing]

We conduct extensive experiments on real world datasets and the experimental results verify the effectiveness of our proposals.

1 Introduction

However, not all previous facts stored in KGs can always keep valid over time because many relations are time-sensitive and only valid for a certain time period.

[Validity depends on the temporal aspect]

... a few models have been built on temporal KGE [2–4, 10, 14, 17, 29] where the timestamps of facts are simply projected into the entities and/or relations [2, 4, 14], or incorporated into entities via neural networks [3, 4, 29]. Unfortunately, most of them still focus on designing a new factual-plausibility score function on entities and relations, while paying less attention to the reasonability of the temporal information, which results in massive numbers of inaccurate and contradictory predictions.

[The problem with current approaches]

To address this issue, in this paper, we start from a new direction: modeling the contextual interactions of the target fact with its related contexts. Here, we regard all the facts that share a certain component (i.e., head entity or tail entity) with the target fact as the contexts of the fact. Then we propose a temporal KGE model that measures a predicted fact from two aspects: the fact itself and the interactions with its contexts. That is, a fact is valid only if (1) it is composed of a plausible head entity, relation and tail entity; (2) the fact is consistent with its surrounding contexts temporally and semantically.

[Temporal is not the context here; is the context still syntactic? Edge selection is done by deep learning, not by paths or rules. This context is learned and has latent features]

... (1) Recall that the first step taken by human is to select the contexts that are useful to make the conclusion because not all contexts are useful for the target prediction and some can be even misleading. ... (2) There are many different kinds of temporal relationships and interactions, e.g., temporal orders, intersections and distances.

[Within the temporal aspect there are different relationships between the statements]

To deal with the above challenges, we propose a context-aware KGE model composed of three main modules: (1) a context selection module; (2) a temporal consistency measure module; (3) a contextual consistency measure module. In particular, we introduce a relation-entity-aware mechanism to determine the usefulness of each context with respect to the prediction of a target fact.

[How are contexts selected? They are learned.]

2 Related work

2.1 Traditional KG embedding

2.2 Temporal KG embedding

As temporal KGE is a very recent line of research, only a few works incorporate temporal information into knowledge graph embedding [2–4, 10, 14, 29]. Based on how they incorporate time information, we divide them into three categories. First, some works [3, 29] incorporate timestamps into the entities (i.e., head entity and tail entity) of a triplet and model the involvement of entities over time, either using a time-specific matrix [3] or using neural networks [29].

Some works [4, 14] study various strategies to directly combine the relation embedding and the time embedding, such as concatenation, sum, or dot-product operations.
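As a minimal sketch (not any paper's actual code), the three combination strategies can be illustrated with NumPy vectors; `r` and `tau` are hypothetical relation and time embeddings, and the "dot product" is taken here as the element-wise (Hadamard) product commonly used in embedding models:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # illustrative embedding dimension

r = rng.normal(size=d)    # hypothetical relation embedding
tau = rng.normal(size=d)  # hypothetical time embedding

r_concat = np.concatenate([r, tau])  # concatenation -> 2d dimensions
r_sum = r + tau                      # element-wise sum -> d dimensions
r_prod = r * tau                     # Hadamard product  -> d dimensions
```

Note that concatenation doubles the dimensionality of the combined representation, while sum and product keep it unchanged.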

3 Problem statement

In this paper, we consider the task of learning the representations of a temporal knowledge graph via enforcing the consistency between contexts and valid temporal intervals. ... Our framework for temporal knowledge graph embedding can be defined as follows:

Definition 1 (Temporal Knowledge Graph) A temporal KG is defined as a directed graph G = (E, R, T) where (1) E is the set of entities (nodes); (2) R is the set of relations (edges); (3) T is the set of valid temporal intervals (labels).

[Time here consists of annotations on labels; times are not nodes of the graph (this could be represented in an LPG) and do not correspond to periods, eras, epochs, etc.]
[In the hyper-relational model, the temporal context has explicitly defined predicates (relations, properties, qualifiers) and literal values such as dates, or entities representing historical events such as World War I, the Ice Age, etc.]

Definition 2 (Fact) A fact is defined as the 4-tuple f = (h, r, t, τ), where h, t ∈ E are the head entity and the tail entity, respectively, r ∈ R is a relation between h and t, and τ = [τs, τe] ∈ T × T is the temporal interval during which (h, r, t) holds in the real world. Considering that some facts do not have the end time τe (i.e., they are still valid till now), we add a predefined padding label “[holding]” into the time label set T.

[Rule for handling the absence of an end marker]
[Not every temporal event has a start and an end; it can be an instantaneous snapshot]
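A possible data-structure sketch for Definition 2, with the "[holding]" padding label for facts that have no end time (the names and types are assumptions, not the paper's code):

```python
from typing import NamedTuple, Union

HOLDING = "[holding]"  # padding label for facts still valid today

class Fact(NamedTuple):
    head: str
    relation: str
    tail: str
    start: int            # tau_s
    end: Union[int, str]  # tau_e, or HOLDING when no end time is known

f1 = Fact("Cristiano Ronaldo", "playFor", "Real Madrid", 2009, 2018)
f2 = Fact("Cristiano Ronaldo", "citizenOf", "Portugal", 1985, HOLDING)
```

During training, the "[holding]" label is treated as just another member of the time label set T, so open-ended facts need no special casing.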

Definition 3 (Context) The context of a target entity e is defined as the aggregate set of facts Ce = {f1, · · · , fn} such that each fact fi contains e.
Example: (1) Fact (“Cristiano Ronaldo”, “playFor”, “Real Madrid”, “[2009,2018]”) is a context for entity “Cristiano Ronaldo”; (2) Fact (“Wayne Rooney”, “playFor”, “Manchester United”, “[2004,2017]”) is a context for entity “Manchester United”.

[The context of an entity is the set of all USEFUL triples it participates in. Usefulness is computed per fact.]
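Definition 3 can be sketched as a simple filter over the fact list (usefulness weighting comes later, in the context selection module):

```python
def contexts_of(entity, facts):
    """All facts in which `entity` appears as head or tail (Definition 3)."""
    return [f for f in facts if entity in (f[0], f[2])]

facts = [
    ("Cristiano Ronaldo", "playFor", "Real Madrid", (2009, 2018)),
    ("Wayne Rooney", "playFor", "Manchester United", (2004, 2017)),
    ("Cristiano Ronaldo", "playFor", "Manchester United", (2003, 2009)),
]
ctx = contexts_of("Manchester United", facts)  # the last two facts
```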

Definition 4 (Temporal Knowledge Graph Embedding) Temporal KGE is the task of learning the representations of a temporal knowledge graph G. In other words, the temporal KGE task aims to embed the entity set E, relation set R and time set T into a low-dimensional continuous vector space, say Rd. Generally, KGE enforces that the embeddings are compatible with observed facts.

4 Methodology

We can see that there are six modules, i.e., one embedding layer, two for the factual plausibility and three for the contextual interactions. Given a fact candidate f = (h, r, t, [τs , τe]), our framework predicts whether it holds true.

4.2.1 Context selection

Obviously, not all information about an entity is related to the target fact, and some can be noisy and misleading. Therefore, we first propose a module that measures the context usefulness for the target fact. Heuristically, contexts on different relations have different influences for a certain fact. For example, consider the target fact (“Cristiano Ronaldo”, “playFor”, “Manchester United”, [2003,2009]). Contexts about “Cristiano” in the soccer field are more useful than information about his personal life, such as facts on “spouseOf” or “hasChildren”. Likewise, information about “Manchester United”, such as facts on “locationOf”, “hasCapacity”, and “foundedOf”, is noisy for the target fact. Based on this observation, we calculate the relation-usefulness via deep convolutional neural networks (CNNs).

[Compute the usefulness of each triple the entity participates in]
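The paper computes relation-usefulness with a CNN; as a hedged stand-in, the sketch below scores each context relation by a dot product with the target relation's embedding (all embeddings here are random placeholders, not learned parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8

target_rel = rng.normal(size=d)  # e.g. embedding of "playFor"
context_rels = {
    "playFor": rng.normal(size=d),
    "spouseOf": rng.normal(size=d),
    "hasChildren": rng.normal(size=d),
}

# Usefulness proxy: higher score = more relevant context relation.
scores = {name: float(target_rel @ emb) for name, emb in context_rels.items()}
most_useful = max(scores, key=scores.get)
```

In the actual model, the score function is learned end-to-end, so the notion of "useful" is whatever helps predict the target fact.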

Besides, we also observe that useful contexts are able to distinguish similar entities. For example, contexts of “Cristiano Ronaldo” are more useful if they are able to distinguish “Cristiano” from other football players, e.g., “Lionel Messi” and “Zinedine Zidane”. Based on this observation, we propose another entity-usefulness function via a similar structure. Since entity embeddings essentially represent semantics, we regard the similar entities of a target entity as its nearest neighbours in the vector space.

[Compute the embeddings so that similar entities can be distinguished]
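Treating similar entities as nearest neighbours in the vector space can be sketched as follows (random embeddings, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
emb = {e: rng.normal(size=8)
       for e in ["Cristiano Ronaldo", "Lionel Messi",
                 "Zinedine Zidane", "Real Madrid"]}

def nearest_neighbours(target, emb, k=2):
    """k closest entities to `target` by Euclidean distance in the vector space."""
    others = [e for e in emb if e != target]
    return sorted(others, key=lambda e: np.linalg.norm(emb[e] - emb[target]))[:k]

nn = nearest_neighbours("Cristiano Ronaldo", emb)
```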

4.2.2 Temporal consistency

After obtaining the top-k useful head contexts C̃h and tail contexts C̃t, we now model the temporal interactions and calculate temporal consistency.
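The fine-grained temporal relationships mentioned in the abstract (order, distance, overlap) between the target interval and a context interval could be computed as, for instance:

```python
def temporal_features(target, context):
    """Order, gap, and overlap length between two closed intervals [s, e]."""
    (s1, e1), (s2, e2) = target, context
    order = -1 if e1 < s2 else (1 if e2 < s1 else 0)  # -1: target ends first
    gap = max(0, max(s1, s2) - min(e1, e2))           # temporal distance
    overlap = max(0, min(e1, e2) - max(s1, s2))       # intersection length
    return order, gap, overlap

# playFor Manchester United [2003,2009] vs playFor Real Madrid [2009,2018]
feats = temporal_features((2003, 2009), (2009, 2018))  # (0, 0, 0): adjacent
```

The paper encodes such interactions with deep neural networks rather than hand-crafted features; this sketch only illustrates what kinds of signals those networks can pick up.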

4.2.3 Contextual consistency

In this part, we introduce how to use the selected contexts, i.e., C̃h and C̃t, to calculate the contextual consistency sc for improving temporal KG embeddings. Contextual consistency is a state in which head contexts and tail contexts occur together without conflicts with regard to the target fact, which requires capturing the contextual interactions as a whole group instead of modeling them one by one. As human beings, in order to detect conflicts between the target fact and its contexts, we have to first read all the contexts and understand their semantics.

[From this I understand that the learned context is also semantic, since it reflects latent features of the entities involved]

4.3.1 Time projection

Before calculating factual plausibility as traditional KGE methods do, we need to handle temporal information well. Since temporal KG facts are only valid during the given period, we believe the temporal constraint applies only to relations.
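One common realization of a relation-only temporal constraint, shown here as an assumed sketch rather than the paper's exact formulation, is to project the relation embedding onto a time-specific hyperplane (in the spirit of HyTE-style projection, but applied only to the relation):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 8
r = rng.normal(size=d)      # static relation embedding
w_tau = rng.normal(size=d)  # hypothetical vector for the validity period

w = w_tau / np.linalg.norm(w_tau)  # unit normal of the time-specific hyperplane
r_tau = r - (w @ r) * w            # time-projected relation embedding
# r_tau lies in the hyperplane: (w @ r_tau) is numerically zero
```

The projected r_tau then replaces r in the factual-plausibility score, so the same relation can behave differently in different time periods.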

5 Experiments

5.1 Experimental setting

Datasets We use YAGO11k and Wikidata12k, two real-world datasets, following previous work [2]. YAGO11k and Wikidata12k are subsets of facts sampled from YAGO3 [21] and Wikidata [14], respectively. Both datasets only contain temporal-aware facts, i.e., facts coupled with their valid temporal interval.

[It used Wikidata, so it dealt with the hyper-relational structure but treated it as an "LPG"]

Link prediction In order to evaluate KG embeddings, we adopt widely used link prediction tasks [1] which contain head prediction, relation prediction, tail prediction and three new time prediction tasks.
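Link prediction is evaluated by scoring all candidates for the missing slot and ranking the true one; a minimal ranking sketch (the scores are made-up numbers):

```python
def rank_of(true_score, all_scores):
    """1-based rank of the true candidate when higher scores rank first."""
    return 1 + sum(s > true_score for s in all_scores)

# Hypothetical tail-prediction scores for a query (h, r, ?, tau)
scores = {"Real Madrid": 0.9, "Juventus": 0.4, "Manchester United": 0.7}
rank = rank_of(scores["Real Madrid"], scores.values())
hits_at_1 = rank <= 1  # this query contributes a hit to the Hits@1 metric
```

Mean rank and Hits@k over all test queries are the usual aggregate metrics; time prediction ranks time candidates in exactly the same way.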

Baselines As research on temporal KGE is still recent, we not only compare with state-of-the-art temporal KGE models, but also with traditional KGE models.

5.2 Effectiveness of entity predictions and relation prediction

In this part, we show the effectiveness of our proposed context-aware model for traditional KG completion by comparing with six baseline methods. In particular, we conducted experiments on both YAGO11k and Wikidata12K datasets for head prediction, relation prediction and tail prediction tasks.

5.3 Effectiveness of time predictions

In this part, we evaluate the effectiveness of our context-aware embedding model for the time prediction task. In particular, we examine the impact of our proposed contextual consistency for improving time predictions. We conduct experiments on two different settings, i.e., (1) interval prediction; (2) start/end predictions.

Impact of context selection As our model is based on the context selection mechanism, we also conduct experiments for time predictions under different context sizes and with different context selection methods, ....

[The context selection methods are not hand-modeled but learned, and the context can vary in size (hops? number of entities? number of relations?)]

5.3.2 Start/end prediction

We also conduct experiments on a new time prediction task, namely start/end prediction, which aims to infer the start/end time of a given fact respectively. For example, given a test fact (h, r, t, [?, τe]) for start time prediction, we calculate the scores of all candidates and rank them in decreasing order.

[Predict the start and the end!!!]


Comments

  1. Yu Liu, Wen Hua, Jianfeng Qu, Kexuan Xin, and Xiaofang Zhou. 2021. Temporal knowledge completion with context-aware embeddings. World Wide Web 24, 2 (Mar 2021), 675–695. https://doi.org/10.1007/s11280-021-00867-6

    Abstract

    Temporal knowledge graph embedding can be used to improve the coverage of temporal KGs via link predictions. Most existing works only concentrate on the target facts themselves, regardless of the rich and informative interactions between the target facts and their highly-related contexts.

[Complete the graph]

    In this paper, we propose a novel approach to take advantage of useful contextual interactions from two aspects, namely temporal consistency and contextual consistency. More specifically, temporal consistency measures how well the target fact interacts with its surrounding contexts in the temporal dimension, while contextual consistency treats all facts as a whole integrity and captures the semantic interactions between multiple contexts.

[Temporal and other contexts?]

    Additionally, considering the existence of useless and misleading context information, we design a crafted context selection strategy to pick out the most useful contexts with reference to the target facts, and then encode them using deep neural networks to capture the temporal and semantic interactions.

    Experimental results on real world datasets verify the effectiveness of our proposals comparing with competitive KGE methods and temporal KGE methods.
