

Microsoft: DAT278x
From Graph to Knowledge Graph – Algorithms and Applications

Module 3: Graph Representation Learning

Representation learning problem

Basically, given an input graph, we design and extract structural features for nodes and edges, such as different centrality scores for nodes and various similarity scores for a pair of nodes on a link.
In other words, a hand-crafted feature matrix is created, at an expensive human and computational cost.
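
As a minimal illustration of such hand-crafted features (a sketch assuming the networkx library and its built-in toy karate-club graph; the particular feature choices are mine, not the course's):

# Sketch: hand-crafted structural features, assuming networkx and a toy graph.
import networkx as nx
import numpy as np

G = nx.karate_club_graph()  # small example graph

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
closeness = nx.closeness_centrality(G)

# One row per node, one column per hand-crafted feature.
X = np.array([[degree[v], betweenness[v], closeness[v]] for v in G.nodes()])
print(X.shape)  # (34, 3)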

What is the issue here? 

First, we need to pre-design the features according to our domain knowledge
and graph mining experience. The quality of the mining tasks therefore largely depends on the hand-crafted features. Second, doing so requires significant human effort as well as, potentially, a very expensive computational cost.

Instead of building/modeling features manually (engineering) with human effort, the idea is to learn them automatically (learning).

To address these two issues, feature representation learning has recently been proposed because of its power to capture internal relations in rich, complex data. Instead of hand-crafting features from the networks, graph representation learning aims to learn a latent feature matrix. By latent, we mean that we do not design, or even know, the meaning of each element. Unsupervised representation learning
would enable automatic discovery of useful and meaningful latent features from the graph.

Formally, the problem of representation learning in graphs is defined as follows.



The learning goal is to map each node into a latent low-dimensional space such that the graph's structural information is encoded into distributed node representations.
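
In symbols, a standard way to state this goal (my phrasing, not a verbatim definition from the course) is:

Given a graph $G = (V, E)$, learn a mapping
$$ f : V \to \mathbb{R}^d, \qquad d \ll |V|, $$
such that nodes that are structurally close in $G$ are mapped to nearby vectors in the latent space.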

Graph representation learning is also known as graph embedding or network embedding.

Skip-gram based graph embedding (DeepWalk & node2vec)

One of the important graph representation learning ideas is inspired by the word embedding techniques
in natural language processing. The basic assumption of word embedding is that words that appear close together in a sentence or document are interrelated in human natural language. The key idea of skip-gram based word embedding models is then to predict the words that surround each other in the text corpus.
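
A tiny sketch of how (center, context) training pairs are generated from a sentence (the example sentence and the window size of 2 are illustrative assumptions):

# Sketch: generating (center, context) pairs for a skip-gram model, window size 2.
sentence = ["graphs", "encode", "rich", "structural", "information"]
window = 2
pairs = []
for i, center in enumerate(sentence):
    for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
        if j != i:
            pairs.append((center, sentence[j]))
print(pairs[:4])  # [('graphs', 'encode'), ('graphs', 'rich'), ('encode', 'graphs'), ...]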

How can we apply the word embedding techniques used on documents to graphs?

How do we transform the graph into a "word document"? How can we transform the nonlinear graph structure into linear structures?

In particular, notice that graph structures are not linear, whereas the words in a document are. We can treat a node path as a sentence, with each node playing the role of a word. So we can transform the nonlinear graph structure into linear node paths simply by running random walks over the graph to generate those paths.

Choose a starting node and generate all possible paths from that node (DFS).
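
A minimal sketch of the random-walk step, assuming networkx and a toy graph (the function name, walk length, and number of walks per node are illustrative, not taken from the course):

# Sketch: DeepWalk-style uniform random walks to produce "sentences" of nodes.
import random
import networkx as nx

def random_walk(G, start, length=10):
    """Generate one node path by a uniform random walk."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = list(G.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return walk

G = nx.karate_club_graph()
# Several walks per starting node play the role of sentences in a corpus.
corpus = [random_walk(G, v, length=10) for v in G.nodes() for _ in range(5)]

# These walks can then be fed to a skip-gram implementation, e.g. (assuming gensim):
# from gensim.models import Word2Vec
# model = Word2Vec([[str(v) for v in w] for w in corpus], vector_size=64, window=5, sg=1)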

Understanding skip-gram based graph embedding I



Recall that the graph representation learning technique we introduced in the previous section includes two steps: (1) random walks over the graph to generate node paths; (2) a skip-gram model applied to the node paths generated in the first step.

Basically, in word embedding, researchers found that the objective of the skip-gram model with negative sampling is actually equivalent to matrix factorization. The matrix that is factorized is the PMI matrix,
short for pointwise mutual information matrix.
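
A rough sketch of how PMI entries are computed from co-occurrence counts (the count matrix C below is a hypothetical toy input; skip-gram with k negative samples corresponds to a PMI matrix shifted by log k):

# Sketch: (positive) PMI matrix from a hypothetical co-occurrence count matrix C.
import numpy as np

C = np.array([[4., 1., 0.],
              [1., 3., 2.],
              [0., 2., 5.]])   # C[i, j]: co-occurrence count of item i with context j

total = C.sum()
p_ij = C / total
p_i = C.sum(axis=1, keepdims=True) / total
p_j = C.sum(axis=0, keepdims=True) / total

with np.errstate(divide="ignore"):
    pmi = np.log(p_ij / (p_i * p_j))
pmi[np.isneginf(pmi)] = 0.0      # zero out entries undefined because of zero counts
ppmi = np.maximum(pmi, 0.0)      # positive PMI, commonly used in practice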

Understanding skip-gram based graph embedding II

Essentially, the graph representation learning framework we introduced before is, in theory, performing implicit matrix factorization.

Here we show a matrix factorization model called NetMF for graph representation learning. Basically, the main idea of this model is to explicitly factorize the closed-form matrix by using singular-value decomposition (SVD). It turns out that, by factorizing the matrix explicitly, the latent representations learned for each node are more effective for downstream graph applications, such as node clustering and classification.
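
A simplified sketch of the explicit-factorization idea (this is only a rough approximation of NetMF's closed-form matrix, assuming networkx/NumPy, a toy graph, and an illustrative window size T and dimension d; it is not the exact formulation from the paper):

# Sketch: build an approximate multi-step co-occurrence matrix, take a log, factorize with SVD.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
D_inv = np.diag(1.0 / A.sum(axis=1))
P = D_inv @ A                       # random-walk transition matrix

T = 5                               # window size (illustrative)
M = sum(np.linalg.matrix_power(P, t) for t in range(1, T + 1)) / T

# Elementwise log of a positively shifted matrix, then explicit factorization by SVD.
M_log = np.log(np.maximum(M * A.shape[0], 1.0))
U, S, _ = np.linalg.svd(M_log)
d = 16
embeddings = U[:, :d] * np.sqrt(S[:d])   # one d-dimensional vector per node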

Heterogeneous graph embedding I

The challenge here is then how to effectively preserve the concept of node context among multiple types of nodes in heterogeneous graph structures.

A meta-path in a heterogeneous graph defines node paths with different semantics. For example, in an academic heterogeneous graph, the meta-path APA, short for Author-Paper-Author, represents two authors co-authoring a paper. The goal is to generate node paths that are able to capture both the semantic and structural correlations between different types of nodes in the heterogeneous graph. The walk will not randomly jump to all of a node's neighbors, but only to neighbors with the specific type indicated by the meta-path.
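
A minimal sketch of a meta-path guided random walk over a toy academic graph (assuming a networkx graph whose nodes carry a "type" attribute, "A" for author and "P" for paper; this is illustrative code, not the metapath2vec reference implementation):

# Sketch: meta-path (APA) guided random walk.
import random
import networkx as nx

# Toy academic graph: authors a1, a2 and a paper p1 they co-authored (illustrative).
G = nx.Graph()
G.add_node("a1", type="A"); G.add_node("a2", type="A"); G.add_node("p1", type="P")
G.add_edges_from([("a1", "p1"), ("a2", "p1")])

def metapath_walk(G, start, metapath=("A", "P", "A"), walk_len=9):
    """Walk that only steps to neighbors whose type matches the meta-path.
    The meta-path is treated as cyclic (APA -> APAPA...), with the last symbol
    wrapping back to the first."""
    walk = [start]
    step = 0
    while len(walk) < walk_len:
        next_type = metapath[(step + 1) % (len(metapath) - 1)]
        candidates = [v for v in G.neighbors(walk[-1])
                      if G.nodes[v]["type"] == next_type]
        if not candidates:
            break
        walk.append(random.choice(candidates))
        step += 1
    return walk

print(metapath_walk(G, "a1"))   # e.g. ['a1', 'p1', 'a2', 'p1', 'a1', ...]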


Heterogeneous graph embedding II

How can we incorporate the heterogeneous node paths generated by meta-path random walks into a skip-gram model?

If node type information is ignored, then when predicting a context node of a specific type t, the skip-gram model will fail whenever the predicted context node's type differs from the underlying context node type.

In the heterogeneous skip-gram, the dimension of the multinomial distribution for type-t nodes is determined by the number of t-type nodes in the heterogeneous graph. To optimize this model quickly, we use the same technique used in the homogeneous skip-gram: negative sampling. The difference lies in that, when we sample negative nodes, we take the context node's type into consideration. Finally, this heterogeneous skip-gram model is learned using the stochastic gradient descent algorithm.
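
A small sketch of the type-aware negative sampling step (the helper name, the grouping of nodes by type, and the example data are my illustrative assumptions; real implementations handle this inside the training loop):

# Sketch: draw negative samples restricted to the context node's type.
import random

def sample_negatives(nodes_by_type, context_node, context_type, k=5):
    """Draw k negative nodes of the same type as the context node, so the
    heterogeneous skip-gram normalizes only over nodes of that type."""
    candidates = [v for v in nodes_by_type[context_type] if v != context_node]
    return random.sample(candidates, min(k, len(candidates)))

# Hypothetical example: nodes grouped by type (authors "A", papers "P").
nodes_by_type = {"A": ["a1", "a2", "a3", "a4"], "P": ["p1", "p2", "p3"]}
print(sample_negatives(nodes_by_type, "a1", "A", k=2))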

Very nice examples with MAG (Microsoft Academic Graph).

Graph convolutional network I

The success of deep neural networks is built upon the flexibility to transform information from layer to layer. Basically, we can take a fixed-size small grid, say a three-by-three grid, and combine information from the nine pixels of this sub-grid. We can then sweep through the full grid
by moving the three-by-three sub-grid: we move it to the next sub-grid, do the sum over all nine pixels again, and pass the information into the second layer. This is the convolution over the next sub-grid, one by one. In the end, we can transform the input layer into a hidden layer by leveraging convolutions defined over the sub-grids.
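
A tiny sketch of that 3x3 sliding-window idea on an image grid (plain NumPy; the toy image and the averaging kernel are illustrative assumptions):

# Sketch: one convolution pass with a 3x3 sub-grid over a 6x6 "image".
import numpy as np

image = np.arange(36, dtype=float).reshape(6, 6)   # toy input grid
kernel = np.ones((3, 3)) / 9.0                     # simple averaging filter

out = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        # Combine the 9 pixels of the current 3x3 sub-grid into one output value.
        out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)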

How do we define convolution over graphs with arbitrary structure and size?

Graph convolutional network II

Basically, we are also given a graph with its adjacency matrix A and its corresponding degree matrix D. At the same time, each node may also be associated with some features, represented in the feature matrix X. Note, however, that the problem setting and the solution can work with or without the feature matrix X. We need to find a function F with parameters W that transforms the graph structure and the existing features into the first hidden layer of the neural network.



It turns out that F could be any non-linear activation function.

To normalize the adjacency matrix A, we can leverage the degree matrix D. In this way, multiplication
with the normalized adjacency matrix A averages the impact from the neighborhood.
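
A minimal sketch of one such graph convolutional layer, using the symmetrically normalized adjacency with self-loops, ReLU(A_norm X W), in the style of Kipf & Welling (the identity features, the hidden size of 16, and the random weight initialization are illustrative assumptions):

# Sketch: one GCN layer computed with NumPy on a toy graph.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
A_hat = A + np.eye(A.shape[0])                     # add self-loops
deg = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt           # normalized adjacency: averages neighbors

X = np.eye(A.shape[0])                             # no input features: use the identity matrix
W = np.random.randn(X.shape[1], 16) * 0.1          # layer weights (randomly initialized)

H = np.maximum(A_norm @ X @ W, 0.0)                # ReLU(A_norm X W): first hidden layer
print(H.shape)                                     # (34, 16)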


