

Microsoft: DAT278x
From Graph to Knowledge Graph – Algorithms and Applications

Module 3: Graph Representation Learning

Representation learning problem

Basically, given an input graph, we design and extract structural features for nodes and edges, such as different centrality scores for nodes and various similarity scores for a pair of nodes on an edge.
In other words, a hand-crafted feature matrix is created at considerable human and computational expense.

What is the issue here? 

First, we need to pre-design the features according to our domain knowledge
and graph mining experience. The quality of the mining tasks therefore largely depends on the hand-crafted features. Second, doing so requires significant human effort and, potentially, very expensive computation.

Instead of manually building features (engineering) with human effort, the idea is to learn them automatically (learning).

To address these two issues, feature representation learning has recently been proposed, because of its power to capture internal relations in rich, complex data. Instead of hand-crafting features from the network, graph representation learning aims to learn a latent feature matrix. By latent, here, we mean that we do not design or know the meaning of each element. Unsupervised representation learning
enables the automatic discovery of useful and meaningful latent features from the graph.

Formally, the problem of representation learning in graphs is defined as follows.



The learning goal is to map each node into a latent low-dimensional space such that graph structure information is encoded into the distributed node representations.

Graph representation learning is also known as graph embedding or network embedding.

Skip-gram based graph embedding (DeepWalk & node2vec)

One of the important graph representation learning ideas is inspired by the word embedding techniques
in natural language processing. The basic assumption of word embedding is that words that appear close together in a sentence or document are interrelated in human natural language. The key idea of skip-gram based word embedding models is then to predict the words that surround each other in the text corpus.

How can we apply the word embedding techniques used on documents to graphs?

How do we transform the graph into a "document"? How can we transform the nonlinear graph structure into linear structures?

In particular, notice that graph structures are not linear, whereas the words in a document are. We can treat a node path as a sentence, with each node playing the role of a word. We can transform the nonlinear graph structure into linear node paths simply by using random walks over the graph to generate them.
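The random-walk idea above can be sketched in a few lines. This is a minimal illustration on a hypothetical toy graph (the adjacency list and walk length are made-up example values, not from the course):

```python
import random

# Toy undirected graph as an adjacency list (hypothetical example data).
graph = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B"],
    "D": ["B"],
}

def random_walk(graph, start, length, seed=None):
    """Generate one node path of `length` nodes starting at `start`,
    choosing a uniformly random neighbor at each step."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(length - 1):
        path.append(rng.choice(graph[path[-1]]))
    return path

# Each walk plays the role of a "sentence"; nodes play the role of "words".
# This corpus of walks is what a skip-gram model would then be trained on.
corpus = [random_walk(graph, node, length=5, seed=i)
          for i, node in enumerate(graph)]
```

In DeepWalk the walks are uniform as above; node2vec instead biases the neighbor choice to interpolate between breadth-first and depth-first exploration.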

Choose a starting node and generate paths from that node (cf. DFS, but here via random walks).

Understanding skip-gram based graph embedding I



Recall that the graph representation learning technique introduced in the previous section includes two steps: (1) random walks over the graph to generate node paths; (2) a skip-gram model applied to the node paths generated in the first step.

Basically, in word embedding, researchers found that the objective of the skip-gram model with negative sampling is actually equivalent to matrix factorization. The matrix that is factorized is the PMI matrix,
short for pointwise mutual information matrix.
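To make the PMI matrix concrete, here is a minimal sketch that computes PMI(w, c) = log( p(w, c) / (p(w) p(c)) ) from co-occurrence counts. The counts are hypothetical example data, standing in for word/context pairs collected from node paths within a sliding window:

```python
import math
from collections import Counter

# Hypothetical co-occurrence counts between "words" (nodes) and their
# context nodes within a window over the walks.
cooc = Counter({("a", "b"): 4, ("b", "a"): 4, ("a", "c"): 1,
                ("c", "a"): 1, ("b", "c"): 1, ("c", "b"): 1})

total = sum(cooc.values())
row = Counter()   # marginal counts for words
col = Counter()   # marginal counts for contexts
for (w, c), n in cooc.items():
    row[w] += n
    col[c] += n

def pmi(w, c):
    """PMI(w, c) = log( #(w,c) * total / (#(w) * #(c)) ); -inf if unseen."""
    if cooc[(w, c)] == 0:
        return float("-inf")
    return math.log(cooc[(w, c)] * total / (row[w] * col[c]))

vocab = sorted(row)
pmi_matrix = [[pmi(w, c) for c in vocab] for w in vocab]
```

In practice the shifted positive PMI variant (clipping negative entries to zero and subtracting log of the number of negative samples) is what the skip-gram objective implicitly factorizes.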

Understanding skip-gram based graph embedding II

Essentially, the graph representation learning framework we introduced before is, in theory, performing implicit matrix factorization.

Here we show a matrix factorization model called NetMF for graph representation learning. The main idea of this model is to explicitly factorize the closed-form matrix using singular value decomposition (SVD). It turns out that, by factorizing the matrix explicitly, the latent representation learned for each node is more effective for downstream graph applications, such as node clustering and classification.
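The explicit-factorization idea can be sketched as follows. Note this is only an illustration: the matrix M below is a simplified stand-in, not the exact NetMF closed form, which also involves the window size, negative-sampling parameter, and graph volume. The adjacency matrix is hypothetical toy data:

```python
import numpy as np

# Toy adjacency matrix of a 4-node graph (hypothetical data).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
n = A.shape[0]

D_inv = np.diag(1.0 / A.sum(axis=1))
P = D_inv @ A                      # random-walk transition matrix

# Simplified stand-in for the NetMF closed-form matrix: a truncated
# log of the scaled transition matrix (log-max keeps it non-negative).
M = np.maximum(np.log(np.maximum(n * P, 1e-12)), 0.0)

# Explicit factorization via SVD; rows of U * sqrt(S) are node embeddings.
U, S, _ = np.linalg.svd(M)
dim = 2
embeddings = U[:, :dim] * np.sqrt(S[:dim])
```

The contrast with DeepWalk is that the factorized matrix is computed and decomposed directly, rather than approximated implicitly through sampled walks and stochastic updates.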

Heterogeneous graph embedding I

The challenge here is then how to effectively preserve the concept of node context among multiple types of nodes in heterogeneous graph structures.

A meta-path in a heterogeneous graph defines node paths with different semantics. For example, in an academic heterogeneous graph, the meta-path APA, short for Author-Paper-Author, represents two authors co-authoring a paper. The goal is to generate node paths that capture both the semantic and structural correlations between different types of nodes in the heterogeneous graph. The walk will not randomly jump to all of a node's neighbors, but only to neighbors of the specific type indicated by the meta-path.
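A meta-path guided walk can be sketched like this. The tiny author/paper graph is hypothetical example data; the walk restricts each step to neighbors of the type the meta-path dictates, wrapping around since the meta-path's endpoints share a type:

```python
import random

# Toy academic graph: node -> (type, neighbors)  (hypothetical data).
nodes = {
    "a1": ("A", ["p1", "p2"]), "a2": ("A", ["p1"]), "a3": ("A", ["p2"]),
    "p1": ("P", ["a1", "a2"]), "p2": ("P", ["a1", "a3"]),
}

def metapath_walk(start, metapath, length, seed=None):
    """Walk following a cyclic type pattern, e.g. 'APA' -> A, P, A, P, A, ..."""
    rng = random.Random(seed)
    path = [start]
    pos = 0  # current position within the meta-path
    while len(path) < length:
        # Advance in the meta-path; wrap to 1 because the endpoint type repeats.
        pos = pos + 1 if pos + 1 < len(metapath) else 1
        want = metapath[pos]
        candidates = [v for v in nodes[path[-1]][1] if nodes[v][0] == want]
        if not candidates:
            break  # dead end: no neighbor of the required type
        path.append(rng.choice(candidates))
    return path

walk = metapath_walk("a1", "APA", length=5, seed=0)
```

Under the APA meta-path, every generated path alternates author and paper nodes, so the skip-gram contexts reflect co-authorship semantics rather than arbitrary adjacency.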


Heterogeneous graph embedding II

How can we incorporate the heterogeneous node paths generated by meta-path random walks into a skip-gram model?

If we ignore node type information when predicting a context node of a specific type t, the skip-gram model will fail whenever the predicted context node's type differs from the underlying context node type.

In heterogeneous skip-gram, the dimension of the multinomial distribution for type-t nodes is determined by the number of t-type nodes in the heterogeneous graph. To optimize this model quickly, we use the same technique as in homogeneous skip-gram:
negative sampling.
The difference is that when we sample negative nodes, we take the context node's type into consideration.
Finally, this heterogeneous skip-gram model is learned using the stochastic gradient descent algorithm.
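The type-aware negative sampling step can be sketched as follows, with a hypothetical node-type table; the key point is that negatives are drawn only from the pool of nodes sharing the context node's type:

```python
import random

# Node types in a toy heterogeneous graph (hypothetical data).
node_type = {"a1": "A", "a2": "A", "a3": "A", "p1": "P", "p2": "P"}

# Index nodes by type once, so sampling is restricted per type.
by_type = {}
for v, t in node_type.items():
    by_type.setdefault(t, []).append(v)

def sample_negatives(context, k, seed=None):
    """Draw k negatives restricted to the context node's type
    (heterogeneous skip-gram); homogeneous skip-gram would sample
    from all nodes regardless of type."""
    rng = random.Random(seed)
    pool = [v for v in by_type[node_type[context]] if v != context]
    return [rng.choice(pool) for _ in range(k)]

negs = sample_negatives("p1", k=3, seed=0)
```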

Very nice examples with MAG (Microsoft Academic Graph).

Graph convolutional network I

The success of deep neural networks is built upon the flexibility to transform information from layer to layer. Basically, we can take a small fixed-size grid, say a three-by-three grid, and combine information from the nine pixels of this sub-grid. We then sweep through the full grid
by moving the three-by-three sub-grid: at each position we sum over all nine pixels again to pass the information into the second layer. This is the convolution over the next sub-grid, one position at a time. In the end, we transform the input layer into a hidden layer by leveraging convolutions defined over the sub-grids.
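The sweep described above can be written out directly. This is a minimal sketch assuming the simplest case the text describes: an all-ones 3x3 kernel (a plain sum over the nine pixels), stride 1, no padding, on a hypothetical 4x4 image:

```python
def conv3x3_sum(image):
    """Slide a 3x3 window over the image, summing the nine pixels at
    each position (i.e., convolution with an all-ones kernel)."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - 2):          # window top-left row
        row = []
        for j in range(w - 2):      # window top-left column
            row.append(sum(image[i + di][j + dj]
                           for di in range(3) for dj in range(3)))
        out.append(row)
    return out

image = [[1, 0, 1, 0],
         [0, 1, 0, 1],
         [1, 0, 1, 0],
         [0, 1, 0, 1]]
hidden = conv3x3_sum(image)   # the 2x2 "hidden layer" of the sketch
```

A real convolutional layer multiplies each pixel by a learned kernel weight before summing, and applies a non-linearity; the sliding structure is the same.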

How do we define convolution over graphs with arbitrary structure and size?

Graph convolutional network II

Basically, we are again given a graph, with its adjacency matrix A and its corresponding degree matrix D. At the same time, each node may also be associated with some features, represented in a feature matrix X. Note, however, that the problem setting and the solution work with or without the feature matrix X. We need to find a function F with parameters W that transforms the graph structure and existing features into the first hidden layer of the neural network.



It turns out that F could be any non-linear activation function.

To normalize the adjacency matrix A, we can leverage the degree matrix D. In this way, multiplication
by the normalized adjacency matrix averages the impact of the neighborhood.
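Putting the pieces together, one graph convolutional layer can be sketched as below. This is an illustration under simple assumptions: D^-1 normalization (row-wise averaging, as the text describes), self-loops added so a node keeps its own signal, ReLU as the non-linearity, and hypothetical toy values for A, X, and W (W would be learned in practice):

```python
import numpy as np

# Toy graph: adjacency A, features X, layer parameters W (hypothetical data).
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
A_hat = A + np.eye(3)                  # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
W = np.array([[1.0, -1.0],
              [0.5,  0.5]])            # learnable weights (fixed here)

# One GCN-style layer: H1 = ReLU(D^-1 * A_hat * X * W).
# D^-1 * A_hat averages each node's neighborhood (including itself).
H1 = np.maximum(D_inv @ A_hat @ X @ W, 0.0)
```

A common variant uses the symmetric normalization D^-1/2 * A_hat * D^-1/2 instead of D^-1 * A_hat; both make the aggregation an average rather than a raw sum over neighbors.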

