
Self-Supervised Dynamic Hypergraph Recommendation based on Hyper-Relational Knowledge Graph - Paper Reading Notes

ABSTRACT

... However, the long-tail distribution of entities leads to sparsity in supervision signals, which weakens the quality of item representation when utilizing KG enhancement. Additionally, the binary relation representation of KGs simplifies hyper-relational facts, making it challenging to model complex real-world information. Furthermore, the over-smoothing phenomenon results in indistinguishable representations and information loss. To address these challenges, we propose the SDK (Self-Supervised Dynamic Hypergraph Recommendation based on Hyper-Relational Knowledge Graph) framework. This framework establishes a cross-view hypergraph self-supervised learning mechanism for KG enhancement. Specifically, we model hyper-relational facts in KGs to capture interdependencies between entities under complete semantic conditions. With the refined representation, a hypergraph is dynamically constructed to preserve features in the deep vector space, thereby alleviating the over-smoothing problem. ... 

[Two problems that affect the use of KGs in recommender systems and also in LLMs: tail entities (little data) and binary relations]

1 INTRODUCTION

Specifically, we transform the KG into a hyper-relational format and model facts within the context of 𝑁-ary relations. This approach enables us to capture more nuanced relationships among entities. For example, a hyper-relational fact is depicted in Figure 1.

A hyper-relational fact consists of a basic triplet (ℎ, 𝑟, 𝑡) and several qualifiers (𝑞𝑟, 𝑞𝑣) ... Unlike triplet-based facts, which model each piece of semantic information independently before aggregation, hyper-relational facts represent intrinsic semantic associations by directly modeling the basic triplet and its qualifiers as a whole. By employing the SDK framework, we aim to enhance the representation ability and generalization performance of KG-based recommendation systems by leveraging hyper-relational facts and their qualifiers.
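The contrast drawn above can be sketched in a few lines of Python. The entity and relation names below are illustrative placeholders, not taken from the paper; the point is that flattening a hyper-relational fact into independent triplets severs the link between the qualifiers and the base triplet they describe:

```python
# A hyper-relational fact kept whole: basic triplet plus qualifier pairs.
# Names are illustrative, not from the paper.
fact = {
    "triplet": ("Einstein", "educated_at", "University of Zurich"),
    "qualifiers": [("degree", "PhD"), ("major", "Physics")],
}

def flatten_to_triplets(fact):
    """Lossy triplet view: each qualifier becomes an independent triplet,
    so the N-ary context tying it to the base fact is lost."""
    h, r, t = fact["triplet"]
    triplets = [(h, r, t)]
    triplets += [(h, qr, qv) for qr, qv in fact["qualifiers"]]
    return triplets

print(flatten_to_triplets(fact))
# The "PhD" and "Physics" triplets no longer record that they qualify
# the educated_at fact specifically.
```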

2 RELATED WORK

Hyper-relational KG: Since the triplets in the traditional KG over-simplify the complexity of the data, recent studies have begun to model hyper-relational facts. m-TransH [32] is a method based on TransH [30] that transforms hyper-relational facts through star-to-clique conversion. RAE [46] builds upon m-TransH and further transforms hyper-relational facts into 𝑁-ary facts with abstract relations. NaLP [6] proposes a link prediction method that models 𝑁-ary facts as role-value pairs and utilizes a convolution-based framework to compute the similarity of each pair. StarE [3] designs an encoder specifically for 𝑁-ary facts that is compatible with indefinite-length qualifiers and emphasizes the interaction between the basic triplet and its qualifiers.

[Embeddings in hyper-relational KGs]

3 PRELIMINARIES

Knowledge Graph: A KG provides auxiliary information for the recommender system to alleviate the problem of data sparsity. The KG G𝑘 utilizes the triplet set {(ℎ, 𝑟, 𝑡) | ℎ, 𝑡 ∈ 𝐸, 𝑟 ∈ 𝑅} to describe facts, where 𝐸 and 𝑅 are respectively the sets of entities and relations, and (ℎ, 𝑟, 𝑡) indicates there is a relation 𝑟 from head entity ℎ to tail entity 𝑡. In KG-based recommendation, where the item set satisfies 𝑉 ⊆ 𝐸, an item 𝑣 ∈ 𝑉 may form triplets with several different entities in the given KG G𝑘.
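A minimal sketch of this triplet set and of how an item's KG neighborhood would be collected (entity and relation names are invented for illustration):

```python
# A toy KG as a set of (head, relation, tail) triplets.
# Entities and relations are illustrative, not from the paper.
kg_triplets = {
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Inception", "genre", "Sci-Fi"),
    ("Interstellar", "directed_by", "Christopher Nolan"),
}

def item_neighbors(kg, item):
    """(relation, entity) pairs directly linked to an item in the KG."""
    return {(r, t) for (h, r, t) in kg if h == item}

print(sorted(item_neighbors(kg_triplets, "Inception")))
```

This neighborhood is the auxiliary signal a KG-based recommender aggregates into the item representation.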

Hyper-relational Knowledge Graph: The HKG is an extension of the standard KG, which describes 𝑁-ary facts in the real world by supplementing the basic triplet semantics with qualifier pairs. A hyper-relational fact is represented by a tuple (ℎ, 𝑟, 𝑡, Qℎ𝑟𝑡), where (ℎ, 𝑟, 𝑡) is the knowledge triplet, and Qℎ𝑟𝑡 = {(𝑞𝑣ᵢ, 𝑞𝑟ᵢ)}ᵢ₌₁^|Qℎ𝑟𝑡| is the set of qualifier pairs, with qualifier relations 𝑞𝑟 ∈ 𝑅 and qualifier entities 𝑞𝑣 ∈ 𝐸. In this way, these entities can be seen as connected by a hyperedge in the HKG, representing an 𝑁-ary fact, also known as a statement.
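The definition above maps naturally onto a small data structure: a statement is the tuple (ℎ, 𝑟, 𝑡, Qℎ𝑟𝑡), and the hyperedge it induces is simply the set of all entities it mentions. A sketch, with illustrative names not taken from the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Statement:
    """A hyper-relational fact (h, r, t, Q_hrt)."""
    head: str
    relation: str
    tail: str
    qualifiers: tuple = ()  # ((qr, qv), ...) qualifier pairs

    def hyperedge(self):
        """All entities the statement connects as one hyperedge."""
        return {self.head, self.tail} | {qv for _, qv in self.qualifiers}

fact = Statement(
    "Einstein", "educated_at", "University of Zurich",
    qualifiers=(("degree", "PhD"), ("major", "Physics")),
)
print(fact.hyperedge())
```

Note that the hyperedge connects all four entities at once, which is exactly what a binary triplet cannot express.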

[Introduces the formalization of the KG models]

4 METHODOLOGY

5 EXPERIMENTS
