
Posts

Showing posts from September, 2023

Codomain Algebra - Spatial and Temporal

The Codomain Algebra makes it possible to find relations between entities, between claims, or between entities and claims that may not be materialized as edges in the graph, yet can still be represented explicitly in an answer. The Codomain Algebra for date values in the Temporal Context I mentioned is Allen's Interval Algebra, which covers the possible relations between time periods; there are extensions that include not only intervals but also points in time. James F. Allen: Maintaining Knowledge about Temporal Intervals. Commun. ACM 26(11): 832-843 (1983). As for the codomain of spatial shape values in the Location Context, the oldest algebra I found was the Region Connection Calculus. It handles the relation between two regions, and there are extensions (or other algebras) that handle points, lines, 3D, ... Randell, D.A.; Cui, Z.; Cohn, A.G. (1992). "A spatial logic based on regions and connection". 3rd Int. Conf. on Knowledge Representa...
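To make the temporal case concrete, here is a minimal sketch (my own illustration, not code from Allen's paper) that classifies a pair of intervals into one of the 13 base relations of Allen's Interval Algebra:

```python
# Minimal sketch of Allen's Interval Algebra: classify the relation between
# two intervals into one of the 13 base relations. Relation names follow
# Allen (1983); the function and example are illustrative.

def allen_relation(a, b):
    """Return the Allen relation of interval a with respect to interval b.

    Each interval is a (start, end) pair with start < end.
    """
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:  return "before"
    if a2 == b1: return "meets"
    if a1 == b1 and a2 == b2: return "equals"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    if a1 < b1 and b1 < a2 and a2 < b2: return "overlaps"
    if b1 < a1 and a1 < b2 and b2 < a2: return "overlapped-by"
    if b2 < a1: return "after"
    if b2 == a1: return "met-by"
    raise ValueError("unreachable for valid intervals")

# Example: an event spanning 2005-2006 versus a period spanning 2000-2010.
print(allen_relation((2005, 2006), (2000, 2010)))  # -> "during"
```

The six relations before/meets/overlaps/starts/during/finishes plus their inverses and "equals" give the full set of 13; such derived relations can be returned in an answer even when no edge materializes them in the graph.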

truth makers AND truth bearers - Giancarlo's Talk at SBBD

From a quick Google search: https://iep.utm.edu/truth/ There are two commonly accepted constraints on truth and falsehood: every proposition is true or false [Law of the Excluded Middle], and no proposition is both true and false [Law of Non-contradiction]. What is the difference between a truth-maker and a truth-bearer? Truth-bearers are either true or false; truth-makers are not since, not being representations, they cannot be said to be true, nor can they be said to be false. That's a second difference. Truth-bearers are 'bipolar,' either true or false; truth-makers are 'unipolar': all of them obtain. What are considered truth-bearers? A variety of truth-bearers are considered – statements, beliefs, claims, assumptions, hypotheses, propositions, sentences, and utterances. When I speak of a fact . . . I mean the kind of thing that makes a proposition true or false. (Russe...
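As a compact formalization (my own notation, not from the cited page), the two constraints on truth-bearers can be written as:

```latex
\forall p \; (p \lor \neg p)         % Law of the Excluded Middle
\forall p \; \neg (p \land \neg p)   % Law of Non-contradiction
```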

Knowledge Graph Definition - From Twente University

In Giancarlo's talk at SBBD 2023, he mentioned that one of the first definitions of Knowledge Graphs came from the University of Twente (where he currently teaches and does research). In the survey on KG definitions, I found the following excerpt: In the 1980s, researchers from the University of Groningen and the University of Twente in the Netherlands initially introduced the term knowledge graph to formally describe their knowledge-based system that integrates knowledge from different sources for representing natural language [10, 15]. The authors proposed KGs with a limited set of relations and a focus on qualitative modeling including human interaction, which clearly contrasts with the idea of KGs that has been widely discussed in recent years. Survey: Towards a Definition of Knowledge Graphs. Lisa Ehrlinger and Wolfram Wöß. Institute for Application Oriented Knowledge Processing, Johannes Kepler University Linz, Austria. [10] P. James. Knowledge Graphs. In Linguistic. Ins...

Competency Questions from Ontologies to CKG

Elisa F. Kendall and Deborah L. McGuinness. Ontology Engineering. Morgan and Claypool, 2019.

Definition 12.1: A competency question is a (usually application-related) question towards the KG that is formalised in a query language, together with a formal specification of how an acceptable answer may look.

Competency questions focus on functional metrics:
• coverage/completeness (but cannot check all cases)
• correctness
• accessibility (using query answering software)

They can be used in several situations:
• To define the initial scope (requirements) of a new KG project
• To formalise data modelling decisions (how should knowledge be encoded to be accessible)
• For regression testing (ensure that the KG does not break in the future)

Competency questions take a content-oriented view (application- and domain-specific), but the approach can be generalised to set up unit testing (see the sketch after this list):
• Define a test suite of queries + (constraints on) expected answers
• Automatically run queries to detect problems...
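A minimal sketch of this unit-testing idea, assuming a Python environment with rdflib; the example graph, query, and assertion are my own illustration, not from the book:

```python
# Turn a competency question into an automated unit test: a SPARQL query
# plus a constraint on the expected answer, run against the KG.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:Alice ex:worksFor  ex:Acme .
ex:Acme  ex:locatedIn ex:Berlin .
""", format="turtle")

# Competency question: "In which city does each person work?"
QUERY = """
PREFIX ex: <http://example.org/>
SELECT ?person ?city WHERE {
    ?person ex:worksFor  ?org .
    ?org    ex:locatedIn ?city .
}
"""

def test_workplace_city():
    rows = {(str(p), str(c)) for p, c in g.query(QUERY)}
    # Constraint on the acceptable answer (regression check):
    assert ("http://example.org/Alice", "http://example.org/Berlin") in rows

test_workplace_city()
print("competency question passed")
```

Running the suite automatically (e.g., on every KG release) is what turns the content-oriented competency questions into regression tests.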

ADBIS 2023 - No Intelligence Without Knowledge

Keynote on YouTube: https://youtu.be/DZ6NlcW4YV8?si=4Z5zDA1Vx_D10GKz

No Intelligence Without Knowledge
Katja Hose, TU Wien, Austria

Abstract. Knowledge graphs and graph data in general are becoming more and more essential components of intelligent systems. This does not only include native graph data, such as social networks or Linked Data on the Web. The flexibility of the graph model and its ability to store data relationships explicitly enables the integration and exploitation of data from very diverse sources. However, to truly exploit their potential, it becomes crucial to provide intelligent systems with verifiable knowledge, reliable facts, patterns, and a deeper understanding of the underlying domains. This talk will therefore chart a number of challenges for exploiting graphs to manage and bring meaning to large amounts of heterogeneous data and discuss opportunities with, without, and for artificial intelligence emerging from research situated at the confluence of data m...

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models - Paper Reading

https://arxiv.org/pdf/2201.11903.pdf Chain-of-Thought (CoT) is used to formulate questions to LLMs, but the questions may be incomplete with respect to context. An exploratory search consists of several questions formulated along the process. It is related to fine-tuning on the task to be executed, because it teaches the LLM how to answer. Introduction: However, scaling up model size alone has not proved sufficient for achieving high performance on challenging tasks such as arithmetic, commonsense, and symbolic reasoning (Rae et al., 2021). This work explores how the reasoning ability of large language models can be unlocked by a simple method motivated by two ideas. First, techniques for arithmetic reasoning can benefit from generating natural language rationales that lead to the final answer. Prior work has given models the ability to generate natural language intermediate steps by training from scratch (Ling et al., 2017) or finetuning a pretrained model (Cobbe e...
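For illustration, a minimal sketch of few-shot CoT prompt construction: the exemplar paraphrases the arithmetic example from the paper's Figure 1, while the helper function and names are my own:

```python
# Few-shot chain-of-thought prompting: the exemplar shows intermediate
# reasoning steps before the final answer, so the model imitates them.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked exemplar so the model reasons step by step."""
    return COT_EXEMPLAR + "\nQ: " + question + "\nA:"

print(build_cot_prompt(
    "The cafeteria had 23 apples. If they used 20 to make lunch and "
    "bought 6 more, how many apples do they have?"
))
```

This contrasts with fine-tuning: no weights are updated; the "teaching" happens entirely in the prompt.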

Self-Supervised Dynamic Hypergraph Recommendation based on Hyper-Relational Knowledge Graph - Paper Reading

ABSTRACT ... However, the long-tail distribution of entities leads to sparsity in supervision signals, which weakens the quality of item representation when utilizing KG enhancement. Additionally, the binary relation representation of KGs simplifies hyper-relational facts, making it challenging to model complex real-world information. Furthermore, the over-smoothing phenomenon results in indistinguishable representations and information loss. To address these challenges, we propose the SDK (Self-Supervised Dynamic Hypergraph Recommendation based on Hyper-Relational Knowledge Graph) framework. This framework establishes a cross-view hypergraph self-supervised learning mechanism for KG enhancement. Specifically, we model hyper-relational facts in KGs to capture interdependencies between entities under complete semantic conditions. With the refined representation, a hypergraph is dynamically constructed to preserve features in the deep vector space, thereby alleviating the over-smoothing ...
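To illustrate what the abstract means by hyper-relational facts (versus plain binary triples), here is a minimal sketch in Python; the data structure and the example fact are my own illustration, not the paper's code:

```python
# A binary triple (head, relation, tail) drops context; a hyper-relational
# fact keeps the main triple plus qualifier key-value pairs, in the style
# of Wikidata statements.
from dataclasses import dataclass, field

@dataclass
class HyperRelationalFact:
    head: str
    relation: str
    tail: str
    qualifiers: dict = field(default_factory=dict)  # extra (key, value) pairs

fact = HyperRelationalFact(
    head="Marie_Curie",
    relation="educated_at",
    tail="University_of_Paris",
    # The binary triple alone would lose this context:
    qualifiers={"academic_degree": "Master_of_Science", "end_time": "1894"},
)
print(fact)
```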

Automatic Question-Answer Generation for Long-Tail Knowledge - Paper Reading

https://knowledge-nlp.github.io/kdd2023/papers/Kumar5.pdf https://github.com/isunitha98selvan/odqa-tail ABSTRACT Pretrained Large Language Models (LLMs) have gained significant attention for addressing open-domain Question Answering (QA). While they exhibit high accuracy in answering questions related to common knowledge, LLMs encounter difficulties in learning about uncommon long-tail knowledge (tail entities). [Entities with little available information, not very popular or common in the general public's interest] 1 INTRODUCTION However, the impressive achievements of LLMs in QA tasks are primarily observed with regard to common concepts that frequently appear on the internet (referred to as "head entities"), which are thus more likely to be learned effectively by LLMs during pretraining time. Conversely, when it comes to dealing with long-tail knowledge, which encompasses rarely occurring entities (referred to as "tail entities"), LLMs struggle to provide ac...