https://arxiv.org/pdf/2308.06374.pdf
community on LLMs (parametric knowledge) and Knowledge Graphs (explicit knowledge)
[sub-symbolic (embeddings) vs. symbolic]
Introduction
era of Knowledge Computing, in which the notion of reasoning within KR is broadened to many computation tasks based on various knowledge representations.
[Language Models with Knowledge Representation]
widely used knowledge representation languages, such as RDF [121] and OWL [55], at web scale; the large-scale knowledge bases built with them are now more widely known as KGs [123], thanks to their helpful graph structures, which enable both logical reasoning and graph-based learning.
[Domain-independent graph algorithms]
Some works use LLMs to augment KGs for, e.g., knowledge extraction, KG construction, and refinement, while others use KGs to augment LLMs for, e.g., training and prompt learning, or knowledge augmentation.
[Could an LLM be used to fill in the unknown answers in a KG?]
Common Debate Points within the Community
Critics argue that parametric knowledge in LLMs relies on statistical patterns rather than true understanding and reasoning. ... On the one hand, LLMs could generate plausible but incorrect or nonsensical responses, such as hallucinations, due to a lack of explicit knowledge representation
To sum up, in comparison to the classic trade-off between expressiveness and decidability in Knowledge Representation, here we have the trade-off between precision and recall considering using explicit and parametric knowledge in Knowledge Computing tasks.
The success of KGs can largely be attributed to their ability to provide factual information about entities with high accuracy.
Multiple LLMs have been evaluated on their ability to complete KGs using numerical facts from Wikidata [169], such as individuals’ birth and death years. However, none of the tested models accurately predicted even a single year. This raises questions about the capability of current LLMs to correctly memorize numbers during pre-training in a way that enables them for subsequent use in KG completion.
[This has already been tried and did not work well ...]
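The exact-match probe described above can be sketched as a minimal evaluation harness. Here `query_model` is a hypothetical stand-in for a real LLM call, and the gold facts are illustrative; the scoring logic (strict string equality against the gold year) matches the strict setting under which no tested model predicted even a single year correctly.

```python
# Sketch of probing an LLM for numeric Wikidata facts (e.g. birth years)
# and scoring by exact match. `query_model` is a hypothetical placeholder.

def query_model(prompt: str) -> str:
    # Placeholder: a real setup would call an LLM API here.
    return "unknown"

def evaluate_numeric_facts(gold: dict[str, int]) -> float:
    """Fraction of entities whose year the model reproduces exactly."""
    hits = 0
    for entity, year in gold.items():
        answer = query_model(
            f"In which year was {entity} born? Answer with the year only."
        )
        if answer.strip() == str(year):
            hits += 1
    return hits / len(gold)

gold_years = {"Ada Lovelace": 1815, "Alan Turing": 1912}
accuracy = evaluate_numeric_facts(gold_years)
```

With the placeholder model, accuracy is 0.0, mirroring the reported result; swapping in a real model call turns this into the actual experiment.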
Long-tail Knowledge: One of the key research questions on LLMs for the Knowledge Computing community (and beyond) is how much knowledge LLMs remember [107]. Investigations indicate that LLMs’ performance significantly deteriorates when dealing with random Wikidata facts, specifically those associated with long-tail entities, in comparison to popular entities,... KGs inherently present an advantage over LLMs through their provision of knowledge about long-tail entities [78, 167] and thus can further help improve the recall for Knowledge Computing tasks.
[167 Blerta Veseli, Sneha Singhania, Simon Razniewski, and Gerhard Weikum. Evaluating language models for knowledge base completion. In ESWC, page 227–243, 2023.]
[The argument for completing the context is also about long-tail searches. The default context filled in by search engines does not handle these infrequent cases.]
Explainability and Interpretability: KGs are often preferred in scenarios where explainability and interpretability are crucial [28], as they explicitly represent relationships between entities and provide a structured knowledge representation ... Some also argue that Chain-of-Thoughts (CoT) [177] can also improve the explainability of LLMs, although question decomposition and precisely answering sub-questions with LLMs are still far from being solved. Attribution evaluation and augmentation of LLMs with e.g., source paragraphs and sentences is another recent research topic for improving their explainability in question answering [17]
[Research CoT]
[177 Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Huai hsin Chi, F. Xia, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. ArXiv, abs/2201.11903, 2022.]
[Include the source to give explainability to the LLMs' answers]
Opportunities and Visions
LLMs will enable, advance, and simplify crucial steps in the knowledge engineering pipeline so much as to enable KGs at unprecedented scale, quality, and utility
[Use LLMs to build and refine KGs]
Key Research Topics and Related Challenges
4.1 LLMs for KGs: Knowledge Extraction and Canonicalisation
4.2 LLMs for KGs: Knowledge Graph Construction
Long-tail Entities: Existing LLMs still manifest a low level of precision on long-tail entities. Models may begin to generate incorrect information when they fail to memorize the right facts. The answers provided by these models often lack consistency. Incorrect correlations drawn from the pre-training corpus can lead to various biases in KG completion. Whether retrieval-augmented models serve as a viable solution to this problem remains uncertain, as does the potential necessity to adapt pre-training and fine-tuning processes to enhance model robustness in handling long-tail entities.
Provenance: Extracting factual knowledge directly from LLMs does not provide provenance, the origin and credibility of the information, which presents multiple issues. Without provenance, verifying the accuracy of information becomes challenging, potentially leading to the spread of misinformation. Additionally, bias detection is hindered, as the lack of source information makes it difficult to account for potential biases in the data used for training. Provenance also provides critical context, without which, information can be misunderstood or misapplied. Lastly, the absence of source information compromises model transparency, making it hard to evaluate the accountability of the LLMs
4.3 LLMs for KGs: Ontological Schema Construction
A KG is often equipped with an ontological schema (including rules, constraints and ontologies) for ensuring quality, enabling easier knowledge access, supporting reasoning, etc.
Moreover, a KG is never considered complete, since the closed-world assumption does not hold [40, 128], i.e., it is not possible to conclude that a missing fact is false unless it contradicts another existing fact. Instead, a KG is usually taken to operate under the open-world assumption, that is, a missing fact is simply considered unknown.
[What is not known about the Context is explicitly reported as unknown]
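The open-world assumption can be sketched as a three-valued lookup: a query against the KG answers true, false, or unknown, and falsehood is concluded only when the query contradicts a stored fact (here, via a hypothetical set of functional predicates that admit at most one value per subject).

```python
# Sketch of open-world query answering: absence of a fact is "unknown",
# not "false", unless it contradicts an existing fact.

FUNCTIONAL = {"birthYear"}  # predicates with at most one value per subject

kg = {("Ada_Lovelace", "birthYear"): "1815"}

def truth_value(subject: str, predicate: str, obj: str) -> str:
    known = kg.get((subject, predicate))
    if known == obj:
        return "true"
    if known is not None and predicate in FUNCTIONAL:
        return "false"    # contradicts the stored unique value
    return "unknown"      # open world: absence is not falsehood
```

Under the closed-world assumption of a relational database, the last branch would instead return "false".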
Since KGs contain huge amounts of data, it is not feasible to manually inspect and correct their errors. Therefore, a common approach is to instantiate rules and constraints that can be automatically enforced. These constraints express dependencies and conditions that the KG needs to satisfy at all times and that should not be violated by the introduction of new facts or their deletion.
[Wikidata's constraints are not enforced, and it has many violations]
Once a set of rules or constraints is instantiated, the next step is either to identify which entities or facts in the KG violate any of them, to employ them to delete erroneous information, or, finally, to employ them to deduce missing information [49, 138]
[Use LLMs to complete and correct]
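The violation-detection step can be sketched as a check over the triple set: a constraint is a function that returns the subjects violating it. The sample constraint ("anyone with a deathYear must also have a birthYear") and the toy triples are illustrative.

```python
# Sketch of automatic constraint checking over a set of triples.
triples = {
    ("Ada_Lovelace", "type", "Person"),
    ("Ada_Lovelace", "deathYear", "1852"),
}

def missing_birth_year(ts: set) -> set:
    """Constraint: any subject with a deathYear must also have a birthYear.
    Returns the set of violating subjects."""
    def subjects_with(pred):
        return {s for (s, p, o) in ts if p == pred}
    return subjects_with("deathYear") - subjects_with("birthYear")

violations = missing_birth_year(triples)
```

A repair step could then either flag these subjects for deletion of the erroneous fact or, per the deduction use case above, query an external source (or an LLM, with the caveats discussed) for the missing value.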
This raises the question of whether it is possible to train LLMs to treat rule generation as, for example, a summarization task. This would then require the ability to perform both inductive and abductive reasoning, treating rules as summaries of the set of facts in the KG;
[Induction and Abduction]
4.4 KGs for LLMs: Training and Accessing LLMs
C1: KGs can be employed to automatically extract and represent relevant knowledge to generate context-aware writing prompts, and to analyze and understand the relationships between different writing prompts, enabling the generation of prompts that build upon each other;
[Context of the task, of the decision]
C3: KGs can integrate into prompts the definitions of guards exploited during the generative task. Such guards may lead to enhancing the trustworthiness of the information generated by LLMs and make them more compliant with specific domain-wise or context-wise constraints.
[Increase confidence in the answers]
C4: KGs can create prompts that ask questions (e.g., inferring missing relations in an incomplete KG) that trigger complex KG reasoning capabilities and intermediate reasoning steps
[Complete the KG]
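The C1/C4 pattern of turning KG content into prompts can be sketched as follows: serialise an entity's neighbourhood into a "known facts" context block, then append a completion question for a missing relation. The helper names and the sample triples are illustrative, not an API from the paper.

```python
# Sketch of C1/C4: build a context-aware prompt from KG triples, plus a
# completion question targeting a missing relation.
kg = [
    ("Congo_rainforest", "locatedIn", "Central_Africa"),
    ("Congo_rainforest", "type", "Tropical_rainforest"),
]

def kg_context(entity, triples):
    """Serialise the outgoing triples of an entity as a context block."""
    lines = [f"{s} {p} {o}." for (s, p, o) in triples if s == entity]
    return "Known facts:\n" + "\n".join(lines)

def completion_prompt(entity, missing_predicate, triples):
    """Ask the model to infer a relation absent from the KG (C4)."""
    return (kg_context(entity, triples)
            + f"\nQuestion: what is the {missing_predicate} of {entity}?")

prompt = completion_prompt("Congo_rainforest", "area", kg)
```

A C3-style extension would prepend domain constraints ("guards") to the same context block so the generation stays compliant with them.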
4.5 Applications
4.5.1 Commonsense Knowledge
The majority of KGs capture facts of the sort one might encounter in an encyclopedia or in a relational database. However, commonsense knowledge is another important form of world knowledge for AI systems. For instance, we may wish for a KG to not only capture that the Congo rainforest lies in Central Africa, but also that tropical rainforests have significant rainfall and lush green vegetation. ConceptNet is the most well-known commonsense knowledge graph, developed using manual crowdsourcing along with automated refinement techniques [102].
[102 Hugo Liu and Push Singh. Commonsense reasoning in and over natural language. In Knowledge-Based Intelligent Information and Engineering Systems, pages 293–306, 2004.]
4.5.3 Digital Healthcare
4.5.4 Domain Specific Content Search
Outlook
Don’t throw out the KG with the paradigm shift: For a range of reliability- or safety-critical applications, structured knowledge remains indispensable, and we have outlined many ways in which KGs and LLMs can fertilize each other. KGs are here to stay; do not just ditch them out of fashion.
Murder your (pipeline) darlings: LLMs have substantially advanced many tasks in the KG and ontology construction pipeline, and even made some tasks obsolete. Take critical care in examining even the most established pipeline components, and compare them continuously with the LLM-based state of the art.
[Would KG Profiling still be necessary for the Mapping step?]
CoT and Exploratory Search. Break the information need into a sequence of simpler questions (possibly of the LookUp type). It is up to the user to analyze the answers and decide.
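That decomposition step can be sketched as a function mapping an information need to a sequence of lookup questions. Here the decomposition is a hand-written table; in a real setting a CoT-prompted LLM would generate the sub-questions, and the user would inspect each intermediate answer.

```python
# Sketch: decompose an information need into simpler lookup questions.
# The table below is a hypothetical hand-written stand-in for a
# CoT-prompted LLM that would generate the sub-questions.

def decompose(information_need: str) -> list:
    templates = {
        "Who influenced the author of Frankenstein?": [
            "Who is the author of Frankenstein?",
            "Who influenced that author?",
        ],
    }
    # Fall back to the original question when no decomposition is known.
    return templates.get(information_need, [information_need])

steps = decompose("Who influenced the author of Frankenstein?")
```

Each step is a simple lookup that a KG (or a search engine) can answer precisely, keeping the user in the loop between steps.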