

Building Trust in Conversational AI: A Comprehensive Review and Solution Architecture for Explainable, Privacy-Aware Systems using LLMs and Knowledge Graph

https://arxiv.org/pdf/2308.13534.pdf

Abstract—Conversational AI systems have emerged as key enablers of human-like interactions across diverse sectors. Nevertheless, the balance between linguistic nuance and factual accuracy has proven
elusive. In this paper, we first introduce LLMXplorer, a comprehensive tool that provides an in-depth review of over 150 Large Language Models (LLMs) ... we propose a novel functional architecture that seamlessly integrates the structured dynamics of KG with the linguistic capabilities of LLMs. ... This research provides insights into the evolving landscape of conversational AI, emphasizing the imperative for systems that are efficient, transparent, and trustworthy.

INTRODUCTION

Hallucination: LLMs may generate information that is coherent but factually incorrect or misaligned with the underlying data.

[They lie convincingly! Why not instead admit that they don't know everything?]

Trustworthiness: Ensuring the reliability and integrity of generated content poses a complex challenge.

[They cannot answer questions about their online sources]

Explainability: The vast number of parameters and complex structures in LLMs often hinder clear understanding and interpretation of the models’ decision-making processes.

[They cannot explain their answers]

METHODS AND TRAINING PROCESS OF LLMS

Three-stage training process of Large Language Models (LLMs): Starting from an expansive pretraining on diverse data sources and utilizing the transformer architecture, transitioning into supervised fine-tuning with labeled datasets tailored for specific tasks, and culminating in dialogue optimization to refine AI-user interactions.
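The paper does not include code for this pipeline, but the three stages can be outlined schematically. The Python sketch below is an illustrative outline under my own naming, not the authors' implementation; real systems use transformer training frameworks (causal language modelling, supervised fine-tuning, preference optimization) rather than these stub functions.

# Schematic outline of the three training stages described above.
# Function names and data structures are illustrative, not the paper's code.
from dataclasses import dataclass
from typing import List

@dataclass
class Example:
    prompt: str
    target: str

def pretrain(corpus: List[str]) -> dict:
    """Stage 1: self-supervised next-token prediction over a large, diverse corpus."""
    # In practice this is a transformer trained with a causal language-modelling loss.
    return {"stage": "pretrained", "tokens_seen": sum(len(t.split()) for t in corpus)}

def supervised_finetune(model: dict, labelled: List[Example]) -> dict:
    """Stage 2: fine-tune on labelled prompt/response pairs for specific tasks."""
    return dict(model, stage="sft", examples=len(labelled))

def optimise_for_dialogue(model: dict, feedback: List[int]) -> dict:
    """Stage 3: refine AI-user interactions with human preference signals (e.g. RLHF)."""
    return dict(model, stage="dialogue-optimised", feedback_signals=len(feedback))

if __name__ == "__main__":
    m = pretrain(["web text ...", "books ...", "code ..."])
    m = supervised_finetune(m, [Example("Summarise: ...", "A short summary.")])
    m = optimise_for_dialogue(m, feedback=[1, 1, 0])
    print(m)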

COMPREHENSIVE REVIEW OF STATE-OF-THE-ART LLMS

LLMXplorer: Large Language Model Explorer

APPLIED AND TECHNOLOGY IMPLICATIONS FOR LLMS

Legal, Privacy, and Regulatory Perspective

MARKET ANALYSIS OF LLMS AND CROSS-INDUSTRY USE CASES

LLM Development Opportunities
Key applied challenges in the development of LLMs include:

Disinformation Generation: LLMs can generate convincing misinformation, undermining information credibility and leading to potential harm.

Deepfakes Creation: LLMs can produce sophisticated manipulated media, posing threats of deception and public manipulation.

Bias Amplification: LLMs might reflect and amplify biases present in training datasets, potentially reinforcing societal prejudices.

SOLUTION ARCHITECTURE FOR PRIVACY-AWARE AND TRUSTWORTHY CONVERSATIONAL AI

Integrating Knowledge Graphs (KGs) [47] with LLMs offers a solution to these challenges by coupling the structured knowledge representation of KGs with the linguistic proficiency of LLMs. 

KGs:
– Deliver structured and validated domain-specific knowledge.
– Enhance system explainability by tracing the origin of information.
– Complement LLMs by addressing gaps in domain-specific expertise.
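As a concrete illustration of the traceability point above, here is a minimal Python sketch using the Neo4j driver. The connection details, node labels, and the r.source provenance property are assumptions for illustration, not the paper's actual schema.

# Minimal sketch: retrieve validated, traceable facts from a KG to ground an LLM answer.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def facts_about(entity_name: str) -> list[dict]:
    """Return triples about an entity together with their provenance."""
    query = (
        "MATCH (e {name: $name})-[r]->(o) "
        "RETURN e.name AS subject, type(r) AS relation, o.name AS object, "
        "       r.source AS source"  # provenance property assumed on the relationship
    )
    with driver.session() as session:
        return [dict(record) for record in session.run(query, name=entity_name)]

# Facts retrieved this way can be inserted into the LLM prompt, and the `source`
# field lets the system explain where each statement came from.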

[Steps of the LLM and KG integration]

• Step 3: The Prompt Analysis module refines and refactors the user’s prompt (if necessary) and identifies the key capabilities required to put together an appropriate response. In our case of journalism, the key capabilities include natural language understanding and generic output response or specialised capabilities such as similar article finder, sentiment analysis, fact-checking, and prediction for article topics and relevant industry sectors.

• Step 4: The Llama-2 LLM processes the user request based on the identified capabilities. If a generic response is required, the LLM responds to the user directly (Step 4.1). If specialised features from KG are required, the process moves to Step 4.2.
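A hedged sketch of Steps 3 and 4 in Python, assuming a simple keyword-based capability check and stand-in callables for the LLM and the KG pipeline; the paper does not publish this module's code, so the capability names and rules below are illustrative only.

# Step 3: analyse the prompt and identify required capabilities.
# Step 4: route to a direct LLM answer (4.1) or to the KG path (4.2).
CAPABILITIES = {
    "similar_article_finder": ("similar", "related article"),
    "sentiment_analysis": ("sentiment", "tone"),
    "fact_checking": ("verify", "fact-check", "is it true"),
    "topic_prediction": ("topic", "industry sector"),
}

def analyse_prompt(prompt: str) -> dict:
    """Lightly refactor the prompt and list the capabilities it needs."""
    cleaned = " ".join(prompt.split())
    needed = [cap for cap, keywords in CAPABILITIES.items()
              if any(k in cleaned.lower() for k in keywords)]
    return {"prompt": cleaned, "capabilities": needed}

def route(request: dict, llm, kg_pipeline):
    """Generic requests go straight to the LLM; specialised ones go to the KG."""
    if not request["capabilities"]:                 # Step 4.1 - generic response
        return llm(request["prompt"])
    return kg_pipeline(request)                     # Step 4.2 - KG-backed response

# Example usage with stand-in callables for the LLM and the KG pipeline:
answer = route(analyse_prompt("What is the sentiment of this article?"),
               llm=lambda p: f"LLM answer to: {p}",
               kg_pipeline=lambda r: f"KG pipeline handles: {r['capabilities']}")
print(answer)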

• Step 5: Llama-2 generates or invokes relevant Cypher instructions for Neo4j based on required capabilities.

[The interface layer handles translating the prompt into the graph query]
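A minimal sketch of Step 5, assuming a generate(prompt) callable that wraps Llama-2 and a local Neo4j instance; the schema hint, prompt wording, and connection details are my own assumptions for illustration.

# Step 5: have the LLM produce a Cypher query, then run it against Neo4j.
from neo4j import GraphDatabase

CYPHER_PROMPT = (
    "You are a Cypher expert. Given the graph schema "
    "(Article)-[:MENTIONS]->(Industry), write one Cypher query that answers:\n"
    "{question}\n"
    "Return only the query."
)

def answer_with_kg(question: str, generate,
                   uri="bolt://localhost:7687", auth=("neo4j", "password")):
    cypher = generate(CYPHER_PROMPT.format(question=question)).strip()
    # In production the generated query should be validated before execution.
    driver = GraphDatabase.driver(uri, auth=auth)
    with driver.session() as session:
        records = [dict(r) for r in session.run(cypher)]
    driver.close()
    return records  # raw KG insights, later formatted for the user in Steps 9-10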

• Step 9 and 10: LLM formats the insights for user-friendly presentation and provides a response to the user. The User (U) receives the curated data, ensuring only permitted information is accessed. Users have the option to offer feedback through a Feedback Loop (FB), which might guide the LLM’s subsequent interactions.

[The graph query result is post-processed before being shown to the user, i.e., users never see the KG directly]
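A hedged sketch of Steps 9 and 10, where the permission model and field names are assumptions for illustration rather than the paper's actual design.

# Steps 9-10: filter KG insights by the user's permissions, then have the LLM
# phrase them for presentation.
def present(records: list[dict], allowed_fields: set[str], generate) -> str:
    # Keep only the fields this user is permitted to see
    visible = [{k: v for k, v in r.items() if k in allowed_fields} for r in records]
    prompt = ("Summarise the following verified facts for the user in plain language, "
              f"without adding new information:\n{visible}")
    return generate(prompt)   # Step 10: curated, user-friendly response

# Feedback from the user (the FB loop) could then be stored alongside the prompt
# and the response to guide the LLM's subsequent interactions.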

