
Knowledge graphs as tools for explainable machine learning: A survey

Link https://doi.org/10.1016/j.artint.2021.103627

Abstract

This paper provides an extensive overview of the use of knowledge graphs in the context of Explainable Machine Learning. As of late, explainable AI has become a very active field of research by addressing the limitations of the latest machine learning solutions that often provide highly accurate, but hardly scrutable and interpretable decisions.

An increasing interest has also been shown in the integration of Knowledge Representation techniques in Machine Learning applications, mostly motivated by the complementary strengths and weaknesses that could lead to a new generation of hybrid intelligent systems. Following this idea, we hypothesise that knowledge graphs, which naturally provide domain background knowledge in a machine-readable format, could be integrated in Explainable Machine Learning approaches to help them provide more meaningful, insightful and trustworthy explanations.

6. Current challenges (and ideas to go forward)

Finally, we discuss a set of open challenges that we identified for knowledge-based explainable systems.

Knowledge graph maintenance  Explainable AI systems require completeness and accuracy. This means that an important challenge for the field of Knowledge Representation at scale is to increase information coverage and represent more knowledge explicitly across domains. Additionally, correctness and freshness of the information in large knowledge graphs are necessary, requiring not only the investigation of efficient approaches for knowledge graph evolution at scale [100], but also solutions to maintain high-quality cross-domain knowledge graphs without requiring expensive human labour, which can also lead to resources being discontinued. As already investigated [101], a centralised authoritative hub could be a potential solution to the problem.

[Needs the context to evaluate]

Identity management  Discrepancy and misalignment between resources of different knowledge graphs is a persistent issue in current KBX-systems. Managing identities is a prerequisite for knowledge-based explainable systems to use the available information efficiently and avoid undesirable, wide-ranging effects. While a number of principles exist for publishing and linking resources, a common agreement on what constitutes identical entities is still an open challenge. This also affects the widespread adoption of knowledge graphs in eXplainable AI, which cannot tolerate uncertainty over data quality. Solutions to this problem, partly investigated in [96], could be services that help data modellers and applications identify entities that refer to the same thing in the real world, or better guidelines on how to correctly use the different types of identity links (e.g. owl:sameAs, owl:equivalentClass, skos:exactMatch). Additionally, error detection and correction approaches to monitor and identify misuse should be investigated [102].

[Internal and external identifiers]
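To make the identity-link discussion above concrete, here is a minimal sketch in Python with rdflib showing how the three kinds of links mentioned (owl:sameAs, owl:equivalentClass, skos:exactMatch) can be asserted between resources of two graphs. The example IRIs are made up; only the predicates are standard vocabulary.

```python
# Minimal sketch (Python + rdflib): asserting the different identity links
# between resources of two knowledge graphs. The IRIs are hypothetical.
from rdflib import Graph, URIRef, Namespace
from rdflib.namespace import OWL, SKOS

EX = Namespace("http://example.org/kg/")         # hypothetical local KG
DBR = Namespace("http://dbpedia.org/resource/")  # DBpedia resources

g = Graph()

# Instance-level identity: the two IRIs denote the same real-world entity.
g.add((EX.Turing, OWL.sameAs, DBR.Alan_Turing))

# Class-level equivalence: the two classes have the same extension.
g.add((EX.Scientist, OWL.equivalentClass, URIRef("http://dbpedia.org/ontology/Scientist")))

# Vocabulary mapping: weaker, thesaurus-style "exact match" between concepts.
g.add((EX.ComputerScience, SKOS.exactMatch, DBR.Computer_science))

# A KBX-system could then follow these links to pull background knowledge
# from the external graph for every locally linked entity.
for s, p, o in g.triples((None, OWL.sameAs, None)):
    print(f"{s} is declared identical to {o}")
```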

Automated knowledge extraction from graphs  Knowledge acquisition from existing knowledge graphs is still an open challenge which deserves deeper investigation. We believe that there is an urgent need to investigate new heuristics that can deal with the scale of current knowledge graphs and consequently identify the correct portion of information in them automatically. This is an idea explored in [53], [103], but it requires more effort given the fast-growing nature of knowledge graphs. The application of novel network-analysis methodologies and graph-based strategies to understand the nature of the graphs at hand should also be explored. This would benefit KBX-systems in that both the computational costs and the human resource allocation would be significantly reduced.

Understanding the human role  An open challenge remains to understand the role of humans in KBX-systems, i.e. if and how much they should be involved in the process of generating explanations. Some KBX-systems have suggested that better performance is achieved when human users provide feedback to the system. In this sense, an idea to be investigated is the applicability of reinforcement learning and human-in-the-loop approaches in KBX-systems (so far unexplored), integrating principles and methodologies of hybrid intelligence and collaborative AI [104]. Human assessment should also be employed in the development of benchmarks for KBX-systems, currently lacking in the field, to allow a better understanding of what a “good explanation” is from a human perspective. This would also support a better characterisation of useful explanations in terms of types and satisfaction criteria.

From knowledge to meaning  Finally, the biggest challenge we mention is that of capturing meaning. The KBX-systems we analysed exploit knowledge graphs as silos of facts, from which relevant triples are aggregated to support or explain a given observation, without following any particular semantic structure. We argue that knowledge graphs capture much more information beyond simple facts, and that causal, modal or spatio-temporal relationships could be used to build complex narratives such as claims, ideas, stories, behaviours and experiences. By casting simple facts into coherent narratives through semantic models, machines would be able to capture the meaning of certain experiences as humans do, and therefore explain the events underlying them more coherently [105]. The study of how information from knowledge graphs can be manipulated and combined to support machines in grasping meaning will allow the development of a human-centric AI, where machines support human capabilities instead of replacing them [106].

[Context mappings can help in this integration of different KGs if complemented with equivalences (sameAs, equivalentTo) between entities and predicates]

2.1. Overview of explanations before AI

“The word explanation occurs so continually and holds so important a place in philosophy, that a little time spent in fixing the meaning of it will be profitably employed.” (John Stuart Mill, 1884)

Perhaps two centuries after Mill's wish, the success of precise but inscrutable models has pushed researchers from fields such as cognitive science, law and the social sciences to join forces with the Machine Learning community and work towards providing a unified view of the concept of explanation. Indeed, our view aligns with those studies arguing that explainable AI does need insights from the disciplines that have extensively discussed explanations over time [1], [12], [13], [14]. We refer the reader to the cited work for an in-depth discussion of the topic; here, we limit ourselves to identifying a working definition of explainability for our research by highlighting how the different disciplines have perceived the concept of explanation across time.

Throughout history, philosophers have looked at explanations as deductive or inductive situations where a set of initial elements (an event and some conditions) needed to be put in relation with a consequent phenomenon according to a set of empirical/metaphysical laws (see Aristotle's four causes of explanations,1 Mill's scientific explanations [15], Hempel's deductive-nomological model [16]). Psychologists have focused on defining explanations as cognitive-social processes (see folk psychology [17], belief-desire-intention models,2 script theory [18]), while linguists have focused on explanations as the process of transferring knowledge (explanations as conversations [19], argumentation theories [20], Grice's maxims [21]). Finally, with the advent of Artificial Intelligence, explanations have mostly been seen as processes where some initial facts and prior knowledge could be mapped to new knowledge under specific constraints (see rule-based models such as expert systems and inductive logics [22]).

While it is clear that a common agreement on a definition was never reached, what can be remarked here is that these disciplines do share a common abstract model for defining explanations, composed of laws, events linked by causal relationships, and circumstances that constrain the events. Similar abstract models have been identified by recent work (cf. [1] and [23] in Fig. 1). Following these, we will loosely refer to explanations as answers to why questions, in the form of “when X happens, then, due to a given set of circumstances C, Y will occur because of a given law L”. Having identified a unified working model, we can look at how all explanation components (events, circumstances, laws, causal relationships) can be structured as knowledge to be further used to generate explanations.

[Are contextual WHY questions actually questions about provenance? If phrased as BASED ON WHAT, it would be clearer]

2.2. Structuring knowledge in the form of large-scale graphs

One of the main benefits of structuring knowledge in the form of graphs instead of typical relational settings is flexibility towards the schema, which maintainers can define at a later stage and change over time. This allows more flexibility for data evolution, as well as the capturing of incomplete knowledge.
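A small sketch of this schema flexibility, assuming Python and rdflib; all IRIs are illustrative. Facts are recorded first, and schema statements are added later without migrating the existing data.

```python
# Minimal sketch (Python + rdflib): facts can be added before any schema exists,
# and schema statements can be introduced later. All IRIs are illustrative.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/kg/")
g = Graph()

# Day 1: record facts with no schema defined yet (incomplete knowledge is fine).
g.add((EX.loan42, EX.requestedBy, EX.alice))
g.add((EX.loan42, EX.amount, Literal(12000)))

# Day 30: the maintainers introduce a schema, retroactively typing the data.
g.add((EX.Loan, RDF.type, RDFS.Class))
g.add((EX.requestedBy, RDFS.domain, EX.Loan))
g.add((EX.loan42, RDF.type, EX.Loan))

print(g.serialize(format="turtle"))
```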
 
We additionally grouped them into three categories, i.e. “common-sense KGs” that contain knowledge about the everyday world, “factual KGs” containing knowledge about facts and events, and “domain KGs” that encode knowledge from a specific area (linguistics, biomedical, geographical etc.).
 
[types of knowledge found in KGs; another classification]

WikiData,11 also a free, collaborative project built on top of wiki content, additionally providing metadata about data provenance;

[Provenance, because they are different qualifiers]
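As a sketch of how that provenance can actually be retrieved, the snippet below (assuming the SPARQLWrapper package and the public Wikidata endpoint) asks for a statement together with the reference URL attached to it; the choice of item and property is only illustrative.

```python
# Sketch: retrieving a Wikidata statement together with its provenance
# (the reference URL attached to the statement node). The choice of item
# (Q155, Brazil) and property (P1082, population) is illustrative.
from SPARQLWrapper import SPARQLWrapper, JSON

query = """
SELECT ?population ?refURL WHERE {
  wd:Q155 p:P1082 ?statement .                                   # statement node
  ?statement ps:P1082 ?population .                              # asserted value
  OPTIONAL { ?statement prov:wasDerivedFrom/pr:P854 ?refURL . }  # its source
}
LIMIT 5
"""

endpoint = SPARQLWrapper("https://query.wikidata.org/sparql")
endpoint.setQuery(query)
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()

for row in results["results"]["bindings"]:
    print(row["population"]["value"], "<-", row.get("refURL", {}).get("value"))
```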

2.3. Explainable AI needs structured knowledge: practical examples

Knowledge graphs and the structured Web represent a valuable form of domain-specific, machine-readable knowledge, and the connected or centralised datasets available could serve as background knowledge for the AI system to better explain its decisions to its users.

3.1. Research questions and analytical framework

Table 2. Analytical toolkit to classify knowledge-based explainable systems

[Characteristics of the KG, of the ML model and of the explanation to be generated]

Resource in the ORKG -> https://www.orkg.org/orkg/comparison/R69680

4. Using knowledge graphs for explainable machine learning

4.1. Rule-based machine learning

Structured knowledge in the form of domain ontologies was investigated with the idea that it could support (or potentially replace) experts in this data interpretation step – cf. the seminal work of [39], which translates the outputs of a neural network into symbolic knowledge using a domain ontology in the form of Horn clauses.

[If they tried it with ontologies, then it is to be expected that they would try it with KGs]
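A toy sketch of the general idea described above, not the actual method of [39]: domain knowledge written as Horn clauses is applied a posteriori to the labels a network has predicted, deriving higher-level symbolic conclusions. All predicates and thresholds below are hypothetical.

```python
# Toy sketch: a domain ontology expressed as Horn clauses is applied after the
# fact to a network's predicted labels. All predicates are hypothetical.
def forward_chain(facts, rules):
    """Naive forward chaining over Horn clauses, each rule = (body, head)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

# Hypothetical domain ontology written as Horn clauses.
rules = [
    ({"has_fever", "has_cough"}, "respiratory_infection"),
    ({"respiratory_infection", "low_oxygen"}, "severe_case"),
]

# Hypothetical neural-network outputs, thresholded into symbolic facts.
nn_outputs = {"has_fever": 0.91, "has_cough": 0.87, "low_oxygen": 0.78}
facts = {label for label, score in nn_outputs.items() if score > 0.5}

# Derived facts now include 'respiratory_infection' and 'severe_case',
# which can be reported as the symbolic reading of the network's output.
print(forward_chain(facts, rules))
```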

A main feature of these systems, summarised in Table 4, is that the structured knowledge used to generate explanations is integrated a posteriori, i.e. once the outputs of the models have been obtained. With a few exceptions, these works are limited in that they rely on the manual selection of the knowledge graphs, requiring an expert to extract useful background knowledge from them in the form of statements.

[Afterwards]

The use of domain knowledge graphs, which later gave way to the use of factual knowledge, alongside explanations becoming more visual and fact-based, suggests that these systems moved from targeting an audience of domain experts who could understand articulated explanations to one where users need visual support to better understand the decisions and, consequently, trust them.

[Explanations so that users believe the conclusions]

4.3. Recommender systems

The use of knowledge graphs to provide more transparent results for models' outputs has recently experienced a take-up also in the area of recommender systems, with the goal of enhancing the users' experience in terms of satisfaction, trust, and loyalty. Most of the approaches are content-based, i.e. they consist of explaining a recommendation with entities from a given knowledge graph in the form of images or natural language sentences.

[Justifying a recommendation. This could be used in ads, as in "Why am I seeing this?"]
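A minimal sketch of the content-based idea, with made-up item-to-entity links: the recommendation is explained by the knowledge-graph entities it shares with items the user already consumed, which is exactly the kind of answer a "Why am I seeing this?" widget could show.

```python
# Minimal sketch: explain a recommendation through the KG entities it shares
# with items in the user's history. The item-to-entity links are made up.
item_entities = {
    "movie:Interstellar": {"dbr:Christopher_Nolan", "dbr:Space_exploration", "dbr:Hans_Zimmer"},
    "movie:Inception":    {"dbr:Christopher_Nolan", "dbr:Hans_Zimmer", "dbr:Dream"},
    "movie:Gravity":      {"dbr:Space_exploration", "dbr:Sandra_Bullock"},
}

def explain_recommendation(recommended, user_history):
    """Return the KG entities connecting the recommendation to the user's history."""
    shared = {
        liked: item_entities[recommended] & item_entities[liked]
        for liked in user_history
    }
    return {liked: ents for liked, ents in shared.items() if ents}

# "Why am I seeing Interstellar?": because of entities shared with liked items.
print(explain_recommendation("movie:Interstellar", ["movie:Inception", "movie:Gravity"]))
```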

4.4. Natural language applications

Taking inspiration from the social sciences, which argue that explaining also involves a social process of communicating information from an explainer to an explainee [74], a body of relevant work can be identified in natural language applications such as knowledge-based Question Answering (KB-QA), machine reading comprehension and Conversational AI in general, where knowledge graphs have mostly been used as background knowledge to answer common-sense knowledge questions in the form of images, speech and text.

4.5. Predictive and forecasting tasks

The last body of work we analyse is the use of knowledge graphs to interpret and explain predictive tasks such as loan applications, market analysis, traffic dynamics, etc. These systems rely on the idea that explanations can be derived by linking raw input data points to nodes of the graphs, which allows additional information about them to be retrieved through graph navigation.
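A small sketch of this graph-navigation idea, using a tiny made-up graph and networkx for convenience: the input data point is linked to a KG node, and the statements reachable from it are returned as supporting context for the prediction.

```python
# Sketch: an input data point is linked to a node of a (tiny, made-up) knowledge
# graph and its neighbourhood is collected as supporting context for a prediction.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("company:AcmeCorp", "sector:Retail", relation="operatesIn")
kg.add_edge("sector:Retail", "event:HolidaySeason", relation="affectedBy")
kg.add_edge("company:AcmeCorp", "place:Berlin", relation="headquarteredIn")

def explain_prediction(input_entity, graph, hops=2):
    """Collect the statements reachable within `hops` of the linked KG node."""
    context = []
    frontier = {input_entity}
    for _ in range(hops):
        next_frontier = set()
        for node in frontier:
            for _, neighbour, data in graph.out_edges(node, data=True):
                context.append((node, data["relation"], neighbour))
                next_frontier.add(neighbour)
        frontier = next_frontier
    return context

# E.g. a sales-forecast input row mentioning "AcmeCorp" is linked to the KG node,
# and the retrieved statements are shown alongside the model's prediction.
for triple in explain_prediction("company:AcmeCorp", kg):
    print(triple)
```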

5.1. What are the characteristics of current knowledge-based explanation systems?

A clear distinction in the type of knowledge graphs employed by a KBX-system can be seen depending on the task at hand. Common-sense knowledge graphs are employed to explain the behaviour of neural network-based systems for classification tasks such as image recognition and QA, while factual knowledge is employed for prediction and recommendation. The use of domain knowledge graphs is rather restricted to earlier systems, which use them to motivate their decision making in rule-based learning.

[Factual KGs, as OWA, explain]
