Link https://doi.org/10.1016/j.artint.2021.103627
Abstract
This paper provides an extensive overview of the use of knowledge graphs in the context of Explainable Machine Learning. As of late, explainable AI has become a very active field of research by addressing the limitations of the latest machine learning solutions that often provide highly accurate, but hardly scrutable and interpretable decisions.
An increasing interest has also been shown in the integration of Knowledge Representation techniques in Machine Learning applications, mostly motivated by the complementary strengths and weaknesses that could lead to a new generation of hybrid intelligent systems. Following this idea, we hypothesise that knowledge graphs, which naturally provide domain background knowledge in a machine-readable format, could be integrated into Explainable Machine Learning approaches to help them provide more meaningful, insightful and trustworthy explanations.
6. Current challenges (and ideas to go forward)
Finally, we discuss a set of open challenges that we identified for knowledge-based explainable systems.
Knowledge graph maintenance Explainable AI systems require completeness and accuracy. This means that an important challenge for the field of Knowledge Representation is to scale, increasing information coverage and representing more knowledge explicitly across domains. Additionally, correctness and freshness of the information in large knowledge graphs are necessary, requiring not only the investigation of efficient approaches for knowledge graph evolution at scale [100], but also solutions to maintain high-quality cross-domain knowledge graphs without requiring expensive human labour, which can also lead to resources being discontinued. As already investigated [101], a centralised authoritative hub could be a potential solution to the problem.
[The context is needed to evaluate this]
Identity management Discrepancy and misalignment between resources of different knowledge graphs is a persistent issue in current KBX-systems. Managing identities is a prerequisite for knowledge-based explainable systems to efficiently use the available information and avoid undesirable, wide-ranging effects. While a number of principles exist for publishing and linking resources, a common agreement on what constitutes identical entities is still an open challenge. This also affects the widespread adoption of knowledge graphs in eXplainable AI, which cannot tolerate uncertainty over data quality. Solutions to this problem, partly investigated [96], could be services that help data modellers and applications identify resources referring to the same real-world entity, or better guidelines on how to correctly use the different types of identity links (e.g. owl:sameAs, owl:equivalentClass, skos:exactMatch), as sketched below. Additionally, error detection and correction approaches to monitor and identify misuse should be investigated [102].
[Internal and external identifiers]
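To make the different identity links concrete, here is a minimal sketch (using rdflib; all URIs and facts are hypothetical, not from any real dataset) of how owl:sameAs and skos:exactMatch statements between two knowledge graphs could be asserted and then naively resolved when collecting facts about an entity.

```python
# A minimal sketch (rdflib, hypothetical URIs) of asserting identity links
# between two knowledge graphs and naively resolving owl:sameAs when
# collecting facts about an entity.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import OWL, SKOS, RDFS

EX_A = Namespace("http://example.org/kg-a/")
EX_B = Namespace("http://example.org/kg-b/")

g = Graph()

# The "same" real-world entity described in two graphs.
g.add((EX_A.Turing, RDFS.label, Literal("Alan Turing")))
g.add((EX_B.alan_turing, EX_B.fieldOfWork, Literal("Computer science")))

# Different identity links carry different semantics:
# owl:sameAs asserts strict identity of individuals,
# skos:exactMatch only asserts a close mapping between concepts.
g.add((EX_A.Turing, OWL.sameAs, EX_B.alan_turing))
g.add((EX_A.Scientist, SKOS.exactMatch, EX_B.Researcher))

def merged_description(graph: Graph, entity: URIRef) -> list:
    """Collect triples about an entity and everything declared owl:sameAs it."""
    identical = {entity}
    for _, _, o in graph.triples((entity, OWL.sameAs, None)):
        identical.add(o)
    for s, _, _ in graph.triples((None, OWL.sameAs, entity)):
        identical.add(s)
    facts = []
    for e in identical:
        facts.extend(graph.triples((e, None, None)))
    return facts

for s, p, o in merged_description(g, EX_A.Turing):
    print(s, p, o)
```

The semantics are exactly where misuse tends to creep in: owl:sameAs licenses merging all facts about the linked resources, while skos:exactMatch does not.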
Automated knowledge extraction from graphs Knowledge acquisition from existing knowledge graphs is still an open challenge which deserves deeper investigation. We believe that there is an urgent need to investigate new heuristics that can deal with the scale of current knowledge graphs and consequently identify the correct portion of information in them automatically. This idea was explored in [53], [103], but requires more effort given the fast-growing nature of knowledge graphs. The application of novel network-analysis methodologies and graph-based strategies to understand the nature of the graphs at hand should also be explored. This would benefit KBX-systems in that both the computational costs and the human resource allocation would be significantly reduced.
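As a flavour of one such heuristic, the following sketch (using networkx on a toy set of triples, not any real knowledge graph) extracts the k-hop neighbourhood around the entities mentioned in the instance to be explained, which is one simple way a KBX-system could automatically select a relevant portion of a large graph.

```python
# A sketch of a simple heuristic for selecting the "relevant portion" of a
# knowledge graph: extract the k-hop neighbourhood around seed entities.
# The toy triples below are illustrative only.
import networkx as nx

triples = [
    ("Aspirin", "treats", "Headache"),
    ("Aspirin", "interactsWith", "Warfarin"),
    ("Warfarin", "treats", "Thrombosis"),
    ("Headache", "symptomOf", "Migraine"),
    ("Migraine", "studiedIn", "Neurology"),
]

# Represent the KG as a directed multigraph with the predicate stored on the edge.
kg = nx.MultiDiGraph()
for s, p, o in triples:
    kg.add_edge(s, o, predicate=p)

def k_hop_subgraph(graph: nx.MultiDiGraph, seeds, k: int = 2) -> nx.MultiDiGraph:
    """Return the subgraph induced by nodes within k hops of any seed entity,
    ignoring edge direction (a common relaxation for explanation retrieval)."""
    undirected = graph.to_undirected(as_view=True)
    keep = set()
    for seed in seeds:
        if seed in graph:
            lengths = nx.single_source_shortest_path_length(undirected, seed, cutoff=k)
            keep.update(lengths)
    return graph.subgraph(keep).copy()

# Entities linked from the input we want to explain (e.g. a drug recommendation).
relevant = k_hop_subgraph(kg, seeds=["Aspirin"], k=2)
for s, o, data in relevant.edges(data=True):
    print(s, data["predicate"], o)
```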
Understanding the human role An open challenge remains to understand the role of humans in KBX-systems, i.e. if and how much they should be involved in the process of generating explanations. Some KBX-systems have suggested that better performance is achieved when human users provide feedback to the system. In this sense, an idea to be investigated is the applicability of reinforcement learning and human-in-the-loop approaches in KBX-systems (so far unexplored), integrating principles and methodologies of hybrid intelligence and collaborative AI [104]. Human assessment should also be employed in the development of benchmarks for KBX-systems, currently lacking in the field, to allow a better understanding of what makes a “good explanation” from a human perspective. This would also support a better characterisation of useful explanations in terms of types and satisfaction criteria.
From knowledge to meaning Finally, the biggest challenge we mention is that of capturing meaning. The KBX-systems we analysed exploit knowledge graphs as silos of facts, from which relevant triples are aggregated to support or explain a given observation, without following any particular semantic structure. We argue that knowledge graphs capture much more information beyond simple facts, and that causal, modal or spatio-temporal relationships could be used to build complex narratives such as claims, ideas, stories, behaviours and experiences. By casting simple facts into coherent narratives through semantic models, machines would be able to capture the meaning of certain experiences as humans do, and therefore explain the events underlying them more coherently [105]. The study of how information from knowledge graphs can be manipulated and combined to support machines in encompassing meaning will allow the development of a human-centric AI, where machines support human capabilities instead of replacing them [106].
[Context mappings could help in this integration of different KGs if complemented with equivalences (sameAs, equivalentTo) between entities and predicates]
2.1. Overview of explanations before AI
“The word explanation occurs so continually and holds so important a place in philosophy, that a little time spent in fixing the meaning of it will be profitably employed.” (John Stuart Mill, 1884)
Perhaps two centuries after Mill's wish, the success of precise but inscrutable models has pushed researchers from fields such as cognitive science, law and the social sciences to join forces with the Machine Learning community and work towards providing a unified view over the concept of explanation. Indeed, our view aligns with those studies arguing that explainable AI does need insights from the disciplines that have extensively discussed explanations over time [1], [12], [13], [14]. We refer the reader to the cited work for an in-depth discussion on the topic; here, we limit ourselves to identifying a working definition of explainability for our research by highlighting how the different disciplines have perceived the concept of explanation across time.
Throughout history, philosophers have looked at explanations as deductive or inductive situations where a set of initial elements (an event and some conditions) needs to be put into relation with a consequent phenomenon according to a set of empirical/metaphysical laws (see Aristotle's four causes of explanations,1 Mill's scientific explanations [15], Hempel's deductive-nomological model [16]). Psychologists have focused on defining explanations as cognitive-social processes (see folk psychology [17], belief-desire-intention models,2 script theory [18]), while linguists have focused on explanations as the process of transferring knowledge (explanations as conversations [19], argumentation theories [20], Grice's maxims [21]). Finally, with the advent of Artificial Intelligence, explanations have mostly been seen as processes where some initial facts and prior knowledge can be mapped to new knowledge under specific constraints (see rule-based models such as expert systems and inductive logics [22]).
While it is clear that a common agreement on a definition was never reached, what can be remarked here is that the disciplines do share a common abstract model for defining explanations, composed of a law, events linked by causal relationships, and circumstances that constrain the events. Similar abstract models have been identified by recent work (cfr. [1] and [23] in Fig. 1). Following these, we will loosely refer to explanations as answers to why-questions, in the form of “when X happens, then, due to a given set of circumstances C, Y will occur because of a given law L” (a minimal encoding of this template is sketched below). Having identified a unified working model, we can look at how all the explanation components (events, circumstances, law, causal relationships) can be structured as knowledge to be further used to generate explanations.
[Would the contextual WHY-type questions be questions about Provenance? If phrased as BASED ON WHAT, it would be clearer]
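As a purely illustrative encoding of this working model, the following sketch expresses the event, circumstances, law and outcome as a small Python data structure and renders the why-answer template above; the field names and example values are ours, not from the surveyed literature.

```python
# A minimal sketch of the abstract explanation model
# ("when X happens, then, due to circumstances C, Y occurs because of law L"),
# as a data structure a KBX-system could populate with entities and relations
# retrieved from a knowledge graph. All names and values are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Explanation:
    event: str                      # X: the observed antecedent event
    outcome: str                    # Y: the phenomenon to be explained
    law: str                        # L: the general rule or regularity invoked
    circumstances: List[str] = field(default_factory=list)  # C: constraining conditions

    def render(self) -> str:
        conditions = ", ".join(self.circumstances) or "the given conditions"
        return (f"When {self.event} happens, then, due to {conditions}, "
                f"{self.outcome} will occur because of {self.law}.")

example = Explanation(
    event="heavy rainfall is recorded",
    outcome="traffic congestion on the ring road",
    law="the known correlation between rainfall and slower traffic flow",
    circumstances=["rush hour", "a closed lane"],
)
print(example.render())
```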
2.2. Structuring knowledge in the form of large-scale graphs
WikiData,11 also a free, collaborative project built on top of wiki contents, additionally providing metadata about data provenance (see the query sketch below);
[Provenance, because they are different qualifiers]
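As an illustration of this provenance metadata, the sketch below queries Wikidata's public SPARQL endpoint for the references attached to a statement, following the statement/reference model (the p:, prov: and pr: prefixes predefined by the endpoint) as we understand it; the choice of Q84 (London) and P1082 (population) is only an example.

```python
# A sketch of inspecting Wikidata's statement-level provenance through its
# public SPARQL endpoint. The entity (Q84, London) and property (P1082,
# population) are an illustrative pick only.
from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
SELECT ?statement ?referenceURL WHERE {
  wd:Q84 p:P1082 ?statement .                 # population statements about London
  ?statement prov:wasDerivedFrom ?reference . # provenance node of each statement
  ?reference pr:P854 ?referenceURL .          # P854 = "reference URL"
}
LIMIT 5
"""

sparql = SPARQLWrapper("https://query.wikidata.org/sparql",
                       agent="kbx-notes-example/0.1")
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["statement"]["value"], "derived from", row["referenceURL"]["value"])
```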
2.3. Explainable AI needs structured knowledge: practical examples
Knowledge graphs and the structured Web represent a valuable form of domain-specific, machine-readable knowledge, and the connected or centralised datasets available could serve as background knowledge for the AI system to better explain its decisions to its users.
3.1. Research questions and analytical framework
4. Using knowledge graphs for explainable machine learning
4.1. Rule-based machine learning
Structured knowledge in the form of domain ontologies was investigated with the idea that it could support (or potentially replace) experts in this data interpretation step – cfr. the seminal work of [39] to translate the outputs of a neural network into symbolic knowledge using a domain ontology in the form of Horn clauses.
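To give a flavour of the general idea only (this is not the actual method of [39]), the following toy sketch uses Horn-like rules over a hypothetical domain vocabulary to turn a network's numeric outputs into symbolic statements via naive forward chaining; all class names, thresholds and rules are invented for illustration.

```python
# A toy sketch of the general idea only (not the actual method of [39]):
# Horn-like rules over a small domain vocabulary translate a network's numeric
# outputs into symbolic statements that can be read as an explanation.
# Class names, thresholds and rules are all hypothetical.
from typing import Callable, Dict, List, Tuple

# A Horn clause here is (body, head): if every body atom holds, the head holds.
Rule = Tuple[List[str], str]

RULES: List[Rule] = [
    (["HighGlucose", "HighBMI"], "AtRiskOfDiabetes"),
    (["AtRiskOfDiabetes"], "ShouldConsultEndocrinologist"),
]

# Map raw network outputs (feature scores) to ground atoms of the vocabulary.
GROUNDING: Dict[str, Callable[[Dict[str, float]], bool]] = {
    "HighGlucose": lambda out: out["glucose_score"] > 0.8,
    "HighBMI": lambda out: out["bmi_score"] > 0.7,
}

def forward_chain(network_output: Dict[str, float]) -> List[str]:
    """Derive all symbolic facts entailed by the rules (naive forward chaining)."""
    facts = {atom for atom, test in GROUNDING.items() if test(network_output)}
    changed = True
    while changed:
        changed = False
        for body, head in RULES:
            if head not in facts and all(atom in facts for atom in body):
                facts.add(head)
                changed = True
    return sorted(facts)

print(forward_chain({"glucose_score": 0.9, "bmi_score": 0.75}))
# -> ['AtRiskOfDiabetes', 'HighBMI', 'HighGlucose', 'ShouldConsultEndocrinologist']
```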
A main feature of these systems, summarised in Table 4, is that the structured knowledge used to generate explanations is integrated a posteriori, i.e. once the outputs of the models have been obtained. With a few exceptions, these works are limited in that they rely on the manual selection of the knowledge graphs, requiring an expert to extract useful background knowledge from them in the form of statements.
[Afterwards]
The initial use of domain knowledge graphs, which later gave way to the use of factual knowledge, alongside explanations becoming more visual and fact-based, suggests that these systems moved from targeting an audience of domain experts that could understand articulated explanations, to one where users need visual support to better understand the systems' decisions and, consequently, trust them.
[Explanations so that users believe the conclusions]
4.3. Recommender systems
The use of knowledge graphs to provide more transparent results for models' outputs has recently experienced a take-up also in the area of recommender systems, with the goal of enhancing the users' experience in terms of satisfaction, trust, and loyalty. Most of the approaches are content-based, i.e. they consist of explaining a recommendation with entities from a given knowledge graph in the form of images or natural language sentences (a path-based example is sketched below).
[Justifying a recommendation. This could be used in ads with "Why am I seeing this?"]
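A minimal sketch of one such content-based pattern, assuming a toy knowledge graph held in networkx: find a path connecting an item the user already liked to the recommended item and verbalise it as an explanation. The triples and the verbalisation template are illustrative only.

```python
# A sketch of a common content-based pattern: find a KG path between an item
# the user liked and the recommended item, and verbalise it as an explanation.
# The toy triples are illustrative only.
import networkx as nx

triples = [
    ("The Matrix", "directedBy", "Lana Wachowski"),
    ("Lana Wachowski", "directed", "Cloud Atlas"),
    ("The Matrix", "hasGenre", "Science Fiction"),
    ("Cloud Atlas", "hasGenre", "Science Fiction"),
]

kg = nx.Graph()  # undirected for path finding; predicates kept on the edges
for s, p, o in triples:
    kg.add_edge(s, o, predicate=p)

def explain_recommendation(graph: nx.Graph, liked: str, recommended: str) -> str:
    """Verbalise the shortest KG path connecting a liked item to a recommendation."""
    try:
        path = nx.shortest_path(graph, liked, recommended)
    except nx.NetworkXNoPath:
        return f"We recommend {recommended}."
    hops = []
    for a, b in zip(path, path[1:]):
        hops.append(f"{a} --{graph.edges[a, b]['predicate']}-- {b}")
    return f"Because you liked {liked}: " + " ; ".join(hops)

print(explain_recommendation(kg, "The Matrix", "Cloud Atlas"))
```

Real systems typically rank many candidate paths rather than take the shortest one, but the shape of the explanation is the same.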
4.4. Natural language applications
Taking inspiration from the social sciences, which argue that explaining also involves a social process of communicating information from an explainer to an explainee [74], a body of relevant work can be identified in natural language applications such as knowledge-based Question Answering (KB-QA), machine reading comprehension and Conversational AI in general, where knowledge graphs have been used mostly as background knowledge to answer common-sense knowledge questions in the form of images, speech and text.
4.5. Predictive and forecasting tasks
The last body of work we analyse is the use of knowledge graphs to interpret and explain predictive tasks such as loan applications, market analysis, traffic dynamics, etc. These systems rely on the idea that explanations can be derived by linking raw input data points to nodes of the graphs, allowing additional information about them to be retrieved through graph navigation.
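A minimal sketch of this general pattern, with hypothetical data: a raw input field (the employer named on a loan application) is linked to a knowledge-graph node by naive label matching, and the node's one-hop neighbourhood is retrieved as supporting facts to accompany the prediction.

```python
# A sketch of linking a raw input field to a knowledge-graph node and
# navigating its one-hop neighbourhood to surface facts that can accompany a
# prediction. Entities, labels and the matching rule are hypothetical.
from typing import Dict, List, Optional, Tuple

Triple = Tuple[str, str, str]

KG: List[Triple] = [
    ("ex:AcmeCorp", "rdfs:label", "Acme Corp"),
    ("ex:AcmeCorp", "ex:industry", "ex:Manufacturing"),
    ("ex:AcmeCorp", "ex:employees", "12000"),
    ("ex:Manufacturing", "rdfs:label", "Manufacturing"),
]

def link_entity(surface_form: str, kg: List[Triple]) -> Optional[str]:
    """Naive entity linking: exact match on rdfs:label (real systems do far more)."""
    for s, p, o in kg:
        if p == "rdfs:label" and o.lower() == surface_form.lower():
            return s
    return None

def one_hop_facts(node: str, kg: List[Triple]) -> List[Triple]:
    """Retrieve the facts directly attached to a node (simple graph navigation)."""
    return [(s, p, o) for s, p, o in kg if s == node or o == node]

application: Dict[str, str] = {"applicant": "J. Doe", "employer": "Acme Corp"}
prediction = "loan approved"  # placeholder for the model's actual output

node = link_entity(application["employer"], KG)
evidence = one_hop_facts(node, KG) if node else []
print(f"Prediction: {prediction}")
for fact in evidence:
    print("  supporting fact:", fact)
```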
5.1. What are the characteristics of current knowledge-based explanation systems?
A clear distinction on the type of knowledge graphs employed by a KBX-system can be seen depending on the task at hand. Common-sense knowledge graphs are employed to explain behaviours of neural network-based systems for classification tasks such as image recognition and QA, while factual knowledge is employed for prediction and recommendation. The use of domain knowledge graphs is rather restricted to earlier systems to motivate their decision making in rule-based learning.
[The factual ones, as OWA, explain]
Labels: deep learning, embeddings, LLM