Editor
Moamar Sayed-Mouchaweh
Institute Mines-Telecom Lille Douai, Douai, France
ISBN 978-3-030-76408-1 e-ISBN 978-3-030-76409-8
https://doi.org/10.1007/978-3-030-76409-8
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
Explainable Artificial Intelligence (XAI) aims at producing explainable models that enable human users to understand, and appropriately trust, the results they obtain. The produced explanations reveal how a model functions, why it behaved as it did and how it will behave, why certain actions were taken or must be taken, how certain goals can be achieved, how the system reacts to particular inputs or actions, what causes a certain fault, and how that fault can be avoided in the future. Such explanations are increasingly necessary in multiple application domains, such as smart grids, autonomous cars, the smart factory or Industry 4.0, telemedicine, and healthcare, in particular within the context of digital transformation and cyber-physical systems.
This book gathers research contributions on the development and use of XAI techniques, in particular within the context of digital transformation and cyber-physical systems, and addresses the aforementioned challenges in applications such as healthcare, finance, cybersecurity, and document summarization.
The methods and techniques discussed cover different kinds of explainable models (transparent models, model-agnostic methods), evaluation layouts and criteria (user expertise level, expressive power, portability, computational complexity, accuracy, etc.), and major applications (energy, Industry 4.0, critical systems, telemedicine, finance, e-government, etc.). The goal is to give readers an overview of the advantages and limits of explainable models in different application domains. This highlights the benefits and requirements of using explainable models in each domain and guides readers in selecting the models best suited to their specific problem and conditions.
Making machine learning-based AI explainable faces several challenges. First, explanations must be adapted to different stakeholders (end users, policy makers, industries, utilities, etc.) with different levels of technical knowledge (managers, engineers, technicians, etc.) in different application domains. Second, an evaluation framework and standards are needed to measure the effectiveness of the provided explanations at both the human and the technical levels. For instance, such a framework must be able to verify that each explanation is consistent across similar predictions (similar observations) over time, is expressive enough to increase user confidence (trust) in the decisions made, promotes impartial and fair decisions, and improves user task performance.
Finally, the editor is very grateful to all authors and reviewers for their valuable contributions. He would also like to thank Mrs. Mary E. James for establishing the contract with Springer and supporting the editor in all organizational aspects. The editor hopes that this book will serve as a useful basis for further fruitful investigations by researchers and engineers, as well as a motivation and inspiration for newcomers to address the challenges related to explainable AI.
Moamar Sayed-Mouchaweh
Douai, France
Contents
Moamar Sayed-Mouchaweh
Riccardo Guidotti, Anna Monreale, Dino Pedreschi and Fosca Giannotti
Usef Faghihi, Sioui Maldonado Bouchard and Ismail Biskri
Joglas Souza and Carson K. Leung
Alaidine Ben Ayed, Ismail Biskri and Jean-Guy Meunier
Kirill I. Tumanov and Gerasimos Spanakis
Edward Verenich, M. G. Sarwar Murshed, Nazar Khan, Alvaro Velasquez and Faraz Hussain
Ehsan Hallaji, Ranim Aljoudi, Roozbeh Razavi-Far, Majid Ahmadi and Mehrdad Saif
Ranim Aljoudi, Ehsan Hallaji, Roozbeh Razavi-Far, Majid Ahmadi and Mehrdad Saif