Book cover of Explainable AI with Python
Leonida Gianfagna and Antonio Di Cecco
Explainable AI with Python
1st ed. 2021
Logo of the publisher
Leonida Gianfagna
Cyber Guru, Rome, Italy
Antonio Di Cecco
School of AI Italia, Pescara, Italy
ISBN 978-3-030-68639-0 e-ISBN 978-3-030-68640-6
https://doi.org/10.1007/978-3-030-68640-6
The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG

The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Contents
The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
L. Gianfagna, A. Di Cecco, Explainable AI with Python, https://doi.org/10.1007/978-3-030-68640-6_1
1. The Landscape
Leonida Gianfagna (1) and Antonio Di Cecco (2)
(1) Cyber Guru, Rome, Italy
(2) School of AI Italia, Pescara, Italy

Everyone knows that debugging is twice as hard as writing a program in the first place.

So if you're as clever as you can be when you write it, how will you ever debug it?

Brian Kernighan

This chapter covers:
  • What is Explainable AI in the context of Machine Learning?

  • Why do we need Explainable AI?

  • The big picture of how Explainable AI works

For our purposes, we place the birth of AI at the seminal work of Alan Turing (), in which the author posed the question "Can machines think?", and the later famous thought experiment proposed by Searle, called the Chinese Room.

The point is simple: suppose we have a black-box AI system that pretends to speak Chinese, in the sense that it can receive questions in Chinese and provide answers in Chinese. Assume also that this agent can pass a Turing test, meaning it is indistinguishable from a real person who speaks Chinese. Would we be fine saying that this AI system is capable of speaking Chinese? Or do we want more? Do we want the black box to explain itself, clarifying some Chinese language grammar?

So, the root of Explainable AI lies at the very beginning of Artificial Intelligence, albeit not in its current form as a specific discipline. The key to trusting the system as a real Chinese speaker would be to make the system less opaque and explainable, as a further requirement beyond getting proper answers.

Jumping to our days, it is worth mentioning the statement of GO champion Fan Hui commenting on the famous 37th move of AlphaGo, the software developed by Google to play GO, which defeated the Korean champion Lee Sedol in March 2016 with a historic result: "It's not a human move. I've never seen a human play such a move" (Metz ). GO is known as a computationally complex game, more complex than chess, and before this result the common understanding was that it was not a game a machine could play successfully. But for our purposes, and to start this journey, we need to focus on Fan Hui's quoted statement. The GO champion could not make sense of the move even after reviewing the entire match; he recognized it as brilliant, but he had no way to provide an explanation. So we have an AI system (AlphaGo) that performed very well (defeating the GO champion) but no explanation of how it worked to win the game; that is where Explainable AI, inside the wider field of Machine Learning and Artificial Intelligence, starts to play a critical role.

Before presenting the full landscape, we will give some examples that are less sensationalistic but more practical for understanding what we mean when we say that most current Machine Learning models are opaque and not explainable. And the common thread of the book will be to learn, in practice and leveraging different methods, how to make ML models explainable, that is, to answer the questions What, How, and Why about the results.

1.1 Examples of What Explainable AI Is

Explainable AI (aka XAI) is more than just a buzzword, but it is not easy to provide a definition that covers the different angles from which the term can be viewed. Basically speaking, XAI is a set of methods and tools that can be adopted to make ML models understandable to human beings, in terms of providing explanations of the results those models produce.
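To make this definition concrete, here is a minimal sketch (not from the book; the dataset and feature names are invented for illustration) of a model that is understandable by construction: a shallow decision tree can print its own decision rules, which is exactly the kind of answer to "why" that XAI methods try to recover from opaque models.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented toy data: [hours_studied, hours_slept] -> pass (1) / fail (0)
X = [[1, 4], [2, 5], [8, 7], [9, 8], [7, 6], [1, 8]]
y = [0, 0, 1, 1, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# An intrinsically interpretable model explains itself: the printed
# rules tell a human exactly why any given prediction was made.
print(export_text(tree, feature_names=["hours_studied", "hours_slept"]))
```

For a black-box model (a deep neural network, say), no such rule listing exists, and the XAI methods in this book are needed to produce one after the fact.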

We'll start with some examples to set the context. In particular, we will go through three easy cases that show different but fundamental aspects of Explainable AI to keep in mind for the rest of the book:
  • The first one is about the learning phase.

  • The second example is more on knowledge discovery.

  • The third introduces the argument of reliability and robustness against external attacks to the ML model.

1.1.1 Learning Phase

One of the most brilliant successes of modern Deep Learning techniques over the traditional approach comes from computer vision. We can train a convolutional neural network (CNN) to understand the difference between different classes of labelled pictures. The applications are probably infinite: we can train a model to discriminate between different kinds of pneumonia X-ray images or teach it to translate sign language into speech. But are the results truly reliable?
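The building block of such a network, the convolution, can be sketched in a few lines of NumPy (an illustrative toy, not the book's code): a small kernel slides over the image, and its response highlights local patterns such as edges, which deeper layers then combine into class evidence.

```python
import numpy as np

def conv2d(image, kernel):
    # "Valid" convolution (strictly, cross-correlation, as in most
    # deep learning frameworks): slide the kernel over the image and
    # take the sum of elementwise products at each position.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 6x6 image: dark left half, bright right half.
img = np.zeros((6, 6))
img[:, 3:] = 1.0

# A Sobel-style kernel that responds to vertical edges.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

response = conv2d(img, sobel_x)
# The response is strongest where the dark/bright boundary lies and
# zero over the flat regions.
print(response)
```

A trained CNN learns many such kernels from data instead of hand-designing them, which is precisely why its internal reasoning is hard to inspect.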

Let's follow a famous toy task in which we have to classify pictures of wolves and dogs (Fig. 1.1).
Fig. 1.1 ML classification of wolves and dogs (Singh )

After the training, the algorithm learned to distinguish the classes with remarkable accuracy: only one misclassification over 100 images! But if we use an Explainable AI method to ask the model "Why have you predicted wolf?", the answer will be, with a little surprise, "because there is snow!" (Ribeiro et al. ).
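The core idea behind this kind of explanation (introduced by Ribeiro et al. and covered in detail later in the book) can be sketched in pure NumPy. Everything below is invented for illustration: a hypothetical black box that, like the wolf classifier, secretly keys on a "snow" feature; the real method also weights the perturbed samples by their proximity to the instance being explained, which we omit for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black box: scores "wolf" from two binary features of an
# image -- an animal-like shape (f0) and snow in the background (f1).
# It secretly relies on the snow feature, mimicking the failure above.
def black_box(features):
    return 0.1 * features[:, 0] + 0.9 * features[:, 1]

# Instance to explain: animal present, snow present.
instance = np.array([1.0, 1.0])

# Perturb the instance by randomly switching its features off and on,
# then query the black box on each perturbed sample.
mask = rng.integers(0, 2, size=(200, 2)).astype(float)
perturbed = mask * instance
scores = black_box(perturbed)

# Fit an interpretable linear surrogate: scores ~ perturbed @ w.
w, *_ = np.linalg.lstsq(perturbed, scores, rcond=None)

# w[1] (snow) dominates w[0] (animal): "wolf because there is snow".
print(w)
```

The surrogate's weights are the explanation: they tell us which features the opaque model actually used near this instance, exposing the snow shortcut.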