
Michael Munn - Explainable AI for Practitioners (Early Release, Ch1&2/8)

Here you can read Michael Munn's Explainable AI for Practitioners (Early Release, Ch1&2/8) online for free: the full text of the book in English. Download the PDF or EPUB, or browse the description, cover, and reviews of this ebook. Year: 2022; publisher: O'Reilly Media, Inc.; genre: Computers.


Michael Munn - Explainable AI for Practitioners (Early Release, Ch1&2/8)
  • Book:
    Explainable AI for Practitioners (Early Release, Ch1&2/8)
  • Author:
    Michael Munn
  • Publisher:
    O'Reilly Media, Inc.
  • Genre:
    Computers
  • Year:
    2022
  • Rating:
    5 / 5

Explainable AI for Practitioners (Early Release, Ch1&2/8): summary, description and annotation

Below is the annotation, description, summary, or preface (depending on what the author of "Explainable AI for Practitioners (Early Release, Ch1&2/8)" provided). If you haven't found the information you need about the book, write in the comments and we will try to find it.

Most intermediate-level machine learning books focus on how to optimize models by increasing accuracy or decreasing prediction error, but this approach often overlooks the importance of being able to explain why and how your ML model makes the predictions that it does. This practical guide brings together best-in-class techniques for model interpretability and explains model predictions with a hands-on approach. Experienced ML practitioners will be able to apply these tools more easily in their daily workflow.

Michael Munn: author's other books


Who wrote Explainable AI for Practitioners (Early Release, Ch1&2/8)? Find the author's name and a list of all of the author's works, organized by series.

Explainable AI for Practitioners (Early Release, Ch1&2/8) — read the complete book (whole text) online for free

Below is the text of the book, divided into pages. The system saves your place at the last page you read, so you can conveniently read "Explainable AI for Practitioners (Early Release, Ch1&2/8)" online for free without having to search for where you left off each time. Set a bookmark, and you can return to the page where you stopped reading at any time.

[Cover photo: Explainable AI for Practitioners by Michael Munn and David Pitman]
Explainable AI for Practitioners

by Michael Munn and David Pitman

Copyright © 2022 Michael Munn, David Pitman, and O'Reilly Media, Inc. All rights reserved.

Printed in the United States of America.

Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.

O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://oreilly.com). For more information, contact our corporate/institutional sales department: 800-998-9938 or corporate@oreilly.com.

  • Acquisitions Editor: Rebecca Novack
  • Development Editor: Rita Fernando
  • Production Editor: Jonathon Owen
  • Copyeditor:
  • Proofreader:
  • Indexer:
  • Interior Designer: David Futato
  • Cover Designer: Karen Montgomery
  • Illustrator:
  • December 2022: First Edition
Revision History for the Early Release
  • 2022-07-20: First Release

See http://oreilly.com/catalog/errata.csp?isbn=9781098119133 for release details.

The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. Explainable AI for Practitioners, the cover image, and related trade dress are trademarks of O'Reilly Media, Inc.

The views expressed in this work are those of the authors, and do not represent the publisher's views. While the publisher and the authors have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the authors disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.

978-1-098-11913-3

[LSI]

Chapter 1. An Overview of Explainability
A Note for Early Release Readers

With Early Release ebooks, you get books in their earliest form (the authors' raw and unedited content as they write) so you can take advantage of these technologies long before the official release of these titles.

This will be the 2nd chapter of the final book. Please note that the GitHub repo will be made active later on.

If you have comments about how we might improve the content and/or examples in this book, or if you notice missing material within this chapter, please reach out to the editor at rfernando@oreilly.com.

Explainability has been a part of machine learning since the inception of AI. The very first AIs, rule-based chaining systems, were specifically constructed to provide a clear understanding of what led to a prediction. For many decades the field continued to pursue explainability as a key part of models, partly due to a focus on general AI but also to justify that the research was sane and on the right track, until the complexity of model architectures outpaced our ability to explain what was happening. After the invention of ML neurons and neural nets in the 1980s, research into explainability waned as researchers focused on surviving the first AI winter by turning to techniques that were explainable because they relied solely on statistical methods well proven in other fields. Explainability in its modern form (and what we largely focus on in this book) was revived, now as a distinct field of research, in the mid-2010s in response to the persistent question: "this model works really well, but how?"

In just a few years, the field has gone from obscurity to one of intense interest and investigation. Remarkably, many powerful explainability techniques have been invented, or repurposed from other fields, in the short time since. However, the rapid transition from theory to practice, and the increasing need for explainability from those who interact with ML, such as end users and business stakeholders, has led to growing confusion about the capability and extent of different methods. Many fundamental terms of explainability are routinely used to represent different, even contradictory, ideas, and it is easy for explanations to be misunderstood when practitioners rush to provide assurance that ML is working as expected. Even the terms "explainability" and "interpretability" are routinely swapped, despite having very different focuses. For example, while writing this book, we were asked by a knowledgeable industry organization to describe the explainable and interpretable capabilities of a system, but their definitions of explainability and interpretability were flipped compared to how the rest of the industry defines the terms! Recognizing this confusion, the purpose of this chapter is to provide a background and a common language for the chapters that follow.

What Are Explanations?

When a model makes a prediction, Explainable AI methods generate an explanation that gives insight into the model's behavior as it arrived at that prediction. When we seek explanations, we are trying to understand "why did X happen?" Figuring out this "why" can help us build a better comprehension of what influences a model, how that influence occurs, and where the model performs well (or fails). As part of building our own mental models, we often find a pure explanation to be unsatisfactory, so we are also interested in explanations that provide a counterfactual, or foil, to the original situation. Counterfactuals seek to provide an opposing, plausible scenario of why X did not happen. If we are seeking to explain "why did it rain today?" we may also try to find the counterfactual explanation for "why did it not rain today [in a hypothetical world]?" While our primary explanation for why it rained might include temperature, barometric pressure, and humidity, it may be easier to explain that it did not rain because there were no clouds in the sky, implying that clouds are part of an explanation for why it does rain.
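To make the idea concrete, here is a minimal sketch (not from the book) of a counterfactual search for a toy "will it rain?" classifier: starting from an input the model labels "rain," we nudge one feature until the prediction flips. The dataset, feature names, and step size are all hypothetical illustrations, not the authors' example.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data: columns are [cloud_cover_pct, humidity_pct].
X = np.array([[90, 80], [85, 75], [70, 90], [20, 20], [10, 40], [5, 30]])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = rain, 0 = no rain

model = LogisticRegression().fit(X, y)

def counterfactual(x, feature, step, max_steps=100):
    """Decrease one feature until the predicted class flips, if it ever does."""
    original = model.predict([x])[0]
    x_cf = np.array(x, dtype=float)
    for _ in range(max_steps):
        x_cf[feature] -= step
        if model.predict([x_cf])[0] != original:
            return x_cf  # smallest change (at this step size) that flips the prediction
    return None  # no counterfactual found within the search budget

x = [80.0, 70.0]                               # a day the model predicts "rain"
print(model.predict([x])[0])                   # 1
print(counterfactual(x, feature=0, step=5.0))  # cloud cover lowered until "no rain"

Real counterfactual methods search over many features at once and add plausibility constraints, but this loop captures the core idea: the counterfactual is a nearby input where the model's answer changes.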

We also often seek explanations that are causal, in the form of "X was predicted because of Y." These explanations are attractive because they give an immediate sense of what a counterfactual prediction would be: remove X and presumably the prediction will no longer be Y. It certainly sounds more definitive to say "it rains because there are clouds in the sky." However, this is not always true; rain can occur even with clear skies in some circumstances. Establishing causality with data-focused explanations is extremely difficult (even for time-series data), and no explainability techniques have been proposed that are both useful in practice and carry a high level of guarantee in their analysis. Instead, if you want to establish causal relationships within your model or data, we recommend you explore the field of interpretable, causal models.

Explainability Consumers

Understanding and using the results of Explainable AI can look very different depending on who is receiving the explanation. As a practitioner, for example, your needs from an explanation are very different from those of a non-technical individual who may be receiving an explanation as part of an ML system in production that they may not even know exists!

Understanding the primary types of users, or personas, will be helpful as you learn about different techniques so you can assess which will best suit your audience's needs. In Chapter 7, we will go into more detail about how to build good experiences for these different audiences with explainability.


Similar books «Explainable AI for Practitioners (Early Release, Ch1&2/8)»

Look at books similar to Explainable AI for Practitioners (Early Release, Ch1&2/8). We have selected literature similar in name and meaning in the hope of giving readers more options for finding new, interesting works they have not yet read.


Reviews about «Explainable AI for Practitioners (Early Release, Ch1&2/8)»

Discussion and reviews of the book Explainable AI for Practitioners (Early Release, Ch1&2/8), along with readers' own opinions. Leave your comments and write what you think about the work and its meaning. Explain exactly what you liked and what you didn't, and why.