
Ajay Thampi - Interpretable AI: Building explainable machine learning systems


Ajay Thampi Interpretable AI: Building explainable machine learning systems
  • Book: Interpretable AI: Building explainable machine learning systems
  • Author: Ajay Thampi
  • Publisher: Manning
  • Year: 2022

Interpretable AI: Building explainable machine learning systems: summary and description


AI doesn't have to be a black box. These practical techniques help shine a light on your model's mysterious inner workings. Make your AI more transparent, and you'll improve trust in your results, combat data leakage and bias, and ensure compliance with legal requirements.
In Interpretable AI, you will learn:
  • Why AI models are hard to interpret
  • Interpreting white-box models such as linear regression, decision trees, and generalized additive models
  • Partial dependence plots, LIME, SHAP, Anchors, and other techniques such as saliency mapping, network dissection, and representational learning
  • What fairness is and how to mitigate bias in AI systems
  • Implementing robust AI systems that are GDPR-compliant
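White-box models such as linear regression are interpretable because their learned parameters are the explanation: each coefficient states how much the prediction moves per unit of a feature. As a minimal sketch (the toy data and function name here are illustrative, not taken from the book), an ordinary least-squares fit in plain Python makes this concrete:

```python
# A white-box model: simple linear regression fit by ordinary least squares.
# Its two learned parameters ARE the model's full explanation of itself.
def fit_simple_linear_regression(xs, ys):
    """Fit y = intercept + slope * x by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Toy data where the target rises by 2 per unit of the feature.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]
intercept, slope = fit_simple_linear_regression(xs, ys)
print(f"y = {intercept:.1f} + {slope:.1f} * x")  # prints "y = 1.0 + 2.0 * x"
```

Interpreting the fitted model requires no extra machinery: the slope directly answers "what does one more unit of x do to y?", which is exactly the transparency that deep models lack.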
Interpretable AI opens up the black box of your AI models. It teaches cutting-edge techniques and best practices that can make even complex AI systems interpretable. Each method is easy to implement with just Python and open source libraries. You'll learn to identify when you can utilize models that are inherently transparent and how to mitigate opacity when your problem demands the power of a hard-to-interpret deep learning model.
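To give a flavor of the model-agnostic techniques mentioned above, here is a hedged sketch of a partial dependence computation in plain Python (the function and the toy "black box" are invented for illustration; in practice libraries such as scikit-learn provide this). The idea: hold one feature at a fixed grid value for every row, and average the model's predictions to see that feature's marginal effect:

```python
# Sketch of partial dependence: average the black-box model's prediction
# over the dataset while clamping one feature to each grid value in turn.
def partial_dependence(model, data, feature_index, grid):
    curve = []
    for value in grid:
        total = 0.0
        for row in data:
            modified = list(row)
            modified[feature_index] = value  # clamp the feature of interest
            total += model(modified)
        curve.append(total / len(data))      # average prediction at this value
    return curve

# Toy "black box" with an interaction between two features.
model = lambda x: 2 * x[0] + x[0] * x[1]
data = [[1, 0], [1, 1], [1, 2]]              # feature 1 averages to 1
pd_curve = partial_dependence(model, data, 0, [0, 1, 2])
print(pd_curve)  # [0.0, 3.0, 6.0]: the marginal effect of feature 0
```

Because the method only calls the model as a function, it works for any predictor, which is what "model-agnostic" means in this context.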
Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.
About the technology
It's often difficult to explain how deep learning models work, even for the data scientists who create them. Improving transparency and interpretability in machine learning models minimizes errors, reduces unintended bias, and increases trust in the outcomes. This unique book contains techniques for looking inside black box models, designing accountable algorithms, and understanding the factors that cause skewed results.
About the book
Interpretable AI teaches you to identify the patterns your model has learned and why it produces its results. As you read, you'll pick up algorithm-specific approaches, like interpreting regression and generalized additive models, along with tips to improve performance during training. You'll also explore methods for interpreting complex deep learning models where some processes are not easily observable. AI transparency is a fast-moving field, and this book simplifies cutting-edge research into practical methods you can implement with Python.
What's inside
  • Techniques for interpreting AI models
  • Counteracting errors from bias, data leakage, and concept drift
  • Measuring fairness and mitigating bias
  • Building GDPR-compliant AI systems
About the reader
For data scientists and engineers familiar with Python and machine learning.
About the author
Ajay Thampi is a machine learning engineer focused on responsible AI and fairness.
Table of Contents
PART 1 INTERPRETABILITY BASICS
1 Introduction
2 White-box models
PART 2 INTERPRETING MODEL PROCESSING
3 Model-agnostic methods: Global interpretability
4 Model-agnostic methods: Local interpretability
5 Saliency mapping
PART 3 INTERPRETING MODEL REPRESENTATIONS
6 Understanding layers and units
7 Understanding semantic similarity
PART 4 FAIRNESS AND BIAS
8 Fairness and mitigating bias
9 Path to explainable AI


inside front cover

[Figure: The process of building a robust AI system]

Interpretable AI

Building explainable machine learning systems

Ajay Thampi

To comment go to liveBook


Manning

Shelter Island

For more information on this and other Manning titles go to

www.manning.com

Copyright

For online information and ordering of these and other Manning books, please visit www.manning.com. The publisher offers discounts on these books when ordered in quantity.

For more information, please contact

Special Sales Department

Manning Publications Co.

20 Baldwin Road

PO Box 761

Shelter Island, NY 11964

Email: orders@manning.com

© 2022 by Manning Publications Co. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by means electronic, mechanical, photocopying, or otherwise, without prior written permission of the publisher.

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in the book, and Manning Publications was aware of a trademark claim, the designations have been printed in initial caps or all caps.

Recognizing the importance of preserving what has been written, it is Manning's policy to have the books we publish printed on acid-free paper, and we exert our best efforts to that end. Recognizing also our responsibility to conserve the resources of our planet, Manning books are printed on paper that is at least 15 percent recycled and processed without the use of elemental chlorine.


Manning Publications Co.

20 Baldwin Road Technical

PO Box 761

Shelter Island, NY 11964

Development editor: Lesley Trites

Technical development editor: Kostas Passadis

Review editor: Mihaela Batinić

Production editor: Deirdre S. Hiam

Copy editor: Pamela Hunt

Proofreader: Melody Dolab

Technical proofreader: Vishwesh Ravi Shrimali

Typesetter: Gordan Salinović

Cover designer: Marija Tudor

ISBN: 9781617297649

dedication

To Achan, Amma, Ammu, and my dear Miru.

front matter
preface

Ive been fortunate to have worked with data and machine learning for about a decade now. My background is in machine learning, and my PhD was focused on applying machine learning in wireless networks. I have published papers ( http://mng.bz/zQR6 ) at leading conferences and journals on the topic of reinforcement learning, convex optimization, and classical machine learning techniques applied to 5G cellular networks.

After completing my PhD, I began working in the industry as a data scientist and machine learning engineer and gained experience deploying complex AI solutions for customers across multiple industries, such as manufacturing, retail, and finance. It was during this time that I realized the importance of interpretable AI and started researching it heavily. I also started to implement and deploy interpretability techniques in real-world scenarios for data scientists, business stakeholders, and experts to get a deeper understanding of machine-learned models.

I wrote a blog post ( http://mng.bz/0wnE ) on interpretable AI, outlining a principled approach to building robust, explainable AI systems. The post got a surprisingly large response from data scientists, researchers, and practitioners across a wide range of industries. I also presented on this subject at various AI and machine learning conferences. By putting my content in the public domain and speaking at leading conferences, I learned the following:

  • I wasn't the only one interested in this subject.

  • I was able to get a better understanding of what specific topics are of interest to the community.

These lessons led to the book that you are reading now. A few resources, such as survey papers, blog posts, and one book, can help you stay abreast of interpretable AI, but no single resource covers all the important interpretability techniques that would be valuable for AI practitioners. There is also no practical guide on how to implement these cutting-edge techniques. This book aims to fill that gap by first providing a structure to this active area of research and then covering a broad range of interpretability techniques. Throughout this book, we will look at concrete real-world examples and see how to build sophisticated models and interpret them using state-of-the-art techniques.

I strongly believe that as complex machine learning models are being deployed in the real world, understanding them is extremely important. The lack of a deep understanding can result in models propagating bias, and we've seen examples of this in criminal justice, politics, retail, facial recognition, and language understanding. All of this has a detrimental effect on trust, and, from my experience, this is one of the main reasons why companies are resisting the deployment of AI. I'm excited that you also realize the importance of this deep understanding, and I hope you learn a lot from this book.

acknowledgments

Writing a book is harder than I thought, and it requires a lot of work, really! None of this would have been possible without the support and understanding of my parents, Krishnan and Lakshmi Thampi; my wife, Shruti Menon; and my brother, Arun Thampi. My parents put me on the path of lifelong learning and have always given me the strength to chase my dreams. I'm also eternally grateful to my wife for supporting me throughout the difficult journey of writing this book, patiently listening to my ideas, reviewing my rough drafts, and believing that I could finish this. My brother deserves my wholehearted thanks as well for always having my back!

Next, I'd like to acknowledge the team at Manning: Brian Sawyer, who read my blog post and suggested that there might be a book there; my editors, Matthew Spaur, Lesley Trites, and Kostas Passadis, for working with me, providing high-quality feedback, and for being patient when things got rough; and Marjan Bace, for green-lighting this whole project. Thanks as well to all the other folks at Manning who worked with me on the production and promotion of the book: Deirdre Hiam, my production editor; Pamela Hunt, my copyeditor; and Melody Dolab, my page proofer.

I'd also like to thank the reviewers who took the time to read my manuscript at various stages during its development and who provided invaluable feedback: Al Rahimi, Alain Couniot, Alejandro Bellogin Kouki, Ariel Gamio, Craig E. Pfeifer, Djordje Vukelic, Domingo Salazar, Dr. Kanishka Tyagi, Izhar Haq, James J. Byleckie, Jonathan Wood, Kai Gellien, Kim Falk Jorgensen, Marc Paradis, Oliver Korten, Pablo Roccatagliata, Patrick Goetz, Patrick Regan, Raymond Cheung, Richard Vaughan, Sergio Govoni, Shashank Polasa Venkata, Sriram Macharla, Stefano Ongarello, Teresa Fontanella De Santis, Tiklu Ganguly, Vidhya Vinay, Vijayant Singh, Vishwesh Ravi Shrimali, and Vittal Damaraju. Special thanks to James Byleckie and Vishwesh Ravi Shrimali, technical proofreaders, for carefully reviewing the code one last time shortly before the book went into production.

