

Anand Tamboli Keeping Your AI Under Control: A Pragmatic Guide to Identifying, Evaluating, and Quantifying Risks
  • Book:
    Keeping Your AI Under Control: A Pragmatic Guide to Identifying, Evaluating, and Quantifying Risks
  • Author:
    Anand Tamboli
  • Publisher:
    Apress
  • Genre:
    Computer / Science
  • Year:
    2020
  • City:
    S.l.

Keeping Your AI Under Control: A Pragmatic Guide to Identifying, Evaluating, and Quantifying Risks: summary and description


We often see artificial intelligence (AI) as a panacea that can somehow resolve everything. As it stands today, that outlook is still highly debatable. AI has become a real-world application technology, and it is becoming part of the fabric of modern life. If AI is to drive business success and social acceptance, it cannot hide in a black box. To have confidence in the outcomes, win users' trust, and ultimately capitalize on the opportunities, it may be necessary to open up the black box or adopt a preemptive approach.

In Keeping Your AI Under Control, author Anand Tamboli explains multiple risk factors and a proven method to evaluate them quantitatively. He also introduces a new concept of AI insurance to cover residual risks. How do you keep your AI under control and make sure that it does what it is supposed to do? This book covers that and more. You will learn:

  • The various types of risks involved in developing and using AI solutions
  • How to identify, evaluate, and quantify risks pragmatically
  • How AI insurance can be utilized to support residual risk management


Anand Tamboli
Keeping Your AI Under Control
A Pragmatic Guide to Identifying, Evaluating, and Quantifying Risks
Anand Tamboli
New South Wales, NSW, Australia

Any source code or other supplementary material referenced by the author in this book is available to readers on GitHub via the book's product page, located at www.apress.com/9781484254660 . For more detailed information, please visit http://www.apress.com/source-code .

ISBN 978-1-4842-5466-0 e-ISBN 978-1-4842-5467-7
https://doi.org/10.1007/978-1-4842-5467-7
© Anand Tamboli 2019
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
Trademarked names, logos, and images may appear in this book. Rather than use a trademark symbol with every occurrence of a trademarked name, logo, or image we use the names, logos, and images only in an editorial fashion and to the benefit of the trademark owner, with no intention of infringement of the trademark. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.
While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.
Distributed to the book trade worldwide by Springer Science+Business Media New York, 233 Spring Street, 6th Floor, New York, NY 10013. Phone 1-800-SPRINGER, fax (201) 348-4505, e-mail orders-ny@springer-sbm.com, or visit www.springeronline.com. Apress Media, LLC is a California LLC and the sole member (owner) is Springer Science + Business Media Finance Inc (SSBM Finance Inc). SSBM Finance Inc is a Delaware corporation.

To everyone who believes that humans are more valuable than machines!

Early praise for Keeping Your AI Under Control
Trusted AI and bias detection in AI are essential areas that have gained significance in recent times. Most AI use cases today need explainability as a critical feature. This book has excellent use cases for CIOs to assess their AI projects and their current effectiveness. In addition, the book covers an important aspect, responsible AI, highlighted through The Leash System, which outlines how organizations can perform a sanity check on their AI projects.

Shalini Kapoor, Director & CTO - Watson IoT, IBM India

In today's world of cloud services and AutoML (automated machine learning) frameworks, the development of AI models has become quite methodical (if not easy) and is framework driven. When you have defined steps or a pipeline to approach a particular problem, it has an inherent self-correcting mechanism. Today, we have a framework for AI development and implementation from a process and technology standpoint, but we don't have one for designing a responsible AI model with a self-correcting or feedback mechanism to learn from mistakes and be responsible for them. This book fills the gap with The Leash System to implement AI in an unbiased way, make it more responsible from a utility point of view, and make your models more real (if not human). AI models do not have a mind of their own, so this book will help AI designers design AI solutions that represent a responsible human mind.

Rahul Kharat, Director, Head of Artificial Intelligence, Eton Solutions LP

I had no idea AI would be such a huge game-changer when we programmed our first expert-systems-based application in 1996. Keeping Your AI Under Control not only recounts how AI is changing the world but, more importantly, gives us guidelines on how AI should be built to high ethical standards. Anand is skilled at converting complicated topics and technical jargon into simple language; thus, the book will appeal equally to programmers and end users.

This book helps spark a collective debate on how humanity should use a potent tool like AI for the betterment of all.

Amit Dangl, Vice President of Customer Success, Saviant Consulting

Introduction

Future tech always brings us two things: promise and consequences. It's those consequences that responsible AI is all about, and if it is not, then it should be.

The sensational news and resulting hysteria about the future of artificial intelligence are everywhere. Hyperbolic representation by the media has made many people believe that they are already living in the future.

Much of our daily lives as consumers intertwine with artificial intelligence. There is no doubt that artificial intelligence is a powerful technology, and with that power comes responsibility!

AI hyperbole has given rise to an "AI solutionism" mindset: many believe that if you give them enough data, their machine learning algorithms can solve all of humanity's problems.

We saw the rise of a similar mindset a few years ago, the "there is an app for it" mindset, and we know that it hasn't done any good in real life. Instead of supporting progress, it endangers the value of emerging technology and sets unrealistic expectations.

AI solutionism has also led to a reckless push to use AI for anything and everything. This push is leading several companies to take the "ready-fire-aim" approach, which is not only detrimental to a company's growth but also dangerous for customers on many levels.

A better approach would be to consider suitability and applicability, apply phronesis, and do what is necessary. However, fear of missing out is leading to several missteps and is eventually creating a substantial intellectual debt that we may never be able to pay.

One of the many ways to handle this is to be responsible with AI and to keep it always under your control. However, the problem with the responsible AI paradigm is that everyone knows why it is necessary, but no one knows how to achieve it.

This book aims to guide you toward responsible AI with actionable details. Responsible AI is not just a fancy term or an abstract concept: it means being ethical, careful, controlled, cautious, reasonable, and accountable, that is, being responsible in designing, developing, deploying, and using AI.

Throughout the book, you will learn about the various risks involved in developing and using AI solutions. Once you can identify risks, you will be able to evaluate and quantify them. Doing this in a structured manner means your approach to designing and using AI will be more responsible.
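The identify-evaluate-quantify workflow can be pictured with a minimal, generic sketch. This is not the book's own method (The Leash System is not reproduced here); it is a common likelihood-times-impact scoring scheme, with hypothetical example risks, shown only to make the idea of quantifying and ranking risks concrete.

```python
from dataclasses import dataclass


@dataclass
class Risk:
    """A single identified risk, scored on two 1-5 scales."""
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # A simple quantification: likelihood x impact.
        return self.likelihood * self.impact


# Hypothetical AI-project risks for illustration only.
risks = [
    Risk("Biased training data", likelihood=4, impact=5),
    Risk("Model drift in production", likelihood=3, impact=3),
    Risk("Unexplainable decisions", likelihood=2, impact=4),
]

# Rank risks so the highest-scoring ones are addressed first.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.name}: {r.score}")
```

In practice, a structured approach like this, whatever the exact scoring scale, forces each risk to be named, estimated, and compared rather than handled ad hoc.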

Knowing what we don't know has significant advantages. Unfortunately, AI tech giants are continually pushing user companies into a danger zone, where companies don't know what they don't know. This push is dangerous, not only from a risk-concentration perspective but also for your own business's sake. You must seek to understand what is inside the AI black box.

