Trusted AI and bias detection in AI are essential areas that have gained significance in recent times. Most of the AI use cases today need explainability as a critical feature. This book has excellent use cases for CIOs to assess their AI projects and their current effectiveness. In addition, the book covers an important aspect, Responsible AI, highlighted through The Leash System that outlines how organizations can perform a sanity check on their AI projects.
In today's world of cloud services and AutoML (automated machine learning) frameworks, the development of AI models has become quite methodical (if not easy) and is framework driven. When you have defined steps or a pipeline to approach a particular problem, it has an inherent self-correcting mechanism. Today, we have a framework for AI development and implementation from a process and technology standpoint, but we don't have one for designing a responsible AI model with a self-correcting or feedback mechanism to learn from mistakes and be responsible for them. This book fills the gap with The Leash System to implement AI in an unbiased way, make it more responsible from a utility point of view, and make your models more real (if not human). AI models do not have a mind of their own, so this book will help AI designers create AI solutions that represent a responsible human mind.
Rahul Kharat, Director, Head of Artificial Intelligence, Eton Solutions LP
I had no idea AI would be such a huge game-changer when we programmed our first expert-systems-based application in 1996. Keeping Your AI Under Control not only recounts how AI is changing the world, but more importantly it gives us guidelines on how AI should be built to high ethical standards. Anand is skilled in converting complicated topics and technical jargon into simple language; thus, the book will appeal equally to programmers and end users.
This book helps spark a collective debate on how humanity should use a potent tool like AI toward the betterment of all.
Amit Dangl, Vice President of Customer Success, Saviant Consulting
Introduction
Future tech always brings us two things: promise and consequences. It's those consequences that responsible AI is all about, and if it is not, then it should be.
The sensational news and resulting hysteria about the future of artificial intelligence are everywhere. Hyperbolic representation by the media has made many people believe that they are already living in the future.
Much of our daily lives as consumers intertwine with artificial intelligence. There is no doubt that artificial intelligence is a powerful technology, and with that power comes responsibility!
AI hyperbole has given rise to an AI solutionism mindset, such that many believe that if you give them enough data, their machine learning algorithms can solve all of humanity's problems.
We already saw the rise of a similar mindset a few years ago, the "there is an app for it" mindset, and we know that it hasn't done any good in real life. Instead of supporting progress, it endangers the value of emerging technology and sets unrealistic expectations.
AI solutionism has also led to a reckless push to use AI for anything and everything. This push is making several companies take the "ready-fire-aim" approach, which is not only detrimental to the company's growth but also dangerous for customers on many levels.
A better approach would be to consider suitability and applicability, apply phronesis, and do what is necessary. However, fear of missing out is leading to several missteps and is eventually creating a substantial intellectual debt that we may never be able to pay.
One of the many ways to handle this is being responsible with AI and keeping it always in your control. However, the problem with the responsible AI paradigm is that everyone knows why it is necessary, but no one knows how to achieve it.
This book aims at guiding you toward responsible AI with actionable details. Responsible AI is not just a fancy term or an abstract concept: it is to be ethical, careful, controlled, cautious, reasonable, and accountable, that is, to be responsible in designing, developing, deploying, and using AI.
Throughout the book, you will learn about the various risks involved in developing and using AI solutions. Once you can identify risks, you will be able to evaluate and quantify them. Doing this in a structured manner means your approach to designing and using AI will be more responsible.
Knowing what we don't know has significant advantages. Unfortunately, AI tech giants are continually pushing user companies into a danger zone, where companies don't know what they don't know. This push is dangerous, not only from a risk concentration perspective but also for your own business's sake. You must seek to understand what is inside the AI black box.