Awkward Intelligence
Where AI Goes Wrong, Why It Matters, and What We Can Do about It
Katharina A. Zweig
Translated by Noah Harley
The MIT Press
Cambridge, Massachusetts
London, England
© 2022 Massachusetts Institute of Technology
Original title: Ein Algorithmus hat kein Taktgefühl. Wo künstliche Intelligenz sich irrt, warum uns das betrifft und was wir dagegen tun können, by Katharina Zweig
© 2019 by Wilhelm Heyne Verlag, a division of Verlagsgruppe Random House GmbH, München, Germany
The translation of this work was supported by a grant from the Goethe-Institut.
All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.
Library of Congress Cataloging-in-Publication Data
Names: Zweig, Katharina A., author. | Harley, Noah, translator.
Title: Awkward intelligence : where AI goes wrong, why it matters, and what we can do about it / Katharina A. Zweig ; translated by Noah Harley.
Description: Cambridge, Massachusetts : The MIT Press, [2022] | Includes bibliographical references and index.
Identifiers: LCCN 2021060550 | ISBN 9780262047463 | ISBN 9780262371858 (pdf) | ISBN 9780262371865 (epub)
Subjects: LCSH: Artificial intelligence. | Artificial intelligence--Philosophy.
Classification: LCC Q334.7 .Z84 2022 | DDC 006.301--dc23/eng/20220521
LC record available at https://lccn.loc.gov/2021060550
To my mother, who helped me become a scientist, a teacher, and a writer.
The most important thing about this book, dear reader, is you! That's because artificial intelligence, AI for short, will soon find its way into every corner of our lives and make decisions about, with, and for us. And for AI to make those decisions as well as possible, we all have to think about what actually goes into a good decision, and whether computers can make such decisions in our stead. In what follows I take you on a backstage tour so that you can see for yourself just how many levers computer and data scientists are actually pulling to wrest decisions from data. And that's where you come in: what matters at moments like these is how you would decide. That's because society should leave its important decisions to machines only if it is confident those machines will behave according to its cultural and moral standards. This is why, more than anything else, I want this book to empower you. I hope to dispel the sense of helplessness that creeps in when the conversation turns to algorithms; to explain the necessary terms and point out how and where you can intervene; and finally, to rouse you to action so that you can join computer scientists, politicians, and employers in debating where artificial intelligence makes sense, and where it doesn't.
And how is it that artificial intelligence will soon find its way into every corner of our lives, you ask? For one, because AI can make things more efficient by relieving us of the burdensome, endlessly repetitive parts of our work. Yet I also see a tendency at present toward thinking AI should make decisions about people. That might occur, for example, when data are used to determine whether a job applicant should receive an interview, whether a person is fit enough for a medical study, or whether someone may be predisposed to acts of terrorism.
How did we get here in the first place, to the point where it became possible for so many of us to entertain the notion that machines are better judges of people than we ourselves are? Well, for starters, computers are clearly capable of processing data in quantities that humans cannot. What strikes me, however, is a present lack of faith in the human capacity to judge. It's not as though we first came to perceive humanity on the whole to be irrational, liable to manipulation, subjective, and prejudiced when Daniel Kahneman was awarded the Nobel Prize in 2002 for his research on human irrationality, or more recently in 2017, when Richard Thaler received the same prize for his concept of nudging. This perception in turn leads us to hope that machines will unerringly arrive at more objective decisions and then, with a bit of magic, will discover patterns and rules in human behavior that have escaped the experts thus far, resulting in sounder predictions.
Where do such hopes spring from? In recent years, teams of developers have demonstrated that by using artificial intelligence, computers can quickly and effectively solve tasks that just two decades ago would have posed a real challenge. Every day, machines manage to sift through billions of websites to offer up the best results for our search queries, or to detect partially concealed bicyclists and pedestrians in images and reliably predict their next movements; they've even beaten the reigning champions in chess and Go. From here, doesn't it seem obvious that they could also support decision-makers in reaching fair judgments about people? Or that machines should simply make those judgments themselves?
Many expect this will make decisions more objective, something that is also sorely lacking on many counts. Take the United States, one country where algorithmic decision systems are already used in the lead-up to important human decisions. In a land that holds 20 percent of the official prison population worldwide, and where African Americans are roughly six times as likely to be imprisoned as white people, one could only wish for systems that would avoid any and all forms of latent racism, if possible without having to raise spending significantly. This has led to the use of risk-assessment systems, which estimate the risk that someone with a previous conviction runs of becoming a repeat offender. The algorithms work by automatically identifying properties that are common among known criminals who go on to commit another offense and rare among those who don't. I found it deeply unsettling when my research was able to show that one commonly used algorithm in the US resulted in mistaken judgments up to 80 percent of the time (!) in the case of serious crimes. Concretely, this means that a mere one out of every four people the algorithm labeled as high-risk repeat offenders went on to commit another serious offense. Simple guesswork based on the general likelihood of recidivism would have been only slightly less accurate, and at least had the advantage of consciously being pure conjecture.
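To make the arithmetic behind that finding concrete, here is a minimal sketch in Python of how such an error rate is read off a classifier's predictions. The numbers (1,000 people flagged as high risk, 250 actual reoffenders among them, a 20 percent overall recidivism rate) are invented purely for illustration and are not the study's actual figures.

    # Hypothetical illustration: how the error rate of a risk-assessment
    # tool is computed from its predictions. All numbers are invented.
    flagged_high_risk = 1000  # people the algorithm labeled "high risk"
    reoffended = 250          # of those, how many actually reoffended

    # Positive predictive value: share of "high risk" labels that were correct.
    ppv = reoffended / flagged_high_risk
    print(f"Correct high-risk labels:  {ppv:.0%}")      # 25%: one in four
    print(f"Mistaken high-risk labels: {1 - ppv:.0%}")  # 75%, "up to 80 percent"

    # Baseline for comparison: guessing from the overall recidivism rate alone.
    base_rate = 0.20  # hypothetical share of all ex-offenders who reoffend
    print(f"Pure guesswork would be right about {base_rate:.0%} of the time.")

The point of the comparison is that a tool whose "high risk" label is right only a quarter of the time barely improves on the base rate it was supposed to refine.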
So what's going awry when machines judge people? As a scientist with a highly interdisciplinary background, I consider the effects and side effects of software from a particular angle: socioinformatics. A recent offshoot of computer science, socioinformatics draws on methods and approaches from psychology, sociology, economics, statistical physics, and (of course) computer science itself. Its key argument is that interactions between users and software can only be understood when seen as part of a larger whole called a sociotechnical system.
For over fifteen years now, my research has focused concretely on how and when we can use computers, and more specifically exploit data, or perform data mining, to better understand the complex world we inhabit. That lands me among the ranks of those with the sexiest jobs on planet Earth, even if a weekend spent wading through endless streams of data, sifting for exciting correlations with statistics, may not exactly sound like your idea of fun. Personally, I can't imagine anything better! Yet at the start of my career, I used statistics without really understanding it, always uncertain of whether this, that, or the other method could actually be applied to my data to yield interpretable results. That's because after graduating from high school I initially chose to study biochemistry, a course of study that typically spends little time on mathematics. We learned the basics of biology, medicine, physics, and chemistry, but not a single hour of statistics. Our instructors were probably hoping it would seep into our brains by pure osmosis if only we cooked up enough of the lab experiments they assigned.