
Katharina A. Zweig - Awkward Intelligence: Where AI Goes Wrong, Why It Matters, and What We Can Do about It


  • Book: Awkward Intelligence: Where AI Goes Wrong, Why It Matters, and What We Can Do about It
  • Author: Katharina A. Zweig
  • Publisher: The MIT Press
  • Year: 2022
  • City: Cambridge

Awkward Intelligence: Where AI Goes Wrong, Why It Matters, and What We Can Do about It: summary and description

An expert offers a guide to where we should use artificial intelligence, and where we should not.
Before we know it, artificial intelligence (AI) will work its way into every corner of our lives, making decisions about, with, and for us. Is this a good thing? There's a tendency to think that machines can be more objective than humans, that they can make better decisions about job applicants, for example, or risk assessments. In Awkward Intelligence, AI expert Katharina Zweig offers readers the inside story, explaining how many levers computer and data scientists must pull for AI's supposedly objective decision making. She presents the good and the bad: AI is good at processing vast quantities of data that humans cannot, but it's bad at making judgments about people.
AI is accurate at sifting through billions of websites to offer up the best results for our search queries, and it has beaten reigning champions in games of chess and Go. But, drawing on her own research, Zweig shows how inaccurate AI is, for example, at predicting whether someone with a previous conviction will become a repeat offender. It's no better than simple guesswork, and yet it's used to determine people's futures.
Zweig introduces readers to the basics of AI and presents a toolkit for designing AI systems. She explains algorithms, big data, and computer intelligence, and how they relate to one another. Finally, she explores the ethics of AI and how we can shape the process. With Awkward Intelligence, Zweig equips us to confront the biggest question concerning AI: where we should use it, and where we should not.



Awkward Intelligence
Where AI Goes Wrong, Why It Matters, and What We Can Do about It

Katharina A. Zweig

Translated by Noah Harley

The MIT Press

Cambridge, Massachusetts

London, England

© 2022 Massachusetts Institute of Technology

Original title: Ein Algorithmus hat kein Taktgefühl. Wo künstliche Intelligenz sich irrt, warum uns das betrifft und was wir dagegen tun können, by Katharina Zweig

© 2019 by Wilhelm Heyne Verlag, a division of Verlagsgruppe Random House GmbH, München, Germany

The translation of this work was supported by a grant from the Goethe-Institut.


All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

Library of Congress Cataloging-in-Publication Data

Names: Zweig, Katharina A., author. | Harley, Noah, translator.

Title: Awkward intelligence : where AI goes wrong, why it matters, and what we can do about it / Katharina A. Zweig ; translated by Noah Harley.

Description: Cambridge, Massachusetts : The MIT Press, [2022] | Includes bibliographical references and index.

Identifiers: LCCN 2021060550 | ISBN 9780262047463 | ISBN 9780262371858 (pdf) | ISBN 9780262371865 (epub)

Subjects: LCSH: Artificial intelligence. | Artificial intelligence--Philosophy.

Classification: LCC Q334.7 .Z84 2022 | DDC 006.301--dc23/eng20220521

LC record available at https://lccn.loc.gov/2021060550


To my mother, who helped me become a scientist, a teacher, and a writer.


The most important thing about this book, dear reader, is you! That's because artificial intelligence, AI for short, will soon find its way into every corner of our lives and make decisions about, with, and for us. And for AI to make those decisions as well as possible, we all have to think about what actually goes into a good decision, and whether computers can make them in our stead. In what follows I take you on a backstage tour so that you can see for yourself just how many levers computer and data scientists are actually pulling to wrest decisions from data. And that's where you come in: what matters at moments like these is how you would decide. That's because society should leave its important decisions to machines only if it is confident those machines will behave according to its cultural and moral standards. This is why, more than anything else, I want this book to empower you. I hope to dispel the sense of helplessness that creeps in when the conversation turns to algorithms; to explain the necessary terms and point out how and where you can intervene; and finally, to rouse you to action so that you can join computer scientists, politicians, and employers in debating where artificial intelligence makes sense, and where it doesn't.

And how is it that artificial intelligence will soon find its way into every corner of our lives, you ask? For one, because AI can make things more efficient by relieving us of the burdensome, endlessly repetitive parts of our work. Yet I also see a tendency at present toward thinking AI should make decisions about people. That might occur when using data to determine whether a job applicant should receive an interview or a person is fit enough for a medical study, for example, or if someone else may be predisposed to acts of terrorism.

How did we get here in the first place, to the point where it became possible for so many of us to entertain the notion that machines are better judges of people than we ourselves are? Well, for starters, computers are clearly capable of processing data in quantities that humans cannot. What strikes me, however, is a present lack of faith in the human capacity to judge. It's not as though we first came to perceive humanity on the whole as irrational, liable to manipulation, subjective, and prejudiced when Daniel Kahneman was awarded the Nobel Prize in 2002 for his research on human irrationality, or more recently in 2017, when Richard Thaler received it for his concept of nudging. This in turn leads us to hope that machines will unerringly arrive at more objective decisions and then, with a bit of magic, will discover patterns and rules in human behavior that have escaped the experts thus far, resulting in sounder predictions.

Where do such hopes spring from? In recent years, teams of developers have demonstrated that by using artificial intelligence, computers are able to solve tasks quickly and effectively that just two decades ago would have posed a real challenge. Every day, machines manage to sift through billions of websites to offer up the best results for our search queries or to detect partially concealed bicyclists and pedestrians in images and reliably predict their next movements; they've even beaten the reigning champions in chess and Go. From here, doesn't it seem obvious that they could also support decision-makers in reaching fair judgements about people? Or that machines should simply make those judgements themselves?

Many expect this will make decisions more objective, something that is also sorely lacking on many counts. Take the United States, one country where algorithmic decision systems are already used in the lead-up to important human decisions. In a land that holds 20 percent of the official prison population worldwide, and where African Americans are roughly six times as likely to be imprisoned as white people, one could only wish for systems that would avoid any and all forms of latent racism, if possible without having to raise spending significantly. This has led to the use of risk-assessment systems, which estimate the risk that someone with a previous conviction runs of becoming a repeat offender. The algorithms work by automatically analyzing properties that are common among known criminals who go on to commit another offense, and rare among those who don't. I found it deeply unsettling when my research was able to show that one commonly used algorithm in the US resulted in mistaken judgements up to 80 percent of the time (!) in the case of serious crimes. Concretely, this means that a mere one out of every four people the algorithm labeled as high-risk repeat offenders went on to commit another serious offense. Simple guesswork based on the general likelihood of recidivism would only have been slightly less accurate, and at least had the advantage of consciously being pure conjecture.
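The arithmetic behind that comparison can be sketched in a few lines. The figures below are invented purely for illustration and are not taken from Zweig's study; the point is only to show how the precision of a "high risk" label is computed and how it is compared against guessing at the base rate.

```python
# Hypothetical illustration of the recidivism arithmetic described above.
# All numbers are invented for demonstration, not Zweig's actual data.

def precision_of_label(true_positives: int, flagged: int) -> float:
    """Share of people flagged 'high risk' who actually reoffended."""
    return true_positives / flagged

# Suppose an algorithm flags 400 people as high-risk repeat offenders,
# and 100 of them go on to commit another serious offense:
algorithm_precision = precision_of_label(100, 400)  # 0.25, i.e. one in four

# Simple guesswork labels people at random, so its expected precision
# equals the base rate of serious reoffending in the population,
# here assumed to be 20 percent:
base_rate = 0.20

print(f"algorithm precision: {algorithm_precision:.0%}")  # 25%
print(f"guesswork precision: {base_rate:.0%}")            # 20%
```

On these assumed numbers, the algorithm is only five percentage points better than random labeling, which is the sense in which such a system can be "no better than simple guesswork" despite sounding authoritative.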

So whats going awry when machines judge people? As a scientist coming from a highly interdisciplinary background, I consider the effects and side effects of software from a particular angle: socioinformatics. A recent offshoot of computer science, as a discipline socioinformatics draws on methods and approaches from within psychology, sociology, economics, statistical physics, and (of course) computer science. The key argument is that interactions between users and software can only be understood when seen as part of a larger whole called a sociotechnical system.

For over fifteen years now, my research has focused concretely on how and when we can use computers, and more specifically exploit data, or perform data mining, to better understand the complex world we inhabit. That lands me among the ranks of those with the sexiest jobs on planet Earth, even if a weekend spent wading through endless streams of data, sifting for exciting correlations with statistics, may not exactly sound like your idea of fun. Personally, I can't imagine anything better! Yet at the start of my career, I used statistics without really understanding it, always uncertain of whether this, that, or the other method could actually be applied to data to yield interpretable results. This was because after graduating high school I initially chose to study biochemistry, a course of study that typically spends little time on mathematics. We learned the basics of biology, medicine, physics, and chemistry, but not a single hour of statistics. They were probably hoping it would seep into our brains by pure osmosis if only we cooked up enough of the lab experiments they assigned.

