Making AI Intelligible
Great Clarendon Street, Oxford, OX2 6DP, United Kingdom
Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries.
© Herman Cappelen and Josh Dever 2021
The moral rights of the authors have been asserted
First Edition published in 2021
Impression: 1
Some rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, for commercial purposes, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by licence or under terms agreed with the appropriate reprographics rights organization.
This is an open access publication, available online and distributed under the terms of a Creative Commons Attribution Non Commercial No Derivatives 4.0 International licence (CC BY-NC-ND 4.0), a copy of which is available at http://creativecommons.org/licenses/by-nc-nd/4.0/.
Enquiries concerning reproduction outside the scope of this licence should be sent to the Rights Department, Oxford University Press, at the address above.
Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America
British Library Cataloguing in Publication Data
Data available
Library of Congress Control Number: 2020951691
ISBN 9780192894724
ebook ISBN 9780192647566
DOI: 10.1093/oso/9780192894724.001.0001
Printed and bound in Great Britain by
Clays Ltd, Elcograf S.p.A.
Links to third party websites are provided by Oxford in good faith and for information only. Oxford disclaims any responsibility for the materials contained in any third party website referenced in this work.
This is a book about some aspects of the philosophical foundations of Artificial Intelligence. Philosophy is relevant to many aspects of AI, and we don't mean to cover all of them. Our focus is on one relatively underexplored question: Can philosophical theories of meaning, language, and content help us understand, explain, and maybe also improve AI systems? Our answer is Yes. To show this, we first articulate some pressing issues about how to interpret and explain the outputs we get from advanced AI systems. We then use philosophical theories to answer questions like the above.
Here is a brief story to illustrate how we use certain forms of artificial intelligence and how those uses raise pressing philosophical questions:
Lucie needs a mortgage to buy a new house. She logs onto her bank's webpage, fills in a great deal of information about herself and her financial history, and also provides account names and passwords for all of her social media accounts. She submits this to the bank. In so doing, she gives the bank permission to access her credit score. Within a few minutes, she gets a message from her bank saying that her application has been declined. It has been declined because Lucie's credit score is too low; it's 550, which is considered very poor. No human beings were directly involved in this decision. The calculation of Lucie's credit score was done by a very sophisticated form of artificial intelligence, called SmartCredit. A natural way to put it is that this AI system says that Lucie has a low credit score and, on that basis, another part of the AI system decides that Lucie should not get a mortgage.
It's natural for Lucie to wonder where this number 550 came from. This is Lucie's first question:
Lucie's First Question: What does the output 550 that has been assigned to me mean?
The bank has a ready answer to that question: the number 550 is a credit score, which represents how credit-worthy Lucie is.