This book provides an argumentation model for means-end reasoning, a distinctive type of reasoning used for problem-solving and decision-making. Means-end reasoning is modeled as goal-directed argumentation from an agent's goals and known circumstances, and from an action selected as a means, to a decision to carry out the action.
Goal-Based Reasoning for Argumentation provides an argumentation model for this kind of reasoning, showing how it is employed in settings of intelligent deliberation where agents try to collectively arrive at a conclusion on what they should do to move forward in a set of circumstances. The book explains how this argumentation model can help build more realistic computational systems of deliberation and decision-making and shows how such systems can be applied to solve problems posed by goal-based reasoning in numerous fields, from social psychology and sociology, to law, political science, anthropology, cognitive science, artificial intelligence, multi-agent systems, and robotics.
Douglas Walton is a Canadian academic and author, well known for his many widely published books and papers on argumentation and logic. He is Distinguished Research Fellow of the Centre for Research in Reasoning, Argumentation, and Rhetoric at the University of Windsor, Canada. Walton's work has been used to better prepare legal arguments and to help develop artificial intelligence. His books have been translated worldwide, and he attracts students from many countries to study with him.
32 Avenue of the Americas, New York, NY 10013-2473, USA
Cambridge University Press is part of the University of Cambridge.
It furthers the University's mission by disseminating knowledge in the pursuit of education, learning, and research at the highest international levels of excellence.
www.cambridge.org
Information on this title: www.cambridge.org/9781107545090
© Douglas Walton 2015
This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.
First published 2015
Printed in the United States of America
A catalog record for this publication is available from the British Library.
ISBN 978-1-107-11904-8 Hardback
ISBN 978-1-107-54509-0 Paperback
Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party Internet Web sites referred to in this publication and does not guarantee that any content on such Web sites is, or will remain, accurate or appropriate.
For Karen, with love
Acknowledgments
All of the chapters in this book have benefited enormously from many discussions and collaborative research projects with colleagues working in the field of artificial intelligence in computer science, and with my fellow members of CRRAR (the Centre for Research in Reasoning, Argumentation, and Rhetoric) at the University of Windsor. Without the support and inspiration provided by these colleagues over the last decade, my continuing research over this period, which culminated in this book, would not have been possible. It will be readily evident to the reader how much the theory of practical reasoning put forward in the book owes to my collaborative work with Tom Gordon. Work and discussions over the past years with Henry Prakken and Chris Reed on argumentation models of artificial intelligence have also influentially guided my views, helped to solve many problems, and shown ways forward.
Thanks are also due to my collaborators for many discussions on the subject of value-based practical reasoning during our work on subjects treated in the book. One chapter is a substantially revised version of an article, 'Practical Reasoning in Health Product Ads', originally published in the journal Argument and Computation (1(3), 2010, 179–198). I would like to thank Taylor and Francis for permission to reprint the material in this article.
I would like to thank Floris Bex, Tom Gordon, Henry Prakken, and Bart Verheij for many helpful discussions on the subjects treated in the book. Work on some of these subjects was helped and inspired by my collaborative research work with Fabrizio Macagno and Giovanni Sartor.
I would like to thank Giovanni Sartor for making it possible to work with him on a joint project on argumentation and artificial intelligence at the European University Institute in Florence in 2012 (funded by a Fernand Braudel Research Fellowship). I would also like to thank Eddo Rigotti and Andrea Rocci for organizing The Thematic School on Practical Reasoning, held at the University of Lugano, November 28–30, 2012. Additionally, I would like to thank Thomas Roth-Berghofer, Nava Tintarev, and David B. Leake for organizing the ExaCt 2009 Workshop on Explanation-Aware Computing, which took place at the 2009 International Joint Conference on Artificial Intelligence (IJCAI 2009) in Pasadena, July 11–12, 2009. My thanks are also due to Thomas Roth-Berghofer, David B. Leake, and Jörg Cassens for organizing the ExaCt 2011 Workshop on Explanation-Aware Computing, which took place at the 20th European Conference on Artificial Intelligence (ECAI 2012) in Montpellier on July 28, 2012. For helpful discussions I would like to thank Marcin Koszowy, Erik Krabbe, Henry Prakken, Chris Reed, Bart Verheij, and Simon Wells.
I would like to thank the Social Sciences and Humanities Research Council of Canada for support of the research in this book by Insight Grant 435-2012-0104 on the Carneades Argumentation System (held jointly with Tom Gordon). Finally, I would like to thank Rita Campbell for composing the index and helping with proofreading.
Introduction to Practical Reasoning
Practical reasoning of the kind described by philosophers since Aristotle (384–322 BC) is identified as goal-based reasoning that works by finding a sequence of actions that leads toward or reaches an agent's goal. Practical reasoning, as described in this book, is used by an agent to select an action from a set of available alternative actions the agent sees as open in its given circumstances. A practical reasoning agent can be a human or an artificial agent, for example software, a robot, or an animal. Once the action is selected as the best or most practical means of achieving the goal in the given situation, the agent draws a conclusion that it should go ahead and carry out this action. Such an inference is fallible, as long as the agent's knowledge base is open to new information. It is an important aspect of goal-based practical reasoning that if an agent learns that its circumstances or its goals have changed and a different action might now become the best one available, it can (and perhaps should) change its mind.
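For readers who approach the subject from computer science, a small illustration may help fix this defeasible character in mind. The following Python sketch is purely illustrative and is not part of the formal model developed in this book; the agent, its goal, and the numerical estimates of how well each action serves the goal are hypothetical. It shows an agent that tentatively selects the best available means to its goal and then revises that conclusion when new information about its circumstances arrives.

```python
# Illustrative sketch only (not the book's formal model): an agent selects the
# best available means to its goal, but the conclusion is defeasible and is
# revised when new information about the circumstances comes in.

from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class Agent:
    goal: str
    # Hypothetical estimates of how well each available action serves the goal
    # in the circumstances as the agent currently knows them.
    candidate_actions: Dict[str, float] = field(default_factory=dict)

    def select_action(self) -> Optional[str]:
        """Tentatively conclude which action to carry out (None if no means is known)."""
        if not self.candidate_actions:
            return None
        return max(self.candidate_actions, key=self.candidate_actions.get)

    def learn(self, action: str, new_estimate: float) -> None:
        """Incorporate new information; an earlier conclusion may no longer stand."""
        self.candidate_actions[action] = new_estimate


agent = Agent(goal="get downtown", candidate_actions={"drive": 0.9, "cycle": 0.6})
print(agent.select_action())  # -> 'drive'
agent.learn("drive", 0.1)     # the agent learns, say, that the road is closed
print(agent.select_action())  # -> 'cycle': the agent changes its mind
```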
In computer science, practical reasoning is more likely to be known as means-end reasoning (where an end is taken to mean a goal), goal-based reasoning, or goal-directed reasoning (Russell and Norvig). A richer form of practical reasoning, treated later in the book, also takes values into account. This chapter introduces these basic forms of practical reasoning and shows how they lead forward to the work of the subsequent chapters.
1.1 The Basic Form of Practical Reasoning
There are three basic components of a practical inference in the simplest kind of case. One premise describes an agent's goal. A second premise describes an action that the agent could carry out that would be a means to accomplish the goal. The third component is the conclusion of the inference telling us that this action should be carried out. The simplest and most basic kind of practical inference, readily familiar to all of us, can be represented in the following scheme. The first-person pronoun 'I' represents an agent. More correctly, it could be called a rational agent of the kind described by Wooldridge, an entity that has goals, some (though possibly incomplete) knowledge of its circumstances, and the capability of acting to alter those circumstances and to perceive (some of) the consequences of so acting.
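Before the scheme itself is stated, computationally minded readers may find it helpful to see how these three components could be encoded. The brief Python sketch below is illustrative only and is not the book's own formulation of the scheme; the class name, the example goal, and the example action are hypothetical.

```python
# Illustrative encoding only (not the scheme's official wording): the two
# premises and the practical conclusion of the simplest kind of practical inference.

from dataclasses import dataclass


@dataclass
class PracticalInference:
    goal: str    # premise 1: the agent has this goal
    action: str  # premise 2: carrying out this action is a means to accomplish the goal

    def conclusion(self) -> str:
        # The conclusion is practical and defeasible: it recommends an action
        # and can be withdrawn if either premise no longer holds.
        return f"Therefore, I should carry out '{self.action}' as a means to '{self.goal}'."


inference = PracticalInference(goal="stay healthy", action="exercise regularly")
print(inference.conclusion())
```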