Human Error
James Reason
Department of Psychology
University of Manchester
CAMBRIDGE UNIVERSITY PRESS
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore,
São Paulo, Delhi, Dubai, Tokyo, Mexico City
Cambridge University Press
32 Avenue of the Americas, New York, NY 10013-2473, USA
www.cambridge.org
Information on this title: www.cambridge.org/9780521306690
© Cambridge University Press 1990
This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.
First published 1990
20th printing 2009
A catalog record for this publication is available from the British Library.
ISBN 978-0-521-30669-0 Hardback
ISBN 978-0-521-31419-0 Paperback
Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate. Information regarding prices, travel timetables, and other factual information given in this work is correct at the time of first printing but Cambridge University Press does not guarantee the accuracy of such information thereafter.
To Jens Rasmussen
Preface
Human error is a very large subject, quite as extensive as that covered by the term human performance. But these daunting proportions can be reduced in at least two ways. The topic can be treated in a broad but shallow fashion, aiming at a wide though superficial coverage of many well-documented error types. Or, an attempt can be made to carve out a narrow but relatively deep slice, trading comprehensiveness for a chance to get at some of the more general principles of error production. I have tried to achieve the latter.
The book is written with a mixed readership in mind: cognitive psychologists, human factors professionals, safety managers and reliability engineers and, of course, their students. As far as possible, I have tried to make both the theoretical and the practical aspects of the book accessible to all. In other words, it presumes little in the way of prior specialist knowledge of either kind. Although some familiarity with the way psychologists think, write and handle evidence is clearly an advantage, it is not a necessary qualification for tackling the book. Nor, for that matter, should an unfamiliarity with high-technology systems deter psychologists from reading the last two chapters.
Errors mean different things to different people. For cognitive theorists, they offer important clues to the covert control processes underlying routine human action. To applied practitioners, they remain the main threat to the safe operation of high-risk technologies. Whereas the theoreticians like to collect, cultivate and categorise errors, practitioners are more interested in their elimination and, where this fails, in containing their adverse effects by error-tolerant designs. It is hoped that this book offers something useful to both camps.
The shape of the book
The book is divided into three parts. The first two chapters introduce the basic ideas, methods, research traditions and background studies. They set the scene for the book as a whole.
Chapter 1 discusses the nature of error, makes a preliminary identification of its major categories and considers the various techniques by which it has been investigated.
Chapter 2 outlines two contrasting research traditions: the natural science tradition of laboratory study and the engineering (human factors) approach. The former has provided the basis of much of what we know about the resource limitations of human cognition. The engineering approach, on the other hand, is more concerned with framing working generalisations than with the finer shades of theoretical difference. It synthesises rather than analyses, and formulates broadly based theoretical frameworks rather than limited, data-bound models. The more theoretical aspects of the subsequent chapters are very much in this latter tradition.
The middle section of the book, comprising Chapters 3 to 5, presents a view of the basic error mechanisms, especially those processes that give recurrent forms to a wide variety of error types. Whereas error types are rooted in the cognitive stages involved in conceiving and then carrying out an action sequence (i.e., planning, storage and execution), error forms have their origins in the universal processes that select and retrieve pre-packaged knowledge structures from long-term storage.
Chapter 3 describes a generic error-modelling system (GEMS), which permits the identification of three basic error types: skill-based slips and lapses, rule-based mistakes and knowledge-based mistakes. These three types may be distinguished on the basis of several dimensions: activity, attentional focus, control mode, relative predictability, abundance in relation to opportunity, situational influences, ease of detection and relationship to change. Most of the chapter is taken up with describing the various failure modes evident at the skill-based, rule-based and knowledge-based levels of performance.
Chapter 4 traces the origins of error forms to the universal retrieval processes outlined above, arguing that they give a recurrent shape to errors across all three basic types. Evidence drawn from a wide range of cognitive activities is presented in support of these assertions.
Chapter 5 attempts to express these ideas more precisely in both a notional and a computational form. It addresses the question: What kind of information-handling machine could operate correctly for most of the time, but also produce the occasional wrong responses characteristic of human behaviour? The description of this fallible machine is in two parts: first in a notional, non-programmatic form, then as a suite of computer programs that seek to model how human subjects, of varying degrees of ignorance, answer general knowledge questions relating to the lives of U.S. presidents. The output of this model is then compared with the responses of human subjects.
The final section of the book focuses upon the consequences of human error: error detection, accident contribution and remedial measures.
Chapter 6 reviews the relatively sparse empirical evidence bearing upon the important issues of error detection and error correction. Although error correction mechanisms are little understood, there are grounds for arguing that their effectiveness is inversely related to their position within the cognitive control hierarchy. Low-level (and largely hard-wired) postural correcting mechanisms work extremely well. Attentional processes involved in monitoring the actual execution of action plans are reasonably successful in detecting unintended deviations (i.e., slips and lapses). But the higher-level processes concerned with making these plans are relatively insensitive to actual or potential straying from some adequate path towards the desired goal (mistakes). The relative efficiency of these error-detection mechanisms depends crucially upon the immediacy and the adequacy of feedback information. The quality of this feedback is increasingly degraded as one moves up the control levels.
Chapter 7 considers the part played by latent human failures in recent major disasters, among them the USS Vincennes incident, the Clapham Junction and Purley rail crashes and the Hillsborough football stadium catastrophe.
The book ends with a consideration of the various techniques, either in current use or in prospect, for assessing and reducing the risks associated with human error. Chapter 8 begins with a critical review of probabilistic risk assessment (PRA) and its associated human reliability assessment (HRA) techniques. It then considers some of the more speculative measures for error reduction: eliminating error affordances, intelligent decision support systems, memory aids, error management and ecological interface design. In conclusion, the chapter traces the shifting preoccupations of reliability specialists: an initial concern with defending against component failures, then an increasing awareness of the damaging potential of active human errors, and now, in the last few years, a growing realisation that the prime causes of accidents are often present within systems long before an accident sequence begins.