QUALITY MATTERS
Comparative Policy Evaluation Series
Ray C. Rist, series editor
Program Evaluation and the Management of Government, Volume I
edited by Ray C. Rist
Budgeting, Auditing, and Evaluation, Volume II
edited by Andrew Gray, Bill Jenkins, and Bob Segsworth
Can Governments Learn? Volume III
edited by Frans L. Leeuw, Ray C. Rist, and Richard C. Sonnichsen
Politics and Practices of Intergovernmental Evaluation, Volume IV
edited by Olaf Rieper and Jacques Toulemonde
Monitoring Performance in the Public Sector, Volume V
edited by John Mayne and Eduardo Zapico-Goñi
Public Policy and Program Evaluation, Volume VI
by Evert Vedung
Carrots, Sticks, and Sermons: Policy Instruments and Their Evaluation, Volume VII
edited by Marie-Louise Bemelmans-Videc, Ray C. Rist, and Evert Vedung
Building Effective Evaluation Capacity, Volume VIII
edited by Richard Boyle and Donald Lemaire
International Atlas of Evaluation, Volume IX
edited by Jan-Eric Furubo, Ray C. Rist, and Rolf Sandahl
Collaboration in Public Services: The Challenge for Evaluation, Volume X
edited by Andrew Gray, Bill Jenkins, Frans Leeuw, and John Mayne
With a foreword by Christopher Pollitt
QUALITY MATTERS
Seeking Confidence in
Evaluation, Auditing, and
Performance Reporting
Edited by
Robert Schwartz
John Mayne
Comparative Policy Evaluation Volume XI
First published 2005 by Transaction Publishers
Published 2018 by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
52 Vanderbilt Avenue, New York, NY 10017, USA
Routledge is an imprint of the Taylor & Francis Group, an informa business
Copyright 2005 by Taylor & Francis.
All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.
Notice:
Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
Library of Congress Catalog Number: 2004049811
Library of Congress Cataloging-in-Publication Data
Quality matters : seeking confidence in evaluation, auditing, and performance reporting / Robert Schwartz and John Mayne, editors.
p. cm. (Comparative policy evaluation series)
Includes bibliographical references and index.
ISBN 0-7658-0256-2 (cloth : alk. paper)
1. Total quality management in government. 2. Public administration--Evaluation. 3. Organizational effectiveness--Evaluation. 4. Quality assurance. 5. Comparative government. I. Schwartz, Robert, 1959- II. Mayne, John, 1943- III. Series.
JF1525.T67Q83 2004
352.357--dc22
2004049811
ISBN 13: 978-0-7658-0256-9 (hbk)
Contents
Christopher Pollitt
John Mayne and Robert Schwartz
Jean-Claude Barbier
Thomas Widmer
Jacques Toulemonde, Hilkka Summa-Pollitt, and Neil Usher
Patrick G. Grasso
Andrea Kraan and Helenne van Adrichem
Bob Segsworth and Stellina Volpe
M. L. Bemelmans-Videc
Jeremy Lonsdale and John Mayne
Andrew Gray and Bill Jenkins
Alan L. Ginsburg and Natalia Pane
John Mayne and Peter Wilkins
Stan Divorski
Richard Boyle
Robert Schwartz and John Mayne
Information (regular, systematic, reliable information) is the lifeblood of democracy, and the fuel of effective management. Without it, such values as accountability, transparency, equity, fairness, efficiency, and non-discrimination are hollowed out: they become ritualistic and are incapable of any substantive realization. The realm of governance shrinks: not to nothing but rather to a rough and unpredictable territory of charismatic leaders, instinctual decision-making and ever-shifting political deals. Eventually public administration itself becomes a set of activities characterized by a dismal combination of defensive rule-following punctuated with opportunistic rule-breaking. Trust and responsiveness are among the early casualties.
Yet surely, today, there is no problem with information? This is the age of information overload. It pours onto our screens and out of our printers. Just look at the lists of references appended to the chapters of this book and consider the person hours that must have gone into the production and consumption of all those reports and studies. Many governments claim, often with some justification, to be more open and transparent than ever before. But what if, to continue the blood and fuel metaphors, the blood is contaminated, or the fuel polluted? Then the body politic sickens and the engine of public management runs rough. It is to this vital issue of the quality of our information flow that this book is addressed.
The editors and authors are here concerned with a special type of information, what is here termed evaluative information. In many local, national, and international jurisdictions this category of information has become steadily more important over the past two decades. It is defined here as those types of information that are generated by the processes of evaluation, performance auditing, and performance reporting (all activities that appear to have grown in extent since the 1980s). These three activities share the stated intention of being both systematic and analytic. The information they generate should be of particularly high quality. Furthermore, frequently, it is information related to the results achieved by public organizations and programs: information concerning outputs, outcomes, and impacts. From either a democratic or a managerial perspective this is crucial material. It tells us how well (or badly) we are doing, and perhaps also gives some strong insights as to why. It is the kind of information that those many governments who have declared themselves in favor of a more performance-oriented approach, a "results culture," should rationally be extremely interested in.
It is no surprise to discover, however, that the seemingly impeccable logic linking the provision of evaluative information to political and managerial decision-making looks much clearer on the page than on the ground. On the ground there are many reasons why high-quality evaluative information may never be produced, may be produced but ignored, or may even be willfully misinterpreted or misused. These reasons include skill deficits, lack of planning, poor co-ordination, shortage of money and time, conceptual misunderstandings, managerial defensiveness, professional suspicion, political sensitivity and, occasionally, a deep-seated hostility to evaluation itself, when it is perceived to be a manifestation of an alien Anglo-Saxon or American culture.
These barriers to the production and use of evaluative knowledge mean that a struggle or, more accurately, many struggles have been required in order to extend the domain of evaluation, performance audit, and performance reporting. This process of colonization has been supported and interpreted through many texts, both official and academic. While further extensions to the territory of evaluative knowledge certainly continue to occur, it could be argued that the empire-building of phase one is now largely complete. We already have a considerable range of evaluation units and programs, a vast number of mandatory performance reports and perhaps even an Audit Society. Now we are in phase two, when the new evaluative actors have to show what they can do. Just how useful and trustworthy are their information products? Are they really a quality act? Are they worth the expense and effort? The literature of this second phase is as yet much less copious than that which addresses the first phase. Direct analyses of the quality of evaluative information, and even more of the effectiveness of different approaches to improving that quality, have been rare. This volume is therefore both timely and important. Its broad coverage of countries and international organizations is especially welcome. As the editors readily admit, it is exploratory rather than conclusive. To borrow from the title of the final chapter, we can be reasonably certain that quality matters, and that it can often be improved, but we are as yet less clear why some care about it while others do not.