First published 1990 by Transaction Publishers
First paperback edition 1999
Published 2019 by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
52 Vanderbilt Avenue, New York, NY 10017
Routledge is an imprint of the Taylor & Francis Group, an informa business
Copyright 1990 by Taylor & Francis
All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.
Notice:
Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
Library of Congress Catalog Number: 89-4440
Library of Congress Cataloging-in-Publication Data
Program evaluation and the management of government : patterns and prospects across eight nations / edited by Ray C. Rist.
p. cm.
Bibliography: p.
ISBN 0-88738-297-5
1. Administrative agencies--Evaluation. 2. Public administration--Evaluation.
I. Rist, Ray C.
JF1411.P764 1989
351.0076--dc20
89-4440
CIP
ISBN 13: 978-0-7658-0600-0 (pbk)
ISBN 13: 978-1-138-53089-8 (hbk)
For the International Institute of Administrative Sciences, whose encouragement and support made this effort possible
Ray C. Rist
The chapters in this volume provide a detailed and up-to-date account of both the organization and uses of evaluation in eight Western, democratic countries. With a focus on the national or federal level of government, the material provided here presents a systematic and comparative view of where these eight countries are in their development, institutionalization, and utilization of evaluations.
It is to be expected, and is indeed the case, that such comparative work will demonstrate considerable variability among the eight. But the intriguing issue is less whether there is variability than the dimensions along which those differences occur. Key among the dimensions that have helped focus the analyses to follow are the genesis of evaluation efforts, the fiscal situations in the respective countries, the political constellations that either facilitated or hindered the introduction of evaluation into governmental processes, the constitutional features of the respective countries, the availability of researchers from the social sciences, and whether those within government could see uses for evaluation information. Each of these dimensions, and others, is discussed in the chapters that follow; Derlien, in the overview and synthesis chapter, draws them together across the eight countries.
What Do We Mean by Evaluation?
In developing the analyses of the individual countries, a critical effort was framing the definitions of key terms. Phrases like program evaluation, policy analysis, policy evaluation, policy studies, effectiveness audits, and policy forecasting were all used, and often interchangeably. Pivotal to our definitions was the distinction between program evaluation, which focuses retrospectively on assessing policies or programs already undertaken, and policy analysis, which is prospective and seeks to inform decisions that are yet to be made.
The distinction between retrospective analysis, as the focus of program evaluation, and prospective analysis, as the focus of policy analysis, is the result of efforts at definition that began as early as 1965 with Anthony, followed by Wildavsky in 1969, Poland in 1974, and Chelimsky as recently as 1985. Chelimsky stresses the consequences of this distinction when she writes:
That policy analysis is prospective while program evaluation is retrospective has importance essentially because this fact influences the kinds of questions each can address. The emphasis of policy analysis is on likely effects (or estimates, or projections); a typical policy analysis question might ask, What will be the likely effects on hospital use if bed supply is restricted? The focus of evaluation, on the other hand, is on actual effects (that is, on what has been observed, what has already occurred or is existentially occurring); a typical evaluation question might be, What happened to hospital use after bed supply was restricted? (1985, 67)
A troubling aspect of this distinction, and one that the present papers do not resolve, is that these definitions essentially address two sides of the same coin. Good data on what has already happened can have profound influences on thinking about likely future effects. If history is prologue, then it is to be assumed that policy makers would look to what has occurred (and been learned) in similar circumstances before making the present decision. Likewise, once a decision has been made and the impacts of that decision begin to come into view, the likely effects are transformed into actual effects. Consequently, what appears at first glance as a clear dichotomy (retrospective and prospective) becomes with closer scrutiny two stages of an interactive process: decisions are made, information is gathered about the effects of those decisions, further decisions are made with data available on the results of previous decisions, and so on.
The development of program evaluation, in terms of both its methodologies and the kinds of questions it can address, has resulted in a clear expansion of what now comes under its umbrella. The first, and still perhaps main, assumption about program evaluation was that it was a means of assessing program outcomes or effects through rigorous methodological means (preferably via experimental designs). But the most recent thinking suggests that program evaluation can now encompass the various stages of the life cycle of a program or policy, from conception through execution to impact.
This expanded focus and rationale for program evaluation has been institutionalized, at least in the United States, in two sets of evaluation standards published in 1981 (by the Joint Committee on Standards for Educational Evaluation) and 1982 (by the Evaluation Research Society Standards Committee). These standards, particularly those of the Evaluation Research Society, listed six different approaches or strategies for conducting program evaluation. These six (front-end analysis, evaluability assessment, process evaluation, effectiveness or impact evaluation, program and problem monitoring, and meta-evaluation or evaluation synthesis) represent a broad domain of work that can be conducted within a retrospective framework. The recent definition of program evaluation offered by Chelimsky captures this expanded understanding and acceptance. She writes: "Thus, a reasonably well accepted definition might be that program evaluation is the application of systematic research methods to the assessment of program design, implementation, and effectiveness" (1985, 7).
To What Purposes Can Evaluation Be Applied?
If the definition provided by Chelimsky can be expanded to include the retrospective assessment of policies as well as programs (and this is no small expansion), then the evaluation function can essentially be applied throughout the life cycle of a government initiative (see Thurn et al. 1984 for a discussion by four German authors on this point). The inclusion of policies as well as programs follows from the fact that many governmental actions and initiatives never become programs in the conventional sense of that term, that is, providing health care, expanding preschool education, building treatment plants for water pollution, providing storage facilities for surplus grains and commodities, and the like. Indeed, many key government initiatives involve such efforts as rewriting banking regulations, changing the tax laws, changing fishing restrictions, or establishing new air-pollution standards. In each of these instances, there is no delivery of government services, no established program that expands or contracts, and no newly established government bureaucracy. Each represents an administrative procedure that existing government agencies would carry out. Yet each has consequences that policy makers would want to know about. Whether the policies were administered correctly and whether they had the intended effects, for example, are but two critical pieces of retrospective information.