SAGE Publications Ltd
1 Olivers Yard
55 City Road
London EC1Y 1SP
SAGE Publications Inc.
2455 Teller Road
Thousand Oaks, California 91320
SAGE Publications India Pvt Ltd
B 1/I 1 Mohan Cooperative Industrial Area
Mathura Road
New Delhi 110 044
SAGE Publications Asia-Pacific Pte Ltd
3 Church Street
#10-04 Samsung Hub
Singapore 049483
© Souraya Sidani 2015
First published 2014
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act, 1988, this publication may be reproduced, stored or transmitted in any form, or by any means, only with the prior permission in writing of the publishers, or in the case of reprographic reproduction, in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers.
Library of Congress Control Number: 2014936874
British Library Cataloguing in Publication data
A catalogue record for this book is available from the British Library
ISBN 978-1-4462-5616-9
ISBN 978-1-4462-5617-6 (pbk)
Editors: Michael Carmichael and Jai Seaman
Editorial assistants: Keri Dickens and Lily Mehrbod
Production editor: Katie Forsythe
Copyeditor: Jane Fricker
Proofreader: Sarah Cooke
Indexer: Silvia Benvenuto
Marketing manager: Camille Richmond
Cover design: Wendy Scott
Typeset by: C&M Digitals (P) Ltd, Chennai, India
Printed and bound by CPI Group (UK) Ltd, Croydon, CR0 4YY
About the Author
Souraya Sidani is Professor and Canada Research Chair at the School of Nursing, Ryerson University. Her areas of expertise are in quantitative research methods, intervention design and evaluation, treatment preferences, and measurement. Her research interests focus on evaluating interventions and advanced practice roles, on examining patient preferences for treatments, and on refining research methods and measures for determining the clinical effectiveness of interventions.
Preface
In an era characterized by an emphasis on evidence-informed practice, research plays a central role in helping us understand human conditions, behaviors, and emerging problems; and in identifying interventions that effectively address the problems and promote well-being. Empirical evidence on the effects of treatments guides decision making regarding the selection, implementation, and evaluation of interventions to improve various domains of life at the local, national, and international levels. However, to be useful, research studies must be well planned and executed. Careful planning involves the selection of research designs and methods that 1) are appropriate to capture the problem under investigation, 2) are consistent with the study's purpose and aims, 3) ensure validity of the inferences and minimize potential biases, and 4) are feasible within the context in which they are applied. Careful execution consists of developing a detailed study protocol, adhering to it when carrying out research activities (i.e., recruitment, data collection, implementation of the intervention, and data analysis), and closely monitoring the study in order to ensure quality of performance, and to identify and remedy any challenges or deviations.
A range of research designs and methods is available to study the effects of interventions. Some have been commonly considered the most useful in generating credible evidence, and others have been advanced as plausible alternatives in response to recent critiques of commonly used designs and methods. The critiques were prompted by the realization that most, if not all, designs and methods rest on assumptions and recommendations which have been taken for granted and not been systematically and critically evaluated. These assumptions are derived from logic that may no longer be tenable in light of accumulating experience and emerging empirical evidence. Specifically, the adoption of the experimental or randomized controlled trial as the gold standard for determining the effects of interventions was based on theoretical reasons and intuitive attractiveness rather than a compelling base of empirical evidence. Evidence derived from meta-analyses shows that results of randomized trials and well-designed non-randomized studies evaluating the same interventions are comparable in determining the success of the intervention. These findings raise questions about the necessity and utility of randomization in reducing selection bias and enhancing the validity of causal inferences. Randomization increases the likelihood that study groups are similar at baseline, but it does not guarantee it. Further, it introduces biases related to who takes part in studies and the influence of their perception of the intervention on treatment adherence and outcomes. Practical, pragmatic trials and partially randomized clinical or preference trials have been proposed to enhance representativeness of the sample, account for participants' treatment preferences, and reduce attrition. Similarly, evidence is emerging that questions the utility of other methods, such as the use of placebo.
This book represents a compendium of research designs and methods, encompassing commonly used ones and recent advances that can be used in the evaluation of interventions. The book describes the theoretical, empirical, and practical knowledge required in choosing among designs and methods for intervention evaluation. Theoretical knowledge covers the logic underlying different designs and methods; it provides the rationale, or the why, for methodological decisions. Empirical knowledge looks at the results of studies that investigate the effectiveness, utility, or efficiency of different methods; it informs the what, when, and where of methodological decisions. Practical knowledge involves descriptions of the procedures for implementing different research methods; it points to the how of carrying out selected methods. The aim is to inform researchers of the nature and effectiveness of various designs and methods. This information is essential to 1) make researchers aware of different designs and methods, each having its strengths and limitations at the theoretical and empirical levels, 2) assist researchers in making appropriate decisions related to the selection of the methods that best fit the context of particular studies, 3) help researchers recognize that methodological decisions should be based on evidence rather than mere traditional recommendations, which may not be well supported logically and empirically, and ultimately 4) move the research enterprise out of the inertia of using commonly recommended designs and methods that produce empirical evidence of limited utility to decision making and policy development, and into the world of generating, testing, and using alternative relevant designs and methods.