Journal of Neurology and Psychology

Review Article

Implementing Implementation Science: Reviewing the Quest to Develop Methods and Frameworks for Effective Implementation

Barbara Kelly*

  • Department of Psychology, School of Psychology and Health, University of Strathclyde, UK

*Address for Correspondence: Barbara Kelly, Department of Psychology, School of Psychology and Health, University of Strathclyde, UK, E-mail: Barbara.kelly@strath.ac.uk
 
Citation: Kelly B. Implementing Implementation Science: Reviewing the Quest to Develop Methods and Frameworks for Effective Implementation. J Neurol Psychol. 2013;1(1): 5.
 
Copyright © 2013 Kelly B. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
 
Journal of Neurology and Psychology | ISSN: 2332-3469 | Volume: 1, Issue: 1
 
Submission: 25 September 2013 | Accepted: 21 October 2013 | Published: 25 October 2013
 
Reviewed & Approved by: Dr. Shui-fong Lam, Department of Psychology, The University of Hong Kong, China

What is Implementation Science?

How to tackle the implementation of Implementation Science is the theme of this discussion. For many of us, Implementation Science itself may need some introduction. Implementation Science is an emerging science exploring barriers to intervention effectiveness in real world contexts. The science of implementation currently comprises a multi-disciplinary set of theories, methods and evidence aimed at improving the processes of translation from research evidence to everyday practices across a wide variety of human service and policy contexts. It shares perspectives and methods to some extent with translation research and with prevention science [1,2].

There has been some debate around the distinctiveness of translation research and Implementation Science. Translation research might be defined as including two distinct areas. The first is the process of applying discoveries generated in scientifically controlled experimental research and controlled trials to trials in real contexts. The second area concerns implementation issues in non-controlled contexts, that is, research aimed at enhancing the adoption of best practices in the community to support evidence based interventions [2]. It is in this second area that Implementation Science has a specific focus. In relation to Prevention Science, Implementation Science reflects and shares key themes relating to problems in demonstrating the impact of evidence based programmes and interventions. Shared concerns in Prevention and Implementation Science were clearly articulated at the inaugural Global Implementation Conference in Washington, D.C. [3]. Delegates highlighted that, while there was no shortage of studies focused on whether interventions work, very basic questions still challenged the research-oriented presentations and practice-related group sessions: how, why, when and in what contexts does implementation work, explained in sufficient and replicable detail? The specific focus of Implementation Science is summarised in these basic questions and, whilst it might seem indistinguishable from the second stage of translational research, Implementation Science has defined this stage as sufficiently complex, wide ranging and urgent to warrant distinctive scientific endeavour. Implementation Science might therefore be defined as aiming to provide a coherent science of implementation by exploring and creating replicable, evidence based methods, frameworks and systems for translation processes. Its distinctiveness lies in recognising the scope of the undertaking and the range of conceptual and methodological tools required to understand and outline the complex human processes of engagement in real world interventions.

There is no doubt that the evidence emerging from Implementation Science is now critical for the advancement of science, social sciences and human services. It explores and addresses global confusion and disappointment about the failure to transfer and replicate evidence based approaches successfully from scientifically controlled contexts to the live, diverse and dynamic ecologies offered by organisations and communities. It has powerful relevance across a range of applied sciences in seeking to ensure greater impact. Even more generally, it is indispensable for all scientists, organisations and practitioners seeking to understand how to promote change. Implementation Science provides evidence on what works most effectively in tackling the complex change processes related to specific, real world issues and problems. It demonstrates that impact and outcome are almost invariably linked to predictable contextual imperatives demanding careful and distinctive approaches to their management. Implementation Science is generating an evidence base linked to implementation processes themselves, allowing the development of conceptual and applied frameworks to support the transfer of evidence based interventions. This innovative focus, exploring the processes underlying predictability, impact and outcome, makes it an indispensable, albeit overdue, science.

Science and Implementation Science

Implementation Science responds directly to an increasing focus on the complex nature of the links between science, scientific evidence and applying science. A pressing need to be able to demonstrate the impact of science is clear across disciplines, reflected, for example, in the progressively outcome oriented agendas held by research funding bodies and in the intensifying demands for clearer definitions and demonstrations of impact in research evaluations across all domains [4].

Externally driven demands for evidence of impact in applied science are especially disconcerting at this point, escalating against a backdrop of intense debate about the very role and meaning of replication within science [5,6]. Lack of clarity within scientific communities surrounding reliable replication in carefully controlled contexts, and debate around what constitutes impact and outcome, complicate the need to demonstrate impact in the even more challenging context of the real world. The lack of understanding of the determinants of real world predictability in terms of response to programmes and interventions, combined with lack of certainty about factors influencing experimental replication, makes the situation exceptionally challenging for applied scientists. These challenges are becoming central for politicians, purveyors and purchasers of interventions and for the practitioners who implement them.

The history of accountability and real world science

New emphasis on scientific accountability accompanies the current focus on outcomes and impact. As late as the 1960s, the nature of the accountability issues surrounding science was linked primarily to cultural and political agendas emerging at the end of the Second World War. The ready acceptance of science as a necessary part of society reflected perceptions of its indispensable role in the technology of war and defence [7]. However, rather than the expected post-war growth and expansion in science and scientific endeavour, a general scepticism emerged, not least from political milieus, questioning the role and requirements of science in society.

Generally, a greater demand for accountability underpins criticism and debate about the usefulness of science. In the past, attempts to define and activate the concept of scientific accountability have tended to fall narrowly within the familiar domains of ethics and good practice [8]. Currently, economic pressures mean greater scrutiny of public expenditure of all kinds, and the contemporary western scientist is progressively regulated in ways that aim to control unproductive science as well as ensure conformity to practice ethics. Today, although scientific accountability continues to reflect concerns around the ethical implications of scientific activity, contemporary challenges are emerging which extend to the measurable and demonstrable value of science as a product. Demands for accountability in relation to impact and outcome measures now imply that science can no longer be for science's sake. Science needs to offer value for money and evidence of benefits for clients. This is proving especially challenging in applied social scientific and therapeutic contexts.

A further reason why a focus on science as a product has taken some time to emerge relates to a historical view of science as a worthy undertaking, representing exploratory endeavour with no clear responsibility to bring about change, only to increase understanding. From this perspective, funding scientific endeavour more or less uncritically makes sense. It contains an assumption that there are no guarantees about the nature of outcomes, impact or transferability of findings to support real world progress and emancipation. The primary focus is on systematic investigation. The classical scientific paradigm holds considerable sway, and science as a worthy endeavour based on hypothesis testing, trial and error, and carefully controlled experimentation has taken precedence over concern, or even curiosity, about what happens when scientific discoveries enter the live context.

Now it is becoming increasingly obvious that creating science-based practice needs to be recognised as a complex and challenging scientific undertaking, worthy of exploration and endeavour and indeed requiring considerable innovation in scientific theory and perspective. It is also recognised to encapsulate the ethical obligation, implicit in the transfer to real world contexts, that scientists demonstrate effectiveness and justify public and private expenditure and investment in personnel, programmes, interventions and approaches devised to support the delivery of human services.

Developing Complementary Real World Paradigms, Frameworks and Evidence Bases for Effective Implementation

The study of failure

From clinical, medical and social science perspectives, it seems that the sequestered nature of work carried out within the experimental context has endured without extending to include widespread, rigorous attention to the transfer of processes or systematic investigation of the issues and problems inevitably arising in the applied context. Increasingly, the contextual aspects of transferring and utilising evidence based interventions and programmes are found to be daunting and complex. Green draws attention to the evidence based medicine model of the ‘translational pipeline’, pointing out how naïve this simple science-to-practice model is [9]. He suggests that the linear model be replaced by the notion of a ‘translational algorithm’, one which recognises the bidirectional nature of real world applications, including informing and developing academic science via practitioner reflection, experience and feedback. However, accumulating evidence suggests that even more complex methods are required and that these are more precisely reflected in multi-stranded, dynamic and transactional models. This type of model is reflected and developed in the epistemological, methodological and contextual analyses and exploration of change processes becoming central to Implementation Science.

The developmental path of Implementation Science reflects the gradual emergence of a global body of concern about similar issues across many contexts. It has arisen mainly from concern about, and study of, the factors influencing the failure of transferability. Currently it offers a focus on evidence about the nature of barriers to effective intervention. Its scientific themes have arisen from a range of sources, cutting across disciplines and practice contexts. The first academic journal, Implementation Science, appeared as late as 2006 and aims to pull together the disparate fields of epistemology, theory, research protocols, and practitioner-related methodology and experience under a single banner.

Historically, implementation problems were noted in a number of contexts. Early examples arose in the medical, clinical and health care contexts, where the relationship of outcomes to empirically substantiated interventions and programmes was linked to clear and measurable objectives and was always central. The fact that promising and empirically tested interventions and approaches did not deliver the expected outcomes or offer demonstrable impact invited the conclusion that they were not implemented effectively after transfer to live clinical and community settings. These concerns were raised in the medical context as early as 1945, but how to define clearly the nature of implementation issues, develop associated scientific methods, or train practitioners and researchers to manage implementation issues remain current challenges [10].

Implementation concerns emerged in the political science context in the 1960s and 1970s. A number of authors noted that policy design and focus had little to do with the success of the implementation of policy. Many policies based on sound ideas ran into problems in implementation [11]. Lipsky anticipated the now well established, powerful link between the behaviour, beliefs and values of the practitioner involved in the direct implementation of programmes and interventions and their impact and outcome for receivers [12]. He identified several common themes now emerging in implementation frameworks, for example the problems of limited resources, requirements for on-going negotiation, and relationships with non-voluntary clients [13].

The accumulated knowledge of the implementation gap has created a focus on the scientific exploration of factors and processes which may facilitate or inhibit effective transfer of evidence based practices or policy related ideas to live contexts [14,15].

The Readiness for Evidence Based Practice Scale is an example of the type of complementary, social scientific approach needed to understand how to disseminate and implement evidence based interventions in real world service settings [16]. Provider attitudes are subtly implicated in the quality of any implementation and in its outcome and impact. General openness to change and practitioners' perceived divergence between innovations in evidence based practice and current practice are major influences on whether implementation succeeds. Literature on the readiness concept highlights a range of issues surrounding the attitudes of potential providers towards a specific intervention which can affect whether the intervention will be adopted in the first place. Aspects of the quality of the implementation of the intervention, if adopted, and its effective sustainability over time are also key areas in effective transfer [17]. Fidelity to the delivery protocols supporting evidence based practice is another source of outcome variation, and often organisations and individuals will not comply with directives essential to successful delivery of a programme. This ‘process resistance’, involving failure to accept the need for certain implementation practices, has been noted in business as well as clinical contexts [18].
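To make the readiness concept concrete, the minimal sketch below shows one way practitioner attitude ratings might be aggregated into subscale scores in the style of the EBPAS [14,16]. The four subscale names follow Aarons [14]; the item groupings, the 0-4 response range and the reverse-scoring of divergence are illustrative assumptions for this sketch, not the published scoring key.

from statistics import mean

# Hypothetical responses from one practitioner: item id -> rating
# (0 = not at all, 4 = to a very great extent). Items and groupings are
# invented for illustration; they are not the published EBPAS items.
responses = {
    "appeal_1": 3, "appeal_2": 4, "appeal_3": 3,
    "requirements_1": 2, "requirements_2": 3,
    "openness_1": 4, "openness_2": 3, "openness_3": 2,
    "divergence_1": 1, "divergence_2": 2, "divergence_3": 1,
}

SUBSCALES = ("appeal", "requirements", "openness", "divergence")

def subscale_scores(resp):
    """Mean rating per subscale, keyed by item-name prefix."""
    return {s: mean(v for k, v in resp.items() if k.startswith(s))
            for s in SUBSCALES}

scores = subscale_scores(responses)

# Divergence taps perceived mismatch between evidence based practice and
# current practice, so it is reversed before contributing to an overall index.
overall = mean((4 - v) if s == "divergence" else v for s, v in scores.items())

print(scores)
print("Illustrative overall readiness index: %.2f" % overall)

A higher index would suggest more favourable practitioner attitudes towards adopting the intervention; in practice such scores would be interpreted alongside organisational readiness and contextual factors rather than in isolation.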

Although scientists and researchers have realised for some time that what transpires in the field in intervention research is far from mirroring the conditions of scientifically controlled intervention trials, the response has been slow. The power of contextual variables has been minimised in research reports and in most cases completely overlooked in scientific explanations of unexpected outcomes and variability in impact. In a key paper illustrating this fundamental oversight, Dane and Schneider investigated the extent to which programme integrity, that is the degree to which programmes were implemented as planned, was verified in studies of behavioural prevention programmes published between 1980 and 1994 [19]. Reporting on implementation processes at this point was not part of scientific routine in programme design, implementation and outcome evaluation. In total, 231 studies involving the primary and early secondary prevention of behavioural, social and academic maladjustment in children were examined. Only 39 of 102 outcome studies specified features for the documentation of applied programme integrity and, of these, only 13 considered variations in integrity in analysing the impact of the intervention.

During the 1970s, although control and measurement of contextual issues and implementation fidelity may have been edging to the forefront of concerns for some medical, clinical, social and political scientists and practitioners, their relative scarcity in evaluation reports indicates that they were, and alarmingly still are, considered optional scientific dimensions. For example, a number of relatively recent research reviews have examined the impact of prevention programmes for children and adolescents. These include Durlak and Wells, Gillham et al. and Merry et al. [20-22]. All argued that the evidence for prevention was inconclusive and highlighted the failure to evaluate programme integrity as a possible source of outcome variation.

Implementation Science in social and educational contexts

The development of awareness of implementation issues is becoming increasingly prominent in social and educational contexts, offering an interesting perspective on its scientific dimensions. This is despite arguably weaker historical links to science than in clinical and medical contexts. In fact, in the social and educational contexts, the pressure to adhere to evidence based practice has been harder to justify and enforce. Until recently, government sponsored social and educational programmes were created and disseminated with very little awareness of, or interest in, their potential effectiveness. Instead, they were instituted on the basis of social or political factors and terminated without recourse to evaluation, offering little to address needs, advance practice or develop research [23]. This restricted dynamic has begun to change and promising interventions offering solutions to persistent issues are increasingly evaluated using large scale randomised controlled trials. In addition, bodies and organisations now exist to explore and endorse the scientific merit of specific social and educational programmes, providing information for policy makers, stakeholders, practitioners and clients on the quality of their supporting evidence. These scrutinising bodies focus equally on the implementation protocols accompanying programmes and offer advice and resources to strengthen scientific and practitioner effectiveness in developing and applying evidence based programmes [24].

Evidence, structure and coherence

Although eclectic, Implementation Science is beginning to provide evidence, structure and coherence in understanding and addressing the human processes underlying managed change via scientifically verified, evidence based programmes. Critically, the complexity and the implications of the Implementation Science evidence base are far reaching, highlighting the need to review and create new epistemological, conceptual and methodological frameworks supporting, enabling and evaluating real world change. Currently, there is a parallel need to continue collecting and collating existing evidence from disparate sources under the Implementation Science banner to help further substantiate evidence based change processes.

There are many questions about how to implement Implementation Science. Evidence highlights indisputably that organisational and practitioner qualities and practices make or break an intervention, suggesting that considerable preparatory work prior to intervention is crucial to effectiveness. Scientists with a focus on implementation can now offer programmes combining classical scientific paradigms with frameworks for implementation. For example, from the clinical perspective, Chambless and Hollon recommend rigorous experimental methods for treatment outcome research and extend these to implementation through the provision of a delivery manual to support faithful replication of the programme, consistent training for those delivering interventions, and checks on programme adherence using a range of reliable and valid outcome measures [25]. Spence and Short also suggest that programmes need a clear theoretical and conceptual basis and should be comprehensive, employ a variety of teaching methods, implement sufficient dosage and be based on the development of positive relationships [26].

The beginnings of coherent, scientific and evidence based approaches to implementation justify some optimism, but some recent literature underlines the complexity of developing a science of implementation. Spence and Short and Greenberg et al. note on-going variability in programme outcomes even when evidence based implementation processes are used [26,27]. Gillham et al. underline this complexity, finding outcomes that differed by school with no identifiable variables linked to the different outcomes [21]. They suggest that, despite awareness of implementation effects and processes, subtle and complex school differences continue to impact on the delivery and outcome of programmes.

Implementing Implementation Science

Currently a range of evidence based frameworks exists to understand, describe and deliver Implementation Science. A selective review of these is provided in Kelly and Perkins [28]. A number of authors point to the need for conceptual understanding of the epistemology of real contexts. A shared scientific perspective, characterising and reflecting available evidence on real world change, needs to combine an appreciation of the fundamental role of organisational and practitioner social constructs of reality with quantifiable processes to support, measure and evidence positive impact and desirable outcomes. Some emerging frameworks tackling practitioner constructs and preparation for effective delivery make use of critical realist epistemology and evidence on effective consultation, and successfully complement the RCT methodology used to create an initial evidence base of effectiveness [29-32]. Implementation Science needs multi-method approaches to support the considerable complexity of stages and processes. These are captured conceptually and practically in a distinctive, applied multi-level framework for use in educational and applied psychology contexts, though it has relevance across many contexts [13]. The framework highlights key concepts, themes and evidence underlying effective implementation. These include the core intervention components driving effective implementation and the actions, inputs, resources and attitudes required to implement successfully. Each stage of implementation is supported by problem solving methodology.

The major areas of evidence provided by Implementation Science about barriers to change require evidence based interventions to dismantle them. Evidence based executive frameworks and more specific evidence based inputs and actions are already available or are emerging. One example of a key area in effective implementation is the training of practitioners, where a considerable evidence base now identifies the most effective training methods [33,34]. The development of implementation capacity in practitioners is now seen to involve their understanding of the theory underlying interventions and of their own crucial role in delivery and evaluation processes [35].

The state of the art lies in pulling together the critical steps in implementation processes and developing integrated, evidence based approaches which act directly to counter the negative effects of key aspects of the design and delivery of implementation. A key paper by Meyers et al. outlines and addresses the need to synthesise and integrate approaches to supporting implementation and to ensure that these approaches are intrinsically evidence based [36]. The paper establishes three goals in understanding the complex and dynamic processes of implementation. The authors report a synthesis and analysis of twenty-five implementation frameworks, focusing on specific actions for fostering high quality implementation and distilling fourteen critical steps into a Quality Implementation Framework (QIF). The practical implications of these findings are a crucial addition to implementing Implementation Science.
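As an illustration of how a framework like the QIF might be operationalised in practice, the minimal sketch below represents its phases as a simple checklist data structure that an implementation team could use to track progress. The four phase names follow Meyers et al. [36] as commonly summarised; the individual step labels are paraphrased here and should be checked against the original paper before any real use.

from dataclasses import dataclass, field

@dataclass
class Phase:
    """One QIF phase with its critical steps and a record of completed steps."""
    name: str
    steps: list
    completed: set = field(default_factory=set)

    def remaining(self):
        return [s for s in self.steps if s not in self.completed]

# Paraphrased phase and step labels; wording is illustrative, not verbatim.
QIF = [
    Phase("Initial considerations regarding the host setting",
          ["Needs and resources assessment", "Fit assessment",
           "Capacity/readiness assessment", "Decisions about adaptation",
           "Obtaining explicit buy-in and fostering a supportive climate",
           "Building general organisational capacity",
           "Staff recruitment and maintenance", "Pre-innovation staff training"]),
    Phase("Creating a structure for implementation",
          ["Creating implementation teams", "Developing an implementation plan"]),
    Phase("Ongoing structure once implementation begins",
          ["Technical assistance, coaching and supervision",
           "Process evaluation", "Supportive feedback mechanism"]),
    Phase("Improving future applications",
          ["Learning from experience"]),
]

# Usage example: record one completed step and list what remains in phase 1.
QIF[0].completed.add("Needs and resources assessment")
print(QIF[0].remaining())

Tracking the steps in this way keeps the emphasis on the preparatory phases, echoing the point above that considerable work before delivery is crucial to effectiveness.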

Implementation Science offers considerably more than might have been suspected even ten years ago. It is building a much needed expanded perspective and innovative, integrated, complementary paradigms and frameworks. In relation to contemporary scientific ethics, accountability and cost effectiveness, empiricism needs to build its power to tackle real world challenges. Implementation Science is in the process of crafting innovative concepts of ‘real’, ‘scientific’, ‘evidence-based’ and ‘effective’ which, while wholly compatible with established empiricism, add ground-breaking dimensions for practitioners and clear direction for all scientists in creating and sustaining positive change in real contexts.

References

  1. Elliott DS, Mihalic S (2004) Issues in disseminating and replicating effective prevention programmes. Prev Sci 5: 47-53.
  2. Rubio DM, Schoenbaum EE, Lee LS, Schteingart DE, Marantz PR, et al. (2010) Defining translational research: implications for training. Acad Med 85: 470-475.
  3. (2011) Why a Science of Implementation is needed. Prevention Action News Letter.
  4. Green J (2012) Editorial: Science, implementation and implementation science. J Child Psychol Psychiatry 53: 333-336.
  5. Abbott A (2013) Disputed results a fresh blow for social psychology. Failure to replicate intelligence priming effects ignites row in research community. Nature 497: 16.
  6. Yong E (2012) Replication Studies: Bad Copy. Nature 485: 298-300.
  7. Smith BL (1996) The Accountability of Science. Minerva 34: 45-56.
  8. Shils E (1983) The Academic Ethic. The report of a study group of the International Council on the Future of the University. University of Chicago Press, Chicago.
  9. Green J (2006) New Professionalism in the 21st century. Lancet 367: 646-647.
  10. Bush V (1945) Science: The Endless Frontier. A report to the President by Vannevar Bush, Director of the Office of Scientific Research and Development. Washington DC: United States Government Printing Office.
  11. Pressman JL, Wildavsky A (1984) Implementation. University of California Press, Berkeley and Los Angeles, California.
  12. Lipsky M (1980) Street-level bureaucracy: Dilemmas of the individual in public services. Russell Sage Foundation, New York.
  13. Blase KA, Van Dyke M, Fixsen DL, Wallace Bailley F (2012) Implementation Science: Key Concepts, Themes and Evidence for Practitioners in Educational Psychology. In Kelly B, Perkins D (Eds) Handbook of Implementation Science for Psychology in Education, Cambridge University Press, New York.
  14. Aarons GA (2004) Mental health provider attitudes towards adoption of evidence based practice: the Evidence Based Practice Attitude Scale (EBPAS). Mental Health Serv Res 6: 61-74.
  15. Greenhalgh T, Robert G, MacFarlane F, Bate P, Kyriakidou O (2004) Diffusion of innovations in service organisations: Systematic review and recommendations. Milbank Q 82: 581-629.
  16. Aarons GA, Green AE, Miller E (2012) Researching Readiness for Implementation of Evidence-Based Practice: A Comprehensive Review of the Evidence-Based Practice Attitude Scale (EBPAS). In Kelly B, Perkins D (Eds) Handbook of Implementation Science for Psychology in Education, Cambridge University Press, New York 150-164.
  17. Rydell R, McConnell AR (2006) Understanding implicit and explicit attitude change: a systems of reasoning analysis. J Pers Soc Psychol 91: 995-1008.
  18. Garvin DA (1993) Building a learning organisation. Harv Bus Rev 71: 78-91.
  19. Dane AV, Schneider BH (1998) Program integrity in primary and early secondary prevention: are implementation effects out of control? Clin Psychol Rev 18: 23-45.
  20. Durlak JA, Wells AM (1997) Primary prevention programmes for children and adolescents: A meta-analytic review. Am J Community Psychol 25: 115-152.
  21. Gillham JE, Reivich KJ, Freres DR, Chaplin TM, Shatte AJ, et al. (2007) School-based prevention of depressive symptoms: A randomized controlled study of the effectiveness and specificity of the Penn Resiliency Program. J Consult Clin Psychol 75: 9-19.
  22. Merry S, McDowell H, Hetrick S, Muller N (2006) Psychological and Educational Interventions for the Prevention of Depression in Children. A review. Oxford, UK: Cochrane Library
  23. Slavin R (2012) Foreword. In Kelly B, Perkins D (Eds) Handbook of Implementation Science for Psychology in Education, Cambridge University Press, New York
  24. Collaborative for Academic, Social and Emotional Learning (C.A.S.E.L.) Chicago, IL.
  25. Chambless DL, Hollon SD (1998) Defining empirically supported therapies. J Consult Clin Psychol 66: 7-18.
  26. Spence SH, Short AL (2007) Research Review. Can we justify the widespread dissemination of universal, school-based interventions for the prevention of depression among children and adolescents? J Child Psychol Psychiatry 48: 526-542.
  27. Greenberg MT, Domitrovich C, Bumbarger B (2001) The prevention of mental disorders in school-aged children: Current state of the field. Prevention and Treatment 4: 1-59.
  28. Kelly B, Perkins D (2012) (Eds) Handbook of Implementation Science for Psychology in Education, Cambridge University Press, New York.
  29. Bywater TJ (2012) Developing Rigorous Program Evaluation. In Kelly B, Perkins D (Eds) Handbook of Implementation Science for Psychology in Education, Cambridge University Press, New York.
  30. Kelly B (2012) Implementation Science for Psychology in Education. In Kelly B, Perkins D (Eds) Handbook of Implementation Science for Psychology in Education, Cambridge University Press, New York.
  31. Monsen JJ, Woolfson LM (2012) The Role of Executive Problems Solving Frameworks in Preparing for Change in Educational Contexts. In Kelly B, Perkins D (Eds) Handbook of Implementation Science for Psychology in Education, Cambridge University Press, New York 132-149.
  32. Illback RJ (2012) Change-Focused Organizational Consultation in School Settings. In Kelly B, Perkins D (Eds) Handbook of Implementation Science for Psychology in Education, Cambridge University Press, New York 165-183.
  33. Dunst CJ, Trivette CM (2012) Meta-Analysis of Implementation Practice Research. In Kelly B, Perkins D (Eds) Handbook of Implementation Science for Psychology in Education, Cambridge University Press, New York
  34. Neufield B, Donaldson M (2012) Coaching for Instructional Improvement. In Kelly B and Perkins D (Eds) Handbook of Implementation Science for Psychology in Education, Cambridge University Press, New York
  35. Fixsen DL, Blase KA, Naoom SF, Wallace F (2009) Core implementation components. Research on Social Work Practice 19: 531-539.
  36. Meyers DC, Durlak JA, Wandersman A (2012) The quality implementation framework: a synthesis of critical steps in the implementation process. Am J Community Psychol 50: 462-480.
  37. Frambach RT, Schillewaert N (2002) Organisational innovation adoption: a multi-level framework of determinants and opportunities for future research. J Business Res 55: 163-176.