• Open access
  • Published: 25 March 2019

From intervention to interventional system: towards greater theorization in population health intervention research

  • Linda Cambon (ORCID: orcid.org/0000-0001-6040-9826) 1, 2,
  • Philippe Terral 3 &
  • François Alla 2

BMC Public Health volume 19, Article number: 339 (2019)


Population health intervention research raises major conceptual and methodological issues. These require us to clarify what an intervention is and how best to address it.

This paper aims to clarify the concepts of intervention and context and to propose a way to consider their interactions in evaluation studies, especially by addressing the mechanisms and using the theory-driven evaluation methodology.

This article synthesizes the notions of intervention and context. It suggests that we consider an “interventional system”, defined as a set of interrelated human and non-human contextual agents within spatial and temporal boundaries generating mechanistic configurations – mechanisms – which are prerequisites for change in health. The evaluation focal point is no longer the interventional ingredients taken separately from the context, but rather mechanisms that punctuate the process of change. It encourages a move towards theorization in evaluation designs, in order to analyze the interventional system more effectively. More particularly, it promotes theory-driven evaluation, either alone or combined with experimental designs.

Considering the interventional system, and hybridizing paradigms in a process of theorization within evaluation designs that includes different scientific disciplines, practitioners and intervention beneficiaries, may give researchers a better understanding of what is being investigated and enable them to design the most appropriate methods and modalities for characterizing the interventional system. Evaluation methodologies should therefore be repositioned in relation to one another with regard to a new definition of “evidence”, repositioning practitioners’ expertise, qualitative paradigms and experimental questions in order to address the interventional system more profoundly.


Population health intervention research has been defined as “the use of scientific methods to produce knowledge about policy and program interventions that operate within or outside of the health sector and have the potential to impact health at the population level” [ 1 ] (see Table  1 ). This research raises a number of conceptual and methodological issues concerning, among other things, the interaction between context and intervention. This paper therefore aims to synthesize these issues, to clarify the concepts of intervention and context and to propose a way of considering their interactions in evaluation studies, especially by addressing the mechanisms and using the theory-driven evaluation methodology.

Clarifying the notions of intervention, context and system

What is an intervention?

According to the International Classification of Health Interventions (ICHI), “a health intervention is an act performed for, with or on behalf of a person or population whose purpose is to assess, improve, maintain, promote or modify health, functioning or health conditions” [2]. Behind this simple definition lurks genuine complexity, creating a number of challenges for investigators circumscribing, evaluating and transferring these interventions. This complexity arises in particular from the strong influence of what is called the context [3], defined as a “spatial and temporal conjunction of events, individuals and social interactions generating causal mechanisms that interact with the intervention and possibly modifying its outcomes” [4]. Acknowledgement of the influence of context has led to increased interest in process evaluation, such as that described in the Medical Research Council (MRC) guideline [5]. The guideline defines the complexity of an intervention by pinpointing its constituent parts. It also stresses the need for evaluations “to consider the influence of context insofar as it affects how we understand the problem and the system, informs intervention design, shapes implementation, interacts with interventions and moderates outcomes”.

Intervention components

How should intervention and context be defined when assessing their specificities and interactions? The components of interventions have been addressed in different ways. Some authors have introduced the concept of “intervention components” [6] and others that of “active ingredients” [7, 8] as ways to characterize interventions more effectively and distinguish them from context. For Hawe [9], certain basic elements of an intervention should be examined as a priority because they are “key” to producing an effect. She distinguishes an intervention’s theoretical processes (“key functions”), which must remain intact and transferable, from the aspects of the intervention that are structural and contingent on context. Further, she and her colleagues introduced a more systemic approach to intervention [10, 11]. An intervention could thus be defined as “a series of inter-related events occurring within a system where the change in outcome (attenuated or amplified) is not proportional to change in input. Interventions are thus considered as ongoing social processes rather than fixed and bounded entities” [11]. Both intervention and context are thereby defined as dynamic over time and as interacting with each other.

The notion of mechanisms

To understand these interactions between context and intervention, we can draw on Pawson and Tilley’s work on realist evaluation [12]. This involves analyzing the configurations between contextual parameters, mechanisms and outcomes (CMO). As such, we can consider the process of change as being marked by various intermediate states, illustrated by mechanisms.

Mechanisms may be the result of a combination of factors, which can be human (knowledge, attitudes, representations, psychosocial and technical skills, etc.) or material (called “non-human” by Akrich et al. [13]). The notion of mechanism has various definitions. Some authors, such as Machamer et al. [14], define mechanisms as “entities and activities organized such that they are productive of regular changes from start or set-up to finish or termination conditions”. Others define them more as prerequisites to outcomes, as in the realist approach: a mechanism is “an element of reasoning and reaction of an agent with regard to an intervention productive of an outcome in a given context” [15, 16]. In health psychology, they can be defined as “the processes by which a behavior change technique regulates behavior” [8]. This could include, for instance, how practitioners perceive an intervention’s usefulness, or how individuals perceive their ability to change their behavior.

Due to the combinations of contextual and interventional components, the process of change therefore produces mechanisms, which in turn produce effects (final and intermediate outcomes). For instance, a motivational interview for smoking cessation could produce different psychosocial mechanisms, such as motivation, perception of the usefulness of cessation and self-efficacy; these mechanisms influence smoking cessation. This constitutes causal chains, defined here as ordered sequences of events in which each event causes the next. These mechanisms may also affect their own contextual or interventional components, as in a system. For example, the feeling of self-efficacy could influence the choice of smoking cessation supports.
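The causal chain described above can be represented as a simple data structure. The following sketch is purely illustrative (the class names and the component list are our invention, not part of the authors' method): contextual and interventional components combine into mechanisms, which form an ordered chain leading to the outcome.

```python
# Hypothetical sketch of a causal chain in an interventional system,
# using the smoking-cessation example. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Mechanism:
    name: str                                          # e.g. "motivation"
    triggered_by: list = field(default_factory=list)   # components producing it

@dataclass
class InterventionalSystem:
    components: list       # human and non-human contextual/interventional agents
    mechanisms: list       # intermediate states punctuating the process of change
    outcome: str           # e.g. "smoking cessation"

    def causal_chain(self):
        """Return the ordered sequence: mechanisms -> outcome."""
        return [m.name for m in self.mechanisms] + [self.outcome]

motivation = Mechanism("motivation", ["motivational interview"])
self_efficacy = Mechanism("self-efficacy", ["motivational interview", "peer support"])

system = InterventionalSystem(
    components=["motivational interview", "peer support"],
    mechanisms=[motivation, self_efficacy],
    outcome="smoking cessation",
)
print(system.causal_chain())  # ['motivation', 'self-efficacy', 'smoking cessation']
```

The point of the sketch is only that the unit of analysis is the chain of mechanisms, not the intervention components in isolation.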

From the intervention to the interventional system

Because the mechanism is the result of the interaction between the intervention and its context, the line between intervention and context becomes blurred [17]. Thus, rather than intervention, we suggest using the term “interventional system”, which includes both interventional and contextual components. An interventional system is produced by successive changes over a given period in a given setting.

In this case, mechanisms become key to understanding the interventional system and could generally be defined as “what characterizes and punctuates the process of change and, hence, the production of outcomes”. As an illustration, they could be psychological (motivation, self-efficacy, self-control, skills, etc.) in a behavioral intervention or social (values shared in a community, perception of power sharing, etc.) in a socio-ecological intervention.

In light of the above, we propose to define the interventional system in population health intervention research as: a set of interrelated human and non-human contextual agents within spatial and temporal boundaries generating mechanistic configurations – mechanisms – which are prerequisites for change in health. In the same way, we could also consider that the intervention could in fact be an arrangement of pre-existing contextual parameters influencing their own change over time. Figure 1 illustrates this interventional system.

Figure 1. The interventional system

Combining methods to explore the system’s key mechanisms

Attribution versus contribution: a need for theorization

The dynamic nature of interventional systems raises the question of how best to address them in evaluation processes. Public health has historically favored research designs with strong internal validity [18], based on experimental designs. Individual randomized controlled trials are the gold standard for achieving causal attribution by counterfactual comparison in an experimental situation. Beyond the ethical, technical and legal constraints known in population health intervention research [19], trials in this field have a major drawback: they are “blind” to contextual elements, which nevertheless influence outcomes. Their theoretical efficacy may well be demonstrated, but their transferability is weak, which is a problem given that intervention research is supposed to inform policy and practice [20]. As Breslow [22] put it: “Counterfactual causality with its paradigm, randomization, is the ultimate black box.” Yet the black box has to be opened in order to understand how an intervention is effective and how it may be transferred elsewhere.

More in line with the notion of the interventional system, other models depart completely from causal attribution by counterfactual methods. They use a contributive understanding of an intervention through mechanistic interpretation, focusing on the exploration of causal chains [23]. In other words, instead of “does the intervention work?”, the question becomes “given the number of parameters influencing the result (including the intervention components), how did the intervention meaningfully contribute to the result observed?” This paradigm promotes theory-driven evaluations (TDEs) [24, 25], which can clarify intervention-context configurations and mechanisms. In TDEs, the configurations and mechanisms are hypothesized by combining scientific evidence with the expertise of practitioners and researchers; the hypothetical system is then tested empirically. If the test is conclusive, there is evidence of contribution, and causal inferences can be made. Two main categories of TDE can be distinguished [24, 26]: realist evaluation and theories of change.

Realist evaluation

In the first, developed by Pawson and Tilley [12], intervention effectiveness depends on the underlying mechanisms at play within a given context. The evaluation consists of identifying context-mechanism-outcome configurations (CMOs), whose recurrences are observed in successive case studies or in mixed protocols such as realist trials [27]. The aim is to understand how and under what circumstances an intervention works. In this approach, context is studied with, and as part of, the intervention. This moves us towards the idea of an interventional system. For example, we applied this approach in the “Transfert de Connaissances en REGion” (TC-REG) project, an evaluation of a knowledge transfer scheme to improve policy making and practices in health promotion and disease prevention settings in French regions [28]. The protocol describes how we combined evidence and stakeholders’ expertise to define an explanatory theory, itself based on a combination of classic sociological and psychological theories, which hypothesizes mechanism-context configurations for evidence-based decision-making. The theory was built in three steps [28]:

  • step 1: a literature review of evidence-based knowledge transfer strategies and of the mechanisms that enhance evidence-based decision-making (e.g. the perceived usefulness of scientific evidence);
  • step 2: a seminar with decision makers and practitioners to choose the strategies to be implemented and to hypothesize the mechanisms they potentially activate, along with any contextual factors potentially influencing them (e.g. the availability of scientific data);
  • step 3: a seminar with the same stakeholders to elaborate the theory combining strategies, contextual factors and the mechanisms to be activated.

The resulting theory is the interpretative framework for defining the strategies, their implementation, the expected outcomes and all the investigation methods.

Theory of change

In theory of change [25, 29, 30], the intervention components or ingredients mentioned earlier are fleshed out and examined separately from those of context, as a way to study how they contribute to producing outcomes. As with realist evaluation, the initial hypothesis (the theory) is based on empirical assumptions (e.g. from earlier evaluations) or theoretical assumptions (e.g. from social or psychosocial theories). What is validated (or not) is the extent to which the explanatory theory, including implementation parameters (unlike realist evaluation), corresponds to observations:

  • the expected change (e.g. 30 min of daily physical activity);
  • the presence of individual or socio-ecological prerequisites for success (e.g. access to appropriate facilities, sufficient physical ability, knowledge about the meaning of physical activity), based on psychosocial or organizational theories (e.g. social cognitive theory, the health belief model), called classic theories [31];
  • the effectiveness of actions to achieve the prerequisites for change (i.e. the types of intervention or environmental modifications required and their effects), based on implementation theories [31] (e.g. the COM-B model: Capability-Opportunity-Motivation – Behaviour);
  • the effectiveness of actions conducive to these prerequisites (i.e. use of the necessary intellectual, human, financial and organizational (…) resources).

This can all be mapped out in a chart for checking [30]. The contribution of factors external to the intervention can then be evaluated. For an interventional system, in both categories of TDE, the core elements to be characterized would be the mechanisms, as prerequisites to outcomes. Identifying these mechanisms confirms the causal inference, rather than demonstrating causal attribution by comparison. By replicating these mechanisms, the interventions can be transferred [21, 32].
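The chart-for-checking idea can be sketched as a minimal structure that confronts hypothesized prerequisites with observations. This is a hypothetical illustration only (the field names and example values are ours, not the referenced mapping tool): each unmet prerequisite becomes a candidate explanation for why the expected change did or did not occur.

```python
# Hypothetical sketch: a theory-of-change chart confronted with observations.
# Field names and values are invented for illustration.
theory_of_change = {
    "expected_change": "30 min of daily physical activity",
    "prerequisites": {                      # hypothesized conditions for success
        "access to appropriate facilities": True,    # confirmed by observation
        "sufficient physical ability": True,         # confirmed by observation
        "knowledge about physical activity": False,  # not confirmed
    },
    "actions": {                            # actions meant to achieve prerequisites
        "build local facilities": "implemented",
        "information campaign": "not implemented",
    },
}

def unmet_prerequisites(theory):
    """List prerequisites not confirmed by observation: candidate
    explanations for a shortfall in the expected change."""
    return [p for p, observed in theory["prerequisites"].items() if not observed]

print(unmet_prerequisites(theory_of_change))  # ['knowledge about physical activity']
```

In this toy example the unconfirmed prerequisite lines up with the unimplemented action, which is exactly the kind of correspondence the checking chart is meant to surface.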
In the case of TDEs, interventional research can also be developed through natural experiments [33], allowing mechanisms to be explored, in order to explain causal inferences, in a system outside the investigators’ control. The GoveRnance for Equity ENvironment and Health in the City (GREENH-City) project illustrates this. It aims to identify the conditions in which green areas could contribute to reducing health inequality by intervening on individual, political, organizational or geographical factors [34]. The researchers combined evidence, theories, frameworks and multidisciplinary expertise to hypothesize the potential action mechanisms of green areas on health inequalities. The investigation plans to verify these mechanisms through a retrospective study using qualitative interviews. The final goal is to determine recurring mechanisms and conditions for success through cross-sectional analysis, and to make recommendations for towns wishing to use green areas to help reduce health inequality.

In addition, new statistical models are emerging in epidemiology that encourage researchers to devote more attention to causal modelling [35].

The intervention theory

For both methods, before intervention and evaluation designs are elaborated, sources of scientific, theoretical and empirical knowledge should be combined to produce the explanatory theory (with varying numbers of implementation parameters). We call this explanatory theory the “intervention theory” to distinguish it from classic generalist psychosocial, organizational or social implementation theories, determinant frameworks or action models [ 31 ], which can fuel the intervention theory. The intervention theory would link activities, mechanisms (prerequisites of outcomes), outcomes and contextual parameters in causal hypotheses.

Note that establishing the theory requires the contribution of the social and human sciences (e.g. sociology, psychology, history, anthropology). For example, psychosocial, social and organizational theories enable investigators to hypothesize and confirm many of the components and mechanisms, and the relationships between them, involved in behavioral or organizational interventions. In this respect, intervention research depends on the hybridization of different disciplines.

Combination of theory-based approaches and counterfactual designs

Notwithstanding the epistemic debates [ 36 ], counterfactual designs and theory-based approaches are not opposed, but complementary. They answer different questions and can be used successively or combined during an evaluation process. More particularly, TDEs could be used in experimental design, as some authors suggest [ 27 , 36 , 37 , 38 ]. This combination provides a way of comparing data across evaluations; in sites which have employed both an experimental design (true control group) and theory-based evaluation, an evaluator might, for example, look at the extent to which the success of the experimental group hinged upon the manipulation of components identified by the theory as relevant to learning.

On this basis, both interventions and evaluations could be designed better. For example, the “Évaluation de l’Efficacité de l’application Tabac Info Service” (EE-TIS) project [39] combines a randomized trial with a theory-based analysis of the mechanisms (motivation, self-efficacy, self-regulation, etc.) brought about by the behavioral techniques used in a smoking-cessation application. The aim is to figure out how the application works, which techniques are used by users, which mechanisms are activated and for whom. In the EE-TIS project [39], we attributed one or several behavior change techniques [8] to each feature of the “TIS” application (messages, activities, questionnaires) and identified three mechanisms potentially activated by them and supporting smoking cessation (motivation, self-efficacy and knowledge). This was carried out by a multidisciplinary committee in three steps: step 1: two groups of researchers attributed behavior change techniques to each feature; step 2: the two groups compared their results and reached a consensus; step 3: the researchers presented their results to the committee, which in turn reached a consensus. To validate these hypotheses, a multivariate analysis embedded in the randomized controlled trial will make it possible to figure out which techniques influence which mechanisms, and which contextual factors could moderate these links.
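The coding step described above amounts to two mappings: application features to behavior change techniques (BCTs), and BCTs to the mechanisms they may activate. The sketch below is hypothetical (the specific feature-to-BCT and BCT-to-mechanism assignments are invented for illustration, not the project's actual coding):

```python
# Hypothetical sketch of the EE-TIS coding tables. The mappings are
# illustrative only; the real attributions were made by a committee.
feature_to_bcts = {
    "messages": ["goal setting"],
    "activities": ["self-monitoring"],
    "questionnaires": ["feedback on behaviour"],
}
bct_to_mechanisms = {
    "goal setting": ["motivation"],
    "self-monitoring": ["self-efficacy"],
    "feedback on behaviour": ["motivation", "knowledge"],
}

def mechanisms_for(feature):
    """Mechanisms potentially activated by one application feature,
    via the behavior change techniques attributed to it."""
    found = []
    for bct in feature_to_bcts[feature]:
        for mechanism in bct_to_mechanisms[bct]:
            if mechanism not in found:
                found.append(mechanism)
    return found

print(mechanisms_for("questionnaires"))  # ['motivation', 'knowledge']
```

Tables of this kind are what the embedded multivariate analysis would then test: whether the techniques attributed to a feature actually move the mechanisms linked to them.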

Other examples exist which combine a realist approach and trial designs [ 27 , 38 ].

Interdisciplinarity and stakeholder involvement

A focal point in theorizing evaluation designs is the interdisciplinary dimension, especially drawing on the expertise of the social and human sciences and of practitioners and intervention beneficiaries [40]. Because an intervention forms part of, and influences, contextual elements to produce an outcome, the expertise and feedback of stakeholders, including direct beneficiaries, offer valuable insights into how the intervention may be bringing about change. In addition, this empowers stakeholders and promotes a democratic process, which is to be upheld in population health [40]. Theorization can be done through specific workshops including researchers, practitioners and beneficiaries on an equal basis. For example, the TC-REG project [28] held a seminar involving both prevention practitioners and researchers, the aim being to discuss literature results and different theories and frameworks in order to define the explanatory theory (with context-mechanism configurations) and the intervention strategies planned to test it.

Population health intervention research raises major conceptual and methodological issues. These imply clarifying what an intervention is and how best to address it. This involves a paradigm shift: in intervention research, the intervention is not a separate entity from its context; rather, there is an interventional system that is different from the sum of its parts, even though each part does need to be studied in itself. This gives rise to two challenges. The first is to integrate the notion of the interventional system, which underlines the fact that the boundaries between intervention and context are blurred. The evaluation focal point is no longer the interventional ingredients taken separately from their context, but rather the mechanisms punctuating the process of change, considered as key factors in the interventional system. The second challenge, resulting from the first, is to move towards theorization within evaluation designs, in order to analyze the interventional system more effectively. This would allow researchers a better understanding of what is being investigated and enable them to design the most appropriate methods and modalities for characterizing the interventional system. Evaluation methodologies should therefore be repositioned in relation to one another with regard to a new definition of “evidence”, including the points of view of various disciplines, and repositioning the expertise of practitioners and beneficiaries, qualitative paradigms and experimental questions in order to address the interventional system more profoundly.

Abbreviations

CMO: Context-mechanism-outcome configuration
COM-B: Capability-Opportunity-Motivation – Behaviour model
EE-TIS: Évaluation de l’Efficacité de l’application Tabac Info Service
GREENH-City: GoveRnance for Equity ENvironment and Health in the City
ICHI: International Classification of Health Interventions
MRC: Medical Research Council
TC-REG: Transfert de Connaissances en REGion
TDE: Theory-driven evaluation
TIS: Tabac Info Service

References

1. Hawe P, Potvin L. What is population health intervention research? Can J Public Health. 2009;100(Suppl 1):I8–14.
2. WHO. International Classification of Health Interventions (ICHI) [Internet]. WHO [cited 16 Dec 2017]. Available at: http://www.who.int/classifications/ichi/en/.
3. Shoveller J, Viehbeck S, Ruggiero ED, Greyson D, Thomson K, Knight R. A critical examination of representations of context within research on population health interventions. Crit Public Health. 2016;26(5):487–500.
4. Poland B, Frohlich K, Cargo M. Health promotion evaluation practices in the Americas. New York: Springer; 2008. p. 299–317. Available at: http://link.springer.com/chapter/10.1007/978-0-387-79733-5_17.
5. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M; Medical Research Council Guidance. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655. https://doi.org/10.1136/bmj.a1655.
6. Clark AM. What are the components of complex interventions in healthcare? Theorizing approaches to parts, powers and the whole intervention. Soc Sci Med. 2013;93:185–93. https://doi.org/10.1016/j.socscimed.2012.03.035.
7. Durlak JA. Why program implementation is important. J Prev Interv Community. 1998;17(2):5–18.
8. Michie S, Richardson M, Johnston M, Abraham C, Francis J, Hardeman W, et al. The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: building an international consensus for the reporting of behavior change interventions. Ann Behav Med. 2013;46:81–95.
9. Hawe P, Shiell A, Riley T. Complex interventions: how ‘out of control’ can a randomised controlled trial be? BMJ. 2004;328:1561–3.
10. Shiell A, Hawe P, Gold L. Complex interventions or complex systems? Implications for health economic evaluation. BMJ. 2008;336(7656):1281–3.
11. Hawe P, Shiell A, Riley T. Theorising interventions as events in systems. Am J Community Psychol. 2009;43:267–76.
12. Pawson R, Tilley N. Realistic evaluation. London: Sage; 1997.
13. Akrich M, Callon M, Latour B. Sociologie de la traduction : textes fondateurs. 1st ed. Paris: Presses des Mines; 2006. p. 304.
14. Machamer P, Darden L, Craver CF. Thinking about mechanisms. Philos Sci. 2000;67(1):1–25.
15. Lacouture A, Breton E, Guichard A, Ridde V. The concept of mechanism from a realist approach: a scoping review to facilitate its operationalization in public health program evaluation. Implement Sci. 2015;10:153.
16. Ridde V, Robert E, Guichard A, Blaise P, Olmen J. L’approche réaliste à l’épreuve du réel de l’évaluation des programmes. Can J Program Eval. 2012;26.
17. Minary L, Kivits J, Cambon L, Alla F, Potvin L. Addressing complexity in population health intervention research: the context/intervention interface. J Epidemiol Community Health. 2017;0:1–5.
18. Campbell D, Stanley J. Experimental and quasi-experimental designs for research. Chicago: Rand McNally; 1966.
19. Alla F. Challenges for prevention research. Eur J Public Health. 2018;28(1):1.
20. Tarquinio C, Kivits J, Minary L, Coste J, Alla F. Evaluating complex interventions: perspectives and issues for health behaviour change interventions. Psychol Health. 2015;30:35–51.
21. Cambon L, Minary L, Ridde V, Alla F. Transferability of interventions in health education: a review. BMC Public Health. 2012;12:497.
22. Breslow NE. Statistics. Epidemiol Rev. 2000;22:126–30.
23. Mayne J. Addressing attribution through contribution analysis: using performance measures sensibly. Can J Program Eval. 2001;16(1):1–24.
24. Blamey A, Mackenzie M. Theories of change and realistic evaluation. Evaluation. 2007;13:439–55.
25. Chen HT. Theory-driven evaluation. Newbury Park: Sage; 1990. p. 326.
26. Stame N. Theory-based evaluation and types of complexity. Evaluation. 2004;10(1):58–76.
27. Bonell C, Fletcher A, Morton M, Lorenc T, Moore L. Realist randomised controlled trials: a new approach to evaluating complex public health interventions. Soc Sci Med. 2012;75(12):2299–306.
28. Cambon L, Petit A, Ridde V, Dagenais C, Porcherie M, Pommier J, et al. Evaluation of a knowledge transfer scheme to improve policy making and practices in health promotion and disease prevention setting in French regions: a realist study protocol. Implement Sci. 2017;12(1):83.
29. Weiss CH. How can theory-based evaluation make greater headway? Eval Rev. 1997;21(4):501–24.
30. De Silva MJ, Breuer E, Lee L, Asher L, Chowdhary N, Lund C, et al. Theory of change: a theory-driven approach to enhance the Medical Research Council’s framework for complex interventions. Trials. 2014;15:267.
31. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53.
32. Wang S, Moss JR, Hiller JE. Applicability and transferability of interventions in evidence-based public health. Health Promot Int. 2006;21:76–83.
33. Petticrew M, Cummins S, Ferrell C, Findlay A, Higgins C, Hoy C, et al. Natural experiments: an underused tool for public health? Public Health. 2005;119(9):751–7.
34. Porcherie M, Vaillant Z, Faure E, Rican S, Simos J, Cantoreggi NL, et al. The GREENH-City interventional research protocol on health in all policies. BMC Public Health. 2017;17:820.
35. Aalen OO, Røysland K, Gran JM, Ledergerber B. Causality, mediation and time: a dynamic viewpoint. J R Stat Soc Ser A Stat Soc. 2012;175(4):831–61.
36. Bonell C, Moore G, Warren E, Moore L. Are randomised controlled trials positivist? Reviewing the social science and philosophy literature to assess positivist tendencies of trials of social interventions in public health and health services. Trials. 2018;19(1):238.
37. Moore GF, Evans RE. What theory, for whom and in which context? Reflections on the application of theory in the development and evaluation of complex population health interventions. SSM Popul Health. 2017;3:132–5.
38. Jamal F, Fletcher A, Shackleton N, Elbourne D, Viner R, Bonell C. The three stages of building and testing mid-level theories in a realist RCT: a theoretical and methodological case-example. Trials. 2015;16(1):466.
39. Cambon L, Bergman P, Le Faou A, Vincent I, Le Maitre B, Pasquereau A, et al. Study protocol for a pragmatic randomised controlled trial evaluating efficacy of a smoking cessation e-‘Tabac Info Service’: ee-TIS trial. BMJ Open. 2017;7(2):e013604.
40. Alla F. Research on public health interventions: the need for a partnership with practitioners. Eur J Public Health. 2016;26(4):531.


Acknowledgments

Not applicable.

Availability of data and materials

Author information

Authors and affiliations

Chaire Prévention, ISPED, Université Bordeaux, Bordeaux, France

Linda Cambon

Université Bordeaux, CHU, Inserm, Bordeaux Population Health Research Center, UMR 1219, CIC-EC 1401, Bordeaux, France

Linda Cambon & François Alla

Université Paul Sabatier, Toulouse 3, CRESCO EA 7419 - F2SMH, Toulouse, France

Philippe Terral


Contributions

All authors read and approved the final version of the manuscript. LC and FA conceived the idea for the paper, based on their previous research on the evaluation of complex interventions. LC wrote the first draft and led the writing of the paper. LC, PT and FA helped draft the manuscript. LC acts as guarantor.

Corresponding author

Correspondence to Linda Cambon .

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

Reprints and permissions

About this article

Cite this article.

Cambon, L., Terral, P. & Alla, F. From intervention to interventional system: towards greater theorization in population health intervention research. BMC Public Health 19 , 339 (2019). https://doi.org/10.1186/s12889-019-6663-y

Download citation

Received : 16 February 2018

Accepted : 15 March 2019

Published : 25 March 2019

DOI : https://doi.org/10.1186/s12889-019-6663-y


Keywords

  • Intervention
  • Public health
  • Intervention research

BMC Public Health

ISSN: 1471-2458


New framework on complex interventions to improve health


30 September 2021

The Medical Research Council (MRC) and National Institute for Health Research (NIHR) complex intervention research framework has been published.

The framework is aimed at a broad audience including health researchers, funders, clinicians, health professionals, policy and decision makers.

It is intended to help:

  • researchers to choose appropriate methods to improve research quality
  • research funders to understand the constraints on evaluation design
  • users of evaluation to weigh up the available evidence in the light of methodological and practical constraints.

Defining complex interventions

The new framework provides an updated definition of complex interventions, highlighting the dynamic relationship between the intervention and its context.

Complex interventions are widely used in the health service, in public health practice, and in areas of social policy that have important health consequences, such as education, transport, and housing.

Tackling the important questions

The new framework supports the development or identification, feasibility testing, evaluation and implementation of complex interventions. The framework notes that complex intervention research can take an efficacy, effectiveness, theory-based or systems perspective, depending on what is known already and what further evidence would be most useful.

It highlights a trade-off between precise unbiased answers to narrow questions and more uncertain answers to broader, more complex questions. This framework aims to increase the utility of data so that it will provide more valuable information to decision makers and improve health in practice.

Using the framework’s core elements

There are four main phases of research: intervention development or identification (for example, from policy or practice), feasibility, evaluation, and implementation.

At each phase, the guidance suggests that six core elements should be considered:

  • how does the intervention interact with its context?
  • what is the underpinning programme theory?
  • how can diverse stakeholder perspectives be included in the research?
  • what are the main uncertainties?
  • how can the intervention be refined?
  • do the effects of the intervention justify its cost?

These core elements can be used to decide whether the research should proceed to the next phase, return to a previous phase, repeat a phase or stop.
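This proceed/return/repeat/stop logic can be sketched as a minimal state machine. The phase names, the yes/no element labels, and the simplification that any unmet element repeats the current phase are illustrative assumptions for the sketch, not part of the framework itself:

```python
from enum import Enum
from typing import Dict, Optional

class Phase(Enum):
    """The four phases of complex intervention research in the framework."""
    DEVELOPMENT = 1   # intervention development or identification
    FEASIBILITY = 2
    EVALUATION = 3
    IMPLEMENTATION = 4

# The six core elements, reduced to yes/no checks for this sketch.
CORE_ELEMENTS = (
    "context interaction understood",
    "programme theory articulated",
    "stakeholder perspectives included",
    "uncertainties addressed",
    "refinement considered",
    "effects justify cost",
)

def next_step(phase: Phase, checks: Dict[str, bool]) -> Optional[Phase]:
    """Proceed to the next phase only when every core element is satisfied.

    The framework also allows returning to an earlier phase or stopping;
    this sketch collapses those outcomes into repeating the current phase.
    """
    if all(checks.get(element, False) for element in CORE_ELEMENTS):
        if phase is Phase.IMPLEMENTATION:
            return None  # research cycle complete
        return Phase(phase.value + 1)
    return phase  # unresolved elements: revisit the current phase

# Example: one unresolved element keeps the research in feasibility testing.
checks = {element: True for element in CORE_ELEMENTS}
checks["uncertainties addressed"] = False
print(next_step(Phase.FEASIBILITY, checks).name)  # FEASIBILITY
```

The point of the sketch is only that the six elements act as a gate that is re-applied at every phase, which is what distinguishes this iterative model from a linear pipeline.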

Developing the framework

The development of the framework was led by the Medical Research Council and Chief Scientist Office Social and Public Health Sciences Unit, University of Glasgow.

It was developed alongside co-authors and a Scientific Advisory Group chaired by Professor Martin White (MRC Epidemiology Unit, University of Cambridge). It also included representation from all the NIHR Boards and MRC’s Population Health Science Group.

The work was informed by:

  • a scoping review
  • a workshop with international experts
  • an open consultation (with broad response from researchers of all career stages, funders, the public, and journal editors)
  • further targeted consultation with experts in relevant fields.

The update was jointly commissioned by the Medical Research Council and the National Institute for Health Research.

Extremely influential

Professor Nick Wareham, Chair of MRC’s Population Health Sciences Group, said:

Previous versions of the guidance on the development and evaluation of complex interventions have been extremely influential and are widely used in the field. We are delighted that the successful partnership between MRC and NIHR has enabled the guidance to be updated and extended. It is particularly important to see how the new framework brings in thinking about the interplay between an intervention and the context in which it is applied.

Stimulating debate

Dr Kathryn Skivington, Research Fellow, MRC and CSO Social and Public Health Sciences Unit and lead author of the framework, said:

The new and exciting developments for complex intervention research are of practical relevance and I feel sure they will stimulate constructive debate, leading to further progress in this area.

Patients benefit

Professor Hywel Williams, NIHR Scientific and Coordinating Centre Programmes Contracts Advisor, said:

This updated framework is a landmark piece of guidance for researchers working on such interventions. The updated guidance will help researchers to develop testable and reproducible interventions that will ultimately benefit NHS patients. The guidance also represents a terrific collaborative effort between the NIHR and MRC that I would like to see more of.

Previous guidance

In 2006, the MRC published guidance for developing and evaluating complex interventions, building on the framework that had been published in 2000. These documents have been highly influential, and the accompanying papers published in the British Medical Journal (BMJ) are widely cited.

Interest in complex interventions has increased rapidly in recent years. Given the pace and extent of methodological development, there was a need to update the core guidance and address some of the remaining weaknesses and gaps.


This article was published on the UKRI website.

Intervention Research in Health Care

  • First Online: 01 January 2012

Cite this chapter


  • Boris Sobolev 4 ,
  • Victor Sanchez 5 &
  • Lisa Kuramoto 6  

836 Accesses

In this introductory chapter, we provide a broad overview of the evaluation of complex interventions aimed at improving the quality of health care. In particular, we outline the analytical framework and designs for evaluative studies within the context of health services research. We then describe the types of questions that commonly arise in the evaluation of management alternatives for perioperative processes. We conclude with a brief discussion of the transition from posing a study question to identifying the level of analysis and the summary measure of the outcome variable.



Author information

Authors and Affiliations

University of British Columbia, 828 West 10th Avenue, Vancouver, BC, Canada

Boris Sobolev

Electrical Engineering and Computer Sciences, University of California, Berkeley, 253 Cory Hall, Berkeley, CA, USA

Victor Sanchez

Centre for Clinical Epidemiology and Evaluation, Vancouver Coastal Health Research Institute, 828 West 10th Avenue, Vancouver, BC, Canada

Lisa Kuramoto


Rights and permissions

Reprints and permissions

Copyright information

© 2012 Springer Science+Business Media, LLC

About this chapter

Sobolev, B., Sanchez, V., Kuramoto, L. (2012). Intervention Research in Health Care. In: Health Care Evaluation Using Computer Simulation. Springer, Boston, MA. https://doi.org/10.1007/978-1-4614-2233-4_1

Download citation

DOI : https://doi.org/10.1007/978-1-4614-2233-4_1

Published : 04 May 2012

Publisher Name : Springer, Boston, MA

Print ISBN : 978-1-4614-2232-7

Online ISBN : 978-1-4614-2233-4

eBook Packages : Medicine Medicine (R0)


  • Clinical Psychology
  • Cognitive Psychology
  • Cognitive Neuroscience
  • Criminal and Forensic Psychology
  • Developmental Psychology
  • Educational Psychology
  • Evolutionary Psychology
  • Health Psychology
  • History and Systems in Psychology
  • Music Psychology
  • Neuropsychology
  • Organizational Psychology
  • Psychological Assessment and Testing
  • Psychology of Human-Technology Interaction
  • Psychology Professional Development and Training
  • Research Methods in Psychology
  • Social Psychology
  • Browse content in Social Sciences
  • Browse content in Anthropology
  • Anthropology of Religion
  • Human Evolution
  • Medical Anthropology
  • Physical Anthropology
  • Regional Anthropology
  • Social and Cultural Anthropology
  • Theory and Practice of Anthropology
  • Browse content in Business and Management
  • Business Ethics
  • Business Strategy
  • Business History
  • Business and Technology
  • Business and Government
  • Business and the Environment
  • Comparative Management
  • Corporate Governance
  • Corporate Social Responsibility
  • Entrepreneurship
  • Health Management
  • Human Resource Management
  • Industrial and Employment Relations
  • Industry Studies
  • Information and Communication Technologies
  • International Business
  • Knowledge Management
  • Management and Management Techniques
  • Operations Management
  • Organizational Theory and Behaviour
  • Pensions and Pension Management
  • Public and Nonprofit Management
  • Social Issues in Business and Management
  • Strategic Management
  • Supply Chain Management
  • Browse content in Criminology and Criminal Justice
  • Criminal Justice
  • Criminology
  • Forms of Crime
  • International and Comparative Criminology
  • Youth Violence and Juvenile Justice
  • Development Studies
  • Browse content in Economics
  • Agricultural, Environmental, and Natural Resource Economics
  • Asian Economics
  • Behavioural Finance
  • Behavioural Economics and Neuroeconomics
  • Econometrics and Mathematical Economics
  • Economic History
  • Economic Systems
  • Economic Methodology
  • Economic Development and Growth
  • Financial Markets
  • Financial Institutions and Services
  • General Economics and Teaching
  • Health, Education, and Welfare
  • History of Economic Thought
  • International Economics
  • Labour and Demographic Economics
  • Law and Economics
  • Macroeconomics and Monetary Economics
  • Microeconomics
  • Public Economics
  • Urban, Rural, and Regional Economics
  • Welfare Economics
  • Browse content in Education
  • Adult Education and Continuous Learning
  • Care and Counselling of Students
  • Early Childhood and Elementary Education
  • Educational Equipment and Technology
  • Educational Strategies and Policy
  • Higher and Further Education
  • Organization and Management of Education
  • Philosophy and Theory of Education
  • Schools Studies
  • Secondary Education
  • Teaching of a Specific Subject
  • Teaching of Specific Groups and Special Educational Needs
  • Teaching Skills and Techniques
  • Browse content in Environment
  • Applied Ecology (Social Science)
  • Climate Change
  • Conservation of the Environment (Social Science)
  • Environmentalist Thought and Ideology (Social Science)
  • Management of Land and Natural Resources (Social Science)
  • Natural Disasters (Environment)
  • Pollution and Threats to the Environment (Social Science)
  • Social Impact of Environmental Issues (Social Science)
  • Sustainability
  • Browse content in Human Geography
  • Cultural Geography
  • Economic Geography
  • Political Geography
  • Browse content in Interdisciplinary Studies
  • Communication Studies
  • Museums, Libraries, and Information Sciences
  • Browse content in Politics
  • African Politics
  • Asian Politics
  • Chinese Politics
  • Comparative Politics
  • Conflict Politics
  • Elections and Electoral Studies
  • Environmental Politics
  • Ethnic Politics
  • European Union
  • Foreign Policy
  • Gender and Politics
  • Human Rights and Politics
  • Indian Politics
  • International Relations
  • International Organization (Politics)
  • Irish Politics
  • Latin American Politics
  • Middle Eastern Politics
  • Political Behaviour
  • Political Economy
  • Political Institutions
  • Political Methodology
  • Political Communication
  • Political Philosophy
  • Political Sociology
  • Political Theory
  • Politics and Law
  • Politics of Development
  • Public Policy
  • Public Administration
  • Qualitative Political Methodology
  • Quantitative Political Methodology
  • Regional Political Studies
  • Russian Politics
  • Security Studies
  • State and Local Government
  • UK Politics
  • US Politics
  • Browse content in Regional and Area Studies
  • African Studies
  • Asian Studies
  • East Asian Studies
  • Japanese Studies
  • Latin American Studies
  • Middle Eastern Studies
  • Native American Studies
  • Scottish Studies
  • Browse content in Research and Information
  • Research Methods
  • Browse content in Social Work
  • Addictions and Substance Misuse
  • Adoption and Fostering
  • Care of the Elderly
  • Child and Adolescent Social Work
  • Couple and Family Social Work
  • Direct Practice and Clinical Social Work
  • Emergency Services
  • Human Behaviour and the Social Environment
  • International and Global Issues in Social Work
  • Mental and Behavioural Health
  • Social Justice and Human Rights
  • Social Policy and Advocacy
  • Social Work and Crime and Justice
  • Social Work Macro Practice
  • Social Work Practice Settings
  • Social Work Research and Evidence-based Practice
  • Welfare and Benefit Systems
  • Browse content in Sociology
  • Childhood Studies
  • Community Development
  • Comparative and Historical Sociology
  • Disability Studies
  • Economic Sociology
  • Gender and Sexuality
  • Gerontology and Ageing
  • Health, Illness, and Medicine
  • Marriage and the Family
  • Migration Studies
  • Occupations, Professions, and Work
  • Organizations
  • Population and Demography
  • Race and Ethnicity
  • Social Theory
  • Social Movements and Social Change
  • Social Research and Statistics
  • Social Stratification, Inequality, and Mobility
  • Sociology of Religion
  • Sociology of Education
  • Sport and Leisure
  • Urban and Rural Studies
  • Browse content in Warfare and Defence
  • Defence Strategy, Planning, and Research
  • Land Forces and Warfare
  • Military Administration
  • Military Life and Institutions
  • Naval Forces and Warfare
  • Other Warfare and Defence Issues
  • Peace Studies and Conflict Resolution
  • Weapons and Equipment

Intervention Research: Developing Social Programs

1 What Is Intervention Research?

  • Published: April 2009

At the core, making a difference is what social work practice is all about. Whether at the individual, organizational, state, or national level, making a difference usually involves developing and implementing some kind of action strategy. Often too, practice involves optimizing a strategy over time, that is, attempting to improve it.

In social work, public health, psychology, nursing, medicine, and other professions, we select strategies that are thought to be effective based on the best available evidence. These strategies range from clinical techniques, such as developing a new role-play to demonstrate a skill, to complex programs that have garnered support in a series of controlled studies, to policy-level initiatives that may be based on large case studies, expert opinion, or legislative reforms. To be sure, the evidence is often only a partial guide in developing new clinical techniques, programs, and policies. Indeed, strategies often must be adapted to meet the unique needs of the situation, including the social or demographic characteristics that condition problems. Thus, the hallmark of modern social work practice is this very process of identifying, adapting, and implementing what we understand to be the best available strategy for change.

However, suppose that you have an idea for how to develop a new service or revise an existing one. That is, through experience and research, you begin to devise a different practice strategy—an approach that perhaps has no clear evidence base, but one that may improve current services. When you attempt to develop new strategies or enhance existing strategies, you are ready to engage in intervention research.

Intervention Studies: Clinical Trials

Introduction

The primary goal of observational studies, e.g., case-control studies and cohort studies, is to test hypotheses about the determinants of disease. In contrast, the goal of intervention studies is to test the efficacy of specific treatments or preventive measures by assigning individual subjects to one of two or more treatment or prevention options. Intervention studies often test the efficacy of drugs, but one might also use this design to test the efficacy of differing management strategies or regimens. There are two major types of intervention studies:

  • Controlled clinical trials in which individual subjects are assigned to one or another of the competing interventions, or
  • Community interventions, in which an intervention is assigned to an entire group.

In many respects the design of a clinical trial is analogous to a prospective cohort study, except that the investigators assign or allocate the exposure (treatment) under study.

This provides clinical trials with a powerful advantage over observational studies, provided the assignment to a treatment group is done randomly with a sufficiently large sample size. Under these circumstances randomized clinical trials (RCTs) provide the best opportunity to control for confounding and avoid certain biases. Consequently, they provide the most effective way to detect small to moderate benefits of one treatment over another. However, in order to provide definitive answers, clinical trials must enroll a sufficient number of appropriate subjects and follow them for an adequate period of time. Consequently, clinical trials can be long and expensive.

Learning Objectives

After successfully completing this section, the student will be able to:

  • Explain the distinguishing features of a clinical trial (intervention study).
  • Discuss the two major potential advantages of intervention studies and their limitations.
  • Differentiate between preventive, therapeutic, individual, and community RCTs.
  • Briefly explain the differences among phase I, II, III, & IV clinical trials.
  • Define randomization in the context of a clinical trial and give examples of appropriate methods of randomization.
  • Explain why randomization is used.
  • Explain how to determine whether randomization has been successful.
  • Define blinding and explain the purpose of blinding.
  • Distinguish between single and double blinding.
  • Explain what the placebo effect is.
  • Define the term "placebo" and explain why placebos are used.
  • Explain when the use of a placebo is not appropriate and discuss alternative strategies.
  • Explain why it is important to maintain high rates of follow-up in a prospective cohort study or a clinical trial.
  • Explain why compliance is important and the effects of non-compliance.
  • Define and distinguish between an "intention to treat" analysis and an efficacy analysis.
  • Define what a run-in phase is and explain its purpose.

Types of Intervention Studies

Individual versus group (community) trials.

  • Most trials are conducted by allocating treatments or interventions to individual subjects. For example, investigators recently compared the effectiveness of glucosamine, chondroitin, and several other drugs in their ability to relieve symptoms of osteoarthritis. Subjects were randomly assigned to receive one of several possible treatments, and they were followed and assessed for pain relief and other measures.
  • In contrast, group trials allocate the intervention to groups of subjects. These trials are generally conducted when the intervention inherently operates at a group level (e.g., changing a law or policy) or when it would be difficult to give the intervention to some people in a group while withholding it from others. Group units might be families, schools, or medical practices. A well-known type of group trial is a community trial, in which the intervention is allocated to entire communities or neighborhoods. In the 1940s the effectiveness of fluoride in preventing dental caries was tested by comparing the frequency of caries in children in Kingston and Newburgh, New York, after fluoride had been added to Newburgh's drinking water; the first research report was published in 1950. Other community trials test community-based interventions, such as educational programs delivered to some communities but not others, in order to determine their effectiveness. An example might be a program in Tanzania in which some villages receive the educational program and others do not.

The Kingston-Newburgh Fluoride Trial

Prevention trials (or prophylactic trials) versus Therapeutic Trials

Clinical trials might also be distinguished based on whether they are aimed at assessing preventive interventions or evaluating new treatments for existing disease. The Physicians' Health Study established that low-dose aspirin reduced the risk of myocardial infarctions (heart attacks) in males. Other trials have assessed whether exercise or a low-fat diet can reduce the risk of heart disease or cancer. A study currently underway at BUSPH is testing whether peer counseling is effective in helping smokers who live in public housing quit smoking. All of these are prevention trials. In contrast, many trials have contributed to our knowledge about the optimum treatment of many diseases through medication, surgery, or other medical interventions.

Phases of Trials Evaluating New Drugs

Clinical trials for new drugs are conducted in phases with different purposes that depend on the stage of development.

  • Phase I trials : ClinicalTrials.gov describes phase I trials as "Initial studies to determine the metabolism and pharmacologic actions of drugs in humans, the side effects associated with increasing doses, and to gain early evidence of effectiveness; may include healthy participants and/or patients." Frequently, an experimental drug or treatment initially is tested in a small group of people (8-80) to evaluate its safety and to explore possible side effects and the doses at which they occur.
  • Phase II trials : ClinicalTrials.gov describes these as "Controlled clinical studies conducted to evaluate the effectiveness of the drug for a particular indication or indications in patients with the disease or condition under study and to determine the common short-term side effects and risks." The new treatment might be tested in a somewhat larger group (80-200) to get more information about effectiveness and potential side effects at different dosages.
  • Phase III trials: ClinicalTrials.gov defines these as "Expanded controlled and uncontrolled trials after preliminary evidence suggesting effectiveness of the drug has been obtained, and are intended to gather additional information to evaluate the overall benefit-risk relationship of the drug and provide an adequate basis for physician labeling." These are typically conducted in larger groups (200-40,000) to formally test effectiveness and establish the frequency and severity of side effects compared to no treatment or to currently used treatments ("usual care").
  • Phase IV refers to post-marketing "surveillance" to collect information regarding risks, benefits, and optimal use. This phase can be particularly important for identifying rare, but potentially devastating side effects. Example: Safety of Influenza A (H1N1) Vaccine in Post-marketing Surveillance in China

The Subjects

Population hierarchy.

When clinical trials are performed there is generally a target population or reference population to which one would like to apply the findings. For example, researchers reported on the efficacy of low-dose aspirin in preventing myocardial infarction in women [Ridker P, et al.: A randomized trial of low-dose aspirin in the primary prevention of cardiovascular disease in women. N Engl J Med 2005;352:1293-304]. The reference population was adult females who have not had a myocardial infarction.

The experimental population (study population) are the potential participants, i.e., a practical subset of people who are representative of the reference population. Important practical considerations might include choosing a group that was sufficiently large and likely to produce an adequate number of end points (outcomes of interest) in order to allow valid statistical analysis and a reasonably precise estimate of the measure of effect. The participants would be those who were willing to participate (i.e., consented after being fully informed about the study) and also met eligibility criteria that take into account scientific and safety considerations. For example, an inclusion criterion might be age 45 or older in order to achieve a study sample that would produce a sufficient number of end points. In a study of the effect of aspirin on cardiovascular disease it would also be important to specify exclusion criteria, e.g., people with pre-existing cardiovascular disease or those who were already taking aspirin or anticoagulants for other medical conditions.
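As a sketch, the inclusion/exclusion logic described above can be expressed as a simple screening function. The field names, criteria, and thresholds below are illustrative only, loosely modeled on the aspirin-trial example in the text:

```python
# Illustrative eligibility screen, loosely based on the aspirin-trial
# criteria described above. Field names and thresholds are hypothetical.

def is_eligible(subject):
    """Return True if the subject meets the inclusion criterion
    (age 45 or older) and none of the exclusion criteria."""
    if subject["age"] < 45:                       # inclusion: age 45 or older
        return False
    if subject["prior_cvd"]:                      # exclusion: pre-existing CVD
        return False
    if subject["on_aspirin"] or subject["on_anticoagulants"]:
        return False                              # exclusion: interfering drugs
    return True

candidates = [
    {"age": 52, "prior_cvd": False, "on_aspirin": False, "on_anticoagulants": False},
    {"age": 41, "prior_cvd": False, "on_aspirin": False, "on_anticoagulants": False},
    {"age": 60, "prior_cvd": True,  "on_aspirin": False, "on_anticoagulants": False},
]

study_population = [s for s in candidates if is_eligible(s)]
print(len(study_population))  # only the first candidate qualifies -> 1
```

In a real trial the screened, eligible pool is then further reduced to those who give informed consent, as the Ridker excerpt below shows.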

The following is an excerpt from the report by Ridker et al. describing how they obtained their study population:

 

"In brief, between September 1992 and May 1995, letters of invitation were mailed to more than 1.7 million female health professionals. A total of 453,787 completed the questionnaires, with 65,169 initially willing and eligible to enroll. Women were eligible if they were 45 years of age or older; had no history of coronary heart disease, cerebrovascular disease, cancer (except non-melanoma skin cancer), or other major chronic illness; had no history of side effects to any of the study medications; were not taking aspirin or nonsteroidal anti-inflammatory medications (NSAIDs) more than once a week (or were willing to forego their use during the trial); were not taking anticoagulants or corticosteroids;

 

Eligible women were enrolled in a three-month run-in phase of placebo administration to identify a group likely to be compliant with long-term treatment. A total of 39,876 women were willing, eligible, and compliant during the run-in period and underwent randomization: 19,934 were assigned to receive aspirin and 19,942 to receive placebo."

Internal and External Validity

The eligibility criteria need to balance the needs for internal and external validity. Internal validity refers to the accuracy of the conclusions within that particular study sample, while external validity refers to whether or not the results of a particular study are relevant to a more general population. For example, in 1981 the Physicians' Health Study sent invitation letters, consent forms, and enrollment questionnaires to all 261,248 male physicians between 40 and 84 years of age who lived in the United States and who were registered with the American Medical Association. Less than half responded to the invitation, and only about 59,000 were willing to participate. Of those, 33,223 were both willing and eligible.

These physicians were enrolled in a run-in phase during which all received active aspirin and placebo beta-carotene (both of these were to be tested in a two-by-two factorial design, to be described later in this module). After 18 weeks, participants were sent a questionnaire asking about their health status, side effects, compliance, and willingness to continue in the trial. A total of 11,152 changed their minds, reported a reason for exclusion, or did not reliably take the study pills. The remaining 22,071 physicians were then randomly assigned to experimental groups and followed for the duration of the study. The study was restricted to physicians in order to facilitate follow-up, since all subjects were registered physicians in the AMA. The study excluded female physicians, because in 1981 the number of registered female physicians over the age of 40 was quite small and would not have provided enough statistical power to provide valid results in females. (Note that the exclusion of females is not an example of selection bias. It does not affect the validity of the results of the study but rather the nature of the target population and therefore the generalizability of the results.) The study convincingly demonstrated that the regimen of low-dose aspirin reduced the risk of myocardial infarction (heart attack) in these subjects by about 44%, and the results were reported in 1989 in the New England Journal of Medicine. However, one of the unanswered questions was whether the results were applicable to females (or even to the non-physician population at large). Consequently, the questions about the external validity, i.e. the generalizability, of the study lingered and eventually led to a separate clinical trial in The Women's Health Study. The results were published in 2005 and concluded:

"In this large, primary-prevention trial among women, aspirin lowered the risk of stroke without affecting the risk of myocardial infarction or death from cardiovascular causes, leading to a non-significant finding with respect to the primary end point."

 

Ridker et al.: A randomized trial of low-dose aspirin in the primary prevention of cardiovascular disease in women. N Engl J Med 2005;352:1293-304.

In other words, the effect of aspirin in preventing myocardial infarctions did appear to be different in women and men. 

Sample Size

The major advantage of large randomized clinical trials is that they are the most effective way to reduce confounding. As such, they offer the opportunity to identify small to moderate effects that may be clinically very important. For example, coronary artery disease (CAD) is the most frequent cause of death and disability in the US and worldwide. Consequently, interventions that reduce risk by 15-20% would be extremely important, because so much death and disability is attributed to CAD. While control of confounding makes it easier to accurately assess modest but important effects, it is still necessary to have an adequate sample size in order to produce a measure of association that is reasonably precise. If the study does not have a sufficient sample size (i.e., if it is "underpowered"), the study might fail to identify a meaningful benefit that truly existed, and much time and money would have been wasted on an incorrect conclusion.

Actually, the key factor influencing the power of the study is the number of outcomes (often referred to as "endpoints") rather than study size per se. Of course, increasing study size will increase the number of endpoints, but two other factors that affect the power of the study are the likelihood of the outcome among the study subjects and the duration of the study. For example, both the Physicians' Health Study and the Women's Health Study required participants to be above the age of 40 at the time of enrollment, since younger subjects would be substantially less likely to have a myocardial infarction during the planned follow-up period. The duration of the follow-up period is obviously also relevant, since shorter periods of follow-up will produce fewer events and reduce statistical power.

In order to avoid conducting studies that are underpowered, investigators will perform a series of calculations referred to as sample size estimates. This is not a single calculation, but a series of calculations that, in essence, address "what if" questions. For example, the observational studies that led up to the Physicians' Health Study failed to find statistically significant benefits of aspirin, but they seemed to suggest that if there were a benefit, it would likely be on the order of a 15-30% reduction in risk of myocardial infarction. If one has estimates of the magnitude of risk (the expected cumulative incidence) in the reference population, one can then perform calculations to estimate how many subjects one would need in each of two study groups to detect a given effect, if it existed. For example, if the expected incidence of myocardial infarction over five years in males over 40 years of age were around 5%, and if low-dose aspirin truly reduced the risk by about 20%, then the expected frequencies in the untreated placebo group and the aspirin-treated group would be expected to be 0.05 and 0.04, respectively. The Excel file "Epi_Tools.XLS" has a worksheet entitled "Sample Size" that performs these calculations for you.
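The same calculation can be sketched directly with the standard two-proportion sample size formula (normal approximation). This is not the Epi_Tools worksheet itself, but the usual formula such worksheets implement; the numbers mirror the aspirin example above:

```python
# Sample size per group for comparing two proportions (normal approximation).
from math import sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.90):
    """Subjects needed in each group to detect incidence p1 vs p2
    with a two-sided test at significance level alpha and the given power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~1.28 for 90% power
    p_bar = (p1 + p2) / 2                       # pooled proportion
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

# 5% incidence without aspirin vs 4% with it (a 20% risk reduction):
print(round(n_per_group(0.05, 0.04)))  # a little over 9,000 per group
```

Changing `p1`, `p2`, `alpha`, or `power` answers the other "what if" questions the text describes.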

[Illustration: sample size calculation using the "Sample Size" worksheet in Epi_Tools.XLS]

The illustration above shows a "what if" situation, i.e., what if the frequency of myocardial infarction is 5% without aspirin and 4% with the low-dose aspirin regimen (a 20% reduction in risk). The calculations indicate that in order to have a 90% probability (statistical power) of finding a statistically significant difference using p<0.05 as the criterion of significance, we would need a little over 9,000 subjects in each group. The investigators in the Physicians' Health Study wisely sought a somewhat larger sample than the estimates indicated.

Assignment to Treatments or Regimens

Confounding.

Many factors can influence whether or not a subject will develop an outcome of interest. As a simple example, consider a study with the goal of determining whether physical activity reduces the risk of heart disease. An overly simplistic approach would be to enroll a cohort of subjects without pre-existing heart disease and divide them into exposure groups based on their activity level at the time of enrollment. They could then be followed longitudinally in order to measure and compare the incidence of heart disease in each group. Both groups would likely have subjects with a range of ages, but the 'active' group would probably have a somewhat younger age distribution than the inactive group, because younger people tend to be more active than older people. The problem, of course, is that age is also an independent risk factor for developing heart disease, so we wouldn't be evaluating just the effect of activity. The "risk" in each group is measured as their cumulative incidence of heart disease, but the risk ratio or risk difference that we measure is really going to reflect the sum total of all differences between the groups that influence their probability of developing heart disease. This would include not only differences in age, but also differences in a host of known (and yet to be discovered) risk factors such as smoking, gender, body mass index, blood pressure, family history, medications used, etc. All of these are factors that influence the risk of heart disease, and they confound our estimation of the association between activity and heart disease.

Confounding distorts the measure of association that is our main concern; in the example above, it is the association between activity and heart disease. However, all of these 'other risk factors' can distort the measure of association we are interested in if they are unevenly distributed among the groups we are comparing. The primary advantage to randomized clinical trials is that random assignment of a sufficiently large number of subjects tends to result in similar distributions of all other factors, including factors unknown to us, among the groups . If the groups have the same distributions of all of these other risk factors at baseline (i.e., the beginning of the trial) then they will not distort our estimate of effect (measure of association).

Methods of Assignment

The distinguishing feature of an intervention study is that the investigators assign subjects to a treatment (or "exposure") in order to establish actively treated groups of subjects and a comparison group. There are several means of assigning exposure for the purposes of comparison, many of which do not, in fact, randomly assign subjects to different groups or have too few subjects to rely on the randomization process to balance factors between groups.

  • Historical comparison group : One can simply compare results with an intervention to an historical control group. For example, vascular surgeons at Boston Medical Center wanted to test the efficacy of a "critical pathway," a protocol for patient management after surgery for atherosclerotic occlusions in the arteries of the leg. They compared 67 consecutive patients treated before institution of the pathway with 69 consecutive patients treated with the critical pathway in place. This is a convenient method when there is a sudden shift in treatment or management that is applied to all patients, but the limitation of this approach is an inability to control for confounding factors.
  • Non-random assignment: Non-random assignment methods such as alternate patients or alternate days of the week are not optimal because they are predictable and can be exploited by caretakers either consciously or unconsciously. This may lead to biased assignment.
  • Randomization: Randomized assignment means that all subjects have an equal chance of being allocated to any of the available treatment options. To be effective, it must be done by a method that is unpredictable. One can use published tables of random numbers and simply assign subjects based on the next number listed on the table, or one can use a random number generated by a computer, such as the random number function in Excel. The Epi_Tools application has a worksheet that allows you to specify the number of study groups and then enter a "seed" number that triggers the generation of a random number that specifies which treatment group a subject should be assigned to. The unpredictability means that, if a sufficiently large number of subjects are randomly assigned to treatment groups, the groups will have similar distributions of all characteristics. As a result, both known and unknown confounders will tend to be equally distributed among the study groups. By avoiding an imbalance in other risk factors, the estimate of association is less likely to be influenced by confounding. However, in order to ensure baseline comparability of the groups, the sample size must be sufficiently large. The other advantage to assigning subjects to treatment groups by a random method is that it avoids the potential for bias in assignment. Thus, the two major advantages to random allocation to treatment groups (randomization) are (1) control of confounding by balancing both known and unknown risk factors among the groups, and (2) avoidance of bias in the assignment of subjects to treatment groups.

Importantly, it is the number of units randomized, not the number of people, that determines whether randomization is likely to work. If subjects are randomized individually, then the number of units equals the number of subjects. However, in a group-randomized trial, the number of units is smaller than the number of individuals in the trial. For example, in the trial of peer counseling for smokers in public housing, entire public housing developments were assigned to either the intervention or control arm, so that every participant at a particular development received the same treatment. Twenty developments were randomized. The likelihood that the 10 developments in each arm were balanced on potential confounding factors was the same as if the study consisted of 20 individuals (or the same as the likelihood that flipping a coin 20 times would produce a balanced number of heads and tails), even though there were 500 individuals in the study. In the fluoride trial described previously, even though tens of thousands of people were involved, there were only two cities, and randomization can never balance confounders between two units, whether they are individuals or groups. However, in both these cases, random assignment did avoid the possibility that the investigators would consciously or unconsciously assign the groups based on their feeling about what would be most likely to produce a result consistent with their hypothesis.
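
The effect of the number of randomized units can be sketched with a small simulation: random allocation by shuffling, plus a comparison of how well a binary confounder balances when only 20 units are randomized versus 10,000. The 30% confounder prevalence and all names here are illustrative assumptions, not data from the trials described above:

```python
import random

def randomize(units, n_arms=2, rng=None):
    """Random allocation: shuffle the units unpredictably,
    then deal them round-robin into the study arms."""
    rng = rng or random.Random()
    units = list(units)
    rng.shuffle(units)
    return [units[i::n_arms] for i in range(n_arms)]

def typical_imbalance(n_units, reps, seed=7):
    """Average arm-to-arm difference in the prevalence of a binary
    confounder (30% prevalence) over repeated randomizations."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        confounder = [rng.random() < 0.30 for _ in range(n_units)]
        arm_a, arm_b = randomize(range(n_units), rng=rng)
        total += abs(sum(confounder[i] for i in arm_a) / len(arm_a)
                     - sum(confounder[i] for i in arm_b) / len(arm_b))
    return total / reps

imb_20 = typical_imbalance(20, reps=2000)      # e.g., 20 housing developments
imb_10k = typical_imbalance(10_000, reps=200)  # e.g., 10,000 individuals
```

With only 20 randomized units the typical arm-to-arm difference in confounder prevalence is on the order of ten percentage points, while with 10,000 units it shrinks to a fraction of a percentage point.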

Blinding (Masking), Placebos, and Shams

Blinding (or masking) refers to withholding knowledge about treatment assignment from subjects and/or investigators in order to prevent bias in assessment of subjective outcomes, such as pain relief. There are several schemes for blinding:

  • Single blind : subjects don't know which treatment they are receiving
  • Double blind : neither subjects nor the investigator who is assessing the patient is aware of the treatment assignment until the end of the study
  • Triple blind : This term is sometimes used when the person who administers treatment to the study subjects is kept unaware of the assigned treatment.

Blinding is facilitated by the use of placebo treatments or sham procedures. For example, in a study designed to evaluate the efficacy of arthroscopic surgery in treating painful osteoarthritis of the knee, subjects in the sham surgery group had a small incision placed on the knee under sedation, but arthroscopic surgery was not actually performed. Instead, the surgeons simulated the procedure by asking to be given the usual instruments and manipulating the knee of the subject as if the real procedure were being performed. Sham surgery is more problematic than use of a placebo, because it has the potential for causing harm and because the patient is being actively deceived. (For a detailed discussion on the ethics of sham surgeries, see Miller FG and Kaptchuk TJ: Sham procedures and the ethics of clinical trials. J R Soc Med. 2004 December; 97(12): 576–578.)

  • Placebo : a pharmacologically inert (inactive) substance that is otherwise indistinguishable from the active treatment. When a certain "standard of care" is routine for a given condition, it is probably not ethical to assign subjects to a placebo group.
  • Sham : similar to a placebo, a sham is a fake procedure designed to resemble a real procedure that is being tested for efficacy.

Masking is not always necessary, nor is it always possible. If the primary outcome of interest is definitive and objective, such as death, then masking isn't necessary. In addition, if the treatment is an elaborate surgical procedure, the ethics of doing a sham procedure would be questionable.

The Placebo Effect

The use of placebos and sham procedures facilitates masking and thereby prevents bias in the assessment of subjective outcomes, such as pain relief. However, another major advantage to using them is that they enable investigators to distinguish the degree to which improvements are solely the result of the "placebo effect." When people are enrolled in a study, or prescribed a medication, or offered any medical treatment or care, there is generally an expectation that they will improve or benefit from it. The tendency for people to report improvements even when the treatment has no real therapeutic effect is referred to as "the placebo effect," and it can vary widely in magnitude. In a clinical trial designed to test the effectiveness of glucosamine and chondroitin in relieving symptoms of osteoarthritis, the authors defined the outcome of interest as greater than 20% relief of pain on an analog scale.


In the placebo treated group, 60% reported greater than 20% relief of pain, compared to 67% in the group treated with glucosamine and chondroitin.

Another illustration of the potential impact of the placebo effect is seen in the article below from the New York Times.

Perceptions: Positive Spin Adds to a Placebo's Impact

By NICHOLAS BAKALAR, New York Times, December 27, 2011

In a study published online last week in the online journal PLoS One, researchers explained to 80 volunteers with irritable bowel syndrome that half of them would receive routine treatment and the other half would receive a placebo. They explained to all that this was an inert substance, like a sugar pill, that had been found to "produce significant improvement in I.B.S. symptoms through mind-body self-healing processes." The patients, all treated with the same attention, warmth and empathy by the researchers, were then randomly assigned to get the pill or not.

At the end of three weeks, they tested all the patients with questionnaires assessing the level of their pain and other symptoms. The patients given the sugar pill — in a bottle clearly marked "placebo" — reported significantly better pain relief and greater reduction in the severity of other symptoms than those who got no pill. The authors speculate that the doctors' communication of a positive outcome was one factor in the apparent effectiveness of the placebo.

Compliance and Loss to Follow Up

Non-compliance is the failure to adhere to the study protocol. For example, in the Physicians' Health Study about 15% of the subjects randomized to receive low-dose aspirin did not regularly take the capsules they received, and about 15% of the subjects in the placebo group were actually taking aspirin on a fairly regular basis.

Effects of Non-compliance:

Non-compliance tends to minimize any difference between the groups. As a result, the statistical power to detect a true difference is reduced, and the observed effect will be biased toward the null.

Ways to Maintain Compliance

  • Design protocols and regimens that are as simple and easy as possible to follow and comply with. For example, give pills in convenient "blister packs" with the days of the month on the pack, or use special pill boxes to make it easier to remember to take medications.
  • Enroll motivated and knowledgeable subjects who lead fairly organized lives (e.g., The Physicians' Health Study or The Nurses' Health Study).
  • Paint an accurate picture of what will be required during the enrollment and informed consent process.
  • During enrollment take a careful medical history to try to identify individuals who would have a difficult time complying, and exclude these people. For example, The Physicians' Health Study excluded subjects with a history of gastritis, since they would be less likely to tolerate the aspirin regimen and at higher risk of ulcers and gastritis.
  • When possible, mask the subjects in the comparison group, because if they know they are not receiving an active ingredient, they will be less likely to comply
  • Have your study staff contact subjects frequently to maintain interest and motivation.
  • Conduct a "run-in phase" to identify subjects who are unable or unmotivated to comply.

Assessing Compliance

Even when measures are taken to maximize compliance, it is important to assess compliance in the participants.

  • Ask the subjects if they adhered to the protocol.
  • Collect pill packs to count unused pills.
  • Collect blood or urine samples to assess compliance (measure the active ingredient or an inert marker if possible.)

Loss to Follow Up

Follow-up on subjects can be accomplished through periodic visits and examinations, by phone interviews, by mail questionnaires, or via the Internet. Patients may drop out of a study as a result of loss of interest, adverse reactions, or simply a burdensome protocol that becomes tiring. Others might become lost to follow-up because of death, relocation, or other reasons. Loss to follow-up is a problem for two main reasons:

  • It reduces the effective sample size because the investigators will be missing outcome measures on those who are lost.
  • If follow up rates differ among comparison groups and if attrition is related to the outcome, the results of the study can be biased. This is a special type of selection bias caused not by differences in enrollment, but instead by differential rates of retention that are related to the outcome. For example, a study on the efficacy of high-impact exercise on risk factors for osteoporosis had greater loss to follow-up in the group assigned to exercise (see below). Women who dropped out may have had inherently lower bone strength that made the high-impact regimen less tolerable for them. If so, this would have produced selective loss of women more likely to develop osteoporosis from the active treatment group, and this would cause a biased estimate, specifically an overestimate of the benefit of exercise.


Issues in the Analysis of Clinical Trials

The basic analysis.

The basic data analysis is similar to that of a typical cohort study, and the results can be summarized in a contingency table. One can then compute cumulative incidence or incidence rates, as appropriate. From these, one can calculate the risk ratio, risk difference, p-values and 95% confidence intervals. Most of these calculations can be done quite easily using the Excel worksheet for cohort studies provided in "Epi_Tools.XLS". The illustration below shows the results of analysis of a trial looking at the ability of zidovudine (an anti-retroviral drug used in the treatment and prevention of HIV) to reduce maternal to child transmission.


Data source: Connor EM, et al.: Reduction of maternal-infant transmission of human immunodeficiency virus type 1 with zidovudine treatment. N. Engl. J. Med. 1994;331:1173-1180, as quoted in the textbook by Aschengrau and Seage in Table 7-5, page 191 in the 2nd edition.

The analysis resulted in a risk ratio of 0.33 (a 67% reduction in risk) when zidovudine treatment was compared to placebo-treated controls. The 95% confidence interval for the risk ratio was 0.18-0.60. (This was part of protocol 076 referred to above; this trial was the one that originally demonstrated the efficacy of zidovudine in women in the United States and France.)
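
In general, the risk ratio and an approximate 95% confidence interval can be computed from the four cells of the contingency table. The sketch below uses the standard log-risk-ratio standard error; the counts are hypothetical, not the trial's actual data:

```python
from math import exp, log, sqrt

def risk_ratio_ci(a, n1, c, n0, z=1.96):
    """Risk ratio and approximate 95% CI from a 2x2 table:
    a cases among n1 exposed subjects, c cases among n0 unexposed."""
    rr = (a / n1) / (c / n0)
    se = sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)   # SE of the log risk ratio
    lo, hi = exp(log(rr) - z * se), exp(log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 20/100 outcomes in the treated arm vs 40/100 on placebo
rr, lo, hi = risk_ratio_ci(20, 100, 40, 100)
```

For these hypothetical counts the risk ratio is 0.50 with a 95% confidence interval of roughly 0.32 to 0.79.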

Did Randomization Control for Confounding?

When analyzing or reading a randomized clinical trial, an important consideration is whether or not randomization actually achieved baseline comparability among the groups. This can be assessed by comparing the groups with respect to their characteristics and potential confounding factors at baseline, i.e., at the beginning of the study.


Frequently, a published paper will have an initial table which summarizes the baseline characteristics and compares those using appropriate statistical tests. If the groups are similar with respect to all of these characteristics, then it is more likely that they are similar with respect to other factors as well, and one can assume that randomization was successful. It is important to remember that we can never really know whether randomization was truly successful, because we can only judge based on those baseline characteristics that we have measured. We may not have measured all known confounders, and in any case, we can't have measured the unknown ones. Therefore, the larger the sample size, the more confident we can be that the process of randomization, which relies on the "laws" of chance, has worked to balance baseline characteristics.


Pitfall : According to Rothman, an almost universal mistake in the reporting of clinical trials is to determine whether randomization was successful by comparing the baseline characteristics among the groups using a statistical test and p-values to decide whether confounding occurred. However, the extent to which a confounding factor distorts the measure of effect will depend not only on the extent to which it differs among the groups being compared, but also on the strength of association between the confounder and the outcome (i.e., the risk ratio of the confounding factor.)

If all baseline characteristics are nearly identical in the groups being compared, then there will be little, if any, confounding. However, if some factors appear to differ, the only effective way to determine whether they caused confounding is to calculate the measure of effect before and after adjusting for that factor using either stratification analysis or regression analysis. If the adjusted measure of association differs from the unadjusted measure by more than about 10%, then confounding occurred, and steps should be taken to adjust for it in the final analysis. Bear in mind that, even with randomization of treatment status, differences in other risk factors can occur, just by chance. If this occurs, the appropriate thing to do is to adjust for the imbalances in the analysis, using either stratification or regression analysis. 
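
The before-and-after comparison can be sketched with a small stratified analysis. The strata below are invented for illustration, and the Mantel-Haenszel summary risk ratio is used as a simple stand-in for stratification or regression adjustment:

```python
def crude_rr(strata):
    """Risk ratio ignoring strata. Each stratum is (a, n1, c, n0) =
    (exposed cases, exposed total, unexposed cases, unexposed total)."""
    a = sum(s[0] for s in strata); n1 = sum(s[1] for s in strata)
    c = sum(s[2] for s in strata); n0 = sum(s[3] for s in strata)
    return (a / n1) / (c / n0)

def mantel_haenszel_rr(strata):
    """Summary risk ratio adjusted for the stratification variable."""
    num = sum(a * n0 / (n1 + n0) for a, n1, c, n0 in strata)
    den = sum(c * n1 / (n1 + n0) for a, n1, c, n0 in strata)
    return num / den

# Invented data: the risk ratio is exactly 1.0 within each age stratum
strata = [(10, 200, 5, 100),    # younger stratum: 5% vs 5%
          (20, 100, 40, 200)]   # older stratum: 20% vs 20%

crude = crude_rr(strata)               # about 0.67: appears protective
adjusted = mantel_haenszel_rr(strata)  # 1.00: no effect after adjustment
confounded = abs(crude - adjusted) / adjusted > 0.10   # >10% change
```

Here the crude risk ratio (about 0.67) differs from the adjusted one (1.00) by far more than 10%, so the adjusted estimate should be reported.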

Intention-to-Treat Analysis:

For the primary analysis all subjects should be included in the groups to which they were randomly assigned, even if they did not complete or even receive the appropriate treatment. This is referred to as an "intention to treat" analysis, and it is important because:

  • It preserves baseline comparability and provides control of confounding by known and unknown confounders.
  • It maintains the statistical power of the original study population.
  • Since compliers and non-compliers are likely to differ on important prognostic factors, retention of all subjects in the analysis will reduce bias.
  • It reflects efficacy in everyday practice. In real life, people who have been prescribed certain medications or given advice may not comply for the same reasons that subjects fail to comply in a clinical trial. For this reason, an intention to treat analysis may provide a more accurate measure of the potential benefit to be derived from a new therapy.

Efficacy Analysis

If there has been a problem with compliance, the investigators can also conduct an efficacy analysis (sometimes referred to as a 'secondary analysis'), which compares subjects who actually complied with the assigned protocols. In essence, it determines the efficacy of the new therapy under ideal circumstances, i.e., it tests the benefit of taking the therapy as opposed to the alternative. The problem with an efficacy analysis is that the sample size will be smaller, and it does not control for confounding as rigorously as an intention to treat analysis, because the removal of subjects from either or both groups means the original randomization no longer is in place.

Weighing Risk Versus Benefit

Many reports of epidemiologic studies focus on the strength of association, i.e., risk ratios and rate ratios. However, when trying to weigh decisions for an individual person it is important to consider:

  • The individual's personal circumstances, i.e., what are their risks for the primary outcome?
  • What adverse effects of the proposed treatment are they at risk for?
  • What are the absolute benefits and absolute risks of the proposed therapy?

Is Low Dose Aspirin Beneficial?

A number of descriptive studies suggested that people who took aspirin regularly seemed to have a lower risk of myocardial infarction (heart attack). Observational studies suggested perhaps a 30% reduction in risk of myocardial infarction, but of course the subjects were not randomized, so there were concerns about unrecognized confounding. Several small clinical trials suggested similar reductions, but the sample sizes were too small to arrive at a solid conclusion. In the early 1980s the Physicians' Health Study was conducted to test the hypothesis that 325 mg. of aspirin (one 'adult' sized aspirin) taken every other day would reduce mortality from cardiovascular disease ( N. Engl. J. Med. 320:1238, 1989 ). Male physicians 40 to 84 years of age living in the US in 1980 were eligible to participate. Physicians were excluded if they had a personal history of myocardial infarction, stroke or transient ischemic attack; cancer; current gout; liver, renal or peptic ulcer disease; contraindication to aspirin consumption; current use of aspirin, platelet-active drugs or non-steroidal anti-inflammatory agents; intolerance to aspirin; or inability to comply with the protocol. Eligible subjects who met the inclusion criteria and who successfully completed a run-in phase were randomly assigned to receive aspirin or a placebo. Eventually 22,071 physicians were enrolled; 11,037 were assigned to aspirin, and 11,034 were assigned to placebo. The agents (aspirin and placebo) were identical in appearance and were mailed to the subjects. Each recipient's treatment group was coded, and neither the subject nor the investigators knew which treatment group a given subject was in. The table below summarizes the number of 'events' that had occurred in each treatment group after about 5 years of follow-up. The primary outcome of interest was myocardial infarction, but possible adverse effects of chronic aspirin use, such as stroke, ulcer disease, and bleeding problems were also recorded.

Endpoints                               Aspirin Group   Placebo Group
                                        (N=11,037)      (N=11,034)
Fatal MI                                     10              26
Non-fatal MI                                129             213
Total MI                                    139             239
                                             91              82
                                             23              12
                                            169             138
Upper GI ulcer with bleeding                 38              22
                                          2,979           2,249
Bleeding requiring blood transfusion         48              28

The data clearly show a substantial decrease in the occurrence of both fatal and non-fatal myocardial infarctions among those randomized to aspirin compared to placebo. However, there was also an increased number of hemorrhagic strokes, ulcers, and bleeding problems. These results were controversial at the time. Most of the investigators wanted to continue the study to clarify whether there was an increased risk of stroke. However, the data safety and monitoring board for the study strongly recommended that the study be terminated, because the benefit of aspirin had been clearly demonstrated, and they felt it was unethical to withhold its use from half of the participants.

As an exercise, you can calculate the risk ratios comparing the aspirin and placebo groups on the different outcomes. How much do the risk ratios help you weigh the benefits and risks of aspirin therapy?
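
As a sketch of the exercise (not the published analysis), the risk ratios for total MI and for bleeding requiring transfusion can be computed directly from the table:

```python
n_aspirin, n_placebo = 11_037, 11_034   # subjects in each arm

def risk_ratio(cases_aspirin, cases_placebo):
    """Cumulative incidence in the aspirin arm divided by that in placebo."""
    return (cases_aspirin / n_aspirin) / (cases_placebo / n_placebo)

rr_total_mi = risk_ratio(139, 239)    # roughly a 42% reduction in MI risk
rr_transfusion = risk_ratio(48, 28)   # roughly a 71% increase in serious bleeding
```

The contrast (risk ratio about 0.58 for total MI versus about 1.71 for bleeding requiring transfusion) is why absolute risks, not just ratios, matter when weighing benefit against harm.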


Your recommendations?

Based on the results of your analysis, what recommendations would you make regarding the benefits and risks of this low-dose aspirin regimen? Write down your recommendations based on your analysis of these data.

An alternative approach?

Before you look at the feedback, consider whether there is an alternate way of looking at the data in the table. Would an alternate approach offer any advantages in weighting the risks and benefits?


For "A" Students - Another Useful Tool for Weighing Risk/Benefit

The Cost of Prevention

When thinking about the potential benefit of competing treatment options, one has to consider both the effectiveness of therapy and its cost. An interesting way to think about this is to calculate the number of people you would need to treat in order to prevent one adverse outcome. If you also know the cost of treatment, it is easy to calculate the cost of preventing one adverse outcome with a given treatment or preventive strategy. If you were to calculate this for competing preventive strategies, you would have a convenient way of comparing their cost effectiveness.

You already have the tools to do this. Suppose you were interested in preventing cardiovascular disease in people who were classified as having an increased risk. Statins are a class of drugs that have been demonstrated to be effective in lowering blood levels of cholesterol and significantly reducing the incidence of major cardiac events (heart attack, stroke, severe angina) in patients with elevated cholesterol levels. However, certain groups of people who do not have elevated cholesterol levels are still at increased risk of having a major cardiac event, including people who have elevations in an inflammatory marker called C-reactive protein. In Nov. 2008 the New England Journal of Medicine published the results of a study in which the investigators enrolled 17,802 subjects who had no history of heart disease. All subjects had elevated levels of C-reactive protein, but they all had normal cholesterol levels. Subjects were randomly assigned to receive either the statin Rosuvastatin (Crestor) 20 mg. per day or a placebo that looked identical to the active agent. The drugs were coded, and neither the investigators nor the subjects knew who was receiving the active drug. Subjects were followed for an average duration of about two years.

Several points from the Methods section of the paper:

  • "Follow-up visits were scheduled to occur at 13 weeks and then 6, 12, 18, 24, 30, 36, 42, 48, 54, and 60 months after randomization. Follow-up assessments included laboratory evaluations, pill counts, and structured interviews assessing outcomes and potential adverse events."
  • "The primary outcome was the occurrence of a first major cardiovascular event, defined as nonfatal myocardial infarction, nonfatal stroke, hospitalization for unstable angina, an arterial revascularization procedure, or confirmed death from cardiovascular causes."
  • "All reported primary end points that occurred through March 30, 2008, were adjudicated on the basis of standardized criteria by an independent end-point committee unaware of the randomized treatment assignments. Only deaths classified as clearly due to cardiovascular or cerebrovascular causes by the end-point committee were included in the analysis of the primary end point."

Other important information: CanadianDrugs.com has been selling 20 mg. Crestor for about $2.00 (US) per pill (i.e., $2.00 per day to treat). This was the lowest cost source that I was able to identify.

The following table summarizes the main findings of the study.

Group                  N        Major CVD Events   Person-Years
Crestor (a statin)     8,901    142                18,442
Placebo                8,901    251                18,529

One could easily compute the rate ratio:

Rate Ratio = (142/18,442) / (251/18,529) = 0.57

The 95% confidence interval for the rate ratio is 0.46-0.70, and the p-value < 0.00001.

However, to evaluate cost versus benefit, it would be more useful to consider how much it costs to prevent a single major cardiovascular 'event.' How would you compute this from the data shown in the table and knowing that the cost of Crestor therapy is about $2.00 per day?

One can easily compute this from the rate difference:

Rate Difference = (142/18,442) - (251/18,529) = -0.005847 per person-year

Since the result is a negative number, I can interpret this as a reduction in risk of about 58 major cardiovascular events among 10,000 treated persons over a year. In other words, if we treated 10,000 such subjects with statins for one year, we could expect to prevent 58 major CVD events.

"Number Needed to Treat"

Another way of thinking about the rate difference is to consider how many people one would have to treat for a year in order to prevent a single CVD event. This is often referred to as the "number needed to treat" or NNT.

If 10,000 treated subjects prevented 58.47 events, then the number that would need to be treated to prevent one event is

NNT = 10,000 treated for a year / 58.47 = 171 treated for one year to prevent one event

Note that the NNT is simply the reciprocal of the rate difference for a year, and note also that NNT is conveniently calculated for you in EpiTools.XLS in the worksheet for cohort-type studies.

Rate difference = 58.47 / 10,000 over a year

NNT = 10,000 over a year / 58.47 = 171

Finally, if one needs to treat 171 people for a year to prevent one major CVD event, then the cost of preventing one such event is:

171 x $2.00/day x 365 days = $124,830 per year to prevent one major event

And the cost of treating 10,000 such persons would be $7,300,000 per year .
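
The chain of calculations above can be collected into a short script, using the counts from the table and the assumed price of $2.00 per day:

```python
events_tx, py_tx = 142, 18_442   # Crestor arm: events, person-years
events_pl, py_pl = 251, 18_529   # placebo arm: events, person-years

rate_ratio = (events_tx / py_tx) / (events_pl / py_pl)   # about 0.57
rate_diff = events_tx / py_tx - events_pl / py_pl        # about -0.005847 per person-year

nnt = round(1 / abs(rate_diff))       # about 171 person-years of treatment per event prevented
cost_per_event = nnt * 2.00 * 365     # about $124,830 at $2.00 per day
```

This reproduces the arithmetic above: a rate ratio of 0.57, an NNT of about 171 treated for a year, and a cost of roughly $124,830 to prevent one major cardiovascular event.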

With these calculations in mind, consider the management of a 50 year old who has normal cholesterol levels, but elevated C-reactive protein. This individual would, of course, have to be treated for many years. Would you support or recommend long term treatment of such individuals with Crestor? Why or why not?

Implementation research: what it is and how to do it

  • David H Peters , professor 1 ,
  • Taghreed Adam , scientist 2 ,
  • Olakunle Alonge , assistant scientist 1 ,
  • Irene Akua Agyepong , specialist public health 3 ,
  • Nhan Tran , manager 4
  • 1 Johns Hopkins University Bloomberg School of Public Health, Department of International Health, 615 N Wolfe St, Baltimore, MD 21205, USA
  • 2 Alliance for Health Policy and Systems Research, World Health Organization, CH-1211 Geneva 27, Switzerland
  • 3 University of Ghana School of Public Health/Ghana Health Service, Accra, Ghana
  • 4 Alliance for Health Policy and Systems Research, Implementation Research Platform, World Health Organization, CH-1211 Geneva 27, Switzerland
  • Correspondence to: D H Peters  dpeters{at}jhsph.edu
  • Accepted 8 October 2013

Implementation research is a growing but not well understood field of health research that can contribute to more effective public health and clinical policies and programmes. This article provides a broad definition of implementation research and outlines key principles for how to do it

The field of implementation research is growing, but it is not well understood despite the need for better research to inform decisions about health policies, programmes, and practices. This article focuses on the context and factors affecting implementation, the key audiences for the research, implementation outcome variables that describe various aspects of how implementation occurs, and the study of implementation strategies that support the delivery of health services, programmes, and policies. We provide a framework for using the research question as the basis for selecting among the wide range of qualitative, quantitative, and mixed methods that can be applied in implementation research, along with brief descriptions of methods specifically suitable for implementation research. Expanding the use of well designed implementation research should contribute to more effective public health and clinical policies and programmes.

Defining implementation research

Implementation research attempts to solve a wide range of implementation problems; it has its origins in several disciplines and research traditions (supplementary table A). Although progress has been made in conceptualising implementation research over the past decade, 1 considerable confusion persists about its terminology and scope. 2 3 4 The word “implement” comes from the Latin “implere,” meaning to fulfil or to carry into effect. 5 This provides a basis for a broad definition of implementation research that can be used across research traditions and has meaning for practitioners, policy makers, and the interested public: “Implementation research is the scientific inquiry into questions concerning implementation—the act of carrying an intention into effect, which in health research can be policies, programmes, or individual practices (collectively called interventions).”

Implementation research can consider any aspect of implementation, including the factors affecting implementation, the processes of implementation, and the results of implementation, including how to introduce potential solutions into a health system or how to promote their large scale use and sustainability. The intent is to understand what, why, and how interventions work in “real world” settings and to test approaches to improve them.

Principles of implementation research

Implementation research seeks to understand and work within real world conditions, rather than trying to control for these conditions or to remove their influence as causal effects. This implies working with populations that will be affected by an intervention, rather than selecting beneficiaries who may not represent the target population of an intervention (such as studying healthy volunteers or excluding patients who have comorbidities).

Context plays a central role in implementation research. Context can include the social, cultural, economic, political, legal, and physical environment, as well as the institutional setting, comprising various stakeholders and their interactions, and the demographic and epidemiological conditions. The structure of the health systems (for example, the roles played by governments, non-governmental organisations, other private providers, and citizens) is particularly important for implementation research on health.

Implementation research is especially concerned with the users of the research and not purely the production of knowledge. These users may include managers and teams using quality improvement strategies, executive decision makers seeking advice for specific decisions, policy makers who need to be informed about particular programmes, practitioners who need to be convinced to use interventions that are based on evidence, people who are influenced to change their behaviour to have a healthier life, or communities who are conducting the research and taking action through the research to improve their conditions (supplementary table A). One important implication is that often these actors should be intimately involved in the identification, design, and conduct phases of research and not just be targets for dissemination of study results.

Implementation outcome variables

Implementation outcome variables describe the intentional actions to deliver services. 6 These implementation outcome variables—acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, coverage, and sustainability—can all serve as indicators of the success of implementation (table 1). Implementation research uses these variables to assess how well implementation has occurred or to provide insights about how implementation contributes to health status and other important health outcomes.

Table 1 Implementation outcome variables

Implementation strategies

Curran and colleagues defined an “implementation intervention” as a method to “enhance the adoption of a ‘clinical’ intervention,” such as the use of job aids, provider education, or audit procedures. 7 The concept can be broadened to any type of strategy that is designed to support a clinical or population and public health intervention (for example, outreach clinics and supervision checklists are implementation strategies used to improve the coverage and quality of immunisation).

A review of ways to improve health service delivery in low and middle income countries identified a wide range of successful implementation strategies (supplementary table B). 8 Even in the most resource constrained environments, measuring change, informing stakeholders, and using information to guide decision making were found to be critical to successful implementation.

Implementation influencing variables

Other factors that influence implementation may need to be considered in implementation research. Sabatier summarised a set of such factors that influence policy implementation (clarity of objectives, causal theory, implementing personnel, support of interest groups, and managerial authority and resources). 9

The large array of contextual factors that influence implementation, interact with each other, and change over time highlights the fact that implementation often occurs as part of complex adaptive systems. 10 Some implementation strategies are particularly suitable for working in complex systems. These include strategies to provide feedback to key stakeholders and to encourage learning and adaptation by implementing agencies and beneficiary groups. Such strategies have implications for research, as the study methods need to be sufficiently flexible to account for changes or adaptations in what is actually being implemented. 8 11 Research designs that depend on a single, fixed intervention, such as a typical randomised controlled trial, are not appropriate for studying phenomena that change, especially when they change in unpredictable and variable ways.

Another implication of studying complex systems is that the research may need to use multiple methods and different sources of information to understand an implementation problem. Because implementation activities and effects are not usually static or linear processes, research designs often need to be able to observe and analyse these sometimes iterative and changing elements at several points in time and to consider unintended consequences.

Implementation research questions

As in other types of health systems research, the research question is the king in implementation research. Implementation research takes a pragmatic approach, placing the research question (or implementation problem) as the starting point to inquiry; this then dictates the research methods and assumptions to be used. Implementation research questions can cover a wide variety of topics and are frequently organised around theories of change or the type of research objective (examples are in supplementary table C). 12 13

Implementation research can overlap with other types of research used in medicine and public health, and the distinctions are not always clear cut. A range of implementation research exists, based on the centrality of implementation in the research question, the degree to which the research takes place in a real world setting with routine populations, and the role of implementation strategies and implementation variables in the research (figure).

Spectrum of implementation research 33


A more detailed description of the research question can help researchers and practitioners to determine the type of research methods that should be used. In table 2, we break down the research question first by its objective: to explore, describe, influence, explain, or predict. This is followed by a typical implementation research question based on each objective. Finally, we describe a set of research methods for each type of research question.

Table 2 Type of implementation research objective, implementation question, and research methods

Much of evidence based medicine is concerned with the objective of influence, or whether an intervention produces an expected outcome, which can be broken down further by the level of certainty in the conclusions drawn from the study. The nature of the inquiry (for example, the amount of risk and considerations of ethics, costs, and timeliness), and the interests of different audiences, should determine the level of uncertainty. 8 14 Research questions concerning programmatic decisions about the process of an implementation strategy may justify a lower level of certainty for the manager and policy maker, using research methods that would support an adequacy or plausibility inference. 14 Where a high risk of harm exists and sufficient time and resources are available, a probability study design might be more appropriate, in which the result in an area where the intervention is implemented is compared with areas without implementation with a low probability of error (for example, P<0.05). These differences in the level of confidence affect the study design in terms of sample size and the need for concurrent or randomised comparison groups. 8 14
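As a rough illustration of the statistical machinery a "probability" design implies, the following sketch runs a two-proportion z-test comparing event rates in areas with and without the intervention. The event counts and the helper function are invented for the example, not taken from the article:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test comparing event proportions in two areas."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Invented data: 80/1000 events in implementation areas vs 120/1000 in comparison areas
z, p = two_proportion_z(80, 1000, 120, 1000)
print(round(z, 2), p < 0.05)  # -2.98 True
```

With these invented counts the difference clears the P<0.05 threshold; halving the sample sizes would not, which is the sample-size trade-off the paragraph above refers to.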

Implementation specific research methods

A wide range of qualitative and quantitative research methods can be used in implementation research (table 2). The box gives a set of basic questions to guide the design or reporting of implementation research that can be used across methods. More in-depth criteria have also been proposed to assess the external validity or generalisability of findings. 15 Some research methods have been developed specifically to deal with implementation research questions or are particularly suitable to implementation research, as identified below.

Key questions to assess research designs or reports on implementation research 33

Does the research clearly aim to answer a question concerning implementation?

Does the research clearly identify the primary audiences for the research and how they would use the research?

Is there a clear description of what is being implemented (for example, details of the practice, programme, or policy)?

Does the research involve an implementation strategy? If so, is it described and examined in its fullness?

Is the research conducted in a “real world” setting? If so, are the context and sample population described in sufficient detail?

Does the research appropriately consider implementation outcome variables?

Does the research appropriately consider context and other factors that influence implementation?

Does the research appropriately consider changes over time and the level of complexity of the system, including unintended consequences?

Pragmatic trials

Pragmatic trials, or practical trials, are randomised controlled trials in which the main research question focuses on effectiveness of an intervention in a normal practice setting with the full range of study participants. 16 This may include pragmatic trials on new healthcare delivery strategies, such as integrated chronic care clinics or nurse run community clinics. This contrasts with typical randomised controlled trials that look at the efficacy of an intervention in an “ideal” or controlled setting and with highly selected patients and standardised clinical outcomes, usually of a short term nature.

Effectiveness-implementation hybrid trials

Effectiveness-implementation hybrid designs are intended to assess the effectiveness of both an intervention and an implementation strategy. 7 These studies include components of an effectiveness design (for example, randomised allocation to intervention and comparison arms) but add the testing of an implementation strategy, which may also be randomised. This might include testing the effectiveness of a package of delivery and postnatal care in under-served areas, as well as testing several strategies for providing the care. Whereas pragmatic trials try to fix the intervention under study, effectiveness-implementation hybrids also intervene in and/or observe the implementation process as it actually occurs. This can be done by assessing implementation outcome variables.

Quality improvement studies

Quality improvement studies typically involve a set of structured and cyclical processes, often called the plan-do-study-act cycle, and apply scientific methods on a continuous basis to formulate a plan, implement the plan, and analyse and interpret the results, followed by an iteration of what to do next. 17 18 The focus might be on a clinical process, such as how to reduce hospital acquired infections in the intensive care unit, or management processes such as how to reduce waiting times in the emergency room. Guidelines exist on how to design and report such research—the Standards for Quality Improvement Reporting Excellence (SQUIRE). 17

Speroff and O’Connor describe a range of plan-do-study-act research designs, noting that they have in common the assessment of responses measured repeatedly and regularly over time, either in a single case or with comparison groups. 18 Balanced scorecards integrate performance measures across a range of domains and feed into regular decision making. 19 20 Standardised guidance for using good quality health information systems and health facility surveys has been developed and often provides the sources of information for these quasi-experimental designs. 21 22 23
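To make the cycle described above concrete, here is a minimal sketch of a plan-do-study-act loop over repeated measurements; the waiting-time data, target, and process-change placeholder are invented for illustration, not drawn from the article:

```python
def pdsa(measure, act, target, max_cycles=5):
    """Repeat plan-do-study-act until the measured value meets the target."""
    history = []
    for cycle in range(1, max_cycles + 1):
        value = measure()      # "study": collect the routine measurement
        history.append(value)
        if value <= target:    # e.g. waiting time at or below target
            break
        act(cycle)             # "act": adjust the process, then re-plan

    return history

# Invented example: emergency-room waiting time (minutes) falling with each change
waits = iter([62, 51, 44, 38])
history = pdsa(measure=lambda: next(waits),
               act=lambda cycle: None,   # placeholder for a real process change
               target=40)
print(history)  # [62, 51, 44, 38]
```

The point of the structure, as Speroff and O’Connor note, is the repeated and regular measurement: `history` is exactly the time series a run chart or quasi-experimental comparison would analyse.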

Participatory action research

Participatory action research refers to a range of research methods that emphasise participation and action (that is, implementation), using methods that involve iterative processes of reflection and action, “carried out with and by local people rather than on them.” 24 In participatory action research, a distinguishing feature is that the power and control over the process rests with the participants themselves. Although most participatory action methods involve qualitative methods, quantitative and mixed methods techniques are increasingly being used, such as for participatory rural appraisal or participatory statistics. 25 26

Mixed methods

Mixed methods research uses both qualitative and quantitative methods of data collection and analysis in the same study. Although not designed specifically for implementation research, mixed methods are particularly suitable because they provide a practical way to understand multiple perspectives, different types of causal pathways, and multiple types of outcomes—all common features of implementation research problems.

Many different schemes exist for describing different types of mixed methods research, on the basis of the emphasis of the study, the sampling schemes for the different components, the timing and sequencing of the qualitative and quantitative methods, and the level of mixing between the qualitative and quantitative methods. 27 28 Broad guidance on the design and conduct of mixed methods designs is available. 29 30 31 A scheme for good reporting of mixed methods studies involves describing the justification for using a mixed methods approach to the research question; describing the design in terms of the purpose, priority, and sequence of methods; describing each method in terms of sampling, data collection, and analysis; describing where the integration has occurred, how it has occurred, and who has participated in it; describing any limitation of one method associated with the presence of the other method; and describing any insights gained from mixing or integrating methods. 32

Implementation research aims to cover a wide set of research questions, implementation outcome variables, factors affecting implementation, and implementation strategies. This paper has identified a range of qualitative, quantitative, and mixed methods that can be used according to the specific research question, as well as several research designs that are particularly suited to implementation research. Further details of these concepts can be found in a new guide developed by the Alliance for Health Policy and Systems Research. 33

Summary points

Implementation research has its origins in many disciplines and is usefully defined as scientific inquiry into questions concerning implementation—the act of fulfilling or carrying out an intention

In health research, these intentions can be policies, programmes, or individual practices (collectively called interventions)

Implementation research seeks to understand and work in “real world” or usual practice settings, paying particular attention to the audience that will use the research, the context in which implementation occurs, and the factors that influence implementation

A wide variety of qualitative, quantitative, and mixed methods techniques can be used in implementation research, which are best selected on the basis of the research objective and specific questions related to what, why, and how interventions work

Implementation research may examine strategies that are specifically designed to improve the carrying out of health interventions or assess variables that are defined as implementation outcomes

Implementation outcomes include acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, coverage, and sustainability

Cite this as: BMJ 2013;347:f6753

Contributors: All authors contributed to the conception and design, analysis and interpretation, drafting the article, or revising it critically for important intellectual content, and all gave final approval of the version to be published. NT had the original idea for the article, which was discussed by the authors (except OA) as well as George Pariyo, Jim Sherry, and Dena Javadi at a meeting at the World Health Organization (WHO). DHP and OA did the literature reviews, and DHP wrote the original outline and the draft manuscript, tables, and boxes. OA prepared the original figure. All authors reviewed the draft article and made substantial revisions to the manuscript. DHP is the guarantor.

Funding: Funding was provided by the governments of Norway and Sweden and the UK Department for International Development (DFID) in support of the WHO Implementation Research Platform, which financed a meeting of authors and salary support for NT. DHP is supported by the Future Health Systems research programme consortium, funded by DFID for the benefit of developing countries (grant number H050474). The funders played no role in the design, conduct, or reporting of the research.

Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: support for the submitted work as described above; NT and TA are employees of the Alliance for Health Policy and Systems Research at WHO, which is supporting their salaries to work on implementation research; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

Provenance and peer review: Invited by journal; commissioned by WHO; externally peer reviewed.

1. Brownson RC, Colditz GA, Proctor EK, eds. Dissemination and implementation research in health: translating science to practice. Oxford University Press, 2012.
2. Ciliska D, Robinson P, Armour T, Ellis P, Brouwers M, Gauld M, et al. Diffusion and dissemination of evidence-based dietary strategies for the prevention of cancer. Nutr J 2005;4(1):13.
3. Remme JHF, Adam T, Becerra-Posada F, D’Arcangues C, Devlin M, Gardner C, et al. Defining research to improve health systems. PLoS Med 2010;7:e1001000.
4. McKibbon KA, Lokker C, Mathew D. Implementation research. 2012. http://whatiskt.wikispaces.com/Implementation+Research.
5. The compact edition of the Oxford English dictionary. Oxford University Press, 1971.
6. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health 2010;38:65-76.
7. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care 2012;50:217-26.
8. Peters DH, El-Saharty S, Siadat B, Janovsky K, Vujicic M, eds. Improving health services in developing countries: from evidence to action. World Bank, 2009.
9. Sabatier PA. Top-down and bottom-up approaches to implementation research. J Public Policy 1986;6(1):21-48.
10. Paina L, Peters DH. Understanding pathways for scaling up health services through the lens of complex adaptive systems. Health Policy Plan 2012;27:365-73.
11. Gilson L, ed. Health policy and systems research: a methodology reader. World Health Organization, 2012.
12. Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med 2012;43:337-50.
13. Improved Clinical Effectiveness through Behavioural Research Group (ICEBeRG). Designing theoretically-informed implementation interventions. Implement Sci 2006;1:4.
14. Habicht JP, Victora CG, Vaughn JP. Evaluation designs for adequacy, plausibility, and probability of public health programme performance and impact. Int J Epidemiol 1999;28:10-8.
15. Green LW, Glasgow RE. Evaluating the relevance, generalization, and applicability of research. Eval Health Prof 2006;29:126-53.
16. Zwarenstein M, Treweek S, Gagnier JJ, Altman DG, Tunis S, Haynes B, et al, for the CONSORT and Pragmatic Trials in Healthcare (Practihc) Groups. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ 2008;337:a2390.
17. Davidoff F, Batalden P, Stevens D, Ogrinc G, Mooney SE, for the SQUIRE Development Group. Publication guidelines for quality improvement in health care: evolution of the SQUIRE project. Qual Saf Health Care 2008;17(suppl I):i3-9.
18. Speroff T, O’Connor GT. Study designs for PDSA quality improvement research. Q Manage Health Care 2004;13(1):17-32.
19. Peters DH, Noor AA, Singh LP, Kakar FK, Hansen PM, Burnham G. A balanced scorecard for health services in Afghanistan. Bull World Health Organ 2007;85:146-51.
20. Edward A, Kumar B, Kakar F, Salehi AS, Burnham G, Peters DH. Configuring balanced scorecards for measuring health systems performance: evidence from five years’ evaluation in Afghanistan. PLoS Med 2011;7:e1001066.
21. Health Facility Assessment Technical Working Group. Profiles of health facility assessment methods. MEASURE Evaluation, USAID, 2008.
22. Hotchkiss D, Diana M, Foreit K. How can routine health information systems improve health systems functioning in low-resource settings? Assessing the evidence base. MEASURE Evaluation, USAID, 2012.
23. Lindelow M, Wagstaff A. Assessment of health facility performance: an introduction to data and measurement issues. In: Amin S, Das J, Goldstein M, eds. Are you being served? New tools for measuring service delivery. World Bank, 2008:19-66.
24. Cornwall A, Jewkes R. What is participatory research? Soc Sci Med 1995;41:1667-76.
25. Mergler D. Worker participation in occupational health research: theory and practice. Int J Health Serv 1987;17:151.
26. Chambers R. Revolutions in development inquiry. Earthscan, 2008.
27. Creswell JW, Plano Clark VL. Designing and conducting mixed methods research. Sage Publications, 2011.
28. Tashakkori A, Teddlie C. Mixed methodology: combining qualitative and quantitative approaches. Sage Publications, 2003.
29. Leech NL, Onwuegbuzie AJ. Guidelines for conducting and reporting mixed research in the field of counseling and beyond. Journal of Counseling and Development 2010;88:61-9.
30. Creswell JW. Mixed methods procedures. In: Research design: qualitative, quantitative and mixed methods approaches. 3rd ed. Sage Publications, 2009.
31. Creswell JW, Klassen AC, Plano Clark VL, Clegg Smith K. Best practices for mixed methods research in the health sciences. National Institutes of Health, Office of Behavioral and Social Sciences Research, 2011.
32. O’Cathain A, Murphy E, Nicholl J. The quality of mixed methods studies in health services research. J Health Serv Res Policy 2008;13:92-8.
33. Peters DH, Tran N, Adam T, Ghaffar A. Implementation research in health: a practical guide. Alliance for Health Policy and Systems Research, World Health Organization, 2013.

Further reading

  • Rogers EM. Diffusion of innovations. 5th ed. Free Press, 2003.
  • Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S. A conceptual framework for implementation fidelity. Implement Sci 2007;2:40.
  • Victora CG, Schellenberg JA, Huicho L, Amaral J, El Arifeen S, Pariyo G, et al. Context matters: interpreting impact findings in child survival evaluations. Health Policy Plan 2005;20(suppl 1):i18-31.


Definition of a health-related intervention

The University of Waterloo Research Ethics Boards (REBs) have adopted the following definition of a health-related intervention: “An activity or set of activities aimed at modifying a process, course of action or sequence of events in order to change one or several of their characteristics such as performance or expected outcome” (International Classification of Health Interventions Training Manual, 2021, p. 6).

Examples of health-related interventions involving study participants are listed below:

  • Use of a drug, eye care solution, or medication (marketed or investigational) to assess a positive change in health outcome as a result of the drug, solution, or medication.
  • Consuming a food or vitamin supplement to assess a change in nutritional status, insulin level, or other metabolic measures.
  • Taking part in a new urine screening test that will be used to determine or rule out a certain disease such as diabetes.
  • Testing of a new exercise program with people who have had a mild ischemic stroke to reduce the risk of a subsequent stroke by improving fitness levels.
  • Psychotherapeutic approach to a behavioural disorder or other mental illness comparing the outcomes of two or more patient populations with the same diagnosis but receiving different therapies or a trial comparing the outcome of those who have received the therapy with those who are on the waiting list for treatment.

Research is indispensable for resolving public health challenges – whether it be tackling diseases of poverty, responding to the rise of chronic diseases, or ensuring that mothers have access to safe delivery practices.

Likewise, shared vulnerability to global threats, such as severe acute respiratory syndrome, Ebola virus disease, Zika virus and avian influenza has mobilized global research efforts in support of enhancing capacity for preparedness and response. Research is strengthening surveillance, rapid diagnostics and development of vaccines and medicines.

Public-private partnerships and other innovative mechanisms for research are concentrating on neglected diseases in order to stimulate the development of vaccines, drugs and diagnostics where market forces alone are insufficient.

Research for health spans 5 generic areas of activity:

  • measuring the magnitude and distribution of the health problem;
  • understanding the diverse causes or the determinants of the problem, whether they are due to biological, behavioural, social or environmental factors;
  • developing solutions or interventions that will help to prevent or mitigate the problem;
  • implementing or delivering solutions through policies and programmes; and
  • evaluating the impact of these solutions on the level and distribution of the problem.

High-quality research is essential to fulfilling WHO’s mandate for the attainment by all peoples of the highest possible level of health. One of the Organization’s core functions is to set international norms, standards and guidelines, including setting international standards for research.

Under the “WHO strategy on research for health”, the Organization works to identify research priorities, and promote and conduct research with the following 4 goals:

  • Capacity - build capacity to strengthen health research systems within Member States.
  • Priorities - support the setting of research priorities that meet health needs particularly in low- and middle-income countries.
  • Standards - develop an enabling environment for research through the creation of norms and standards for good research practice.
  • Translation - ensure quality evidence is turned into affordable health technologies and evidence-informed policy.


TOOLS AND TECHNIQUES

Tools for Implementing an Evidence-Based Approach in Public Health Practice

Julie A. Jacobs, MPH; Ellen Jones, PhD; Barbara A. Gabella, MSPH; Bonnie Spring, PhD; Ross C. Brownson, PhD

Suggested citation for this article: Jacobs JA, Jones E, Gabella BA, Spring B, Brownson RC. Tools for Implementing an Evidence-Based Approach in Public Health Practice. Prev Chronic Dis 2012;9:110324. DOI: http://dx.doi.org/10.5888/pcd9.110324 .

PEER REVIEWED

Increasing disease rates, limited funding, and the ever-growing scientific basis for intervention demand the use of proven strategies to improve population health. Public health practitioners must be ready to implement an evidence-based approach in their work to meet health goals and sustain necessary resources. We researched easily accessible and time-efficient tools for implementing an evidence-based public health (EBPH) approach to improve population health. Several tools have been developed to meet EBPH needs, including free online resources in the following topic areas: training and planning tools, US health surveillance, policy tracking and surveillance, systematic reviews and evidence-based guidelines, economic evaluation, and gray literature. Key elements of EBPH are engaging the community in assessment and decision making; using data and information systems systematically; making decisions on the basis of the best available peer-reviewed evidence (both quantitative and qualitative); applying program-planning frameworks (often based in health-behavior theory); conducting sound evaluation; and disseminating what is learned.


An ever-expanding evidence base, detailing programs and policies that have been scientifically evaluated and proven to work, is available to public health practitioners. The practice of evidence-based public health (EBPH) is an integration of science-based interventions with community preferences for improving population health (1). The concept of EBPH evolved at the same time as discourse on evidence-based practice in the disciplines of medicine, nursing, psychology, and social work. Scholars in these related fields seem to agree that the evidence-based decision-making process integrates 1) best available research evidence, 2) practitioner expertise and other available resources, and 3) the characteristics, needs, values, and preferences of those who will be affected by the intervention (Figure) (2-5).

Figure. Domains that influence evidence-based decision making. Source: Satterfield JM et al (2).

Public health decision making is a complicated process because of complex inputs and group decision making. Public health evidence often derives from cross-sectional studies and quasi-experimental studies, rather than the so-called “gold standard” of randomized controlled trials often used in clinical medicine. Study designs in public health sometimes lack a comparison group, and the interpretation of study results may have to account for multiple caveats. Public health interventions are seldom a single intervention and often involve large-scale environmental or policy changes that address the needs and balance the preferences of large, often diverse, groups of people.

The formal training of the public health workforce varies more than training in medicine or other clinical disciplines (6). Fewer than half of public health workers have formal training in a public health discipline such as epidemiology or health education (7). No single credential or license certifies a public health practitioner, although voluntary credentialing has begun through the National Board of Public Health Examiners (6). The multidisciplinary approach of public health is often a critical aspect of its successes, but this high level of heterogeneity also means that multiple perspectives must be considered in the decision-making process.

Despite the benefits and efficiencies associated with evidence-based programs or policies, many public health interventions are implemented on the basis of political or media pressures, anecdotal evidence, or “the way it’s always been done” (8,9). Barriers such as lack of funding, skilled personnel, incentives, and time, along with limited buy-in from leadership and elected officials, impede the practice of EBPH (8-12). The wide-scale implementation of EBPH requires not only a workforce that understands and can implement EBPH efficiently but also sustained support from health department leaders, practitioners, and policy makers.

Calls for practitioners to include the concepts of EBPH in their work are increasing as the United States embarks upon the 10-year national agenda for health goals and objectives that constitutes the Healthy People 2020 initiative. The very mission of Healthy People 2020 asks for multisectoral action “to strengthen policies and improve practices that are driven by the best available evidence and knowledge” (13).

Funders, especially federal agencies, often require programs to be evidence-based. The American Recovery and Reinvestment Act of 2009 allocated $650 million to “carry out evidence-based clinical and community-based prevention and wellness strategies . . . that deliver specific, measurable health outcomes that address chronic disease rates” (14). The Patient Protection and Affordable Care Act of 2010 mentions “evidence-based” 13 times in Title IV, Prevention of Chronic Disease and Improving Public Health, and will provide $900 million in funding to 75 communities during 5 years through Community Transformation Grants (15).

Federal funding in states, cities, and tribes, and in both urban and rural areas, creates an expectation for EBPH at all levels of practice. Because formal public health training in the workforce is lacking (7), on-the-job training and skills development are needed. The need may be even greater in local health departments, where practitioners may be less aware of and slower to adopt evidence-based guidelines than state practitioners (16) and where training resources may be more limited.

Core Competencies for Public Health Professionals (17) emerged on the basis of recommendations of the Institute of Medicine’s 1988 report The Future of the Public’s Health . Last updated in May 2010, these 74 competencies represent a “set of skills desirable for the broad practice of public health,” and they are compatible with the skills needed for EBPH (3). Elements of state chronic disease programs and competencies endorsed by the National Association of Chronic Disease Directors are also compatible with EBPH (18).

In addition to efforts to establish competencies and certification for individual practitioners, voluntary accreditation for health departments is now offered through the Public Health Accreditation Board (PHAB). Tribal, state, and local health departments may seek this accreditation to document capacity to deliver the 3 core functions of public health and the Ten Essential Public Health Services (19). One of 12 domains specified by the PHAB as a required level of achievement is “to contribute to and apply the evidence base of public health” (19). This domain emphasizes the importance of the best available evidence and the role of health departments in adding to evidence for promising practices (19).

Several programs have been developed to meet EBPH training needs, including free, online resources (Box 1).

Evidence-Based Public Health (http://prcstl.wustl.edu/EBPH/Pages/EvidenceBasedPublicHealthCourse.aspx). Features slides from the course developed by the Prevention Research Center in St. Louis.

Evidence-Based Behavioral Project Training Portal (www.ebbp.org). Nine modules illustrate the evidence-based practice process for both individual and population-based approaches. Continuing education credits are available for social workers, psychologists, physicians, and nurses.

Evidence-Based Public Health Online Course (http://ebph.ihrp.uic.edu). Produced through the University of Illinois at Chicago’s Institute for Health Research and Policy, this online course provides an overview of the EBPH process and includes additional resources and short quizzes.

Cancer Control P.L.A.N.E.T. (http://cancercontrolplanet.cancer.gov). The P.L.A.N.E.T. portal walks practitioners through an evidence-based process for cancer control, providing easy access to data and evidence-based resources. Topics include diet/nutrition, physical activity, tobacco control, and more. Step 4 includes practical details on interventions such as time and resources required and suitable settings.

The Community Tool Box (http://ctb.ku.edu). This comprehensive resource offers more than 7,000 pages of practical guidance on a wide range of skills essential for promoting community health. Tool kits (under “Do the Work” tab) provide outlines, examples, and links to tools for topics such as community assessment and evaluation.

Community Health Assessment and Group Evaluation (CHANGE) Tool and Action Guide (www.cdc.gov/healthycommunitiesprogram/tools/change.htm). Developed by the Centers for Disease Control and Prevention (CDC), this tool focuses on assessment and planning. It provides Microsoft Excel (Microsoft, Redmond, Washington) templates for collecting data in 5 sectors: community-at-large, community institutions/organizations, health care, school, and worksite. It is recommended for prioritizing action planning and tracking annual progress in key policy and environmental strategies.

Mobilizing for Action through Planning and Partnerships (MAPP) (www.naccho.org/topics/infrastructure/mapp/index.cfm). The MAPP model, developed by the National Association of County and City Health Officials, guides practitioners through a complete planning process, from beginning organizational steps through assessment and action planning, implementation, and evaluation. The website contains a comprehensive user handbook, a clearinghouse of resources, and stories from the field.

YMCA Community Healthy Living Index (www.ymca.net/communityhealthylivingindex). This site provides assessment tools and planning guides for 6 key community settings: after-school child care sites, early childhood programs, neighborhoods, schools, worksites, and the community at large.

CDC Program Evaluation (www.cdc.gov/eval/index.htm). This site contains step-by-step manuals and other evaluation resources, including the CDC Framework for Program Evaluation.

Behavioral Risk Factor Surveillance System (BRFSS) (www.cdc.gov/brfss). BRFSS tracks health conditions and risk behaviors annually, using a standard core questionnaire that allows state-specific data to be compared across strata. An interactive menu generates prevalence and trend data by age, sex, race/ethnicity, education, and income level. The SMART (Selected Metropolitan/Micropolitan Area Risk Trends) project provides local data for selected cities and counties.

CDC WONDER (http://wonder.cdc.gov/). CDC WONDER (Wide-ranging Online Data for Epidemiologic Research) provides a single point of access to public health surveillance data and a wide variety of CDC reports, guidelines, and reference materials. Data sets available for query include mortality, natality, cancer incidence, HIV/AIDS, and more.

Youth Risk Behavior Surveillance System (YRBSS) (www.cdc.gov/healthyyouth/yrbs). YRBSS monitors priority health-risk behaviors and the prevalence of obesity and asthma among youth and young adults in the United States.

County Health Rankings (www.countyhealthrankings.org/). Counties in each of the 50 states are ranked according to surveillance data on health outcomes and a broad range of health factors. For each state, data can be downloaded as a Microsoft Excel file; links for relevant state-specific data websites are provided.

National Conference of State Legislators (NCSL) (www.ncsl.org/). NCSL provides access to current state and federal legislation and a comprehensive list of state documents, including state statutes, constitutions, legislative audits, and research reports.

Yale Rudd Center for Food Policy and Obesity (www.yaleruddcenter.org/). This site provides a legislation database for federal and state policies on food policy and obesity topics such as breastfeeding, body mass index screenings, and school nutrition.

State Cancer Legislative Database Program (www.scld-nci.net/). The National Cancer Institute maintains this database of state cancer-related health policy.

The Guide to Community Preventive Services (the Community Guide) (www.thecommunityguide.org). The Task Force on Community Preventive Services has systematically reviewed more than 200 interventions to produce evidence-based recommendations on population-level interventions. Topics currently include adolescent health, alcohol, asthma, birth defects, cancer, diabetes, health communication, health equity, HIV/AIDS, sexually transmitted infections and pregnancy, mental health, motor vehicle injury, nutrition, obesity, oral health, physical activity, the social environment, tobacco use, vaccines, violence, and worksite health.

The Cochrane Library (www.cochrane.org). More than 5,000 systematic reviews are published in the Cochrane Library, including clinical and population-based interventions and economic evaluations. The Cochrane Public Health Group produces reviews on the effects of population-level interventions (www.ph.cochrane.org).

The Campbell Collaboration (www.campbellcollaboration.org). This international research network produces systematic reviews in education, crime and justice, and social welfare.

Cost-Effectiveness Analysis Registry (https://research.tufts-nemc.org/cear4/home.aspx). This registry offers detailed information on nearly 3,000 cost-effectiveness analyses covering a wide array of diseases and intervention types.

New York Academy of Medicine, Grey Literature Report (www.nyam.org/library/online-resources/grey-literature-report). This bimonthly publication alerts readers to new gray literature on selected public health topics.

The Mississippi State Department of Health (MSDH) sponsored an EBPH course, led by faculty from the Prevention Research Center in St. Louis (PRC-StL), for state leaders in July 2010. In April 2011, the course was expanded to local public health districts. At a pre-course workshop, the Southwest District health officer explained the importance of evidence-based community interventions and the role of the health department in community assessment, interventions, and policy. The course itself was taught to 26 local practitioners by instructors from MSDH and PRC-StL. In May 2011, MSDH repeated the course, taught entirely by MSDH staff, in McComb, Mississippi. MSDH included the EBPH model in grant applications to the Coordinated Chronic Disease Program and the Community Transformation Grants program, both initiated by the Centers for Disease Control and Prevention. MSDH offered $15,000 to $26,000 mini-grants to support the development of evidence-based action planning in such areas as physical activity, joint-use agreements, smoke-free municipalities, and healthy corner stores.

Since May 2011, the Prevention Services Division of the Colorado Department of Public Health and Environment has conducted a pilot project to collaboratively build capacity in EBPH. The 7-step EBPH training approach (3) served as a guide. Epidemiologists and evaluators created practical tools and mini-trainings. One volunteer team focuses on increasing physical activity at the population level while another works to increase screening and referral for pregnancy-related depression during the next 5 years. Both teams completed a community assessment, quantified their health issue, wrote a concise issue statement, rated the evidence on strategies, and prioritized the strategies (steps 1–5). The first team expanded to address obesity prevention and prioritized strategies in April 2012. Division leadership will convene implementation teams to plan and execute the action and evaluation plans for the top-ranked strategies. The team addressing pregnancy-related depression created a logic model using priority strategies, which then informed their state action plan (step 6) that includes SMART (specific, measurable, achievable, relevant, time-bound) objectives and process measures (step 7). At the end of the project in January 2012, this team updated their issue statement and had a portfolio of key documents, tools, and a literature library, intended to sustain capacity in EBPH. This team is implementing the action plan and will semiannually assess the need to repeat any EBPH step.

In 1997, the Prevention Research Center in St. Louis (PRC-StL) developed an on-site training course, Evidence-Based Public Health. To date, the course has reached more than 1,250 practitioners and has been replicated by PRC-StL faculty in 14 US states and 6 other countries. The course aims to “train the trainer” to extend the reach of the course and build local capacity (Box 2). Course evaluations are positive, and more than 90% of attendees have indicated they will use course information in their work (20-23). Course slides are available online, and a textbook is in its second edition (8). Using a similar framework, the University of Illinois at Chicago developed an online EBPH course that includes short quizzes and additional resources.

In 2006, with support from National Institutes of Health, experts from the fields of medicine, nursing, public health, social work, psychology, and library sciences formed the Council for Training in Evidence-Based Behavioral Practice. This group produced a transdisciplinary model of evidence-based practice that facilitates communication and collaboration (Figure) (2,4,5,24) and launched an interactive website to provide web-based training materials and resources to practitioners, researchers, and educators. The EBBP Training Portal, available free with registration, offers 9 modules on both individual and population-based approaches. Users learn how to choose effective interventions, evaluate interventions that are not yet proven, engage in decision making with others, and balance the 3 domains of evidence-based decision making (Figure).

Key elements of EBPH have been summarized (3) as the following:

  • Engaging the community in assessment and decision making;
  • Using data and information systems systematically;
  • Making decisions on the basis of the best available peer-reviewed evidence (both quantitative and qualitative);
  • Applying program planning frameworks (often based in health behavior theory);
  • Conducting sound evaluation; and
  • Disseminating what is learned.

Data for community assessment

As a first step in the EBPH process, a community assessment identifies the health and resource needs, concerns, values, and assets of a community. This assessment allows the intervention (a public health program or policy) to be designed and implemented in a way that increases the likelihood of success and maximizes the benefit to the community. The assessment process engages the community and creates a clear, mutual understanding of where things stand at the outset of the partnership and what should be tracked along the way to determine how an intervention contributed to change.

Public health surveillance is a critical tool for understanding a community’s health issues. Often conducted through national or statewide initiatives, surveillance involves ongoing systematic collection, analysis, and interpretation of quantitative health data. Various health issues and indicators may be tracked, including deaths, acute illnesses and injuries, chronic illnesses and impairments, birth defects, pregnancy outcomes, risk factors for disease, use of health services, and vaccination coverage. National surveillance sources typically provide state-level data, and county-level data have become more readily available in recent years (Box 1). State health department websites can also be sources of data, particularly for vital statistics and hospital discharge data. Additionally, policy tracking and surveillance systems (Box 1) monitor policy interest and action for various health topics (25).

Other data collection methods can be tailored to describe the particular needs of a community, creating new sources of data rather than relying on existing data. Telephone, mail, online, or face-to-face surveys collect self-reported data from community members. Community audits involve detailed counting of factors such as the number of supermarkets, sidewalks, cigarette butts, or health care facilities. For example, the Active Living Research website (www.activelivingresearch.org) provides a collection of community audit tools designed to assess how built and social environments support physical activity.
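As a toy illustration of turning self-reported survey responses into stratified estimates, the sketch below computes prevalence by age group using only the Python standard library. All records, field names, and values are hypothetical, not drawn from any real surveillance system.

```python
from collections import defaultdict

# Hypothetical self-reported survey records from a community assessment;
# the field names and values are illustrative only.
records = [
    {"age_group": "18-44", "smoker": True},
    {"age_group": "18-44", "smoker": False},
    {"age_group": "18-44", "smoker": False},
    {"age_group": "45-64", "smoker": True},
    {"age_group": "45-64", "smoker": True},
    {"age_group": "45-64", "smoker": False},
]

def prevalence_by_stratum(records, stratum_key, indicator_key):
    """Percentage of respondents reporting the indicator, per stratum."""
    counts = defaultdict(lambda: [0, 0])  # stratum -> [positives, total]
    for r in records:
        counts[r[stratum_key]][1] += 1
        if r[indicator_key]:
            counts[r[stratum_key]][0] += 1
    return {s: round(100 * pos / total, 1) for s, (pos, total) in counts.items()}

print(prevalence_by_stratum(records, "age_group", "smoker"))
# → {'18-44': 33.3, '45-64': 66.7}
```

A real assessment would also apply survey weights and report confidence intervals before comparing strata.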

Qualitative methods can help create a more complete picture of a community, using words or pictures to describe the “how” and “why” of an issue. Qualitative data collection can take the form of simple observation, interviews, focus groups, photovoice (still or video images that document community conditions), community forums, or listening sessions. Qualitative data analysis involves the verbatim creation of transcripts, the development of data-sorting categories, and iterative sorting and synthesizing of data to develop sets of common concepts or themes (26).
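The coding-and-sorting step described above can be supported by very simple tooling once analysts have assigned codes to transcript segments. The sketch below tallies how often each code appears across segments; the segments and code names are invented for illustration.

```python
from collections import Counter

# Hypothetical coded transcript segments: analysts have already assigned
# one or more codes to each segment (the code names are invented).
coded_segments = [
    {"id": 1, "codes": ["access_barriers", "cost"]},
    {"id": 2, "codes": ["trust_in_providers"]},
    {"id": 3, "codes": ["access_barriers"]},
    {"id": 4, "codes": ["cost", "transportation"]},
]

def code_frequencies(segments):
    """Count how many segments mention each code, most frequent first."""
    tally = Counter()
    for seg in segments:
        for code in dict.fromkeys(seg["codes"]):  # dedupe, preserve order
            tally[code] += 1
    return tally.most_common()

print(code_frequencies(coded_segments))
```

Frequency counts like these only point analysts toward candidate themes; the iterative synthesis into common concepts remains interpretive work.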

Each of these forms of data collection offers advantages and disadvantages that must be weighed according to the planning team’s expertise, time, and budget. No single source of data is best. Most often data from several sources are needed to fully understand a problem and its best potential solutions. Several planning tools are available (Box 1) to help choose and implement a data collection method.

Selecting evidence

Once health needs are identified through a community assessment, the scientific literature can identify programs and policies that have been effective in addressing those needs. The amount of available evidence can be overwhelming; practitioners can identify the best available evidence by using tools that synthesize, interpret, and evaluate the literature.

Systematic reviews (Box 1) use explicit methods to locate and critically appraise published literature in a specific field or topic area. The products are reports and recommendations that synthesize and summarize the effectiveness of particular interventions, treatments, or services and often include information about their applicability, costs, and implementation barriers. Evidence-based practice guidelines are based on systematic reviews of research-tested interventions and can help practitioners select interventions for implementation. The Guide to Community Preventive Services (the Community Guide), conducted by the Task Force on Community Preventive Services, is one of the most useful sets of reviews for public health interventions (27,28). The Community Guide evaluates evidence related to community or population-based interventions and is intended to complement the Guide to Clinical Preventive Services (systematic reviews of clinical preventive services) (29).

Not all populations, settings, and health issues are represented in evidence-based guidelines and systematic reviews. Furthermore, there are many types of evidence (eg, randomized controlled trials, cohort studies, qualitative research), and the best type of evidence depends on the question being asked. Not all types of evidence (eg, qualitative research) are equally represented in reviews and guidelines. To find evidence tailored to their own context, practitioners may need to search resources that contain original data and analysis. Peer-reviewed research articles, conference proceedings, and technical reports can be found in PubMed (www.ncbi.nlm.nih.gov/pubmed). Maintained by the National Library of Medicine, PubMed is the largest and most widely available bibliographic database; it covers more than 21 million citations in the biomedical literature. This user-friendly site provides tutorials on topics such as the use of Medical Subject Heading (MeSH) terms. Practitioners can freely access abstracts and some full-text articles; practitioners who do not have journal subscriptions can request reprints from authors directly. Economic evaluations provide powerful evidence for weighing the costs and benefits of an intervention, and the Cost-Effectiveness Analysis Registry tool (Box 1) offers a searchable database and links to PubMed abstracts.

The “gray” literature includes government reports, book chapters, conference proceedings, and other materials not found in PubMed. These sources may provide useful information, although readers should interpret non–peer-reviewed literature carefully. The New York Academy of Medicine produces a bimonthly Grey Literature Report (Box 1), and the US government maintains a website (www.science.gov) that searches the databases and websites of federal agencies in a single query. Internet search engines such as Google Scholar (http://scholar.google.com) may also be useful in finding both peer-reviewed articles and gray literature.

Program-planning frameworks

Program-planning frameworks provide structure and organization for the planning process. Commonly used models include PRECEDE-PROCEED (30), Intervention Mapping (31), and Mobilizing for Action through Planning and Partnerships (Box 1). Public health interventions grounded in health behavior theory often prove to be more effective than those lacking a theoretical base, because these theories conceptualize the mechanisms that underlie behavior change (32,33). Developed as a free resource for public health practitioners, the National Cancer Institute’s guide Theory at a Glance concisely summarizes the most commonly used theories, such as the ecological model, the health belief model, and social cognitive theory, and it uses 2 planning models (PRECEDE-PROCEED and social marketing) to explain how to incorporate theory in program planning, implementation, and evaluation (34). Logic models are an important planning tool, particularly for incorporating the concepts of health-behavior theories. They visually depict the relationship between program activities and their intended short-term objectives and long-term goals. The first 2 chapters of the Community Tool Box explain how to develop logic models, provide overviews of several program-planning models, and include real-world examples (Box 1).
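A logic model can also live alongside a program plan as plain structured data, which keeps the activity-to-outcome chain explicit and easy to update. The sketch below uses invented entries for a hypothetical physical-activity program.

```python
# A logic model as plain data: each column is a list, and the model reads
# left to right from inputs to long-term goals. All entries are invented.
logic_model = {
    "inputs":     ["staff time", "mini-grant funding", "community partners"],
    "activities": ["walking-group sessions", "local media campaign"],
    "outputs":    ["sessions held", "residents reached"],
    "short_term": ["increased physical-activity knowledge"],
    "long_term":  ["reduced obesity prevalence"],
}

def describe(model):
    """Render the model one column per line for a planning document."""
    return "\n".join(f"{col}: {', '.join(items)}" for col, items in model.items())

print(describe(logic_model))
```

Each column can then be mapped to evaluation measures: outputs to process indicators, short-term objectives to impact measures, long-term goals to outcome measures.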

Evaluation and dissemination

Evaluation answers questions about program needs, implementation, and outcomes (35). Ideally, evaluation begins when a community assessment is initiated and continues across the life of a program to ensure proper implementation. Four basic types of evaluation, using both quantitative and qualitative methods, can assess progress toward program objectives. Formative evaluation is conducted before program initiation; the goal is to determine whether an element of the intervention (eg, materials, messages) is feasible, appropriate, and meaningful for the target population (36). Process evaluation assesses the way a program is being implemented, rather than the effectiveness of that program (36) (eg, counting program attendees and examining how they differ from those not attending).

Impact evaluation assesses the extent to which program objectives are being met and may reflect changes in knowledge, attitudes, behavior, or other intermediate outcomes. Ideally, practitioners should use measures that have been tested for validity (the extent to which a measure accurately captures what it is intended to capture) and reliability (the likelihood that the instrument will get the same result time after time) elsewhere. The Behavioral Risk Factor Surveillance System (BRFSS) is the largest telephone health survey in the world, and its website offers a searchable archive of survey questions since the survey’s inception in 1984 (Box 1). New survey questions receive a technical review, cognitive testing, and field testing before inclusion. A 2001 review summarized reliability and validity studies of the BRFSS (37).
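Test-retest reliability of a survey question is often summarized with Cohen's kappa, a chance-corrected measure of agreement between two administrations. The sketch below implements the standard formula; the two sets of responses are hypothetical.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two sets of categorical ratings."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical test-retest answers ("yes"/"no") from 10 respondents.
t1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "no", "yes", "no"]
t2 = ["yes", "yes", "no", "no", "yes", "no", "no", "no", "yes", "yes"]
print(round(cohens_kappa(t1, t2), 2))  # → 0.6
```

Unlike raw percent agreement (here 80%), kappa discounts the agreement expected by chance alone, which is why it is preferred in reliability studies.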

Outcome evaluation provides long-term feedback on changes in health status, morbidity, mortality, or quality of life that can be attributed to an intervention. Because it takes so long to observe effects on health outcomes and because changes in these outcomes are influenced by factors outside the scope of the intervention itself, this type of evaluation benefits from more rigorous forms of quantitative evaluation, such as experimental or quasi-experimental rather than observational study designs.
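One common quasi-experimental estimator for outcome evaluation is difference-in-differences: the change in the outcome for the intervention community minus the change in a comparison community over the same period. A minimal sketch with invented prevalence figures:

```python
def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Estimated intervention effect, net of the background trend."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical smoking prevalence (%) before and after a community program.
effect = diff_in_diff(treat_pre=24.0, treat_post=19.0,
                      control_pre=23.5, control_post=22.0)
print(effect)  # → -3.5 percentage points
```

The estimate is only credible under the parallel-trends assumption: absent the intervention, both communities would have changed by the same amount.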

The Centers for Disease Control and Prevention (CDC) Framework for Program Evaluation, developed in 1999, identifies a 6-step process for summarizing and organizing the essential elements of evaluation (38). The related CDC website (Box 1) maintains links to framework-based materials, step-by-step manuals, and other evaluation resources. Within a detailed outline of the CDC framework’s steps, the Community Tool Box also provides tools and examples (Box 1).

After an evaluation, the dissemination of findings is often overlooked, but practitioners have an implied obligation to share results with stakeholders, decision makers, and community members. Often these are people who participated in data collection and can make use of the evaluation findings. Dissemination may take the form of formal written reports, oral presentations, publications in academic journals, or placement of information in newsletters or on websites.

An increasing volume of scientific evidence is now at the fingertips of public health practitioners. Putting this evidence to work can help practitioners meet demands for a systematic approach to public health problem solving that yields measurable outcomes. Practitioners need skills, knowledge, support, and time to implement evidence-based policies and programs. Many tools exist to help practitioners efficiently incorporate the best available evidence and strategies into their work. Improvements in population health are most likely when these tools are applied in light of local context, evaluated rigorously, and shared with researchers, practitioners, and other stakeholders.

Preparation of this article was supported by the National Association of Chronic Disease Directors; cooperative agreement no. U48/DP001903 from CDC, Prevention Research Centers Program; CDC grant no. 5R18DP001139-02, Improving Public Health Practice Through Translation Research; and National Institutes of Health Office of Behavioral and Social Sciences Research contract N01-LM-6-3512, Resources for Training in Evidence-Based Behavioral Practice.

We thank Dr Elizabeth Baker, Dr Kathleen Gillespie, and the late Dr Terry Leet for their roles in developing the PRC-StL EBPH course. We thank the Colorado pilot portfolio teams Erik Aakko, Linda Archer, Gretchen Armijo, Mandy Bakulski, Renee Calanan, Julie Davis, Julie Graves, Indira Gujral, Rebecca Heck, Ashley Juhl, Kyle Legleitner, Flora Martinez, Kristin McDermott, Jessica Osborne, Kerry Thomson, Jason Vahling, and Stephanie Walton. We acknowledge the Mississippi EBPH team, Dr Victor Sutton, Dr Rebecca James, Dr Thomas Dobbs, Cassandra Dove, and State Health Officer Dr Mary Currier, for its commitment to the pilot and implementation of EBPH. We also thank Molly Ferguson, MPH (coordinator), and Drs Ed Mullen, Robin Newhouse, Steve Persell, and Jason Satterfield, members of the Council on Evidence-Based Behavioral Practice.

Corresponding Author: Ross C. Brownson, PhD, Washington University in St. Louis, Kingshighway Building, 660 S Euclid, Campus Box 8109, St. Louis, MO 63110. Telephone: 314-362-9641. E-mail: [email protected] .

Author Affiliations: Julie A. Jacobs, Prevention Research Center in St. Louis, Brown School, Washington University in St. Louis, St. Louis, Missouri; Ellen Jones, School of Health Related Professions, University of Mississippi Medical Center, Jackson, Mississippi; Barbara A. Gabella, Colorado Department of Public Health and Environment, Denver, Colorado; Bonnie Spring, Northwestern University Feinberg School of Medicine, Chicago, Illinois.

1. Kohatsu ND, Robinson JG, Torner JC. Evidence-based public health: an evolving concept. Am J Prev Med 2004;27(5):417-21.
2. Satterfield JM, Spring B, Brownson RC, Mullen EJ, Newhouse RP, Walker BB, et al. Toward a transdisciplinary model of evidence-based practice. Milbank Q 2009;87(2):368-90.
3. Brownson RC, Fielding JE, Maylahn CM. Evidence-based public health: a fundamental concept for public health practice. Annu Rev Public Health 2009;30:175-201.
4. Spring B, Hitchcock K. Evidence-based practice. In: Weiner IB, Craighead WE, editors. Corsini encyclopedia of psychology. 4th edition. New York (NY): Wiley; 2009. p. 603-7.
5. Spring B, Neville K, Russell SW. Evidence-based behavioral practice. In: Encyclopedia of human behavior. 2nd edition. New York (NY): Elsevier; 2012.
6. Gebbie KM. Public health certification. Annu Rev Public Health 2009;30:203-10.
7. Turnock BJ. Public health: what it is and how it works. Sudbury (MA): Jones and Bartlett; 2009.
8. Brownson RC, Baker EA, Leet TL, Gillespie KN, True WR. Evidence-based public health. 2nd edition. New York (NY): Oxford University Press; 2011.
9. Dodson EA, Baker EA, Brownson RC. Use of evidence-based interventions in state health departments: a qualitative assessment of barriers and solutions. J Public Health Manag Pract 2010;16(6):E9-15.
10. Baker EA, Brownson RC, Dreisinger M, McIntosh LD, Karamehic-Muratovic A. Examining the role of training in evidence-based public health: a qualitative study. Health Promot Pract 2009;10(3):342-8.
11. Brownson RC, Ballew P, Dieffenderfer B, Haire-Joshu D, Heath GW, Kreuter MW, et al. Evidence-based interventions to promote physical activity: what contributes to dissemination by state health departments. Am J Prev Med 2007;33(1 Suppl):S66-73.
12. Jacobs JA, Dodson EA, Baker EA, Deshpande AD, Brownson RC. Barriers to evidence-based decision making in public health: a national survey of chronic disease practitioners. Public Health Rep 2010;125(5):736-42.
13. Healthy People 2020 framework: the vision, mission and goals of Healthy People 2020. US Department of Health and Human Services, Office of Disease Prevention and Health Promotion. http://www.healthypeople.gov/2020/Consortium/HP2020Framework.pdf. Accessed March 7, 2012.
14. American Recovery and Reinvestment Act of 2009, Pub L No 111-5, 123 Stat 233 (2009).
15. Patient Protection and Affordable Care Act of 2010, Pub L No 111-148, 124 Stat 119 (2010).
16. Brownson RC, Ballew P, Brown KL, Elliott MB, Haire-Joshu D, Heath GW, et al. The effect of disseminating evidence-based interventions that promote physical activity to health departments. Am J Public Health 2007;97(10):1900-7.
17. Core competencies for public health professionals. Council on Linkages Between Academia and Public Health Practice. http://www.phf.org/resourcestools/Pages/Core_Public_Health_Competencies.aspx. Accessed March 7, 2012.
18. Slonim A, Wheeler FC, Quinlan KM, Smith SM. Designing competencies for chronic disease practice. Prev Chronic Dis 2010;7(2). http://www.cdc.gov/pcd/issues/2010/mar/08_0114.htm. Accessed March 7, 2012.
19. Standards and measures. Public Health Accreditation Board. http://www.phaboard.org/accreditation-process/public-health-department-standards-and-measures/. Accessed March 7, 2012.
20. Brownson RC, Diem G, Grabauskas V, Legetic B, Poternkina R, Shatchkute A, et al. Training practitioners in evidence-based chronic disease prevention for global health. Promot Educ 2007;14(3):159-63.
21. O’Neall MA, Brownson RC. Teaching evidence-based public health to public health practitioners. Ann Epidemiol 2005;15(7):540-4.
22. Dreisinger M, Leet TL, Baker EA, Gillespie KN, Haas B, Brownson RC. Improving the public health workforce: evaluation of a training course to enhance evidence-based decision making. J Public Health Manag Pract 2008;14(2):138-43.
23. Franks AL, Brownson RC, Bryant C, Brown KM, Hooker SP, Pluto DM, et al. Prevention Research Centers: contributions to updating the public health workforce through training. Prev Chronic Dis 2005;2(2). http://www.cdc.gov/pcd/issues/2005/apr/04_0139.htm. Accessed March 7, 2012.
24. Newhouse RP, Spring B. Interdisciplinary evidence-based practice: moving from silos to synergy. Nurs Outlook 2010;58(6):309-17.
25. Chriqui JF, O’Connor JC, Chaloupka FJ. What gets measured, gets changed: evaluating law and policy for maximum impact. J Law Med Ethics 2011;39(Suppl 1):21-6.
26. Hesse-Biber S, Leavy P. The practice of qualitative research. Thousand Oaks (CA): Sage; 2006.
27. Mullen PD, Ramirez G. The promise and pitfalls of systematic reviews. Annu Rev Public Health 2006;27:81-102.
28. Zaza S, Briss PA, Harris KW, editors. The guide to community preventive services: what works to promote health? New York (NY): Oxford University Press; 2005.
29. U.S. Preventive Services Task Force. http://www.uspreventiveservicestaskforce.org/. Accessed March 9, 2012.
30. Green LW, Kreuter MW. Health promotion planning: an educational and ecological approach. 4th edition. New York (NY): McGraw-Hill; 2004.
31. Bartholomew LK, Parcel GS, Kok G, Gottlieb NH, Fernandez ME. Planning health promotion programs: an Intervention Mapping approach. 3rd edition. San Francisco (CA): Jossey-Bass; 2011.
32. Glanz K, Bishop DB. The role of behavioral science theory in the development and implementation of public health interventions. Annu Rev Public Health 2010;31:399-418.
33. Glanz K, Rimer BK, Viswanath K. Health behavior and health education: theory, research, and practice. 4th edition. San Francisco (CA): Jossey-Bass; 2008.
34. Glanz K, Rimer BK. Theory at a glance: a guide for health promotion practice. National Cancer Institute, National Institutes of Health; 2005. (NIH publication 05-3896). http://www.cancer.gov/cancertopics/cancerlibrary/theory.pdf. Accessed March 7, 2012.
35. Shadish WR. The common threads in program evaluation. Prev Chronic Dis 2006;3(1). http://www.cdc.gov/pcd/issues/2006/jan/05_0166.htm. Accessed March 7, 2012.
36. Thompson N, Kegler M, Holtgrave D. Program evaluation. In: Crosby RA, DiClemente RJ, Salazar LF, editors. Research methods in health promotion. San Francisco (CA): Jossey-Bass; 2006. p. 199-225.
37. Nelson DE, Holtzman D, Bolen J, Stanwyck CA, Mack KA. Reliability and validity of measures from the Behavioral Risk Factor Surveillance System (BRFSS). Soz Praventivmed 2001;46(Suppl 1):S3-42.
38. Centers for Disease Control and Prevention. Framework for program evaluation in public health. MMWR Recomm Rep 1999;48(RR-11):1-40. http://www.cdc.gov/mmwr/preview/mmwrhtml/rr4811a1.htm. Accessed March 7, 2012.


Systematic review

  • Open access
  • Published: 27 January 2022

Implementability of healthcare interventions: an overview of reviews and development of a conceptual framework

  • Marlena Klaic   ORCID: orcid.org/0000-0003-2328-0503 1 , 2 ,
  • Suzanne Kapp   ORCID: orcid.org/0000-0002-5438-8384 3 ,
  • Peter Hudson   ORCID: orcid.org/0000-0001-5891-8197 1 , 4 , 5 ,
  • Wendy Chapman   ORCID: orcid.org/0000-0001-8702-4483 6 ,
  • Linda Denehy   ORCID: orcid.org/0000-0002-2926-8436 1 , 7 ,
  • David Story   ORCID: orcid.org/0000-0002-6479-1310 8 , 9 &
  • Jill J. Francis   ORCID: orcid.org/0000-0001-5784-8895 1 , 10 , 11  

Implementation Science volume  17 , Article number:  10 ( 2022 ) Cite this article

33k Accesses

95 Citations

53 Altmetric

Metrics details

Implementation research may play an important role in reducing research waste by identifying strategies that support translation of evidence into practice. Implementation of healthcare interventions is influenced by multiple factors including the organisational context, implementation strategies and features of the intervention as perceived by people delivering and receiving the intervention. Recently, concepts relating to perceived features of interventions have been gaining traction in published literature, namely, acceptability, fidelity, feasibility, scalability and sustainability. These concepts may influence uptake of healthcare interventions, yet there seems to be little consensus about their nature and impact. The aim of this paper is to develop a testable conceptual framework of implementability of healthcare interventions that includes these five concepts.

A multifaceted approach was used to develop and refine a conceptual framework of implementability of healthcare interventions. An overview of reviews identified reviews published between January 2000 and March 2021 that focused on at least one of the five concepts in relation to a healthcare intervention. These findings informed the development of a preliminary framework of implementability of healthcare interventions which was presented to a panel of experts. A nominal group process was used to critique, refine and agree on a final framework.

A total of 252 publications were included in the overview of reviews. Of these, 32% were found to be feasible, 4% reported sustainable changes in practice and 9% were scaled up to other populations and/or settings. The expert panel proposed that scalability and sustainability of a healthcare intervention are dependent on its acceptability, fidelity and feasibility. Furthermore, acceptability, fidelity and feasibility require re-evaluation over time and as the intervention is developed and then implemented in different settings or with different populations. The final agreed framework of implementability provides the basis for a chronological, iterative approach to planning for wide-scale, long-term implementation of healthcare interventions.

Conclusions

We recommend that researchers consider the factors acceptability, fidelity and feasibility (proposed to influence sustainability and scalability) during the preliminary phases of intervention development, evaluation and implementation, and iteratively check these factors in different settings and over time.


Contributions to the literature

Reviews report relatively few healthcare interventions that are sustained beyond the initial implementation phase or scaled to different populations or settings.

Acceptability, fidelity and feasibility may influence scalability and sustainability of a healthcare intervention.

We have developed a testable conceptual framework that can be used to prospectively and iteratively guide the implementability of healthcare interventions.

Prospective identification of factors that influence scalability and sustainability of a healthcare intervention is critical to avoid or reduce research waste.

Implementation science aims to identify and address care gaps, support practice change and enhance quality and equity of health care. Building a robust and generalizable evidence base to inform implementation practice is the objective of implementation research. Implementation research can also play a critical role in efforts to reduce research waste, in that it can provide evidence about the strategies that are effective for translating the findings of clinical research into enhanced healthcare practice and thus improved health outcomes [ 1 , 2 , 3 ]. Identifying the factors important for translation of an effective intervention or innovation from the research setting to routine clinical practice can arguably contribute to reducing the estimated annual US$85 billion, globally, wasted in health research [ 1 , 2 , 4 ].

Most implementation investigations focus on one of two approaches to achieving change. The first centres on implementation activities, which consist of either “top down” processes (e.g. governance arrangements, national policies and guidelines, continuing medical education, incentivisation systems) [ 5 , 6 , 7 ] or more granular “bottom-up” processes that consider the views of healthcare workers: their perceived barriers and enablers to specific elements of practice change at the level of healthcare teams and individual clinicians [ 8 , 9 , 10 , 11 ]. The second approach considers features of healthcare contexts (including organisational factors and the wider health system context) that might interact with the implementation activities to enable or impede practice change [ 12 , 13 , 14 , 15 , 16 ].

The current paper considers a third lever for achieving implementation: the perceived features of healthcare interventions themselves (in addition to effectiveness). An early theory, Diffusion of Innovations [ 17 ], identified six features of innovations that make their adoption more or less likely, namely, relative advantage, compatibility with the existing system, complexity, trialability, potential for reinvention and observed effects, where trialability refers to being able to test the innovation or intervention on a small scale, such as a pilot study. The more recent Consolidated Framework of Implementation Research (CFIR) proposed seven attributes of interventions, namely, intervention source, evidence strength and quality, relative advantage, adaptability, trialability, complexity and design quality and packaging, which refers to the presentation of the intervention, such as how it is bundled and user accessibility [ 12 ]. A recently published review identified 28 implementation frameworks and models, including the CFIR, which were synthesised into a number of core phases and components [ 18 ]. The authors suggest there is a need for an overarching framework that can guide researchers from intervention development to sustainable practice change.

Uptake of an intervention by both providers and recipients also depends crucially on their perceptions of the intervention. The COVID pandemic of 2020–2021 exemplifies this point. Even though approved vaccine interventions have substantial evidence of a positive benefit-to-risk ratio, the speed of uptake in many countries has been dependent on the perceptions of politicians, service providers and members of the public regarding the necessity, urgency and benefits of vaccine programs. It seems that, independent of the objective features of an intervention, stakeholder perceptions about the intervention will radically influence implementation and uptake at many levels. Furthermore, these perceptions may change over time and during roll-out of an intervention. We refer to these perceptions as views about the “implementability” of an intervention. We define “implementability” as the likelihood that an intervention will be adopted into routine practice and into health consumer behaviours across settings and over time. Several concepts related to implementability of healthcare interventions are gaining traction in the implementation science literature and appear to be primarily focused on the earlier stages of intervention development or latter stages of evaluation. These are acceptability, fidelity, feasibility, scalability and sustainability. A search of the health sciences literature (conducted 25th February 2021) for studies published in the last 20 years containing one or more of these five concepts in the title, revealed that the annual frequency of usage has steadily increased. Figure 1 shows that, from a relatively low baseline in the year 2000, these concepts appeared in titles in 2020, respectively, > 900, > 450, > 4500, > 350 and > 650 times.

Figure 1. Annual frequency of the five key concepts in publications indexed to PubMed

Where there was an explicit rationale in the included studies, authors noted that the concept under investigation was likely to influence engagement, adoption and ongoing use. For example, a systematic review exploring videoconferencing in an orthopaedic setting [ 19 ] found that “acceptability of service users (both patients and clinicians) is a key factor for the uptake of telemedicine in clinical practice” (p.184). Hence, it is plausible that these concepts, individually and collectively, influence intervention implementability. To identify whether other implementation-related concepts were also appearing in the literature, we conducted a further illustrative search of literature published in the last 20 years (conducted on 29th August 2021) relating to specific interventions (using the phrase “of [intervention]”). We selected three healthcare interventions for which there were published reviews including one or more of the five concepts. The majority (> 90%) of reviews focused on clinical effectiveness or evaluation of outcomes. No additional concepts related to implementability, other than the five considered in our proposed framework, were evident.

There seems to be little consensus about the nature of these concepts, appropriate measurement strategies and how they might be related to one another. Without consistent definitions or reliable measurement approaches, it is not possible to test assumptions or predictions about whether these features indeed influence the implementability of healthcare interventions.

Implementability has been previously explored in the published literature, but this has focused on the implementability of clinical practice guidelines [ 20 , 21 , 22 , 23 ] or, more recently, the implementability of late-phase clinical trials [ 24 ]. The current paper focuses more broadly on prospective implementability of healthcare interventions, particularly at the early stages of development and evaluation, during scale-up, and over time.

The aim of this paper is to report the development of a testable conceptual framework of implementability that includes acceptability, fidelity, feasibility, scalability and sustainability.

A multifaceted approach was used to develop and refine the framework of implementability of healthcare interventions. We use the World Health Organization definition of healthcare interventions as “an act performed for, with or on behalf of a person or population whose purpose is to assess, improve, maintain, promote or modify health, functioning or health conditions.” [ 25 ]

Step 1: Overview of reviews

A preliminary exploratory search indicated a large volume of systematic reviews on the aforementioned five concepts within published literature on healthcare interventions. We therefore decided to conduct an overview of reviews [ 26 ], to answer the following questions:

Have the five concepts (acceptability, fidelity, feasibility, scalability and sustainability) been defined, operationalised and/or theorised in systematic reviews (SR) on healthcare interventions?

Have the five concepts been combined in any publications, frameworks or models used in published literature on healthcare interventions?

Search strategy

Systematic reviews published from January 2000 to March 2021 were identified and retrieved by one author (MK). Searches were structured by combining relevant review filters (Additional file 1: search strategy) with the appearance of the truncated term for each concept (“acceptab*”, “fidelity”, “feasib*”, “sustainab*” and “scalab*”) in article titles. Restricting these terms to article titles ensured that each included review had a primary focus on the concept. The term feasibility was combined with “or process evaluat*”, but papers were considered for inclusion only if the term feasibility appeared in the abstract. We did not include a synonym search because we were specifically interested in the use of these particular concept terms in the literature.

Multiple databases were searched using OVID (Medline and Embase) and EBSCO (PsycINFO) and restricted to publications in English.
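The title-restricted search logic above can be sketched as follows. The Ovid-style field code `.ti.` and the `review_filter` string here are illustrative assumptions for the sketch; the exact filters used are given in Additional file 1.

```python
# Illustrative assembly of the title-field queries described in the text.
# ".ti." restricts a term to article titles; "*" truncates. Field codes
# vary by database, so treat the syntax as an assumption, not the
# authors' exact search strings.
concepts = ["acceptab*", "fidelity", "feasib*", "sustainab*", "scalab*"]
review_filter = "(systematic review or meta-analys*).ti,ab."  # assumed placeholder

queries = {c: f"{c}.ti. AND {review_filter}" for c in concepts}
# The feasibility search additionally allowed "process evaluat*":
queries["feasib*"] = f"(feasib* or process evaluat*).ti. AND {review_filter}"

for concept, query in queries.items():
    print(f"{concept:12} {query}")
```

Keeping the queries in a dictionary keyed by concept makes it straightforward to log per-concept hit counts, as reported in the results below.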

Screening citations

Duplicates were removed using the deduplicate option in the OVID and EBSCO search engines and the remaining citations were imported into EndNote X9 [ 27 ] where further duplicates were manually identified and removed. Reviews were considered eligible for inclusion if they met the criteria detailed in Table 1 . Screening was a two-step process, commencing with an initial review of the abstracts by one author (MK) to determine eligibility. If there was insufficient information in the abstract to make a decision regarding inclusion, the full paper was retrieved, and the methods section was screened. For example, drug-development reviews with acceptability or feasibility in the title but a focus on cost-effectiveness acceptability curves were excluded.

Full-text review

The full articles for all citations that met the inclusion criteria were retrieved by one author (MK), with an additional author (SK) independently reviewing a random selection of 10% of the retrieved papers. Data were extracted using a form developed by the research group and included whether and how each concept was defined, how it was operationalised or measured, use or development of a framework, and the overall outcome of the review (i.e. had the healthcare intervention achieved acceptability, fidelity, feasibility, scalability or sustainability?). A review was considered to have used a framework if the framework was named in the methods and data were synthesised using its components. Primary studies included in the reviews were not individually reviewed, as this was considered outside the scope of this overview. Data extracted from the reviews were summarised descriptively.

Assessment of quality

Assessment of quality was not conducted as the aim of the study was to explore how the implementability concepts were defined and conceptualised, rather than the quality of the information related to the health intervention being delivered.

Step 2: Development of the preliminary framework

A preliminary framework (Fig. 2 ) was developed in parallel with the overview of reviews, integrating the five concepts that appeared to be mostly investigated in isolation in the published reviews on healthcare interventions, and based on the research experiences and theory-development expertise of two authors (JF and MK). This framework was then presented to the group of experts (co-authors), as described below.

Figure 2. Initial framework of implementability of healthcare interventions

Step 3: Modified Nominal Group Technique

The Nominal Group Technique (NGT) is a facilitator-led, structured process for obtaining information and arriving at a decision with a target group who have some association or experience with the topic [ 28 ]. Various adaptations of the NGT have been used in conceptual studies that focus on framework development [ 29 , 30 , 31 , 32 , 33 ]. Recently, an additional pre-meeting, information-giving step has been suggested to enable more time for participants to consider their contribution to the topic [ 34 , 35 ]. The adapted NGT process utilised in this study was as follows: (i) identification of group members, to include experts with depth and diversity of experience [ 36 ]; all authors on this paper were invited by e-mail to attend an online group meeting, having been purposively identified at the start of this study for their knowledge and expertise in the fields of implementation science, theory development, biomedical informatics and clinical research across a broad range of fields; (ii) provision of information prior to the meeting, including a PowerPoint presentation, findings of the overview of reviews and objectives of the meeting; five authors with extensive clinical research backgrounds were asked to prepare a clinical scenario on one concept for sharing at the group meeting, the intention being to discuss the fit between a real-world example of a study that explored one of the concepts and the proposed framework; (iii) meeting conducted online and facilitated by one author (JF), who has extensive experience in consensus panel processes; following presentation of the meeting materials, including the preliminary framework, group members were instructed to silently consider the framework and generate ideas and critiques; (iv) round-robin process with participants sharing their ideas and critiques; (v) clarification process in which participants shared their clinical scenario on a concept and discussed its fit with components of the initial framework; and (vi) voting and/or agreement on the preliminary framework.

The database searches initially identified a total of 839 references across all five concepts (acceptability = 224, fidelity = 281, feasibility = 253, scalability = 37 and sustainability = 44). Following removal of 317 duplicates and screening of titles and abstracts, 301 full texts were sought for retrieval. Two were not retrieved as they were not available in English. Of the remaining 299 reports assessed for eligibility, 43 were excluded due to being unrelated to the concept (e.g. fidelity of DNA) and four were excluded as they focused on psychometric testing of a measure rather than a health intervention. The final number of publications included in this review was 252, of which 22 papers discussed more than one concept. As we were considering the concepts separately, these 22 were treated as separate investigations, resulting in a total of 274 investigations (Additional file 2 : reviews included in the overview, consisting of acceptability = 132, fidelity = 41, feasibility = 65, scalability = 11 and sustainability = 25), with the stages of the search process presented in Fig. 3 .
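The screening flow reported above can be cross-checked arithmetically; every count below appears in the text.

```python
# Cross-check of the review screening flow (all counts from the text).
identified = 224 + 281 + 253 + 37 + 44          # per-concept database hits
assert identified == 839

duplicates_removed = 317
screened = identified - duplicates_removed       # titles/abstracts screened
full_texts_sought = 301
assessed = full_texts_sought - 2                 # 2 papers not available in English
assert assessed == 299

included = assessed - 43 - 4                     # off-concept; psychometric-only
assert included == 252

investigations = included + 22                   # 22 papers covered two concepts
assert investigations == 274 == 132 + 41 + 65 + 11 + 25   # per-concept totals
print(screened, included, investigations)        # prints "522 252 274"
```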

Figure 3. PRISMA [ 37 ] flow chart of included reviews for search completed in March 2021

Characteristics of the included studies

Of the 252 studies in the overview, 30% included a meta-analysis and 19% used a mixed-methods approach incorporating both quantitative and qualitative data from empirical research. The healthcare interventions under review were broad ranging and included psychological/psychiatric/psychosocial interventions (20%), technology-based interventions (5%), physical activities (6%), and pharmacological and alternative interventions. The number of studies included in the reviews ranged from 4 to 296, and the majority of reviews (63%) had no setting exclusions (or did not report the setting). Acceptability and fidelity were assessed using highly variable measurement approaches, so it was not appropriate to summarise those findings across reviews. The intervention under investigation was reported to be feasible in 32 of the 65 feasibility reviews (49%), sustainable in 1 of the 25 sustainability reviews (4%) and successfully scaled up in 1 of the 11 scalability reviews (9%) (Additional file 3).

Definition and measurement (question a) and frameworks (question b) for the five concepts

A total of 1096 items of information were extracted from the 274 investigations by the first reviewer (MK). The second reviewer (SK) double-extracted information from 10% of the reviews (32 papers), and the two extractions were compared to assess reliability.

Agreement was 100% for the definition of the concept and the outcome of the review (e.g. whether acceptability, fidelity, feasibility, scalability or sustainability was or was not achieved). Agreement was 96% for use of a framework, as one reviewer failed to identify a framework in one of the 32 double-reviewed studies, and 89% for the constructs measured, as the reviewers did not identify the same constructs in two papers. The two reviewers discussed these differences and agreed on a decision.
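A double-extraction check of this kind amounts to simple percent agreement between two reviewers' codings. A minimal sketch with hypothetical data (the review reports percentages, not raw codings):

```python
def agreement(coder_a, coder_b):
    """Percent agreement: share of items coded identically by two reviewers."""
    pairs = list(zip(coder_a, coder_b))
    return sum(a == b for a, b in pairs) / len(pairs)

# Hypothetical codings of "was a framework used?" for 20 double-reviewed
# papers, with one discrepancy between the reviewers:
a = ["framework"] * 6 + ["none"] * 14
b = ["framework"] * 5 + ["none"] * 15
print(f"{agreement(a, b):.0%}")  # prints "95%"
```

Percent agreement is easy to interpret but does not correct for chance agreement; chance-corrected statistics such as Cohen's kappa are a common alternative when categories are imbalanced.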

Twenty-two publications included two concepts in their review, of which 20 considered acceptability and feasibility [ 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 ] in exploring implementation of a healthcare intervention, and two considered scalability and sustainability [ 58 , 59 ].

Acceptability

Of the reviews exploring acceptability of a healthcare intervention, only 13 provided an a priori definition and they largely focused on whether those receiving a healthcare intervention found it to be “appropriate” and “fair” and “reasonable” [ 42 , 48 , 60 , 61 , 62 , 63 , 64 , 65 , 66 ]. Four reviews considered acceptability from the perspective of those delivering a healthcare intervention [ 61 , 62 , 65 , 66 ].

The majority of reviews measured one variable in evaluating acceptability, and this was predominantly either dropout rates (33%) or user perceptions of the intervention (30%). Twenty-three reviews measured three or more component variables, most commonly a combination of participant dropouts, recruitment rates, perceptions of users such as satisfaction measures, adherence to the study protocol and adverse events (3%).

Six reviews used a framework to define and measure acceptability, as described in Table 2 . The Theoretical Framework of Acceptability [ 67 ] was used in four of these reviews and defined acceptability as “a multi-faceted construct that reflects the extent to which people delivering or receiving a healthcare intervention consider it to be appropriate, based on anticipated or experiential cognitive and emotional responses to the intervention”. (p.8) This framework consists of seven constructs related to acceptability including affective attitude, burden, perceived effectiveness, ethicality, intervention coherence, opportunity costs and self-efficacy.

Fidelity

Of the 41 reviews exploring fidelity of a healthcare intervention, 35 included an a priori definition. Almost all reflected a dictionary definition of fidelity [ 68 ], with terms such as “integrity”, “delivered as intended” and “accuracy and consistency” included in their descriptions.

Thirty-six reviews measured four or more components as part of their assessment of fidelity of a healthcare intervention including adherence to the protocol (76%), dose delivered and received (76%) and provider training (49%). Thirty-four publications on fidelity of a healthcare intervention explicitly used a framework to guide their review, as described in Table 3 . The framework used in 19 of these cases was from the National Institute of Health Behavior Change Consortium (NIHBCC) [ 69 ] which includes measures of study design, training, delivery of treatment, receipt of treatment and enactment of treatment. The NIHBCC describe fidelity as “the methodological strategies used to monitor and enhance the reliability and validity of behaviour interventions”. (p.443)

Feasibility

Thirteen of 65 reviews defined feasibility using terms such as “practicality”, “ease of delivery” and “possible to undertake”. Seven of these 13 papers noted the importance of context and broader system factors when considering the “possibility of what could be done”, such as physical space, ongoing funding and political support [ 39 , 70 , 71 , 72 , 73 , 74 , 75 ].

The two most frequently measured constructs within the concept of feasibility were adherence to the study protocol (34%) (i.e. the same as the operationalisation of fidelity) and perceptions of key stakeholders (33%), including those providing and receiving the healthcare intervention (23%). There was considerable overlap between feasibility and acceptability, with 22% of feasibility reviews incorporating acceptability as a construct to be measured within feasibility.

Five reviews referred to a feasibility framework (Table 4 ), with the most commonly used being Bowen et al.’s publication on designing feasibility studies [ 76 ]. This highly cited publication does not provide a definition of feasibility but identifies eight areas of focus that should be addressed in feasibility studies: acceptability, demand, implementation, practicality, adaptation, integration, expansion and limited efficacy testing. Bowen et al. define implementation as the extent to which the intervention can be delivered as planned, which is synonymous with fidelity.

Almost a third (32%) of the reviews found the healthcare intervention to be feasible; the remainder either found it not feasible or were unable to establish feasibility due to lack of information in the empirical studies.

Scalability

Healthcare interventions may be scaled up to different populations and/or settings. Eleven reviews explored scalability of a healthcare intervention with nine presenting an a priori definition including terms such as “deliberate efforts” and “expanding or increasing the impact”. Six of these definitions also included the need for a healthcare intervention to be proven effective prior to scaling up.

Five reviews measured four or more constructs with organisation, community and sociocultural factors being the most frequently reported measures (45%) followed by resources, economic viability (18%) and adaptation of the intervention (18%). Only one of the reviews definitively found that the healthcare intervention had been successfully expanded across different settings or populations [ 77 ]. The majority were unable to reach a conclusion due to lack of data in the included studies.

Four different scalability frameworks (Table 5 ) were described and used in four of the 11 reviews [ 59 , 78 , 79 , 80 ]. These were the World Health Organisation ExpandNet Scaling-Up Framework [ 81 ], the Intervention Scalability Assessment Tool (ISAT) [ 82 ], the Assess, Innovate, Develop, Engage, Devolve (AIDED) model [ 83 ] and the Non-adoption, Abandonment, Scale-up, Spread, Sustainability (NASSS) framework [ 84 ]. Commonalities across the four frameworks include the intervention, the strategic/political context to support scale-up and resources to support and sustain the scale-up process. Scalability was defined in a similar way within these frameworks as the capacity or ease with which an intervention or innovation that had been proven effective could be expanded to other settings or populations [ 81 , 82 ].

Sustainability

Of the 25 reviews on this concept, 15 included a definition with common use of terms such as “continuation” and “extended period of time”. Seven of these definitions also included the notion that sustainability is about the maintenance of the intervention or program after initial funding or implementation efforts have ceased [ 58 , 85 , 86 , 87 , 88 , 89 , 90 ].

Constructs typically measured in reviews of sustainability of healthcare interventions included organisation- or community-specific factors (36%), continuation of the intervention beyond a specified period of time (24%), established collaborations or partnerships (12%) and resources (8%). Only one of the reviews [ 91 ] found that sustainability had been achieved for the healthcare intervention under investigation. A number of other reviews [ 92 , 93 , 94 ] were unable to draw a definitive conclusion due to inconsistent definitions and measures of sustainability within the empirical literature in the reviews.

Seven different sustainability frameworks (Table 6 ) were referred to in nine of the systematic reviews. The two most frequently used frameworks included ongoing maintenance of benefits from the intervention, capacity building and integration of the intervention or program within the organisation [ 95 , 96 ]. Moore et al. developed a comprehensive definition of sustainability based on five constructs: continuation of a program, intervention, implementation strategies or individual behaviour change after a defined period of time, with or without adaptations, while continuing to produce benefits for individuals and/or systems [ 95 ].

In summary, two key findings emerged from the overview of reviews. First, the current literature suggests that the concepts are related, although there was some variation in the terms used. For example, reviews on feasibility of a healthcare intervention measured ‘implementation’ of the intervention, defined as the extent to which the intervention can be delivered as planned, also known as fidelity. Similarly, reviews on sustainability, also known as maintenance, measured resources, funding and organisational factors, which were all measures frequently included within reviews on feasibility of a healthcare intervention. The second key finding is that although acceptability appeared to be an important factor contributing to fidelity, additional factors were required, for example provider training. Similarly, although fidelity was an important factor contributing to feasibility, additional factors were required, for example funding and other resources such as physical space.

Step 2: Modified Nominal Group Technique

The first group meeting took place online using Zoom [ 97 ] in March 2021. Three themes were identified from the group discussion, as follows:

Theme 1: It is plausible that the concepts influence implementability of a healthcare intervention.

Following presentation of the findings of the overview of reviews on each concept, participants agreed on the theoretical plausibility of a framework of implementability for healthcare interventions that includes all five concepts. This was further consolidated through sharing their own real-world research and clinical experiences as described in Tables 7 , 8 , 9 , 10 and 11 .

Theme 2: The concepts appear to be related to one another.

Participants were asked to consider the question “in your view, is it plausible that the concepts are related to each other?” All agreed that it was possible that the concepts were interdependent and should be considered together when developing and implementing a healthcare intervention. They suggested that individual concepts were necessary but insufficient on their own to ensure implementability of a healthcare intervention. Participants also identified that in many of the clinical scenarios they shared, often more than one concept was involved in the final outcome. For example, it was identified that the hypothetical scalability scenario included issues with feasibility of a device, acceptability of an online system and lack of procedural information relevant for fidelity.

Theme 3: The preliminary implementability framework could be amended to better represent the relationships between the concepts.

Although all participants agreed that the concepts were related, they did not agree that the preliminary framework adequately represented the nature of their interdependence. Some participants felt more detail was needed to explain how the concepts were related, whilst others felt that some concepts required greater representation. All participants were asked to consider the framework further over the following week and provide feedback and revisions.

Step 3: Revising the framework

Following the first group discussion and feedback from the participants, the preliminary framework was revised, resulting in three options presented to the consensus panel as three figures. These drafts incorporated the revisions requested by participants: annotations on the original framework (option one), different graphics representing the relationships between the concepts (option two), and a combination of the two (option three). Participants were asked to consider the three options, silently vote on their preferred framework and provide any further feedback by return email. The feedback was considered and integrated by two authors (MK and JF), resulting in a further version of the framework (Fig. 4 ). All participants approved this final version.
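The silent-voting step can be tallied mechanically. A minimal sketch with hypothetical ballots (the individual votes and panel size are not reported at this level of detail):

```python
from collections import Counter

# Hypothetical ballots: each panellist's preferred framework option (1-3).
ballots = [3, 1, 3, 2, 3, 3, 1]

tally = Counter(ballots)                    # votes per option
preferred, votes = tally.most_common(1)[0]  # modal (winning) option
print(f"option {preferred} preferred with {votes}/{len(ballots)} votes")
```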

Fig. 4 Conceptual framework of implementability of healthcare interventions

The framework as depicted in Fig. 4 is designed to guide research activities, chronologically and iteratively, from left to right and from bottom to top. Commencing in the research context, it is proposed that acceptability is the first concept to assess, during intervention development and work-up of supporting documentation and resources (intervention protocol, training manual, patient information leaflet, data and technology requirements, validation of digital components, etc.). If acceptability is adequate to providers and potential recipients, it is appropriate to deliver the intervention to assess fidelity as delivered and as received. Without adequate acceptability, providers and recipients are unlikely to engage with the intervention, and hence, fidelity will be low. Adequate fidelity will also require other enabling factors such as provider training and confirmed information flow. Without adequate fidelity, it would be wasteful to conduct a feasibility study. Factors such as appropriate resources, workforce, technology, and management will be required for feasibility. If feasibility (supported by acceptability and fidelity) in the research context is adequate, it is appropriate to consider testing acceptability, fidelity and feasibility in the healthcare context. If adopted consistently in one healthcare context, it is appropriate to consider scaling the intervention to other settings, provider groups and patient groups. In each new setting, it would be wise to re-assess acceptability, fidelity and feasibility as described above because adequate feasibility in one setting at one time, whilst a positive sign, is not a guarantee that the intervention will be feasible in other settings. Similarly, over time, the factors that support feasibility may change, thus threatening sustainability. It would therefore be prudent to continue to assess the factors affecting feasibility over time to detect any problems that need to be addressed.
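The sequential gating described above can be summarised as a small decision procedure. The sketch below is purely illustrative: the concept names follow the framework, but the `assess` predicate and setting labels are hypothetical placeholders for whatever measures a study actually uses.

```python
# Order of the framework's gates: each concept is a precondition for the next,
# and all three must be re-assessed in every new setting and over time.
GATES = ("acceptability", "fidelity", "feasibility")

def implementability_gates(assess, setting):
    """Walk the gates in order; stop at the first inadequate concept.

    `assess` is a caller-supplied predicate: (concept, setting) -> bool.
    """
    for concept in GATES:
        if not assess(concept, setting):
            return f"stop: inadequate {concept} in {setting}"
    # All three gates passed in this setting; re-run them in each new setting.
    return f"proceed: consider scale-up beyond {setting}"

# Example: fidelity fails in the research context, so feasibility is never tested.
print(implementability_gates(lambda c, s: c != "fidelity", "research context"))
# -> stop: inadequate fidelity in research context
```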

Based on the findings from the overview of reviews and the group consensus process, we propose a framework of implementability of healthcare interventions which includes five key concepts, namely, acceptability, fidelity, feasibility, scalability and sustainability. The framework illustrates the interrelationship between the concepts and chronology, with acceptability, fidelity and feasibility requiring investigation during early stages of the development of a healthcare implementation, including during proof-of-principle studies and pragmatic evaluations of intervention effectiveness at one point in time and in one specific context. Acceptability is a necessary but not sufficient condition for fidelity, and similarly, fidelity is a necessary but not sufficient condition for feasibility. All three concepts are context- and population-dependent and will require reinvestigation as the healthcare intervention is scaled to different settings and populations and over time. We argue that there is an association between the concepts, with acceptability, fidelity and feasibility influencing the scalability and sustainability of a healthcare intervention. We are not suggesting that there is a causal relationship between the concepts, rather, scalability and sustainability depend on the pre-conditions acceptability, fidelity and feasibility of the healthcare intervention, and these concepts should be re-examined over time, and as the healthcare intervention is implemented with different populations or in different settings. This argument is consistent with other frameworks on scalability and sustainability, including the Intervention Scalability Assessment Tool [ 82 ] and the Dynamic Sustainability Framework [ 98 ], both of which suggest that feasibility, acceptability and fidelity must be considered in the planning for scaling up and sustainability of a healthcare intervention. 
The Consolidated Framework for Implementation Research [ 12 ] suggests that the outer setting, including the social and economic context, can influence implementation. Our proposed framework encourages the researcher to prospectively assess acceptability, fidelity and feasibility in both the inner and outer contexts as the key stakeholders are likely to be different.

From the 252 reviews identified in the overview, the majority did not provide a definition of the concept; rather, they used measurement approaches that implied a definition. For example, feasibility was typically defined by measuring components such as compliance with the intervention, dropouts, recruitment rates and adverse events. Terms were also conflated, with feasibility and safety used interchangeably in several reviews, particularly drug feasibility reviews. Although frameworks were identified for all five concepts, they were not frequently used in the reviews identified in the overview. Most of the frameworks included conceptual definitions and operationalisation of the concept, but these varied between frameworks. It is difficult to test the influence of these concepts on the implementability of a healthcare intervention without consistent definitions, descriptions, operationalisation and measurement approaches.

Of all the concepts explored in the systematic overview, scalability and sustainability of healthcare interventions were not often achieved. These findings suggest that healthcare interventions may be found to be effective, acceptable and feasible in the development or pilot phase, but this does not guarantee successful scale-up or sustainment of the intervention over time. We propose that the framework of implementability can provide a dynamic, longitudinal perspective of intervention development where researchers consider acceptability, fidelity and feasibility during the earlier phases of intervention development and implementation, and iteratively re-evaluate these factors as the healthcare intervention is scaled to different settings and over time.

Other rigorous frameworks, such as RE-AIM and the Implementation Outcomes Framework, propose that sustainable adoption and implementation of healthcare interventions require consideration of many similar concepts, such as acceptability, reach, effectiveness, adoption, implementation and maintenance [ 99 , 100 ]. Whilst there are some similarities between the concepts in these frameworks and our proposed framework of implementability, the latter is explicitly concerned with the prospective and ongoing identification of factors that will influence scalability and sustainability of a healthcare intervention. It should also be noted that prospective identification of factors that influence implementability of healthcare interventions is receiving growing attention in the literature, particularly in relation to reducing avoidable research waste [ 101 ]. A recent publication developed a framework to assist researchers to prospectively cost the different phases of healthcare interventions. The authors argue that implementation costs are often underestimated or not included in cost-effectiveness analyses. This in turn contributes to research waste which may have been avoided through a more systematic and earlier approach to identifying the factors that support the translation of effective interventions into real-world settings, and prospectively costing the implementation of these.

It has been argued that involving end-users in the development and implementation of healthcare interventions may improve outcomes through enhanced relevance, acceptability and feasibility [ 102 ]. We propose that our framework of implementability could be used to test these assumptions. We recommend that researchers prospectively set criteria to inform the decision about whether to abandon, amend or proceed with the intervention, depending on the outcomes of the feasibility study [ 103 ]. Table 12 illustrates how the framework of implementability could be used to prospectively guide the ongoing implementation activities of healthcare interventions. Enabling factors were identified from published frameworks and reviews included in step 1, though this is not an exhaustive list and is likely to be influenced by context. For example, scalability of healthcare interventions to low- and middle-income countries may be more, or less, enabled by factors that differ from those in high-income countries [ 104 ]. The framework of implementability of healthcare interventions may be particularly helpful in guiding effectiveness-implementation hybrid designs, which aim to simultaneously evaluate effectiveness of the intervention in the real-world context and the implementation strategy [ 105 , 106 , 107 ].
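The recommendation to set progression criteria prospectively can be made concrete with a simple decision rule. The thresholds and measure names below are hypothetical and for illustration only; they are not drawn from the paper or from the frameworks it cites.

```python
# Hypothetical pre-registered progression criteria for a feasibility study.
CRITERIA = {
    "recruitment_rate": 0.70,    # proportion of eligible patients recruited
    "retention_rate": 0.80,      # proportion completing follow-up
    "protocol_adherence": 0.75,  # proportion of sessions delivered as planned
}

def progression_decision(results):
    """Return 'proceed', 'amend: <criteria>' or 'abandon' against CRITERIA."""
    failed = [name for name, cutoff in CRITERIA.items()
              if results.get(name, 0.0) < cutoff]
    if not failed:
        return "proceed"
    if len(failed) < len(CRITERIA):
        return "amend: " + ", ".join(failed)
    return "abandon"

print(progression_decision({"recruitment_rate": 0.55,
                            "retention_rate": 0.85,
                            "protocol_adherence": 0.80}))
# -> amend: recruitment_rate
```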

We propose that empirical investigation of the framework of implementability is required to answer the following questions:

What is the nature of the relationship between the key concepts? (e.g. linear, curvilinear or threshold?)

Do acceptability, fidelity and feasibility predict scalability and sustainability as proposed in the framework?

Can identified deficits in constructs of the framework be addressed to enhance the implementability of effective interventions?

Strengths and limitations

To our knowledge, this is the first review that considers all five key concepts in published reviews on healthcare interventions. The overview collated and described important information on concepts that are increasingly being assumed to influence implementation of healthcare interventions. The development of a framework utilising well-established consensus methods is another strength. Although one author was responsible for most of the screening, data extraction and coding, independent extraction by a second author of 10% of the reviews confirmed reliability of the extraction process.

In order to make the overview of reviews feasible, we focused only on publications that had one or more of the concepts in the title and/or abstract. Therefore, it is possible we may not have identified some relevant reviews. It must also be noted that the framework of implementability of healthcare interventions is untested. We propose that it articulates some of the untested assumptions in the current literature on implementation science and have suggested some approaches for empirical evaluation of the framework.

We do not propose that interventions with high implementability will automatically result in high uptake. As we argued in the background to this paper, the features of interventions may interact with top-down and bottom-up implementation activities, and with contextual factors, to achieve consistent uptake into routine practice. Our argument is that these implementation activities are more likely to be effective if implementability of the intervention is high.

The framework developed in this study can inform research that aims to prospectively and iteratively identify the likely implementability of evidence-based healthcare interventions. We suggest that the framework be tested empirically through studies that examine the actual uptake of interventions, across settings and over time, compared with prospective assessments of the independent variables (acceptability, fidelity and feasibility) and the outcome variables (scalability and sustainability) in the framework. We recommend that, to avoid research waste, implementability should be assessed, and enhancements made, during the clinical evaluation stages of the development of interventions [ 1 , 3 ]. This would potentially accelerate their uptake into clinical practice.

Availability of data and materials

All data generated during this study are included either within the text or as an additional file.

Abbreviations

AIDED: Assess, Innovate, Develop, Engage and Devolve

CFIR: Consolidated Framework of Implementation Research

FAME: Feasibility, Appropriateness, Meaningfulness and Effectiveness

IOF: Implementation Outcomes Framework

ISAT: Intervention Scalability Assessment Tool

ITIPS: Implementation of Treatment Integrity Procedures Scale

NASSS: Non-adoption, Abandonment, Scale-up, Spread and Sustainability

NGT: Nominal Group Technique

NIHBCC: National Institute of Health Behavior Change Consortium

PRISMA: Preferred Reporting Items for Systematic Reviews

RE-AIM: Reach, Effectiveness, Adoption, Implementation and Maintenance

SAF: Structured Assessment of Feasibility

WHO: World Health Organisation

Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet (London, England). 2009;374(9683):86–9.

Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322–7.

Glasziou P, Chalmers I. Research waste is still a scandal—an essay by Paul Glasziou and Iain Chalmers. Bmj. 2018;363:k4645.

Ivers NM, Grimshaw JM. Reducing research waste with implementation laboratories. Lancet (London, England). 2016;388(10044):547–8.

Davey P, Marwick CA, Scott CL, Charani E, McNeil K, Brown E, Gould IM, Ramsay CR, Michie S. Interventions to improve antibiotic prescribing practices for hospital inpatients. Cochrane Database Syst Rev. 2017;2(2):CD003543. https://doi.org/10.1002/14651858.CD003543.pub4 .

Flodgren G, Eccles MP, Shepperd S, Scott A, Parmelli E, Beyer FR. An overview of reviews evaluating the effectiveness of financial incentives in changing healthcare professional behaviours and patient outcomes. Cochrane Database Syst Rev. 2011;2011(7):CD009255. https://doi.org/10.1002/14651858.CD009255 .

Forsetlund L, Bjørndal A, Rashidian A, Jamtvedt G, O'Brien MA, Wolf F, Davis D, Odgaard-Jensen J, Oxman AD. Continuing education meetings and workshops: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2009;2009(2):CD003030. https://doi.org/10.1002/14651858.CD003030.pub2 . Update in: Cochrane Database Syst Rev. 2021 Sep 15;9:CD003030.

Cane J, O'Connor D, Michie S. Validation of the theoretical domains framework for use in behaviour change and implementation research. Implement Sci. 2012;7:37.

French SD, Green SE, O'Connor DA, McKenzie JE, Francis JJ, Michie S, et al. Developing theory-informed behaviour change interventions to implement evidence into practice: a systematic approach using the Theoretical Domains Framework. Implement Sci. 2012;7(1):38.

Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A, et al. Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Saf Health Care. 2005;14(1):26–33.

Presseau J, McCleary N, Lorencatto F, Patey AM, Grimshaw JM, Francis JJ. Action, actor, context, target, time (AACTT): a framework for specifying behaviour. Implement Sci. 2019;14(1):102.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.

Squires JE, Graham I, Bashir K, Nadalin-Penno L, Lavis J, Francis J, et al. Understanding context: A concept analysis. J Adv Nurs. 2019;75(12):3448–70.

Squires JE, Graham ID, Hutchinson AM, Michie S, Francis JJ, Sales A, et al. Identifying the domains of context important to implementation science: a study protocol. Implement Sci. 2015;10(1):135.

Weiner BJ. A theory of organizational readiness for change. Implement Sci. 2009;4(1):67.

Skivington K, Matthews L, Simpson SA, Craig P, Baird J, Blazeby JM, et al. A new framework for developing and evaluating complex interventions: update of Medical Research Council guidance. BMJ. 2021;374:n2061.

Rogers EM, Cartano DG. Methods of measuring opinion leadership. Public Opin Q. 1962;26(3):435–41.

Huybrechts I, Declercq A, Verté E, Raeymaeckers P, Anthierens S. The Building Blocks of Implementation Frameworks and Models in Primary Care: A Narrative Review. Front Public Health. 2021;9:675171. https://doi.org/10.3389/fpubh.2021.675171 .

Gilbert AW, Jaggi A, May CR. What is the patient acceptability of real time 1:1 videoconferencing in an orthopaedics setting? A systematic review. Physiotherapy. 2018;104(2):178–86.

Gagliardi AR, Brouwers MC, Palda VA, Lemieux-Charles L, Grimshaw JM. How can we improve guideline use? A conceptual framework of implementability. Implement Sci. 2011;6:26.

Shiffman RN, Dixon J, Brandt C, Essaihi A, Hsiao A, Michel G, et al. The GuideLine Implementability Appraisal (GLIA): development of an instrument to identify obstacles to guideline implementation. BMC Med Inform Decis Mak. 2005;5(1):23.

Kastner M, Bhattacharyya O, Hayden L, Makarski J, Estey E, Durocher L, et al. Guideline uptake is influenced by six implementability domains for creating and communicating guidelines: a realist review. J Clin Epidemiol. 2015;68(5):498–509.

Shekelle P, Woolf S, Grimshaw JM, Schünemann HJ, Eccles MP. Developing clinical practice guidelines: reviewing, reporting, and publishing guidelines; updating guidelines; and the emerging issues of enhancing guideline implementability and accounting for comorbid conditions in guideline development. Implement Sci. 2012;7(1):62.

Cumpston MS, Webb SA, Middleton P, Sharplin G, Green S, Australian Clinical Trials Alliance Reference Group on I, et al. Understanding implementability in clinical trials: a pragmatic review and concept map. Trials. 2021;22(1):232.

World Health Organization. (n.d.). International Classification of Health Interventions (ICHI). WHO. Retrieved October 2021 from https://www.who.int/standards/classifications/international-classification-of-health-interventions

Pollock A, Campbell P, Brunton G, Hunt H, Estcourt L. Selecting and implementing overview methods: implications from five exemplar overviews. Syst Rev. 2017;6(1):145.

The EndNote Team. EndNote X9. Philadelphia: Clarivate; 2013.

Van de Ven AH, Delbecq AL. The nominal group as a research instrument for exploratory health studies. Am J Public Health. 1972;62(3):337-42. https://doi.org/10.2105/ajph.62.3.337 .

Hussainy SY, Crum MF, White PJ, Larson I, Malone DT, Manallack DT, Nicolazzo JA, McDowell J, Lim AS, Kirkpatrick CM. Developing a Framework for Objective Structured Clinical Examinations Using the Nominal Group Technique. Am J Pharm Educ. 2016;80(9):158. https://doi.org/10.5688/ajpe809158 .

McMillan SS, Kelly F, Sav A, Kendall E, King MA, Whitty JA, Wheeler AJ. Using the Nominal Group Technique: how to analyse across multiple groups. Health Services and Outcomes Research Methodology. 2014;14(3):92-108.

Pan H, Norris JL, Liang YS, Li JN, Ho MJ. Building a professionalism framework for healthcare providers in China: a nominal group technique study. Med Teach. 2013;35(10):e1531–6.

Rubin G, De Wit N, Meineche-Schmidt V, Seifert B, Hall N, Hungin P. The diagnosis of IBS in primary care: consensus development using nominal group technique. Fam Pract. 2006;23(6):687-92. https://doi.org/10.1093/fampra/cml050 . Epub 2006 Oct 24.

Sarre G, Cooke J. Developing indicators for measuring Research Capacity Development in primary care organizations: a consensus approach using a nominal group technique. Health Soc Care Community. 2009;17(3):244-53. https://doi.org/10.1111/j.1365-2524.2008.00821.x .

Olsen J. The Nominal Group Technique (NGT) as a tool for facilitating pan-disability focus groups and as a new method for quantifying changes in qualitative data. Int J Qual Methods. 2019;18:1609406919866049.

Harvey N, Holmes CA. Nominal group technique: an effective method for obtaining group consensus. Int J Nurs Pract. 2012;18(2):188–94.

Leape LL, Park RE, Kahan JP, Brook RH. Group judgments of appropriateness: the effect of panel composition. International J Qual Health Care. 1992;4(2):151–9.

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. Updating guidance for reporting systematic reviews: development of the PRISMA 2020 statement. J Clin Epidemiol. 2021;134:103–12.

Forbes CC, Finlay A, McIntosh M, Siddiquee S, Short CE. A systematic review of the feasibility, acceptability, and efficacy of online supportive care interventions targeting men with a history of prostate cancer. J Cancer Survivorship : Res Pract. 2019;13(1):75–96.

Pham MD, Agius PA, Romero L, McGlynn P, Anderson D, Crowe SM, et al. Acceptability and feasibility of point-of-care CD4 testing on HIV continuum of care in low and middle income countries: a systematic review. BMC Health Serv Res. 2016;16(a):343.

Skea ZC, Aceves-Martins M, Robertson C, De Bruin M, Avenell A. Acceptability and feasibility of weight management programmes for adults with severe obesity: a qualitative systematic review. BMJ Open. 2019;9(9):e029473.

Marshall S, Vahabi M, Lofters A. Acceptability, Feasibility and uptake of HPV self-sampling among immigrant minority women: a focused literature review. J Immigr Minor Health. 2019;21(6):1380–93.

Babiano-Espinosa L, Wolters LH, Weidle B, Op de Beek V, Pedersen SA, Compton S, et al. Acceptability, feasibility, and efficacy of Internet cognitive behavioral therapy (iCBT) for pediatric obsessive-compulsive disorder: a systematic review. Syst Rev. 2019;8(1):284.

Gilbey D, Morgan H, Lin A, Perry Y. Effectiveness, Acceptability, and feasibility of digital health interventions for LGBTIQ+ young people: systematic review. J Med Internet Res. 2020;22(12):e20158.

Sheffield KM, Woods-Giscombé CL. Efficacy, Feasibility, and Acceptability of perinatal yoga on women's mental health and well-being: a systematic literature review. J Holist Nurs. 2016;34(1):64–79.

D'Alò GL, De Crescenzo F, Minozzi S, Morgano GP, Mitrova Z, Scattoni ML, et al. Equity, acceptability and feasibility of using polyunsaturated fatty acids in children and adolescents with autism spectrum disorder: a rapid systematic review. Health Qual Life Outcomes. 2020;18(1):101.

Li Y, Coster S, Norman I, Chien WT, Qin J, Ling Tse M, et al. Feasibility, acceptability, and preliminary effectiveness of mindfulness-based interventions for people with recent-onset psychosis: A systematic review. Early Interv Psychiatry. 2021;15(1):3–15.

Article   CAS   PubMed   Google Scholar  

Fish AF, Christman SK, Frid DJ, Smith BA, Bryant CX. Feasibility and acceptability of stepping exercise for cardiovascular fitness in women. Appl Nurs Res. 2009;22(4):274–9.

Heynsbergh N, Heckel L, Botti M, Livingston PM. Feasibility, useability and acceptability of technology-based interventions for informal cancer carers: a systematic review. BMC Cancer. 2018;18(1):244.

Hadgraft NT, Brakenridge CL, Dunstan DW, Owen N, Healy GN, Lawler SP. Perceptions of the acceptability and feasibility of reducing occupational sitting: review and thematic synthesis. Int J Behav Nutr Phys Act. 2018;15(1):90.

Pierret ACS, Anderson JK, Ford TJ. Burn A.-M. Review: Education and training interventions, and support tools for school staff to adequately respond to young people who disclose self-harm – a systematic literature review of effectiveness, feasibility and acceptability. Child Adolesc Ment Health. 2021. https://doi.org/10.1111/camh.12436

Brooke-Sumner C, Petersen I, Asher L, Mall S, Egbe CO, Lund C. Systematic review of feasibility and acceptability of psychosocial interventions for schizophrenia in low and middle income countries. BMC Psychiatry. 2015;15:19.

Moulton-Perkins A, Moulton D, Cavanagh K, Jozavi A, Strauss C. Systematic review of mindfulness-based cognitive therapy and mindfulness-based stress reduction via group videoconferencing: Feasibility, acceptability, safety, and efficacy. Journal of Psychotherapy Integration. Advance online publication. 2020. https://doi.org/10.1037/int0000216 .

Shek AC, Biondi A, Ballard D, Wykes T, Simblett SK. Technology-based interventions for mental health support after stroke: A systematic review of their acceptability and feasibility. Neuropsychol Rehabil. 2021;31(3):432–52.

Padmanathan P, De Silva MJ. The acceptability and feasibility of task-sharing for mental healthcare in low and middle income countries: a systematic review. Soc Sci Med. 1982;2013(97):82–6.

Griffiths H. The acceptability and feasibility of using text messaging to support the delivery of physical health care in those suffering from a psychotic disorder: a review of the literature. Psychiatry Q. 2020;91(4):1305–16.

Stephen C, McInnes S, Halcomb E. The feasibility and acceptability of nurse-led chronic disease management interventions in primary care: an integrative review. J Adv Nurs. 2018;74(2):279–88.

Tough D, Robinson J, Gowling S, Raby P, Dixon J, Harrison SL. The feasibility, acceptability and outcomes of exergaming among individuals with cancer: a systematic review. BMC Cancer. 2018;18(1):1151.

Pallas SW, Minhas D, Perez-Escamilla R, Taylor L, Curry L, Bradley EH. Community health workers in low- and middle-income countries: what do we know about scaling up and sustainability? Am J Public Health. 2013;103(7):e74–82.

James HM, Papoutsi C, Wherton J, Greenhalgh T, Shaw SE. Spread, scale-up, and sustainability of video consulting in health care: systematic review and synthesis guided by the NASSS framework. J Med Internet Res. 2021;23(1):e23775.

Qiu D, Hu M, Yu Y, Tang B, Xiao S. Acceptability of psychosocial interventions for dementia caregivers: a systematic review. BMC Psychiatry. 2019;19(1):23.

Bautista T, James D, Amaro H. Acceptability of mindfulness-based interventions for substance use disorder: a systematic review. Complement Ther Clin Pract. 2019;35:201–7.

Sotirova MB, McCaughan EM, Ramsey L, Flannagan C, Kerr DP, OʼConnor SR, et al. Acceptability of online exercise-based interventions after breast cancer surgery: systematic review and narrative synthesis. J Cancer Survivorship : Res Pract. 2021;15(2):281–310.

Simon N, McGillivray L, Roberts NP, Barawi K, Lewis CE, Bisson JI. Acceptability of internet-based cognitive behavioural therapy (i-CBT) for post-traumatic stress disorder (PTSD): a systematic review. Eur J Psychotraumatol. 2019;10(1):1646092.

Sprogis SK, Currey J, Considine J. Patient acceptability of wearable vital sign monitoring technologies in the acute care setting: a systematic review. J Clin Nurs. 2019;28(15-16):2732–44.

Griffin JB, Ridgeway K, Montgomery E, Torjesen K, Clark R, Peterson J, et al. Vaginal ring acceptability and related preferences among women in low- and middle-income countries: a systematic review and narrative synthesis. PLoS One. 2019;14(11):e0224898.

Goldberg SB, Riordan KM, Sun S, Kearney DJ, Simpson TL. Efficacy and acceptability of mindfulness-based interventions for military veterans: a systematic review and meta-analysis. J Psychosom Res. 2020;138:110232.

Sekhon M, Cartwright M, Francis JJ. Acceptability of healthcare interventions: an overview of reviews and development of a theoretical framework. BMC Health Serv Res. 2017;17(1):88. https://doi.org/10.1186/s12913-017-2031-8 .

Cambridge University Press. (n.d.). Fidelity. In Cambridge dictionary. Retrieved March 2021, from https://dictionary.cambridge.org/dictionary/english/fidelity .

Bellg AJ, Borrelli B, Resnick B, Hecht J, Minicucci DS, Ory M, et al. Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the NIH Behavior Change Consortium. Health Psychol. 2004;23(5):443–51.

Troy V, McPherson KE, Emslie C, Gilchrist E. The Feasibility, Appropriateness, meaningfulness, and effectiveness of parenting and family support programs delivered in the criminal justice system: a systematic review. J Child Fam Stud. 2018;27(6):1732–47.

Peltea A, Berghea F, Gudu T, Ionescu R. Knee ultrasound from research to real practice: a systematic literature review of adult knee ultrasound assessment feasibility studies. Med Ultrason. 2016;18(4):457–62.

Learmonth YC, Motl RW. Important considerations for feasibility studies in physical activity research involving persons with multiple sclerosis: a scoping systematic review and case study. Pilot Feasibility Stud. 2018;4:1.

Seitz DP, Brisbin S, Herrmann N, Rapoport MJ, Wilson K, Gill SS, et al. Efficacy and feasibility of nonpharmacological interventions for neuropsychiatric symptoms of dementia in long term care: a systematic review. J Am Med Dir Assoc. 2012;13(6):503–6.e2.

Chipps J, Brysiewicz P, Mars M. Effectiveness and feasibility of telepsychiatry in resource constrained environments? A systematic review of the evidence. Afr J Psychiatry. 2012;15(4):235–43.

Soneson E, Howarth E, Ford T, Humphrey A, Jones PB, Thompson Coon J, et al. Feasibility of school-based identification of children and adolescents experiencing, or at-risk of developing, mental health difficulties: a systematic review. Prev Sci. 2020;21(5):581–603.

Bowen DJ, Kreuter M, Spring B, Cofta-Woerpel L, Linnan L, Weiner D, Bakken S, Kaplan CP, Squiers L, Fabrizio C, Fernandez M. How we design feasibility studies. Am J Prev Med. 2009;36(5):452-7. https://doi.org/10.1016/j.amepre.2009.02.002 .

Chapman DJ, Morel K, Anderson AK, Damio G, Perez-Escamilla R. Breastfeeding peer counseling: from efficacy through scale-up. J Hum Lact. 2010;26(3):314–26.

Troup J, Fuhr DC, Woodward A, Sondorp E, Roberts B. Barriers and facilitators for scaling up mental health and psychosocial support interventions in low- and middle-income countries for populations affected by humanitarian crises: a systematic review. Int J Mental Health Syst. 2021;15(1):5.

Ben Charif A, Zomahoun HTV, LeBlanc A, Langlois L, Wolfenden L, Yoong SL, et al. Effective strategies for scaling up evidence-based practices in primary care: a systematic review. Implement Sci. 2017;12(1):139.

McCrabb S, Lane C, Hall A, Milat A, Bauman A, Sutherland R, et al. Scaling-up evidence-based obesity interventions: a systematic review assessing intervention adaptations and effectiveness and quantifying the scale-up penalty. Obes Rev. 2019;20(7):964–82.

World Health Organization. Practical guidance for scaling up health service innovations. Geneva: World Health Organization; 2009. Available from: https://apps.who.int/iris/bitstream/handle/10665/44180/9789241598521_eng.pdf;jsessionid=FFD201E7790BD61165F70FF4F21A6AE3?sequence=1

Milat A, Lee K, Conte K, Grunseit A, Wolfenden L, van Nassau F, et al. Intervention scalability assessment tool: a decision support tool for health policy makers and implementers. Health Res Policy Syst. 2020;18(1):1.

Bradley EH, Curry LA, Taylor LA, Pallas SW, Talbert-Slagle K, Yuan C, Fox A, Minhas D, Ciccone DK, Berg D, Pérez-Escamilla R. A model for scale up of family health innovations in low-income and middle-income settings: a mixed methods study. BMJ Open. 2012;2(4):e000987. https://doi.org/10.1136/bmjopen-2012-000987 .

Greenhalgh T, Wherton J, Papoutsi C, Lynch J, Hughes G, A’Court C, et al. Beyond adoption: a new framework for theorizing and evaluating nonadoption, abandonment, and challenges to the scale-up, spread, and sustainability of health and care technologies. J Med Internet Res. 2017;19(11):e367.

Hailemariam M, Bustos T, Montgomery B, Barajas R, Evans LB, Drahota A. Evidence-based intervention sustainability strategies: a systematic review. Implement Sci. 2019;14(1):57.

Francis L, Dunt D, Cadilhac DA. How is the sustainability of chronic disease health programmes empirically measured in hospital and related healthcare services?-a scoping review. BMJ Open. 2016;6(5):e010944.

Braithwaite J, Ludlow K, Testa L, Herkes J, Augustsson H, Lamprell G, et al. Built to last? The sustainability of healthcare system improvements, programmes and interventions: a systematic integrative review. BMJ Open. 2020;10(6):e036453.

Ishola F, Cekan J. Evaluating the sustainability of health programmes: A literature review. African Evaluation Journal. 2019; 7(1): 1–7. https://doi.org/10.4102/aej.v7i1.369 .

Mok WKH, Sharif R, Poh BK, Wee LH, Reilly JJ, Ruzita AT. Sustainability of childhood obesity interventions: a systematic review. Pak J Nutr. 2019;18:603–14.

Ament SM, de Groot JJ, Maessen JM, Dirksen CD, van der Weijden T, Kleijnen J. Sustainability of professionalsʼ adherence to clinical practice guidelines in medical care: a systematic review. BMJ Open. 2015;5(12):e008073.

Lauckner C, Whitten P. The state and sustainability of telepsychiatry programs. J Behav Health Serv Res. 2016;43(2):305–18.

Herlitz L, MacIntyre H, Osborn T, Bonell C. The sustainability of public health interventions in schools: a systematic review. Implement Sci. 2020;15(1):4.

Flynn R, Newton AS, Rotter T, Hartfield D, Walton S, Fiander M, et al. The sustainability of Lean in pediatric healthcare: a realist review. Syst Rev. 2018;7(1):137.

Crespo-Gonzalez C, Benrimoj SI, Scerri M, Garcia-Cardenas V. Sustainability of innovations in healthcare: A systematic review and conceptual framework for professional pharmacy services. Res Social Adm Pharm. 2020;16(10):1331–43.

Moore JE, Mascarenhas A, Bain J, Straus SE. Developing a comprehensive definition of sustainability. Implement Sci. 2017;12(1):110.

Shediac-Rizkallah MC, Bone LR. Planning for the sustainability of community-based health programs: conceptual frameworks and future directions for research, practice and policy. Health Educ Res. 1998;13(1):87-108. https://doi.org/10.1093/her/13.1.87 .

Banyai I. Zoom. New York: Viking; 1995.

Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8(1):117.

Glasgow RE, Harden SM, Gaglio B, Rabin B, Smith ML, Porter GC, et al. RE-AIM planning and evaluation framework: adapting to new science and practice with a 20-year review. Front Public Health. 2019;7(64):64.

Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76.

Sohn H, Tucker A, Ferguson O, Gomes I, Dowdy D. Costing the implementation of public health interventions in resource-limited settings: a conceptual framework. Implement Sci. 2020;15(1):86.

Greenhalgh T, Hinton L, Finlay T, Macfarlane A, Fahy N, Clyde B, et al. Frameworks for supporting patient and public involvement in research: systematic review and co-design pilot. Health Expect. 2019;22(4):785–801.

Patton DE, Pearce CJ, Cartwright M, Smith F, Cadogan CA, Ryan C, et al. A non-randomised pilot study of the Solutions for Medication Adherence Problems (S-MAP) intervention in community pharmacies to support older adults adhere to multiple medications. Pilot Feasibility Stud. 2021;7(1):18.

Bulthuis SE, Kok MC, Raven J, Dieleman MA. Factors influencing the scale-up of public health interventions in low- and middle-income countries: a qualitative systematic literature review. Health Policy Plan. 2020;35(2):219–34.

Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–26.

Bernet AC, Willens DE, Bauer MS. Effectiveness-implementation hybrid designs: implications for quality improvement science. Implement Sci. 2013;8(Suppl 1):S2.

Article   PubMed Central   Google Scholar  

Landes SJ, McBain SA, Curran GM. An introduction to effectiveness-implementation hybrid designs. Psychiatry Res. 2019;280:112513.

Download references

Acknowledgements

Not applicable.

Author information

Authors and Affiliations

The University of Melbourne, School of Health Sciences, Melbourne, Australia

Marlena Klaic, Peter Hudson, Linda Denehy & Jill J. Francis

The Royal Melbourne Hospital, Allied Health Department, Melbourne, Australia

Marlena Klaic

The University of Melbourne, School of Health Sciences, Faculty of Medicine, Dentistry and Health Sciences, Department of Nursing, Melbourne, Australia

Suzanne Kapp

Centre for Palliative Care, St Vincent’s Hospital, Melbourne, Australia

Peter Hudson

End-of-life Care Research Department, Vrije Universiteit Brussel (VUB), Brussels, Belgium

Centre for Digital Transformation of Health, The University of Melbourne, Melbourne, Australia

Wendy Chapman

Department of Allied Health, Peter McCallum Cancer Centre, Melbourne, Australia

Linda Denehy

Department of Critical Care, The University of Melbourne, Melbourne, Australia

David Story

Department of Anaesthesia, Austin Health, Melbourne, Australia

Ottawa Hospital Research Institute, Clinical Epidemiology Program, Ottawa, Canada

Jill J. Francis

Department of Health Services Research, Peter McCallum Cancer Centre, Melbourne, Australia


Contributions

MK completed the systematic searches, synthesis of data, framework design and substantial writing in collaboration with JF. SK was the second reviewer for 10% of the literature included in the review and commented on drafts. PH commented on drafts and provided a clinical scenario. WC commented on drafts and provided a clinical scenario. LD commented on drafts and provided a clinical scenario. DS commented on drafts and provided a clinical scenario. JF contributed the original idea for the framework, coordinated the authorship team and substantial writing in collaboration with MK. All authors participated in the consensus process and approved the final submitted version of the manuscript.

Corresponding author

Correspondence to Marlena Klaic .

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Search strategy.

Additional file 2.

Papers included in the overview.

Additional file 3.

Characteristics of studies included in the overview of reviews (n=252).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Klaic, M., Kapp, S., Hudson, P. et al. Implementability of healthcare interventions: an overview of reviews and development of a conceptual framework. Implementation Sci 17 , 10 (2022). https://doi.org/10.1186/s13012-021-01171-7


Received : 21 May 2021

Accepted : 02 November 2021

Published : 27 January 2022

DOI : https://doi.org/10.1186/s13012-021-01171-7


  • Implementation strategies
  • Implementability
  • Implementation science
  • Implementation research
  • Healthcare interventions

Implementation Science

ISSN: 1748-5908



What is the intervention?


Ensuring Consistency and Quality of the Intervention

Whether your intervention is a drug, a type of counselling, or anything else, it needs to be consistent throughout the trial, and this requires careful consideration during trial design.

For example, if your trial is to test whether text messages are successful as an intervention to remind patients to take medication, your intervention is the text message. Here, it will be very easy to ensure consistency for all participants in the research: they will either receive or not receive the text message, and you can ensure that the text is the same every single time.

However, if your intervention is a type of counselling, it will be much harder to ensure consistency across all subjects, so you would probably need to create a framework so that the main elements can be applied consistently. For example, you would want to ensure that all participants had the same number of sessions, that each session was the same length, and that all counsellors in your trial were working together to ensure consistency in their approach. You might also prepare specific options, such as specific applications of Cognitive Behavioural Therapy, to ensure that participants’ experiences were as similar as possible to one another.

If your intervention is a drug, there are other things to consider. Perhaps you would like to compare two common pain relief drugs, A and B. Even if they are commonly available, you would still need to ensure that your entire intervention supply was the same throughout the trial. The drug would also need to be correctly stored, accounted for, and managed (for example, perhaps the drug should not be exposed to temperatures above +20 degrees: how will you transport it and ensure that this temperature limit is maintained?). How will you ensure that the right amount of the intervention reaches your trial sites and is correctly stored there?

Interventional Trials

There are two types of intervention studies, namely randomised controlled trials and non-randomised or quasi-experimental trials.

An interventional trial, loosely defined, involves selecting subjects with a particular characteristic and splitting them into those receiving an intervention and those receiving no intervention (the control group). In a randomised trial, participants (volunteers) are assigned to exposures purely by chance. The comparison of the outcomes of the two groups at the end of the study period is an evaluation of the intervention.
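The end-of-study comparison between the two groups can be sketched with a standard two-proportion z-test. The following is a minimal stdlib-Python illustration, not a prescribed analysis; the function name and the example figures are invented for this sketch:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_test(events_a, n_a, events_b, n_b):
    """Compare the proportion of outcome events between the intervention
    and control arms using the two-proportion z-test (normal approximation)."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)      # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))       # two-sided p-value
    return p_a - p_b, z, p_value

# Hypothetical trial: 30/100 events in the intervention arm, 15/100 in control.
diff, z, p = two_proportion_test(30, 100, 15, 100)     # diff ≈ 0.15, z ≈ 2.54, p ≈ 0.011
```

Note that a p-value below 0.05 here shows only a statistically detectable difference; as the surrounding text stresses, statistical significance is not the same as clinical importance.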

Intervention studies are not limited to clinical trials; they are widely used in sociological, epidemiological and psychological research, as well as in public health research.

Aside from the ability to remove bias, another advantage of randomised trials is that, if conducted properly, they are likely to detect small to moderate effects of the intervention, something that is difficult to establish reliably from observational studies. They also eliminate confounding bias, as such studies tend to create groups that are comparable for all factors that influence the outcome (known, unknown or difficult to measure), so that the only difference between the two groups is the intervention.

They can also be used to establish the safety, cost-effectiveness and acceptability of an intervention. Randomised clinical trials have disadvantages, however. A trial may be unethical if the sample size is too small: time is wasted and patients are enrolled in a trial that can benefit neither them nor others. The results can also be statistically significant but clinically unimportant. Lastly, the results may not generalise to the broader community, since those who volunteer tend to differ from those who do not.

Double blind randomised controlled trials are considered the gold standard of clinical research because they are one of the best ways of removing bias in clinical trials. If both the participants and the researchers are blinded as to the exposure the participant is receiving, it is known as a “double-blinded” study. 

Characteristics of an Intervention Study

Target Population

The first step in any intervention study is to specify the target population, which is the population to which the findings of the trial should be extrapolated. This requires a specific definition of the subjects prior to selection, as set out in the inclusion and exclusion criteria. The exclusion criteria specify the types of patients who must be excluded for reasons that would confound the results: for example, they are very old or very young (which may affect how the drug works), they are pregnant and you are not yet sure whether the drug is safe in pregnancy, they are currently in another trial, they have another medical condition that might affect their involvement, or any other reason that would affect their participation. Inclusion criteria clarify who should be in the trial: for example, males and females between the ages of 18 and 50 who have X disease, and so on.

Those who are eventually found to be both eligible and willing to enrol in the trial compose the actual “study population” and are often a relatively selected subgroup of the experimental population. Participants in an intervention study are very likely to differ from non-participants in many ways. The fact that the subgroup of participants may not be representative of the entire experimental population will not affect the internal validity of the trial, but it may affect the ability to generalise the results to the target population. It is important to obtain baseline data and/or to ascertain outcomes for subjects who are eligible but unwilling to participate. Such information is extremely valuable for assessing the presence and extent of differences between participants and non-participants in a trial, and will help in judging whether the results among trial participants can be generalised to the target population.

Sample Size

Sample size, simply put, is the number of participants in a study. It is a basic statistical principle that the sample size be defined before starting a clinical study, so as to avoid bias in the interpretation of the results. If there are too few subjects in a study, the sample will not be representative of the target population and the results cannot be generalised; furthermore, the study may be unable to detect a difference between the test groups, making it unethical. On the other hand, if more subjects than required are enrolled, more individuals are put at risk by the intervention, which also makes the study unethical, as well as wasting precious resources. A key principle of sampling is that every individual in the chosen population should have an equal chance of being included in the sample, and the choice of one participant should not affect the chance of another; hence the need for random sample selection. The calculation of an adequate sample size is therefore crucial in any clinical study: it is the process by which we calculate the optimum number of participants required to arrive at an ethically and scientifically valid result. Factors to be considered when calculating the final sample size include the expected drop-out rate, an unequal allocation ratio, and the objective and design of the study. The sample size always has to be calculated before initiating a study and, as far as possible, should not be changed during the course of the study.

Power

It is important that an intervention study is able to detect the anticipated effect of the intervention with a high probability. To this end, the necessary sample size needs to be determined such that the power is high enough. In clinical trials, the minimal value nowadays taken to demonstrate adequate power is 0.80. This means that the researcher accepts that one in five times (that is, 20%) they will miss a real difference. This false negative rate, the proportion of positive instances erroneously reported as negative, is referred to in statistics by the letter β. The “power” of the study is then equal to (1 − β) and is the probability of detecting a difference when there actually is a difference. For pivotal or large studies, the power is sometimes set at 90% to reduce to 10% the possibility of a “false negative” result.
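The relationship between power, significance level, effect size and sample size can be made concrete with the standard normal-approximation formula for comparing two means, n = 2((z₁₋α/₂ + z₁₋β)/d)² per group. The sketch below uses only the Python standard library; the function names and the drop-out adjustment are illustrative, not a prescribed procedure:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sample comparison of means:
    n = 2 * ((z_{1-alpha/2} + z_{1-beta}) / d)^2, where d is the
    standardised effect size (normal approximation)."""
    z = NormalDist()                      # standard normal distribution
    z_alpha = z.inv_cdf(1 - alpha / 2)    # ≈ 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)             # ≈ 0.84 for power = 0.80
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

def inflate_for_dropout(n, dropout_rate):
    """Enrol extra participants so that roughly n remain after drop-out."""
    return ceil(n / (1 - dropout_rate))

# A medium standardised effect (d = 0.5) at 5% significance and 80% power:
n = sample_size_per_group(0.5)             # 63 per group
n_enrolled = inflate_for_dropout(n, 0.10)  # 70 enrolled per group, allowing 10% drop-out
```

Raising power from 80% to 90% with the same effect size increases the requirement from 63 to 85 per group, illustrating why higher power for pivotal studies comes at a real recruitment cost.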

Study Endpoints and Outcome Measures

To evaluate the effect of the intervention, a specific outcome needs to be chosen. In the context of clinical trials, this outcome is called the endpoint. It is advisable to choose one endpoint, the primary endpoint, so as to make the likelihood of measuring it accurately as high as possible. The study might also measure other outcomes; these are the secondary endpoints. Once the primary endpoint has been decided, deciding how the outcome that provides this endpoint is measured becomes the central focus of the study design and operation.

The choice of the primary endpoint is critical in the design of the study. Where the trial is intended to provide pivotal evidence for regulatory approval for marketing of drugs, biologics, or devices, the primary goal typically is to obtain definitive evidence regarding the benefit-to-risk profile of the experimental intervention relative to a placebo or an existing standard-of-care treatment. One of the most challenging and controversial issues in designing such trials is the choice of the primary efficacy endpoint or outcome measure used to assess benefit. Given that such trials should provide reliable evidence about benefit as well as risk, the primary efficacy endpoints should preferably be clinical efficacy measures of unequivocal, tangible benefit to patients. For example, for life-threatening diseases, one would like to determine the effect of the intervention on mortality or on a clinically significant measure of quality of life, such as relief of disease-related symptoms, improvement in the ability to carry out normal activities, or reduced hospitalisation time.

In many instances, it may be possible to propose alternative endpoints (that is, “surrogates” or surrogate markers) to reduce the duration and size of the trials. A common approach has been to identify a biological marker that is “correlated” with the clinical efficacy endpoint (meaning that patients with better results for the biological marker tend to have better results for the clinical efficacy endpoint) and then to document the treatment’s effect on this biomarker. In oncology, for example, one might attempt to show that the experimental treatment regimen induces tumour shrinkage, delays tumour growth in some patients, or improves levels of biomarkers such as carcinoembryonic antigen (CEA) in colorectal cancer or prostate-specific antigen (PSA) in prostate cancer.
Although these effects do not prove that the patient will derive symptom relief or prolongation of survival, such effects on the biomarker are of interest because it is well known that patients with worsening levels of these biological markers are at greater risk of disease-related symptoms or death. However, demonstrating treatment effects on these biological “surrogate” endpoints, while clearly establishing biological activity, may not provide reliable evidence about the effects of the intervention on clinical efficacy. In the illustration above using biomarkers for cancer treatment, if the biomarker does not lie in the pathway by which the disease process actually influences the occurrence of the clinical endpoint, then affecting the biomarker might not, in fact, affect the clinical endpoint. Also, there may be multiple pathways through which the disease process influences the risk of the clinical efficacy endpoints. If the proposed surrogate endpoint lies in only one of these pathways, and if the intervention does not actually affect all pathways, then the effect of treatment on the clinical efficacy endpoints could be over- or underestimated by its effect on the proposed surrogate.

In summary, a well-designed trial will have one primary endpoint and possibly several secondary endpoints. The power of the study is designed to answer the question measured by the outcome for the primary endpoint. Measurement of this outcome needs to be standardised, and its importance well understood by everyone on the study team. A well-designed and well-run trial is able to measure this primary outcome accurately and consistently between staff members, between points in time (the same way on the first visit as on the last visit 12 months later), and between different sites in multi-centre studies.
Randomisation

Randomisation offers a robust method of preventing selection bias, but it may be unnecessary and other designs preferable; however, the conditions under which non-randomised designs can yield reliable estimates are very limited. Non-randomised studies are most useful where the effects of the intervention are large or where the effects of selection, allocation and other biases are relatively small. They may also be used for studying rare adverse events, which a trial would have to be implausibly large to detect.

Where simple randomisation is likely to lead to unequal distributions in small studies, participants might instead be randomised in small blocks of, for example, four participants, in which an equal number of control and intervention allocations (in this case two of each) are randomly ordered within each block. This means that you will not end up with a significantly unequal allocation in the study overall.
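The blocking scheme described above can be sketched in a few lines of Python (the function name is illustrative, and a real trial would generate and conceal the allocation sequence independently of recruitment):

```python
import random

def block_randomise(n_participants, block_size=4, arms=("intervention", "control")):
    """Allocate participants in shuffled blocks so that each block contains
    an equal number of each arm, keeping the overall allocation near 1:1."""
    assert block_size % len(arms) == 0, "block size must be a multiple of the number of arms"
    allocations = []
    while len(allocations) < n_participants:
        block = list(arms) * (block_size // len(arms))  # e.g. 2 intervention + 2 control
        random.shuffle(block)                           # random order within the block
        allocations.extend(block)
    return allocations[:n_participants]

# 10 participants in blocks of 4: the arm counts can never differ by more than 2.
schedule = block_randomise(10)
```

With simple (unblocked) randomisation of 10 participants, a 7-versus-3 split is entirely possible; blocking bounds the imbalance by the block size.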

For more information on Randomisation, visit: http://www.bmj.com/content/316/7126/201

Ethical Considerations

There are clear ethical considerations regarding the sample size, as discussed above. However, whether a study is considered ethical or unethical is a subjective judgement based on cultural norms, which vary from society to society and over time. Ethical considerations are more important in intervention studies than in any other type of epidemiological study.

For instance, in trials involving an intervention it would be unethical to use a placebo as a comparator if there is already an established treatment of proven value. It would also be unethical to enrol more participants than are needed to answer the question set by the trial; conversely, it would be unethical to recruit so few participants that the trial could not answer the question.

To be ethical a trial also needs equipoise: it must be answering a real question and so be scientifically justified. Equipoise means there is as yet no evidence for the intervention in the specific circumstances, so nobody truly knows whether it has an effect. For example, you would not be in equipoise if you were assessing paracetamol against a placebo for pain relief; there is already evidence that paracetamol is an acceptable reliever of low-level pain, so this research would be unethical because some patients would be given a placebo when a perfectly viable alternative is known. In this case, it might be preferable to test a new compound analgesic against paracetamol in patients with low-level pain.

Therefore intervention trials are ethically justified only in a situation of uncertainty, when there is genuine doubt concerning the value of a new intervention in terms of its benefits and risks. The researcher must have some evidence that the intervention may be of benefit, for instance, from laboratory and animal studies, or from observational epidemiological studies. Otherwise, there would be no justification for conducting a trial.

Evaluating an Intervention Best practice is to develop interventions systematically, using the best available evidence and appropriate theory, then to test them using a carefully phased approach: starting with a series of pilot studies targeted at each of the key uncertainties in the design, moving on to an exploratory evaluation and then to a definitive one. The results should be disseminated as widely and persuasively as possible, with further research to assist and monitor the process of implementation. In practice, evaluation takes place in a wide range of settings that constrain researchers’ choice of interventions to evaluate and their choice of evaluation methods. Ideas for complex interventions emerge from various sources, including past practice, existing evidence, policy-makers or practitioners, new technology and commercial interests. The source may have a significant impact on how much leeway the investigator has to modify the intervention or to choose an ideal evaluation design.

In evaluating an intervention it is important not to rush into making a decision: strong evidence may be ignored or weak evidence rapidly taken up, depending on its political acceptability or fit with other ideas about what works. One should also be wary of ‘blanket’ statements about which designs suit which kinds of intervention (e.g. ‘randomised trials are inappropriate for community-based interventions, psychiatry, surgery, etc.’). A design may rarely be used in a particular field, but that does not mean it cannot be; the researcher should decide on the basis of the specific characteristics of the study, such as the expected effect size and the likelihood of selection and other biases.

A crucial aspect of evaluating an intervention is the choice of outcomes. The researcher will need to determine which outcomes are most important and which are secondary, as well as how to deal with multiple outcomes in the analysis.
A single primary outcome and a small number of secondary outcomes is the most straightforward approach from the point of view of statistical analysis, although it may not represent the best use of the data. A good theoretical understanding of the intervention, derived from careful development work, is key to choosing suitable outcome measures, and the researcher should remain alert to the possibility of unintended and possibly adverse consequences. Consideration should also be given to the sources of variation in outcomes; a subgroup analysis may be required. As far as possible it is important to bear in mind the decision-makers (national or local policy-makers, opinion leaders, practitioners, patients, the public, etc.) and whether the evidence is likely to be persuasive, especially if it conflicts with deeply entrenched values. An economic evaluation should be included if at all possible, as this will make the results far more useful for decision-makers. Ideally, economic considerations should be taken fully into account in the design of the evaluation, to ensure that the cost of the study is justified by the potential benefit of the evidence it will generate.

Types of Randomised Clinical Designs Simple or parallel trials use the most elementary form of randomisation, which can be achieved by merely tossing a coin. However, this should be discouraged in clinical studies as it cannot be reproduced or checked; the alternative is to use a table of random numbers or a computer-generated randomisation list. The disadvantage of simple randomisation is that it may result in markedly unequal numbers of subjects being allocated to each group. Simple randomisation may also lead to a skewed composition of factors that may affect the outcome of the trial: for instance, in a trial involving both sexes, there may be too many subjects of the same sex in one arm. This is particularly true in small studies.
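A reproducible, computer-generated randomisation list can be produced in a few lines; the Python sketch below (an illustration, not from the original text) also shows why simple randomisation can leave a small study unbalanced:

```python
import random

random.seed(2024)  # a fixed seed makes the list reproducible and checkable

# Simple randomisation: each participant is allocated by an independent
# "coin toss" with no constraint keeping the arms equal in size.
simple_list = [random.choice(["control", "intervention"]) for _ in range(12)]

# Nothing guarantees a 6/6 split; in a small study the arms can easily
# end up noticeably uneven.
print(simple_list.count("control"), simple_list.count("intervention"))
```

Storing the seed (or the generated list itself) is what makes a computer-generated list auditable, unlike a physical coin toss.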

Factorial trials can be used to improve efficiency in intervention trials by testing two or more hypotheses simultaneously. Some factorial studies are more complex, involving a third or fourth factor. The design is such that subjects are first randomised to intervention A or B to address one hypothesis and then, within each intervention, further randomised to intervention C or D to evaluate a second question. The advantage of this design is its ability to answer more than one question in a single trial. It also allows the researcher to assess interactions between interventions, which cannot be achieved by single-factor studies.
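The 2x2 allocation just described can be sketched as two independent randomisations per subject (an illustrative example; the labels A/B and C/D follow the text, everything else is assumed):

```python
import random

def factorial_allocate(participant_ids):
    """Hypothetical 2x2 factorial allocation: each participant is randomised
    between A and B for the first question and, independently, between
    C and D for the second, yielding four possible cells."""
    allocation = {}
    for pid in participant_ids:
        first = random.choice(["A", "B"])
        second = random.choice(["C", "D"])
        allocation[pid] = (first, second)
    return allocation

groups = factorial_allocate(range(1, 9))
# Each participant lands in one of four cells: (A,C), (A,D), (B,C), (B,D).
print(groups)
```

Comparing all A-cells against all B-cells answers the first question, all C-cells against all D-cells the second, and the four cells together allow an interaction to be examined.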

Crossover trials, as the name suggests, are trials in which each subject acts as their own control by receiving at least two interventions. A subject receives the test and the standard intervention (or placebo) during one period of the trial, and the order in which the interventions are received is then alternated. The crossover design is not limited to two interventions; researchers can design crossover studies involving three, for example two treatments and a control arm. The order in which each individual receives the interventions should be determined by random allocation, and there should be a washout period before the next intervention is administered, to avoid any “carry-over” effects. The design is therefore only suitable where the interventions have no long-term effect or where the study drug has a short half-life. Since each subject acts as their own control, the design eliminates inter-subject variability and therefore fewer subjects are required. Crossover studies are consequently used in early-phase studies such as pharmacokinetic studies.
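A crossover schedule with randomised order and washout periods can be sketched as follows (an illustration only; the intervention labels and helper name are assumptions, not part of the original text):

```python
import random

def crossover_sequence(n_subjects, interventions=("test", "standard")):
    """Assign each subject a randomised order of the interventions,
    with a washout period before each subsequent treatment period
    to limit carry-over effects."""
    sequences = []
    for _ in range(n_subjects):
        order = list(interventions)
        random.shuffle(order)          # e.g. AB or BA, decided at random
        schedule = [order[0]]
        for treatment in order[1:]:
            schedule += ["washout", treatment]
        sequences.append(schedule)
    return sequences

for s in crossover_sequence(4):
    print(s)   # e.g. ['test', 'washout', 'standard']
```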

Cluster trials This is where an intervention is allocated to groups of people, or clusters, rather than to individuals, against a control. Allocation is often by geographical area, community or health centre, and the design is mainly used to address public health concerns. An example would be testing the effect of an education programme, against a control, in reducing deaths among subjects who have suffered a heart attack.
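The key point, randomising whole clusters rather than individuals, can be sketched as follows (illustrative; the cluster names and even split are assumptions):

```python
import random

def cluster_randomise(clusters):
    """Allocate whole clusters (e.g. health centres) to trial arms;
    every individual within a cluster receives its cluster's allocation."""
    clusters = list(clusters)
    random.shuffle(clusters)
    half = len(clusters) // 2
    return {c: ("intervention" if i < half else "control")
            for i, c in enumerate(clusters)}

# Six health centres, three per arm; all patients at a centre share its arm.
arms = cluster_randomise(["centre_%d" % i for i in range(6)])
print(arms)
```

Because outcomes of people in the same cluster tend to be correlated, the analysis (and sample size) of a real cluster trial must account for this clustering rather than treating individuals as independent.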

Adaptive design is sometimes referred to as “flexible design”: a design that allows adaptations to the trial and/or its statistical procedures after initiation, without undermining the validity and integrity of the trial. Adaptive trial designs allow the study design to be modified as data accrue. The purpose is not only to identify the clinical benefits of the test treatment efficiently, but also to increase the probability of success of clinical development. Among the benefits of adaptive designs are that they reflect medical practice in the real world and are ethical with respect to both the efficacy and the safety of the test treatment under investigation, making them efficient in both the early and late phases of clinical development. The main drawback, however, is the concern that the p-value or confidence interval for the treatment effect obtained after the modification may not be reliable or correct. In addition, the use of adaptive design methods may lead to a totally different trial that is unable to address the scientific/medical questions the trial set out to answer, and there is a risk of introducing bias in subject selection or in the way the results are evaluated. In practice, commonly seen adaptations include, but are not limited to: a change in sample size or in allocation to treatments; the deletion, addition or change of treatment arms; a shift in the target patient population, such as changes in inclusion/exclusion criteria; a change in study endpoints; and a change in study objectives, such as the switch from a superiority to a non-inferiority trial. Before adopting an adaptive design, it is prudent to discuss it with the regulators, to establish the level of modification that will be acceptable to them and to understand the regulatory requirements for review and approval.
Adaptive trial design can be used in rare, life-threatening diseases with unmet medical needs, as it speeds up the clinical development process without compromising safety and efficacy. Commonly considered strategies include adaptive seamless phase I/II studies, in which several doses or schedules are run at the same time while those that prove ineffective or toxic are dropped. Similar approaches can be used for seamless phase II/III studies.

Equivalence trial is where a new treatment or intervention is tested to see whether it is equivalent to the current treatment. It is becoming difficult to demonstrate that a particular intervention is better than an existing control, particularly in therapeutic areas where the drug development process has improved greatly. The goal of an equivalence study is to show that the intervention is no worse than an existing treatment, or that it is less toxic, less invasive or has some other benefit. It is important, however, to ensure that the active control selected is an established standard treatment for the indication being studied, used at the dose and in the formulation proven to be effective. Studies conducted to demonstrate benefit of the control against placebo must be sufficiently recent that no important medical advances or other changes have occurred since; the populations in which the control was tested should be similar to those planned for the new trial, and the researcher must specify what is meant by equivalence at the start of the study.

Non-inferiority trial is where a new treatment or intervention is tested to see whether it is non-inferior to the current gold standard. The requirements are similar to those of an equivalence study: there should be similarity in the populations, concomitant therapy and dosage of the interventions. It is difficult to show statistically that two therapies are identical, as an infinite sample size would be required. Therefore, if the intervention falls sufficiently close to the standard, as defined by reasonable pre-specified boundaries, it is deemed no worse than the control.
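The non-inferiority decision rule can be illustrated numerically. In the hedged sketch below, non-inferiority is declared when the lower bound of the confidence interval for the difference in success rates lies above minus the margin; the proportions, sample sizes and 10% margin are invented for illustration, and a real trial would pre-specify the margin and analysis in its protocol:

```python
import math

def non_inferiority_test(p_new, p_std, n_new, n_std, margin, z=1.96):
    """Normal-approximation check for non-inferiority of a success rate.
    The new treatment is deemed non-inferior if the lower bound of the
    95% CI for (p_new - p_std) lies above -margin."""
    diff = p_new - p_std
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)
    lower = diff - z * se
    return lower > -margin, lower

# 78% vs 80% success in 400 patients per arm, margin of 10 percentage points:
ok, lower = non_inferiority_test(0.78, 0.80, 400, 400, margin=0.10)
print(ok, round(lower, 4))   # lower bound ≈ -0.0764, above -0.10, so non-inferior
```

Note how the conclusion depends entirely on the chosen margin: with a stricter 5-point margin the same data would fail to demonstrate non-inferiority.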

References

Emmanuel G, Geert V. “Clinical Trials and Intervention Studies”. http://www.wiley.com/legacy/wileychi/eosbs/pdfs/bsa099.pdf

“Intervention Trials”. http://www.iarc.fr/en/publications/pdfs-online/epi/cancerepi/CancerEpi-7.pdf

“Intervention Studies”. http://www.drcath.net/toolkit/intervention-studies

Medical Research Council. “Developing and Evaluating Complex Interventions: New Guidance”. http://www.sphsu.mrc.ac.uk/Complex_interventions_guidance.pdf

Chow S, Chang M. “Adaptive design methods in clinical trials – a review”. Orphanet Journal of Rare Diseases 2008;3:11. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2422839/pdf/1750-1172-3-11.pdf

Kadam P, Bhalerao S. “Sample size calculation”. International Journal of Ayurveda Research 2010;1(1):55-57. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2876926/

Fleming TR. “Surrogate endpoints and FDA’s accelerated approval process”. Health Affairs 2005;24(1):67-78. http://content.healthaffairs.org/content/24/1/67.full

Friedman LM, Furberg CD, DeMets DL. “Study Population”. In: Fundamentals of Clinical Trials, 4th edition, 2010, chapter 4, 55-65.


What is population health intervention research?

  • Population Health Intervention Research Centre, University of Calgary, Calgary, AB
  • PMID: 19263977; PMCID: PMC6973897; DOI: 10.1007/BF03405503

Abstract

Population-level health interventions are policies or programs that shift the distribution of health risk by addressing the underlying social, economic and environmental conditions. These interventions might be programs or policies designed and developed in the health sector, but they are more likely to be in sectors elsewhere, such as education, housing or employment. Population health intervention research attempts to capture the value and differential effect of these interventions, the processes by which they bring about change and the contexts within which they work best. In health research, unhelpful distinctions maintained in the past between research and evaluation have retarded the development of knowledge and led to patchy evidence about policies and programs. Myths about what can and cannot be achieved within community-level intervention research have similarly held the field back. The pathway forward integrates systematic inquiry approaches from a variety of disciplines.



Frequently Asked Questions: NIH Clinical Trial Definition

What is the difference between clinical research and a clinical trial?

Clinical trials are clinical research studies.

Clinical research includes all research involving human participants. It does not include secondary studies using existing biological specimens or data collected without identifiers or data that are publicly available.

Clinical trials are clinical research studies involving human participants assigned to an intervention in which the study is designed to evaluate the effect(s) of the intervention on the participant and the effect being evaluated is a health-related biomedical or behavioral outcome.

How can researchers determine whether a proposed study is a clinical trial?

The following questions should be used to determine whether a study meets the NIH clinical trial definition:

  • Does the study involve human participants?
  • Are the participants prospectively assigned to an intervention?
  • Is the study designed to evaluate the effect of the intervention on the participants?
  • Is the effect being evaluated a health-related biomedical or behavioral outcome?

If the answers to all four questions are “yes”, the study is a clinical trial. If the answer to any question is “no”, the study is not a clinical trial.
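The four-question rule above amounts to a simple conjunction, which a minimal (illustrative, unofficial) sketch can make explicit:

```python
def is_nih_clinical_trial(human_participants, prospectively_assigned,
                          evaluates_intervention_effect, health_related_outcome):
    """A study meets the NIH clinical trial definition only if the answer
    to every one of the four questions is 'yes'."""
    return all([human_participants, prospectively_assigned,
                evaluates_intervention_effect, health_related_outcome])

# An observational study: human participants and a health-related outcome,
# but no prospective assignment to an intervention -> not a clinical trial.
print(is_nih_clinical_trial(True, False, False, True))   # → False
```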

Does the primary outcome of a study need to be a health-related outcome in order for a study to be considered a clinical trial?

If any outcome is health-related and the answers to the four questions are all “yes”, then the study meets the clinical trial definition. Note, though, that all NIH-funded research investigating biomedical or behavioral outcomes is considered to be health-related. Hence, if the outcome is biomedical or behavioral, the study may be a clinical trial (if the answers to the other three questions are “yes”). Many clinical trials are “mechanistic” or “exploratory”, falling outside the realm of efficacy or effectiveness trials.

What is the difference between the clinical trial definition in the revised Common Rule and the NIH clinical trial definition?

NIH considers the two definitions to have the same meaning.

  • Revised Common Rule § .102(b) : “Clinical trial means a research study in which one or more human subjects are prospectively assigned to one or more interventions (which may include placebo or other control) to evaluate the effects of the interventions on biomedical or behavioral health-related outcomes.”
  • NIH clinical trial definition : “A research study in which one or more human subjects are prospectively assigned to one or more interventions (which may include placebo or other control) to evaluate the effects of those interventions on health-related biomedical or behavioral outcomes.” (October 23, 2014)

Does risk to human participants factor into whether a study is considered to be a clinical trial?

Risk is not part of the NIH clinical trial definition. NIH considers the study to be a clinical trial as long as all elements of the NIH clinical trial definition are met.

What is the sub-definition of “intervention”?

An intervention is defined as a manipulation of the subject or subject’s environment for the purpose of modifying one or more health-related biomedical or behavioral processes and/or endpoints. Examples include: drugs/small molecules/compounds; biologics; devices; procedures (e.g., surgical techniques); delivery systems (e.g., telemedicine, face-to-face interviews); strategies to change health-related behavior (e.g., diet, cognitive therapy, exercise, development of new habits); treatment strategies; prevention strategies; and diagnostic strategies.

Are measurements the same as interventions?

No; measurements are used to evaluate outcomes.

Does the NIH clinical trial definition apply to foreign awards?

Yes; the NIH clinical trial definition applies to all NIH-funded studies.

How will NIH educate researchers?

Information is available at https://grants.nih.gov/policy/clinical-trials.htm .

Additionally, NIH staff are prepared to help educate researchers on whether their studies meet the NIH clinical definition.

Specific Cases

If a proposed clinical study includes a plan for addressing incidental findings, is the study considered to be a clinical trial?

No; having a plan for addressing incidental findings does not determine whether a study is considered to be a clinical trial. To determine whether your study meets the NIH clinical trial definition, please refer to the four questions above that outline the criteria.

Are studies that propose to evaluate a clinical intervention or to develop a diagnostic tool considered to be clinical trials?

It depends; studies that involve prospective assignment of human participants to an intervention, which may be a clinical intervention or development of a diagnostic tool, and that are designed to evaluate an effect of the intervention on the participant, where the effect is a biomedical or behavioral health outcome, are clinical trials. ( See examples in these Case Studies ). Studies designed only to validate the sensitivity or specificity of a tool are not clinical trials ( See examples in these Case Studies ).

Are studies that elicit the opinions or preferences from human participants considered to be clinical trials?

No; opinions and preferences are not considered to be health-related outcomes, so studies eliciting them do not meet the NIH clinical trial definition.

Are observational studies, which do not include an intervention, considered to be clinical trials?

No; in order to meet the NIH clinical trial definition there must be an intervention.

Are studies that involve only healthy participants considered to be clinical trials?

Yes; studies involving healthy participants are considered clinical trials if all elements of the NIH clinical trial definition are met.

Are studies that are not designed to impact diagnoses or treatment of patients considered to be clinical trials?

It depends; studies that meet all elements of the NIH clinical trial definition are considered to be clinical trials. ( See examples in these Case Studies )

Are studies designed to investigate whether a technique can be used to measure a response in research participants considered to be clinical trials?

Are studies designed to compare two approved diagnostic or therapeutic devices considered to be clinical trials?

No; a study must be designed to evaluate the effect of the intervention on the human participant to meet the NIH clinical trial definition.

Must a health-related outcome be permanent or lasting in order for a study to be a clinical trial?

No; a transient health-related outcome is sufficient for a study to be considered a clinical trial, as long as all other elements of the NIH clinical trial definition are met.

Are studies that coordinate with health-care providers where the outcome is measured in their patients considered to be clinical trials?

Yes; in these studies, both the health-care providers and patients are human participants, and the health care providers become part of the intervention. The study is considered to be a clinical trial as long as all other elements of the NIH clinical trial definition are met. ( See examples in these Case Studies )

Are studies with just a few research participants considered to be clinical trials?

Yes; the NIH clinical trial definition specifies that there must be one or more human participants involved in the study. The study is considered to be a clinical trial if all elements of the NIH clinical trial definition are met.

Are studies ancillary to clinical trials considered to be clinical trials as well?

Yes; studies ancillary to clinical trials are themselves considered to be clinical trials if all elements of the NIH clinical trial definition are met.

Are studies that use correlational designs considered to be clinical trials?

No; studies using correlational designs to prospectively associate biomedical parameters with other health-related measures, but do not involve an intervention, do not meet the NIH clinical trial definition.

Are studies designed to understand a disease mechanism considered to be clinical trials?

Yes; studies designed to understand a disease mechanism are clinical trials if they evaluate the effect of an intervention on a research participant and meet all other elements of the NIH clinical trial definition.

Are studies that compare two different methods of diagnosing a disease in patients to determine the reliability of a new method, but have no intention of using the results to inform the clinical care of the patients considered to be clinical trials?

No; studies that involve a comparison of methods and that do not evaluate the effect of the interventions on the participant do not meet the NIH clinical trial definition.

Are studies that evaluate the effect of an intervention on research participants, but do not have a comparison group (e.g., placebo, control) considered to be clinical trials?

Studies need not include a comparison group to meet the NIH clinical trial definition. As long as all of the elements of the NIH clinical trial definition are met, the study would be considered to be a clinical trial.
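The answers above all reduce to the four elements of the NIH clinical trial definition: the study involves human participants, participants are prospectively assigned to an intervention, the study is designed to evaluate the effect of the intervention on the participants, and that effect is a health-related biomedical or behavioral outcome. As a minimal sketch (an illustration only, not an official NIH tool; the function and parameter names are invented for clarity):

```python
# Illustrative sketch: the FAQ decision logic as one predicate.
# A study is an NIH clinical trial only if ALL four elements are met.

def is_nih_clinical_trial(
    involves_human_participants: bool,
    prospectively_assigned_to_intervention: bool,
    designed_to_evaluate_effect_on_participants: bool,
    effect_is_health_related_outcome: bool,
) -> bool:
    """Return True when all four elements of the NIH definition are met.

    Sample size, presence of a comparison group, and whether the outcome
    is transient or lasting are all irrelevant (see the FAQ answers above).
    """
    return all([
        involves_human_participants,
        prospectively_assigned_to_intervention,
        designed_to_evaluate_effect_on_participants,
        effect_is_health_related_outcome,
    ])

# An observational study with no intervention fails the second element:
print(is_nih_clinical_trial(True, False, True, True))  # False
```

Note how this mirrors the FAQ: a single "no" on any element (e.g. no intervention, or no health-related outcome) means the study is not a clinical trial, regardless of the other answers.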

This page was last updated on Wednesday, December 13, 2023



New research finds scalable mindfulness interventions delivered via telehealth improve pain and well-being for veterans with chronic pain

MINNEAPOLIS/ST. PAUL (08/19/2024) — Mindfulness-based interventions delivered via telehealth in a scalable format can improve pain and overall well-being among veterans with chronic pain, according to new research published today in JAMA Internal Medicine.

In a randomized clinical trial, researchers aimed to test the effectiveness of two eight-week telehealth mindfulness-based interventions (MBIs) designed to be scalable and widely implemented in healthcare systems. MBIs help people pay attention non-judgmentally in the present moment and often involve practices like meditation, breathing exercises or gentle movement.

“Although mindfulness interventions are evidence-based treatment for chronic pain and conditions that often accompany pain, like anxiety and depression, many MBIs are difficult to implement at scale in healthcare systems. They require trained mindfulness instructors and dedicated space, and pose barriers to patients due to the time commitment involved,” said Diana Burgess, PhD, a professor at the University of Minnesota Medical School and an investigator at the Minneapolis Veterans Affairs (VA) Healthcare System. “We wanted to develop MBIs that were relatively low resource, scalable and more accessible for patients.”

Between November 2020 and May 2022, 811 veterans with moderate to severe chronic pain participated in the Learning to Apply Mindfulness to Pain (LAMP) study at three VA facilities. Outcomes were assessed at the outset, 10 weeks, six months and one year. The group MBI was conducted via video conference with pre-recorded mindfulness education and skill training videos, accompanied by discussions led by a trained facilitator who was not an expert in mindfulness. The self-paced MBI was asynchronous, allowing participants to engage with it at their own pace, and was supplemented with three individual facilitator calls.

Key findings from the study include: 

  • Pain-related function improved significantly for patients in the group and self-paced MBIs. 
  • There were significant improvements in pain intensity, physical functioning, fatigue, sleep disturbance, social functioning, depression and PTSD among patients in the group and self-paced MBIs over 12 months, compared to usual care.
  • The group and self-paced MBIs did not significantly differ from each other.

The results of this study suggest that low-resource, telehealth-based MBIs could help accelerate and improve the implementation of non-medication pain treatment in VA healthcare and beyond.

Dr. Burgess and the research team are leading a new project — called Rural Veterans Applying Mind Body Skills for Pain (RAMP) — which will test the effectiveness of a scalable, mind-body telehealth intervention for chronic pain, designed for veterans living in rural areas. RAMP builds on LAMP through its use of mindfulness practices while also incorporating pain education, physical and rehabilitative exercise, and cognitive and behavioral strategies.

Funding was provided by the Department of Defense through the Pain Management Collaboratory - Pragmatic Clinical Trials Demonstration Projects [W81XWH-18-2-0003]. The research was also supported by the National Center for Complementary and Integrative Health [U24AT009769] and the Office of Behavioral and Social Sciences Research. 

About the University of Minnesota Medical School The University of Minnesota Medical School is at the forefront of learning and discovery, transforming medical care and educating the next generation of physicians. Our graduates and faculty produce high-impact biomedical research and advance the practice of medicine.  We acknowledge that the U of M Medical School is located on traditional, ancestral and contemporary lands of the Dakota and the Ojibwe, and scores of other Indigenous people, and we affirm our commitment to tribal communities and their sovereignty as we seek to improve and strengthen our relations with tribal nations. For more information about the U of M Medical School, please visit  med.umn.edu . 



  • http://orcid.org/0000-0003-4949-6847 Megan Rose Coverdale and
  • http://orcid.org/0000-0003-1289-3726 Fliss Murtagh
  • Wolfson Palliative Care Research Centre, Hull York Medical School , University of Hull , Kingston upon Hull , UK
  • Correspondence to Megan Rose Coverdale, Wolfson Palliative Care Research Centre, Hull York Medical School, University of Hull, Kingston upon Hull, HU6 7RU, UK; hymc24{at}hyms.ac.uk

Background Homeless adults experience a significant symptom burden when living with a life-limiting illness and nearing the end of life. This increases the inequalities that homeless adults face while coping with a loss of rootedness in the world. There is a lack of palliative and end of life care provision specifically adapted to meet their needs, exacerbating their illness and worsening the quality of their remaining life.

Aim To identify interventions and models of care used to address the palliative and end of life care needs of homeless adults, and to determine their effectiveness.

Methods Standard systematic reviewing methods were followed, searching from 1 January 2000 the databases: Ovid MEDLINE, EMBASE, SCOPUS, Web of Science, CINAHL and PsycInfo. Results were reported following Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines and described using a narrative synthesis. Study quality was assessed using Hawker’s Quality Assessment Tool.

Results Nine studies primarily focused on: education and palliative training for support staff; advance care planning; a social model for hospice care; and the creation of new roles to provide extra support to homeless adults through health navigators, homeless champions or palliative outreach teams. The voices of those experiencing homelessness were rarely included.

Conclusion We identified key components of care to optimise the support for homeless adults needing palliative and end of life care: advocacy; multidisciplinary working; professional education; and care in the community. Future research must include the perspectives of those who are homeless.

  • Palliative Care
  • Terminal care
  • Supportive care
  • Hospice care
  • Quality of life

Data availability statement

All data relevant to the study are included in the article or uploaded as supplementary information. Not applicable.

This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See:  https://creativecommons.org/licenses/by/4.0/ .

https://doi.org/10.1136/spcare-2024-004883


WHAT IS ALREADY KNOWN ON THIS TOPIC

Use of palliative care services by the homeless population is limited; there is a considerable lack of end of life care provision specifically adapted to meet their biopsychosocial needs.

WHAT THIS STUDY ADDS

This systematic review provides a detailed understanding of the nature of interventions and models of care currently used to deliver palliative and end of life care to homeless adults, and what interventions are most effective to enhance the support for these marginalised individuals.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

Key components have been identified which are most important for optimising the delivery of palliative and end of life care to homeless adults: advocacy; multidisciplinary working; professional education; and care integrated into the community settings where the homeless population is based. Future research must include the perspectives of those who are homeless, build on the components which we know work, and address the sustainability of these interventions and models of care.

The numbers and needs of people experiencing homelessness while living with a life-limiting illness are increasing, yet these marginalised individuals are restricted from mainstream health and social care, despite often having the greatest needs; this is a pertinent issue that must be addressed within palliative and end of life care. 1 The number of people experiencing homelessness in the UK is rapidly rising; Shelter 2 reports that 1 in 182 people are homeless, with over 3000 rough sleeping every night. Similarly, rates of homelessness within European countries including Germany, Spain and Ireland are also rising: recent data from FEANTSA 3 (the European Federation of National Organisations Working with the Homeless) report rising levels of homelessness within these countries, reaching 262 645, 28 552 and 11 632, respectively. However, these figures are likely an underestimation due to ‘hidden homelessness’, in which an individual is homeless but missing from the data.

Homelessness, according to Somerville, 4

is not just a matter of lack of shelter or lack of abode, a lack of a roof over one’s head. It involves deprivation across a number of different dimensions—physiological (lack of bodily comfort or warmth), emotional (lack of love or joy), territorial (lack of privacy), ontological (lack of rootedness in the world, and anomie [a theory in which purpose and goals cannot be achieved due to lack of means 5 ]) and spiritual (lack of hope, lack of purpose).

Elements of the ETHOS light criteria 6 have been adopted within this review to describe the different types of homelessness ( table 1 ). Many homeless adults suffer with trimorbidity, explained by Vickery et al 7 as ‘a subset of multimorbidity representing overlap of physical health, mental health, and substance use conditions’; this can make caring for these individuals complex and challenging. Health inequalities are evident for this ostracised community, and life expectancy is exceedingly low: 43 years for women and 45 years for men, compared with the UK national average of 83 years and 79 years, respectively. 8 Most notably, deaths within the homeless community are continuing to rise annually. 9

Table 1: The spectrum of homelessness

Somerville highlights the crucial need for a holistic approach to care for homeless adults, yet numerous hurdles exist in the delivery of good palliative and end of life care for this population. Homeless adults often experience a large symptom burden near the end of life, particularly pain, worry, sadness and exhaustion. 10 Many also have growing mistrust in healthcare professionals and underuse healthcare services due to fear of stigmatisation, discrimination and perceived healthcare prejudice. There is a considerable lack of palliative and end of life care provision specifically adapted to meet the biopsychosocial needs of the homeless population; this exacerbates the illness burden for these destitute individuals and worsens the quality of their remaining lifetime.

A systematic review was undertaken to identify the strengths and gaps in the delivery of palliative and end of life care to homeless adults, and to make recommendations to bridge these gaps, with the aim of improving health and social care practice. Our review question asked: what interventions and models of care are used to address the palliative and end of life care needs of adults who are experiencing homelessness, and are they effective? The objectives, for adults experiencing homelessness and needing palliative and end of life care, were to: (1) describe the interventions and models of care and (2) consider the strengths and gaps of the interventions and models of care, and discuss their effectiveness.

Preliminary searching was first undertaken using the Database of Systematic Reviews, the Database of Abstracts of Reviews of Effects and PROSPERO. We identified no similar systematic reviews, however, a relevant scoping review by James et al 11 reports that the provision of palliative care to homeless adults is complex with many barriers hindering the delivery of quality care.

We conducted a systematic review using a standard methodological framework, adapted from the Centre for Reviews and Dissemination 12 (CRD) on how to undertake systematic reviews in healthcare. We followed the PEOS (Population, Exposure, Outcome and Study Design) framework to build a focused research question ( table 2 ). Case series and case reports, commentary, review and opinion pieces were excluded due to the high potential for bias within these types of study designs. Research which did not report on interventions or models of care, or include homeless adults for at least 50% of their study population was also excluded. Results were reported following Preferred Reporting Items for Systematic Reviews and Meta-Analyses reporting guidelines. 13

Table 2: PEOS framework

Our search strategy (table 3) was independently reviewed and refined by a librarian with expertise in information skills. Systematic searches were conducted on electronic databases using MeSH terms employing the Boolean logical operators “AND” and “OR”, in addition to free text searches (identified as key words). Truncation of words using an asterisk (*) was undertaken to enable the inclusion of multiple endings of the specific term. We limited the search criteria to “English language”. Online databases were searched on 22 November 2023 for articles published from 1 January 2000 using Ovid MEDLINE, EMBASE, SCOPUS, Web of Science, CINAHL and PsycInfo.

Table 3: Search strategy

All identified citations were uploaded into the bibliographic software, EndNote21, and duplicate studies were removed. During initial screening, one author (MRC) oversaw the screening of study titles and abstracts; discussion with the second author (FM) was undertaken on 5% of studies during this stage to determine their relevance to the research question. One reviewer undertook full-text screening of studies (MRC); 30% of studies at this stage were discussed with the second author (FM) to determine suitability for inclusion. Data were extracted into tabular format on the design, context, quality and effectiveness of interventions and models of care.

Quality assessment of individual studies was formally undertaken using Hawker’s Quality Assessment Tool for Qualitative Studies. 14 Nine domains within each individual study were assessed and categorised as being of good, fair, poor or very poor quality. The minimum score that could be achieved for each paper using this tool was 9, the maximum 36. High, medium and low quality studies were determined based on their cumulative score across the nine domains, ranging between 30–36, 24–29 and 9–23, respectively.
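The banding rule described above (nine domains, each scored from 1 for very poor to 4 for good, totals of 9–23, 24–29 and 30–36 mapping to low, medium and high quality) can be expressed as a short sketch. This is a hypothetical helper for illustration; the function name and validation are ours, not part of Hawker’s tool:

```python
# Illustrative sketch of the Hawker quality-banding rule used in this review:
# nine domains scored 1 (very poor) to 4 (good) give a total of 9-36,
# banded into low (9-23), medium (24-29) and high (30-36) quality.

def hawker_quality_band(domain_scores: list[int]) -> str:
    """Map nine per-domain scores to a low/medium/high quality band."""
    if len(domain_scores) != 9 or not all(1 <= s <= 4 for s in domain_scores):
        raise ValueError("expected nine domain scores, each between 1 and 4")
    total = sum(domain_scores)
    if total >= 30:
        return "high"
    if total >= 24:
        return "medium"
    return "low"

print(hawker_quality_band([4] * 9))                      # high (total 36)
print(hawker_quality_band([3, 3, 3, 2, 3, 2, 3, 3, 2]))  # medium (total 24)
```

The band thresholds are inclusive at both ends, so a paper scoring exactly 24 is medium quality and one scoring exactly 30 is high quality.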

A meta-analysis was not completed due to the heterogeneity of studies; instead, we undertook a narrative synthesis due to its appropriateness for organising and summarising the main findings from a varied body of research. We used formal guidance by Popay et al 15 for conducting a narrative synthesis and the following steps were addressed: ‘developing a theory of how the intervention works, why and for whom; developing a preliminary synthesis of findings of included studies; exploring relationships in the data; assessing the robustness of the synthesis’. We selected thematic analysis as the best method to describe the interventions and models of care, and consider their strengths, gaps and effectiveness.

Study selection

The search was conducted in November 2023. A total of 5487 studies were initially identified. After screening and full-text review, nine studies were included ( figure 1 ).


Preferred Reporting Items for Systematic Reviews and Meta-Analyses flow diagram. 13

Study characteristics

Three included studies were qualitative, 16–18 one was mixed methods, 19 three were service evaluations/improvements, 20–22 one was a retrospective cohort study 23 and one was a randomised control trial. 24 The different interventions and models of care, and whether they worked, were mainly reported qualitatively; however, four studies included quantitative reporting. 19 20 23 24 Qualitative data were collected using questionnaires, 19 24 semi-structured interviews, 18 monthly reporting via email and telephone, 18 focus group interviews 19 and photovoice exploration, in which participants photographed components of a model of care found to be most meaningful to them. 17 Quantitative data were collected from tabulation of key performance indicators achieved, 20 review of patient medical notes, 23 evaluation of baseline and outcome questionnaires 19 and uptake of an intervention. 16 24 Two studies did not specify how their results were collected. 21 22 The locations of the included studies were the UK, 18 19 21 Canada 20 22 23 and the USA. 16 17 24 All interventions and models of care were instigated between 2001 and 2021.

Sample sizes were only reported in some studies. Where documented, the number of participants ranged from 3 to >150. Five studies reported primarily from the perspectives of professionals, 18–22 with roles including non-clinical hostel staff, palliative care doctors, palliative care nurses, social workers and community nurses. Four studies focused primarily on the care of homeless adults. 16 17 23 24 Financial reimbursement was provided in two studies to homeless adults for their participation. 17 24 Only one study focused on the experiences of a model of care directly from the perspective of the homeless adults involved; this was through interpretation of photos taken by patients during their stay in a social hospice. 17

Demographics of homeless adults were not always reported. However, in studies that did document homeless adult characteristics, most were of white ethnicity; two study populations predominantly consisted of black homeless adults. 16 24 There was a significant male preponderance in all the homeless populations studied; one study included transgender adults. 23 Homeless adults involved in this review were at different stages of their disease trajectories: one study focused on the transfer of terminally ill homeless individuals into a hospice to die in their preferred place of death 23 ; another focused on the provision of care to adults with a predicted life expectancy of under 6 months, 17 and a further study emphasised engagement with homeless adults living with a high degree of frailty. 22

Most homeless individuals were hostel-based. Only two papers included homeless adults who were rough sleeping, couch surfing or vulnerably housed in accommodation other than a shelter. 17 23 Most homeless adults were aged under 65 years because age limits in sheltered accommodation restrict acceptance beyond this age. 18 Many were living with trimorbidity (overlapping poor physical health, poor mental health and substance use conditions); substance use is a common barrier to being accepted into hostels and hospices. 16 18 23 Despite this, three studies were inclusive of patients with addiction: two implemented a harm reduction strategy to minimise the adverse effects of substance misuse 22 23 ; one explicitly enrolled homeless adults living with a substance abuse diagnosis. 16

Quality of included studies

The quality assessment of all included studies can be found within online supplemental table 1 . The highest scoring study in this review scored 33/36, 18 while the lowest scored 17/36. 21 The highest quality score was awarded to an explorative qualitative study. It described the nature of the intervention and its success in detail, using interviews and surveys completed preintervention and postintervention to gain insight into its impact. Its methodology was clearly described, with minimal areas for bias identified. The lowest quality study was a service evaluation of a nurse led homeless project. Although the model worked well to deliver its outcomes, the study had poor internal validity, with limited description of its methodology and sampling and little reporting of limitations.

Supplemental material

Nature of the interventions/models of care

The nature of the interventions and models of care identified are explored in table 4 , including specification of the types of homelessness included. The included studies primarily focused on education and palliative training for support staff, 18 19 21 22 advance care planning, 16 19 23 24 the creation of new roles to provide extra support to homeless individuals via the introduction of health navigators, 20 homeless champions 18 or palliative outreach teams. 22 One study focused on the implementation of a social model of hospice care based within the community. 17 Three studies were undertaken within hostels, 16 18 19 two within hospices 17 23 and four were embedded directly into the community as in-reach models of care. 20–22 24

The nature of the interventions and models of care

Effectiveness of the interventions/models of care

Outcomes and effectiveness of the interventions and models of care are reported in table 5 (see online supplemental table 2 for a detailed review of the evidence of effectiveness). Multiagency communication and collaboration were common findings which enhanced the quality of palliative and end of life care provided to homeless adults and reduced care fragmentation among the various professionals involved. 18–20 Advocacy was another common attribute of many interventions, enhancing person centred care for vulnerable adults experiencing homelessness. The introduction of a healthcare navigator, with expertise in social work, enabled the social determinants of health to be targeted for homeless adults in receipt of care. 20 Similarly, social worker participation in advance directive completion favourably enhanced uptake of the intervention. 24 Embedding specialist palliative care teams into hostels 18 helped hostel staff to develop an increased awareness of both the social and healthcare needs of their hostel residents. Through collaboration with social workers and palliative care nurses, hostel staff felt this intervention was invaluable and allowed for the provision of individually tailored, holistic care. 18

Outcomes and effectiveness of identified interventions and models of care

Educational programmes 19 21 enhanced the confidence and knowledge of hostel staff on the ethos of palliative care and how to use this within their practice, noting that the training was 'invaluable', 'extremely beneficial' 18 and 'empowering'. 21 However, the increased workload associated with educational programmes risked staff burnout. To minimise this, two interventions encouraged hostel staff to optimise their well-being through use of counselling services and psychological support. 20 22

Working within supportive environments enabled hostel staff to improve productivity and become more proactive, liaising with colleagues, and challenging external agencies, when needed, to act in the best interests of their residents. 18 Hostel staff used their new skills to commence discussions on death and dying with hostel residents; they felt empowered having broken down the taboo associated with this subject. 21 Hostel residents emphasised the beneficial impact of these timely conversations, and positively reported that they felt cared for which reduced fear and anxiety. 18 21 Early recognition of health deterioration by hostel staff allowed for a prompt transfer of homeless adults to hospices or hospitals, according to their wishes. This allowed homeless adults to die with dignity and in the place of their choosing with appropriate support. 21

To address the impact of grief and bereavement, spiritual support was offered for both homeless individuals and professionals involved in their care. 23 A chaplain was available to provide religious counsel in one study. 17 Grief circles, 22 death cafes, and vigils were also introduced into hostels. 18 These interventions were used to help ease the loss of fellow residents and improve psychological well-being within supportive and nurturing environments. Pets were encouraged in one hospice to provide invaluable companionship, unconditional love and comfort to their owners who were living with a terminal illness. 17

A key challenge faced when engaging homeless adults in end of life discussions was their concurrent addictions to drugs and alcohol; this was a barrier that professionals involved in their care often struggled to overcome. 19 However, interventions were also identified which actively overcame this struggle and acknowledged the vulnerability of homeless adults, including factors that potentiate their low self-esteem, such as racism, addiction and homophobia. 19 22 23 Harm reduction strategies were advantageous in caring for homeless adults with trimorbidity, and when coupled with the reduction of pain and other symptoms attributable to terminal conditions, homeless adults were able to gradually reduce their intake of illicit drugs. 23

Similarly, the use of trauma informed care for homeless adults 22 (an approach which increases professional understanding that many homeless adults have lived through an insurmountable level of trauma) helped to educate professionals involved in their care as to why they may maintain dependency on substances, despite the severity of their faltering health. Hostel staff were encouraged to build rapport with residents and break down existential barriers (including stigma around lifestyle choices), which helped residents to feel secure in their surroundings and improve adherence to medical treatment, 19 while preventing further harmful behaviours, such as continuing use of drugs and alcohol. 22 Previous research emphasises that understanding and addressing the complexities of the individual is essential within palliative and end of life care to ensure the delivery of personalised, holistic services. 11 24

Two interventions and models of care were cost saving. 16 23 However, others were time consuming to undertake, requiring both investment and dedication from professionals to integrate their newly learnt skills into practice. New interventions sometimes came at the expense and compromise of professionals fulfilling their usual routine tasks. 18 The role of a healthcare navigator 20 was recognised as causing a large workload for one person to manage; additionally, its lack of funding highlighted another potential barrier to sustaining the role and maximising its impact long-term.

We identified several pivotal interventions and models of care which were successful for optimising the delivery of palliative and end of life care to homeless adults, and improving their outcomes. These key components are: advocacy; multidisciplinary communication and collaboration; professional education; and community-based, rather than institution-based (hospital or inpatient hospice), care.

Prior evidence supports our findings. A systematic review by Ahmed et al 25 identified that a lack of palliative care knowledge and education among health and social care staff is a severe limitation to providing support to the homeless when living with a terminal illness. Our review shows that educational programmes are beneficial for improving palliative care delivery to homeless adults in the community. The HEARTH study conducted by Crane et al 26 evaluated the success of specialised primary care services to deliver healthcare to the homeless; it emphasised that homeless adults felt most trusting of healthcare providers working within specialist homeless services, tailored to meet their complex needs. Cook et al 27 identified that homeless adults often have significant comorbidities while living with concurrent addiction, meaning their palliative and end of life care needs often differ from the general population. Our review adds to the findings of both Cook et al and the HEARTH study and recognises that flexible, holistic, multidisciplinary palliative care is paramount in addressing the trimorbid elements (poor physical health, poor mental health and substance use conditions) influencing palliative care needs among the homeless community.

Somerville 4 emphasises that homelessness comprises physiological deprivation (lack of bodily comfort or warmth); a pivotal factor needing to be addressed. The research we identified demonstrated gaps in addressing this; for example, some homeless adults were refused access into homeless shelters due to age, leaving them destitute and in distress, despite living with a terminal illness. Two hostels had an upper age limit of 65 years old, 18 restricting the support accessible to them, yet palliative care provision is most often associated with older patients. 27.2% of adults over 65 identify as homeless, 28 yet limiting access to hostels where palliative needs can be addressed, due to chronological age, is a significant structural hurdle.

Somerville also states that homelessness involves ontological deprivation, relating to a lack of rootedness in the world. 4 This lack of rootedness was particularly evident for some groups; for example, 25% of transgender adults experience homelessness in Britain, 29 yet only one study included transgender participants within its study population. 23 Homeless services often fail to support transgender adults culturally; these individuals are less likely to be accepted into shelter-based accommodation, and due to fear of discrimination and lack of understanding, transgender adults have increasing mistrust in health and social care providers. 30 The Office for National Statistics 31 reinforces our findings, documenting that other homeless populations are also under-recognised, including women and ethnic minorities. Black adults are three times, and mixed race adults two times, more likely to be experiencing homelessness than white adults. 28 32 However, the ethnic groups most affected by social inequalities remain underrepresented in research. Hidden homelessness may explain why these subgroups are under-included within this review. Hospices and hostels must be inclusive within their policies; it is essential that they adopt safe, stable and welcoming environments to deliver palliative and end of life interventions and models of care to all demographics of homeless adults.

There are more gaps identified in our review. We found that healthcare needs were not always addressed; in one study, only 57.1% of terminally ill homeless patients admitted into a hospice received a palliative consult during their admission. 23 Optimising medical comorbidities, through liaison with medical staff and external agencies (mainly primary care providers), was a significant challenge faced by hostel staff to provide total palliative symptom control. 18

Furthermore, the ability to read and write was a crucial requirement of the homeless adults involved within two interventions, 16 24 yet one-third of homeless adults have no educational qualifications. 28 Both literacy and language barriers can make it difficult to engage with homeless adults directly within health and society. Application of structural interventions within health and social care policy, including access to advocates and translators, is essential to overcome these barriers.

We also recognised that the sustainability of some interventions was uncertain. Frequent staff turnover within hostels can affect the long-term impact of newly implemented educational programmes 19 ; this risks losing knowledgeable staff and the initial successes achieved in improving palliative care delivery to hostel residents. 19 We recommend that hostel and hospice staff participation in these educational programmes is made compulsory within their job specification. This will foster permanency of these interventions and models of care within practice; we found one instance of this which was very successful. 18

This systematic review has strengths and limitations. We have advanced understanding on the interventions and models of care used for homeless adults needing palliative and end of life care, while also considering how well they work in achieving this. The nine included studies have been assessed to be of intermediate methodological quality overall and credible in their findings. However, the synthesis of this review demonstrates that there is a real paucity of research specifically relating to the availability of interventions and models of care used in the delivery of palliative and end of life care to homeless adults. A lack of quantitative data meant that we were unable to numerically quantify the effectiveness of identified palliative and end of life interventions and models of care. We synthesised data, mainly qualitative, from the perspectives of professionals involved in the provision of the palliative and end of life care to homeless adults. Rarely was the viewpoint directly obtained from homeless adults regarding their individual experiences of these interventions and models of care; this is a pertinent barrier which must be addressed in future research.

This paper followed guidance from the CRD 12 for conducting a systematic review. Studies were limited to those published in the English language, and the search strategy did not include bibliographic searching or grey literature; this may have prevented identification of unpublished documentation on interventions and models of care used within current health and social care practice. Only one author undertook the quality assessment of individual studies; this may have introduced bias into our methodology. Many studies were conducted in the USA, yet homelessness is a global phenomenon; this limits the generalisability of our results to other cultural contexts. Most studies had small sample sizes, which highlights the difficulty in recruiting homeless adults for participation in research.

Implications

Recent statistics document the rapid rise in the number of people experiencing homelessness 2 : it is imperative that we identify and implement effective palliative and end of life interventions and models of care into practice for homeless adults immediately. This will reduce health inequalities and promote equitable and accessible care to all, regardless of housing status.

All studies in this review addressed at least one of the critical components of homelessness according to the philosophy of Somerville. 4 However, most failed to equally address and optimise them in their totality. The in-reach support model 18 was the most valuable approach out of all the studies assessed, encompassing a holistic and inclusive approach to support the palliative and end of life needs of homeless adults within the hostel environment. It placed great emphasis on the optimisation of the medical, psychological, social and spiritual aspects of care for homeless adults, including addiction. Its methodological rigour was of high quality and, given the success of the model, we recommend its adoption in wider health and social care practice to support the homeless population needing palliative and end of life care.

Gaps in the delivery of palliative and end of life care to homeless adults have been highlighted and we indicate where further direction is needed. We acknowledge that there is no specific definition of homelessness; however, perspectives from adults experiencing other types of homelessness, besides shelter-based living, must be of greater focus in future research. The impact of palliative and end of life interventions and models of care must also be sought from homeless women, older adults, LGBTQ+ and ethnic minorities who are underrepresented in the research.

There are key components that help optimise support for homeless adults needing palliative and end of life care: advocacy; multidisciplinary working; professional education; and care in the community. Most studies focused on the professionals involved in the care of homeless individuals; few studies included the voices of those experiencing homelessness. Future research must include the perspectives of those who are homeless, build on the components which we know work, and address sustainability of these interventions and models of care.

Ethics statements

Patient consent for publication

Not applicable.

Ethics approval

Acknowledgments

FM is a UK National Institute for Health and Care Research (NIHR) Senior Investigator. The views expressed in this article are those of the author(s) and not necessarily those of the UK NIHR, or the Department of Health and Social Care.


Supplementary materials

Supplementary data.

This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

  • Data supplement 1
  • Data supplement 2

Contributors Study conception and design: MRC and FM. Screening and data extraction: MRC and FM. Analysis and interpretation of results: MRC and FM. Draft manuscript preparation: MRC. All authors reviewed the results and approved the final version of the manuscript. FM is the guarantor.

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests None declared.

Provenance and peer review Not commissioned; externally peer reviewed.

Author note Transparency declaration: The lead author (the manuscript’s guarantor) affirms that the manuscript is an honest, accurate and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned have been explained.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.


Institute of Medicine (US) Committee on Health and Behavior: Research, Practice, and Policy. Health and Behavior: The Interplay of Biological, Behavioral, and Societal Influences. Washington (DC): National Academies Press (US); 2001.


7 Evaluating and Disseminating Intervention Research

Efforts to change health behaviors should be guided by clear criteria of efficacy and effectiveness of the interventions. However, this has proved surprisingly complex and is the source of considerable debate.

The principles of science-based interventions cannot be overemphasized. Medical practices and community-based programs are often based on professional consensus rather than evidence. The efficacy of interventions can only be determined by appropriately designed empirical studies. Randomized clinical trials provide the most convincing evidence, but may not be suitable for examining all of the factors and interactions addressed in this report.

Information about efficacious interventions needs to be disseminated to practitioners. Furthermore, feedback is needed from practitioners to determine the overall effectiveness of interventions in real-life settings. Information from physicians, community leaders, public health officials, and patients is essential for determining the overall effectiveness of interventions.

The preceding chapters review contemporary research on health and behavior from the broad perspectives of the biological, behavioral, and social sciences. A recurrent theme is that continued multidisciplinary and interdisciplinary efforts are needed. Enough research evidence has accumulated to warrant wider application of this information. To extend its use, however, existing knowledge must be evaluated and disseminated. This chapter addresses the complex relationship between research and application. The challenge of bridging research and practice is discussed with respect to clinical interventions, communities, public agencies, systems of health care delivery, and patients.

During the early 1980s, the National Heart, Lung, and Blood Institute (NHLBI) and the National Cancer Institute (NCI) suggested a sequence of research phases for the development of programs that were effective in modifying behavior ( Greenwald, 1984 ; Greenwald and Cullen, 1984 ; NHLBI, 1983 ): hypothesis generation (phase I), intervention methods development (phase II), controlled intervention trials (phase III), studies in defined populations (phase IV), and demonstration research (phase V). Those phases reflect the importance of methods development in providing a basis for large-scale trials and the need for studies of the dissemination and diffusion process as a means of identifying effective application strategies. A range of research and evaluation methods are required to address diverse needs for scientific rigor, appropriateness and benefit to the communities involved, relevance to research questions, and flexibility in cost and setting. Inclusion of the full range of phases from hypothesis generation to demonstration research should facilitate development of a more balanced perspective on the value of behavioral and psychosocial interventions.

EVALUATING INTERVENTIONS

Assessing Outcomes

Choice of outcome measures

The goals of health care are to increase life expectancy and improve health-related quality of life. Major clinical trials in medicine have evolved toward the documentation of those outcomes. As more trials documented effects on total mortality, some surprising results emerged. For example, studies commonly report that, compared with placebo, lipid-lowering agents reduce total cholesterol and low-density lipoprotein cholesterol, and might increase high-density lipoprotein cholesterol, thereby reducing the risk of death from coronary heart disease ( Frick et al., 1987 ; Lipid Research Clinics Program, 1984 ). Those trials usually were not associated with reductions in death from all causes ( Golomb, 1998 ; Muldoon et al., 1990 ). Similarly, He et al. (1999) demonstrated that intake of dietary sodium in overweight people was not related to the incidence of coronary heart disease but was associated with mortality from coronary heart disease. Another example can be found in the treatment of cardiac arrhythmia. Among adults who previously suffered a myocardial infarction, symptomatic cardiac arrhythmia is a risk factor for sudden death ( Bigger, 1984 ). However, a randomized drug trial in 1455 post-infarction patients demonstrated that those who were randomly assigned to take an anti-arrhythmia drug showed reduced arrhythmia, but were significantly more likely to die from arrhythmia and from all causes than those assigned to take a placebo. If investigators had measured only heart rhythm changes, they would have concluded that the drug was beneficial. Only when primary health outcomes were considered was it established that the drug was dangerous ( Cardiac Arrhythmia Suppression Trial (CAST) Investigators, 1989 ).

Many behavioral intervention trials document the capacity of interventions to modify risk factors ( NHLBI, 1998 ), but relatively few Level I studies measured outcomes of life expectancy and quality of life. As the examples above point out, assessing risk factors may not be adequate. Ramifications of interventions are not always apparent until they are fully evaluated. It is possible that a recommendation for a behavioral change could increase mortality through unforeseen consequences. For example, a recommendation of increased exercise might heighten the incidence of roadside auto fatalities. Although risk factor modification is expected to improve outcomes, assessment of increased longevity is essential. Measurement of mortality as an endpoint does necessitate long-duration trials that can incur greater costs.

Outcome Measurement

One approach to representing outcomes comprehensively is the quality-adjusted life year (QALY). QALY is a measure of life expectancy ( Gold et al., 1996 ; Kaplan and Anderson, 1996 ) that integrates mortality and morbidity in terms of equivalents of well-years of life. If a woman expected to live to age 75 dies of lung cancer at 50, the disease caused 25 lost life-years. If 100 women with life expectancies of 75 die at age 50, 2,500 (100×25 years) life-years would be lost. But death is not the only outcome of concern. Many adults suffer from diseases that leave them more or less disabled for long periods. Although still alive, their quality of life is diminished. QALYs account for the quality-of-life consequences of illnesses. For example, a disease that reduces quality by one-half reduces QALY by 0.5 during each year the patient suffers. If the disease affects 2 people, it will reduce QALY by 1 (2×0.5) each year. A pharmaceutical treatment that improves life by 0.2 QALYs for 5 people will result in the equivalent of 1 QALY if the benefit is maintained over a 1-year period. The basic assumption is that 2 years scored as 0.5 each add to the equivalent of 1 year of complete wellness. Similarly, 4 years scored as 0.25 each are equivalent to 1 year of complete wellness. A treatment that boosts a patient's health from 0.50 to 0.75 on a scale ranging from 0.0 (for death) to 1.0 (for the highest level of wellness) adds the equivalent of 0.25 QALY. If the treatment is applied to 4 patients, and the duration of its effect is 1 year, the effect of the treatment would be equivalent to 1 year of complete wellness. This approach has the advantage of considering benefits and side-effects of treatment programs in a common term. Although QALYs typically are used to assess effects on patients, they also can be used as a measure of effect on others, including caregivers who are placed at risk because their experience is stressful. 
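The QALY arithmetic in the examples above can be written out directly. This is a minimal sketch of the calculation as described; the function name `qaly_gain` is illustrative and not part of the cited methodology:

```python
def qaly_gain(utility_before, utility_after, years, n_patients=1):
    """QALYs gained when health utility (0.0 = death, 1.0 = full wellness)
    improves by a given amount, sustained for a given duration, across
    a number of patients."""
    return (utility_after - utility_before) * years * n_patients

# From the text: raising utility from 0.50 to 0.75 adds 0.25 QALY per
# patient per year, so 4 patients over 1 year equal 1 well-year:
assert qaly_gain(0.50, 0.75, years=1, n_patients=4) == 1.0

# Likewise, a 0.2 QALY improvement for 5 people over 1 year is the
# equivalent of 1 QALY (compared with a tolerance for float rounding):
assert abs(qaly_gain(0.5, 0.7, years=1, n_patients=5) - 1.0) < 1e-9
```

The same function also expresses the symmetry the text relies on: two years lived at utility 0.5, or four years at 0.25, each sum to one year of complete wellness.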
Most important, QALYs are required for many methods of cost-effectiveness analysis. The most controversial aspect of the methodology is the method for assigning values along the scale. Three methods are commonly used: standard reference gamble, time-tradeoff, and rating scales. Economists and psychologists differ on their preferred approach to preference assessment. Economists typically prefer the standard gamble because it is consistent with the axioms of choice outlined in decision theory ( Torrence, 1976 ). Economists also accept time-tradeoff because it represents choice even though it is not exactly consistent with the axioms derived from theory ( Bennett and Torrence, 1996 ). However, evidence from experimental studies questions many of the assumptions that underlie economic models of choice. In particular, human evaluators do poorly at integrating complex probability information when making decisions involving risk ( Tversky and Fox, 1995 ). Economic models often assume that choice is rational. However, psychological experiments suggest that methods commonly used for choice studies do not represent the true underlying preference continuum ( Zhu and Anderson, 1991 ). Some evidence supports the use of simple rating scales ( Anderson and Zalinski, 1990 ). Recently, research by economists has attempted to integrate studies from cognitive science, while psychologists have begun investigations of choice and decision-making ( Tversky and Shafir, 1992 ). A significant body of studies demonstrates that different methods for estimating preferences will produce different values ( Lenert and Kaplan, 2000 ). This happens because the methods ask different questions. More research is needed to clarify the best method for valuing health states.
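As a rough sketch of how two of these elicitation methods map responses onto the 0-to-1 utility scale (the formulas are the standard textbook forms; the function names are ours):

```python
def time_tradeoff_utility(years_in_state, equivalent_healthy_years):
    """Time-tradeoff: a respondent indifferent between t years in a health
    state and x (x <= t) years in full health implies a utility of x / t."""
    return equivalent_healthy_years / years_in_state

def standard_gamble_utility(p_full_health):
    """Standard gamble: the utility of a state equals the probability p of
    full health (vs. immediate death) at which the respondent is indifferent
    between taking the gamble and remaining in the state for certain."""
    return p_full_health

# Indifference between 10 years in the state and 7 healthy years -> 0.7
u = time_tradeoff_utility(10, 7)
```

As the text notes, these methods ask different questions, so the same respondent can produce different values under each; a rating-scale judgment of the same state need not equal either result.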

The weighting used for quality adjustment comes from surveys of patient or population groups, an aspect of the method that has generated considerable discussion among methodologists and ethicists ( Kaplan, 1994 ). Preference weights are typically obtained by asking patients or people randomly selected from a community to rate cases that describe people in various states of wellness. The cases usually describe level of functioning and symptoms. Although some studies show small but significant differences in preference ratings between demographic groups ( Kaplan, 1998 ), most studies have shown a high degree of similarity in preferences (see Kaplan, 1994 , for review). A panel convened by the U.S. Department of Health and Human Services reviewed methodologic issues relevant to cost and utility analysis (the formal name for this approach) in health care. The panel concluded that population averages rather than patient group preference weights are more appropriate for policy analysis ( Gold et al., 1996 ).

Several authors have argued that resource allocation on the basis of QALYs is unethical (see La Puma and Lawlor, 1990 ). Those who reject the use of QALY suggest that QALY cannot be measured. However, the reliability and validity of quality-of-life measures are well documented ( Spilker, 1996 ). Another ethical challenge to QALYs is that they force health care providers to make decisions based on cost-effectiveness rather than on the health of the individual patient.

Another common criticism of QALYs is that they discriminate against the elderly and the disabled. Older people and those with disabilities have lower QALYs, so it is assumed that fewer services will be provided to them. However, QALYs consider the increment in benefit, not the starting point. Programs that prevent the decline of health status or programs that prevent deterioration in functioning among the disabled do perform well in QALY outcome analysis. It is likely that QALYs will not reveal benefits for heroic care at the very end of life. However, most people prefer not to take treatment that is unlikely to increase life expectancy or improve quality of life ( Schneiderman et al., 1992 ). Ethical issues relevant to the use of cost-effectiveness analysis are considered in detail in the report of the Panel on Cost-Effectiveness in Health and Medicine ( Gold et al., 1996 ).

Evaluating Clinical Interventions

Behavioral interventions have been used to modify behaviors that put people at risk for disease, to manage disease processes, and to help patients cope with their health conditions. Behavioral and psychosocial interventions take many forms. Some provide knowledge or persuasive information; others involve individual, family, group, or community programs to change or support changes in health behaviors (such as in tobacco use, physical activity, or diet); still others involve patient or health care provider education to stimulate behavior change or risk-avoidance. Behavioral and psychosocial interventions are not without consequence for patients and their families, friends, and acquaintances; interventions cost money, take time, and are not always enjoyable. Justification for interventions requires assurance that the changes advocated are valuable. The kinds of evidence required to evaluate the benefits of interventions are discussed below.

Evidence-Based Medicine

Evidence-based medicine uses the best available scientific evidence to inform decisions about what treatments individual patients should receive ( Sackett et al., 1997 ). Not all studies are equally credible. Last (1995) offered a hierarchy of clinical research evidence, shown in Table 7-1 . Level I, the most rigorous, is reserved for the randomized clinical trial (RCT), in which participants are randomly assigned to the experimental condition or to a meaningful comparison condition—the most widely accepted standard for evaluating interventions. Such trials involve either “single blinding” (investigators know which participants are assigned to the treatment and control groups but participants do not) or “double blinding” (neither the investigators nor the participants know the group assignments) ( Friedman et al., 1985 ). Double blinding is difficult in behavioral intervention trials, but there are some good examples of single-blind experiments. Reviews of the literature often grade studies according to levels of evidence. Level I evidence is considered more credible than Level II evidence; Level III evidence is given little weight.

TABLE 7-1. Research Evidence Hierarchy.

Research Evidence Hierarchy.

There has been concern about the generalizability of RCTs ( Feinstein and Horwitz, 1997 ; Horwitz, 1987a , b ; Horwitz and Daniels, 1996 ; Horwitz et al., 1996 , 1990 ; Rabeneck et al., 1992 ), specifically because the recruitment of participants can result in samples that are not representative of the population ( Seligman, 1996 ). There is a trend toward increased heterogeneity of the patient population in RCTs. Even so, RCTs often include stringent criteria for participation that can exclude participants on the basis of comorbid conditions or other characteristics that occur frequently in the population. Furthermore, RCTs are often conducted in specialized settings, such as university-based teaching hospitals, that do not draw representative population samples. Trials sometimes exhibit large dropout rates, which further undermine the generalizability of their findings.

Oldenburg and colleagues (1999) reviewed all papers published in 1994 in 12 selected journals on public health, preventive medicine, health behavior, and health promotion and education. They graded the studies according to evidence level: 2% were Level I RCTs and 48% were Level II. The authors expressed concern that behavioral research might not be credible when evaluated against systematic experimental trials, which are more common in other fields of medicine. Studies with more rigorous experimental designs are less likely to demonstrate treatment effectiveness ( Heaney and Goetzel, 1997 ; Mosteller and Colditz, 1996 ). Although there have been relatively few behavioral intervention trials, those that have been published have supported the efficacy of behavioral interventions in a variety of circumstances, including smoking, chronic pain, cancer care, and bulimia nervosa ( Compas et al., 1998 ).

Efficacy and Effectiveness

Efficacy is the capacity of an intervention to work under controlled conditions. Randomized clinical trials are essential in establishing the effects of a clinical intervention ( Chambless and Hollon, 1998 ) and in determining that an intervention can work. However, demonstration of efficacy in an RCT does not guarantee that the treatment will be effective in actual practice settings. For example, some reviews suggest that behavioral interventions in psychotherapy are generally beneficial ( Matt and Navarro, 1997 ), others suggest that interventions are less effective in clinical settings than in the laboratory ( Weisz et al., 1992 ), and others find particular interventions equally effective in experimental and clinical settings ( Shadish et al., 1997 ).

The Division of Clinical Psychology of the American Psychological Association recently established criteria for “empirically supported” psychological treatments ( Chambless and Hollon, 1998 ). In an effort to establish a level of excellence in validating the efficacy of psychological interventions, the criteria are relatively stringent. A treatment is considered empirically supported if it is found to be more effective than either an alternative form of treatment or a credible control condition in at least two RCTs. The effects must be replicated by at least two independent laboratories or investigative teams to ensure that the effects are not attributable to special characteristics of a specific investigator or setting. Several health-related behavior change interventions meeting those criteria have been identified, including interventions for management of chronic pain, smoking cessation, adaptation to cancer, and treatment of eating disorders ( Compas et al., 1998 ).

An intervention that has failed to meet the criteria still has potential value and might represent important or even landmark progress in the field of health-related behavior change. As in many fields of health care, there historically has been little effort to set standards for psychological treatments for health-related problems or disease. Recently, however, managed-care and health maintenance organizations have begun to monitor and regulate both the type and the duration of psychological treatments that are reimbursed. A common set of criteria for making coverage decisions has not been articulated, so decisions are made in the absence of appropriate scientific data to support them. It is in the best interest of the public and those involved in the development and delivery of health-related behavior change interventions to establish criteria that are based on the best available scientific evidence. Criteria for empirically supported treatments are an important part of that effort.

Evaluating Community-Level Interventions

Evaluating the effectiveness of interventions in communities requires different methods. Developing and testing interventions that take a more comprehensive, ecologic approach, and that are effective in reducing risk-related behaviors and influencing the social factors associated with health status, require many levels and types of research ( Flay, 1986 ; Green et al., 1995 ; Greenwald and Cullen, 1984 ). Questions have been raised about the appropriateness of RCTs for addressing research questions when the unit of analysis is larger than the individual, such as a group, organization, or community ( McKinlay, 1993 ; Susser, 1995 ). While this discussion uses the community as the unit of analysis, similar principles apply to interventions aimed at groups, families, or organizations.

Review criteria of community interventions have been suggested by Hancock and colleagues ( Hancock et al., 1997 ). Their criteria for rigorous scientific evaluation of community intervention trials include four domains: (1) design, including the randomization of communities to condition, and the use of sampling methods that assure representativeness of the entire population; (2) measures, including the use of outcome measures with demonstrated validity and reliability and process measures that describe the extent to which the intervention was delivered to the target audience; (3) analysis, including consideration of both individual variation within each community and community-level variation within each treatment condition; and (4) specification of the intervention in enough detail to allow replication.

Randomization of communities to various conditions raises challenges for intervention research in terms of expense and statistical power ( Koepsell et al., 1995 ; Murray, 1995 ). The restricted hypotheses that RCTs test cannot adequately consider the complexities and multiple causes of human behavior and health status embedded within communities ( Israel et al., 1995 ; Klitzner, 1993 ; McKinlay, 1993 ; Susser, 1995 ). A randomized controlled trial might actually alter the interaction between an intervention and a community and result in an attenuation of the effectiveness of the intervention ( Fisher, 1995 ; McKinlay, 1993 ). At the level of community interventions, experimental control might not be possible, especially when change is unplanned. That is, given the different sociopolitical structures, cultures, and histories of communities and the numerous factors that are beyond a researcher's ability to control, it might be impossible to identify and maintain a commensurate comparison community ( Green et al., 1996 ; Hollister and Hill, 1995 ; Israel et al., 1995 ; Klitzner, 1993 ; Mittelmark et al., 1993 ; Susser, 1995 ). Using a control community does not completely solve the problem of comparison, however, because one “cannot assume that a control community will remain static or free of influence by national campaigns or events occurring in the experimental communities” ( Green et al., 1996 , p. 274).

Clear specification of the conceptual model guiding a community intervention is needed to clarify how an intervention is expected to work ( Koepsell, 1998 ; Koepsell et al., 1992 ). This is the contribution of the Theory of Change model for communities described in Chapter 6 . A theoretical framework is necessary to specify mediating mechanisms and modifying conditions. Mediating mechanisms are pathways, such as social support, by which the intervention induces the outcomes; modifying conditions, such as social class, are not affected by the intervention but can influence outcomes independently. Such an approach offers numerous advantages, including the ability to identify pertinent variables and how, when, and in whom they should be measured; the ability to evaluate and control for sources of extraneous variance; and the ability to develop a cumulative knowledge base about how and when programs work ( Bickman, 1987 ; Donaldson et al., 1994 ; Lipsey, 1993 ; Lipsey and Pollard, 1989 ). When an intervention is unsuccessful at stimulating change, data on mediating mechanisms can allow investigators to determine whether the failure is due to the inability of the program to activate the causal processes that the theory predicts or to an invalid program theory ( Donaldson et al., 1994 ).

Small-scale, targeted studies sometimes provide a basis for refining large-scale intervention designs and enhance understanding of methods for influencing group behavior and social change ( Fisher, 1995 ; Susser, 1995 ; Winkleby, 1994 ). For example, more in-depth, comparative, multiple-case-study evaluations are needed to explain and identify lessons learned regarding the context, process, impacts, and outcomes of community-based participatory research ( Israel et al., 1998 ).

Community-Based Participatory Research and Evaluation

As reviewed in Chapter 4 , broad social and societal influences have an impact on health. This concept points to the importance of an approach that recognizes individuals as embedded within social, political, and economic systems that shape their behaviors and constrain their access to resources necessary to maintain their health ( Brown, 1991 ; Gottlieb and McLeroy, 1994 ; Krieger, 1994 ; Krieger et al., 1993 ; Lalonde, 1974 ; Lantz et al., 1998 ; McKinlay, 1993 ; Sorensen et al., 1998a , b ; Stokols, 1992 , 1996 ; Susser and Susser, 1996a , b ; Williams and Collins, 1995 ; World Health Organization [WHO], 1986 ). It also points to the importance of expanding the evaluation of interventions to incorporate such factors ( Fisher, 1995 ; Green et al., 1995 ; Hatch et al., 1993 ; Israel et al., 1995 ; James, 1993 ; Pearce, 1996 ; Sorensen et al., 1998a , b ; Steckler et al., 1992 ; Susser, 1995 ).

This is exemplified by community-based participatory programs, which are collaborative efforts among community members, organization representatives, a wide range of researchers and program evaluators, and others ( Israel et al., 1998 ). The partners contribute “unique strengths and shared responsibilities” ( Green et al., 1995 , p. 12) to enhance understanding of a given phenomenon, and they integrate the knowledge gained from interventions to improve the health and well-being of community members ( Dressler, 1993 ; Eng and Blanchard, 1990–1 ; Hatch et al., 1993 ; Israel et al., 1998 ; Schulz et al., 1998a ). It provides “the opportunity…for communities and science to work in tandem to ensure a more balanced set of political, social, economic, and cultural priorities, which satisfy the demands of both scientific research and communities at higher risk” ( Hatch et al., 1993 , p. 31). The advantages and rationale of community-based participatory research are summarized in Table 7–2 ( Israel et al., 1998 ). The term “community-based participatory research” is used here to clearly differentiate from “community-based research,” which is often used in reference to research that is placed in the community but in which community members are not actively involved.

TABLE 7-2. Rationale for Community-Based Participatory Research.

Rationale for Community-Based Participatory Research.

Table 7-3 presents a set of principles, or characteristics, that capture the important components of community-based participatory research and evaluation ( Israel et al., 1998 ). Each principle constitutes a continuum and represents a goal, for example, equitable participation and shared control over all phases of the research process ( Cornwall, 1996 ; Dockery, 1996 ; Green et al., 1995 ). Although the principles are presented here as distinct items, community-based participatory research integrates them.

TABLE 7-3. Principles of Community-Based Participatory Research and Evaluation.

Principles of Community-Based Participatory Research and Evaluation.

There are four major foci of evaluation with implications for research design: context, process, impact, and outcome ( Israel, 1994 ; Israel et al., 1995 ; Simons-Morton et al., 1995 ). A comprehensive community-based participatory evaluation would include all types, but it is often financially practical to pursue only one or two. Evaluation design is extensively reviewed in the literature ( Campbell and Stanley, 1963 ; Cook and Reichardt, 1979 ; Dignan, 1989 ; Green, 1977 ; Green and Gordon, 1982 ; Green and Lewis, 1986 ; Guba and Lincoln, 1989 ; House, 1980 ; Israel et al., 1995 ; Patton, 1987 , 1990 ; Rossi and Freeman, 1989 ; Shadish et al., 1991 ; Stone et al., 1994 ; Thomas and Morgan, 1991 ; Windsor et al., 1994 ; Yin, 1993 ).

Context encompasses the events, influences, and changes that occur naturally in the project setting or environment during the intervention that might affect the outcomes ( Israel et al., 1995 ). Context data provide information about how particular settings facilitate or impede program success. Decisions must be made about which of the many factors in the context of an intervention might have the greatest effect on project success.

Evaluation of process assesses the extent, fidelity, and quality of the implementation of interventions ( McGraw et al., 1994 ). It describes the actual activities of the intervention and the extent of participant exposure, provides quality assurance, describes participants, and identifies the internal dynamics of program operations ( Israel et al., 1995 ).

A distinction is often made in the evaluation of interventions between impact and outcome ( Green and Lewis, 1986 ; Israel et al., 1995 ; Simons-Morton et al., 1995 ; Windsor et al., 1994 ). Impact evaluation assesses the effectiveness of the intervention in achieving desired changes in targeted mediators. These include the knowledge, attitudes, beliefs, and behavior of participants. Outcome evaluation examines the effects of the intervention on health status, morbidity, and mortality. Impact evaluation focuses on what the intervention is specifically trying to change, and it precedes an outcome evaluation. It is proposed that if the intervention can effect change in some intermediate outcome (“impact”), the “final” outcome will follow.

Although the association between impact and outcome may not always be substantiated (as discussed earlier in this chapter), impact may be a necessary measure. In some instances, the outcome goals are too far in the future to be evaluated. For example, childhood cardiovascular risk factor intervention studies typically measure intermediate gains in knowledge ( Parcel et al., 1989 ) and changes in diet or physical activity ( Simons-Morton et al., 1991 ). They sometimes assess cholesterol and blood pressure, but they do not usually measure heart disease because that would not be expected to occur for many years.

Given the aims and the dynamic context within which community-based participatory research and evaluation are conducted, methodologic flexibility is essential. Methods must be tailored to the purpose of the research and evaluation and to the context and interests of the community ( Beery and Nelson, 1998 ; deKoning and Martin, 1996 ; Dockery, 1996 ; Dressler, 1993 ; Green et al., 1995 ; Hall, 1992 ; Hatch et al., 1993 ; Israel et al., 1998 ; Marin and Marin, 1991 ; Nyden and Wiewel, 1992 ; Schulz et al., 1998b ; Singer, 1993 ; Stringer, 1996 ). Numerous researchers have suggested greater use of qualitative data, from in-depth interviews and observational studies, for evaluating the context, process, impact, and outcome of community-based participatory research interventions (Fortmann et al., 1995; Goodman, 1999 ; Hugentobler et al., 1992 ; Israel et al., 1995 , 1998 ; Koepsell et al., 1992 ; Mittelmark et al., 1993 ; Parker et al., 1998 ; Sorensen et al., 1998a ; Susser, 1995 ). Triangulation is the use of multiple methods and sources of data to overcome limitations inherent in each method and to improve the accuracy of the information collected, thereby increasing the validity and credibility of the results ( Denzin, 1970 ; Israel et al., 1995 ; Reichardt and Cook, 1980 ; Steckler et al., 1992 ). For examples of the integration of qualitative and quantitative methods in research and evaluation of public-health interventions, see Steckler et al. (1992) and Parker et al. (1998) .

Assessing Government Interventions

Despite the importance of legislation and regulation to promote public health, the effectiveness of government interventions is poorly understood. Policymakers often cannot answer important empirical questions: do legal interventions work, and at what economic and social cost? In particular, policymakers need to know whether legal interventions achieve their intended goals (e.g., reducing risk behavior). If so, do legal interventions unintentionally increase other risks (risk/risk tradeoff)? Finally, what are the adverse effects of regulation on personal or economic liberties and general prosperity in society? This is an important question not only because freedom has an intrinsic value in democracy, but also because activities that dampen economic development can have health effects. For example, research demonstrates the positive correlation between socioeconomic status and health ( Chapter 4 ).

Legal interventions often are not subjected to rigorous research evaluation. The research that has been done, moreover, has faced challenges in methodology. There are so many variables that can affect behavior and health status (e.g., differences in informational, physical, social, and cultural environments) that it can be extraordinarily difficult to demonstrate a causal relationship between an intervention and a perceived health effect. Consider the methodologic constraints in identifying the effects of specific drunk-driving laws. Several kinds of laws can be enacted within a short period, so it is difficult to isolate the effect of each law. Publicity about the problem and the legal response can cross state borders, making state comparisons more difficult. Because people who drive under the influence of alcohol also could engage in other risky driving behaviors (e.g., speeding, failing to wear safety belts, running red lights), researchers need to control for changes in other highway safety laws and traffic law enforcement. Subtle differences between comparison communities can have unanticipated effects on the impact of legal interventions ( DeJong and Hingson, 1998 ; Hingson, 1996 ).

Despite such methodologic challenges, social science researchers have studied legal interventions, often with encouraging results. The social science, medical, and behavioral literature contains evaluations of interventions in several public health areas, particularly in relation to injury prevention ( IOM, 1999 ; Rivara et al., 1997a , b ). For example, studies have evaluated the effectiveness of regulations to prevent head injuries (bicycle helmets: Dannenberg et al., 1993 ; Kraus et al., 1994 ; Lund et al., 1991 ; Ni et al., 1997 ; Thompson et al., 1996a , b ), choking and suffocation (refrigerator disposal and warning labels on thin plastic bags: Kraus, 1985 ), child poisoning (childproof packaging: Rogers, 1996 ), and burns (tap water: Erdmann et al., 1991 ). One regulatory measure that has received a great deal of research attention relates to reductions in cigarette-smoking ( Chapter 6 ).

Legal interventions can be an important part of strategies to change behaviors. In considering them, government and other public health agencies face difficult and complex tradeoffs between population health and individual rights (e.g., autonomy, privacy, liberty, property). One example is the controversy over laws that require motorcyclists to wear helmets. Ethical concerns accompany the use of legal interventions to mandate behavior change and must be part of the deliberation process.

COST-EFFECTIVENESS EVALUATION

It is not enough to demonstrate that a treatment benefits some patients or community members. Because the demand for health programs exceeds the resources available to pay for them, treatments must provide both clinical benefit and value for money. Investigators, clinicians, and program planners must demonstrate that their interventions constitute a good use of resources.

Well over $1 trillion is spent on health care each year in the United States. Current estimates suggest that expenditures on health care exceed $4000 per person ( Health Care Financing Administration, 1998 ). Investments are made in health care to produce good health status for the population, and it is usually assumed that more investment will lead to greater health. Some expenditures in health care produce relatively little benefit; others produce substantial benefits. Cost-effectiveness analysis (CEA) can help guide the use of resources to achieve the greatest improvement in health status for a given expenditure.

Consider the medical interventions in Table 7-4 , all of which are well-known, generally accepted, and widely used. Some are traditional medical care and some are preventive programs. To emphasize the focus on increasing good health, the table presents the data in units of health bought for $1 million rather than in dollars per unit of health, the usual approach in CEA. The life-year is the most comprehensive unit measure of health. Table 7-4 reveals several important points about resource allocation. There is tremendous variation among the interventions in what can be accomplished for $1 million: it nets 7,750 life-years if used for influenza vaccinations for the elderly, 217 life-years if applied to smoking-cessation programs, but only 2 life-years if used to supply Lovastatin to men aged 35–44 who have high total cholesterol but no heart disease and no other risk factors for heart disease.

TABLE 7-4. Life-Years Yielded by Selected Interventions per $1 Million, 1997 Dollars.

Life-Years Yielded by Selected Interventions per $1 Million, 1997 Dollars.

How effectively an intervention contributes to good health depends not only on the intervention, but also on the details of its use. Antihypertensive medication is effective, but Propranolol is more cost-effective than Captopril. Thyroid screening is more cost-effective in women than in men. Lovastatin produces more good health when targeted at older high-risk men than at younger low-risk men. Screening for cervical cancer at 3-year intervals with the Pap smear yields 36 life-years per $1 million (compared with no screening), but each $1 million spent to increase the frequency of screening to 2 years brings only 1 additional life-year.

The numbers in Table 7-4 illustrate a central concept in resource allocation: opportunity cost. The true cost of choosing to use a particular intervention or to use it in a particular way is not the monetary cost per se, but the health benefits that could have been achieved if the money had been spent on another service instead. Thus, the opportunity cost of providing annual Pap smears ($1 million) rather than smoking-cessation programs is the 217 life-years that could have been achieved through smoking cessation.
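The conversions behind Table 7-4 and the opportunity-cost comparison can be made explicit. In this sketch the dollars-per-life-year figures are back-calculated from the table's per-$1-million yields and are illustrative only:

```python
BUDGET = 1_000_000  # fixed expenditure, as in Table 7-4

def life_years_per_budget(cost_per_life_year, budget=BUDGET):
    """Life-years bought by a fixed budget at a given $/life-year ratio."""
    return budget / cost_per_life_year

def opportunity_cost(alt_cost_per_ly, budget=BUDGET):
    """Opportunity cost as the text defines it: the health that the same
    budget could have bought from the forgone alternative."""
    return budget / alt_cost_per_ly

# Influenza vaccination for the elderly: 7,750 life-years per $1 million
# implies roughly $129 per life-year.
flu_ly = life_years_per_budget(1_000_000 / 7_750)

# Spending $1 million on annual Pap smears forgoes the ~217 life-years
# that smoking cessation (about $4,608 per life-year) would have bought.
forgone = opportunity_cost(1_000_000 / 217)
```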

The term cost-effectiveness is commonly used but widely misunderstood. Some people confuse cost-effectiveness with cost minimization. Cost minimization aims to reduce health care costs regardless of health outcomes. CEA does not have cost-reduction per se as a goal but is designed to obtain the most improvement in health for a given expenditure. CEA also is often confused with cost/benefit analysis (CBA), which compares investments with returns. CBA ranks the amount of improved health associated with different expenditures with the aim of identifying the appropriate level of investment. CEA indicates which intervention is preferable given a specific expenditure.

Usually, costs are represented by the net or difference between the total costs of the intervention and the total costs of the alternative to that intervention. Typically, the measure of health is the QALY. The net health effect of the intervention is the difference between the QALYs produced by an intervention and the QALYs produced by an alternative or other comparative base.
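Expressed as a formula, this net-cost-per-net-QALY comparison is the incremental cost-effectiveness ratio; the numbers in the example below are hypothetical:

```python
def icer(cost_intervention, cost_alternative,
         qalys_intervention, qalys_alternative):
    """Incremental cost-effectiveness ratio: net (incremental) cost per
    net (incremental) QALY of an intervention over its comparator."""
    net_cost = cost_intervention - cost_alternative
    net_qalys = qalys_intervention - qalys_alternative
    return net_cost / net_qalys

# Hypothetical: a program costing $50,000 that yields 11 QALYs, versus an
# alternative costing $30,000 that yields 10 -> $20,000 per QALY gained.
ratio = icer(50_000, 30_000, 11, 10)
```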

Comprehensive as it is, CEA does not include everything that might be relevant to a particular decision—so it should never be used mechanically. Decision-makers can have legitimate reasons to emphasize particular groups, benefits, or costs more heavily than others. Furthermore, some decisions require information that cannot be captured easily in a CEA, such as the effect of an intervention on individual privacy or liberty.

CEA is an analytical framework that arises from the question of which ways of promoting good health—procedures, tests, medications, educational programs, regulations, taxes or subsidies, and combinations and variations of these—provide the most effective use of resources. Specific recommendations about behavioral and psychosocial interventions will contribute the most to good health if they are set in this larger context and based on information that demonstrates that they are in the public interest. However, comparing behavioral and psychosocial interventions with other ways of promoting health on the basis of cost-effectiveness requires additional research. Currently there are too few studies that meet this standard to support such recommendations.

DISSEMINATION

A basic assumption underlying intervention research is that tested interventions found to be effective are disseminated to and implemented in clinics, communities, schools, and worksites. However, there is a sizable gap between science and practice ( Anderson, 1998 ; Price, 1989 , 1998 ). Researchers and practitioners need to ensure that an intervention is effective, and that the community or organization is prepared to adopt, implement, disseminate, and institutionalize it. There also is a need for demonstration research (phase V) to explain more about the process of dissemination itself.

Dissemination to Consumers

Biomedical research results are commonly reported in the mass media. Nearly every day people are given information about the risks of disease, the benefits of treatment, and the potential health hazards in their environments. They regularly make health decisions on the basis of their understanding of such information. Some evidence shows that lay people often misinterpret health risk information ( Berger and Hendee, 1989 ; Fischhoff, 1999a ), as do their doctors ( Kalet et al., 1994 ; Kong et al., 1986 ). On the question of such a widely publicized issue as mammography, for example, evidence suggests that women overestimate their risk of getting breast cancer by a factor of at least 20 and that they overestimate the benefits of mammography by a factor of 100 ( Black et al., 1995 ). In a study of 500 female veterans ( Schwartz et al., 1997 ), half the women overestimated their risk of death from breast cancer by a factor of 8. This did not appear to be because the subjects thought that they were more at risk than other women; only 10% reported that they were at higher risk than the average woman of their age. The topic of communication of health messages to the public is discussed at length in an IOM report, Speaking of Health: Assessing Health Communication Strategies for Diverse Populations ( IOM, 2001 ).

Communicating Risk Information

Improving communication requires understanding what information the public needs. That necessitates both descriptive and normative analyses, which consider what the public believes and what the public should know, respectively. Juxtaposing normative and descriptive analyses might provide guidance for reducing misunderstanding ( Fischhoff and Downs, 1997 ). Formal normative analysis of decisions involves the creation of decision trees, showing the available options and the probabilities of various outcomes of each, whose relative attractiveness (or aversiveness) must be evaluated by people. Although full analyses of decision problems can be quite complex, they often reveal ways to drastically simplify individuals' decision-making problems—in the sense that they reveal a small number of issues of fact or value that really merit serious attention ( Clemen, 1991 ; Merz et al., 1993 ; Raiffa, 1968 ). Those few issues can still pose significant challenges for decision makers. The actual probabilities can differ from people's subjective probabilities (which govern their behavior). For example, a woman who overestimates the value of a mammogram might insist on tests that are of little benefit to her and mistrust the political/medical system that seeks to deny such care ( Woloshin et al., 2000 ). Obtaining estimates of subjective probabilities is difficult. Although eliciting probabilities has been studied in other contexts over the past two generations ( von Winterfeldt and Edwards, 1986 ; Yates, 1990 ), it has received much less attention in medical contexts, where it can pose questions that people are unwilling or unable to confront ( Fischhoff and Bruine de Bruin, 1999 ).
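A decision tree of this kind boils down to an expected-utility comparison across each option's outcome branches. A minimal sketch, assuming hypothetical probabilities and 0-1 utilities (none of these numbers come from the text):

```python
# Evaluate a two-option decision tree by expected utility.
# Each option is a list of (probability, utility) outcome branches;
# the probabilities within an option sum to 1.

def expected_utility(branches):
    return sum(p * u for p, u in branches)

# Hypothetical choice between taking a preventive drug and not.
options = {
    "drug":    [(0.98, 0.90), (0.02, 0.30)],  # usually helps; small chance of side effects
    "no drug": [(0.95, 1.00), (0.05, 0.20)],  # usually fine; small chance of disease
}
best = max(options, key=lambda name: expected_utility(options[name]))
```

Note how the full tree collapses to a handful of probabilities and utilities, illustrating the "small number of issues of fact or value" that really merit attention.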

In addition to such quantitative beliefs, people often need a qualitative understanding of the processes by which risks are created and controlled. This allows them to get an intuitive feeling for the quantitative estimates, to feel competent to make decisions in their own behalf, to monitor their own experience, and to know when they need help ( Fischhoff, 1999b ; Leventhal and Cameron, 1987 ). Not seeing the world in the same way as scientists do also can lead lay people to misinterpret communications directed at them. One common (and some might argue, essential) strategy for evaluating any public health communication or research instrument is to ask people to think aloud as they answer draft versions of questions ( Ericsson and Simon, 1994 ; Schriver, 1989 ). For example, subjects might be asked about the probability of getting HIV from unprotected sexual activity. Reasons for their assessments might be explored as they elaborate on their impressions and the assumptions they use ( Fischhoff, 1999b ; McIntyre and West, 1992 ). The result should both reveal their intuitive theories and improve the communication process.

When people must evaluate their options, the way in which information is framed can have a substantial effect on how it is used ( Kahneman and Tversky, 1983 ; Schwartz, 1999 ; Tversky and Kahneman, 1988 ). The fairest presentation of risk information might be one in which multiple perspectives are used ( Kahneman and Tversky, 1983 , 1996 ). For example, one common situation involves small risks that add up over the course of time, through repeated exposures. The chances of being injured in an automobile crash are very small for any one outing, whether or not the driver wears a seatbelt. However, driving over a lifetime creates a substantial risk—and a substantial benefit for seatbelt use. One way to communicate that perspective is to do the arithmetic explicitly, so that subjects understand it ( Linville et al., 1993 ). Another method that helps people to understand complex information involves presenting ranges rather than best estimates. Science is uncertain, and it should be helpful for people to understand the intervals within which their risks are likely to fall ( Lipkus and Hollands, 1999 ).
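The "do the arithmetic explicitly" suggestion for risks that accumulate over repeated exposures amounts to computing the chance of at least one bad outcome across many independent events. A sketch with a made-up per-trip risk, not an actual crash statistic:

```python
# Probability of at least one injury over many independent trips:
# 1 minus the probability that every single trip goes well.

def cumulative_risk(per_event_risk, n_events):
    return 1 - (1 - per_event_risk) ** n_events

per_trip = 1e-6              # hypothetical injury risk per outing
trips = 2 * 365 * 50         # two outings a day for 50 years
lifetime = cumulative_risk(per_trip, trips)  # roughly 0.036
```

A one-in-a-million trip becomes a few-in-a-hundred lifetime, which is the perspective the explicit arithmetic is meant to convey.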

Risk communication can be improved. For example, many members of the public have been fearful that proximity to electromagnetic fields and power lines can increase the risk of cancer. Studies revealed that many people knew very little about properties of electricity. In particular, they usually were unaware that exposure decreases as a function of the cube root of distance from the lines. After studying mental models of this risk, Morgan (1995) developed a tiered brochure that presented the problem at a variety of levels of detail. The brochure addressed common misconceptions and explained why scientists disagree about the risks posed by electromagnetic fields. Participants on each side of the debate reviewed the brochure for fairness. Several hundred thousand copies of the brochure have now been distributed. This approach to communication requires that the public listen to experts, but it also requires that the experts listen to the public. Providing information is not enough; it is necessary to take the next step to demonstrate that the information is presented in an unbiased fashion and that the public accurately processes what is offered ( Edworthy and Adams, 1997 ; Hadden, 1986 ; Morgan et al., 2001 ; National Research Council, 1989 ).

The electromagnetic field brochure is an example of a general approach in cognitive psychology, in which communications are designed to create coherent mental models of the domain being considered ( Ericsson and Simon, 1994 ; Fischhoff, 1999b ; Gentner and Stevens, 1983 ; Johnson-Laird, 1980 ). The bases of these communications are formal models of the domain. In the case of the complex processes creating and controlling risks, the appropriate representation is often an influence diagram, a directed graph that captures the uncertain relationships among the factors involved ( Clemen, 1991 ; Morgan et al., 2001 ). Creating such a diagram requires pooling the knowledge of diverse disciplines, rather than letting each tell its own part of the story. Identifying the critical messages requires considering both the science of the risk and recipients' intuitive conceptualizations.

Presentation of Clinical Research Findings

Research results are commonly misinterpreted. When a study shows that the effect of a treatment is statistically significant, it is often assumed that the treatment works for every patient or at least for a high percentage of those treated. In fact, large experimental trials, often with considerable publicity, promote treatments that have only minor effects in most patients. For example, contemporary care for high blood serum cholesterol has been greatly influenced by results of the Coronary Primary Prevention Trial (CPPT; Lipid Research Clinics Program, 1984 ), in which men were randomly assigned to take a placebo or cholestyramine. Cholestyramine can significantly lower serum cholesterol and, in this trial, reduced it by an average of 8.5%. Men in the treatment group experienced 24% fewer heart attack deaths and 19% fewer heart attacks than did men who took the placebo.

The CPPT showed a 24% reduction in cardiovascular mortality in the treated group. However, the absolute proportions of patients who died of cardiovascular disease were similar in the 2 groups: there were 38 deaths among 1900 participants (2%) in the placebo group and 30 deaths among 1906 participants (1.6%) in the cholestyramine group. In other words, taking the medication for 6 years reduced the chance of dying from cardiovascular disease from 2% to 1.6%.
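The contrast between the relative and absolute figures can be checked directly from the counts quoted above. Note that the raw counts give roughly a 21% relative reduction; the trial's published 24% figure reflects its own analysis rather than these rounded rates.

```python
# Relative versus absolute risk reduction from the CPPT counts:
# cardiovascular deaths divided by participants in each arm.

placebo_rate = 38 / 1900    # 2.0%
treated_rate = 30 / 1906    # about 1.6%

absolute_reduction = placebo_rate - treated_rate        # ~0.004 (0.4 percentage points)
relative_reduction = absolute_reduction / placebo_rate  # ~0.21 (about a 21% reduction)
```

The same trial can honestly be described as a "21% reduction in deaths" or as a "0.4 percentage-point reduction"; which framing is presented strongly shapes how effective the treatment appears.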

Because of the difficulties in communicating risk ratio information, the use of simple statistics, such as the number needed to treat (NNT), has been suggested ( Sackett et al., 1997 ). NNT is the number of people that must be treated to avoid one bad outcome. Statistically, NNT is defined as the reciprocal of the absolute-risk reduction. In the cholesterol example, if 2% (0.020) of the patients died in the control arm of an experiment and 1.6% (0.016) died in the experimental arm, the absolute risk reduction is 0.020 − 0.016 = 0.004. The reciprocal of 0.004 is 250. In this case, 250 people would have to be treated for 6 years to avoid 1 death from coronary heart disease. Treatments can harm as well as benefit, so in addition to calculating the NNT, it is valuable to calculate the number needed to harm (NNH). This is the number of people a clinician would need to treat to produce one adverse event. NNT and NNH can be modified for those in particular risk groups. The advantage of these simple numbers is that they allow much clearer communication of the magnitude of treatment effectiveness.
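The NNT definition above translates directly into a one-line calculation; the side-effect rates used for the NNH illustration are hypothetical, not drawn from the trial:

```python
# NNT and NNH: the reciprocal of the absolute risk difference
# between the two arms of a trial.

def number_needed(risk_a, risk_b):
    return 1 / abs(risk_a - risk_b)

# Cholesterol example from the text: 2.0% versus 1.6% mortality.
nnt = number_needed(0.020, 0.016)   # 1 / 0.004, about 250 patients per death averted

# Hypothetical harms: 0.5% serious side effects on treatment versus
# 0.1% on placebo would give an NNH of about 250 as well.
nnh = number_needed(0.005, 0.001)
```

Comparing NNT against NNH on the same scale is what makes these numbers useful for communicating a treatment's trade-offs.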

Shared Decision Making

Once patients understand the complex information about outcomes, they can fully participate in the decision-making process. The final step in disseminating information to patients involves an interactive process that allows patients to make informed choices about their own health-care.

Despite a growing consensus that they should be involved, evidence suggests that patients are rarely consulted. Wennberg (1995) outlined a variety of common medical decisions in which there is uncertainty. In each, treatment selection involves profiles of risks and benefits for patients. Thiazide medications can be effective at controlling blood pressure, but they also can be associated with increased serum cholesterol; the benefit of blood pressure reduction must be balanced against such side effects as dizziness and impotence.

Factors that affect patient decision making and use of health services are not well understood. It is usually assumed that use of medical services is driven primarily by need, that those who are sickest or most disabled use services the most ( Aday, 1998 ). Although illness is clearly the major reason for service use, the literature on small-area variation demonstrates that there can be substantial variability in service use among communities that have comparable illness burdens and comparable insurance coverage ( Wennberg, 1998 ). Therefore, social, cultural, and system variables also contribute to service use.

The role of patients in medical decision making has undergone substantial recent change. In the early 1950s, Parsons (1951) suggested that patients were excluded from medical decision making unless they assumed the “sick role,” in which patients submit to a physician's judgment, and it is assumed that physicians understand the patients' preferences. Through a variety of changes, patients have become more active. More information is now available, and many patients demand a greater role ( Sharf, 1997 ). The Internet offers vast amounts of information to patients, some of it misleading or inaccurate ( Impicciatore et al., 1997 ). One difficulty is that many patients are not sophisticated consumers of technical medical information ( Strum, 1997 ).

Another important issue is whether patients want a role. The literature is contradictory on this point; at least eight studies have addressed the issue. Several suggest that most patients express little interest in participating ( Cassileth et al., 1980 ; Ende et al., 1989 ; Mazur and Hickam, 1997 ; Pendleton and House, 1984 ; Strull et al., 1984 ; Waterworth and Luker, 1990 ). Those studies challenge the basis of shared medical decision making. Is it realistic to engage patients in the process if they are not interested? Deber ( Deber, 1994 ; Deber et al., 1996 ) has drawn an important distinction between problem solving and decision making. Medical problem solving requires technical skill to make an appropriate diagnosis and select treatment. Most patients prefer to leave those judgments in the hands of experts ( Ende et al., 1989 ). Studies challenging the notion that patients want to make decisions typically asked questions about problem solving ( Ende et al., 1989 ; Pendleton and House, 1984 ; Strull et al., 1984 ).

Shared decision making requires patients to express personal preferences for desired outcomes, and many decisions involve very personal choices. Wennberg (1998) offers examples of variation in health care practices that are dominated by physician choice. One is the choice between mastectomy and lumpectomy for women with well-defined breast cancer. Systematic clinical trials have shown that the probability of surviving breast cancer is about equal after mastectomy and after lumpectomy followed by radiation ( Lichter et al., 1992 ). But in some areas of the United States, nearly half of women with breast cancer have mastectomies (for example, Provo, Utah); in other areas less than 2% do (for example, New Jersey; Wennberg, 1998 ). Such differences are determined largely by surgeon choice; patient preference is not considered. In the breast cancer example, interviews suggest that some women have a high preference for maintaining the breast, and others feel more comfortable having more breast tissue removed. The choices are highly personal and reflect variations in comfort with the idea of life with and without a breast. Patients might not want to engage in technical medical problem solving, but they are the only source of information about preferences for potential outcomes.

The process by which patients exercise choice can be difficult. There have been several evaluations of efforts to involve patients in decision making. Greenfield and colleagues (1985) taught patients how to read their own medical records and offered coaching on what questions to ask during encounters with physicians. In this randomized trial involving patients with peptic ulcer disease, those assigned to a 20-minute treatment had fewer functional limitations and were more satisfied with their care than were patients in the control group. A similar experiment involving patients treated for diabetes showed that patients randomly assigned to receive visit preparation scored significantly better than controls on three dimensions of health-related quality of life (mobility, role performance, physical activity). Furthermore, there were significant improvements for biochemical measures of diabetes control ( Greenfield et al., 1988 ).

Many medical decisions are more complex than those studied by Greenfield and colleagues. There are usually several treatment alternatives, and the outcomes for each choice are uncertain. Also, the importance of the outcomes might be valued differently by different people. Shared decision-making programs have been proposed to address those concerns ( Kasper et al., 1992 ). The programs usually use electronic media. Some involve interactive technologies in which a patient becomes familiar with the probabilities of various outcomes. With some technologies, the patient also has the opportunity to witness others who have embarked on different treatments. Video allows a patient to witness the outcomes of others who have made each treatment choice. A variety of interactive programs have been systematically evaluated. In one study ( Barry et al., 1995 ), patients with benign prostatic hyperplasia were given the opportunity to use an interactive video. The video was generally well received, and the authors reported that there was a significant reduction in the rate of surgery and an increase in the proportion who chose “watchful waiting” after using the decision aid. Flood et al. (1996) reported similar results with an interactive program.

Not all evaluations of decision aids have been positive. In one evaluation of an impartial video for patients with ischemic heart disease ( Liao et al., 1996 ), 44% of the patients found it helpful for making treatment choices, but more than 40% reported that it increased their anxiety. Most of the patients had received advice from their physicians before watching the video.

Despite enthusiasm for shared medical decision making, little systematic research has evaluated interventions to promote it ( Frosch and Kaplan, 1999 ). Systematic experimental trials are needed to determine whether the use of shared decision aids enhances patient outcomes. Although decision aids appear to enhance patient satisfaction, it is unclear whether they result in reductions in surgery, as suggested by Wennberg (1998) , or in improved patient outcomes ( Frosch and Kaplan, 1999 ).

Dissemination Through Organizations

The effect of any preventive intervention depends both on its ability to influence health behavior change or reduce health risks and on the extent to which the target population has access to and participates in the program. Few preventive interventions are free-standing in the community. Rather, organizations serve as “hosts” for health promotion and disease prevention programs. Once a program has proven successful in demonstration projects and efficacy trials, it must be adopted and implemented by new organizations. Unfortunately, diffusion to new organizations often proceeds very slowly ( Murray, 1986 ; Parcel et al., 1990 ).

A staged change process has been proposed for optimal diffusion of preventive interventions to new organizations. Although different researchers have offered a variety of approaches, there is consensus on the importance of at least four stages ( Goodman et al., 1997 ):

  • dissemination, during which organizations are made aware of the programs and their benefits;
  • adoption, during which the organization commits to initiating the program;
  • implementation, during which the organization offers the program or services;
  • maintenance or institutionalization, during which the organization makes the program part of its routines and standard offerings.

Research investigating the diffusion of health behavior change programs to new organizations can be seen, for example, in adoption of prevention curricula by schools and of preventive services by medical care practices.

Schools are important because they allow consistent contact with children over their developmental trajectory and they provide a place where acquisition of new information and skills is normative ( Orlandi, 1996b ). Although much emphasis has been placed on developing effective health behavior change curricula for students throughout their school years, the literature is replete with evaluations of school-based curricula that suggest that such programs have been less than successful ( Bush et al., 1989 ; Parcel et al., 1990 ; Rohrbach et al., 1996 ; Walter, 1989 ). Challenges or barriers to effective diffusion of the programs include organizational issues, such as limited time and resources, few incentives for the organization to give priority to health issues, pressure to focus on academic curricula to improve student performance on proficiency tests, and unclear role delineation in terms of responsibility for the program; extra-organizational issues or “environmental turbulence,” such as restructuring of schools, changing school schedules or enrollments, uncertainties in public funding; and characteristics of the programs that make them incompatible with the potential host organizations, such as being too long, costly, and complex ( Rohrbach et al., 1996 ; Smith et al., 1995 ).

Initial or traditional efforts to enhance diffusion focused on the characteristics of the intervention program, but more recent studies have focused on the change process itself. Two NCI-funded studies to diffuse tobacco prevention programs throughout schools in North Carolina and Texas targeted the four stages of change and were evaluated through randomized, controlled trials ( Goodman et al., 1997 ; Parcel et al., 1989 , 1995 ; Smith et al., 1995 ; Steckler et al., 1992 ). Teacher-training interventions appeared to enhance the likelihood of implementation in each study (an effect that has been replicated in other investigations; see Perry et al., 1990 ). However, other strategies (e.g., process consultation, newsletters, self-paced instructional video) were less successful at enhancing adoption and institutionalization. None of the strategies attempted to change the organizing arrangements (such as reward systems or role responsibilities) of the school districts to support continued implementation of the program.

These results suggest that further reliance on organizational change theory might help programs diffuse more rapidly and thoroughly. For example, Rohrbach et al. (1996 , pp. 927–928) suggest that “change agents and school personnel should work as a team to diagnose any problems that may impede program implementation and develop action plans to address them [and that]…change agents need to promote the involvement of teachers, as well as that of key administrators, in decisions about program adoption and implementation.” These suggestions are clearly consistent with an organizational development approach. Goodman and colleagues (1997) suggest that the North Carolina intervention might have been more effective had it included more participative problem diagnosis and action planning, and had consultation been less directive and more oriented toward increasing the fit between the host organization and the program.

Medical Practices

Primary care medical practices have long been regarded as organizational settings that provide opportunities for health behavior interventions. With the growth of managed care and its financial incentives for prevention, these opportunities are even greater ( Gordon et al., 1996 ). Much effort has been invested in the development of effective programs and processes for clinical practices to accomplish health behavior change. However, the diffusion of such programs to medical practices has been slow (e.g., Anderson and May, 1995 ; Lewis, 1988 ).

Most systemic programs encourage physicians, nurses, health educators, and other members of the health-professional team to provide more consistent change-related statements and behavioral support for health-enhancing behaviors in patients ( Chapter 5 ). There might be fundamental aspects of a medical practice that support or inhibit efforts to improve health-related patient behavior ( Walsh and McPhee, 1992 ). Visual reminders to stay up-to-date on immunizations, to stop smoking cigarettes, to use bicycle helmets, and to eat a healthy diet are examples of systemic support for patient activation and self-care ( Lando et al., 1995 ). Internet support for improved self-management of diabetes has shown promise ( McKay et al., 1998 ). Automated chart reminders to ask about smoking status, update immunizations, and ensure timely cancer-screening examinations—such as Pap smears, mammography, and prostate screening—are systematic practice-based improvements that increase the rate of success in reaching stated goals on health process and health behavior measures ( Cummings et al., 1997 ). Prescription forms for specific telephone callback support can enhance access to telephone-based counseling for weight loss, smoking cessation, and exercise and can make such behavioral teaching and counseling more accessible ( Pronk and O'Connor, 1997 ). Those and other structural characteristics of clinical practices are being used and evaluated as systematic practice-based changes that can improve treatment for, and prevention of, various chronic illnesses ( O'Connor et al., 1998 ).

Barriers to diffusion include physician factors, such as lack of training, lack of time, and lack of confidence in one's prevention skills; health-care system factors, such as lack of health-care coverage and inadequate reimbursement for preventive services in fee-for-service systems; and office organization factors, such as inflexible office routines, lack of reminder systems, and unclear assignment of role responsibilities ( Thompson et al., 1995 ; Wagner et al., 1996 ).

The capitated financing of many managed-care organizations greatly reduces system barriers. Interventions that have focused solely on physician knowledge and behavior have not been very effective. Interventions that also addressed office organization factors have been more effective ( Solberg et al., 1998b ; Thompson et al., 1995 ). For example, the diffusion of the Put Prevention Into Practice (PPIP) program ( Griffith et al., 1995 ), a comprehensive federal effort, was recommended by the U.S. Preventive Services Task Force and is distributed by federal agencies and through professional associations. Using a case study approach, McVea and colleagues (1996) studied the implementation of the program in family practice settings. They found that PPIP was “used not at all or only sporadically by the practices that had ordered the kit” (p. 363). The authors suggested that the practices that provided selected preventive services did not adopt the PPIP because they did not have the organizational skills and resources to incorporate the prevention systems into their office routines without external assistance.

Descriptive research clearly indicates a need for well-conceived and methodologically rigorous diffusion research. Many of the barriers to more rapid and effective diffusion are clearly “systems problems” ( Solberg et al., 1998b ). Thus, even though the results are somewhat mixed, recent work applying systems approaches and organizational development strategies to the diffusion dilemma is encouraging. In particular, the emphasis on building internal capacity for diffusion of the preventive interventions—for example, continuous quality improvement teams ( Solberg et al., 1998a ) and the identification and training of “program champions” within the adopting systems ( Smith et al., 1995 )—seems crucial for institutionalization of the programs.

Dissemination to Community-Based Groups

This section examines three aspects of dissemination: the need for dissemination of effective community interventions, community readiness for interventions, and the role of dissemination research.

Dissemination of Effective Community Interventions

Dissemination requires the identification of core and adaptive elements of an intervention ( Pentz et al., 1990 ; Pentz and Trebow, 1997 ; Price, 1989 ). Core elements are features of an intervention program or policy that must be replicated to maintain the integrity of the interventions as they are transferred to new settings. They include theoretically based behavior change strategies, targeting of multiple levels of influence, and the involvement of empowered community leaders ( Florin and Wandersman, 1990 ; Pentz, 1998 ). Practitioners need training in specific strategies for the transfer of core elements ( Bero et al., 1998 ; Orlandi, 1986 ). In addition, the amount of intervention delivered and its reach into the targeted population might have to be unaltered to replicate behavior change in a new setting. Research has not established a quantitative “dose” of intervention or a quantitative guide for the percentage of core elements that must be implemented to achieve behavior change. Process evaluation can provide guidance regarding the desired intensity and fidelity to intervention protocol. Botvin and colleagues (1995) , for example, found that at least half the prevention program sessions needed to be delivered to achieve the targeted effects in a youth drug abuse prevention program. They also found that increased prevention effects were associated with fidelity to the intervention protocol, which included standardized training of those implementing the program, implementation within 2 weeks of that training, and delivery of at least two program sessions or activities per week ( Botvin et al., 1995 ).

Adaptive elements are features of an intervention that can be tailored to local community, organizational, social, and economic realities of a new setting without diluting the effectiveness of the intervention ( Price, 1989 ). Adaptations might include timing and scheduling or culturally meaningful themes through which the educational and behavior change strategies are delivered.

Community and Organizational Readiness

Community and organizational factors might facilitate or hinder the adoption, implementation, and maintenance of innovative interventions. Diffusion theory assumes that the unique characteristics of the adopter (such as community, school, or worksite) interact with the specific attributes of the innovation (risk factor targets) to determine whether and when an innovation is adopted and implemented ( Emmons et al., 2000 ; Rogers, 1983 , 1995 ). Rogers (1983 , 1995) has identified characteristics that predict the adoption of innovations in communities and organizations. For example, an innovation that has a relative advantage over the idea or activity that it supersedes is more likely to be adopted. In the case of health promotion, organizations might see smoke-free worksites as having a relative advantage not only for employee health, but also for the reduction of absenteeism. An innovation that is seen as compatible with adopters' sociocultural values and beliefs, with previously introduced ideas, or with adopters' perceived needs for innovation is more likely to be implemented. The less complex, and clearer the innovation, the more likely it is to be adopted. For example, potential adopters are more likely to change their health behaviors when educators provide clear specification of the skills needed to change the behaviors. Trialability is the degree to which an innovation can be experimented with on a limited basis. In nutrition education, adopters are more likely to prepare low-fat recipes at home if they have an opportunity to taste the results in a class or supermarket and are given clear, simple directions for preparing them. Finally, observability is the degree to which the results of an innovation are visible to others. In health behavior change, an example of observability might be attention given to a health promotion program by the popular press ( Pentz, 1998 ; Rogers, 1983 ).

Dissemination Research

The ability to identify effective interventions and explain the characteristics of communities and organizations that support dissemination of those interventions provides the basic building blocks for dissemination. It is necessary, however, to learn more about how dissemination occurs to increase its effectiveness ( Pentz, 1998 ). What are the core elements of interventions, and how can they be adapted ( Price, 1989 )? How do the predictors of diffusion function in the dissemination process ( Pentz, 1998 )? What characteristics of community leaders are associated with dissemination of prevention programs? What personnel and material resources are needed to implement and maintain prevention programs? How can written materials and training in program implementation be provided to preserve fidelity to core elements ( Price, 1989 )?

Dissemination research could help identify alternative ways of conceptualizing the transfer of intervention technology from research to practice settings. Rather than disseminating an exact replication of specific tested interventions, program transfer might be based on core and adaptive intervention components at both the individual and the community organizational levels (Blaine et al., 1997; Perry, 1999). Dissemination might also be viewed as replicating a community-based participatory research process, or as a planning process that incorporates core components (Perry, 1999), rather than as exact duplication of all aspects of intervention activities.

The principles of community-based participatory research presented here could be operationalized and used as criteria for examining the extent to which these dimensions were disseminated to other projects. The guidelines developed by Green and colleagues (1995) for classifying participatory research projects could also be used. Similarly, based on her research and experience with children and adolescents in school health behavior change programs, Perry (1999) developed a guidebook that outlines a 10-step process for developing community-wide health behavior programs for children and adolescents.

Facilitating Interorganizational Linkages

To address complex health issues effectively, organizations increasingly form links with one another, either as dyadic connections (pairs) or as networks (Alter and Hage, 1992). The potential benefits of these interorganizational collaborations include access to new information, ideas, materials, and skills; minimization of duplication of effort and services; shared responsibility for complex or controversial programs; increased power and influence through joint action; and increased options for intervention (e.g., one organization might not experience the political constraints that hamper the activities of another; Butterfoss et al., 1993). However, interorganizational linkages also have costs. Time and resources must be devoted to forming and maintaining relationships, negotiating the assessment and planning processes can take longer, and an organization can find that the policies and procedures of other organizations are incompatible with its own (Alter and Hage, 1992; Butterfoss et al., 1993).

One way a dyadic linkage between organizations can serve health-promoting goals grows out of the diffusion of innovations through organizations. An organization can serve as a "linking agent" (Monahan and Scheirer, 1988), facilitating the adoption of a health innovation by organizations that are potential implementers. For example, the National Institute for Dental Research (NIDR) developed a school-based program to encourage children to use a fluoride mouth rinse to prevent caries. Rather than marketing the program directly to the schools, NIDR worked with state agencies to promote it. In a national study, Monahan and Scheirer (1988) found that when state agencies devoted more staff to the program and located a moderate proportion of their staff in regional offices (rather than in a central office), a larger proportion of school districts was likely to implement the program. Other programs, such as the Heart Partners program of the American Heart Association (Roberts-Gray et al., 1998), have used the concept of linking agents to diffuse preventive interventions. Studies of these approaches attempt to identify the organizational policies, procedures, and priorities that permit the linking agent to successfully reach a large proportion of the organizations that might implement the health behavior program. However, the research in this area does not yet allow general conclusions or guidelines to be drawn.

Interorganizational networks are commonly used in community-wide health initiatives. Such networks might be composed of similar organizations that coordinate service delivery (often called consortia) or organizations from different sectors that bring their respective resources and expertise to bear on a complex health problem (often called coalitions). Multihospital systems or linkages among managed-care organizations and local health departments for treating sexually transmitted diseases (Rutherford, 1998) are examples of consortia. The interorganizational networks used in Project ASSIST and COMMIT, major NCI initiatives to reduce the prevalence of smoking, are examples of coalitions (U.S. Department of Health and Human Services, 1990).

Stage theory has been applied to the formation and performance of interorganizational networks (Alter and Hage, 1992; Goodman and Wandersman, 1994). Various authors have posited somewhat different stages of development, but all include: initial actions to form the coalition; formalization of the coalition's mission, structure, and processes; planning, development, and implementation of programmatic activities; and accomplishment of the coalition's health goals. Stage theory suggests that different strategies are likely to facilitate success at different stages of development (Lewin, 1951; Schein, 1987). The complexity, formalization, staffing patterns, communication and decision-making patterns, and leadership styles of the interorganizational network will affect its ability to progress toward its goals (Alter and Hage, 1992; Butterfoss et al., 1993; Kegler et al., 1998a, b).

In 1993, Butterfoss and colleagues reviewed the literature on community coalitions and found "relatively little empirical evidence" (p. 315) to bring to bear on the assessment of their effectiveness. Although the use of coalitions in community-wide health promotion continues, the accumulated evidence supporting their effectiveness remains slim. Several case studies suggest that coalitions and consortia can be successful in bringing about changes in health behaviors, health systems, and health status (e.g., Butterfoss et al., 1998; Fawcett et al., 1997; Kass and Freudenberg, 1997; Myers et al., 1994; Plough and Olafson, 1994). However, the conditions under which coalitions are most likely to thrive, and the strategies and processes that are most likely to result in their effective functioning, have not been consistently identified empirically.

Evaluation models, such as the FORECAST model (Goodman and Wandersman, 1994) and the model proposed by the Work Group on Health Promotion and Community Development at the University of Kansas (Fawcett et al., 1997), address the lack of systematic and rigorous evaluation of coalitions. These models provide strategies and tools for assessing coalition functioning at all stages of development, from initial formation to ultimate influence on the coalition's health goals and objectives. They are predicated on the assumption that the successful passage through each stage is necessary, but not sufficient, to ensure successful passage through the next stage. Widespread use of these and other evaluation frameworks and tools can increase the number and quality of the empirical studies of the effects of interorganizational linkages.

Orlandi (1996a) states that diffusion failures often result from a lack of fit between the proposed host organization and the intervention program. Thus, he suggests that if the purpose is to diffuse an existing program, the design of the program and the process of diffusion need to be flexible enough to adapt to the needs and resources of the organization. If the purpose is to develop and disseminate a new program, the innovation development and transfer processes should be integrated. These conclusions are consistent with some of the studies reviewed above. For example, McVea et al. (1996) concluded that a "one size fits all" approach to clinical preventive systems was not likely to diffuse effectively.

  • Aday LA. Evaluating the Healthcare System: Effectiveness, Efficiency, and Equity. Chicago: Health Administration Press; 1998.
  • Alter C, Hage J. Organizations Working Together. Newbury Park, CA: Sage; 1992.
  • Altman DG. Sustaining interventions in community systems: On the relationship between researchers and communities. Health Psychology. 1995; 14 :526–536. [ PubMed : 8565927 ]
  • Anderson LM, May DS. Has the use of cervical, breast, and colorectal cancer screening increased in the United States? American Journal of Public Health. 1995; 85 :840–842. [ PMC free article : PMC1615482 ] [ PubMed : 7762721 ]
  • Anderson NB. After the discoveries, then what? A new approach to advancing evidence-based prevention practice. Programs and abstracts from NIH Conference, Preventive Intervention Research at the Crossroads; Bethesda, MD. 1998. pp. 74–75.
  • Anderson NH, Zalinski J. Functional measurement approach to self-estimation in multiattribute evaluation. In: Anderson NH, editor. Contributions to Information Integration Theory, Vol. 1: Cognition; Vol. 2: Social; Vol. 3: Developmental. Hillsdale, NJ: Erlbaum Press; 1990. pp. 145–185.
  • Antonovsky A. The life cycle, mental health and the sense of coherence. Israel Journal of Psychiatry and Related Sciences. 1985; 22 (4):273–280. [ PubMed : 3836223 ]
  • Baker EA, Brownson CA. Defining characteristics of community-based health promotion programs. In: Brownson RC, Baker EA, Novick LF, editors. Community -Based Prevention Programs that Work. Gaithersburg, MD: Aspen; 1999. pp. 7–19.
  • Balestra DJ, Littenberg B. Should adult tetanus immunization be given as a single vaccination at age 65? A cost-effectiveness analysis. Journal of General Internal Medicine. 1993; 8 :405–412. [ PubMed : 8410405 ]
  • Barry MJ, Fowler FJ, Mulley AG, Henderson JV, Wennberg JE. Patient reactions to a program designed to facilitate patient participation in treatment decisions for benign prostatic hyperplasia. Medical Care. 1995; 33 :771–782. [ PubMed : 7543639 ]
  • Beery B, Nelson G. Making outcomes matter. Seattle: Group Health/Kaiser Permanente Community Foundation; 1998. Evaluating community-based health initiatives: Dilemmas, puzzles, innovations and promising directions.
  • Bennett KJ, Torrance GW. Measuring health preferences and utilities: Rating scale, time trade-off and standard gamble methods. In: Spliker B, editor. Quality of Life and Pharmacoeconomics in Clinical Trials. Philadelphia: Lippincott-Raven; 1996. pp. 235–265.
  • Berger ES, Hendee WR. The expression of health risk information. Archives of Internal Medicine. 1989; 149 :1507–1508. [ PubMed : 2742423 ]
  • Berger PL, Neuhaus RJ. To empower people: The role of mediating structures in public policy. Washington, DC: American Enterprise Institute for Public Policy Research; 1977.
  • Bero LA, Grilli R, Grimshaw JM, Harvey E, Oxman AD, Thomson MA. Closing the gap between research and practice: An overview of systematic reviews of interventions to promote the implementation of research findings. British Medical Journal. 1998; 317 :465–468. [ PMC free article : PMC1113716 ] [ PubMed : 9703533 ]
  • Bickman L. The functions of program theory. New Directions in Program Evaluation. 1987; 33 :5–18.
  • Bigger JTJ. Antiarrhythmic treatment: An overview. American Journal of Cardiology. 1984; 53 :8B–16B. [ PubMed : 6364771 ]
  • Bishop R. Initiating empowering research? New Zealand Journal of Educational Studies. 1994; 29 :175–188.
  • Bishop R. Addressing issues of self-determination and legitimation in Kaupapa Maori research. In: Webber B, editor. Research Perspectives in Maori Education. Wellington, New Zealand: Council for Educational Research; 1996. pp. 143–160.
  • Black WC, Nease RFJ, Tosteson AN. Perceptions of breast cancer risk and screening effectiveness in women younger than 50 years of age. Journal of the National Cancer Institute. 1995; 87 :720–731. [ PubMed : 7563149 ]
  • Blaine TM, Forster JL, Hennrikus D, O'Neil S, Wolfson M, Pham H. Creating tobacco control policy at the local level: Implementation of a direct action organizing approach. Health Education and Behavior. 1997; 24 :640–651. [ PubMed : 9307899 ]
  • Botvin GJ, Baker E, Dusenbury L, Botvin EM, Diaz T. Long-term followup results of a randomized drug abuse prevention trial in a white middle-class population. Journal of the American Medical Association. 1995; 273 :1106–1112. [ PubMed : 7707598 ]
  • Brown ER. Community action for health promotion: A strategy to empower individuals and communities. International Journal of Health Services. 1991; 21 :441–456. [ PubMed : 1917205 ]
  • Brown P. The role of the evaluator in comprehensive community initiatives. In: Connell JP, Kubisch AC, Schorr LB, Weiss CH, editors. New Approaches to Evaluating Community Initiatives. Washington, DC: Aspen; 1995. pp. 201–225.
  • Bush PJ, Zuckerman AE, Taggart VS, Theiss PK, Peleg EO, Smith SA. Cardiovascular risk factor prevention in black school children: The "Know Your Body" Evaluation Project. Health Education Quarterly. 1989; 16 :215–228. [ PubMed : 2732064 ]
  • Butterfoss FD, Morrow AL, Rosenthal J, Dini E, Crews RC, Webster JD, Louis P. CINCH: An urban coalition for empowerment and action. Health Education and Behavior. 1998; 25 :212–225. [ PubMed : 9548061 ]
  • Butterfoss FD, Goodman RM, Wandersman A. Community coalitions for prevention and health promotion. Health Education Research. 1993; 8 :315–330. [ PubMed : 10146473 ]
  • Campbell DT, Stanley JC. Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally; 1963.
  • Cardiac Arrhythmia Suppression Trial (CAST) Investigators. Preliminary report: Effect of encainide and flecainide on mortality in a randomized trial of arrhythmia suppression after myocardial infarction. The Cardiac Arrhythmia Suppression Trial (CAST) Investigators. New England Journal of Medicine. 1989; 321 :406–412. [ PubMed : 2473403 ]
  • Cassileth BR, Zupkis RV, Sutton-Smith K, March V. Information and participation preferences among cancer patients. Annals of Internal Medicine. 1980; 92 :832–836. [ PubMed : 7387025 ]
  • Centers for Disease Control, Agency for Toxic Substances and Disease Registry (CDC/ ATSDR). Principles of Community Engagement. Atlanta: CDC Public Health Practice Program Office; 1997.
  • Chambless DL, Hollon SD. Defining empirically supported therapies. Journal of Consulting and Clinical Psychology. 1998; 66 :7–18. [ PubMed : 9489259 ]
  • Clemen RT. Making Hard Decisions. Boston: PWS-Kent; 1991.
  • Compas BE, Haaga DF, Keefe FJ, Leitenberg H, Williams DA. Sampling of empirically supported psychological treatments from health psychology: Smoking, chronic pain, cancer, and bulimia nervosa. Journal of Consulting and Clinical Psychology. 1998; 66 :89–112. [ PubMed : 9489263 ]
  • Cook TD, Reichardt CS. Qualitative and Quantitative Methods in Evaluation Research. Beverly Hills, CA: Sage; 1979.
  • Cornwall A. Towards participatory practice: Participatory rural appraisal (PRA) and the participatory process. In: deKoning K, Martin M, editors. Participatory Research in Health: Issues and Experiences. London: Zed Books; 1996. pp. 94–107.
  • Cornwall A, Jewkes R. What is participatory research? Social Science and Medicine. 1995; 41 :1667–1676. [ PubMed : 8746866 ]
  • Cousins JB, Earl LM, editors. Participatory Evaluation: Studies in Evaluation Use and Organizational Learning. London: Falmer; 1995.
  • Cromwell J, Bartosch WJ, Fiore MC, Hasselblad V, Baker T. Cost-effectiveness of the clinical practice recommendations in the AHCPR guideline for smoking cessation. Journal of the American Medical Association. 1997; 278 :1759–1766. [ PubMed : 9388153 ]
  • Cummings NA, Cummings JL, Johnson JN, editors. Behavioral Health in Primary Care: A Guide for Clinical Integration. Madison, CT: Psychosocial Press; 1997.
  • Danese MD, Powe NR, Sawin CT, Ladenson PW. Screening for mild thyroid failure at the periodic health examination: A decision and cost-effectiveness analysis. Journal of the American Medical Association. 1996; 276 :285–292. [ PubMed : 8656540 ]
  • Dannenberg AL, Gielen AC, Beilenson PL, Wilson MH, Joffe A. Bicycle helmet laws and educational campaigns: An evaluation of strategies to increase children's helmet use. American Journal of Public Health. 1993; 83 :667–674. [ PMC free article : PMC1694700 ] [ PubMed : 8484446 ]
  • Deber RB. Physicians in health care management. 7. The patient-physician partnership: Changing roles and the desire for information. Canadian Medical Association Journal. 1994; 151 :171–176. [ PMC free article : PMC1336877 ] [ PubMed : 8039062 ]
  • Deber RB, Kraetschmer N, Irvine J. What role do patients wish to play in treatment decision making? Archives of Internal Medicine. 1996; 156 :1414–1420. [ PubMed : 8678709 ]
  • DeJong W, Hingson R. Strategies to reduce driving under the influence of alcohol. Annual Review of Public Health. 1998; 19 :359–378. [ PubMed : 9611624 ]
  • deKoning K, Martin M. Participatory research in health: Setting the context. In: deKoning K, Martin M, editors. Participatory Research in Health: Issues and Experiences. London: Zed Books; 1996. pp. 1–18.
  • Denzin NK. The research act. In: Denzin NK, editor. The Research Act in Sociology: A Theoretical Introduction to Sociological Methods. Chicago, IL: Aldine; 1970. pp. 345–360.
  • Denzin NK. The suicide machine. In: Long RE, editor. Suicide. 2. Vol. 67. New York: H.W. Wilson; 1994.
  • Dignan MB, editor. Measurement and evaluation of health education. Springfield, IL: C.C. Thomas; 1989.
  • Dockery G. Rhetoric or reality? Participatory research in the National Health Service, UK. In: deKoning K, Martin M, editors. Participatory Research in Health: Issues and Experiences. London: Zed Books; 1996. pp. 164–176.
  • Donaldson SI, Graham JW, Hansen WB. Testing the generalizability of intervening mechanism theories: Understanding the effects of adolescent drug use prevention interventions. Journal of Behavioral Medicine. 1994; 17 :195–216. [ PubMed : 8035452 ]
  • Dressler WW. Commentary on “Community Research: Partnership in Black Communities.” American Journal of Preventive Medicine. 1993; 9 :32–34. [ PubMed : 8123284 ]
  • Durie MH. Characteristics of Maori health research. Presented at Hui Whakapiripiri: A Hui to Discuss Strategic Directions for Maori Health Research; Wellington, New Zealand: Eru Pomare Maori Health Research Centre, Wellington School of Medicine, University of Otago; 1996.
  • Eddy DM. Screening for cervical cancer. Annals of Internal Medicine. 1990; 113 :214–226. Reprinted in Eddy, D.M. (1991). Common Screening Tests. Philadelphia: American College of Physicians. [ PubMed : 2115753 ]
  • Edelson JT, Weinstein MC, Tosteson ANA, Williams L, Lee TH, Goldman L. Long-term cost-effectiveness of various initial monotherapies for mild to moderate hypertension. Journal of the American Medical Association. 1990; 263 :407–413. [ PubMed : 2136759 ]
  • Edworthy J, Adams AS. Warning Design. London: Taylor and Francis; 1997.
  • Elden M, Levin M. Cogenerative learning. In: Whyte WF, editor. Participatory Action Research. Newbury Park, CA: Sage; 1991. pp. 127–142.
  • Emmons KM, Thompson B, Sorensen G, Linnan L, Basen-Engquist K, Biener L, Watson M. The relationship between organizational characteristics and the adoption of workplace smoking policies. Health Education and Behavior. 2000; 27 :483–501. [ PubMed : 10929755 ]
  • Ende J, Kazis L, Ash A, Moskowitz MA. Measuring patients' desire for autonomy: Decision making and information-seeking preferences among medical patients. Journal of General Internal Medicine. 1989; 4 :23–30. [ PubMed : 2644407 ]
  • Eng E, Blanchard L. Action-oriented community diagnosis: A health education tool. International Quarterly of Community Health Education. 1990–91; 11 :93–110. [ PubMed : 20840941 ]
  • Eng E, Parker EA. Measuring community competence in the Mississippi Delta: the interface between program evaluation and empowerment. Health Education Quarterly. 1994; 21 :199–220. [ PubMed : 8021148 ]
  • Erdmann TC, Feldman KW, Rivara FP, Heimbach DM, Wall HA. Tap water burn prevention: The effect of legislation. Pediatrics. 1991; 88 :572–577. [ PubMed : 1881739 ]
  • Ericsson A, Simon HA. Verbal Protocol As Data. Cambridge, MA: MIT Press; 1994.
  • Fawcett SB, Lewis RK, Paine-Andrews A, Francisco VT, Richter KP, Williams EL, Copple B. Evaluating community coalitions for prevention of substance abuse: The case of Project Freedom. Health Education and Behavior. 1997; 24 :812–828. [ PubMed : 9408793 ]
  • Fawcett SB. Some values guiding community research and action. Journal of Applied Behavior Analysis. 1991; 24 :621–636. [ PMC free article : PMC1279615 ] [ PubMed : 16795759 ]
  • Fawcett SB, Paine-Andrews A, Francisco VT, Schultz JA, Richter KP, Lewis RK, Harris KJ, Williams EL, Berkley JY, Lopez CM, Fisher JL. Empowering community health initiatives through evaluation. In: Fetterman D, Kaftarian S, Wandersman A, editors. Empowerment Evaluation: Knowledge And Tools Of Self-Assessment And Accountability. Thousand Oaks, CA: Sage; 1996. pp. 161–187.
  • Feinstein AR, Horwitz RI. Problems in the “evidence” of “evidence-based medicine.” American Journal of Medicine. 1997; 103 :529–535. [ PubMed : 9428837 ]
  • Fischhoff B. Risk Perception and Risk Communication. Presented at the Workshop on Health, Communications and Behavior of the IOM Committee on Health and Behavior: Research, Practice and Policy; Irvine, CA. 1999a.
  • Fischhoff B. Why (cancer) risk communication can be hard. Journal of the National Cancer Institute Monographs. 1999b; 25 :7–13. [ PubMed : 10854449 ]
  • Fischhoff B, Bruine de Bruin W. Fifty–fifty = 50%? Journal of Behavioral Decision Making. 1999; 12 :149–163.
  • Fischhoff B, Downs J. Accentuate the relevant. Psychological Science. 1997; 8 :154–158.
  • Fisher EB Jr. The results of the COMMIT trial. American Journal of Public Health. 1995; 85 :159–160. [ PMC free article : PMC1615304 ] [ PubMed : 7856770 ]
  • Flay B. Efficacy and effectiveness trials (and other phases of research) in the development of health promotion programs. Preventive Medicine. 1986; 15 :451–474. [ PubMed : 3534875 ]
  • Flood AB, Wennberg JE, Nease RFJ, Fowler FJJ, Ding J, Hynes LM. The importance of patient preference in the decision to screen for prostate cancer. Prostate Patient Outcomes Research Team [see comments] Journal of General Internal Medicine. 1996; 11 :342–349. [ PubMed : 8803740 ]
  • Florin P, Wandersman A. An introduction to citizen participation, voluntary organizations, and community development: Insights for empowerment through research. American Journal of Community Psychology. 1990; 18 :41–53.
  • Francisco VT, Paine AL, Fawcett SB. A methodology for monitoring and evaluating community health coalitions. Health Education Research. 1993; 8 :403–416. [ PubMed : 10146477 ]
  • Freire P. Education for Critical Consciousness. New York: Continuum; 1987.
  • Frick MH, Elo O, Haapa K, Heinonen OP, Heinsalmi P, Helo P, Huttunen JK, Kaitaniemi P, Koskinen P, Manninen V, Maenpaa H, Malkonen M, Manttari M, Norola S, Pasternack A, Pikkarainen J, Romo M, Sjoblom T, Nikkila EA. Helsinki Heart Study: Primary-prevention trial with gemfibrozil in middle-aged men with dyslipidemia. Safety of treatment, changes in risk factors, and incidence of coronary heart disease. New England Journal of Medicine. 1987; 317 :1237–1245. [ PubMed : 3313041 ]
  • Friedman LM, Furberg CM, De Mets DL. Fundamentals of Clinical Trials. St. Louis: Mosby-Year Book; 1985.
  • Frosch DL, Kaplan RM. Shared decision-making in clinical practice: Past research and future directions. American Journal of Preventive Medicine. 1999; 17 :285–294. [ PubMed : 10606197 ]
  • Gaventa J. The powerful, the powerless, and the experts: Knowledge struggles in an information age. In: Park P, Brydon-Miller M, Hall B, Jackson T, editors. Voices of Change: Participatory Research In The United States and Canada. Westport, CT: Bergin and Garvey; 1993. pp. 21–40.
  • Gentner D, Stevens A. Mental Models (Cognitive Science). Hillsdale, NJ: Erlbaum; 1983.
  • Gold MR, Siegel JE, Russell LB, Weinstein MC, editors. Cost-Effectiveness in Health And Medicine. New York: Oxford University Press; 1996.
  • Goldman L, Weinstein MC, Goldman PA, Williams LW. Cost-effectiveness of HMG-CoA reductase inhibition. Journal of the American Medical Association. 1991; 265 :1145–1151. [ PubMed : 1899896 ]
  • Golomb BA. Cholesterol and violence: is there a connection? Annals of Internal Medicine. 1998; 128 :478–487. [ PubMed : 9499332 ]
  • Goodman RM. Principles and tools for evaluating community-based prevention and health promotion programs. In: Brownson RC, Baker EA, Novick LF, editors. Community-Based Prevention Programs That Work. Gaithersburg, MD: Aspen; 1999. pp. 211–227.
  • Goodman RM, Wandersman A. FORECAST: A formative approach to evaluating community coalitions and community-based initiatives. Journal of Community Psychology, Supplement. 1994:6–25.
  • Goodman RM, Steckler A, Kegler MC. Mobilizing organizations for health enhancement: Theories of organizational change. In: Glanz K, Lewis FM, Rimer BK, editors. Health Behavior and Health Education. San Francisco: Jossey-Bass; 1997. pp. 287–312.
  • Gordon RL, Baker EL, Roper WL, Omenn GS. Prevention and the reforming U.S. health care system: Changing roles and responsibilities for public health. Annual Review of Public Health. 1996; 17 :489–509. [ PubMed : 8724237 ]
  • Gottlieb NH, McLeroy KR. Social health. In: O'Donnell MP, Harris JS, editors. Health promotion in the workplace. Albany, NY: Delmar; 1994. pp. 459–493.
  • Green LW. Evaluation and measurement: Some dilemmas for health education. American Journal of Public Health. 1977; 67 :155–166. [ PMC free article : PMC1653552 ] [ PubMed : 402085 ]
  • Green LW, Gordon NP. Productive research designs for health education investigations. Health-Education. 1982; 13 :4–10.
  • Green LW, Lewis FM. Measurement and Evaluation in Health Education and Health Promotion. Palo Alto, CA: Mayfield; 1986.
  • Green LW, George MA, Daniel M, Frankish CJ, Herbert CJ, Bowie WR, O'Neil M. Study of Participatory Research in Health Promotion. University of British Columbia, Vancouver: The Royal Society of Canada; 1995.
  • Green LW, Richard L, Potvin L. Ecological foundations of health promotion. American Journal of Health Promotion. 1996; 10 :270–281. [ PubMed : 10159708 ]
  • Greenfield S, Kaplan S, Ware JE. Expanding patient involvement in care. Annals of Internal Medicine. 1985; 102 :520–528. [ PubMed : 3977198 ]
  • Greenfield S, Kaplan SH, Ware JE, Yano EM, Frank HJL. Patients participation in medical care: Effects on blood sugar control and quality of life in diabetes. Journal of General Internal Medicine. 1988; 3 :448–457. [ PubMed : 3049968 ]
  • Greenwald P. Epidemiology: A step forward in the scientific approach to preventing cancer through chemoprevention. Public Health Reports. 1984; 99 :259–264. [ PMC free article : PMC1424586 ] [ PubMed : 6429723 ]
  • Greenwald P, Cullen JW. A scientific approach to cancer control. CA: A Cancer Journal for Clinicians. 1984; 34 :328–332. [ PubMed : 6437624 ]
  • Griffith HM, Dickey L, Kamerow DB. Put prevention into practice: a systematic approach. Journal of Public Health Management and Practice. 1995; 1 :9–15. [ PubMed : 10186631 ]
  • Guba EG, Lincoln YS. Fourth Generation Evaluation. Newbury Park, CA: Sage; 1989.
  • Hadden SG. Read The Label: Reducing Risk By Providing Information. Boulder, CO: Westview; 1986.
  • Hall BL. From margins to center? The development and purpose of participatory research. American Sociologist. 1992; 23 :15–28.
  • Hancock L, Sanson-Fisher RW, Redman S, Burton R, Burton L, Butler J, Girgis A, Gibberd R, Hensley M, McClintock A, Reid A, Schofield M, Tripodi T, Walsh R. Community action for health promotion: A review of methods and outcomes 1990–1995. American Journal of Preventive Medicine. 1997; 13 :229–239. [ PubMed : 9236957 ]
  • Hancock T. The healthy city from concept to application: Implications for research. In: Davies JK, Kelly MP, editors. Healthy Cities: Research and Practice. New York: Routledge; 1993. pp. 14–24.
  • Hatch J, Moss N, Saran A, Presley-Cantrell L, Mallory C. Community research: partnership in Black communities. American Journal of Preventive Medicine. 1993; 9 :27–31. [ PubMed : 8123284 ]
  • He J, Ogden LG, Vupputuri S, Bazzano LA, Loria C, Whelton PK. Dietary sodium intake and subsequent risk of cardiovascular disease in overweight adults. Journal of the American Medical Association. 1999; 282 :2027–2034. [ PubMed : 10591385 ]
  • Health Care Financing Administration, Department of Health and Human Services. Highlights: National Health Expenditures, 1997. 1998. [Accessed October 31, 1998]. [On-line]. Available: http://www.hcfa.gov/stats/nhe-oact/hilites.htm .
  • Heaney CA, Goetzel RZ. A review of health-related outcomes of multi-component worksite health promotion programs. American Journal of Health Promotion. 1997; 11 :290–307. [ PubMed : 10165522 ]
  • Himmelman AT. Communities Working Collaboratively for a Change. University of Minnesota, MN: Humphrey Institute of Public Affairs; 1992.
  • Hingson R. Prevention of drinking and driving. Alcohol Health and Research World. 1996; 20 :219–226. [ PMC free article : PMC6876524 ] [ PubMed : 31798161 ]
  • Hollister RG, Hill J. Problems in the evaluation of community-wide initiatives. In: Connell JP, Kubisch AC, Schorr LB, Weiss CH, editors. New Approaches to Evaluating Community Initiatives. Washington, DC: Aspen; 1995. pp. 127–172.
  • Horwitz RI, Daniels SR. Bias or biology: Evaluating the epidemiologic studies of L-tryptophan and the eosinophilia-myalgia syndrome. Journal of Rheumatology Supplement. 1996; 46 :60–72. [ PubMed : 8895182 ]
  • Horwitz RI. Complexity and contradiction in clinical trial research. American Journal of Medicine. 1987a; 82 :498–510. [ PubMed : 3548349 ]
  • Horwitz RI. The experimental paradigm and observational studies of cause-effect relationships in clinical medicine. Journal of Chronic Disease. 1987b; 40 :91–99. [ PubMed : 3805237 ]
  • Horwitz RI, Singer BH, Makuch RW, Viscoli CM. Can treatment that is helpful on average be harmful to some patients? A study of the conflicting information needs of clinical inquiry and drug regulation. Journal of Clinical Epidemiology. 1996; 49 :395–400. [ PubMed : 8621989 ]
  • Horwitz RI, Viscoli CM, Clemens JD, Sadock RT. Developing improved observational methods for evaluating therapeutic effectiveness. American Journal of Medicine. 1990; 89 :630–638. [ PubMed : 1978566 ]
  • House ER. Evaluating with validity. Beverly Hills, CA: Sage; 1980.
  • Hugentobler MK, Israel BA, Schurman SJ. An action research approach to workplace health: Integrating methods. Health Education Quarterly. 1992; 19 :55–76. [ PubMed : 1568874 ]
  • Impicciatore P, Pandolfini C, Casella N, Bonati M. Reliability of health information for the public on the world wide web: Systematic survey of advice on managing fever in children at home. British Medical Journal. 1997; 314 :1875–1881. [ PMC free article : PMC2126984 ] [ PubMed : 9224132 ]
  • IOM (Institute of Medicine). Reducing the Burden of Injury: Advancing Prevention and Treatment. Washington, DC: National Academy; 1999. [ PubMed : 25101422 ]
  • IOM (Institute of Medicine). Speaking of Health: Assessing Health Communication. In: Chrvala C, Scrimshaw S, editors. Strategies for Diverse Populations. Washington, DC: National Academy Press; 2001.
  • Israel BA. Practitioner-oriented Approaches to Evaluating Health Education Interventions: Multiple Purposes—Multiple Methods. Paper presented at the National Conference on Health Education and Health Promotion; Tampa, FL. 1994.
  • Israel BA, Schurman SJ. Social support, control and the stress process. In: Glanz K, Lewis FM, Rimer BK, editors. Health Behavior and Health Education: Theory, Research and Practice. San Francisco: Jossey-Bass; 1990. pp. 179–205.
  • Israel BA, Baker EA, Goldenhar LM, Heaney CA, Schurman SJ. Occupational stress, safety, and health: Conceptual framework and principles for effective prevention interventions. Journal of Occupational Health Psychology. 1996; 1 :261–286. [ PubMed : 9547051 ]
  • Israel BA, Checkoway B, Schulz AJ, Zimmerman MA. Health education and community empowerment: conceptualizing and measuring perceptions of individual, organizational, and community control. Health Education Quarterly. 1994; 21 :149–170. [ PubMed : 8021145 ]
  • Israel BA, Cummings KM, Dignan MB, Heaney CA, Perales DP, Simons-Morton BG, Zimmerman MA. Evaluation of health education programs: Current assessment and future directions. Health Education Quarterly. 1995; 22 :364–389. [ PubMed : 7591790 ]
  • Israel BA, Schulz AJ, Parker EA, Becker AB. Review of community-based research: Assessing partnership approaches to improve public health. Annual Review of Public Health. 1998; 19 :173–202. [ PubMed : 9611617 ]
  • Israel BA, Schurman SJ, House JS. Action research on occupational stress: Involving workers as researchers. International Journal of Health Services. 1989; 19 :135–155. [ PubMed : 2925298 ]
  • Israel BA, Schurman SJ, Hugentobler MK. Conducting action research: Relationships between organization members and researchers. Journal of Applied Behavioral Science. 1992a; 28 :74–101.
  • Israel BA, Schurman SJ, Hugentobler MK, House JS. A participatory action research approach to reducing occupational stress in the United States. In: DiMartino V, editor. Preventing Stress at Work: Conditions of Work Digest. II. Geneva, Switzerland: International Labor Office; 1992b. pp. 152–163.
  • James SA. Racial and ethnic differences in infant mortality and low birth weight: A psychosocial critique. Annals of Epidemiology. 1993; 3 :130–136. [ PubMed : 8269064 ]
  • Johnson-Laird PN. Cognitive Science. 6. New York: Cambridge University Press; 1980. Mental models: Towards a cognitive science of language, inference and consciousness.
  • Kahneman D, Tversky A. Choices, values, and frames. American Psychologist. 1983; 39 :341–350.
  • Kahneman D, Tversky A. On the reality of cognitive illusions. Psychological Review. 1996; 103 :582–591. [ PubMed : 8759048 ]
  • Kalet A, Roberts JC, Fletcher R. How do physicians talk with their patients about risks? Journal of General Internal Medicine. 1994; 9 :402–404. [ PubMed : 7931751 ]
  • Kaplan RM. Value judgment in the Oregon Medicaid experiment. Medical Care. 1994; 32 :975–988. [ PubMed : 7934274 ]
  • Kaplan RM. Profile versus utility based measures of outcome for clinical trials. In: Staquet MJ, Hays RD, Fayers PM, editors. Quality of Life Assessment in Clinical Trials. London: Oxford University Press; 1998. pp. 69–90.
  • Kaplan RM, Anderson JP. The general health policy model: An integrated approach. In: Spilker B, editor. Quality of Life and Pharmacoeconomics in Clinical Trials. Philadephia: Lippencott-Raven; 1996. pp. 309–322.
  • Kasper JF, Mulley AG, Wennberg JE. Developing shared decision-making programs to improve the quality of health care. Quality Review Bulletin. 1992; 18 :183–190. [ PubMed : 1379705 ]
  • Kass D, Freudenberg N. Coalition building to prevent childhood lead poisoning: A case study from New York City. In: Minkler M, editor. Community Organizing and Community Building for Health. New Brunswick, NJ: Rutgers University Press; 1997. pp. 278–288.
  • Kegler MC, Steckler A, Malek SH, McLeroy K. A multiple case study of implementation in 10 local Project ASSIST coalitions in North Carolina. Health Education Research. 1998a; 13 :225–238. [ PubMed : 10181021 ]
  • Kegler MC, Steckler A, McLeroy K, Malek SH. Factors that contribute to effective community health promotion coalitions: A study of 10 Project ASSIST coalitions in North Carolina. American Stop Smoking Intervention Study for Cancer Prevention. Health Education and Behavior. 1998b; 25 :338–353. [ PubMed : 9615243 ]
  • Klein DC. Community Dynamics and Mental Health. New York: Wiley; 1968.
  • Klitzner M. A public health/dynamic systems approach to community-wide alcohol and other drug initiatives. In: Davis RC, Lurigo AJ, Rosenbaum DP, editors. Drugs and the Community. Springfield, IL: Charles C. Thomas; 1993. pp. 201–224.
  • Koepsell TD. Epidemiologic issues in the design of community intervention trials. In: Brownson R, Petitti D, editors. Applied Epidemiology: Theory To Practice. New York: Oxford University Press; 1998. pp. 177–212.
  • Koepsell TD, Diehr PH, Cheadle A, Kristal A. Invited commentary: Symposium on community intervention trials. American Journal of Epidemiology. 1995; 142 :594–599. [ PubMed : 7653467 ]
  • Koepsell TD, Wagner EH, Cheadle AC, Patrick DL, Martin DC, Diehr PH, Perrin EB, Kristal AR, Allan-Andrilla CH, Dey LJ. Selected methodological issues in evaluating community-based health promotion and disease prevention programs. Annual Review of Public Health. 1992; 13 :31–57. [ PubMed : 1599591 ]
  • Kong A, Barnett GO, Mosteller F, Youtz C. How medical professionals evaluate expressions of probability. New England Journal of Medicine. 1986; 315 :740–744. [ PubMed : 3748081 ]
  • Kraus JF. Effectiveness of measures to prevent unintentional deaths of infants and children from suffocation and strangulation. Public Health Report. 1985; 100 :231–240. [ PMC free article : PMC1424727 ] [ PubMed : 3920722 ]
  • Kraus JF, Peek C, McArthur DL, Williams A. The effect of the 1992 California motorcycle helmet use law on motorcycle crash fatalities and injuries. Journal of the American Medical Association. 1994; 272 :1506–1511. [ PubMed : 7966842 ]
  • Krieger N. Epidemiology and the web of causation: Has anyone seen the spider? Social Science and Medicine. 1994; 39 :887–903. [ PubMed : 7992123 ]
  • Krieger N, Rowley DL, Herman AA, Avery B, Phillips MT. Racism, sexism and social class: Implications for studies of health, disease and well-being. American Journal of Preventive Medicine. 1993; 9 :82–122. [ PubMed : 8123288 ]
  • La Puma J, Lawlor EF. Quality-adjusted life-years. Ethical implications for physicians and policymakers. Journal of the American Medical Association. 1990; 263 :2917–2921. [ PubMed : 2110986 ]
  • Labonte R. Health promotion and empowerment: reflections on professionalpractice. Health Education Quarterly. 1994; 21 :253–268. [ PubMed : 8021151 ]
  • Lalonde M. A new perspective on the health of Canadians. Ottawa, ON: Ministry of Supply and Services; 1974.
  • Lando HA, Pechacek TF, Pirie PL, Murray DM, Mittelmark MB, Lichtenstein E, Nothwehyr F, Gray C. Changes in adult cigarette smoking in the Minnesota Heart Health Program. American Journal of Public Health. 1995; 85 :201–208. [ PMC free article : PMC1615309 ] [ PubMed : 7856779 ]
  • Lantz PM, House JS, Lepkowski JM, Williams DR, Mero RP, Chen J. Socioeconomic factors, health behaviors, and mortality. Journal of the American Medical Association. 1998; 279 :1703–1708. [ PubMed : 9624022 ]
  • Last J. Redefining the unacceptable. Lancet. 1995; 346 :1642–1643. [ PubMed : 8551816 ]
  • Lather P. Research as praxis. Harvard Educational Review. 1986; 56 :259–277.
  • Lenert L, Kaplan RM. Validity and interpretation of preference-based measures of health-related quality of life. Medical Care. 2000; 38 :138–150. [ PubMed : 10982099 ]
  • Leventhal H, Cameron L. Behavioral theories and the problem of compliance. Patient Education and Counseling. 1987; 10 :117–138.
  • Levine DM, Becker DM, Bone LR, Stillman FA, Tuggle MB II, Prentice M, Carter J, Filippeli J. A partnership with minority populations: A community model of effectiveness research. Ethnicity and Disease. 1992; 2 :296–305. [ PubMed : 1467764 ]
  • Lewin K. Field Theory in Social Science. New York: Harper; 1951.
  • Lewis CE. Disease prevention and health promotion practices of primary care physicians in the United States. American Journal of Preventive Medicine. 1988; 4 :9–16. [ PubMed : 3079144 ]
  • Liao L, Jollis JG, DeLong ER, Peterson ED, Morris KG, Mark DB. Impact of an interactive video on decision making of patients with ischemic heart disease. Journal of General Internal Medicine. 1996; 11 :373–376. [ PubMed : 8803746 ]
  • Lichter AS, Lippman ME, Danforth DN Jr, d'Angelo T, Steinberg SM, deMoss E, MacDonald HD, Reichert CM, Merino M, Swain SM, et al. Mastectomy versus breast-conserving therapy in the treatment of stage I and II carcinoma of the breast: A randomized trial at the National Cancer Institute. Journalof Clinical Oncokgy. 1992; 10 :976–983. [ PubMed : 1588378 ]
  • Lillie-Blanton M, Hoffman SC. Conducting an assessment of health needs and resources in a racial/ethnic minority community. Health Services Research. 1995; 30 :225–236. [ PMC free article : PMC1070051 ] [ PubMed : 7721594 ]
  • Lincoln YS, Reason P. Editor's introduction. Qualitative Inquiry. 1996; 2 :5–11.
  • Linville PW, Fischer GW, Fischhoff B. AIDS risk perceptions and decision biases. In: Pryor JB, Reeder GD, editors. The Social Psychology of HIV Infection. Hillsdale, NJ: Lawrence Erlbaum; 1993. pp. 5–38.
  • Lipid Research Clinics Program. The Lipid Research Clinics Coronary Primary Prevention Trial results. I. Reduction in incidence of coronary heart disease. Journal of the American Medical Association. 1984; 251 :351–364. [ PubMed : 6361299 ]
  • Lipkus IM, Hollands JG. The visual communication of risk. Journal of National Cancer Institute Monographs. 1999; 25 :149–162. [ PubMed : 10854471 ]
  • Lipsey MW. Theory as method: Small theories of treatments. New Direction in Program Evaluation. 1993; 57 :5–38.
  • Lipsey MW, Polard JA. Driving toward theory in program evaluation: More models to choose from. Evaluation and Program Planning. 1989; 12 :317–328.
  • Lund AK, Williams AF, Womack KN. Motorcycle helmet use in Texas. Public Health Reports. 1991; 106 :576–578. [ PMC free article : PMC1580316 ] [ PubMed : 1910193 ]
  • Maguire P. School of Education. Amherst, MA: The University of Massachusetts; 1987. Doing Participatory Research: A Feminist Approach.
  • Maguire P. Considering more feminist participatory research: What's congruency got to do with it? Qualitative Inquiry. 1996; 2 :106–118.
  • Marin G, Marin BV. Research with Hispanic Populations. Newbury Park, CA: Sage; 1991.
  • Matt GE, Navarro AM. What meta-analyses have and have not taught us about psychotherapy effects: A review and future directions. Clinical Psychology Review. 1997; 17 :1–32. [ PubMed : 9125365 ]
  • Mazur DJ, Hickam DH. Patients' preferences for risk disclosure and role in decision making for invasive medical procedures. Journal of General Internal Medicine. 1997; 12 :114–117. [ PMC free article : PMC1497069 ] [ PubMed : 9051561 ]
  • McGraw SA, Stone EJ, Osganian SK, Elder JP, Perry CL, Johnson CC, Parcel GS, Webber LS, Luepker RV. Design of process evaluation within the child and adolescent trial for cardiovascular health (CATCH). Health Education Quarterly. 1994:S5–S26. [ PubMed : 8113062 ]
  • McIntyre S, West P. What does the phrase “safer sex” mean to you? AIDS. 1992; 7 :121–126. [ PubMed : 8442902 ]
  • McKay HG, Feil EG, Glasgow RE, Brown JE. Feasibility and use of an internet support service for diabetes self-management. The Diabetes Educator. 1998; 24 :174–179. [ PubMed : 9555356 ]
  • McKinlay JB. The promotion of health through planned sociopolitical change: challenges for research and policy. Social Science and Medicine. 1993; 36 :109–117. [ PubMed : 8421787 ]
  • McKnight JL. Regenerating community. Social Policy. 1987; 17 :54–58.
  • McKnight JL. Politicizing health care. In: Conrad P, Kern R, editors. The Sociology Of Health And Illness: Critical Perspectives. New York: St. Martin's; 1994. pp. 437–441.
  • McVea K, Crabtree BF, Medder JD, Susman JL, Lukas L, McIlvain HE, Davis CM, Gilbert CS, Hawver M. An ounce of prevention? Evaluation of the ‘Put Prevention into Practice' program. Journal of Family Practice. 1996; 43 :361–369. [ PubMed : 8874371 ]
  • Merz J, Fischhoff B, Mazur DJ, Fischbeck PS. Decision-analytic approach to developing standards of disclosure for medical informed consent. Journal of Toxicsand Liability. 1993; 15 :191–215.
  • Minkler M. Health education, health promotion and the open society: An historical perspective. Health Education Quarterly. 1989; 16 :17–30. [ PubMed : 2649456 ]
  • Mittelmark MB, Hunt MK, Heath GW, Schmid TL. Realistic outcomes: Lessons from community-based research and demonstration programs for the prevention of cardiovascular diseases. Journal of Public Health Policy. 1993; 14 :437–462. [ PubMed : 8163634 ]
  • Monahan JL, Scheirer MA. The role of linking agents in the diffusion of health promotion programs. Health Education Quarterly. 1988; 15 :417–434. [ PubMed : 3230017 ]
  • Morgan MG. Fields from Electric Power [brochure]. Pittsburgh, PA: Department of Engineering and Public Policy, Carnegie Mellon University; 1995.
  • Morgan MG, Fischhoff B, Bostrom A, Atman C. Risk Communication:The Mental Models Approach. New York: Cambridge University Press; 2001.
  • Mosteller F, Colditz GA. Understanding research synthesis (meta-analysis). Annual Review of Public Health. 1996; 17 :1–23. [ PubMed : 8724213 ]
  • Muldoon MF, Manuck SB, Matthews KA. Lowering cholesterol concentrations and mortality: A quantitative review of primary prevention trials. British Medical Journal. 1990; 301 :309–314. [ PMC free article : PMC1663605 ] [ PubMed : 2144195 ]
  • Murray D. Design and analysis of community trials: Lessons from the Minnesota Heart Health Program. American Journal of Epidemilogy. 1995; 142 :569–575. [ PubMed : 7653464 ]
  • Murray DM. Dissemination of community health promotion programs: The Fargo-Moorhead Heart Health Program. Journal of School Health. 1986; 56 :375–381. [ PubMed : 3640927 ]
  • Myers AM, Pfeiffle P, Hinsdale K. Building a community-based consortium for AIDS patient services. Public Health Reports. 1994; 109 :555–562. [ PMC free article : PMC1403533 ] [ PubMed : 8041856 ]
  • National Research Council, Committee on Risk Perception and Communication. Improving Risk Communication. Washington, DC: National Academy Press; 1989.
  • NHLBI (National Heart, Lung, and Blood Institute). Guidelines for Demonstration And Education Research Grants. Washington, DC: National Institutes of Health; 1983.
  • NHLBI (National Heart, Lung, and Blood Institute). Report of the Task Force on Behavioral Research in Cardiovascular, Lung, and Blood Health and Disease. Bethesda, MD: National Institutes of Health; 1998.
  • Ni H, Sacks JJ, Curtis L, Cieslak PR, Hedberg K. Evaluation of a statewide bicycle helmet law via multiple measures of helmet use. Archives of Pediatric and Adolescent Medicine. 1997; 151 :59–65. [ PubMed : 9006530 ]
  • Nyden PW, Wiewel W. Collaborative research: harnessing the tensions between researcher and practitioner. American Sociologist. 1992; 24 :43–55.
  • O'Connor PJ, Solberg LI, Baird M. The future of primary care. The enhanced primary care model. Journal of Family Practice. 1998; 47 :62–67. [ PubMed : 9673610 ]
  • Office of Technology Assessment, U.S. Congress. Cost-Effectiveness of Influenza Vaccination. Washington, DC: Office of Technology Assessment; 1981.
  • Oldenburg B, French M, Sallis JF.Health behavior research: The quality of the evidence base. Paper presented at the Society of Behavioral Medicine Twentieth Annual Meeting; San Diego, CA. 1999.
  • Orlandi MA. Health Promotion Technology Transfer: Organizational Perspectives. Canadian Journal of Public Health. 1996a; 87 (Supplement 2):528–533. [ PubMed : 9002340 ]
  • Orlandi MA. Intervening with Drug-Involved Youth: Prevention, Treatment, and Research. Newbury Park, CA: Sage Publications; 1996b. Prevention Technologies for Drug-Involved Youth; pp. 81–100.
  • Orlandi MA. The diffusion and adoption of worksite health promotion innovations: An analysis of barriers. Preventive Medicine. 1986; 15 :522–536. [ PubMed : 3774782 ]
  • Parcel GS, Eriksen MP, Lovato CY, Gottlieb NH, Brink SG, Green LW. The diffusion of school-based tobacco-use prevention programs: Program description and baseline data. Health Education Research. 1989; 4 :111–124.
  • Parcel GS, O'Hara-Tompkins NM, Harris RB, Basen-Engquist KM, McCormick LK, Gottlieb NH, Eriksen MP. Diffusion of an Effective Tobacco Prevention Program. II. Evaluation of the Adoption Phase. Health Education Research. 1995; 10 :297–307. [ PubMed : 10158027 ]
  • Parcel GS, Perry CL, Taylor WC. Beyond Demonstration: Diffusion of Health Promotion Innovations. In: Bracht N, editor. Health Promotion at the Community Level. Thousand Oaks, CA: Sage Publications; 1990. pp. 229–251.
  • Parcel GS, Simons-Morton BG, O'Hara NM, Baranowski T, Wilson B. School promotion of healthful diet and physical activity: Impact on learning outcomes and self-reported behavior. Health Education Quarterly. 1989; 16 :181–199. [ PubMed : 2732062 ]
  • Park P, Brydon-Miller M, Hall B, Jackson T, editors. Voices of Change: Participatory Research in the United States and Canada. Westport, CT: Bergin and Garvey; 1993.
  • Parker EA, Schulz AJ, Israel BA, Hollis R. East Side Village Health Worker Partnership: Community-based health advisor intervention in an urban area. Health Education and Behavior. 1998; 25 :24–45. [ PubMed : 9474498 ]
  • Parsons T. The Social System. Glencoe, IL: Free Press; 1951.
  • Patton MQ. How to Use Qualitative Methods In Evaluation. Newbury Park, CA: Sage Publications; 1987.
  • Patton MQ. Qualitative Evaluation And Research Methods. 2nd Edition. Newbury Park, CA: Sage Publications; 1990.
  • Pearce N. Traditional epidemiology, modern epidemiology and public health. American Journal of Public Health. 1996; 86 :678–683. [ PMC free article : PMC1380476 ] [ PubMed : 8629719 ]
  • Pendleton L, House WC. Preferences for treatment approaches in medical care. Medical Care. 1984; 22 :644–646. [ PubMed : 6748782 ]
  • Pentz MA. Programs and Abstracts. Bethesda, MD: 1998. Research to practice in community-based prevention trials. Preventive intervention research at the crossroads: contributions and opportunities from the behavioral and social sciences; pp. 82–83.
  • Pentz MA, Trebow E. Implementation issues in drug abuse prevention research. Substance Use and Misuse. 1997; 32 :1655–1660. [ PubMed : 1922302 ]
  • Pentz MA, Trebow E, Hansen WB, MacKinnon DP, Dwyer JH, Flay BR, Daniels S, Cormack C, Johnson CA. Effects of program implementation on adolescent drug use behavior: The Midwestern Prevention Project (MPP). Evaluation Review. 1990; 14 :264–289.
  • Perry CL. Cardiovascular disease prevention among youth: Visioning the future. Preventive Medicine. 1999; 29 :S79–S83. [ PubMed : 10641822 ]
  • Perry CL, Murray DM, Griffin G. Evaluating the statewide dissemination of smoking prevention curricula: Factors in teacher compliance. Journal of School Health. 1990; 60 :501–504. [ PubMed : 2283869 ]
  • Plough A, Olafson F. Implementing the Boston Healthy Start Initiative: A case study of community empowerment and public health. Health Education Quarterly. 1994; 21 :221–234. [ PubMed : 8021149 ]
  • Price RH. Prevention programming as organizational reinvention: From research to implementation. In: Silverman MM, Anthony V, editors. Prevention of MentalDisorders, Alcohol and Drug Use in Children and Adolescents. Rockville, MD: Department of Health and Human Services; 1989. pp. 97–123.
  • Price RH.Theory guided reinvention as the key high fidelity prevention practice. Paper presented at the National Institute of Health meeting, “Preventive Intervention Research at the Crossroads: Contributions and Opportunities from the Behavioral and Social Sciences”; Bethesda, MD. 1998.
  • Pronk NP, O'Connor PJ. Systems approach to population health improvement. Journal of Ambulatory Care Management. 1997; 20 :24–31. [ PubMed : 10181620 ]
  • Putnam RD. Making Democracy Work: Civic Traditions in Modern Italy. Princeton: Princeton University; 1993.
  • Rabeneck L, Viscoli CM, Horwitz RI. Problems in the conduct and analysis of randomized clinical trials. Are we getting the right answers to the wrong questions? Archives of Internal Medicine. 1992; 152 :507–512. [ PubMed : 1546913 ]
  • Raiffa H. Decision Analysis. Reading, MA: Addison-Wesley; 1968.
  • Reason P. Three approaches to participative inquiry. In: Denzin NK, Lincoln YS, editors. Handbook of Qualitative Research. Thousand Oaks, CA: Sage; 1994. pp. 324–339.
  • Reason P, editor. Human Inquiry in Action: Developments in New Paradigm Research. London: Sage; 1988.
  • Reichardt CS, Cook TD. “Paradigms Lost”: Some thoughts on choosing methods in evaluation research. Evaluation and Program Planning: An International Journal. 1980; 3 :229–236.
  • Rivara FP, Grossman DC, Cummings P. Injury prevention. First of two parts. New England Journal of Medicine. 1997a; 337 :543–548. [ PubMed : 9262499 ]
  • Rivara FP, Grossman DC, Cummings P. Injury prevention. Second of two parts. New England Journal of Medicine. 1997b; 337 :613–618. [ PubMed : 9271485 ]
  • Roberts-Gray C, Solomon T, Gottlieb N, Kelsey E. Heart partners: A strategy for promoting effective diffusion of school health promotion programs. Journal of School Health. 1998; 68 :106–116. [ PubMed : 9608451 ]
  • Robertson A, Minkler M. New health promotion movement: A critical examination. Health Education Quarterly. 1994; 21 :295–312. [ PubMed : 8002355 ]
  • Rogers EM. Diffusion of Innovations. 3rd ed. New York: The Free Press; 1983.
  • Rogers EM. Communication of Innovations. New York: The Free Press; 1995.
  • Rogers GB. The safety effects of child-resistant packaging for oral prescription drugs. Two decades of experience. Journal of the American Medical Association. 1996; 275 :1661–1665. [ PubMed : 8637140 ]
  • Rohrbach LA, D'Onofrio C, Backer T, Montgomery S. Diffusion of school' based substance abuse prevention programs. American Behavioral Scientist. 1996; 39 :919–934.
  • Rossi PH, Freeman HE. Evaluation: A Systematic Approach. Newbury Park, CA: Sage Publications; 1989.
  • Rutherford GW. Public health, communicable diseases, and managed care: Will managed care improve or weaken communicable disease control? American Journal of Preventive Medicine. 1998; 14 :53–59. [ PubMed : 9566938 ]
  • Sackett DL, Richardson WS, Rosenberg W, Haynes RB. Evidence-Based Medicine: How to Practice and Teach EBM. New York: Churchill Livingstone; 1997.
  • Sarason SB. The Psychological Sense of Community: Prospects for a Community Psychology. San Francisco: Jossey-Bass; 1984.
  • Schein EH. Process Consulting. Reading, MA: Addition Wesley; 1987.
  • Schensul JJ, Denelli-Hess D, Borreo MG, Bhavati MP. Urban comadronas: Maternal and child health research and policy formulation in a Puerto Rican community. In: Stull DD, Schensul JJ, editors. Collaborative Research andSocial Change: Applied Anthropology in Action. Boulder, CO: Westview; 1987. pp. 9–32.
  • Schensul SL. Science, theory and application in anthropology. American Behavioral Scientist. 1985; 29 :164–185.
  • Schneiderman LJ, Kronick R, Kaplan RM, Anderson JP, Langer RD. Effects of offering advance directives on medical treatments and costs. Annals of Internal Medicine. 1992; 117 :599–606. [ PubMed : 1524334 ]
  • Schriver KA. Evaluating text quality: The continuum from text-focused to reader-focused methods. IEEE Transactions on Professional Communication. 1989; 32 :238–255.
  • Schulz AJ, Israel BA, Selig SM, Bayer IS. Development and implementation of principles for community-based research in public health. In: Macnair RH, editor. Research Strategies For Community Practice. New York: Haworth Press; 1998a. pp. 83–110.
  • Schulz AJ, Parker EA, Israel BA, Becker AB, Maciak B, Hollis R. Conducting a participatory community-based survey: Collecting and interpreting data for a community health intervention on Detroit's East Side. Journal of Public Health Management Practice. 1998b; 4 :10–24. [ PubMed : 10186730 ]
  • Schwartz LM, Woloshin S, Black WC, Welch HG. The role of numeracy in understanding the benefit of screening mammography. Annals of Internal Medicine. 1997; 127 :966–972. [ PubMed : 9412301 ]
  • Schwartz N. Self-reports: How the questions shape the answer. American Psychologist. 1999; 54 :93–105.
  • Seligman ME. Science as an ally of practice. American Psychologist. 1996; 51 :1072–1079. [ PubMed : 8870544 ]
  • Shadish WR, Cook TD, Leviton LC. Foundations of Program Evaluation. Newbury Park, CA: Sage Publications; 1991.
  • Shadish WR, Matt GE, Navarro AM, Siegle G, Crits-Christoph P, Hazelrigg MD, Jorm AF, Lyons LC, Nietzel MT, Prout HT, Robinson L, Smith ML, Svartberg M, Weiss B. Evidence that therapy works in clinically representative conditions. Journal of Consulting and Clinical Psychology. 1997; 65 :355–365. [ PubMed : 9170759 ]
  • Sharf BF. Communicating breast cancer on-line: Support and empowerment on the internet. Women and Health. 1997; 26 :65–83. [ PubMed : 9311100 ]
  • Simons-Morton BG, Green WA, Gottlieb N. Health Education and Health Promotion. Prospect Heights, IL: Waveland; 1995.
  • Simons-Morton BG, Parcel GP, Baranowski T, O'Hara N, Forthofer R. Promoting a healthful diet and physical activity among children: Results of a school-based intervention study. American Journal of Public Health. 1991; 81 :986–991. [ PMC free article : PMC1405714 ] [ PubMed : 1854016 ]
  • Singer M. Knowledge for use: Anthropology and community-centered substanceabuse research. Social Science and Medicine. 1993; 37 :15–25. [ PubMed : 8332920 ]
  • Singer M. Community-centered praxis: Toward an alternative non-dominative applied anthropology. Human Organization. 1994; 53 :336–344.
  • Smith DW, Steckler A, McCormick LK, McLeroy KR. Lessons learned about disseminating health curricula to schools. Journal of Health Education. 1995; 26 :37–43.
  • Smithies J, Adams L. Walking the tightrope. In: Davies JK, Kelly MP, editors. Healthy Cities: Research and Practice. New York: Routledge; 1993. pp. 55–70.
  • Solberg LI, Kottke TE, Brekke ML. Will primary care clinics organize themselves to improve the delivery of preventive services? A randomized controlled trial. Preventive Medicine. 1998a; 27 :623–631. [ PubMed : 9672958 ]
  • Solberg LI, Kottke TE, Brekke ML, Conn SA, Calomeni CA, Conboy KS. Delivering clinical preventive services is a systems problem. Annals of Behavioral Medicine. 1998b; 19 :271–278. [ PubMed : 9603701 ]
  • Sorensen G, Emmons K, Hunt MK, Johnston D. Implications of the results of community intervention trials. Annual Rreview of Public Health. 1998a; 19 :379–416. [ PubMed : 9611625 ]
  • Sorensen G, Thompson B, Basen-Engquist K, Abrams D, Kuniyuki A, DiClemente C, Biener L. Durability, dissemination and institutionalization of worksite tobacco control programs: Results from the Working Well Trial. International Journal of Behavioral Medicine. 1998b; 5 :335–351. [ PubMed : 16250700 ]
  • Spilker B. Quality of Life and Pharmacoeconomics. In: Spilker B, editor. Clinical Trials. Philadelphia: Lippincott-Raven; 1996.
  • Steckler A, Goodman RM, McLeroy KR, Davis S, Koch G. Measuring the diffusion of innovative health promotion programs. American Journal of Health Promotion. 1992; 6 :214–224. [ PubMed : 10148679 ]
  • Steckler AB, Dawson L, Israel BA, Eng E. Community health development: An overview of the works of Guy W. Steuart. Health Education Quarterly. 1993;(Suppl. 1):S3–S20. [ PubMed : 8354649 ]
  • Steckler AB, McLeroy KR, Goodman RM, Bird ST, McCormick L. Toward integrating qualitative and quantitative methods: an introduction. Health Education Quarterly. 1992; 19 :1–8. [ PubMed : 1568869 ]
  • Steuart GW. Social and cultural perspectives: Community intervention and mental health. Health Education Quarterly. 1993:S99. [ PubMed : 8354654 ]
  • Stokols D. Establishing and maintaining healthy environments: Toward a social ecology of health promotion. American Psychologist. 1992; 47 :6–22. [ PubMed : 1539925 ]
  • Stokols D. Translating social ecological theory into guidelines for community health promotion. American Journal of Health Promotion. 1996; 10 :282–298. [ PubMed : 10159709 ]
  • Stone EJ, McGraw SA, Osganian SK, Elder JP. Process evaluation in the multicenter Child and Adolescent Trial for Cardiovascular Health (CATCH). Health Education Quarterly. 1994;(Suppl. 2):1–143. [ PubMed : 8113062 ]
  • Stringer ET. Action Research: A Handbook For Practitioners. Thousand Oaks, CA: Sage; 1996.
  • Strull WM, Lo B, Charles G. Do patients want to participate in medical decision making? Journal of the American Medical Association. 1984; 252 :2990–2994. [ PubMed : 6502860 ]
  • Strum S. Consultation and patient information on the Internet: The patients' forum. British Journal of Urology. 1997; 80 :22–26. [ PubMed : 9415081 ]
  • Susser M. The tribulations of trials-intervention in communities. American Journal of Public Health. 1995; 85 :156–158. [ PMC free article : PMC1615322 ] [ PubMed : 7856769 ]
  • Susser M. Choosing a future for epidemiology. I. Eras and paradigms. American Journal of Public Health. 1996a; 86 :668–673. [ PMC free article : PMC1380474 ] [ PubMed : 8629717 ]
  • Susser M, Susser E. From black box to Chinese boxes and eco-epidemiology. American Journal of Public Health. 1996b; 86 :674–677. [ PMC free article : PMC1380475 ] [ PubMed : 8629718 ]
  • Tandon R. Participatory evaluation and research: Main concepts and issues. In: Fernandes W, Tandon R, editors. Participatory Research and Evaluation. New Delhi: Indian Social Institute; 1981. pp. 15–34.
  • Thomas SB, Morgan CH. Evaluation of community-based AIDS education and risk reduction projects in ethnic and racial minority communities. Evaluation and Program Planning. 1991; 14 :247–255.
  • Thompson DC, Nunn ME, Thompson RS, Rivara FP. Effectiveness of bicycle safety helmets in preventing serious facial injury. Journal of the American Medical Association. 1996a; 276 :1974–1975. [ PubMed : 8971067 ]
  • Thompson DC, Rivara FP, Thompson RS. Effectiveness of bicycle safety helmets in preventing head injuries: A case-control study. Journal of the American Medical Association. 1996b; 276 :1968–1973. [ PubMed : 8971066 ]


Last Updated: January 02, 2023

Medically reviewed by NKF Patient Education Team

Table of Contents

  • About dialysis
  • How it works
  • Effectiveness
  • Side effects
  • Additional considerations
  • Preparing for your appointment

Dialysis is a type of treatment that helps your body remove extra fluid and waste products from your blood when the kidneys are not able to. Dialysis was first used successfully in the 1940s and became a standard treatment for kidney failure in the 1970s. Since then, millions of patients have been helped by these treatments.

Dialysis can be done in a hospital, a dialysis center, or at home. You and your doctor will decide which type of dialysis and which place is best, based on your medical condition and your wishes.

Dialysis is helpful in two different situations:

  • Acute kidney injury (AKI) : a sudden episode of kidney failure or kidney damage that happens within a few hours or days. AKI is usually treated in a hospital setting with intravenous fluids (given through the vein). In severe cases, dialysis may also be needed for a short time until the kidneys get better.
  • Kidney failure : when 10-15% of your kidney function remains, measured by an  estimated glomerular filtration rate (eGFR)  of less than 15 mL/min. At this stage, your kidneys are no longer able to keep you alive without some extra help. This is also known as end-stage kidney disease (ESKD). With kidney failure, dialysis is only able to do some of the work of healthy kidneys, but it is not a cure for kidney disease. With ESKD, you will need dialysis for the rest of your life or until you are able to get a  kidney transplant .
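The eGFR threshold above can be expressed as a simple check. This is an illustrative sketch only; the function name is hypothetical, and real clinical decisions weigh symptoms, trends, and other findings, never a single number:

```python
# Illustrative only: kidney failure (ESKD) is typically described as
# eGFR below 15 mL/min, the threshold named in the text above.
# `needs_dialysis_evaluation` is a hypothetical helper, not a clinical tool.

def needs_dialysis_evaluation(egfr_ml_min: float) -> bool:
    """Return True when eGFR falls below the ESKD threshold of 15 mL/min."""
    return egfr_ml_min < 15.0

print(needs_dialysis_evaluation(12.0))  # True: below the ESKD threshold
print(needs_dialysis_evaluation(45.0))  # False: reduced function, not ESKD
```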

Dialysis performs some of the duties that your kidneys usually do to keep your body in balance, such as:

  • removing waste and extra fluid to prevent them from building up in your body
  • keeping safe levels of minerals in your blood, such as potassium, sodium, calcium, and bicarbonate
  • helping to regulate your blood pressure

Hemodialysis (HD)

In hemodialysis, a dialyzer (filtering machine) is used to remove waste and extra fluid from your blood and then return the filtered blood to your body. Before starting hemodialysis, a minor surgery is needed to create a vascular access site (an opening into one of your blood vessels), usually in your arm. This access site provides an easy way to move blood from your body, through the dialyzer, and back into your body. Hemodialysis can be done at a dialysis center or at home. Treatments usually last about four hours and are done three times per week, though some people may need more time based on their specific needs.

Peritoneal Dialysis (PD)

In peritoneal dialysis, your blood is filtered inside your own body instead of through a dialyzer machine. For this type of dialysis, the lining of your abdomen or belly area (also called the peritoneum) is used as a filter. Before starting peritoneal dialysis, a minor surgery is needed to place a catheter (soft tube) in your belly. During each treatment, your belly area is slowly filled with dialysate (a cleansing fluid made from a mixture of water, salt, and other additives) through the catheter. As your blood flows naturally through the area, extra fluid and waste products are pulled out of the blood vessels and into the belly area by the dialysate (almost like a magnet). After a few hours, the fluid mixture is drained from your belly through the same catheter. Peritoneal dialysis can be done almost anywhere if you have the required supplies. Two of the most common types of peritoneal dialysis are:

  • Continuous Ambulatory Peritoneal Dialysis (CAPD)
  • Automated Peritoneal Dialysis (APD)

Dialysis is a very effective treatment option for clearing waste products and extra fluid from your blood. However, it does not fully replace all the kidney’s functions, so it is not considered a cure for kidney disease or kidney failure.

All types of dialysis are equally effective, but your medical condition and personal preferences may make one treatment approach a better fit than others. You and your doctor will discuss this and decide which type of dialysis and which setting is best. You may also find it helpful to talk with other people who are living with dialysis to learn from their experiences.

The following steps can help increase the effectiveness of your dialysis treatments:

  • complete your treatments according to your prescribed schedule
  • follow your customized eating plan recommended by your kidney dietitian
  • get as much physical activity as possible to boost your strength and heart health
  • talk with your dialysis provider and pharmacist about any medications, supplements, or herbal products you are taking or are considering starting
  • talk with your dialysis team about any concerns or side effects that you may have

Both types of dialysis come with side effects. It can also be hard to tell whether a symptom is caused by the dialysis or by the kidney failure that is also affecting the body. Some of the most common side effects that people report include:

Hemodialysis (HD)

  • Blockage in your vascular access site (entrance point)
  • Muscle cramps
  • Hypotension (low blood pressure)
  • Weakness, dizziness, or nausea

Peritoneal dialysis (PD)

  • Hernia (weakness in your abdomen muscle, often presenting as a lump or swollen area)
  • Weight gain

Both HD and PD

  • Infection of the skin, blood, and/or peritoneum (belly area) - if left untreated, these can cause sepsis (a life-threatening condition leading to multiple organ failure).
  • Fatigue (feeling tired) - this can affect anyone but is usually more common for people who have been on dialysis for a long time. It is often hard to tell for sure if this is a side effect of the dialysis or a symptom of long-term kidney disease.
  • Pruritus (itchy skin) - this may affect people with kidney disease, especially in more advanced stages of CKD and on dialysis. Like fatigue, it is often hard to tell for sure if this is a side effect of the dialysis or a symptom of long-term kidney disease.

Every person responds differently to dialysis, and your level of risk for each side effect will differ from others. If you have concerns about any of these risks, talk to your doctor and dialysis team about ways you can lower your risk. Although these side effects may sound scary, they should be compared to the risks that come from continuing to live with untreated kidney failure.

Impact on regular routine

Most people on dialysis are able to keep a regular routine except for the time needed for treatments. Dialysis often makes people feel better because it helps clear the waste products that have built up in the blood between treatments. However, some people report feeling tired after dialysis, especially if they have been getting dialysis treatments for a long time.

People receiving dialysis treatments also need to be mindful of what they eat. The specific meal plan recommended for you may vary depending on which type of dialysis you receive. Work with your kidney dietitian to create a meal plan that fits your routine and lifestyle.

Traveling is also possible for people on dialysis. Dialysis centers are located in every part of the United States and in many other countries, and the treatment is standardized. You must make an appointment for treatments at another dialysis center before you go; the staff at your current center may help you make the appointment. Visit the NKF Travel Tips AtoZ page for more information.

Many people on dialysis can go back to work after they have gotten used to dialysis. However, if your job has a lot of physical labor (heavy lifting, digging, etc.), you may need to look for a different type of work. Visit the NKF Working with Kidney Disease AtoZ page for more information.

It will likely take you and your family some time to get used to including dialysis treatments into a new routine.

Dialysis treatments are very expensive. However, most people with kidney failure are eligible for Medicare when they start dialysis. This means the federal government pays 80 percent of all dialysis costs. Private health insurance or state Medicaid programs may also help with the costs. Visit the NKF resource on insurance options for people on dialysis or with a kidney transplant to learn more.
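As a rough illustration of that cost split, the sketch below assumes Medicare covers 80 percent of a bill, leaving 20 percent to secondary insurance or the patient. The function name and dollar figures are placeholders, not real prices, and actual billing involves deductibles and other details:

```python
# Hypothetical sketch of the 80/20 split described above.
# Real billing also involves deductibles, coinsurance caps, and
# secondary payers; this only illustrates the arithmetic.

def patient_share(total_cost: float, medicare_rate: float = 0.80) -> float:
    """Portion of a dialysis bill left after Medicare's share."""
    return round(total_cost * (1.0 - medicare_rate), 2)

print(patient_share(1000.0))  # 200.0
```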

You may have some discomfort as the needles are put into your access site. Over time, people usually get used to being around these needles and equipment. The dialysis treatment itself is painless.

Life Expectancy

Life expectancy on dialysis varies depending on your other medical conditions, how well you follow your treatment plan, and various other factors. The average life expectancy on dialysis is 5-10 years. However, many patients have lived well on dialysis for 20 or even 30 years. Talk to your healthcare team about how to take care of yourself and stay healthy on dialysis.

Questions to Ask

  • Which type of dialysis might work best for my situation?
  • Would I be a candidate for completing my dialysis treatments at home?
  • How do I determine which dialysis center I should go to?
  • How can I lower my risk of infection and other side effects while on dialysis?
  • Do I need to adjust any of my medications because of these dialysis treatments?
  • What type of meal plan should I be following while I’m on dialysis?
  • How can I get added to the kidney transplant waitlist?

Resources

  • National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK)
  • Centers for Disease Control and Prevention (CDC)
  • NKF Council on Renal Nutrition CKD Kidney Dietitian Directory


