• Open access
  • Published: 28 March 2022

Development of the ASSESS tool: a comprehenSive tool to Support rEporting and critical appraiSal of qualitative, quantitative, and mixed methods implementation reSearch outcomes

  • Nessa Ryan   ORCID: orcid.org/0000-0002-8051-0021 1   na1 ,
  • Dorice Vieira 2   na1 ,
  • Joyce Gyamfi 1 ,
  • Temitope Ojo 3 ,
  • Donna Shelley 4 ,
  • Olugbenga Ogedegbe 5 ,
  • Juliet Iwelunmor 6 &
  • Emmanuel Peprah 1 , 3  

Implementation Science Communications volume 3, Article number: 34 (2022)


Several tools to improve reporting of implementation studies for evidence-based decision making have been created; however, no tool for critical appraisal of implementation outcomes exists. Researchers, practitioners, and policy makers lack tools to support the concurrent synthesis and critical assessment of outcomes for implementation research. Our objectives were to develop a comprehensive tool to (1) describe studies focused on implementation that use qualitative, quantitative, and/or mixed methodologies and (2) assess risk of bias of implementation outcomes.

A hybrid consensus-building approach combining Delphi Group and Nominal Group techniques (NGT) was modeled after comparative methodologies for developing health research reporting guidelines and critical appraisal tools. First, an online modified NGT occurred among a small expert panel ( n = 5), consisting of literature review, item generation, round robin with clarification, application of the tool to various study types, voting, and discussion. This was followed by a larger e-consensus meeting and modified Delphi process with implementers and implementation scientists ( n = 32). New elements and elements of various existing tools, frameworks, and taxonomies were combined to produce the ASSESS tool.

The 24-item tool is applicable to a broad range of study designs employed in implementation science, including qualitative studies, randomized-control trials, non-randomized quantitative studies, and mixed methods studies. Two key features are a section for assessing bias of the implementation outcomes and sections for describing the implementation strategy and the intervention implemented. An accompanying explanation and elaboration document identifies and describes each item, explains its rationale, and provides examples of good reporting and appraisal practice; templates to support synthesis of extracted data across studies and an instructional video have also been prepared.

Conclusions

A comprehensive, adaptable tool has been developed to support both reporting and critical appraisal of implementation science studies, including quantitative, qualitative, and mixed methods assessment of intervention and implementation outcomes. This tool can be applied to a methodologically diverse and growing body of implementation science literature to support reviews or meta-analyses that inform evidence-based decision-making regarding processes and strategies for implementation.


Contributions to the literature

The ASSESS tool addresses the challenge of critical assessment of a methodologically diverse and growing body of implementation science literature.

This tool is helpful for designing and executing reviews and meta-analyses of empirical studies of implementation, examining how process and context may lead to heterogeneity of results.

Its use standardizes the reporting and synthesis of implementation strategies, which will facilitate translation of effective public health interventions into routine practice within clinical or community settings.

Implementation research applies a diverse range of study designs to increase translation of research evidence into policies and practice [ 1 , 2 , 3 , 4 , 5 , 6 , 7 ]. It allows us to conceptualize and evaluate successful implementation of interventions, particularly via assessment of implementation outcomes, which are the effects of implementation strategies, or deliberate and purposive actions to implement a new treatment, practice, or service [ 8 ]. As a poorly implemented program or policy will not have the intended interventional impact [ 8 ], robust implementation outcomes are also crucial to achieve the desired population health impact [ 8 , 9 , 10 ]. Implementation science studies may use quantitative, qualitative, and/or mixed-methodologies to assess these implementation outcomes (i.e., acceptability, adoption, appropriateness, cost, feasibility, fidelity, penetration, or sustainability) or intervention outcomes (i.e., effectiveness, efficiency, equity, patient-centeredness, safety, or timeliness), particularly within hybrid effectiveness-implementation designs [ 1 ]. However, researchers, practitioners, and policy makers lack tools to support the concurrent synthesis and critical assessment of implementation outcomes. Tools are needed that can support systematic reviews or meta-analyses comparing multiple types of implementation outcomes across diverse study designs.

No tool to support critical assessment of implementation outcomes exists. Critical assessment produces knowledge, usually based on appraisal of study methods, that provides a level of confidence in study findings. This is an important part of evidence-based decision making: understanding only the magnitude of the success of an intervention and its implementation, without an understanding of one’s confidence in the study findings, limits the capacity for knowledge translation. Ultimately, comprehensive identification, synthesis, and appraisal of implementation outcomes will improve understanding of implementation processes and allow comparison of the effectiveness of different implementation strategies. Indeed, previous research has shown the need for pragmatic measures in implementation practice (including assessment of implementation context, processes, and outcomes) [ 11 ] that should be useful, compatible, acceptable, and easy [ 12 ]. Researchers have established that there remains a dearth of psychometrically valid survey assessment tools for implementation outcomes, and this area of investigation is ongoing [ 13 , 14 ]. Some efforts have been made to generate valid, brief assessment surveys for feasibility, acceptability, and appropriateness [ 15 ].

A tool is needed to support systematic reviews and meta-analyses of studies using qualitative, quantitative, and/or mixed methods assessment to inform evidence-based decision making on implementation. We have developed ASSESS, a comprehensive 24-item tool that (1) can describe studies evaluating implementation outcomes using qualitative, quantitative, and/or mixed methodologies and (2) can provide a rubric to grade the risk of bias of implementation outcomes.

The development of the ASSESS tool was modeled after recommended methodologies for developing health research reporting guidelines and critical appraisal tools [ 16 , 17 , 18 , 19 ]. A completed checklist of the recommended steps for developing a health research reporting guideline is available as an additional file [ 16 ]. We utilized a hybrid consensus-building approach combining e-Delphi Group and Nominal Group techniques (NGT). This approach builds on the strengths of these different techniques, namely the opportunity for discussion and efficient information exchange among a smaller group of experts that is characteristic of the NGT and the process of structured documentation and consensus meeting with a larger group that is characteristic of the Delphi method [ 18 , 19 ].

This hybrid process is mapped by phase in Fig. 1 , with each phase described in detail below: an online modified NGT among a small expert panel (phase 1), an e-Consensus meeting among a larger panel (phase 2), and post-meeting activities (phase 3).

Fig. 1 Phases of modified nominal group technique and e-consensus meeting. Adapted from McMillan SS, King M, Tully MP. How to use the nominal group and Delphi techniques. Int J Clin Pharm. 2016;38(3):655-662. doi:10.1007/s11096-016-0257-x and Moher D, Schulz KF, Simera I, Altman DG. Guidance for Developers of Health Research Reporting Guidelines. PLoS Med. 2010;7(2):e1000217

Phase 1: Nominal group technique modified for online interaction

From February to October 2020, when social distancing guidelines for the COVID-19 pandemic prohibited in-person meetings, a panel of five public health professionals and implementation researchers held bimonthly online meetings using an NGT to conceptualize, reflect upon, develop, discuss, and refine the tool. The group’s experience and expertise spanned epidemiology ( n = 4); implementation science ( n = 4); quantitative ( n = 5), qualitative ( n = 3), and mixed methodology ( n = 3); and library science ( n = 1). Public health specialty areas included non-communicable disease, epigenetics, maternal health, and global health. Members were predominantly female ( n = 4) and worked as faculty ( n = 2), post-doctoral fellows ( n = 2), or a public health doctoral candidate ( n = 1). Phase 1 entailed reviewing the literature and brainstorming to generate items, followed by multiple rounds of independent assessment of items through structured data collection among this panel. Independent ratings were compiled, summarized, distributed, and discussed. This process continued until convergence of ratings was achieved.

Literature review and idea generation

Panel members conducted a thorough literature search of several databases (i.e., PubMed, PsycInfo, CINAHL, EMBASE, Web of Science, and Google Scholar) to inform the rationale for and conceptualization of the ASSESS tool. This review was carried out in February 2020 to begin the NGT process and was revisited eight months later to ensure no recent, relevant publications had been missed. Review findings included material on the development of tools for the purpose of reporting interventions [ 20 , 21 ], reporting implementation strategies [ 20 ], the adaptation of interventions and/or their delivery [ 22 ], and identifying potential sources of bias in relevant studies using quantitative, qualitative, or mixed method assessment [ 23 ]. A search of the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network’s library for health research reporting ( http://www.equator-network.org ) confirmed there were no tools for critical appraisal of implementation outcomes, thus solidifying the need for this tool. To develop our tool, a list of new items and items from existing tools were combined, including elements from the TIDieR checklist [ 21 ], StaRI checklist [ 20 ], MMAT tool [ 23 ], implementation outcomes taxonomy [ 8 ], and FRAME framework [ 22 ]. These tools are described briefly in Table 1 . Novel elements of the tool included a section for critical appraisal of implementation outcomes and a space to indicate implementation phase (i.e., whether assessment was carried out pre-, during, or post implementation).

Round robin and clarification

After integrating existing reporting and appraisal tools with novel elements, we developed an initial shared draft of the tool in Excel 2016. During multiple online meetings, panel members were given the opportunity to provide structured feedback on each item, its content and presentation, as well as on the overall structure of the tool and its instructions. All panel members were encouraged to clarify their feedback, including the rationale for their rankings, while one panel member took notes on a shared document (this replaced the whiteboard upon which notes would have been taken in an in-person meeting).

Voting and discussion

All panel members voted on the items to be included within the ASSESS tool and their presentation. The panel suggested four domains capturing implementation methods, intervention methods, implementation results, and intervention results. These domains include (i) intervention and implementation description: methods, (ii) intervention and implementation description: results, (iii) intervention and implementation evaluation: methods, and (iv) intervention and implementation evaluation: results. Panel members discussed the rationale for these domains: that they would allow users to fully describe the methods and results of the study relevant to the intervention and implementation strategy, as well as to critically appraise the outcomes relevant to the implementation strategy and the intervention being implemented. The team deliberated and agreed upon content, structure, and the addition of instructions and further explanation on using the tool.

Once an initial version was developed, the tool was applied by each panel member to articles representing various study types (i.e., randomized-control trials, non-randomized quantitative studies, qualitative studies, and mixed methods studies) and studies representing various phases of implementation. In between meetings, all panel members would apply the tool to the same articles as other panel members and take notes on that experience. Then, during meetings, the NGT process would be repeated with periods of generation of suggested modifications, round robin, clarification, voting, and discussion. Modifications made to enhance the tool as needed based on results from this process included adding further explanation of items and re-ordering the presentation of items for clarity.

Additional expert feedback

Additional expert feedback was invited on draft versions of the ASSESS tool. A draft version of the tool was shared via e-mail, along with a suggested article for application of the tool, with two experts for feedback. These experts reviewed the tool and provided substantive feedback before the team sought further input via a larger group e-Consensus meeting. The experts suggested adding further explanation to the critical appraisal section and re-formatting the instructions for clarity.

Phase 2: e-Consensus meeting

After the iterative process incorporating feedback from panel experts and additional experts, we sought feedback from prospective users of the tool. Implementation researchers and implementers ( N = 32) were recruited via email and invited to one of two online meetings in October 2020, during which they were introduced to the tool and the rationale for its development and then asked for feedback. Initial feedback on usability and utility was provided by two smaller groups ( n = 12 and 12) of novice implementers and implementation science researchers (i.e., less than 1 year of experience or training in implementation science or implementation) and one experienced group (i.e., more than 1 year of experience or training) ( n = 8). Participants in these meetings represented experience and expertise across multiple relevant areas. As per recommendations [ 16 ], the proportion of content experts was greater than 25%.

Meetings began with a presentation on relevant background topics, including a summary of the evidence on existing tools and of the progress in consensus building among the expert panel to develop the items presented in the tool. Meetings were moderated by one expert panel member while 1–2 team members took notes. The discussions were recorded. Analysis of discussion notes was conducted by NR, and findings were shared with the expert panel for interpretation. In addition to verbal feedback, participants were invited to complete questionnaires ( n = 9). Data management and analysis were carried out in Excel 2016. An audit trail was generated to capture the progression of the tool development and the decisions made regarding additions or edits to its components and structure. At the end of each meeting, the expert panel sought feedback on a knowledge translation strategy.

Phase 3: Post-meeting activities

After the meetings, the expert panel reconvened with an online meeting to debrief on the larger consensus generating meetings, including voting on suggested modifications for usability and automation. The panel began work on implementing a knowledge translation strategy, including preparing publication of the tool and an explanation and elaboration document and development of a website to host the tool ( https://publichealth.nyu.edu/research-scholarship/centers-labs-initiatives/isee-laboratory ).

The tool domains are identified below: the description of the intervention and implementation strategy methods, the description of the intervention and implementation strategy results, the evaluation of the intervention and implementation strategy methods, and the evaluation of the intervention and implementation strategy results. The instructions for its use are shared in Table 2 . The 24 items are applicable to a broad range of study designs employed in implementation science, including qualitative studies, randomized-control trials, non-randomized quantitative studies, and mixed methods studies. A key feature of the tool is the dual columns for the implementation strategy and the intervention, within which the methods and results are described and the intervention and implementation outcomes are assessed for bias. Accompanying instructions, an elaboration document that identifies and describes each item, explains its rationale, and models examples of good reporting and appraisal practice, and an instructional video were prepared.

Intervention and implementation description: methods

This is the first domain (items 1–19), which tasks the user with describing the implementation strategy and the intervention implemented, including the following items: (1) overall review or meta-analysis question, (2) study author and publication year, (3) study title, (4) rationale, (5) aim(s), objective(s), or research question(s), (6) description of the intervention and/or implementation strategy, (7) description of any adaptation of the intervention or its delivery, (8) study design, (9) participant type(s), (10) comparison group, (11) context, (12) study sites, (13) subgroups (optional), (14) implementation phase, (15) process evaluation, (16) sample size, (17) analysis, (18) sub-group analyses (optional), and (19) outcomes (assessment). The user enters how data were collected for assessment of both implementation outcomes (i.e., acceptability, appropriateness, adoption, feasibility, fidelity, penetration, cost, and sustainability) and intervention outcomes (i.e., effectiveness, efficiency, equity, patient-centeredness, safety, and timeliness), as recommended by Proctor et al. [ 8 ], as relevant.
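For orientation, the tool’s overall structure (the four domains, their item groupings, and the two outcome taxonomies described in this and the following sections) can be sketched as a simple data structure. This is a minimal illustrative sketch only: the variable names are assumptions and are not part of the published tool, while the item numbering and outcome lists follow the text.

```python
# Illustrative sketch of the ASSESS structure; variable names are assumptions,
# item numbering and outcome lists follow the text.
ASSESS_DOMAINS = {
    "Intervention and implementation description: methods": list(range(1, 20)),   # items 1-19
    "Intervention and implementation description: results": [20, 21, 22],
    "Intervention and implementation evaluation: methods": [23],
    "Intervention and implementation evaluation: results": [24],
}

IMPLEMENTATION_OUTCOMES = [
    "acceptability", "appropriateness", "adoption", "feasibility",
    "fidelity", "penetration", "cost", "sustainability",
]

INTERVENTION_OUTCOMES = [
    "effectiveness", "efficiency", "equity",
    "patient-centeredness", "safety", "timeliness",
]
```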

Intervention and implementation description: results

The next domain (items 20–22) is where the user describes the results of both the implementation strategy and the intervention implemented. As appropriate, the user enters (20) outcomes (implementation and intervention outcomes), (21) barriers to implementation, and (22) facilitators of implementation.

Intervention and implementation evaluation: methods

The third domain (item 23) is where the user evaluates the methods reported within the paper to assess the implementation strategy and the intervention implemented. The tool guides the user through this process in three steps. First, the user selects the study design (i.e., qualitative, randomized control trial, non-randomized quantitative study, or mixed methods). Next, the user is prompted to respond to five questions regarding the study design reported in the paper. Criteria are indicated by study design, so that criteria for qualitative studies correspond to criteria 1.1–1.5, those for quantitative RCTs correspond to 2.1–2.5, those for quantitative non-randomized studies correspond to 3.1–3.5, and those for mixed methods studies correspond to 4.1–4.5. Each question represents a quality criterion for evaluating the study design. For qualitative studies, for example, the criteria are as follows: 1.1. Is the qualitative approach appropriate to answer the research question?; 1.2. Are the qualitative data collection methods adequate to address the research question?; 1.3. Are the findings adequately derived from the data?; 1.4. Is the interpretation of results sufficiently substantiated by data?; 1.5. Is there coherence between qualitative data sources, collection, analysis and interpretation? In comparison, for the quantitative RCTs, the criteria are as follows: 2.1. Is randomization appropriately performed?; 2.2. Are the groups comparable at baseline?; 2.3. Are there complete outcome data?; 2.4. Are outcome assessors blinded to the intervention provided?; 2.5 Did the participants adhere to the assigned intervention? Finally, the user provides a score (0 or 1) to each question, to indicate whether each criterion was (1) or was not (0) met.
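To make this three-step appraisal concrete, the sketch below encodes the two sets of criteria quoted above and sums a set of 0/1 answers into a summary score. It is a minimal illustration only: the data structure and function names are assumptions rather than part of the published tool, and the non-randomized and mixed methods criteria sets are omitted for brevity.

```python
# Item 23 quality criteria, keyed by study design (criteria quoted from the text;
# only the qualitative and RCT sets are shown here for brevity).
CRITERIA = {
    "qualitative": [
        "Is the qualitative approach appropriate to answer the research question?",
        "Are the qualitative data collection methods adequate to address the research question?",
        "Are the findings adequately derived from the data?",
        "Is the interpretation of results sufficiently substantiated by data?",
        "Is there coherence between qualitative data sources, collection, analysis and interpretation?",
    ],
    "rct": [
        "Is randomization appropriately performed?",
        "Are the groups comparable at baseline?",
        "Are there complete outcome data?",
        "Are outcome assessors blinded to the intervention provided?",
        "Did the participants adhere to the assigned intervention?",
    ],
}


def score_design(design, answers):
    """Sum the 0/1 scores given for the five criteria of the selected study design."""
    criteria = CRITERIA[design]
    if len(answers) != len(criteria) or any(a not in (0, 1) for a in answers):
        raise ValueError("Provide one 0/1 score per criterion.")
    return sum(answers)


# Example: an RCT judged to meet four of the five criteria.
summary_score = score_design("rct", [1, 1, 1, 0, 1])  # -> 4
```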

Intervention and implementation evaluation: results

The last domain (item 24) is where the user inputs their evaluation of the results of the implementation strategy and the intervention implemented. The user sums the score from the last step of the third domain and applies this summary score to the intervention and implementation outcomes assessed. Based on this appraisal section, the risk of bias will be higher (i.e., score of 1–2), lower (i.e., score of 3–5), or unclear (i.e., not able to be assessed). A summary score can be applied to each implementation or intervention outcome assessed in the paper, if these outcomes were assessed in different manners. For example, a study may have poorly evaluated the intervention outcome (i.e., summary score for effectiveness = 2 and for patient centeredness = 1) but appropriately evaluated the implementation outcome (i.e., summary score for adoption = 4 and for acceptability = 5). Additionally, summary scores may be compared across various studies within a review, which will provide an overall understanding of the risk of bias within the literature for each outcome of an intervention and implementation strategy. This synthesis and appraisal can be guided by the templates included as supplementary documents. As with standards for systematic reviews, it is advised that at least two reviewers independently carry out the appraisal process and compare extraction until reaching consensus or have a third reviewer resolve discordant outputs.
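Continuing the sketch, the item-23 summary score can then be mapped to the risk-of-bias categories described here and applied per outcome. The thresholds follow the text; the function name is an assumption, and the example outcome scores mirror the hypothetical worked example above rather than any real study.

```python
def risk_of_bias(summary_score):
    """Map an item-23 summary score to the ASSESS risk-of-bias category."""
    if summary_score is None:
        return "unclear"  # not able to be assessed
    if 1 <= summary_score <= 2:
        return "higher"
    if 3 <= summary_score <= 5:
        return "lower"
    # Scores outside the stated ranges (e.g., 0) are treated as unclear in this sketch.
    return "unclear"


# Per-outcome summary scores from the worked example in the text.
outcome_scores = {"effectiveness": 2, "patient-centeredness": 1, "adoption": 4, "acceptability": 5}
appraisal = {outcome: risk_of_bias(score) for outcome, score in outcome_scores.items()}
# -> {"effectiveness": "higher", "patient-centeredness": "higher",
#     "adoption": "lower", "acceptability": "lower"}
```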

Usability and utility findings

Once the tool was developed, feedback was sought on its usability and utility. Our sample of 32 meeting participants was majority female, with a mix of educational attainment across various healthcare and public health disciplines, and ranged from novice to expert implementers and researchers (Table 3 ). Users reported that they liked the layout of the tool, its detailed instructions, and its ease of use (Table 4 ). Many reported it was comprehensive and saw utility in being able to extract both qualitative and quantitative results, with one participant sharing “I am very excited about this tool because I am working on a literature review and have been having trouble thinking about how to organize the evidence to inform implementation science.” Participants also recognized that this made for a lengthy extraction process. Another participant shared: “I think this is really useful. It would be great to employ this in several different disciplines to see how it works in real practice.” They found the criteria scoring for critical appraisal straightforward. Participants asked for examples of completed entries and wanted space to identify the individual entering information in the form, so that inter-rater reliability could be assessed. Many had suggestions for how to improve automation.

The team is investigating existing platforms that would facilitate automation. Because reviewers will have access to different software, Excel will serve as the primary platform. This will allow the tool to interface with statistical packages to generate summary statistics when comparing multiple extracted papers within a systematic review or meta-analysis. Editing capabilities specific to the process will be incorporated into future versions of the tool.
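As an illustration of how an Excel-based extraction template could interface with a statistical package, the sketch below reads a hypothetical extraction workbook and tabulates risk-of-bias categories per outcome across studies. The file name and column names are assumptions for illustration and are not the published templates.

```python
import pandas as pd

# Hypothetical extraction template: one row per study-outcome pair, with columns
# "study", "outcome", and "risk_of_bias" (file and column names are assumptions).
extracted = pd.read_excel("assess_extraction_template.xlsx")

# Cross-tabulate risk-of-bias categories by outcome across all extracted studies,
# giving an overview of the evidence base for each implementation or intervention outcome.
summary = pd.crosstab(extracted["outcome"], extracted["risk_of_bias"])
print(summary)
```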

The comprehensive, adaptable 24-item ASSESS tool allows for both (1) reporting of implementation strategies and the intervention being implemented and (2) critical appraisal of intervention and implementation outcomes resulting from quantitative, qualitative, or mixed methods assessment. The tool shares with the StaRI checklist [ 20 ] the aim of enhancing adoption and sustainability of effective interventions by structuring reporting of implementation studies, as well as the presentation of dual strands describing the implementation strategy and the intervention being implemented. The ASSESS tool is novel in its inclusion of implementation phases, which allows for comparison of studies across pre-implementation, during-implementation, and post-implementation stages. These stages could then be mapped onto implementation science theoretical frameworks, such as the Exploration, Preparation, Implementation and Sustainment (EPIS) framework, to generate findings related to the applicability of implementation strategies and the assessment of implementation outcomes at different implementation phases. The ASSESS tool is innovative in that other reporting tools generally do not assess risk of bias among implementation outcomes, and other critical appraisal tools do not provide guidance on how to separately assess quantitative and qualitative data for risk of bias. Shaped by Proctor’s taxonomy, the ASSESS tool moves beyond simply reporting implementation outcomes to evaluating the quality of data on those outcomes and thus the risk of bias. The ASSESS tool will need to be refined in light of practical experience of its use. Further research is needed to examine how to integrate quantitative (risk of bias) and qualitative (trustworthiness) appraisals when critical appraisal findings are discordant.

This tool has various strengths and limitations. As a strength, it does not promote one study design over another, for example a randomized-control trial over a qualitative study. It provides a way to appraise qualitative findings, which are appraised less often than quantitative findings. It further incorporates implementation phases. Importantly, this work presents the development of the tool and an initial qualitative assessment; its broader utility can only be assessed once it is available for use. Future research should examine the validity and reliability of the ASSESS tool, as has been done using a stakeholder-driven approach for pragmatic measurement of implementation outcomes, strategies, and context [ 11 , 12 , 24 ]. Future research should also examine tool iterations that integrate aspects of additional novel and relevant tools, such as the FRAME-IS tool for documenting modifications to implementation strategies in healthcare [ 25 ], which was published after our work was carried out and therefore did not inform the consensus-building process. This tool is not designed for use with non-empirical papers (i.e., review papers, theoretical papers, or gray literature where the methods are not fully described), economic studies, or diagnostic accuracy studies. Future research may examine iterations of this tool to allow application to these types of studies, as well as examine the variability of qualitative designs for critical appraisal. Although a Delphi method may provide more reliable findings, there are certain advantages to using nominal groups, including greater consensus and understanding of reasons for disagreement; therefore, elements of a modified Delphi method and a nominal group technique were combined through a hybrid method that has been previously suggested [ 26 ]. These structured methods attempt to combat cognitive biases in judgment [ 27 ], which are particularly influential in complex tasks [ 19 ], as both require an independent initial rating to anchor opinions based on an individual’s own knowledge. This hybrid method maintained a focused discussion on specific topics pertinent to the underlying validity of each item in the tool and allowed all panelists to have access to the same information regarding the tool prior to evaluating it.

The comprehensive, adaptable 24-item ASSESS tool allows for both (1) reporting of the implementation strategy and the intervention being implemented and (2) critical appraisal of intervention and implementation outcomes resulting from quantitative, qualitative, or mixed methods assessment. It addresses the challenge of critical assessment of a methodologically diverse and growing body of implementation science literature. This tool could prove particularly helpful for designing and carrying out systematic reviews and meta-analyses of empirical studies of implementation, examining how process and context may lead to heterogeneity of results. The ASSESS tool will be disseminated via posting on the researchers’ website ( https://publichealth.nyu.edu/research-scholarship/centers-labs-initiatives/isee-laboratory ) and via submission to the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network. Its use could improve the synthesis of implementation strategies, which will facilitate translation of effective public health interventions into routine practice within clinical or community settings.

Availability of data and materials

The tool will be available on a website ( https://publichealth.nyu.edu/research-scholarship/centers-labs-initiatives/isee-laboratory ). Templates in various forms will be made available.

Abbreviations

ASSESS: A comprehenSive tool to Support rEporting and critical appraiSal of qualitative, quantitative, and mixed methods implementation reSearch outcomes

Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–26.


Kilbourne AM, Almirall D, Eisenberg D, Waxmonsky J, Goodrich DE, Fortney JC, et al. Protocol: Adaptive Implementation of Effective Programs Trial (ADEPT): cluster randomized SMART trial comparing a standard versus enhanced implementation strategy to improve outcomes of a mood disorders program. Implement Sci. 2014;9(1):132.

Smith JD, Li DH, Rafferty MR. The Implementation Research Logic Model: a method for planning, executing, reporting, and synthesizing implementation projects. Implement Sci. 2020;15(1):84.

Sarkies MN, Skinner EH, Bowles K-A, Morris ME, Williams C, O’Brien L, et al. A novel counterbalanced implementation study design: methodological description and application to implementation research. Implement Sci. 2019;14(1):45.

Hemming K, Haines TP, Chilton PJ, Girling AJ, Lilford RJ. The stepped wedge cluster randomised trial: rationale, design, analysis, and reporting. BMJ. 2015;350:h391.


Child S, Goodwin V, Garside R, Jones-Hughes T, Boddy K, Stein K. Factors influencing the implementation of fall-prevention programmes: a systematic review and synthesis of qualitative studies. Implement Sci. 2012;7(1):91.

van Dongen JM, Tompa E, Clune L, Sarnocinska-Hart A, Bongers PM, van Tulder MW, et al. Bridging the gap between the economic evaluation literature and daily practice in occupational health: a qualitative study among decision-makers in the healthcare sector. Implement Sci. 2013;8(1):57.

Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Admin Pol Ment Health. 2011;38(2):65–76.


Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Admin Pol Ment Health. 2009;36(1):24–34.

Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F, Burns B, et al. Implementation research: a synthesis of the literature. 2005.


Powell BJ, Stanick CF, Halko HM, Dorsey CN, Weiner BJ, Barwick MA, et al. Toward criteria for pragmatic measurement in implementation research and practice: a stakeholder-driven approach using concept mapping. Implement Sci. 2017;12(1):118.

Stanick CF, Halko HM, Dorsey CN, Weiner BJ, Powell BJ, Palinkas LA, et al. Operationalizing the ‘pragmatic’ measures construct using a stakeholder feedback and a multi-method approach. BMC Health Serv Res. 2018;18(1):882.

Lewis CC, Fischer S, Weiner BJ, Stanick C, Kim M, Martinez RG. Outcomes for implementation science: an enhanced systematic review of instruments using evidence-based rating criteria. Implement Sci. 2015;10(1):155.

Khadjesari Z, Boufkhed S, Vitoratou S, Schatte L, Ziemann A, Daskalopoulou C, et al. Implementation outcome instruments for use in physical healthcare settings: a systematic review. Implement Sci. 2020;15(1):66.

Weiner BJ, Lewis CC, Stanick C, Powell BJ, Dorsey CN, Clary AS, et al. Psychometric assessment of three newly developed implementation outcome measures. Implement Sci. 2017;12(1):108.

Moher D, Schulz KF, Simera I, Altman DG. Guidance for Developers of Health Research Reporting Guidelines. PLoS Med. 2010;7(2):e1000217.

Black N, Murphy M, Lamping D, McKee M, Sanderson C, Askham J, et al. Consensus development methods: a review of best practice in creating clinical guidelines. J Health Serv Res Policy. 1999;4(4):236–48.

McMillan SS, King M, Tully MP. How to use the nominal group and Delphi techniques. Int J Clin Pharm. 2016;38(3):655–62.


Davies S, Romano PS, Schmidt EM, Schultz E, Geppert JJ, McDonald KM. Assessment of a novel hybrid Delphi and Nominal Groups technique to evaluate quality indicators. Health Serv Res. 2011;46(6pt1):2005–18.

Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, et al. Standards for Reporting Implementation Studies (StaRI) Statement. BMJ. 2017;356:i6795.

Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687.


Wiltsey Stirman S, Baumann AA, Miller CJ. The FRAME: an expanded framework for reporting adaptations and modifications to evidence-based interventions. Implement Sci. 2019;14(1):58.

Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, et al. Improving the content validity of the mixed methods appraisal tool: a modified e-Delphi study. J Clin Epidemiol. 2019;111:49–59.e1.

Stanick CF, Halko HM, Nolen EA, Powell BJ, Dorsey CN, Mettert KD, et al. Pragmatic measures for implementation research: development of the Psychometric and Pragmatic Evidence Rating Scale (PAPERS). Transl Behav Med. 2021;11(1):11–20.

Miller CJ, Barnett ML, Baumann AA, Gutner CA, Wiltsey-Stirman S. The FRAME-IS: a framework for documenting modifications to implementation strategies in healthcare. Implement Sci. 2021;16(1):36.

Hutchings A, Raine R, Sanderson C, Black N. A comparison of formal consensus methods used for developing clinical guidelines. J Health Serv Res Policy. 2006;11(4):218–24.

Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185(4157):1124–31.


Acknowledgements

We wish to thank our panel participants, expert reviewers, and the ISEE (Implementing Sustainable Evidence-based interventions through Engagement) lab students at New York University for their time and feedback.

This work was supported in part by the NYU CTSA grants UL1 TR0001445 and TL1 TR001447 from the National Center for Advancing Translational Sciences, National Institutes of Health.

Author information

Nessa Ryan and Dorice Vieira contributed equally to this work.

Authors and Affiliations

Global Health Program, New York University School of Global Public Health, Public Health, 708 Broadway, 4th floor - Room 453, New York, NY, 10003, USA

Nessa Ryan, Joyce Gyamfi & Emmanuel Peprah

NYU Health Sciences Library, Grossman School of Medicine, New York University, New York, NY, USA

Dorice Vieira

Department of Social and Behavioral Sciences, New York University School of Global Public Health, New York, NY, USA

Temitope Ojo & Emmanuel Peprah

Department of Public Health Policy and Management, New York University School of Global Public Health, New York, NY, USA

Donna Shelley

Department of Population Health, NYU School of Medicine, NYU Langone Health, New York, NY, USA

Olugbenga Ogedegbe

Behavioral Science and Health Education, College for Public Health and Social Justice, Salus Center, Saint Louis University, Saint Louis, MO, USA

Juliet Iwelunmor


Contributions

DV, NR, and EP contributed to the conceptualization of this work. NR drafted the tool. NR, DV, JG, TO, and EP provided feedback on iterations and applied the tool to various types of articles. NR, DV, JG, TO, and EP led meetings on the utility and usefulness of the tool. NR and DV developed templates for tool automation. NR drafted the manuscript, to which DV, JG, TO, DS, OO, JI, and EP contributed. All authors have reviewed and approved the submitted version.

Corresponding author

Correspondence to Nessa Ryan .

Ethics declarations

Ethics approval and consent to participate.

Human subjects approval was not necessary for the purpose of tool development. Data collected from users of the tool concerned the tool itself, and any demographic data were de-identified by a team member who was not part of the tool development and evaluation process.

Consent for publication

Not applicable

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Ryan, N., Vieira, D., Gyamfi, J. et al. Development of the ASSESS tool: a comprehenSive tool to Support rEporting and critical appraiSal of qualitative, quantitative, and mixed methods implementation reSearch outcomes. Implement Sci Commun 3 , 34 (2022). https://doi.org/10.1186/s43058-021-00236-4


Received : 09 February 2021

Accepted : 03 November 2021

Published : 28 March 2022

DOI : https://doi.org/10.1186/s43058-021-00236-4


  • Reporting tool
  • Critical appraisal
  • Implementation outcomes
  • Implementation strategies
  • Qualitative methods
  • Quantitative methods
  • Mixed methods
  • Systematic review
  • Meta-analysis




Study design for assessing effectiveness, efficiency and acceptability of services including measures of structure, process, service quality, and outcome of health care

Health care evaluation is the critical assessment, through rigorous processes, of an aspect of healthcare to determine whether it fulfils its objectives. Aspects of healthcare which can be assessed include:

  • Effectiveness – the benefits of healthcare measured by improvements in health
  • Efficiency – relates the cost of healthcare to the outputs or benefits obtained
  • Acceptability – the social, psychological and ethical acceptability regarding the way people are treated in relation to healthcare
  • Equity - the fair distribution of healthcare amongst individuals or groups

Healthcare evaluation can be carried out during a healthcare intervention, so that findings of the evaluation inform the ongoing programme (known as formative evaluation) or can be carried out at the end of a programme (known as summative evaluation).

Evaluation can be undertaken prospectively or retrospectively. Evaluating on a prospective basis has the advantage of ensuring that data collection can be adequately planned and hence be specific to the question posed (as opposed to retrospective data dredging for proxy indicators) as well as being more likely to be complete. Prospective evaluation processes can be built in as an intrinsic part of a service or project (usually ensuring that systems are designed to support the ongoing process of review).

There are several eponymous frameworks for undertaking healthcare evaluation. These are set out in detail in the Healthcare Evaluation frameworks section of this website and different frameworks are best used for evaluating differing aspects of healthcare as set out above. The steps involved in designing an evaluation are described below.

Steps in designing an evaluation

Firstly it is important to give thought to the purpose of the evaluation, audience for the results, and potential impact of the findings. This can help guide which dimensions are to be evaluated – inputs, process, outputs, outcomes, efficiency etc. Which of these components will give context to, go toward answering the question of interest and be useful to the key audience of the evaluation?

Objectives for the evaluation itself should be set (remember SMART) –

  • S – specific (effectiveness / efficiency / acceptability / equity)
  • M – measurable
  • A – achievable (are the objectives achievable?)
  • R – realistic (can the objectives realistically be achieved within available resources?)
  • T – time-bound (when do you want to achieve the objectives by?)

Having identified what the evaluation is attempting to achieve, the following 3 steps should be considered:

1. What study design should be used?

When considering study design, several factors must be taken into account:

  • How will the population / service being evaluated be defined?
  • Will the approach be quantitative / qualitative / mixed? (Qualitative evaluation can help answer the ‘why’ questions, which can complement quantitative evaluation, for instance in explaining the context of the intervention.)
  • Level of data collection and analysis – will it be possible to collect what is needed, or is it possible to access routinely collected data (e.g. Hospital Episode Statistics, if this data is appropriate to answer the questions being asked)?
  • The design should seek to eliminate bias and confounding as far as possible – is it possible to have a comparator group?
  • The strengths and weaknesses of each approach should be weighed up when finalising a design and the implication on the interpretation of the findings noted.

Study designs include:

a) Randomised methods

  • Through the random allocation of an intervention, confounders are equally distributed. Randomised controlled trials can be expensive to undertake rigorously and are not always practical in the service setting. This is usually carried out prospectively.
  • Development of matched control methods has been used to retrospectively undertake a high quality evaluation. A guide to undertaking evaluations of complex health and care interventions using this method can be found here: http://www.nuffieldtrust.org.uk/sites/files/nuffield/publication/evaluation_report_final_0.pdf
  • ‘Zelen’s design’ offers an alternative method incorporating randomisation to evaluate an intervention in a healthcare setting.

b) Non randomised methods

  • Cohort studies - involve the non-random allocation of an intervention, can be retrospective or prospective, but adjustment must be made for confounders
  • Case-control studies – investigate rare outcomes; participants are defined on the basis of outcome rather than healthcare received. Controls need to be matched; however, control group selection is itself a major source of bias.

c) Ecological studies

  • Cheap and quick, but cruder and less sensitive than individual-level studies; can be useful for studying the impact of health policy

d) Descriptive studies

  • used to generate hypotheses, help understand complexities of a situation and gain insight into processes e.g. case series.

e) Health technology assessment

  • examines what technology can best deliver benefits to a particular patient or population group. It assesses the cost-effectiveness of treatments against current or next best treatments. See economic evaluation section of this website for more details.

f) Qualitative studies

  • Methods are covered in section 1d of this textbook.
  • Researchers-in-residence are an innovative evaluation method whereby the researcher becomes a member of the operational team and brings a focus on optimising the effectiveness of the intervention or programme rather than on assessing effectiveness.

2. What measures should be used?

The choice of measure will depend on the study design or indeed evaluation framework used as well as the objectives of the evaluation. For example, the Donabedian approach considers a programme or intervention in terms of inputs, process, outputs and outcomes.

  • Inputs - (also known as structure) describes what has gone into an intervention to make it happen e.g. people, time, money
  • Process - describes how it has happened e.g. strategy development, a patient pathway
  • Outputs - describe what the intervention or programme has produced e.g. throughput of patients
  • Outcomes - describes the actual benefits or disbenefits of that intervention or programme.

The table below gives some further examples of measures that can be used for each aspect of the evaluation. Such an evaluation could measure process against outcomes, inputs versus outputs or any combination.


3. How and when to collect data?

The choice of qualitative versus quantitative data collection will influence the timing of such collection, as will the choice of the evaluation being carried out prospectively or retrospectively. The amount of data that needs to be collected will also impact on timing, and sample-size calculations at the beginning of the evaluation will be an important part of planning.

For qualitative studies, the sample must be big enough that enlargement is unlikely to yield additional insights e.g. undertaking another interview with a member of staff is unlikely to identify any new themes. Most qualitative approaches, in real life, would ensure that all relevant staff groups were sampled.

For quantitative studies, the following must be considered (using statistical software packages such as Stata; a worked sketch follows this list):

  • the size of the treatment effect that would be of clinical/social/public health significance
  • the required power of the study
  • acceptable level of statistical significance
  • variability between individuals in the outcome measure of interest
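To make these considerations concrete, the snippet below computes the per-group sample size for a two-sample comparison of means using statsmodels (in place of Stata). The effect size, variability, power, and significance level are hypothetical planning values chosen purely for illustration.

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical planning values: a clinically meaningful difference of 5 units,
# a between-individual standard deviation of 12, 80% power, and 5% two-sided alpha.
effect_size = 5 / 12  # standardized effect size (Cohen's d)

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size, power=0.80, alpha=0.05)
print(round(n_per_group))  # roughly 90 participants per group under these assumptions
```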

If the evaluation is of a longitudinal design, the follow up time is important to consider, although in some instances may be dictated by availability of data. There may also be measures which are typically reported over defined lengths of time such as readmission rates which are often measured at 7 days and 30 days.

Trends in health services evaluation

Evaluation from the patient perspective has increasingly become an established part of working in the health service. Assessment of service user opinion can include results from surveys, external assessment (such as NHS patient experience surveys led by the CQC), and outcomes reported by patients themselves (patient reported outcome measures). Since April 2009, patient reported outcome measures have been a mandatory part of commissioners’ service contracts with provider organisations and are currently collected for four clinical procedures: hip replacements, knee replacements, groin hernia and varicose vein procedures.

               © Rosalind Blackwood 2009, Claire Currie 2016

  • Open access
  • Published: 22 December 2023

A scoping review of the globally available tools for assessing health research partnership outcomes and impacts

  • Kelly J. Mrklas   ORCID: orcid.org/0000-0002-3887-1843 1 , 2 ,
  • Jamie M. Boyd 3 ,
  • Sumair Shergill 4 ,
  • Sera Merali 5 ,
  • Masood Khan 6 ,
  • Cheryl Moser 6 ,
  • Lorelli Nowell 7 ,
  • Amelia Goertzen 8 ,
  • Liam Swain 1 ,
  • Lisa M. Pfadenhauer 9 , 10 ,
  • Kathryn M. Sibley 6 , 11 ,
  • Mathew Vis-Dunbar 12 ,
  • Michael D. Hill 1 , 13 , 14 ,
  • Shelley Raffin-Bouchal 7 ,
  • Marcello Tonelli 15 , 16 &
  • Ian D. Graham 17 , 18  

Health Research Policy and Systems volume 21, Article number: 139 (2023)


Health research partnership approaches have grown in popularity over the past decade, but the systematic evaluation of their outcomes and impacts has not kept pace. Identifying partnership assessment tools and key partnership characteristics is needed to advance partnerships, partnership measurement, and the assessment of their outcomes and impacts through systematic study.

To locate and identify globally available tools for assessing the outcomes and impacts of health research partnerships.

We searched four electronic databases (Ovid MEDLINE, Embase, CINAHL+, PsycINFO) with an a priori strategy from inception to June 2021, without limits. We screened studies independently and in duplicate, keeping only those involving a health research partnership and the development, use and/or assessment of tools to evaluate partnership outcomes and impacts. Reviewer disagreements were resolved by consensus. Study, tool and partnership characteristics, and emerging research questions, gaps and key recommendations were synthesized using descriptive statistics and thematic analysis.

We screened 36 027 de-duplicated citations, reviewed 2784 papers in full text, and kept 166 studies and three companion reports. Most studies originated in North America and were published in English after 2015. Most of the 205 tools we identified were questionnaires and surveys targeting researchers, patients and public/community members. While tools were comprehensive and usable, most were designed for single use and lacked validity or reliability evidence. Challenges associated with the interchange and definition of terms (i.e., outcomes, impacts, tool type) were common and may obscure partnership measurement and comparison. Very few of the tools identified in this study overlapped with tools identified by other, similar reviews. Partnership tool development, refinement and evaluation, including tool measurement and optimization, are key areas for future tools-related research.

This large scoping review identified numerous, single-use tools that require further development and testing to improve their psychometric and scientific qualities. The review also confirmed that the health partnership research domain and its measurement tools are still nascent and actively evolving. Dedicated efforts and resources are required to better understand health research partnerships, partnership optimization and partnership measurement and evaluation using valid, reliable and practical tools that meet partners’ needs.


Health research partnerships involve researchers engaging with diverse partners, including patients, decision or policy makers, health care administrators and healthcare or community agencies, among others, in any or all parts of the research process [ 1 , 2 ]. Numerous health research partnership approaches or traditions have independently evolved over the past half century, including participatory research, co-production, mode 2 research, engaged scholarship and integrated knowledge translation, among others [ 3 ]. The increasing popularity of partnership approaches is promising [ 4 ] because partnerships are known to help enhance our understanding of key ‘factors that facilitate and hinder the development and sharing of knowledge in healthcare systems’ (p. 2) [ 5 ] and to increase the relevance, use, sustainability and impact of research [ 6 , 7 , 8 ]. For partners themselves [ 9 ], the increased popularity of research partnerships creates new opportunities for greater equity [ 7 ], shared power, trust, synergy, capacities and sustainability in health research and for generating non-traditional benefits for partners and researchers alike [ 7 , 9 , 10 , 11 , 12 , 13 , 14 ].

However, while the qualitative and anecdotal value of these approaches is well established [ 1 , 7 , 13 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 ], their systematic, causal and quantified measurement is not. Partnership measurement has lagged behind [ 26 , 27 ], despite increasing demand for tangible evidence of the resulting outcomes and impacts [ 28 , 29 , 30 , 31 ]. With increasing fiscal constraints in health and health research sectors, the need to understand and link health research partnerships to real-world outcomes and impacts is paramount. However, tangible examples of studies assessing the causal influences of health research partnerships on outcomes and impacts are few [ 7 , 8 , 24 , 32 , 33 , 34 ]. Findings generated by researchers at the Center for Participatory Research at the University of New Mexico [ 35 ] and their collaborating teams provide strong examples of theorized, quantified partnership outcomes and impacts [ 36 , 37 , 38 , 39 ]. Similarly, King and colleagues [ 27 , 40 ] also provide a strong example of partnership impact measurement.

In this review, we refer to outcomes as measurable factors that change as a result of intervention(s) and that are not futuristic, including process and summative outcomes (adapted from University of Waterloo, 2018 and Hoekstra et al., 2018) [ 1 , 41 ] and impacts as effects, influences or changes to the economy, society, public policy or services, individuals, teams, organizations, health, the environment or quality of life beyond academia (adapted from the Higher Education Funding Council of England, 2014 and Hoekstra et al., 2018) [ 1 , 42 ] (Table 1 ).

There are many documented challenges for measurement in this field, with multiple contributing causes, including the sheer diversity of partnership approaches [ 43 ], the type and maturity of evaluative designs and an historical inclination towards qualitative designs and methods [ 31 , 32 ]. This context makes cross-partnership comparisons and transferability of findings challenging [ 7 , 11 , 12 , 13 ]. Other reported measurement complexities pertain to a lack of measurement neutrality, a lack of clarity around outcome and impact terms, definitions and their inconsistent application [ 31 ], and the positioning of health research partnership outcomes and impacts as secondary objectives or incidental findings in research reports. These factors hinder measurement advancements and the ability to draw causal links between the influence of partnerships and their outcomes and impacts [ 24 , 31 ].

Furthermore, researchers report a lack of theoretical foundations; a lack of validated, psychometrically tested and pragmatic assessment tools [ 23 , 24 , 29 ]; and a lack of objective (rather than proxy or self-reported) measures [ 32 , 33 ] among their key measurement concerns [ 7 , 13 , 23 , 32 ]. For the last 20 years, there have been recurrent calls to develop more quantitative, pragmatic, generalizable and flexible tools to better understand partnership establishment, processes, outcomes and impacts [ 12 , 16 , 28 , 29 , 44 , 45 , 46 , 47 ]. There is increasing demand for valid, reliable and pragmatic measures to assess the nature, type, and dose of health research partnership activities necessary to optimize outcomes and impacts, while minimizing costs and harms [ 13 , 23 , 24 , 28 , 31 , 48 ]. Optimizing health research partnership design, execution and evaluation in the future is predicated on the extent to which partnership outcome and impact measures and measurement evolve [ 23 , 27 ].

Finally, multiple reviews already exist in this research domain. However, many are narrowly focussed on research partnership evaluation tools for specific populations [ 24 , 28 , 48 ], specific partnership traditions or health-inclusive domains [ 7 , 10 , 13 , 29 , 44 , 49 , 50 , 51 ], or on the quality and outcomes of research collaborations [ 23 ]. This review adds a unique perspective in attempting to locate and describe globally available tools for health research partnership outcome and impact assessment without restriction on population, tradition, domain, partnership elements or specific types of outcomes and impacts. The review is pragmatic by design and motivated by the need to offer researchers and stakeholders alike ready access to tools for assessing research partnership outcomes and impacts.

Research questions

The primary research question is: what are the globally available tools for assessing the outcomes and impacts of health research partnerships in the published literature? Our secondary research questions are: what are the nature and scope of this literature, including relevant terminology, study, tool, tool evaluation and partnership characteristics, emergent gaps and future research questions; and what is the feasibility of conducting a systematic review of the identified tools?

Methods

This scoping review was designed to identify and describe tools for assessing the outcomes and impacts of health research partnerships, and is guided by a collaboratively built conceptual framework [ 1 ]. The detailed scoping review protocol [ 52 ] outlining the objectives, inclusion criteria and methods was specified a priori and posted to the Open Science Framework [ 53 ], prior to full text abstraction. Protocol deviations and rationale are detailed in the supplementary file (Additional file 1 : Appendix 2). Expanded methods are provided in the supplementary file (Additional file 1 : Appendix 3).

Search strategy and data sources

An a priori search strategy was developed from relevant keywords, publication indexing and Medical Subject Headings (MeSH) in consultation with a medical research librarian (MVD) (Additional file 1 : Appendix 4). Four electronic health research databases [MEDLINE (OVID), EMBASE, CINAHL Plus, PsycINFO] were searched from inception to 21 October 2018 with two updates (31 December 2019 and 2 June 2021). The search yielded 36 027 unique citations.
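
For illustration only, the following minimal sketch (not the authors' pipeline) shows how citation records exported from multiple databases might be de-duplicated before screening; the field names and the normalization rule are assumptions.

```python
# Minimal de-duplication sketch: collapse records that share a DOI or a normalized title.
# Field names ("doi", "title") and the normalization rule are illustrative assumptions.
import re

def normalize(title):
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalize(rec.get("title", ""))
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"doi": "10.1000/abc", "title": "Partnership outcomes tool"},
    {"doi": "10.1000/abc", "title": "Partnership Outcomes Tool"},   # duplicate by DOI
    {"doi": None, "title": "Measuring research partnerships"},
]
print(len(deduplicate(records)))  # -> 2 unique citations
```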

We defined a health research partnership as ‘…individuals, groups or organizations engaged in collaborative, health research activity involving at least one researcher (e.g., individual affiliated with an academic department, hospital or medical centre), and any partner actively engaged in any part of the research process (e.g., decision or policy maker, health care administrator or leader, community agency, charities, network, patients, industry partner, etc.)’ [ 1 , 2 ]. Tools were defined as ‘instruments (e.g., survey, measures, assessments, questionnaire, inventory, checklist, list of factors, subscales or similar) that can be used to assess the outcome or impact elements or domains of a health research partnership’ [ 1 , 54 ]. An outcome was defined as ‘factor(s) described in the study methods used to determine a change in status as a result of interventions, can be measured or assessed as component(s) of the study, and are not futuristic’, including both process and summative outcomes (adapted from Hoekstra et al., 2018; University of Waterloo, 2018) [ 1 , 41 ]. Impact was defined as ‘any effect, influence on, or change to the economy, society, public policy or services, individuals, teams, organizations, health, the environment, quality of life or academia’ (adapted from Hoekstra et al., 2018; Higher Education Funding Council for England) [ 1 , 42 ] (Table 1 ). Remaining operational terms and definitions are provided in Additional file 1 : Appendix 2 and online [ 1 , 52 ].

Eligibility and screening

We retained studies describing a health research partnership and the development, use and/or assessment of a health research partnership outcome or impact assessment tool (or an element of one, or at least one outcome or impact measurement property [ 49 , 55 ] of a tool) as an aim of the study (Table 2 ).

All title, abstract and full text screening was undertaken independently and in duplicate. In the data abstraction phase, we used a hybrid strategy involving independent abstraction (K.J.M.) and independent validation by a second, trained investigator (M.K., S.S., S.M.) [ 56 ], with all discrepancies resolved by consensus through dual review and discussion at weekly meetings, guided by a pilot-tested tool and coding manual [ 57 , 58 , 59 ]. Variables pertaining to study, tool, partnership and tool evaluation characteristics were abstracted according to the protocol [ 52 ] and Additional file 1 : Appendix 2.

Tool evaluation criteria

We adapted consensus-built criteria developed by Boivin and colleagues to arrive at a final set of 20 criteria and companion scoring rubric [ 28 , 60 ] (Additional file 1 : Appendix 5).

We synthesized key study, tool, tool evaluation and partnership characteristics (Additional file 1 : Appendix 2) using basic descriptive statistics (mean/standard deviation, frequency counts), with tabular presentation in MS Excel [ 61 ] and Stata v13.1 [ 62 ]. We analysed qualitative data in NVivo v12.7 [ 63 ] using an inductive thematic approach [ 64 ] and a descriptive-analytical process for reviews [ 65 ], and reported findings according to relevant guidelines [ 66 , 67 , 68 ].
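
As a rough illustration of this descriptive synthesis step, the sketch below (hypothetical data and variable names, not the authors' analysis files) reproduces the same kind of frequency counts and mean/standard deviation summaries in Python.

```python
# Minimal sketch of the descriptive synthesis step: frequency counts for categorical
# abstraction variables and mean/standard deviation for numeric ones (hypothetical data).
import pandas as pd

abstraction = pd.DataFrame({
    "study_design": ["mixed methods", "cross-sectional", "mixed methods", "case study"],
    "country": ["USA", "Canada", "USA", "UK"],
    "prop_female": [0.70, 0.55, 0.80, 0.60],
})

# Frequency counts for tabular presentation
print(abstraction["study_design"].value_counts())

# Mean and standard deviation for a numeric characteristic
print(abstraction["prop_female"].agg(["mean", "std"]).round(3))
```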

Results

The initial search (31 October 2018) and updates (31 December 2019 and 2 June 2021) generated 36 027 de-duplicated citations; of these, 2784 full text reports were retrieved for evaluation, ultimately yielding 169 studies (166 unique studies with three companion reports). Companion reports comprised published protocols and a tool language translation study. Study citation flow is provided in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagram (Fig.  1 ).

Fig. 1 Scoping review PRISMA study flow diagram

The most common reasons for exclusion were studies lacking tools, or lacking tools that assessed partnership outcomes/impacts (n = 1204), followed by studies in which outcomes and impacts were assessed by another method that did not match the study definition of a tool (e.g., other modalities or methods of assessment, such as focus groups, interviews, or evaluative approaches such as social network analysis) (n = 695). ‘Substantial’ inter-rater agreement [ 69 , 70 ] was achieved at both the L1 title/abstract [Cohen’s κ = 0.66, 95% confidence interval (CI) 0.64–0.67] and L2 full text [Cohen’s κ = 0.74, 95% CI 0.72–0.76] review stages.
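
Inter-rater agreement statistics of this kind can, in principle, be reproduced as follows; this is a minimal sketch with hypothetical screening decisions and an approximate large-sample confidence interval, not the authors' code.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical dual-screening decisions (1 = include, 0 = exclude) from two reviewers
rater_a = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0])
rater_b = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0])

kappa = cohen_kappa_score(rater_a, rater_b)

# Approximate 95% CI using a simplified large-sample standard error for kappa
n = len(rater_a)
po = np.mean(rater_a == rater_b)                      # observed agreement
cm = confusion_matrix(rater_a, rater_b) / n           # proportions table
pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1))          # chance agreement
se = np.sqrt(po * (1 - po) / (n * (1 - pe) ** 2))
print(f"kappa = {kappa:.2f}, 95% CI ({kappa - 1.96*se:.2f} to {kappa + 1.96*se:.2f})")
```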

Study characteristics

Included studies were distributed across a broad scope of peer-reviewed journals. Just under half of included studies (75, 45%) were clustered in 10 journals, with several smaller clusters located in three others (9, 5%). The remainder (82) were widely dispersed across 72 other journals and a single government report.

In total, 24 countries were represented by eligible studies; most studies were located in minority countries. Minority countries refer to locations where a minority of the global populace resides; the term replaces the outdated ‘developed’ nations (Additional file 1 : Appendix 2). We found 157 single-site and nine multi-site studies in the data set. Of the single-site studies, 109 (69%) originated in North America: 86 from the United States and 23 from Canada (79% and 21%, respectively). A further 36 studies (23%) originated from Europe, including the United Kingdom (21), Ireland (5), The Netherlands (4), Germany (2), Spain (2), Sweden (1) and Denmark (1). A smaller number of studies originated from Australasia (12, 8%) [Australia (10), New Zealand (1), Taiwan (1)], and we located one eligible single-site study in the Middle East (1, 1%). Of the nine multi-site studies identified (5%), four involved minority countries (Canada, Australia, New Zealand, United States, Mexico), leaving a very small proportion of the literature originating from majority countries, including South America (Argentina, Bolivia, Brazil, Chile, Colombia, Peru), African nations (South Africa, Uganda, Ghana) and a single site in the Caribbean (Saint Lucia). With only one exception, no studies originated from majority countries alone; where majority countries were involved, all were partnered with minority countries. Majority countries refer to locations where the majority of the global populace resides; the term replaces the outdated ‘developing’ nations (Additional file 1 : Appendix 2).

Additional file 2 : Table S1 reports key characteristics of included studies. More than half of included studies were published after 2015 (91, 55%); there was a steady increase in the eligible health research partnership literature over the last 30 years (Additional file 1 : Appendix 6).

All but one eligible study (165, 99%) was published in English; however, we also identified six studies containing bilingual English–French (2) [ 71 , 72 , 73 ] and English–Spanish (4) [ 36 , 74 , 75 , 76 ] tools, and four other studies with German [ 77 ], French [ 78 ], Spanish [ 79 ] and Dutch [ 80 ] language tools.

Diverse health sub-domains were represented by included studies (Fig.  2 ). We coded 221 health sub-domains, organized into seven themes, including disease-specific (71, 32%), health promotion and prevention (43, 22%), special populations (38, 17%), partnerships (21, 10%), health services research (18, 8%), health equity (17, 8%), and community health and development (13, 6%) studies. The most frequently occurring study designs were mixed methods designs (79, 48%), cross-sectional (58, 35%) and case or multiple case study designs (16, 10%). The remaining study designs comprised nested, descriptive, pre-post or post-test, Delphi and qualitative surveys (13, 9%). The methods employed in these studies were primarily mixed (122, 73%), followed by quantitative (38, 23%) and qualitative (6, 4%) methods. Of the mixed methods utilized, 88% (106) were mixed quantitative–qualitative, 10% (12) were multi-qualitative methods and 3% (4) were multi-quantitative methods.

Fig. 2 Health sub-domains and key sub-domain cluster. *Where necessary, ≥ 1 sub-domain code per study was allowed, resulting in 221 sub-domain codes across n = 166 studies. STBBI sexually transmitted and blood borne infections, KT knowledge translation, IKT integrated knowledge translation, HTA health technology assessment

Most studies described multiple activities pertaining to one or more aspects of tool development (101, 61%), modification (52, 31%), use (142, 86%), evaluation (26, 16%) and validation (49, 30%). Conceptually, 119 studies (72%) cited an underlying framework or model, 12 (7%) generated a new framework or model during the study, and nine (5%) both drew on an existing framework or model and generated a new one. Most studies reported an evaluation of both outcomes and impacts (94, 57%), followed by outcomes alone (61, 37%) and impacts alone (11, 6%); however, we note these terms were frequently interchanged within and among study reports.

The sex of individuals filling out partnership assessment tools was reported in 33% of studies (54), and in 7% (11) reporting was incomplete. In a further 4% of studies (6), sex was requested but not reported. When sex was reported, the overall crude mean proportion of female participants across 54 studies was 67.1% [standard deviation (SD) 0.15]. A weighted mean average could not be calculated due to the frequent absence of denominator data. Other key social variables were not consistently available for reporting.
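
The distinction drawn here between a crude and a denominator-weighted mean proportion can be shown with a short sketch; the proportions and denominators below are hypothetical.

```python
# Minimal sketch (hypothetical numbers) contrasting the crude mean of study-level female
# proportions with a weighted mean, which requires per-study denominators (often missing).
import numpy as np

prop_female = np.array([0.70, 0.55, 0.80, 0.60])        # reported proportion per study
n_respondents = np.array([40, 120, 25, 200])             # per-study denominators

crude_mean = prop_female.mean()                                   # unweighted across studies
weighted_mean = np.average(prop_female, weights=n_respondents)    # only possible with denominators
print(f"crude = {crude_mean:.3f}, weighted = {weighted_mean:.3f}")
```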

Tool characteristics

Additional file 2 : Table S2 summarizes key characteristics of the tools in included studies. Overall, 205 tools were identified; of these, surveys and questionnaires were the most frequently reported tool types (100, 49% and 66, 32%, respectively). We noted that the terms survey and questionnaire were frequently interchanged within study reports; when this occurred, we selected the term most frequently associated with the methodological description of the tool. Scales were the third most frequent type of tool (15, 7%), and the remaining tools comprised indices, checklists, rubrics, criteria and logs (11, 5%). We also identified a number of studies that employed toolkits (multiple tools used in combination or as part of a process) (13, 6%) to assess health research partnership outcomes and impacts (Table 4 ). More than two thirds of tools were underpinned by a conceptual framework or model (144, 70%), but very few cited a review (e.g., a synthesis or other review, or a search of more than one electronic database with a reported time frame) as underlying evidence informing the tool (35, 17%). In slightly more than a third of studies, we were able to find explicit reference to tool validity (63, 38%) and reliability evidence (59, 36%), but most studies involved self-reported measures of perception (161, 97%).

There was a high degree of shared provenance among the tools. Many tools referred to the adoption or modification of components from one or more pre-existing tools. From the studies that reported tool provenance, we were able to identify several distinct clusters of tools comprising derivations, modifications or applications of a single tool. There were eight clusters (70 studies) linked to early tools and related research conducted by Israel, Lantz, Schulz and colleagues (17) [ 15 , 81 , 82 , 83 , 84 , 85 ], Wallerstein and colleagues (13) [ 19 , 86 , 87 , 88 , 89 , 90 ], Butterfoss, Goodman, Wandersman and colleagues (10) [ 46 , 91 , 92 , 93 , 94 , 95 , 96 , 97 , 98 ], Weiss, Lasker and colleagues (8) [ 99 , 100 , 101 , 102 , 103 ], Feinberg, Brown, Chilenski and colleagues (6) [ 104 , 105 , 106 , 107 , 108 , 109 ], Abelson and colleagues (6) [ 110 , 111 , 112 , 113 ], Forsythe and colleagues and the Patient-Centered Outcomes Research Institute (PCORI) (5) [ 114 , 115 , 116 ], and Jones, Barry and colleagues (5) [ 117 , 118 , 119 ]. We also noted significant cross-referencing among the clusters.

In more than a third of studies, the specific partner group affiliation of those filling out tools was not provided (61, 37%). Where partners were defined, we sorted the 222 reported targets into 13 different partnering groups. The most frequently described partner groups targeted by tools were researchers (68, 31%), followed by patients and the public (54, 24%), community members (24, 11%), health care systems stakeholders (21, 9%), coalition staff (15, 7%), partner organizations (15, 7%) and research staff (14, 6%). The remaining stakeholders comprised government (3); policymakers, education sector staff, research funders and reviewers (2 each); and decision makers and industry partners (1 each). In 75% of eligible studies, two or more partner groups were targeted by health research partnership outcomes and impacts tools; few studies targeted only a single partner group.

Partnership characteristics

As anticipated, we were able to identify an array of research partnership approaches from authors’ partnership descriptions (Table 3 ). Community-based participatory research approaches arose most frequently in the data set, and included both CBPR (47, 23%) and organizational-based participatory research (OBPR) (3, 1%). General partnership approaches were the next most frequent category (32, 16%), followed by patient and public involvement (PPI) (26, 13%) and coalitions (22, 11%).

We identified several smaller approach clusters pertaining to participatory research [participatory action research (PAR), action research (AR), community-based participatory action research (CBPAR), and participatory evaluation] (17, 8%); patient and public engagement (13, 6%), community engaged research (CEnR or CER) (10, 5%), consumer involvement in research (9, 4%), community engagement (8, 4%), co-research (8, 4%), integrated knowledge translation (IKT) (7, 3%), and others [participatory and embedded implementation, practice-based research network (PBRN) and inclusive research] (4, 2%). The diversity of partnership approach descriptors further reveals a rich and broad set of approaches in the included literature (Table 3 ).

The complexity of and overlap in partnership approaches was further revealed when we examined key terms used to describe partnerships (Table 3 ). We collated unique key terms used by authors to describe health research partnerships and synthesized these by approach. As depicted in the unique terms column, there were 256 total terms used, with high overlap of terms between the 12 different approach domains. The coalition and partnerships domains contained the highest number of terms (50, 20% and 45, 18%, respectively), followed by participatory research (30, 12%) and patient and public involvement (24, 9%).

In almost half of included studies the initiating partner was researchers (74, 45%), followed by multi-stakeholder partnerships (16, 10%), and government departments, ministries and agencies (13, 8%) (Additional file 1 : Appendix 7). The remaining partnerships were initiated by funders (6, 4%), not-for-profit organizations (4, 2%), foundations (3, 2%), community members and service users (2, 1% each), and clinicians and academic institutions (1, 1% each). In almost a third of included studies, the initiating partner was not reported (44, 27%). Of 260 reported partnership funding sources, government (including ministries, funding agencies, and departments) was by far the most frequent funder of health research partnerships (161, 62%), followed by non-profit organizations (25, 9%), foundations (22, 8%) and academic institutions (20, 8%). The remainder (16, 6%) were funded by endowments and healthcare organizations (5 each), industry (4), and regulatory bodies (2) (Table 3).

Importantly, 124 studies (75%) reported some level of co-production between researchers and partners in one or more phases of the research process.

Tool evaluation criteria for included studies

An inventory of tools and their domain and overall percentage scores, based on the modified, pragmatic health research partnership tool evaluation criteria (Additional file 1 : Appendix 5), is appended (Additional file 1 : Appendix 8). In total, we scored 205 tools, including 13 toolkits; the distributions of overall percentage scores and domain-specific scores are shown in Figs.  3 and 4 . Mean domain scores were highest for tool comprehensiveness (4.01, SD 0.75), followed by tool usability (3.40, SD 1.25) and inclusion of the partner perspective (3.16, SD 0.93). The lowest mean domain score was for scientific rigor (2.21, SD 1.34). The mean overall tool score across all four domains, for the entire set of tools, was 63.98% (SD 14.04).
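
As an illustration of how per-domain and overall percentage scores of this kind might be derived, the following sketch uses an assumed four-domain rubric with hypothetical maxima and scores; it is not the published scoring rubric.

```python
# Minimal sketch (hypothetical rubric values): per-domain means across tools and an
# overall percentage score per tool. Domain names and maxima are assumptions.
import numpy as np

domains = ["comprehensiveness", "usability", "partner perspective", "scientific rigour"]
max_per_domain = np.array([5, 5, 5, 5])      # assumed maximum score per domain

# rows = tools, columns = domains (illustrative scores only)
scores = np.array([
    [4, 3, 3, 2],
    [5, 4, 4, 3],
    [3, 2, 3, 1],
])

mean_domain = scores.mean(axis=0)                                # mean score per domain
overall_pct = scores.sum(axis=1) / max_per_domain.sum() * 100    # overall % per tool

for name, m in zip(domains, mean_domain):
    print(f"{name}: mean {m:.2f}")
print("overall % per tool:", np.round(overall_pct, 1))
```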

Fig. 3 Health research partnership tool evaluation criteria scores (n = 205* tool scores). *Studies reporting multi-tools intended for simultaneous use were captured as toolkits and given a single, combined score

Fig. 4 Health research partnership tool evaluation criteria scores, by domain

Synthesis of documented future research questions, evidence gaps and key recommendations

Most studies posed questions for future research, described evidence gaps and/or provided key recommendations related to outcomes and impacts assessment in their reports. We synthesized these, noting a high degree of overlap between future questions, evidence gaps and key recommendations, and hence, these findings were tabulated to facilitate their cross-referencing (i.e., study authors provided key recommendations that may help address some of the reported research questions and gaps). This aspect of the synthesis provides a rich series of research questions to guide the next steps in health research partnership assessment, tool development and partnership research in general (Additional file 1 : Appendix 9).

Of the total number of reported research questions identified (325), a large number pertained to the further development and evolution of tools (80), including psychometric testing (30), tool testing (35), and tool and assessment process refinements and adaptations (11). The next most frequent type of research question pertained to partnership measurement and methods (46). A series of other research questions were identified, including the role of partnership in supporting sustainment (14), the comparative effectiveness of partnership approaches (12), the use of theory (i.e., to guide evaluation, understand the influence of partnerships, and expand and test conceptual frameworks and principles) (8), the evolution of partnerships over time and the role of leadership in partnerships (7 each), the role of context (6), and optimizing implementation and addressing priority population needs and concerns through partnership approaches (5 each). In sum, there is a significant overall call to address ‘how, how much, in whom, why (or why not) and under which circumstances’ questions for research partnerships to better understand how they develop, operate, achieve success and are best sustained.

Reported research gaps (Additional file 1 : Appendix 10) were fewer in number but were closely aligned with the identified future research questions. The gaps comprised the need for objective metrics and for establishing conceptual underpinnings and structures supporting public and patient involvement. There was a single, sentinel reference regarding the need to advance partnership research as a field (i.e., uncovering the contexts and mechanisms of engagement as a gateway to understanding impact), and one reference to health systems strengthening (i.e., the need to build capacity for systems thinking). Both gaps align well with the general trend of using partnership to aid evidence uptake and use.

We also identified 54 key recommendations for the field of health research partnership outcomes and impacts assessment that may be helpful to investigators seeking direction for research questions and addressing gaps (Additional file 1 : Appendix 11). Key recommendations included structural and other supports for research partnerships (26), sustainability planning (5), terminology (4) and rigorous evaluation of partnerships (1).

Overall, we were able to identify multiple studies containing tools for the assessment of health research partnership outcomes and impacts in this scoping review [ 56 ]; a subset of these reported psychometric and pragmatic characteristics, hence we anticipate that a future systematic review on these tools and tool properties is feasible.

A synopsis of key findings from this large-volume scoping review is outlined in Table 4 . Briefly, we identified 166 unique papers and three companion reports containing 205 partnership assessment tools. Most studies were published in English, originated in North America, were published after 2015 and were widely dispersed in the literature. Most studies were multi-purpose, featuring mainly mixed methods designs and the use of mixed methods. There were four main partnership approaches, and partnerships were primarily initiated by researchers and funded by government departments, ministries and funding agencies. Key terms were often interchanged and inconsistently defined and applied. Overall, identified tools were moderately comprehensive and usable, with lesser integration of partner perspectives. The scientific rigour of tools was low and few had evidence of psychometric testing. The focus of emerging research questions and recommendations was on tool evolution and better understanding partnership measurement.

Overall, the findings suggest that this research domain and its tools are still nascent and actively evolving, as evidenced by high variation in terminology, concept definitions and their application. Numerous terms were frequently interchanged and mixed, obscuring the measurement and comparison of key concepts.

Our findings aligned well with those of other authors noting a lack of quantitative study designs and methods [ 28 , 29 , 30 , 31 , 120 ] across multiple partnership approaches and populations. The number and diversity of solely quantitative designs and methods in our study was also low. However, compared with earlier reviews [ 44 , 49 ], mixed methods were more common. It is unclear whether this increase reflects deliberate efforts to move beyond more traditional, qualitative evaluation approaches by integrating elements of quantitative partnership measurement (e.g., mixed methods approaches), or simply a broader societal trend towards quantitative assessment and the pursuit of demonstrable, measurable impacts from research investments [ 121 ].

Our findings were also consistent with recommendations encouraging the development and use of objective measures (rather than proxy or self-reported measures) to assess partnership outcomes and impacts [ 28 , 32 , 33 ] to facilitate comparisons. Almost all included studies in this review involved self-reported measures of perception.

The location and language of the literature are explained by the geographic origins of partnership traditions and methods. The high dispersion of the literature can be traced back to the independent evolution of multiple health research partnership approaches over the past half century [ 3 ] and the lack of consolidation across partnership traditions [ 3 ].

The developmental state of partnership research and measurement is at least partly explained by studies’ purpose statements; most focussed on understanding and improving individual partnerships using fit-for-purpose tools. Only a small subset of studies had high scientific rigour domain scores, and few focussed specifically on tool development, testing or evaluation. While these factors are at least partly a function of the complexities of partnership assessment, the challenges associated with tool development cannot be overstated [ 122 ].

The development of high quality, psychometrically and pragmatically robust tools is a function of the unique resource, time and expertise demands of tool development [ 122 ]. These requirements are often underestimated, and a lack of attention to them can slow scientific measurement and innovation [ 122 ]. Based on our synthesis of future research questions, existing knowledge gaps and recommendations, a focus on measurement, methods and tool development, testing and refinement is a necessary next step in advancing the field.

Despite differences in review scope (e.g., populations, partnership traditions, databases, search terminology, effects), our findings were similar to those of other reviews on broad issues related to diverse terminology, location, accessibility of tools and publication dispersion in the health research partnership domain [ 13 , 28 , 29 , 33 , 49 , 123 ]. However, more detailed comparisons with these and other existing reviews directly related to partnership assessment tools and their characteristics revealed complexities. We found only 5%–50% overlap of identified tools when we compared our findings with pre-existing reviews pertaining to: (a) patient and public involvement evaluation tools (6 of 27 tools overlapped with our study, 22%) [ 28 ], (b) an overview of reviews pertaining to research co-production impact assessment tools (4 of 75 tools overlapped with our study, 5%) [ 29 ], (c) a review of CBPR process and outcome measurement tools (14 of 46 tools overlapped with our study, 30%) [ 49 ], (d) a review of success in long-standing CBPR partnerships (tools in 3 of 16 relevant partnerships overlapped with our review, 19%) [ 51 ] and (e) a review of organizational participatory research (OPR) health partnerships (3 of 6 tools overlapped with our review, 50%) [ 50 ]. Overall, only 30 of a possible 170 tools (18%) from these other reviews overlapped with the tools identified in our review.

In each case, the lack of overlap can be accounted for by fundamental differences in the partnership concept with linked search terms and scope (e.g., breadth of literature, search time frame, inclusion of research domains beyond health, and different measured effects).

More specifically, Boivin and colleagues’ review [ 28 ] was limited to patient/public-focussed evaluation tools for assessing engagement in health system decision making and health research. It employed narrower search terms over a shorter time frame (1980–2016), but accessed an additional database (Cochrane Database of Systematic Reviews) and grey literature (Google) sources [ 28 ]. The MacGregor overview of reviews [ 29 ] examined impacts, but also differed by time frame, key partnership terminology and domain scope: seven of its eight included reviews were published since 2015, four of these were out of scope, and only 17.2% of the primary studies were published since 2010 (in our review, 55% of the primary literature was published after 2015). Sandoval and colleagues’ review [ 49 ] used a broader database set and grey literature (PubMed, SciSearch, SocioFile, Business Source Premier, PsycINFO, Communication and Mass Media Complete and a Google key term search). Brush and colleagues’ review [ 51 ] identified studies and tools used to evaluate partnerships over a more limited time span (2007–2017), was limited to CBPR terms and used different databases (PubMed, Scopus, CINAHL). Finally, Hamzeh and colleagues’ review [ 50 ] identified three (of six, 50%) overlapping tools using comprehensive OPR search terms, a broader database scope and multiple bibliographic and grey literature sources.

In each case, subtle differences in partnership terminology and scope generated very different results—and very little overlap with the tools we identified in our review. Nonetheless, comparisons with these other reviews revealed a multitude of partnership assessment tools, albeit variably defined, in this research domain. It was noteworthy that despite these clear differences in terminology and scope, several key, overarching messages were recurrent and similar: (a) there is a need to advance quantitative measurement, tool development and psychometric and pragmatic tool testing, and (b) there is a need to better understand partnerships, and how to monitor, measure and optimize them and their outcomes and impacts. In our review, these priorities were further evidenced in the partnership tool development and measurement and partnership themes gleaned from our synthesis of reported research questions, evidence gaps and key recommendations, combined (Additional file 1 : Appendices 9–11). Authors of studies included in our review identified the need to raise awareness, develop knowledge and competency in partnership working, establish clear terminology and definitions, and to advance specific roles for researchers, funders and partnership stakeholders to support partnership establishment, maintenance, measurement and sustainment. These priorities align well with calls for dedicated investment to systematically and rigorously measure partnership outcomes and impacts [ 12 , 124 , 125 , 126 , 127 ].

In sum, there is increased use and prominence of partnership approaches as a mechanism to achieve more user-relevant outcomes and impacts. In this way, partnership approaches are particularly relevant to the fields of knowledge translation and implementation science [ 1 , 7 , 24 , 25 , 33 , 125 , 128 , 129 , 130 , 131 ]. Addressing the aforementioned fundamental issues related to partnership conceptualization, measurement and optimization will be required for the overall advancement of the field of partnership research and its application.

Strengths and limitations

This review is unique in its attempt to locate literature and health research partnership outcomes and impacts assessment tools spanning multiple health research partnership approaches and partners, in varied contexts, within the health domain. To our knowledge, this is the largest review of its kind, traversing multiple traditions and partner groups in the health research partnerships domain. Uniquely, our review strategy employed terms spanning multiple research partnership approaches and partner types, from database inception, and without restrictions (e.g., by study design, language, research domain or time frame). We followed strict methodological protocols at each review stage and generated detailed assessments of tool and partnership characteristics that can assist researchers in choosing, applying and considering testing and refining tools.

The location and retrieval of relevant literature and tools in this review were limited by documented challenges relating to locating literature across multiple research partnership traditions, diverse and inconsistent terminology, literature dispersion and journal limits (e.g., space limits, lack of open access and lack of appendices for tools). We attempted to mitigate these challenges by using a pre-tested and inclusive terminology catchment for key search terms, by searching four key databases from inception, and by making at least two attempts to reach investigators and locate tools. Nonetheless, a significant number of inquiries went unanswered or bounced back, and many tools were unavailable from publication files or even upon direct researcher contact. As other authors attest, tool accessibility remains problematic [ 28 ] and may preclude tool use in this research domain.

Another limitation of this review was the lack of detail pertaining to the assessment of the health research partnerships present in published abstracts and full text reports. We purposefully retained studies for full text review if their eligibility was uncertain due to ambiguity in the title/abstract screening phase but note the burden of this approach in a large evidence review. Despite this effort, a general lack of evaluative detail regarding health research partnerships persisted in the full text articles. Furthermore, when health research partnership and tool assessment outcomes occurred as secondary (or as inexplicit) research objectives in published reports, reporting detail was frequently lacking, exacerbating abstraction challenges. Also, studies were often multi-purpose, mixing multiple methods. While beneficial for research purposes, this posed challenges for data abstraction because the degree to which mixed methods were integrated in the results varied greatly. At times, this made differentiating partnership, tool and tool assessment findings challenging.

Future research

There is a need for research into both the measurement and the partnership approach facets of this growing research field. First, it is important to recognize that measurement is a key precursor to advancing partnership research and partnership measurement research. The combined complexity of partnership assessment and tool development will require dedicated resources, time spans and researcher expertise that will need to be built [ 122 ]. Given the number of existing tools, future research should focus on both the psychometric and pragmatic testing of fit-for-purpose and other tools and/or their components in different contexts. The diversity of approaches, and the volume and variable quality of tools in this literature offers significant potential to consolidate, share, apply, test and compare knowledge of partnerships and partnership measurement across traditions. Consensus building and ongoing dialogue to compare and contrast the different approaches, terminologies and definitions will be important next steps, as reflected by our synopses (Additional file 1 : Appendices 9–11). It is unclear whether partnerships vary in distinct ways (e.g., by partner, partnership type, context and/or partnership tradition) that necessitate different (and/or fit-for-purpose) tools or tool components or whether standardized tools can be feasibly developed and applied; this is a key area of future research. Finally, our understanding of the effects of health research partnerships is nascent and will require focussed measurement and adequate evaluation time spans to optimize health research partnerships, assessment measures and their outcomes and impacts.

Conclusions

This large-volume scoping review extends our understanding of the characteristics, types and accessibility of tools to assess the outcomes and impacts of health research partnerships. Few of the identified tools overlapped with those identified in previous reviews, but their characteristics were similar in that most were tailored for specific partnerships and lacked scientific rigour and evidence of psychometric testing. Our synthesis of tool, tool evaluation and partnership characteristics confirmed the need for dedicated efforts and resources to study health research partnerships and to evaluate them systematically using valid, reliable and pragmatic tools that meet partner needs. Investing in research to better understand the measurement of research partnership outcomes and impacts remains a key priority for this field.

Scoping review and coordinated multicentre team protocol registrations

Open Science Framework (Scoping Review Protocol): https://osf.io/j7cxd/

Open Science Framework (Coordinated Multicentre Team Protocol): https://osf.io/gvr7y/

Coordinated Multicenter Team Protocol Publication: https://systematicreviewsjournal.biomedcentral.com/articles/10.1186/s13643-018-0879-2

Availability of data and materials

The study search strategy, abstraction tools and bibliographic tool index will be available through the Open Science Framework upon completion of the research and publication of findings. Data generated and/or analysed during the current study will be made available upon reasonable request from the author, after completion of the dissertation research and publication of findings.

Hoekstra F, Mrklas KJ, Sibley K, Nguyen T, Vis-Dunbar M, Neilson CJ, Crockett LK, Gainforth HL, Graham ID. A review protocol on research partnerships: a coordinated multicenter team approach. Syst Rev. 2018;7(217):1–14.


Drahota A, Meza RD, Brikho B, Naaf M, Estabillo JA, Gomez ED, Vejnoska SF, Dufek S, Stahmer AC, Aarons GA. Community–Academic partnerships: a systematic review of the state of the literature and recommendations for future research. Milbank Q. 2016;94(1):163–214.


Nguyen T, et al. How does integrated knowledge translation (IKT) compare to other collaborative research approaches to generating and translating knowledge? Learning from experts in the field. Health Res Policy Syst. 2020;18(1):35.


Greenhalgh T, Jackson C, Shaw S, Janamian T. Achieving research impact through co-creation in community-based health services: literature review and case study. Milbank Q. 2016;94(2):392–429.

Jull J, Giles A, Graham ID. Community-based participatory research and integrated knowledge translation: advancing the co-creation of knowledge. Implement Sci. 2017;12(150):1–9.

Staniszewska S, Brett J, Simera I, Seers K, Mockford C, Goodlad S, et al. GRIPP2 reporting checklists: tools to improve reporting of patient and public involvement in research. BMJ. 2017;358:j3453.


Luger TM, Hamilton AB, True G. Measuring community-engaged research contexts, processes and outcomes: a mapping review. Milbank Q. 2020;98(2):493–553.

Goodman MS, Ackermann N, Bowen DJ, Thompson V. Content validation of a quantitative stakeholder engagement measure. J Community Psychol. 2019;47:1937–51.

Price A, Clarke M, Staniszewska S, Chu L, Tembo D, Kirkpatrick M, Nelken Y. Patient and public involvement in research: a journey to co-production. Patient Educ Couns. 2021;105(4):1041–7.


Joss N, Keleher H. Partnership tools for health promotion: are they worth the effort? Glob Health Promot. 2010;18(3):8–14.

Jagosh J, Macaulay AC, Pluye P, Salsberg J, Bush PL, Henderson J, Greenhalgh T. Uncovering the benefits of participatory research: implications of a realist review for health research and practice. Milbank Q. 2012;90(2):311–46.

Goodman MS, Sanders Thompson VL, Arroyo Johnson C, Gennarelli R, Drake BF, Bajwa P, Witherspoon M, Bowen D. Evaluating community engagement in research: quantitative measure development. J Community Psychol. 2017;45(1):17–32.

Bowen DJ, Hyams T, Goodman M, West KM, Harris-Wai J, Yu JH. Systematic review of quantitative measures of stakeholder engagement. Clin Transl Sci. 2017;10:314–36.

Stephens R, Staniszewska S. Research involvement and engagement: reflections so far and future directions. Res Involv Engagem. 2017;3:24.

Israel BA, Schulz AJ, Parker EA, Becker AB. Review of community-based research: assessing partnership approaches to improve public health. Annu Rev Public Health. 1998;19:173–202.


Roussos ST, Fawcett SB. A review of collaborative partnerships as a strategy for improving community health. Annu Rev Public Health. 2000;21:369–402.

El Ansari W, Phillips CJ, Hammick M. Collaboration and partnerships: developing an evidence base. Health Soc Care Community. 2001;9(4):215–27.

Israel BA. Methods in community-based participatory research for health. San Francisco, CA: Josse-Bass Inc; 2005.

Minkler M, Wallerstein N, editors. Community-based participatory research for health: from process to outcomes. San Francisco, CA: Jossey-Bass; 2010.

Wallerstein N, Duran B. Community-based participatory research contributions to intervention research: the intersection of science and practice to improve health equity. Am J Public Health. 2010;100:40–6.

Nguyen T, Graham ID, Mrklas KJ, et al. How does integrated knowledge translation (IKT) compare to other collaborative research approaches to generating and translating knowledge? Learning from experts in the field. Health Res Policy Sys. 2020;18:35. https://doi.org/10.1186/s12961-020-0539-6 .


Goodman MS, Sanders Thompson VL. The science of stakeholder engagement in research: classification, implementation and evaluation. Transl Behav Med. 2017;7(3):486–91.

Tigges BB, Miller D, Dudding KM, Balls-Berry JE, et al. Measuring quality and outcomes of research collaborations: an integrative review. J Clin Transl Sci. 2019;3:261–89.

Vat LE, Finlay T, Schuitmaker-Warnaar TJ, et al. Evaluating the ‘return on patient engagement initiatives’ in medicines research and development: a literature review. Health Expect. 2020;23:5–18.

Brett J, Staniszewska S, Mockford C, Herron-Marx S, Hughes J, Tysall C, Suleman R. Mapping the impact of patient and public involvement on health and social care research: a systematic review. Health Expect. 2012;17:637–50.

Hagedoorn J, Link AN, Vonortas NS. Research partnerships. Res Policy. 2000;29:567–86.

King G, Servais M, Forchuk C, Chalmers H, Currie M, Law M, Specht J, Rosenbaum P, Willoughby T, Kertoy M. Features and impacts of five multidisciplinary community-university research partnerships. Health Soc Care Community. 2010;18(1):59–69.


Boivin A, L’Esperance A, Gauvin FP, Dumez V, Maccaulay AC, Lehoux P, Abelson J. Patient and public engagement in research and health system decision making: a systematic review of evaluation tools. Health Expect. 2018;21(6):1075–84.

MacGregor S. An overview of quantitative instruments and measures for impact in co-production. J Prof Capital Community. 2020;6(2):163–83.

Milat AJ, Bauman AE, Redman S. A narrative review of research impact assessment models and methods. Health Res Policy Syst. 2015;13(18):1–7.

Staniszewska S, Herron-Marx S, Mockford C. Measuring the impact of patient and public involvement: the need for an evidence base. Int J Qual Health Care. 2008;20(6):373–4.

Daigneault PM. Taking stock of four decades of quantitative research on stakeholder participation and evaluation use: a systematic map. Eval Program Plann. 2014;45:171–81.

Hoekstra F, Mrklas KJ, Khan M, et al. A review of reviews on principles, strategies, outcomes and impacts of research partnerships approaches: a first step in synthesising the research partnership literature. Health Res Policy Sys. 2020;18:51. https://doi.org/10.1186/s12961-020-0544-9 .

Zakocs RE, Edwards EM. What explains community coalition effectiveness? A review of the literature. Am J Prev Med. 2006;30:351–61.

University of New Mexico Center for Participatory Research. Research Projects: Center for Participatory Research. 2022. Available from: https://cpr.unm.edu/research-projects/index.html . Accessed 4 Jul 2022.

Duran B, Oetzel J, Magarati M, et al. Toward health equity: a national study of promising practices in community-based participatory research. Prog Community Health Partnersh Res Educ Act. 2019;13(4):337–52.

Oetzel JG, Wallerstein N, Duran B, Sanchez-Youngman T, Woo K, Wang J, et al. Impact of participatory health research: a test of the community-based participatory research conceptual model. Biomed Res Int. 2018;1:7281405.

Boursaw B, Oetzel JG, Dickson E, et al. Scales of practices and outcomes for community-engaged research. Am J Community Psychol. 2021. https://doi.org/10.1002/ajcp.12503 .

Lucero JE, Boursaw B, Eder M, Greene-Moton E, Wallerstein N, Oetzel JG. Engage for equity: the role of trust and synergy in community-based participatory research. Health Educ Behav. 2020;47(3):372–9.

King G, Servais M, Kertoy M, Specht J, Currie M, Rosenbaum P, Law M, Forchuk C, Chalmers H, Willoughby T. A measure of community members’ perceptions of the impacts of research partnerships in health and social services. Eval Program Plann. 2009;32:289–99.

University of Waterloo. Research Ethics: Definition of a health outcome. 2018. Available from: https://uwaterloo.ca/research/office-research-ethics/research-human-participants/pre-submission-and-training/human-research-guidelines-and-policies-alphabetical-list/definition-health-outcome . Accessed 7 Mar 2018.

Higher Education Funding Council for England: Research Excellence Framework 2014. Assessment framework and guidance on submissions 2011. 2014. http://www.ref.ac.uk/2014/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/GOS%20including%20addendum.pdf . Accessed 14 Nov 2017.

Slattery P, Saeri AK, Bragge P. Research co-design in health: a rapid overview of reviews. Health Res Policy Syst. 2020. https://doi.org/10.1186/s12961-020-0528-9 .

Granner ML, Sharpe PA. Evaluating community coalition characteristics and functioning: a summary of measurement tools. Health Educ Res Theory Pract. 2004;19(5):514–32.

Kothari A, McCutcheon C, Graham ID, for the iKT Research Network. Defining integrated knowledge translation and moving forward: a response to recent commentaries. Int J Health Policy Manag. 2017;6:1–2.

Butterfoss FD, Goodman RM, Wandersman A. Community coalitions for prevention and health promotion. Health Educ Res. 1993;8(3):315–30.

Rogers JD, Bozeman B. ‘Knowledge value alliances’: an alternative to the R and D project focus in evaluation. Sci Technol Human Values. 2001;26:23–55.

Staley K. Exploring Impact: public involvement in NHS, public health and social care research. 2009: Eastleigh, UK. 116 pp.

Sandoval JA, Lucero J, Oetzel J, Avila M, Belone L, Mau M, Pearson C, Tafoya G, Duran B, Iglesias Rios L, Wallerstein N. Process and outcome constructs for evaluating community-based participatory research projects: a matrix of existing measures. Health Educ Res. 2012;27(4):680–90.

Hamzeh J, Pluye P, Bush PL, Ruchon C, Vedel I, Hudon C. Towards assessment for organizational participatory research health partnerships: a systematic mixed studies review with framework synthesis. Eval Program Plann. 2018;73:116–28.

Brush BL, Mentz G, Jensen M, Jacobs B, Saylor KM, Rowe Z, Israel BA, Lachance L. Success in longstanding community based participatory research (CBPR) partnerships: a scoping literature review. Health Educ Behav. 2019;47(4):556–68.

Mrklas KJ, et al. Open Science Framework file: towards the development of a valid, reliable and acceptable tool for assessing the impact of health research partnerships (Protocols). 2021 [19 April 2021; accessed 23 November 2021]. Available from: https://mfr.ca-1.osf.io/render?url=https://osf.io/j7cxd/?direct%26mode=render%26action=download%26mode=render .

Foster ED, Deardorff A. Open science framework (OSF). J Med Library Assoc (JMLA). 2017;105(2):203–6.

Mrklas KJ. Towards the development of a valid, reliable and acceptable tool for assessing the impact of health research partnerships (PhD dissertation thesis proposal). 2018, University of Calgary: Calgary, Canada. 119 pp.

Terwee CB, de Vet HCW, Prinsen CAC, Mokkink LB. Protocol for systematic reviews of measurement properties. 2011. Available from: https://fdocuments.net/document/protocol-for-systematic-reviews-of-measurement-properties.html . Accessed 24 Feb 2022.

Tricco AC, Lillie E, Zarin W, O’Brien K, Colquhoun H, Kastner M, Levac D, Ng C, Pearson Sharpe J, Wilson K, Kenny M, Warren R, Wilson C, Stelfox HT, Straus SE. A scoping review on the conduct and reporting of scoping reviews. BMC Med Res Methodol. 2016;16(15):1–10.

Armstrong R, Hall BJ, Doyle J, Waters E. ‘Scoping the scope’ of a cochrane review. J Public Health. 2011;33(1):147–50.

Valaitis R, Martin-Misenter R, Wong ST, et al. Methods, strategies and technologies used to conduct a scoping literature review of collaboration between primary care and public health. Prim Health Care Res Dev. 2012;13(3):219–36.

Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol Theory Pract. 2005;8(1):19–32.

Centre of Excellence on Partnership with Patients and the Public (CEPPP). Patient and Public Engagement Evaluation Toolkit. 2021.  https://ceppp.ca/en/evaluation-toolkit/ . Accessed 23 Nov 2021.

Microsoft Corporation. Microsoft Excel for Mac 2021 (version 21101001). Microsoft Corporation; 2021.

StataCorp LP. Stata 13.1 Statistics/Data Analysis Special Edition. College Station, TX: StataCorp LP; 2013.

QSR International. NVivo 12 for Mac. New York, USA: QSR International; 2019.

Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101.

Pawson R. Evidence-based policy: in search of a method. Evaluation. 2002;8(2):157–81.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

Tong A, Flemming K, McInnes E, Oliver S, Craig J. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol. 2012. https://doi.org/10.1186/1471-2288-12-181 .

O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89(9):1245–51.

Altman DG. Practical statistics for medical research: measuring agreement. London, UK: Chapman and Hall; 1991.

McHugh ML. Interrater reliability: the kappa statistic. Biochem Med. 2012;22(3):276–82.

Bilodeau A, et al. L’Outil diagnostique de l’action en partenariat: fondements, élaboration et validation. Can J Public Health. 2011;102(4):298–302.

Bilodeau A, Kranias G. Self-evaluation tool for action in partnership: translation and cultural adaptation of the original Quebec French tool to Canadian English. Can J Prog Eval. 2019;34(2):192–206.

Loban E, Scott C, Lewis V, Haggerty J. Measuring partnership synergy and functioning: multi-stakeholder collaboration in primary health care. PLoS ONE. 2021;16: e0252299.

Moore de Peralta A, Prieto Rosas V, Smithwick J, Timmons SM, Torres ME. A contribution to measure partnership trust in community-based participatory research and interventions with Latinx communities in the United States. Health Promot Pract. 2021. https://doi.org/10.1177/15248399211004622 .

Dickson E, Magarati M, Boursaw B, Oetzel J, Devia C, Ortiz K, Wallerstein N. Characteristics and practices within research partnerships for health and social equity. Nurs Res. 2020;69(1):51–61.

Brown LD, Chilenski SM, Ramos R, Gallegos N, Feinberg ME. Community prevention coalition context and capacity assessment: comparing the United States and Mexico. Health Educ Behav. 2016;43(2):145–55.

Seeralan T, Haerter M, Koschnitzke C, et al. Patient involvement in developing a patient-targeted feedback intervention after depression screening in primary care within the randomized controlled trial GET.FEEDBACK.GP. Health Expect. 2020;24:95–112.

Haesebaert J, Samson I, Lee-Gosselin H, et al. “They heard our voice!” patient engagement councils in community-based primary care practices: a participatory action research pilot study. Res Involv Engagem. 2020;6:54. https://doi.org/10.1186/s40900-020-00232-3 .

Toledo-Chavarri A, TrinanesPego Y, Reviriego Rodrigo E, et al. Evaluation of patient involvement strategies in health technology assessment in Spain: the viewpoint of HTA researchers. Int J Technol Assess Health. 2020;37(e25):1–6.

Wagemakers MA, Koelen MA, Lezwijn J, Vaandrager L. Coordinated action checklist: a tool for partnerships to facilitate and evaluate community health promotion. Glob Health Promot. 2010;17(3):17–28.

Parker EA, Schulz AJ, Israel BA, Hollis B. Detroit’s East Side Village Health Worker Partnership: community-based lay health advisory intervention in an urban area. Health Educ Behav. 1998;25(1):24–45.

Israel BA, Lichtenstein R, Lantz P, McGranaghan R, Allen A, Guzman JR, Softley D, Maciak B. The Detroit Community-Academic Urban Research Center: development, implementation and evaluation. J Pub Health Manag Pract. 2001;7(5):1–19.

Schulz AJ, Israel BA, Selig SM, Bayer IS, Griffin CB. Development and implementation of principles for community-based research in public health. In: MacNair RH, editor. Research strategies for community practice. Haworth Press: New York; 1998. p. 83–110.

Israel BA, Eng E, Schulz AJ, Parker EA, editors. Methods in community-based participatory research for health. 1st ed. San Francisco: Jossey-Bass; 2005.

Schulz AJ, Israel BA, Lantz P. Instrument for evaluating dimensions of group dynamics within community-based participatory research partnerships. Eval Program Plann. 2003;26:249–62.

Wallerstein N, Bernstein E. Community empowerment, participatory education and health—part 1. Health Educ Q. 1994;21:141–268.

Minkler M, Wallerstein N, editors. Community-based participatory research for health. San Francisco, CA: Jossey-Bass; 2003.

Wallerstein N, Oetzel J, Duran B, Tafoya G, Belone L, Rae R. CBPR: what predicts outcomes? In: Minkler M, Wallerstein N, editors. Community-based participatory research for health: from process to outcomes. San Francisco: Jossey-Bass; 2008. p. 371–92.

Wallerstein N, Duran B. CBPR contributions to intervention research: the intersection of science and practice to improve health equity. Am J Public Health. 2010;100:S40–5.

University of New Mexico Center for Participatory Research. Community based participatory research model. 2020. Available from: https://cpr.unm.edu/research-projects/cbpr-project/cbpr-model.html . Accessed 12 Dec 2021.

Wandersman A, Florin PF, Meier R. Who participates, who does not and why? An analysis of voluntary neighborhood associations in the United States and Israel. Sociol Forum. 1987;2:534–55.

Wandersman A, Goodman R. Community partnerships for alcohol and other drug abuse prevention. Fam Resour Coalit. 1991;10:8–9.

Butterfoss FD, Goodman RM, Wandersman A. Community coalitions for prevention and health promotion: factors predicting satisfaction, participation and planning. Health Educ Q. 1996;23:65–79.

Butterfoss FD, Goodman RM, Wandersman A. Citizen participation and health: toward a psychology of improving health through individual, organizational and community involvement. In: Baum A, Revenson TA, Singer JE, editors. Handbook of health psychology. Mahwah, NJ: Lawrence Erlbaum; 2001. p. 613–26.

Butterfoss FD, Kegler MK. Toward a comprehensive understanding of community coalitions: moving from practice to theory. In: DiClemente RJ, Crosby RA, Kegler MC, editors. Emerging theories in health promotion practice and research: strategies for improving public health. Jossey-Bass: San Francisco; 2002. p. 157–93.

Fawcett SB, Lewis RK, Paine-Andrews A, Francisco VT, Richter KP, Williams EL, Copple B. Evaluating community coalitions for prevention of substance abuse: the case of Project Freedom. Health Educ Behav. 1997;24(6):812–28.

Kegler MC, Steckler A, McLeroy K, Malek SH. Factors that contribute to effective community health promotion coalitions: a study of 10 Project ASSIST coalitions in North Carolina. Health Educ Behav. 1998;25(3):338–53.

Goodman RM, Wandersman A. An ecological assessment of community-based interventions for prevention and health promotion: approaches to measuring community coalitions. Am J Community Psychol. 1996;24(1):33–61.

Lasker RD, The Committee on Medicine and Public Health. Medicine and public health: the power of collaboration. Chicago, Ill: Health Administration Press; 1997.

Lasker RD, Abramson DM, Freedman GR. Pocket guide to cases of medicine and public health collaboration. New York, USA: New York Academy of Medicine; 1998.

Lasker RD, Weiss ES, Miller R. Partnership synergy: a practical framework for studying and strengthening the collaborative advantage. Milbank Q. 2001;79(2):179–205.

Weiss ES, Anderson RM, Lasker RD. Making the most of collaboration: exploring the relationship between partnership synergy and partnership functioning. Health Educ Behav. 2002;29(6):683–98.

Lasker RD, Weiss ES. Broadening participation in community problem solving: a multidisciplinary model to support collaborative practice and research. J Urban Health. 2003;80(1):14–59.

Feinberg ME, Greenberg MT, Osgood WO, Sartorious J. Effects of the communities that care model in Pennsylvania on youth risk and problem behaviours. Prev Sci. 2007;8:261–70.

Gomez BJ, Greenberg MT, Feinberg ME. Sustainability of prevention coalitions. Prev Sci. 2005;6:199–202.

Feinberg ME, Chilenski SM, Greenberg MT, Spoth RI, Redmond C. Community and team member factors that influence the operations phase of local prevention teams: the PROSPER project. Prev Sci. 2007;8:214–26.

Greenberg MT, Feinberg ME, Meyer-Chilenski SE, Spoth RI, Redmond C. Community and team member factors that influence the early phase functioning of community prevention teams: the PROSPER project. J Prim Prevent. 2007;28:485–504.

Feinberg ME, Greenberg MT, Osgood DW. Readiness, functioning, and perceived effectiveness of community prevention coalitions: a study of communities that care. Am J Community Psychol. 2004;33:163–76.

Brown LD, Feinberg ME, Greenberg MT. Determinants of community coalition ability to support evidence-based programs. Prev Sci. 2010;11:287–97.

Lavis JN, Robertson D, Woodside JM, McLeod CB, Abelson J. How can research organizations more effectively transfer research knowledge to decision makers? Milbank Q. 2003;81(2):221–2.

Mitton C, Smith N, Peacock S, Evoy B, Abelson J. Public participation in health care priority setting: a scoping review. Health Policy. 2009;91:219–28.

Abelson J, Gauvin FP. Assessing the impacts of public participation: concepts, evidence and policy implications (Research Report P06). Canadian Policy Research Networks: Ontario; 2006.

Abelson J, Montessanti S, Li K, Gauvin F-P, Martin E. Effective strategies for interactive public engagement in the development of healthcare policies and programs. Canadian Health Services Research Foundation: Ontario; 2010.

Forsythe LP, Frank L, Walker KO, Anise A, Wegener N, Weisman H, et al. Patient and clinician views on comparative effectiveness research and engagement in research. J Compar Effect Res. 2015;4(1):11–25.

Patient-Centered Outcomes Research Institute (PCORI). Patient-Centered Outcomes Research Institute (PCORI) Evaluation Framework 2.0. 2015. http://www.pcori.org/sites/default/files/PCORI-Evaluation-Framework-2.0.pdf . Accessed 12 Dec 2021.

Frank L, Forsythe L, Ellis L, Schrandt S, Sheridan S, Gerson J, et al. Conceptual and practical foundations of patient engagement in research at the patient-centered outcomes research institute. Qual Life Res. 2015;24(5):1033–41.

Jones J, Barry MM. Developing a scale to measure synergy in health promotion partnerships. Glob Health Promot. 2011;18(2):36–44.

Jones B, Barry MM. Developing a scale to measure trust in health promotion partnerships. Health Promot Int. 2011;26(4):484–91.

Jones J, Barry MM. Factors influencing trust and mistrust in health promotion partnerships. Glob Health Promot. 2018;25(2):16–24.

Marsilio M, Fusco F, Gheduzzi E, Guglielmetti C. Co-production performance evaluation in healthcare. A systematic review of methods, tools and metrics. Int J Environ Res Public Health. 2021;18:3336. https://doi.org/10.3390/ijerph18073336 .

Raftery J, Hanney S, Greenhalgh T, Glover M, Young A. Models and applications for measuring the impact of health research: update of a systematic review for the Health Technology Assessment Programme. Health Technol Assess. 2016;20(76):1–282.

Boateng GO, Neilands TB, Frongillo EA, Melgar-Quinonez HR, Young SL. Best practices for developing and validating scales for health, social, and behavioural research. Front Public Health. 2018. https://doi.org/10.3389/fpubh.2018.00149 .

Hoekstra F, Trigo F, Sibley K, Graham ID, Kennefick M, Mrklas KJ, Nguyen T, Vis-Dunbar M, Gainforth HL. Systematic overviews of partnership principles and strategies identified from health research about spinal cord injury and related health conditions: a scoping review. J Spin Cord Med. 2021. https://doi.org/10.1080/10790268.2022.2033578 .

Tetroe JM, Graham ID, Foy R, Robinson N, Eccles MP, Wensing M, Grimshaw JM. Health research funding agencies’ support and promotion of knowledge translation: an international study. Milbank Q. 2008;86(1):125–55.

Gagliardi A, Berta W, Kothari A, Boyko J, Urquhart R. Integrated knowledge translation (iKT) in health care: a scoping review. Implement Sci. 2016;11(38):1–12.

Graham ID, Tetroe JM, Pearson A, editors. Turning knowledge into action: practical guidance on how to do integrated knowledge translation research. Lippincott Williams and Wilkins: Philadelphia, PA; 2014. 196 pp.

Wallerstein N, Oetzel J, Sanchez-Youngman S, et al. Engage for equity: a long-term study of community-based participatory research and community-engaged research practices and outcomes. Health Educ Behav. 2020;47(3):380–90.

Lohr KN, Steinwachs DM. Health services research: an evolving definition of the field. Health Serv Res. 2002;37:15–7.

Article   PubMed Central   Google Scholar  

Banzi R, Moja L, Pistotti V, Facchini A, Liberati A. Conceptual frameworks and empirical approaches used to assess the impact of health research: an overview of reviews. Health Res Policy Syst. 2011;9(26):1–10.

Staley K. “Is it worth doing?” Measuring the impact of patient and public involvement in research. Res Involv Engagem. 2015. https://doi.org/10.1186/s40900-015-0008-5 .

Collins M, Long R, Page A, Popay J, Lobban F. Using the public involvement impact assessment framework to assess the impact of public involvement in a mental health research context: a reflective case study. Health Expect. 2018;21:950–63.

Download references

Acknowledgements

Kind thanks to Christie Hurrell (University of Calgary) for consultative advice regarding the refinement of search term clusters and to Christine Neilson (University of Manitoba) for her assistance with PRESS assessment of the draft partnership term theme, and the draft search strategy. Thanks to Swati Dhingra and Kimberly Andrews at the Faculty of Nursing, University of Calgary for their support at the full text screening calibration stage and to Kevin Paul (Renert School) for his assistance building abstraction tools and assisting with pragmatic criteria validation. Thanks to Kate Aspinall for assistance screening titles and abstracts from the final search update. Much gratitude to Dr Aziz Shaheen, Department of Gastroenterology, Cumming School of Medicine, University of Calgary for providing summer student support for Liam Swain, Kevin Paul and Kate Aspinall. Our sincere thanks to Dr Audrey L’Esperance at the Center of Excellence on Partnership with Patients and the Public (CEPPP) who provided access to the Patient and Public Engagement Evaluation Toolkit assessment grid so it could be modified for our study purposes. Special thanks to colleagues in the IKTR Network and the Multicentre Collaborative Team for insight-generating discussions and iterative feedback.

Funding

This work is supported by a Canadian Institutes for Health Research (CIHR) Foundation Scheme grant (FDN#143237) entitled Moving Knowledge Into Action for More Effective Practice, Programs and Policy: A Research Program Focusing on Integrated Knowledge Translation (Lead: Graham, I.D.) and a CIHR Project grant (FRN#156372) entitled Advancing the Science of Integrated Knowledge Translation with Health Researchers and Knowledge Users: Understanding Current and Developing Recommendations for iKT Practice (Lead: Sibley, K.M.). Both grants contributed funds to support two project research assistants (M.K., C.M.) and were administered through the University of Manitoba. The Department of Gastroenterology, Cumming School of Medicine, University of Calgary provided summer studentship support through Dr Aziz Shaheen for trainees Swain, L., Paul, K. and Aspinall, K. Funding agencies were not involved in any aspect of study design, nor in the collection, analysis or interpretation of the data, the writing of the manuscript or its dissemination.

Author information

Authors and Affiliations

Department of Community Health Sciences, Cumming School of Medicine, University of Calgary, 3D10-3280 Hospital Drive NW, Calgary, AB, T2N 4Z6, Canada

Kelly J. Mrklas, Liam Swain & Michael D. Hill

Strategic Clinical Networks™, Provincial Clinical Excellence, Alberta Health Services, Calgary, AB, Canada

Kelly J. Mrklas

Knowledge Translation Program, St Michael’s Hospital, Li Ka Shing Knowledge Institute, Unity Health Toronto, Toronto, ON, Canada

Jamie M. Boyd

Cumming School of Medicine, University of Calgary, Calgary, AB, Canada

Sumair Shergill

Faculty of Kinesiology, University of Calgary, Calgary, AB, Canada

Sera Merali

Department of Community Health Sciences, University of Manitoba, Winnipeg, MB, Canada

Masood Khan, Cheryl Moser & Kathryn M. Sibley

Faculty of Nursing, University of Calgary, Calgary, AB, Canada

Lorelli Nowell & Shelley Raffin-Bouchal

Faculty of Science, University of Alberta, Edmonton, AB, Canada

Amelia Goertzen

Institute for Medical Information Processing, and Epidemiology-IBE, Ludwig-Maximilians Universität Munich, Munich, Germany

Lisa M. Pfadenhauer

Pettenkofer School of Public Health, Munich, Germany

George & Fay Yee Centre for Healthcare Innovation, University of Manitoba, Winnipeg, MB, Canada

Kathryn M. Sibley

University of British Columbia-Okanagan, Kelowna, BC, Canada

Mathew Vis-Dunbar

Departments of Clinical Neurosciences, Medicine and Radiology, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada

Michael D. Hill

Hotchkiss Brain Institute, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada

Department of Medicine, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada

Marcello Tonelli

Office of the Vice-President (Research), University of Calgary, Calgary, AB, Canada

Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada

Ian D. Graham

Schools of Epidemiology and Public Health and Nursing, University of Ottawa, Ottawa, ON, Canada


Contributions

Conceptualization, study design carried out by K.J.M. with doctoral supervisory committee: M.D.H., S.R.B., C.T. and I.D.G.; formal analysis performed by K.J.M.; funding acquisition carried out by K.J.M., K.M.S. and I.D.G.; investigation performed by K.J.M., J.M.B., S.S., S.M., C.M., M.K., L.N., A.G., L.S., L.M.P., K.M.S. and M.V.D.; methodology detailed by K.J.M., K.M.S., M.V.D., M.D.H., S.R.B., M.T. and I.D.G.; project administration carried out by K.J.M. and I.D.G.; supervision performed by I.D.G., M.D.H., S.R.B. and C.T.; validation performed by K.J.M., S.S., S.M., C.M., M.K. and L.S.; writing—original draft performed by K.J.M.; writing—review, editing and approval of final manuscript performed by K.J.M., J.M.B., S.S., S.M., C.M., M.K., L.N., A.G., L.S., L.M.P., K.M.S., M.V.D., M.D.H., S.R.B., C.T. and I.D.G.; I.D.G. was the guarantor.

Corresponding author

Correspondence to Kelly J. Mrklas.

Ethics declarations

Ethics approval and consent to participate

This study was reviewed and approved by the Conjoint Health Research Ethics Board (CHREB) at the University of Calgary (REB180174).

Consent for publication

Not applicable.

Competing interests

K.J.M., J.M.B., S.S., S.M., M.K., C.M., L.N., A.G., L.S., L.M.P., K.M.S., M.V.D., S.R.B. and C.T. have no competing interests to declare. M.D.H. is the medical director (Stroke) for the Cardiovascular and Stroke Strategic Clinical Network™ at Alberta Health Services. I.D.G. holds the position of scientific director for the Integrated Knowledge Translation Research Network (IKTRN).

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Appendix 1.

Scoping review data map. Appendix 2. Protocol deviations and rationale. Appendix 3. Expanded methods. Appendix 4. Search strategy. Appendix 5. Health research partnership tool evaluation criteria. Appendix 6. Year of publication for included studies. Appendix 7. Partnership characteristics. Appendix 8. Pragmatic health research partnership criteria assessments. Appendix 9. Synthesis of future research questions. Appendix 10. Synthesis of evidence gaps. Appendix 11. Synthesis of recommendations. Appendix 12. Bibliography of included studies. Appendix 13. PRISMA-Scoping Reviews checklist, references.

Additional file 2: Table S1.

Characteristics of included studies. Table S2. Tool characteristics.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Mrklas, K.J., Boyd, J.M., Shergill, S. et al. A scoping review of the globally available tools for assessing health research partnership outcomes and impacts. Health Res Policy Sys 21 , 139 (2023). https://doi.org/10.1186/s12961-023-00958-y


Received : 20 February 2022

Accepted : 03 January 2023

Published : 22 December 2023

DOI : https://doi.org/10.1186/s12961-023-00958-y


Keywords

  • Health research partnerships
  • Evaluation tools
  • Scoping review
  • Integrated knowledge translation
  • Community-based participatory research



  • Research article
  • Open access
  • Published: 18 July 2014

Tools to support evidence-informed public health decision making

  • Jennifer Yost 1 ,
  • Maureen Dobbins 1 ,
  • Robyn Traynor 1 ,
  • Kara DeCorby 2 ,
  • Stephanie Workentine 1 &
  • Lori Greco 1  

BMC Public Health volume  14 , Article number:  728 ( 2014 ) Cite this article

25k Accesses

64 Citations

30 Altmetric

Metrics details

Public health professionals are increasingly expected to engage in evidence-informed decision making to inform practice and policy decisions. Evidence-informed decision making involves the use of research evidence along with expertise, existing public health resources, knowledge about community health issues, the local context and community, and the political climate. The National Collaborating Centre for Methods and Tools has identified a seven step process for evidence-informed decision making. Tools have been developed to support public health professionals as they work through each of these steps. This paper provides an overview of tools used in three Canadian public health departments involved in a study to develop capacity for evidence-informed decision making.

As part of a knowledge translation and exchange intervention, a Knowledge Broker worked with public health professionals to identify and apply tools for use with each of the steps of evidence-informed decision making. The Knowledge Broker maintained a reflective journal and interviews were conducted with a purposive sample of decision makers and public health professionals. This paper presents qualitative analysis of the perceived usefulness and usability of the tools.

Tools were used in the health departments to assist in: question identification and clarification; searching for the best available research evidence; assessing the research evidence for quality through critical appraisal; deciphering the ‘actionable message(s)’ from the research evidence; tailoring messages to the local context to ensure their relevance and suitability; deciding whether and planning how to implement research evidence in the local context; and evaluating the effectiveness of implementation efforts. Decision makers provided descriptions of how the tools were used within the health departments and made suggestions for improvement. Overall, the tools were perceived as valuable for advancing and sustaining evidence-informed decision making.

Tools are available to support the process of evidence-informed decision making among public health professionals. The usability and usefulness of these tools for advancing and sustaining evidence-informed decision making are discussed, including recommendations for the tools’ application in other public health settings beyond this study. Knowledge and awareness of these tools may assist other health professionals in their efforts to implement evidence-informed practice.

Peer Review reports

Systematically incorporating research evidence in program planning and policy decision making supports the provision of high-quality, effective, and efficient health services. This further ensures a more responsible use of the financial and human resource investments that are made in healthcare and in public health [ 1 – 3 ]. As such, public health professionals are increasingly expected to engage in evidence-informed decision making (EIDM). EIDM involves using research evidence with public health expertise, resources, and knowledge about community health issues, local context, and political climate to make policy and programming decisions [ 4 ].

Efforts are growing to promote EIDM within the public health sector in Canada [ 5 – 12 ]. To support such efforts, the National Collaborating Centre for Methods and Tools (NCCMT) has developed a seven step process to guide public health professionals through EIDM. This process includes: 1) defining the question, problem or issue; 2) searching for the best available research evidence; 3) assessing the quality of the evidence; 4) deciphering the ‘actionable message(s)’ from the evidence; 5) tailoring messages to the local context to ensure their relevance and suitability; 6) deciding whether and planning how to implement the evidence in the local context; and 7) evaluating the effectiveness of implementation efforts [ 13 ].
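For teams that track reviews in spreadsheets or scripts, the seven steps can also be represented as a simple ordered checklist, as in the sketch below. This is purely illustrative and not an NCCMT artifact; the step wording follows the list above.

```python
# Illustrative only: the seven NCCMT EIDM steps as an ordered checklist that a
# team could use to track progress on a given review; not an NCCMT artifact.
EIDM_STEPS = [
    "Define the question, problem or issue",
    "Search for the best available research evidence",
    "Assess the quality of the evidence",
    "Decipher the actionable message(s) from the evidence",
    "Tailor messages to the local context",
    "Decide whether and plan how to implement the evidence",
    "Evaluate the effectiveness of implementation efforts",
]

def next_step(completed: set[int]) -> str:
    """Return the first step (1-7) not yet marked complete."""
    for i, step in enumerate(EIDM_STEPS, start=1):
        if i not in completed:
            return f"Step {i}: {step}"
    return "All seven steps complete"

print(next_step({1, 2}))  # -> "Step 3: Assess the quality of the evidence"
```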

However, barriers to supporting, advancing, and sustaining EIDM exist at both individual and organizational levels [ 10 , 14 , 15 ]. The social, political, and historical context of public health practice and decision-making can also hinder the optimal use of evidence [ 10 , 16 ]. For example, the literature suggests that without an organized and methodical process for applying research evidence to decision making, the evidence can be selectively used to justify a decision that has already been made for other tactical or political reasons [ 16 – 18 ]. At an organizational level, barriers include a general resistance to change, limited access to evidence, unsupportive communication and organizational structures, heavy workloads, and frequent public health crises (e.g. outbreaks, environmental disasters) that require urgent attention [ 10 , 16 ]. Limited knowledge and skills to access, interpret, evaluate, and synthesize research evidence are additional barriers to EIDM at an individual level [ 3 , 17 ].

Conversely, EIDM can be facilitated by supportive infrastructure and organizational roles. Organization-level facilitators include strong leadership, a vision and commitment to EIDM, a receptive workforce culture, and dedicated time and financial resources to support EIDM [ 9 – 11 , 19 , 20 ]. The development of specific positions, such as Knowledge Brokers (KBs) or contracts with external KBs who are responsible for building capacity and supporting the use of evidence among public health professionals, helps establish an organizational climate that is supportive of research use [ 20 – 22 ]. EIDM is further advanced by improving access to research and library services, supporting the use of knowledge management tools that actively share relevant research evidence with users, and involving organizations in research activities that support collaboration between researchers and decision makers [ 1 , 9 , 10 , 14 , 15 , 17 , 20 , 23 ]. Individual-level facilitators include training and continuing education in EIDM and its associated knowledge and skill set [ 9 , 20 ].

Tools (guidelines, templates, checklists, assessment criteria, etc.) have been developed by various organizations for specific audiences, including public health, to support EIDM [ 13 , 24 , 25 ]. The use of such tools can help build health professionals’ skills and can assist them in appraising, synthesizing and applying research findings [ 1 , 9 , 24 ]. Previous studies have shown that a KB can play a key role in providing assistance in identifying, revising or creating applicable tools to further support engagement in EIDM at individual and organization levels [ 26 ].

The purpose of this paper is to report on the tools used by three Canadian public health departments in a study assessing the effectiveness of a KB-delivered knowledge translation and exchange (KTE) intervention. We describe the tools used to support steps in the EIDM process, evaluate their usability through qualitative analysis, and recommend their application beyond this study to the broader field of public health.

Methods

Study design

We partnered with three Ontario public health departments on a Canadian Institutes of Health Research (CIHR) ‘Partnerships for Health System Improvement’ (PHSI) grant (FRN 101867) to evaluate the effectiveness of KTE interventions to enhance capacity for and facilitate organizational contexts conducive to EIDM. This study received ethics approval from the McMaster University Research Ethics Board and the ethics boards of each participating health department. Using a case study design, we tailored a 22-month KTE intervention to the unique needs of each health department (Case A, Case B, Case C). The main strategy or component of each tailored intervention included a KB (authors LG and KD, with assistance from RT) working through the steps of EIDM [ 13 ] with selected public health professionals, including specialists (e.g. epidemiologists, consultants, Research and Policy Analysts (RPAs), dieticians, and nutritionists), management, and front line staff (e.g. Public Health Nurses, Health Promotion Officers, Public Health Inspectors, and dental professionals). Table  1 provides a description of the tailored intervention and outcomes for each Case. A more in depth discussion of the KTE intervention implemented at each of the three health departments has been submitted for publication and additional results are also expected to be published.

Data collection

Quantitative and qualitative data were collected to determine the impact of the intervention on knowledge, capacity, and behaviour for EIDM and the contextual factors that facilitated or impeded impact within each health department. Here we discuss the data collection strategies relevant to the qualitative analysis presented in this paper. This discussion adheres to the RATS guidelines for reporting qualitative studies [ 27 ]. The KBs delivering the intervention maintained a reflective journal to track meetings, observations, and reflections of their experiences in each of the health departments. Organizational documents were also collected. These included: strategic plans, internal communications related to EIDM (meeting minutes), policies and procedures related to the sharing and integration of EIDM, existing tools to facilitate the implementation of EIDM, and existing write-ups of literature reviews.

A purposive sample of senior management and public health professionals involved in the intervention were identified by the KB and a health department liaison to the research team. One member of the research team (RT) invited these staff, via email, to participate in a telephone interview. All staff who agreed to participate provided informed consent. One member of the research team (RT) conducted each telephone interview, lasting approximately 20–40 minutes, at baseline and follow-up using a semi-structured interview guide. At baseline, management and key contacts involved in supporting the research study were interviewed to understand the current organizational environment. At follow-up, staff that had been intensively involved in the intervention and additional management were interviewed. Participants were asked to reflect on the intervention and EIDM process at their respective health department and identify what they thought went well, including what resources or supports were helpful, and if they thought their colleagues were aware that these resources were available to them. They were also asked if any supports were “missing” or if they had suggestions for how the process could be improved. Data collection, via interviews, was considered complete when all identified staff had either declined to participate or were interviewed.

Data analysis

All data collected throughout the intervention (baseline to follow-up) were analysed for this paper in order to understand the change in organizational use of tools. Interviews were recorded and transcribed verbatim, with light editing to remove fillers (“ums”, “ahs”), ambient sounds, non-verbal communication, and all identifying information. NVivo 9 was used for data management and coding. Two authors (RT, KD) independently coded several interview transcripts and journal entries based on an initial coding structure derived from the McKinsey 7-S Model [ 28 – 30 ], a framework used to help guide the study design. The authors compared their coding and further refined the structure as themes emerged using a constant comparative process [ 31 ]. One author (RT) applied the refined coding structure to analyze remaining data from all sources. Regular meetings were held with research team members involved in qualitative analysis (RT, KD, MD, and an additional co-investigator) to discuss any issues and proposed revisions to the coding scheme. The team discussed and came to consensus on any new themes. Organizational documents were reviewed and data relevant to the types of tools and how these tools were used within the health departments was extracted. The data was then reviewed by members of the research team and the KBs and presented back to key contacts at the health departments to confirm accuracy.

Results

Tools for EIDM

A variety of tools to support the steps of EIDM were used within the three health departments. New tools were created and existing tools were adapted to meet the health departments’ needs; several tools were formally adopted into health department policies and procedures. A number of tools were developed in Case A as part of an Executive Training for Research Application (EXTRA) Fellowship project of one senior manager [ 18 , 32 ]. Here we describe the tools that were created, adapted, used, and adopted at each of the health departments, organized by step of the EIDM process. Additional file 1 provides a succinct description of the tools and how they were used in the health departments, and identifies the developer of each tool and the format in which it is available. Table 2 provides an abridged version of Additional file 1.

Tools for defining the question/problem/issue

The Developing An Efficient Search Strategy tool was developed by Health Evidence – an organization that facilitates EIDM among public health professionals in Canada – to turn practice-based issues into answerable, searchable questions [ 33 ]. This tool provides users with a framework for articulating different types of questions. It includes an explanation and public health-related example of how to identify important components related to an issue, including population, intervention, comparison, and outcomes for quantitative questions and population and setting for qualitative questions [ 34 ]. All three health departments used this tool. Cases A and C adapted and adopted it within their formal procedure for conducting “rapid evidence reviews”, defined as a more accelerated or streamlined version of traditional systematic reviews [ 35 , 36 ]. Case A also decided to develop a conceptual model of the practice-based issue before embarking on a rapid evidence review. Supported through an EXTRA Fellowship, a senior manager in this health department created the Developing a Conceptual Model tool [ 32 , 37 ]. This tool identifies five basic steps to guide users through the process of developing a model, with examples of public health-related issues, and has undergone modifications based on user feedback [ 18 ].
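As an illustration of how the question components described above fit together, the sketch below captures them in a small data structure and assembles a searchable question. It is a hypothetical representation for readers who script their review workflows, not the Health Evidence tool itself; all names and the example issue are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SearchableQuestion:
    """Minimal, illustrative container for the question elements described above.
    Field names follow the PICO / population-and-setting components; this is not
    the official Health Evidence tool."""
    population: str
    intervention: Optional[str] = None   # quantitative questions
    comparison: Optional[str] = None     # quantitative questions
    outcome: Optional[str] = None        # quantitative questions
    setting: Optional[str] = None        # qualitative questions

    def as_question(self) -> str:
        # Frame a quantitative question when an intervention is given,
        # otherwise frame a qualitative population-and-setting question.
        if self.intervention:
            return (f"Among {self.population}, is {self.intervention} "
                    f"more effective than {self.comparison or 'usual practice'} "
                    f"for {self.outcome}?")
        return f"What are the experiences of {self.population} in {self.setting}?"

# Hypothetical practice-based issue framed as a searchable question
q = SearchableQuestion(
    population="school-aged children",
    intervention="school-based physical activity programs",
    comparison="no structured program",
    outcome="reducing childhood obesity",
)
print(q.as_question())
```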

Tools for searching for the best available research evidence

The 6S Pyramid was developed by Haynes et al. [ 38 ] to help users efficiently and effectively find the best available research evidence relevant to their defined question. The tool guides searches through six levels of resources, beginning with the most synthesized evidence and ending with single studies [ 38 , 39 ]. A related tool, the Resources to Guide & Track your Search, was created by Health Evidence to enable easy access to public health relevant databases and track search results [ 40 , 41 ]. For each level of the 6S Pyramid , the Resources to Guide & Track your Search tool provides the names and links to searchable databases for public health evidence. The tool indicates whether the databases are publicly available and whether the evidence retrieved from these databases has been quality appraised. Cases A and B used this tool and Case C adopted it in their formal procedure for conducting rapid evidence reviews. Health Evidence created a third tool, Keeping Track of Search Results: A Flowchart, as a template for documenting search results [ 42 ]. This tool enables users to clearly track the total number of articles identified from different sources and focus in on the final number of relevant articles from a search. The completed tool can be appended to the final version of a report of a rapid evidence review to increase the transparency of the process. Cases A and C adopted this tool into their formal procedures for conducting rapid evidence reviews, although some modifications have occurred to address user feedback.
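A minimal sketch of how a search guided by the 6S Pyramid and tracked per source might be scripted is shown below. The level names follow the pyramid's ordering from most to least synthesized evidence, and search_level() is a placeholder for whatever database interface a team actually uses; nothing here is part of the Health Evidence tools themselves.

```python
# Illustrative sketch only: search_level() stands in for a real database query,
# and the tallying mirrors the idea of tracking results per source.
SIX_S_LEVELS = [
    "systems",
    "summaries",
    "synopses of syntheses",
    "syntheses",
    "synopses of single studies",
    "single studies",
]

def search_level(level: str, question: str) -> list[str]:
    """Placeholder: return citations found for `question` at this level."""
    return []  # a real implementation would query the databases listed in the tracking tool

def run_search(question: str, stop_when_found: bool = True) -> dict[str, int]:
    """Search from the most synthesized level downward, tallying hits per level."""
    tally: dict[str, int] = {}
    for level in SIX_S_LEVELS:
        hits = search_level(level, question)
        tally[level] = len(hits)
        if hits and stop_when_found:
            break  # highest-level evidence found; stop before less synthesized levels
    return tally

print(run_search("school-based physical activity programs for childhood obesity"))
```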

Tools for assessing the research evidence for quality through critical appraisal

The health departments used a variety of critical appraisal tools to assess the methodological quality of various types of research evidence. The Appraisal of Guidelines for Research and Evaluation (AGREE) II Instrument, an internationally accepted and tested tool, was used by all health departments to assess the methodological rigor of practice guidelines [ 43 – 45 ]. The AGREE II Instrument contains 23 items within six quality domains. Its internal consistency ranges between 0.64 and 0.89 and its inter-rater reliability has been reported as satisfactory. The instrument’s items have been validated by stakeholder groups [ 43 – 45 ]. The AGREE II Instrument concludes by assigning an overall quality rating and a recommendation for using (or not using) the guideline [ 43 ].

Two tools were used to assess the quality of systematic reviews: Health Evidence’s Quality Assessment Tool [ 46 ] and A Measurement Tool to Assess Systematic Reviews (AMSTAR) [ 47 , 48 ]. Health Evidence’s Quality Assessment Tool assigns an overall quality rating based on 10 items. The tool is accompanied by a dictionary that provides definitions of terms and instructions for assessing each criterion [ 46 ]. AMSTAR was initially developed for syntheses of randomized controlled trials (RCTs), but the 11-criterion tool has since been applied to syntheses that include non-RCTs [ 47 , 48 ]. The tool has demonstrated construct validity and satisfactory inter-observer agreement, with reliability of the total score documented as excellent [ 48 ]. The group is now developing a version to assess the quality of syntheses that include observational studies [ 49 ]. Available in Japanese, French and Spanish, the AMSTAR tool has received an endorsement from the Canadian Agency for Drugs and Technologies in Health and has been cited approximately 200 times over the past three years [ 50 ].

The Critical Appraisal Skills Programme (CASP) [ 51 ] and Scottish Intercollegiate Guidelines Network (SIGN) [ 52 ] have also developed tools for the critical appraisal of syntheses, as well as for several single study designs. Users appraise evidence using the CASP tools by asking: 1) “Is the study valid?”; 2) “What are the results?”; and 3) “Are the results applicable to my needs?” [ 51 ]. Since the core checklists (syntheses and randomized controlled trials) were developed and piloted, the suite of CASP tools has been expanded and evaluated for suitability [ 53 ] and usefulness [ 54 ]. The validity of the CASP tool for qualitative studies has also been evaluated [ 55 ]. The SIGN tool provides an overall quality rating based on internal validity criteria [ 52 ].

In addition to these tools, all three health departments used the Effective Public Health Practice Project’s (EPHPP) Quality Assessment Tool for Quantitative Studies to appraise single studies. The EPHPP tool provides an overall quality rating based on six individual quality domains [ 56 ]. Finally, the health departments used the Critical Review Form - Qualitative Studies (Version 2.0) to assess the methodological quality of qualitative studies based on the rigor of eight components [ 57 ]. This tool, and its accompanying guidelines for users, has demonstrated an agreement of 75% to 86% between two researchers [ 58 ].
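Several of these checklists work by tallying how many criteria a study meets and converting that count into an overall rating; Health Evidence's tool, for instance, assigns a numerical score across 10 items. The sketch below is a minimal, hypothetical illustration of that general pattern. The strong/moderate/weak cut-points are assumptions made for the example and are not the published thresholds of any of the tools named above.

```python
def overall_quality(item_ratings: list[bool]) -> tuple[int, str]:
    """Illustrative scoring sketch for a 10-item critical appraisal checklist.
    The strong/moderate/weak cut-points below are assumed for the example and
    are not the published thresholds of any specific tool."""
    if len(item_ratings) != 10:
        raise ValueError("expected ratings for 10 criteria")
    score = sum(item_ratings)           # one point per criterion met
    if score >= 8:
        rating = "strong"
    elif score >= 5:
        rating = "moderate"
    else:
        rating = "weak"
    return score, rating

# Hypothetical appraisal of a systematic review where 8 of 10 criteria were met
print(overall_quality([True] * 8 + [False] * 2))   # -> (8, 'strong')
```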

Tools for interpreting evidence and forming recommendations for practice

Case A developed the Data Extraction for Systematic Reviews tool to guide rapid evidence reviews as part of an EXTRA fellowship. This tool has been modified following user feedback [ 18 , 59 ]. Users apply this table template to organize and synthesize research evidence, specifically by extracting actionable messages and recommendations from retrieved articles [ 59 , 60 ]. Case C adapted the tool and formally adopted it into their organizational documents.
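A data extraction template of this kind is essentially a fixed set of columns filled in once per included article. The sketch below writes such a table to a CSV file; the column names and example record are assumptions chosen to reflect the kinds of information described above (actionable messages and recommendations), not the actual EXTRA fellowship template.

```python
import csv

# Illustrative extraction template: these column names are assumptions, not the
# actual Data Extraction for Systematic Reviews tool.
FIELDS = ["citation", "study_design", "quality_rating",
          "key_findings", "actionable_messages", "recommendations"]

records = [{
    "citation": "Author A, et al. (2013)",          # hypothetical included article
    "study_design": "systematic review",
    "quality_rating": "strong",
    "key_findings": "Multi-component school programs increased activity levels.",
    "actionable_messages": "Combine curriculum change with family outreach.",
    "recommendations": "Pilot a multi-component program in two schools.",
}]

with open("extraction_table.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(records)
```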

Tools for deciding whether to use the evidence in the local context

NCCMT’s Applicability and Transferability of Evidence Tool (A&T Tool) identifies several areas (feasibility, generalizability) to consider when determining if the evidence is relevant for the local setting and circumstances [ 61 – 64 ]. The tool has reported content validity and can be applied when either starting or eliminating programs and interventions [ 63 – 65 ]. Cases A and C adapted the A&T Tool and included it in their organizational documents for conducting rapid evidence reviews. Case A created the Rapid Review Report Structure tool as part of the EXTRA fellowship, and has continually modified the tool based on user feedback. The tool’s purpose is to guide the write-up of rapid evidence review results, the outlining of recommendations, and the identification and assignment of responsibilities for next actions. The Rapid Review Report Structure tool builds on the Canadian Foundation for Healthcare Improvement’s standard report format [ 18 , 66 , 67 ]. The tool includes one page of key messages, a 2-page executive summary, and a full report of no more than 20 pages. Case C subsequently adapted the tool and formally adopted it.
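To illustrate how an applicability and transferability assessment can be operationalized, the sketch below groups a few example criteria under the broad areas mentioned above (feasibility and generalizability) and rolls yes/no judgments into a summary. The specific criteria and the all-criteria-met decision rule are assumptions for illustration, not the official A&T Tool items.

```python
# Illustrative only: the criteria below are examples consistent with the broad
# areas the A&T Tool covers; they are not the official item wording.
AT_CRITERIA = {
    "applicability (feasibility)": [
        "political acceptability in the local setting",
        "availability of resources and organizational expertise",
    ],
    "transferability (generalizability)": [
        "magnitude of the health issue in the local population",
        "similarity of the study and local target populations",
    ],
}

def assess_applicability(answers: dict[str, bool]) -> str:
    """Summarize yes/no judgments per criterion into a simple recommendation.
    The decision rule (all criteria met) is an assumption for illustration."""
    unmet = [criterion for criterion, met in answers.items() if not met]
    if not unmet:
        return "Evidence appears applicable and transferable to the local context."
    return "Review before implementing; unmet criteria: " + "; ".join(unmet)

# Hypothetical assessment in which one feasibility criterion is not met
example = {c: True for area in AT_CRITERIA.values() for c in area}
example["political acceptability in the local setting"] = False
print(assess_applicability(example))
```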

Tools for deciding and planning how to implement the message in the local context

The Knowledge Translation Planning Guide provides direction on how to plan, implement, and evaluate plans for knowledge translation (KT) [ 68 , 69 ]. Case C adopted this tool into their formal organizational documents to guide the EIDM process. The tool and its accompanying guidebook provide information on integrating KT into specific research projects, a summary of key factors for assessing a KT plan, examples of hypothetical KT plans, and a checklist for reviewing KT plans [ 70 ].
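As a rough illustration of what such a plan looks like when captured in a structured form, the sketch below records common KT-planning elements and checks that each has been filled in. The field names and example values are assumptions reflecting elements typically covered in KT planning, not the exact headings or checklist of the Knowledge Translation Planning Guide.

```python
# A minimal sketch of a KT plan record. Field names are assumptions reflecting
# common KT-planning elements; they are not the guide's exact headings.
kt_plan = {
    "target_audience": "public health nurses delivering school programs",
    "key_message": "Multi-component programs outperform single-strategy ones.",
    "messenger": "program manager and knowledge broker",
    "strategies": ["staff workshop", "one-page key-messages brief"],
    "resources_required": ["meeting time", "printing budget"],
    "evaluation": {
        "process_indicator": "proportion of staff attending the workshop",
        "outcome_indicator": "number of programs redesigned within 12 months",
    },
}

def plan_is_complete(plan: dict) -> bool:
    """Simple completeness check: every element of the plan has been filled in."""
    return all(bool(value) for value in plan.values())

print(plan_is_complete(kt_plan))
```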

Tools for evaluating the effectiveness of implementation efforts

The final step in the EIDM process involves evaluating the effectiveness of implementing the evidence-informed practice, program, or policy decision. As mentioned above, the Knowledge Translation Planning Tool incorporates evaluation of whether the intervention achieved the anticipated results (program evaluation) and whether the implementation strategies were delivered as intended (process outcomes) [ 71 ]. In the “KT Impact & Evaluation” component, the Knowledge Translation Planning Tool asks users to identify important aspects of evaluation such as the expected result of the intervention, the indicators of practice change, and a measure of usefulness. Case A developed a tool during the EXTRA fellowship to evaluate whether the original goals of the rapid evidence review were met [ 18 ]. The Manager’s Checklist [ 72 ], which continues to be modified, outlines key elements of the EIDM process with space to record comments on each element. The tool can be used to assess the impact of the rapid evidence review on decisions and serve as a quick reference for future reviews.

Experiences in using tools for EIDM

A total of 37 interviews were conducted throughout the intervention. Participants who agreed to be interviewed represented project/team staff and specialists (n = 14), managers/support staff (n = 16), and senior management (n = 7), with varied backgrounds (undergraduate, graduate, and M.D. degrees), length of time in public health (from 3 to 30 years), and number of rapid evidence reviews (from 0 to 3) in which they were personally involved. Over 170 journal entries from the KBs’ reflective journals were also analyzed. Themes that emerged in relation to the tools used in this study included easing the process of EIDM, accessibility, and a role in increasing users’ confidence. Speculation about future use of the tools, ideas for new tools, and suggestions to improve existing tools were also discussed. Over 160 organizational documents were collected from the key contacts in each health department to confirm and augment data collected through interviews and journal entries, including the use, adaptation, and adoption of specific tools.

Easing the process of EIDM

Participants interviewed generally agreed that the tools facilitated engagement in the EIDM process by increasing efficiency, providing a concrete process to follow, providing guidance on searching for research evidence, and documenting their work. They thought the tools provided structure to the EIDM process and kept them “on track.” The tools’ accompanying instructions systematically outlined what needed to be considered, ultimately allowing for improved efficiency. In her journal entries, the KB reflected on how tools such as Health Evidence’s Quality Assessment Tool could be used for training purposes. Using this tool, the KB led participants through examples of good and poor quality systematic reviews to gain experience in critical appraisal.

The theme of easing the process of EIDM was evident in health departments where there had not been a concrete process in place prior to the study. The tools helped to define a process that public health professionals could follow, which appeared to further facilitate engagement in that process.

“… I think the process itself, that was laid out for us, was good in terms of… just having outlines, the databases, and the searches that we should go to, kind of the pyramid approach, the systematic, where we kind of focus, in there. I think that was really good and helpful. So the tools themselves were good. … we’ve never had anything kind of laid out before so I think that, in itself, was great.” – Specialist

“… I think there are a number of [consultants] for whom this is very exciting and they feel like, ‘Finally! I’m getting the tools that I need to do the work that I think is the work that I’m supposed to be doing!’” – Manager

In her journal, the KB emphasized the importance of using tools and templates to document participants’ work and keep track of their progress while maintaining transparency and repeatability in their efforts. She reflected on how this concept of documentation was new for most participants in all three health departments so participants appreciated having templates such as the Data Extraction for Systematic Reviews and Rapid Review Report Structure to work from and tailor to their own needs. Participants interviewed also identified specific tools, such as the Resources to Guide & Track Your Search, as being critical for supporting their work.

“ I love the - and I keep saying this - I love the tool, ‘The Resources to Guide & Track Your Search’. That’s my favourite! … I have a favourite. So when I oriented the other staff, it was like, ‘if you hear nothing else, remember this tool!’” - Specialist

Accessibility

Participants interviewed reported that the tools were easily accessible; following their involvement in an evidence review, they knew exactly where to go to access the tools for future work. Several tools have been compiled and made formally available to staff at two of the health departments. One health department posted them on their library website and the other included them in a draft organizational guidebook for conducting evidence reviews. This draft was introduced to all staff who attended workshops on EIDM with the health department during the study. Participants interviewed also reflected on the value of being able to access and easily download many of the tools online, free of charge. Furthermore, they noted that ease of using and navigating the tools was important for maximizing their use.

“I really liked the tools through Health Evidence … being able to go into the websites to search for literature there. It seemed fairly timely. It was quite easy, actually.” – Specialist

Increasing confidence

Participants interviewed also discussed having confidence in the tools. Using tools in which they had confidence, in turn, promoted their trust in the results and recommendations of the rapid evidence reviews.

“…it's just fantastic that access and some structure and some templates and processes are available to do it in a systematic way. … to have the confidence that these tools and templates and processes have been developed and checked out, our confidence in what we’re finding when we go through them is high…” – Manager

Having the tools to support the EIDM process also increased the self-confidence staff had in their roles related to EIDM within the health department. The use of the tools at different steps in the EIDM process or rapid evidence review helped staff identify the expectations of their own role and the roles of others in the health department.

“ …the process, I have to say, was well out-lined. We had sort of a package that was given to us and I looked at it a lot … there was a Managers’ Checklist that I was … tasked at doing so there were those pieces that helped to keep you on track when this is new, helped to see where you’re going and just getting confident in your role as this manager/supervisor.” - Specialist

Future use of the tools

Participants identified the tools as being relevant and timely to both their current and anticipated work. They expected to apply the tools to their work and projects occurring beyond the study.

“I think the tools, the tools that your team brought, were really helpful, too. Those tools are something that we can always take back and use and apply to other projects and other work we do. So the tools I found to be quite excellent.” - Specialist

Through the interviews it also became evident that those involved in the study thought it was important to share the tools with staff from the health department who were not involved in the KTE intervention. Participants discussed that sharing the tools could promote their continued use by both themselves and their peers. The KB further reflected that the participants she worked with were now recommending tools and templates to their colleagues.

The tools can be improved

Areas for improving the tools were also identified. Some tools were cited as having an excessive amount of detail, while others could be enhanced by providing further description and instructions for use. The KB reflected on conflicting responses to Health Evidence’s Quality Assessment Tool, where some participants found the latter three quality criteria difficult to interpret. She noted that through discussion and support from the KB, participants were eventually able to understand and become comfortable using the tool and its associated dictionary. On the other hand, the KB reflected that participants preferred this tool over other critical appraisal tools because the dictionary was “immensely helpful” and the Quality Assessment Tool assigns a numerical score, providing a clearer conclusion on study quality. The Resources to Guide and Track your Search was another tool identified as being difficult to use:

“[A team member] stated that she did not like the Resources to Guide and Track your Search tool. She found that many of the links (not publically available) did not work and the process for using it took too long.” - KB reflective journal entry

Several formatting improvements were also identified. Participants suggested changing the layout of the Data Extraction for Systematic Reviews tool from a vertical to horizontal format (which was completed during the PHSI study) to make the tool more user-friendly for extracting and organizing data from rapid evidence reviews. Participants also suggested that consistently using a Word format for the tools would improve the ability to complete the tool within the document itself, versus as a PowerPoint slide or PDF format.

A final area of improvement related to the consistency in how the tools were used in the rapid evidence review process by the team members and library services. The KB reflected on this challenge of bringing all staff to the same understanding of the value of pre-processed data and where to begin a search. One participant suggested that the tools need to be “ adopted in a practical way” so that when the library assists a team in searching for evidence, the team can be confident that the search aligns with the principles of the 6S Pyramid .

Ideas for new tools

Finally, participants identified topic areas for tool development. Some managers reflected on whether there could be a tool specifically for knowledge transfer and change management that goes beyond the current scope of the A&T Tool. Suggestions were made that this tool could include a template for disseminating the messages resulting from a rapid evidence review to decision makers and stakeholders in a timely and meaningful way. The Managers’ Checklist was also cited as a very useful tool, both for completing the process and writing the final report. In addition, one specialist suggested that it may be helpful to have a more specific Specialists’ Checklist.

Discussion

KTE interventions are being designed and implemented to address expectations for the use of research evidence in public health decision making and to overcome the individual, organizational, and contextual barriers to supporting, advancing, and sustaining EIDM [ 73 ]. In this case study, the availability of tools, and the role of a KB in providing the tools and mentoring staff through their use, were identified as critical for facilitating staff learning and supporting the steps of EIDM.

A search of the literature on tools for EIDM revealed limited published evaluations of these types of tools. Electronic information services providing access to tools for EIDM in public health have been formally evaluated [ 74 ] and some instruments, such as the AGREE II [ 43 – 45 ], AMSTAR [ 48 , 75 ], CASP series [ 53 – 55 ], Critical Review Form Qualitative Studies version 2.0 [ 57 , 58 ], and the A&T tool [ 65 ], have been evaluated, validated, and/or shown to be reliable. However, evaluation and usability data across the spectrum of tools discussed here remain inconsistent. The qualitative results from our study on the usability and usefulness of these tools attempt to address this gap.

Usability and usefulness

Although usability and usefulness have varied and inter-related definitions [ 76 – 78 ], the tools were viewed favourably on both concepts. Public health professionals in our study found that the tools for EIDM were useful in providing a clear, concrete process to follow, thereby increasing their efficiency in learning new concepts related to EIDM. Participants were able to engage more successfully with their work because the tools guided them closely through the process of EIDM, systematically outlining the inputs and outputs required for each step and maintaining their focus on the task or step at hand. The usefulness of the tools, as defined by Seffah, was illustrated in their “practical utility” to enable “users to solve real problems in an acceptable way” [ 78 ].

Engagement in the EIDM process was further facilitated by the accessibility of the tools, meaning tools were easy to locate (online or in the health department’s guidebook or intranet), quick and free to download or acquire, and easy to navigate and understand. Similar findings have been reported with respect to the relationship between accessibility and usability of interactive software systems [ 78 ] and information products [ 79 ]. Simply stated, usability is rated low when a product is difficult to access [ 79 ]. Usability criteria cited as being necessary for the accessibility of software applications also apply to our assessment of tools, including: flexibility or the ability to tailor the product to the user; a minimal number of steps to access and use; the provision of a dictionary or user guide for access and use; self-descriptiveness, in that the purpose of the product is clearly conveyed; efficient navigability; and finally, simplicity of the product and its means of access [ 78 ]. Our analysis indicated that even with a dictionary, some of the tools may still be confusing for users who, in our study, required the assistance of a KB to guide them through using the tool. If these tools are meant to be stand-alone products, the developers need to ensure they are user-friendly and self-explanatory.

The final theme to emerge from our examination of the usability and usefulness of EIDM tools was the tools’ role in increasing confidence; having confidence in the tools promoted staff trust in the products that resulted from using the tools. The utility of the tools was further evident in increasing staff understanding of the expectations of their roles with respect to EIDM and improving their self-confidence within those roles. Others have reported confidence in tools and templates [ 41 ], with users deeming them to be reliable and/or credible for supporting aspects of work associated with EIDM [ 77 , 79 ].

Overcoming barriers, leveraging facilitators

The demonstrated usability and usefulness of the tools appeared to reduce barriers to engaging in EIDM previously identified by public health professionals. An often-cited individual barrier to EIDM is time [ 1 , 9 , 16 , 24 ]. Time was also identified in our study as a significant barrier for participants, but the tools were recognized as one means to overcome this. The tools were quickly and easily accessible and outlined a clear process to follow, reducing ambiguity concerning the requirements for each step and eliminating the need for organizations to create internal processes from scratch. For example, Robeson et al. [ 41 ] illustrated how the 6S Pyramid could reduce the amount of time a public health practitioner spends searching for evidence by encouraging them to begin at the highest, or most synthesized, level of evidence, where the volume of relevant evidence is smaller and therefore more manageable, and already synthesized (and often appraised for quality). The Resources to Guide and Track your Search tool further improves efficiency, providing users with a direct link to several electronic databases. While extra time may be required to initially learn their appropriate use, these tools ultimately improve staff efficiency, reducing the demand on staff time.

A second individual-level barrier is limited capacity among public health staff to appraise, synthesize, and apply research findings in practice [ 16 , 80 ]. Use of the tools in our study facilitated individuals’ ability to systematically engage in the EIDM process and effectively learn the skills required for each step. Upon reflection, study participants commented on how they were better equipped for future engagement in EIDM and in sharing the tools and their learnings to improve their colleagues’ capacity for EIDM.

Organizational-level barriers to EIDM, as cited in the literature and observed in our own work, include unclear organizational values and expectations for EIDM and inadequate resources or infrastructure to access evidence [ 16 , 20 ]. To address the latter barrier, the health departments in our study either incorporated the tools into organizational documents and library intranet sites, or encouraged staff to access them from freely available and easily accessible online sources. Integrating the tools into organizational processes and widely promoting them among staff in turn helped solidify the value of EIDM and clarify organizational expectations. The tools used in this study were therefore central to the development of infrastructure and organizational capacity to support and encourage EIDM.

Organizational strategy and context

This study indicates that using the tools can assist in developing infrastructure within the organization to support and encourage EIDM. As suggested by Bowen and colleagues, EIDM “requires a change in how business is done, and the environment in which this business is conducted” [ 16 ]. Case study work shows that changes including the implementation of new tools should be part of a larger organizational strategy [ 9 , 11 ]. Being explicit about EIDM capacity building as a long-term process allows adequate time to create, pursue and reach realistic goals, both for individuals and organizations [ 11 ]. The usability and usefulness of the tools can further assist in supporting a consistent and replicable organizational process for EIDM [ 20 ]. An organizational approach requires active KT strategies that provide access to research and the technical infrastructure that supports that access [ 9 , 20 ]. It is important to be realistic about the infrastructure needed to support access [ 11 ].

Work with EIDM tools as part of this project identified strategies that may be important for sustaining the use of these tools as part of an organization-wide EIDM strategy beyond a time-limited KTE intervention. As indicated by our findings, public health professionals who used the tools intended to continue using them in their future work and to share them with colleagues. With the use of new tools comes a need to acknowledge that learning how to use and apply them takes time [ 11 , 81 ]. Use of an intranet or organizational website that incorporates the library system can also promote the sharing of tools for EIDM within the health department, increasing awareness and promoting the accessibility of tools and resources, leading to the development of organizational infrastructure to support and encourage the EIDM process [ 20 ]. While change is taking place, organizations must be aware that individuals are being asked to make a major change in the way they work [ 82 ]. A supportive culture and context for this change are needed [ 9 , 18 , 83 , 84 ]. Careful consideration of context is required when developing strategies for implementing knowledge translation [ 85 – 87 ]. Generating the necessary positive context depends on leadership at the highest level of the organization [ 9 , 11 ]. Management support and accountability are also needed to support employees [ 11 , 81 , 84 ]. When managers help employees to acknowledge EIDM as a part of their role, managers themselves make fewer mistakes, organizations learn more, and there is more innovation [ 88 ]. While strong senior leadership and management play a key role, it is also necessary that all staff recognize that everyone has responsibility for sharing knowledge so that learning can take place [ 88 ].

Limitations

While this paper provides insights into the usefulness and usability of tools to support EIDM, some limitations should be noted. Data collection and analysis occurred simultaneously at baseline to help inform the tailored interventions. However, the majority of analysis occurred after follow-up data collection, restricting the ability of the researchers to conclude whether data saturation had occurred. The interviews were conducted by one member of the research team (RT), who also enacted the role of the KB at one of the health departments. This may have led participants to provide responses that they thought were socially desirable and hence especially positive. Lastly, all of the interviewees received the intensive KB-delivered KTE intervention or were involved in the study in a management support or key contact role. Therefore, views about the usefulness and usability of the tools may differ among public health professionals who did not interact with a KB or were not involved in these roles.

Recommendations

While a number of tools have been described in this paper, additional tools and options for accessing them are also available. For example: the NCCMT facilitates access to methods and tools for EIDM through the Registry of Methods and Tools [ 89 ]; the UK National Health Service public health evidence section includes implementation tools specific to public health [ 90 ]; and the Public Health Agency of Canada’s Canadian Best Practices Portal provides information and tools for EIDM [ 25 ] and program planning [ 91 ]. The developers of these and other EIDM tools, as well as those facilitating the use of the tools in practice (e.g., researchers, decision makers), should be encouraged to collect and share information about how the tools have been used and perceptions of their usability and usefulness. These evaluation efforts can inform the refinement of existing tools and the development of new tools to support EIDM. While the best way to facilitate the uptake of these tools by public health professionals remains to be determined, this study illustrated that a KB, as part of a tailored KTE intervention, was able to facilitate the integration of the tools within the health departments, including the sharing of tools between health departments. Although there is no single way to generate readiness for this type of organizational change [ 92 ], future efforts can look to what is known about facilitating a positive context for uptake [ 9 , 11 , 83 , 84 , 86 ]. Future efforts should also continue to identify the effectiveness of KTE interventions as part of a broader strategy for promoting the integration of tools that support the EIDM process in practice.

Conclusions

Public health professionals are increasingly expected to use research evidence in decision making. Along with KTE interventions, tools are being designed and implemented to help public health professionals meet these expectations and engage in the seven-step process for EIDM. Using a KTE intervention delivered by a KB, this study demonstrated that the KB facilitated the sharing and integration of tools to support the EIDM process among three Ontario health departments. Findings illustrated that the tools used by public health professionals working within varied roles in the health departments were viewed as usable and useful. Use of the tools facilitated individuals’ ability to engage in the EIDM process in a systematic way, which in turn increased staff confidence in formulating recommendations for practice, program, and policy decisions. It also encouraged their future engagement in the EIDM process. Efforts should continue to promote the awareness and use of the tools to assist public health professionals in their efforts to incorporate research evidence in practice, program, and policy decisions.

Abbreviations

  • EIDM: Evidence-informed decision making
  • NCCMT: National Collaborating Centre for Methods and Tools
  • KB: Knowledge broker
  • KTE: Knowledge translation and exchange
  • CIHR: Canadian Institutes of Health Research
  • PHSI: Partnerships for Health System Improvement
  • FRN: Funding Reference Number
  • RPA: Research and Policy Analyst
  • EXTRA: Executive Training for Research Application
  • AGREE: Appraisal of Guidelines for Research and Evaluation
  • AMSTAR: A Measurement Tool to Assess Systematic Reviews
  • RCT: Randomized controlled trial
  • CASP: Critical Appraisal Skills Programme
  • SIGN: Scottish Intercollegiate Guidelines Network
  • EPHPP: Effective Public Health Practice Project
  • A&T: Applicability and Transferability
  • KT: Knowledge translation

Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE: Knowledge translation of research findings. Implement Sci. 2012, 7: 50-10.1186/1748-5908-7-50.

Straus SE, Tetroe J, Graham I: Defining knowledge translation. Can Med Assoc J. 2009, 181: 165-168. 10.1503/cmaj.081229.

Straus SE, Tetroe JM, Graham ID: Knowledge translation is the use of knowledge in health care decision making. J Clin Epidemiol. 2011, 64: 6-10. 10.1016/j.jclinepi.2009.08.016.

National Collaborating Centre for Methods and Tools: A Model for Evidence-Informed Decision Making in Public Health. http://www.nccmt.ca/publications/1/view-eng.html ,

Dobbins M, Thomas H, O’Brien MA, Duggan M: The use of systematic reviews in the development of new provincial public health policies in Ontario. Int J Technol Assess Health Care. 2004, 20: 399-404.

Dobbins M, DeCorby K, Twiddy T: A knowledge transfer strategy for public health decision makers. Worldviews Evid Based Nurs. 2004, 1: 120-128. 10.1111/j.1741-6787.2004.t01-1-04009.x.

Lavis J, Davies H, Oxman A, Denis JL, Golden-Biddle K, Ferlie E: Towards systematic reviews that inform health care management and policy-making. J Health Serv Res Policy. 2005, 10: 35-48. 10.1258/1355819054308549.

Government of Quebec: Revised Statutes of Quebec, Public Health Act. [ http://www2.publicationsduquebec.gouv.qc.ca/dynamicSearch/telecharge.php?type=2&file=/S_2_2/S2_2_A.html ]

Peirson L, Ciliska D, Dobbins M, Mowat D: Building capacity for evidence informed decision making in public health: a case study of organizational change. BMC Public Health. 2012, 12: 137-10.1186/1471-2458-12-137.

Brownson RC, Fielding J, Maylahn C: Evidence-based public health: a fundamental concept for public health practice. Annu Rev Public Health. 2009, 30: 175-201. 10.1146/annurev.publhealth.031308.100134.

Ward M, Mowat D: Creating an organizational culture for evidence-informed decision making. Healthc Manage Forum. 2012, 25: 146-150. 10.1016/j.hcmf.2012.07.005.

Di Ruggiero E, Frank J, Moloughney B: Strengthen Canada’s public health system now. Can J Public Health. 2004, 95: 5-11.

Ciliska D, Thomas H, Buffett C: An introduction to evidence-based public health and a compendium of critical appraisal tools for public health practice (Revised). 2012, Hamilton, Ontario, Canada: National Collaborating Centre for Methods and Tools

Dobbins M, Hanna SE, Ciliska D, Thomas H, Manske S, Cameron R, Mercer SL, O’Mara L, DeCorby K, Robeson P: A randomized controlled trial evaluating the impact of knowledge translation and exchange strategies. Implement Sci. 2009, 4: 61-10.1186/1748-5908-4-61.

LaPelle NR, Luckmann R, Hatheway Simpson E, Martin ER: Identifying strategies to improve access to credible and relevant information for public health professionals: a qualitative study. BMC Public Health. 2006, 6: 89-101. 10.1186/1471-2458-6-89.

Bowen S, Erickson T, Martens P, Crockett S: More than “using research”: the real challenges in promoting evidence-informed decision-making. Health Policy. 2009, 4: 87-102.

Lavis JN, Robertson D, Woodside J, McLeod C, Abelson J: How can research organizations more effectively transfer research knowledge to decision makers?. Milbank Q. 2003, 81: 221-248. 10.1111/1468-0009.t01-1-00052.

Ward M: Evidence-informed decision making in a public health setting. Healthc Manage Forum. 2011, 24: S8-S16. 10.1016/j.hcmf.2011.01.005.

Stetler CB, Ritchie JA, Rycroft-Malone J, Shultz AA, Charns MP: Institutionalizing evidence-based practice: an organizational case study using a model of strategic change. Implement Sci. 2009, 4:

Ellen ME, Leon G, Bouchard G, Lavis JN, Ouimet M, Grimshaw JM: What supports do health system organizations have in place to facilitate evidence-informed decision-making? A qualitative study. Implement Sci. 2013, 8: 84-10.1186/1748-5908-8-84.

Lomas J: The in-between world of knowledge brokering. BMJ. 2007, 334: 129-132. 10.1136/bmj.39038.593380.AE.

Gagnon ML: Moving knowledge to action through dissemination and exchange. J Clin Epidemiol. 2011, 64: 25-31. 10.1016/j.jclinepi.2009.08.013.

Dobbins M, DeCorby K, Robeson P, Husson H, Tirilis D, Greco L: A knowledge management tool for public health: health-evidence.ca. BMC Public Health. 2010, 10: 496-10.1186/1471-2458-10-496.

Kiefer L, Frank J, Di Ruggiero E, Dobbins M, Manuel D, Gully PR, Mowat D: Fostering evidence-based decision-making in Canada: examining the need for a Canadian Population and Public Health Evidence Centre and Research Network. Can J Public Health. 2005, 96: I-1-I-19.

Public Health Agency of Canada: Canadian Best Practice Portal: Evidence-Informed Decision-Making Tools. [ http://cbpp-pcpe.phac-aspc.gc.ca/resources/evidence-informed-decision-making/ ]

Dobbins M, Robeson P, Ciliska D, Hanna S, Cameron R, O’Mara L, DeCorby K, Mercer S: A description of a knowledge broker role implemented as part of a randomized controlled trial evaluating three knowledge translation strategies. Implement Sci. 2009, 4: 23-10.1186/1748-5908-4-23.

Clark JP: How to peer review a qualitative manuscript. Peer Review in Health Sciences. Edited by: Godlee F, Jefferson T. 2003, London: BMJ Books, 219-235.

Pascale R, Athos A: The Art of Japanese Management. 1981, London: Penguin Books

Peters T, Waterman R: In Search of Excellence. 1982, New York, London: Harper & Row

Waterman R, Peters J, Phillips JR: Structure is not organisation. Bus Horiz. 1980, 23: 14-26. 10.1016/0007-6813(80)90027-0.

Hewitt-Taylor J: Use of constant comparative analysis in qualitative research. Nurs Stand. 2001, 15: 39-42.

Canadian Foundation for Healthcare Improvement: What We Do: Education and Training: EXTRA. [ http://www.cfhi-fcass.ca/WhatWeDo/EducationandTraining/EXTRA.aspx ]

Health Evidence: Developing an efficient search strategy using PICO. [ http://www.healthevidence.org/practice-tools.aspx#PT2 ]

National Collaborating Centre for Methods and Tools: Evidence-informed public health- Define: Clearly define the question or problem. [ http://www.nccmt.ca/eiph/define-eng.html ]

Ganann R, Ciliska D, Thomas H: Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010, 5: 56-65. 10.1186/1748-5908-5-56.

Harker J, Kleijnen J: What is a rapid review? A methodological exploration of rapid reviews in Health Technology Assessments. Int J Evid Based Healthc. 2012, 10: 397-410. 10.1111/j.1744-1609.2012.00290.x.

Region of Peel Public Health: Step 1 - Developing a conceptual model - instructions and worksheet. [ http://www.peelregion.ca/health/library/developing-model.asp ]

Haynes B: Of Studies, Synthesis, Synopses, Summaries and Systems; the 5 S’s evolution of Information services for evidence-based healthcare decisions. Evid Based Nurs. 2007, 10: 6-7. 10.1136/ebn.10.1.6.

DiCenso A, Bayley L, Haynes RB: Accessing pre-appraised evidence: fine-tuning the 5S model into a 6S model. Evid Based Nurs. 2009, 12: 99-101.

Robeson P, Yost J: Resources to Guide & Track your Search. [ http://www.healthevidence.org/practice-tools.aspx#PT4 ]

Robeson P, Dobbins M, DeCorby K, Tirilis D: Facilitating access to pre-processed research evidence in public health. BMC Public Health. 2010, 10: 95-10.1186/1471-2458-10-95.

Health Evidence: Keeping track of search results: A flowchart. [ http://www.healthevidence.org/practice-tools.aspx#PT5 ]

Brouwers MC, Kho ME, Browman GP, Burgers JS, Cluzeau F, Feder G, Fervers B, Graham ID, Grimshaw J, Hanna SE, Littlejohns P, Makarski J, Zitzelsberger L, AGREE Next Steps Consortium: AGREE II: Advancing guideline development, reporting and evaluation in healthcare. Can Med Assoc J. 2010, 182: E839-E842. 10.1503/cmaj.090449.

Brouwers MC, Kho ME, Browman GP, Burgers JS, Cluzeau F, Feder G, Fervers B, Graham ID, Hanna SE, Makarski J, AGREE Next Steps Consortium: Development of the AGREE II, part 1: performance, usefulness and areas for improvement. CMAJ. 2010, 182: 1045-1052. 10.1503/cmaj.091714.

Brouwers MC, Kho ME, Browman GP, Burgers JS, Cluzeau F, Feder G, Fervers B, Graham ID, Hanna SE, Makarski J, AGREE Next Steps Consortium: Development of the AGREE II, part 2: assessment of validity of items and tools to support application. CMAJ. 2010, 182: E472-E478. 10.1503/cmaj.091716.

Health Evidence: Quality assessment tool: Review articles. [ http://www.healthevidence.org/documents/our-appraisal-tools/QA_tool&dictionary_18.Mar.2013.pdf ]

Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, Porter AC, Tugwell P, Moher D, Bouter LM: Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007, 7: 10-10.1186/1471-2288-7-10.

Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M: AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009, 62: 1013-1020. 10.1016/j.jclinepi.2008.10.009.

AMSTAR: Recent Developments. [ http://amstar.ca/Developments.php ]

AMSTAR: About us. [ http://amstar.ca/About_Amstar.php ]

Critical Appraisal Skills Programme: Appraising the evidence. [ http://www.casp-uk.net/find-appraise-act/appraising-the-evidence/ ]

Scottish Intercollegiate Guidelines Network. Critical appraisal: Notes and checklists. [ http://www.sign.ac.uk/methodology/checklists.html ]

National Collaborating Centre for Methods and Tools: Critical appraisal tools to make sense of evidence. [ http://www.nccmt.ca/registry/view/eng/87.html ]

Critical Appraisal Skills Programme: History. [ http://www.casp-uk.net/#!history/cnxo ]

Hannes K, Lockwood C, Pearson A: A comparative analysis of three online appraisal instruments’ ability to assess validity in qualitative research. Qual Health Res. 2010, 20: 1736-1743. 10.1177/1049732310378656.

Effective Public Health Practice Project: Quality assessment tools for quantitative studies. [ http://www.ephpp.ca/tools.html ]

Letts L, Wilkins S, Law M, Stewart D, Bosch J, Westmorland M: Critical Review Form – Qualitative Studies (Version 2.0). [ http://www.srs-mcmaster.ca/Portals/20/pdf/ebp/qualreview_version2.0.pdf ]

Occupational Therapy Evidence-Based Practice Research Group. [ http://www.srs-mcmaster.ca/Default.aspx?tabid=630 ]

Region of Peel Public Health: Step 4 - Data extraction for systematic reviews. [ http://www.peelregion.ca/health/library/data-extraction.asp ]

National Collaborating Centre for Methods and Tools: Methods: Synthesis 1. Rapid reviews: Methods and implications. [ http://www.nccmt.ca/pubs/Methods_Synthesis1.pdf ]

National Collaborating Centre for Methods and Tools: Evidence-informed public health- Adapt: Adapt the information to a local context. [ http://www.nccmt.ca/eiph/adapt-eng.html ]

Buffet C, Ciliska D, Thomas H: Can I Use This Evidence in my Program Decision? Assessing Applicability and Transferability of Evidence. [ http://www.nccmt.ca/pubs/AT_paper_with_tool_final_-_English_Oct_07.pdf ]

Buffet C, Ciliska D, Thomas H: It worked there. Will it work here? A tool for assessing applicability and transferability of evidence (A: When considering starting a new program). [ http://www.nccmt.ca/pubs/A&Trevised-startEN.pdf ]

Buffet C, Ciliska D, Thomas H: It worked there. Will it work here? A tool for assessing applicability and transferability of evidence (B: When considering stopping an existing program). [ http://www.nccmt.ca/pubs/A&Trevised-startEN.pdf ]

National Collaborating Centre for Methods and Tools: Applicability and transferability of evidence tool (A&T tool). [ http://www.nccmt.ca/registry/view/eng/24.html ]

Region of Peel Public Health: Step 6 - Rapid review report structure. [ http://www.peelregion.ca/health/library/report-structure.asp ]

Canadian Health Services Research Foundation: Communication notes: Reader-friendly writing - 1:3:25. [ http://www.cfhi-fcass.ca/Migrated/PDF/CommunicationNotes/cn-1325_e.pdf ]

Barwick M: Scientist Knowledge Translation Plan Template - R™. [ http://www.melaniebarwick.com/training.php ]

National Collaborating Centre for Methods and Tools: Evidence informed public health - Implement: Decide whether (and plan how) to implement the adapted evidence into practice or policy. [ http://www.nccmt.ca/eiph/implement-eng.html ]

Ross S, Goering P, Jacobson N, Butterill D: A Guide for Assessing Health Research Knowledge Translation Plans. 2007, Toronto, ON: Centre for Addiction and Mental Health

National Collaborating Centre for Methods and Tools: Evidence informed public health - Evaluate: Evaluate the effectiveness of implementation efforts. [ http://www.nccmt.ca/eiph/implement-eng.html ]

Region of Peel Public Health: Step 7 - Manager checklist. [ http://www.peelregion.ca/health/library/manager-checklist.asp ]

LaRocca R, Yost J, Dobbins M, Ciliska D, Butt M: The effectiveness of knowledge translation strategies used in public health: a systematic review. BMC Public Health. 2012, 12: 751-10.1186/1471-2458-12-751.

Peirson L, Catallo C, Chera S: the registry of knowledge translation methods and tools: a resource to support evidence-informed public health. Int J Public Health. 2013, 58: 493-500. 10.1007/s00038-013-0448-3.

Shea BJ, Bouter LM, Peterson J, Boers M, Andersson N, Ortiz Z, Ramsay T, Bai A, Shukla VK, Grimshaw JM: External validation of a measurement tool to assess systematic reviews (AMSTAR). PLoS Med. 2007, 2: e1350-

Bevan N, Kirakowski J, Maissel J: Proceedings of the 4th international conference on HCI: What is usability?. [ http://www.nigelbevan.com/papers/whatis92.pdf ]

Tsakonas G, Papatheodorou C: Analysing and evaluating usefulness and usability in electronic information services. J Inform Sci. 2006, 32: 400-419. 10.1177/0165551506065934.

Seffah A, Donyaee M, Kline R, Padda H: Usability measurement and metrics: a consolidated model. Software Qual J. 2006, 14: 159-178. 10.1007/s11219-006-7600-8.

Khan BK, Strong DM, Wang RY: Information quality benchmarks: product and service performance. Communications of the ACM. 2014, 45: 184-192.

Brownson RC, Gurney JG, Land GH: Evidence-based decision making in public health. J Public Health Manag Pract. 1999, 5: 86-97. 10.1097/00124784-199909000-00012.

Casebeer A, Hayward S, MacKean G, Matthias S, Hayward R: Evidence in action, acting on evidence. SEARCH Canada: Building capacity in health organizations to create and use knowledge. [ http://www.cihr-irsc.gc.ca/e/30667.html ]

Lucas LM: The role of teams, culture, and capacity in the transfer of organizational practices. The Learning Organization. 2010, 17: 419-436.

Cummings GG, Estabrooks CA, Midodzi WK, Wallin L, Hayduk L: Influence of organizational characteristics and context on research utilization. Nurs Res. 2007, 56: S24-S39. 10.1097/01.NNR.0000280629.63654.95.

Krein SL, Damschroder LJ, Kowalski CP, Forman J, Hofer TP, Saint S: The influence of organizational context on quality improvement and patient safety efforts in infection prevention: a multi-center qualitative study. Soc Sci Med. 2010, 71: 1692-1701. 10.1016/j.socscimed.2010.07.041.

Contandriopoulos D, Denis JL, Lemire M, Tremblay E: Knowledge exchange processes in organizations and policy arenas: a narrative systematic review of the literature. Milbank Q. 2010

Rycroft-Malone J, Seers K, Chandler J, Hawkes CA, Crichton N, Allen C, Bullock I, Strunin L: The role of evidence, context, and facilitation in an implementation trial: implications for the development of the PARIHS framework. Implement Sci. 2009, 8: 28-

Dogherty EJ, Harrison MB, Graham ID: Facilitation as a role and process in achieving evidence-based practice in nursing: a focused review of concept and meaning. Worldviews Evid Based Nurs. 2010, 7: 76-89.

Austin MJ: Strategies for transforming human service organizations into learning organizations: knowledge management and the transfer of learning. J Evid Based Soc Work. 2008, 5: 569-596. 10.1080/15433710802084326.

The National Collaborating Centre for Methods and Tools: The Registry of Methods and Tools: Knowledge translation methods and tools for public health. [ http://www.nccmt.ca/registry/index-eng.html ]

National Institute for Health and Care Excellence: Public Health Information. [ http://www.evidence.nhs.uk/about-evidence-services/content-and-sources/public-health-information ]

Public Health Agency of Canada: Canadian Best Practices Portal: Planning Public Health Programs: Information and Tools. [ http://cbpp-pcpe.phac-aspc.gc.ca/resources/planning-public-health-programs/ ]

Weiner BJ: A theory of organizational readiness for change. Implement Sci. 2009, 4: 67-10.1186/1748-5908-4-67.

Guyatt GH, Rennie D: Users’ guides to the medical literature. JAMA. 1993, 270: 2096-2097. 10.1001/jama.1993.03510170086037.

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/14/728/prepub

Acknowledgements

The authors gratefully acknowledge funding from the Canadian Institutes of Health Research (FRN 101867, 126353) for the project, A tailored, collaborative strategy to develop capacity and facilitate evidence-informed public health decision making . They also acknowledge the financial and in-kind contributions of the three partner health departments and the participation and feedback from health department staff throughout this work.

Author information

Authors and Affiliations

School of Nursing, Faculty of Health Sciences, McMaster University, 1200 Main St. W, Hamilton, Ontario, Canada

Jennifer Yost, Maureen Dobbins, Robyn Traynor, Stephanie Workentine & Lori Greco

Health Promotion, Chronic Disease & Injury Prevention, Public Health Ontario, 480 University Avenue, Suite 300, Toronto, Ontario, Canada

Kara DeCorby

Corresponding author

Correspondence to Jennifer Yost .

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

MD conceived of the study and contributed to the draft and revisions of the paper. JY prepared the first draft and revisions of the paper. RT and KD contributed to the draft and revisions of the paper. SW assisted with literature searching, background writing, and reference checking. LG contributed to initial revisions of the paper. MD, JY, RT, KD, and LG contributed to study implementation, data collection, and data analysis. All authors read and approved the final manuscript.

Electronic supplementary material

Additional file 1: Tools for Evidence Informed Decision Making (EIDM). Additional file 1 provides a succinct description of the tools and how they were used in the health departments, as well as identifies the developer of the tool and the format in which they are available [ 93 ]. (PDF 375 KB)

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/4.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Yost, J., Dobbins, M., Traynor, R. et al. Tools to support evidence-informed public health decision making. BMC Public Health 14 , 728 (2014). https://doi.org/10.1186/1471-2458-14-728

Received : 05 April 2014

Accepted : 03 July 2014

Published : 18 July 2014

DOI : https://doi.org/10.1186/1471-2458-14-728


Keywords

  • Knowledge broker
  • Public health

BMC Public Health

ISSN: 1471-2458

  • Systematic review
  • Open access
  • Published: 14 November 2017

The effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare: a systematic review

  • Mitchell N. Sarkies 1 ,
  • Kelly-Ann Bowles 2 ,
  • Elizabeth H. Skinner 1 ,
  • Romi Haas 1 ,
  • Haylee Lane 1 &
  • Terry P. Haines 1  

Implementation Science volume  12 , Article number:  132 ( 2017 ) Cite this article

31k Accesses

71 Citations

22 Altmetric

Metrics details

It is widely acknowledged that health policy and management decisions rarely reflect research evidence. Therefore, it is important to determine how to improve evidence-informed decision-making. The primary aim of this systematic review was to evaluate the effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare. The secondary aim of the review was to describe factors perceived to be associated with effective strategies and the inter-relationship between these factors.

An electronic search was developed to identify studies published between January 01, 2000, and February 02, 2016. This was supplemented by checking the reference list of included articles, systematic reviews, and hand-searching publication lists from prominent authors. Two reviewers independently screened studies for inclusion, assessed methodological quality, and extracted data.

After duplicate removal, the search strategy identified 3830 titles. Following title and abstract screening, 96 full-text articles were reviewed, of which 19 studies (21 articles) met all inclusion criteria. Three studies were included in the narrative synthesis, which found that policy briefs including expert opinion might affect intended actions, with intentions persisting into actions, for public health policy in developing nations. Workshops, ongoing technical assistance, and distribution of instructional digital materials may improve knowledge and skills around evidence-informed decision-making in US public health departments. Tailored, targeted messages were more effective in increasing public health policies and programs in Canadian public health departments than the same messages combined with a knowledge broker. Sixteen studies (18 articles) were included in the thematic synthesis, leading to a conceptualisation of inter-relating factors perceived to be associated with effective research implementation strategies. A unidirectional, hierarchical flow was described from (1) establishing an imperative for practice change, (2) building trust between implementation stakeholders, and (3) developing a shared vision, to (4) actioning change mechanisms. This was underpinned by the (5) employment of effective communication strategies and (6) provision of resources to support change.

Conclusions

Evidence is developing to support the use of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare. The design of future implementation strategies should be based on the inter-relating factors perceived to be associated with effective strategies.

Trial registration

This systematic review was registered with PROSPERO (record number: 42016032947).

Peer Review reports

The use of research evidence to inform health policy is strongly promoted [ 1 ]. This drive has developed with increased pressure on healthcare organisations to deliver the most effective health services in an efficient and equitable manner [ 2 ]. Policy and management decisions influence the ability of health services to improve societal outcomes by allocating resources to meet health needs [ 3 ]. These decisions are more likely to improve outcomes in a cost-efficient manner when they are based on the best available evidence [ 4 , 5 , 6 , 7 , 8 ].

Evidence-informed decision-making refers to the complex process of considering the best available evidence from a broad range of information when delivering health services [ 1 , 9 , 10 ]. Policy and management decisions can be influenced by economic constraints, community views, organisational priorities, political climate, and ideological factors [ 11 , 12 , 13 , 14 , 15 , 16 ]. While these elements are all important in the decision-making process, without the support of research evidence they are an insufficient basis for decisions that affect the lives of others [ 17 , 18 ].

Recently, increased attention has been given to implementation research to reduce the gap between research evidence and healthcare decision-making [ 19 ]. This growing but poorly understood field of science aims to improve the uptake of research evidence in healthcare decision-making [ 20 ]. Research implementation strategies such as knowledge brokerage and education workshops promote the uptake of research findings into health services. These strategies have the potential to create systematic, structural improvements in healthcare delivery [ 21 ]. However, many barriers exist to successful implementation [ 22 , 23 ]. Individuals and health services face financial disincentives, lack of time or awareness of large evidence resources, limited critical appraisal skills, and difficulties applying evidence in context [ 24 , 25 , 26 , 27 , 28 , 29 , 30 ].

It is important to evaluate the effectiveness of implementation strategies and the inter-relating factors perceived to be associated with effective strategies. Previous reviews on health policy and management decisions have focussed on implementing evidence from single sources such as systematic reviews [ 29 , 31 ]. Strategies that involved simple written information on accomplishable change may be successful in health areas where there is already awareness of evidence supporting practice change [ 29 ]. Re-conceptualisation or improved methodological rigor has been suggested by Mitton et al. to produce a richer evidence base for future evaluation; however, only one high-quality randomised controlled trial has been identified since [ 9 , 32 , 33 ]. As such, an updated review of emerging research on this topic is needed to inform the selection of research implementation strategies in health policy and management decisions.

The primary aim of this systematic review was to evaluate the effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare. A secondary aim of the review was to describe factors perceived to be associated with effective strategies and the inter-relationship between these factors.

Identification and selection of studies

This systematic review was registered with PROSPERO (record number: 42016032947) and has been reported consistent with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Additional file  1 ). Ovid MEDLINE, Ovid EMBASE, PubMed, CINAHL Plus, Scopus, Web of Science Core Collection, and The Cochrane Library were searched electronically from January 01, 2000, to February 02, 2016, to retrieve literature relevant to the current healthcare environment. The search was limited to the English language, and terms relevant to the field, population, and intervention were combined (Additional file  2 ). Search terms were selected based on their sensitivity, specificity, validity, and ability to discriminate implementation research articles from non-implementation research articles [ 34 , 35 , 36 ]. Electronic database searches were supplemented by cross-checking the reference lists of included articles and of systematic reviews identified during title and abstract screening. Searches were also supplemented by hand-searching publication lists from prominent authors in the field of implementation science.
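To illustrate the overall shape of such a strategy, the sketch below assembles a database list and a Boolean query from concept blocks combined with AND, as described above. It is a minimal, hypothetical sketch: the placeholder terms and the `build_query` helper are illustrative assumptions only and are not the authors' actual strategy, which appears in Additional file 2.

```python
# Illustrative sketch of the search structure described in the text: three
# concept blocks (field, population, intervention) combined with AND, run
# across the listed databases, limited to English and 2000-01-01 to 2016-02-02.

DATABASES = [
    "Ovid MEDLINE", "Ovid EMBASE", "PubMed", "CINAHL Plus",
    "Scopus", "Web of Science Core Collection", "The Cochrane Library",
]

# Hypothetical placeholder terms; the real terms were chosen for sensitivity,
# specificity, validity, and ability to discriminate implementation research.
CONCEPT_BLOCKS = {
    "field": ["implementation science", "knowledge translation"],
    "population": ["policy maker", "health care manager"],
    "intervention": ["knowledge broker*", "policy brief*"],
}

def build_query(blocks):
    """OR the terms within each concept block, then AND the blocks together."""
    ors = ["(" + " OR ".join(terms) + ")" for terms in blocks.values()]
    return " AND ".join(ors)

if __name__ == "__main__":
    print("Databases searched:", "; ".join(DATABASES))
    print("Example query:", build_query(CONCEPT_BLOCKS))
```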

Study selection

Types of studies

All study designs were included. Experimental and quasi-experimental study designs were included to address the primary aim. No study design limitations were applied to address the secondary aim.

Types of participants

The population included individuals or bodies who made resource allocation decisions at the managerial, executive, or policy level of healthcare organisations or government institutions. Broadly defined as healthcare policy-makers or managers, this population focuses on decision-making to improve population health outcomes by strengthening health systems, rather than individual therapeutic delivery. Studies investigating clinicians making decisions about individual clients were excluded, unless these studies also included healthcare policy-makers or managers.

Interventions

Interventions included research implementation strategies aimed at facilitating evidence-informed decision-making by healthcare policy-makers and managers. Implementation strategies may be defined as methods to incorporate the systematic uptake of proven evidence into decision-making processes to strengthen health systems [ 37 ]. While these interventions have been described differently in various contexts, for the purpose of this review, we will refer to these interventions as ‘research implementation strategies’.

Type of outcomes

This review focused on a variety of possible outcomes that measure the use of research evidence. Outcomes were broadly categorised based on the four levels of Kirkpatrick’s Evaluation Model Hierarchy: level 1—reaction (e.g. change in attitude towards evidence), level 2—learning (e.g. improved skills acquiring evidence), level 3—behaviour (e.g. self-reported action taking), and level 4—results (e.g. change in patient or organisational outcomes) [ 38 ].
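As a concrete illustration of this categorisation, the minimal sketch below encodes the four Kirkpatrick levels and the example outcomes quoted above as a simple lookup table; the dictionary structure and helper function are illustrative assumptions rather than part of the review's methods.

```python
# Kirkpatrick's Evaluation Model Hierarchy as used to categorise outcomes in
# the review; level names and example outcomes are taken from the text above.

KIRKPATRICK_LEVELS = {
    1: ("reaction", "change in attitude towards evidence"),
    2: ("learning", "improved skills acquiring evidence"),
    3: ("behaviour", "self-reported action taking"),
    4: ("results", "change in patient or organisational outcomes"),
}

def describe_level(level):
    """Return a human-readable description of a Kirkpatrick outcome level."""
    name, example = KIRKPATRICK_LEVELS[level]
    return f"Level {level} - {name} (e.g. {example})"

if __name__ == "__main__":
    for lvl in sorted(KIRKPATRICK_LEVELS):
        print(describe_level(lvl))
```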

The web-based application Covidence (Covidence, Melbourne, Victoria, Australia) was used to manage references during the review [ 39 ]. Titles and abstracts were imported into Covidence and independently screened by the lead investigator (MS) and one of two other reviewers (RH, HL). Duplicates were removed throughout the review process using Endnote (EndNote ™ , Philadelphia, PA, USA), Covidence and manually during reference screening. Studies determined to be potentially relevant or whose eligibility was uncertain were retrieved and imported to Covidence for full-text review. The lead investigator (MS) and one of two other reviewers (RH, HL) then independently assessed the full-text articles for the remaining studies to ascertain eligibility for inclusion. A fourth reviewer (KAB) independently decided on inclusion or exclusion if there was any disagreement in the screening process. Attempts were made to contact authors of studies whose full-text articles were unable to be retrieved, and those that remained unavailable were excluded.

Quality assessment

Experimental study designs, including randomised controlled trials and quasi-experimental studies, were independently assessed for risk of bias by the lead investigator (MS) and one of two other reviewers (RH, HL) using the Cochrane Collaboration’s tool for assessing risk of bias [ 40 ]. Non-experimental study designs were independently assessed for risk of bias by the lead investigator (MS) and one of two other reviewers (RH, HL) using design-specific risk-of-bias and critical appraisal tools: (1) the Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies from the National Heart, Lung, and Blood Institute (NHLBI) [ 41 ] and (2) the Critical Appraisal Skills Programme (CASP) Qualitative Checklist for qualitative, case study, and evaluation designs [ 42 ].
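The sketch below restates this design-to-tool mapping as a small function. The design labels and the function itself are assumptions made for illustration; the review's actual categorisation was performed by the reviewers, not by code.

```python
# Mapping of study design to the risk-of-bias / critical appraisal tool
# described above. Design labels are illustrative placeholders.

def appraisal_tool(design):
    """Return the appraisal tool used for a given study design label."""
    experimental = {"randomised controlled trial", "quasi-experimental"}
    observational = {"cohort", "cross-sectional"}
    qualitative = {"qualitative", "case study", "evaluation"}

    if design in experimental:
        return "Cochrane Collaboration risk-of-bias tool"
    if design in observational:
        return ("NHLBI Quality Assessment Tool for Observational "
                "Cohort and Cross-Sectional Studies")
    if design in qualitative:
        return "CASP Qualitative Checklist"
    raise ValueError(f"No appraisal tool mapped for design: {design}")

if __name__ == "__main__":
    print(appraisal_tool("quasi-experimental"))
    print(appraisal_tool("case study"))
```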

Data extraction

Data was extracted using a standardised, piloted data extraction form developed by reviewers for the purpose of this study (Additional file  3 ). The lead investigator (MS) and one of two other reviewers (RH, HL) independently extracted data relating to the study details, design, setting, population, demographics, intervention, and outcomes for all included studies. Quantitative results were also extracted in the same manner from experimental studies that reported quantitative data relating to the effectiveness of research implementation strategies in promoting evidence-informed policy and management decisions in healthcare. Attempts were made to contact authors of studies where data was not reported or clarification was required. Disagreement between investigators was resolved by discussion, and where agreement could not be reached, an independent fourth reviewer (KAB) was consulted.

Data analysis

A formal meta-analysis was not undertaken due to the small number of studies identified and the high heterogeneity in study approaches. Instead, a narrative synthesis of experimental studies was performed to evaluate the effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare, and a thematic synthesis of non-experimental studies was performed to describe factors perceived to be associated with effective strategies and the inter-relationship between these factors. Experimental studies, synthesised narratively, were defined as studies reporting quantitative results with both an experimental and a comparison group. This included quasi-experimental designs that reported quantitative before-and-after results for primary outcomes related to the effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare. Non-experimental studies, synthesised thematically, were defined as studies reporting quantitative results without both an experimental and a control group, or studies reporting qualitative results. This included quasi-experimental studies that did not report quantitative before-and-after results for primary outcomes related to the effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare.
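The routing rule above can be summarised as a simple predicate, sketched below; the `Study` fields and the function name are hypothetical and exist only to make the stated definitions explicit.

```python
# Sketch of the decision rule for routing studies into the narrative synthesis
# (experimental) or thematic synthesis (non-experimental), as defined above.

from dataclasses import dataclass

@dataclass
class Study:
    reports_quantitative_results: bool
    # True when the study has both an experimental and a comparison group,
    # including quasi-experimental before-and-after comparisons.
    has_experimental_and_comparison_group: bool

def synthesis_stream(study):
    """Experimental studies (quantitative results with an experimental and a
    comparison group) go to the narrative synthesis; everything else,
    including qualitative studies, goes to the thematic synthesis."""
    if (study.reports_quantitative_results
            and study.has_experimental_and_comparison_group):
        return "narrative synthesis"
    return "thematic synthesis"

if __name__ == "__main__":
    print(synthesis_stream(Study(True, True)))    # e.g. an RCT
    print(synthesis_stream(Study(False, False)))  # e.g. a qualitative study
```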

The thematic synthesis was informed by an inductive thematic approach for data referring to the factors perceived to be associated with effective strategies and the inter-relationship between these factors. The thematic synthesis in this systematic review was based on methods described by Thomas and Harden [ 43 ]. These methods involved three stages of analysis: (1) line-by-line coding of text, (2) inductive development of descriptive themes similar to those reported in primary studies, and (3) development of analytical themes representing new interpretive constructs not developed within studies but apparent between studies once data were synthesised. Data reported in the results sections of included studies were reviewed line by line and open coded according to meaning and content by the lead investigator (MS). Codes were developed using an inductive approach by the lead investigator (MS) and a second reviewer (TH). Concurrent with data analysis, this entailed constant comparison and the ongoing development and comparison of new codes as each study was coded. Immersing reviewers in the data, reflexive analysis, and peer debriefing techniques were used to ensure methodological rigor throughout the process. Codes and the code structure were considered finalised at the point of theoretical saturation (when no new concepts emerged from a study). A single researcher (MS) was chosen to conduct the coding in order to embed the interpretation of text within a single immersed individual acting as an instrument of data curation [ 44 , 45 ]. Simultaneous axial coding was performed by the lead investigator (MS) and a second reviewer (TH) during the original open coding of data to identify relationships between codes and organise coded data into descriptive themes. Once descriptive themes were developed, the two investigators organised data across studies into analytical themes using a deductive approach, outlining relationships and interactions between codes across studies. To ensure methodological rigor, a third reviewer (JW) was consulted via group discussion to develop final consensus. The lead author (MS) reviewed any disagreements in descriptive and analytical themes by returning to the original open codes. This cyclical process was repeated until the themes were considered to sufficiently describe the factors perceived to be associated with effective strategies and the inter-relationship between these factors.
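To make the three-stage structure concrete, the minimal sketch below represents open codes, descriptive themes, and analytical themes as nested mappings. The example codes and theme names are hypothetical placeholders, not data from the review; only the stage structure follows the description above.

```python
# Stage 1: open codes extracted line-by-line from each study's results section.
open_codes_by_study = {
    "study_A": ["staff lacked time to review evidence", "manager endorsed the project"],
    "study_B": ["workshops built appraisal skills", "senior leaders championed change"],
}

# Stage 2: open codes grouped into descriptive themes close to the primary studies.
descriptive_themes = {
    "resource constraints": ["staff lacked time to review evidence"],
    "leadership support": ["manager endorsed the project", "senior leaders championed change"],
    "capacity building": ["workshops built appraisal skills"],
}

# Stage 3: descriptive themes organised into analytical themes that only
# become apparent across studies.
analytical_themes = {
    "provide resources to support change": ["resource constraints", "capacity building"],
    "build trust between implementation stakeholders": ["leadership support"],
}

if __name__ == "__main__":
    total_codes = sum(len(codes) for codes in open_codes_by_study.values())
    print(f"{total_codes} open codes across {len(open_codes_by_study)} studies")
    for analytic, descriptive in analytical_themes.items():
        print(f"{analytic}: {', '.join(descriptive)}")
```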

Search results

The search strategy identified a total of 7783 articles: 7716 were identified by the electronic search strategy, 56 from reference checking of identified systematic reviews, 8 from reference checking of included articles, and 3 from hand-searching publication lists of prominent authors. Duplicates (3953) were removed using EndNote ( n  = 3906) and Covidence ( n  = 47), leaving 3830 articles for screening (Fig.  1 ).

Fig. 1 PRISMA flow diagram
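As a quick consistency check of the counts reported above (and summarised in the flow diagram), the snippet below re-derives the screening totals. The numbers are taken directly from the text; the check itself is not part of the review's methods.

```python
# Articles identified, by source (values from the text above).
electronic, review_refs, included_refs, hand_search = 7716, 56, 8, 3
total_identified = electronic + review_refs + included_refs + hand_search
assert total_identified == 7783

# Duplicates removed in EndNote and Covidence.
duplicates = 3906 + 47
assert duplicates == 3953

# Records remaining for title and abstract screening.
screened = total_identified - duplicates
assert screened == 3830

full_text_reviewed = 96
included_studies, included_articles = 19, 21
print(screened, full_text_reviewed, included_studies, included_articles)
```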

Of the 3830 articles, 96 were determined to be potentially eligible for inclusion after title and abstract screening (see Additional file  4 for the full list of 96 articles). The full-text of these 96 articles was then reviewed, with 19 studies ( n  = 21 articles) meeting all relevant criteria for inclusion in this review [ 9 , 27 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 ]. The most common reason for exclusion upon full-text review was that articles did not examine the effect of a research implementation strategy on decision-making by healthcare policy-makers or managers ( n  = 22).

Characteristics of included studies

The characteristics of included studies are shown in Table  1 . Three experimental studies evaluated the effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare systems. Sixteen non-experimental studies described factors perceived to be associated with effective research implementation strategies.

Study design

Of the 19 included studies, there were two randomised controlled trials (RCTs) [ 9 , 46 ], one quasi-experimental study [ 47 ], four program evaluations [ 48 , 49 , 50 , 51 ], three implementation evaluations [ 52 , 53 , 54 ], three mixed methods [ 55 , 56 , 57 ], two case studies [ 58 , 59 ], one survey evaluation [ 63 ], one process evaluation [ 64 ], one cohort study [ 60 ], and one cross-sectional follow-up survey [ 61 ].

Participants and settings

The largest number of studies were performed in Canada ( n  = 6), followed by the United States of America (USA) ( n  = 3), the United Kingdom (UK) ( n  = 2), Australia ( n  = 2), multi-national ( n  = 2), Burkina Faso ( n  = 1), the Netherlands ( n  = 1), Nigeria ( n  = 1), and Fiji ( n  = 1). Health topics where research implementation took place were varied in context. Decision-makers were typically policy-makers, commissioners, chief executive officers (CEOs), program managers, coordinators, directors, administrators, policy analysts, department heads, researchers, change agents, fellows, vice presidents, stakeholders, clinical supervisors, and clinical leaders, from the government, academia, and non-government organisations (NGOs), of varying education and experience.

Research implementation strategies

There was considerable variation in the research implementation strategies evaluated, see Table  2 for summary description. These strategies included knowledge brokering [ 9 , 49 , 51 , 52 , 57 ], targeted messaging [ 9 , 64 ], database access [ 9 , 64 ], policy briefs [ 46 , 54 , 63 ], workshops [ 47 , 54 , 56 , 60 ], digital materials [ 47 ], fellowship programs [ 48 , 50 , 59 ], literature reviews/rapid reviews [ 49 , 56 , 58 , 61 ], consortium [ 53 ], certificate course [ 54 ], multi-stakeholder policy dialogue [ 54 ], and multifaceted strategies [ 55 ].

Quality/risk of bias

Experimental studies

The potential risk of bias for the included experimental studies according to the Cochrane Collaboration tool for assessing risk of bias is presented in Table  3 . None of the included experimental studies reported methods for allocation concealment, blinding of participants and personnel, or blinding of outcome assessment [ 9 , 46 , 47 ]. Other potential sources of bias were identified in each of the included experimental studies, including (1) inadequate reporting of p values for mixed-effects models, results for hypothesis two, and comparison of health policies and programs (HPP) post-intervention in one study [ 9 ], (2) pooling of data from both intervention and control groups, which limited the ability to evaluate the success of the intervention in one study [ 47 ], and (3) inadequate reporting of analysis and results in another study [ 46 ]. Adequate random sequence generation was reported in two studies [ 9 , 46 ] but not in one [ 47 ]. One study reported complete outcome data [ 9 ]; however, large loss to follow-up was identified in two studies [ 46 , 47 ]. It was unclear whether risk of selective reporting bias was present for one study [ 46 ], as outcomes were not adequately pre-specified in the study. Risk of selective reporting bias was identified for one study that did not report p values for sub-group analysis [ 9 ] and another that only reported change scores for outcome measures [ 47 ].

Non-experimental studies

The potential risk of bias for the included non-experimental studies according to the Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies from the National Heart, Lung, and Blood Institute and the Critical Appraisal Skills Programme (CASP) Qualitative Checklist is presented in Tables  4 and 5 .

Narrative synthesis results: effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare

Definitive estimates of implementation strategy effect are limited due to the small number of identified studies, and heterogeneity in implementation strategies and reported outcomes. A narrative synthesis of results is described for changes in reaction/attitudes/beliefs, learning, behaviour, and results. See Table  6 for a summary of study results.

Randomised controlled trials

Interestingly, the policy brief accompanied by an expert opinion piece was thought to improve both level 1 change in reaction/attitudes/beliefs and level 3 behaviour change outcomes. This was referred to as an “authority effect” [ 46 ]. Tailored targeted messages also reportedly improved level 3 behaviour change outcomes. However, the addition of a knowledge broker to this strategy may have been detrimental to these outcomes. When organisational research culture was considered, health departments with low research culture may have benefited from the addition of a knowledge broker, although no p values were provided for this finding [ 9 ].

Non-randomised studies

The effect of workshops, ongoing technical assistance, and distribution of instructional digital materials on level 1 change in reaction/attitudes/beliefs outcomes was difficult to determine, as many measures did not change from baseline scores and the direction of change scores was not reported. However, a reduction in perceived support from state legislators for physical activity interventions was reported after the research implementation strategy. All level 2 learning outcomes were reportedly improved, with change scores larger for local than state health department decision-makers in every category except methods in understanding cost. Results were then less clear for level 3 behaviour change outcomes. Only self-reported individual-adapted health behaviour change was thought to have improved [ 47 ].

Thematic synthesis results: conceptualisation of factors perceived to be associated with effective strategies and the inter-relationship between these factors

Due to the relative paucity of effectiveness studies, a thematic synthesis of non-experimental studies was used to explore the factors perceived to be associated with effective strategies and the inter-relationship between these factors. Six broad, interrelated, analytic themes emerged from the thematic synthesis of data captured in this review (Fig.  2 ). We developed a conceptualisation of how these themes interrelate from data captured both within and across studies. Some of these analytic themes were specifically mentioned in individual papers, but none of the papers included in this review identified all of them, nor developed a conceptualisation of how they interrelate. The six analytic themes were conceptualised as having a unidirectional, hierarchical flow from (1) establishing an imperative for practice change, (2) building trust between implementation stakeholders, and (3) developing a shared vision, to (4) actioning change mechanisms. These were underpinned by the (5) employment of effective communication strategies and (6) provision of resources to support change.

Fig. 2 Conceptualisation of the inter-related analytic themes associated with effective strategies and the inter-relationships between these factors
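To make the structure of this conceptualisation explicit, the minimal sketch below lists the four hierarchical themes in order and the two underpinning themes. Representing the model as two Python lists is an illustrative assumption; the theme labels themselves are taken from the text.

```python
# Unidirectional, hierarchical flow of the four analytic themes, underpinned
# by two cross-cutting themes, as described in the conceptualisation above.

HIERARCHICAL_FLOW = [
    "establish an imperative for practice change",
    "build trust between implementation stakeholders",
    "develop a shared vision",
    "action change mechanisms",
]

UNDERPINNING_THEMES = [
    "employ effective communication strategies",
    "provide resources to support change",
]

if __name__ == "__main__":
    for step, theme in enumerate(HIERARCHICAL_FLOW, start=1):
        print(f"({step}) {theme}")
    print("underpinned by: " + "; ".join(UNDERPINNING_THEMES))
```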

Establish imperative

Organisations and individuals were driven to implement research into practice when there was an imperative for practice change. Decision-makers wanted to know why change was important to them, their organisation, and/or their community. Imperatives were seen as drivers of motivation for change to take place and were evident both internal to the decision-maker (personal gain) and external to the decision-maker (organisational and societal gain).

Personal gain

Individuals were motivated to participate in research implementation projects where they could derive personal gain [ 48 , 50 , 56 ]. Involvement in research was viewed as an opportunity rather than an obligation [ 56 ]. This was particularly evident in one study by Kitson et al. where all nursing leaders unanimously agreed the potential benefit of supported, experiential learning was substantial, with 13 of 14 committing to leading further interdisciplinary, cross-functional projects [ 50 ].

Organisational and societal gain

Decision-makers supported research implementation efforts when they aligned with an organisational agenda or an area where societal health needs were identified [ 48 , 50 , 53 , 55 , 59 , 64 ]. Practice change was supported if it was deemed important by decision-makers and aligned with organisational priorities, whereas knowledge exchange was impeded if changes had questionable relevance to the workplace [ 48 , 53 , 64 ]. Individuals reported motivation to commit to projects they felt would address community needs. For example, in one study, nursing leaders identified their passion for health topics as a reason to volunteer in a practice change process [ 50 ]. In another study, managers were supportive of practice change to improve care of people with dementia, as they thought this would benefit the population [ 55 ].

Build trust

Relationships, leadership authority, and governance constituted the development of trust between stakeholder groups.

Relationships

The importance of trusting relationships between managers, researchers, change agents, and staff was emphasised in a number of studies [ 48 , 50 , 54 , 59 , 64 ]. Developing new relationships through collaborative networking and constant contact reportedly addressed mutual mistrust between policy-makers and researchers and engaged others in changing practice [ 54 , 59 ]. Bullock et al. described how pre-existing personal and professional relationships might facilitate implementation strategy success by drawing on organisational knowledge and identifying workplace “gatekeepers” to engage with. In the same study, no real link between healthcare managers and academic resources was derived from fellows who were only weakly connected to healthcare organisations [ 48 ].

Leadership authority

The leadership authority of those involved in research implementation influenced the development of trust between key stakeholders [ 50 , 52 , 55 , 59 , 61 ]. Dagenais et al. found that recommendations and information were valued if they came from researchers and change agents whose input was trusted [ 52 ]. The perception that individuals with senior organisational roles reduce perceived risk and resistance to change was supported by Dobbins et al., who reported that seniority of individuals is a predictor of systematic review use in decision-making [ 50 , 59 , 61 ]. However, professional seniority should be relevant to the research implementation context, as a perceived lack of knowledge in the content area was a barrier to providing managerial support [ 55 ].

Governance

A number of studies expressed the importance of consistent and sustained executive support in order to maintain project momentum [ 48 , 50 , 52 , 53 , 59 , 64 ]. In the study by Kitson et al., individuals expressed concern and anxiety about reputational risk if consistent organisational support was not provided [ 50 ]. Organisational capacity was enhanced with strong management support and policies [ 57 ]. In their study, Uneke et al. identified good stewardship, in the form of governance, as providing accountability and protection for individuals and organisations. Participants in that study unanimously identified the need for performance measurement mechanisms for the health policy advisory committee to promote sustainability and independent evidence to policy advice [ 54 ]. Bullock et al. found that managers view knowledge exchange in a transactional manner and are keen to know and use project results as soon as possible. However, researchers and change agents may not wish to apply results, given the phase of the project [ 48 ]. This highlighted the importance of governance systems that support confidentiality and limit the release of project results until stakeholders are confident of the findings.

Develop shared vision

A shared vision for desired change and outcomes can be built around a common goal by improving understanding, influencing behaviour change, and working with the characteristics of organisations.

Stakeholder understanding

Improving the understanding of research implementation was considered a precursor to building a shared vision [ 50 , 52 , 55 , 56 ]. Policy-makers reported that a lack of time prevented them from performing an evidence review, and they desired experientially tailored information, education, and the avoidance of technical language to improve understanding [ 52 , 55 , 58 ]. In the study by Gagliardi et al., a perceived lack of clarity limited project outcomes, which emphasised the need for simple processes [ 56 ]. When challenges arose in Kitson et al., ensuring all participants understood their role from the outset of implementation was suggested as a process improvement [ 50 ].

Influence change

Knowledge brokers in Campbell et al. were able to elicit well-defined research questions if they were open, honest, and frank in their approach to policy-makers. Policy-makers felt that knowledge brokering was more useful for shaping the parameters, scope, budget, and format of projects, providing guidance for decision-making rather than being prescriptive [ 49 ]. However, conclusive recommendations that aim for consensus are viewed favourably by policy-makers, which means a balance must be struck between providing guidance and being too prescriptive [ 63 ]. Interactive strategies may allow change agents to gain a better understanding of evidence in organisational decisions and guide attitudes towards evidence-informed decision-making. Champagne et al. observed fellows participating in this interactive, social process, and Dagenais et al. reported that practical exercises and interactive discussions were appreciated by knowledge brokers in their own training [ 52 , 59 ]. Another study reported that challenging work practices risked being viewed as criticism; despite this, organisation staff valued leaders’ ability to inspire a shared vision and identified ‘challenging processes’ as the most important leadership practice [ 50 ].

Characteristics of organisation

Context-specific organisational characteristics such as team dynamics, change culture, and individual personalities can influence the effectiveness of research implementation strategies [ 50 , 53 , 56 , 59 ]. Important factors in Flanders et al. were clear lines of authority in collaborative and effective multidisciplinary teams. Organisational readiness for change was perceived as both a barrier and a facilitator to research implementation, but higher staff consensus was associated with higher engagement in organisational change [ 60 ]. Strategies in Dobbins et al. were thought to be more effective if they were implemented in organisations with a learning culture and practices, or if they themselves facilitated an organisational learning culture, while Flanders et al. reported that solutions to hospital safety problems often created more work or required change to long-standing practices, which proved a barrier to overcome [ 53 , 61 ]. Individual resistance to change in the form of process concerns led to higher levels of dissatisfaction [ 50 ].

Provide resources to support change

Individuals were conscious of the need for implementation strategies to be adequately resourced [ 48 , 49 , 50 , 55 , 56 , 58 , 59 , 61 ]. There was anxiety in the study by Döpp et al. around promoting research implementation programs, due to the fear of receiving more referrals than could be handled with current resourcing [ 55 ]. Managers mentioned service pressures as a major barrier to changing practice, with involvement in implementation research dependent on workload and other professional commitments [ 50 , 56 ]. A lack of time prevented evidence reviews from being performed, and varied access to human resources such as librarians was also identified as a barrier [ 58 , 59 ]. Policy-makers and managers appreciated links to expert researchers, especially those who previously had infrequent or irregular contact with the academic sector [ 49 ]. Managers typically viewed engagement with research implementation as transactional, wanting funding for time release (beyond salary costs), while researchers and others from the academic sector considered knowledge exchange inherently valuable [ 48 ]. Vulnerability around leadership skills and knowledge in the study by Kitson et al. exposed the importance of training, education, and professional development opportunities [ 50 ]. Ongoing training in critical appraisal of research literature was viewed as a predictor of whether systematic reviews influenced program planning [ 61 ].

Employ effective communication strategies

Studies and study participants expressed different preferences for the format and mode of contact for implementation strategies [ 48 , 51 , 52 , 55 , 56 , 59 , 64 ]. Face-to-face contact was preferred by the majority of participants in the study by Waqa et al. and was useful for acquiring and accessing relevant data or literature to inform the writing of policy briefs [ 51 ]. Telephone calls were perceived as successful in Döpp et al. because they increased involvement and the opportunity to ask questions [ 55 ]. Electronic communication formats in the study by Bullock et al. provided examples of evidence-based knowledge transfer from academic to clinical settings: fellows spent time reading literature at the university and then emailed that information to the clinical workplace, while managers stated that the availability of website information positively influenced its use [ 48 ]. Regular contact in the form of reminders encouraged action, with the study by Dagenais et al. finding that a lack of ongoing, regular contact with knowledge brokers in the field limited research implementation programs [ 52 ].

Action change mechanism

Reviewers interpreted the domains (analytical themes) as representing a model of implementation strategy success that leads to a change mechanism. Change mechanisms refer to the actions taken by study participants to implement research into practice. Studies did not explicitly measure the change mechanisms that lead to the implementation of research into practice; instead, implicit measurements of change mechanisms were reported, such as knowledge gain and intention-to-act measures.

This review found that there are numerous implementation strategies that can be utilised to promote evidence-informed policy and management decisions in healthcare. These range from the ‘authority effect’ of a simple, low-cost policy brief to the knowledge improvement produced by a complex, multifaceted workshop with ongoing technical assistance and distribution of instructional digital materials [ 46 , 47 ]. The resource intensity of these strategies was relatively low. It was evident that providing more resource-intensive strategies is not always better than providing less, as the addition of a knowledge broker to a tailored, targeted messaging strategy was less effective than the messages alone [ 9 ]. Given the paucity of studies evaluating the effectiveness of implementation strategies, understanding why some implementation strategies succeed where others fail in different contexts is important for future strategy design. The thematic synthesis of the wider non-effectiveness literature included in our review has led us to develop a model of implementation strategy design that may action a change mechanism for evidence-informed policy and management decisions in healthcare [ 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 63 , 64 ].

Our findings were consistent with change management theories. The conceptual model of how themes interrelated both within and across studies includes stages similar to Kotter’s 8-Step Change Model [ 65 ]. Leadership behaviours are commonly cited as organisational change drivers due to the formal power and authority that leaders hold within organisations [ 66 , 67 , 68 ]. This supports the ‘authority effect’ described in Beynon et al. and the value decision-makers placed on information credited to experts they trust [ 46 ]. Authoritative messages are considered a key component of an effective policy brief; therefore, organisations should consider partnering with authoritative institutions, research groups, or individuals to augment the legitimacy of their message when producing policy briefs [ 69 ]. Change management research proposes that change-related training improves understanding, knowledge, and skills to embed a change vision at a group level [ 70 , 71 , 72 ]. The results of our review support the view that providing adequate training resources to decision-makers can improve understanding, knowledge, and skills, leading to desired change. The results of our thematic synthesis appear to support knowledge broker strategies in theory. Multi-component research implementation strategies are thought to have greater effects than simple strategies [ 73 , 74 ]. However, the addition of knowledge brokers to a tailored, targeted messaging research implementation strategy in Dobbins et al. was less effective than the messages alone [ 9 ]. This may indicate that, in some cases, simple research implementation strategies are more effective than complex, multi-component ones. Further development of strategies is needed to ensure that a number of different implementation options are available that can be tailored to individual health contexts. A previous review by LaRocca et al. supports this finding, asserting that in some cases complex strategies may diminish key messages and reduce understanding of the information presented [ 10 ]. Further, the knowledge broker strategy in Dobbins et al. had little or no engagement from 30% of participants allocated to this group, emphasising the importance of tailoring strategy complexity and intensity to organisational need.

This systematic review was limited in both the quantity and quality of studies that met the inclusion criteria. Previous reviews have been similarly limited by the paucity of high-quality research evaluating the effectiveness of research implementation strategies in this context [ 10 , 29 , 32 , 75 ]. The limited number of retrieved experimental, quantitatively evaluated effectiveness studies means the results of this review were mostly based on non-experimental qualitative data without an evaluation of effectiveness. Non-blinding of participants could have biased qualitative responses. Participants may have felt pressured to respond positively if they did not wish to lose previously provided implementation resources, and responses could vary depending on the implementation context and the changes being made, for example, whether additional resources were being implemented to fill an existing evidence-to-practice gap or resources were being disinvested due to a lack of supportive evidence. Despite these limitations, we believe our comprehensive search strategy retrieved a relatively complete set of studies in this field of research. A previous Cochrane review in the same implementation context recently identified only one study (also captured in our review) using their search strategy and inclusion criteria [ 33 , 76 ]. A meta-analysis could not be performed due to the limited number of studies and the high levels of heterogeneity in study approaches; as such, the results of this synthesis should be interpreted with caution. However, synthesising data narratively and thematically allowed this review to examine not only the effectiveness of research implementation strategies in this context but also the mechanisms behind the inter-relating factors perceived to be associated with effective strategies. Since our original search, we have been unable to identify additional full texts for the 11 titles excluded due to no data reporting (e.g. protocol, abstract). However, the Developing and Evaluating Communication strategies to support Informed Decisions and practice based on Evidence (DECIDE) project has since developed a number of tools to improve the dissemination of evidence-based recommendations [ 77 ]. In addition, the relationship development, face-to-face interaction, and organisational climate themes in our conceptual model are supported by the full version [ 78 ] of an excluded summary article [ 79 ] identified after the original search.
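The decision not to pool results reflects standard meta-analytic practice: with few studies and divergent approaches, between-study heterogeneity overwhelms any pooled estimate. As a minimal, illustrative sketch only (the effect estimates below are hypothetical and are not drawn from this review), the following snippet shows how Cochran's Q and the I² statistic are commonly computed when judging whether pooling is sensible.

```python
# Illustrative sketch: quantifying between-study heterogeneity with
# Cochran's Q and I^2. The effect estimates and standard errors are
# hypothetical placeholders, not data from this review.

def heterogeneity(effects, std_errors):
    """Return Cochran's Q and I^2 (%) for study-level effect estimates."""
    weights = [1 / se ** 2 for se in std_errors]  # inverse-variance weights
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    df = len(effects) - 1
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i_squared

# Hypothetical example: three studies with divergent effects
effects = [0.10, 0.45, -0.20]     # e.g. standardised mean differences
std_errors = [0.08, 0.10, 0.09]

q, i2 = heterogeneity(effects, std_errors)
print(f"Cochran's Q = {q:.2f}, I^2 = {i2:.1f}%")
# A high I^2 across only a handful of studies is a common reason reviewers
# report results narratively rather than pooling them.
```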

Studies measured behaviour changes at the third level of the Kirkpatrick Hierarchy but did not measure whether those behaviour changes led to their intended improved societal outcomes (level 4 of the Kirkpatrick Hierarchy). Future research should also evaluate changes in health and organisational outcomes. The conceptualisation of factors perceived to be associated with effective strategies, and the inter-relationships between these factors, should be interpreted with caution as it was based on low levels of evidence according to the National Health and Medical Research Council (NHMRC) of Australia designations [ 80 ]. There is therefore a need for the association between these factors and effective strategies to be rigorously evaluated. Further conceptualisation of how to evaluate research implementation strategies should consider how to include health and organisational outcome measures to better understand how improved evidence-informed decision-making can lead to greater societal benefits. Future research should aim to increase the relatively low number of high-quality randomised controlled trials evaluating the effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare. This would allow formal meta-analysis to be performed, providing indications of which research implementation strategies are effective in which contexts.
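For readers less familiar with the Kirkpatrick Hierarchy mentioned above, the short sketch below encodes the commonly cited four-level model as a simple lookup. The level names follow the standard model; the example measures are illustrative assumptions, not outcomes extracted from the included studies.

```python
# Sketch of the four-level Kirkpatrick hierarchy referred to above.
# Level names follow the commonly cited model; the example measures are
# illustrative only, not drawn from the included studies.

KIRKPATRICK_LEVELS = {
    1: ("Reaction", "participant satisfaction with a knowledge-brokering workshop"),
    2: ("Learning", "change in knowledge or intention to act"),
    3: ("Behaviour", "observed use of research evidence in policy or management decisions"),
    4: ("Results", "downstream health or organisational outcomes"),
}

# The included studies mostly stopped at level 3, without measuring level 4.
for level, (name, example) in sorted(KIRKPATRICK_LEVELS.items()):
    print(f"Level {level} ({name}): e.g. {example}")
```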

Evidence is developing to support the use of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare. A number of inter-relating factors were thought to influence the effectiveness of strategies by establishing an imperative for change, building trust, developing a shared vision, and actioning change mechanisms. Employing effective communication strategies and providing resources to support change underpin these factors and should inform the design of future implementation strategies.

Abbreviations

CEO: Chief executive officer

NGO: Non-government organisation

NHMRC: National Health and Medical Research Council

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

RCT: Randomised controlled trial

UK: United Kingdom

USA: United States of America

References

Orton L, Lloyd-Williams F, Taylor-Robinson D, O’Flaherty M, Capewell S. The use of research evidence in public health decision making processes: systematic review. PLoS One. 2011;6(7):e21704.

Ciliska D, Dobbins M, Thomas H. Using systematic reviews in health services, Reviewing research evidence for nursing practice: systematic reviews; 2007. p. 243–53.

Mosadeghrad AM. Factors influencing healthcare service quality. Int J Health Policy Manag. 2014;3(2):77–89. 10.15171/ijhpm.2014.65 .

Brownson RC, Fielding JE, Maylahn CM. Evidence-based public health: a fundamental concept for public health practice. Annu Rev Public Health. 2009;30:175–201.

Jernberg T, Johanson P, Held C, Svennblad B, Lindback J, Wallentin L. Association between adoption of evidence-based treatment and survival for patients with ST-elevation myocardial infarction. JAMA. 2011;305 https://doi.org/10.1001/jama.2011.522 .

Davis D, Davis ME, Jadad A, Perrier L, Rath D, Ryan D, et al. The case for knowledge translation: shortening the journey from evidence to effect. BMJ. 2003;327(7405):33–5.

Madon T, Hofman K, Kupfer L, Glass R. Public health: implementation science. Science. 2007;318 https://doi.org/10.1126/science.1150009 .

Chalmers I. If evidence-informed policy works in practice, does it matter if it doesn’t work in theory? Evid Policy. 2005;1 https://doi.org/10.1332/1744264053730806 .

Dobbins M, Hanna SE, Ciliska D, Manske S, Cameron R, Mercer SL, et al. A randomized controlled trial evaluating the impact of knowledge translation and exchange strategies. Implement Sci. 2009;4(1):1–16.  https://doi.org/10.1186/1748-5908-4-61 .

LaRocca R, Yost J, Dobbins M, Ciliska D, Butt M. The effectiveness of knowledge translation strategies used in public health: a systematic review. BMC Public Health. 2012;12

Lavis JN, Ross SE, Hurley JE. Examining the role of health services research in public policymaking. Milbank Q. 2002;80(1):125–54.

Lavis JN. Research, public policymaking, and knowledge-translation processes: Canadian efforts to build bridges. J Contin Educ Health Prof. 2006;26(1):37–45.

Klein R. Evidence and policy: interpreting the Delphic oracle. J R Soc Med. 2003;96(9):429–31.

Walt G. How far does research influence policy? Eur J Public Health. 1994;4(4):233–5.

Bucknall T, Fossum M. It is not that simple nor compelling!: comment on “translating evidence into healthcare policy and practice: single versus multi-faceted implementation strategies—is there a simple answer to a complex question?”. Int J Health Policy Manag. 2015;4(11):787.

Bowen S, Erickson T, Martens PJ, Crockett S. More than “using research”: the real challenges in promoting evidence-informed decision-making. Healthc Policy. 2009;4(3):87.

Macintyre S, Petticrew M. Good intentions and received wisdom are not enough. J Epidemiol Community Health. 2000;54(11):802–3.

Chalmers I. Trying to do more good than harm in policy and practice: the role of rigorous, transparent, up-to-date evaluations. Ann Am Acad Pol Soc Sci. 2003;589(1):22–40. https://doi.org/10.1177/0002716203254762 .

Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implement Sci. 2012;7(1):50.

Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Implementation research: what it is and how to do it. Bmj. 2013;347:f6753. doi: 10.1136/bmj.f6753 .

Stone EG, Morton SC, Hulscher ME, Maglione MA, Roth EA, Grimshaw JM, et al. Interventions that increase use of adult immunization and cancer screening services: a meta-analysis. Ann Intern Med. 2002;136(9):641–51.

Paramonczyk A. Barriers to implementing research in clinical practice. Can Nurse. 2005;101(3):12–5.

Haynes B, Haines A. Barriers and bridges to evidence based clinical practice. BMJ. 1998;317(7153):273–6.

Lavis JN. How can we support the use of systematic reviews in policymaking? PLoS Med. 2009;6(11):e1000141.

Wilson PM, Watt IS, Hardman GF. Survey of medical directors’ views and use of the Cochrane Library. Br J Clin Gov. 2001;6(1):34–9.

Ram FS, Wellington SR. General practitioners use of the Cochrane Library in London. Prim Care Respir J. 2002;11(4):123–5.

Dobbins M, Cockerill R, Barnsley J. Factors affecting the utilization of systematic reviews: a study of public health decision makers. Int J Technol Assess Health Care. 2001;17(2):203–14.

Innvær S, Vist G, Trommald M, Oxman A. Health policy-makers’ perceptions of their use of evidence: a systematic review. J Health Serv Res Policy. 2002;7(4):239–44.

Murthy L, Shepperd S, Clarke MJ, Garner SE, Lavis JN, Perrier L, Roberts NW, Straus SE. Interventions to improve the use of systematic reviews in decision-making by health system managers, policy makers and clinicians. Cochrane Database of Systematic Reviews. 2012;(9):CD009401. doi: 10.1002/14651858.CD009401.pub2 .

Tetroe JM, Graham ID, Foy R, Robinson N, Eccles MP, Wensing M, et al. Health research funding agencies’ support and promotion of knowledge translation: an international study. Milbank Q. 2008;86(1):125–55. https://doi.org/10.1111/j.1468-0009.2007.00515.x .

Perrier L, Mrklas K, Lavis JN, Straus SE. Interventions encouraging the use of systematic reviews by health policymakers and managers: a systematic review. Implement Sci. 2011;6:43. https://doi.org/10.1186/1748-5908-6-43 .

Mitton C, Adair CE, McKenzie E, Patten SB, Waye Perry B. Knowledge transfer and exchange: review and synthesis of the literature. Milbank Q. 2007;85(4):729–68. https://doi.org/10.1111/j.1468-0009.2007.00506.x .

Armstrong R. Evidence-informed public health decision-making in local government [PhD thesis]. Melbourne: University of Melbourne; 2011.

McKibbon KA, Lokker C, Wilczynski NL, Ciliska D, Dobbins M, Davis DA, et al. A cross-sectional study of the number and frequency of terms used to refer to knowledge translation in a body of health literature in 2006: a tower of Babel? Implement Sci. 2010;5:16. https://doi.org/10.1186/1748-5908-5-16 .

McKibbon KA, Lokker C, Wilczynski NL, Haynes RB, Ciliska D, Dobbins M, et al. Search filters can find some but not all knowledge translation articles in MEDLINE: an analytic survey. J Clin Epidemiol. 2012;65(6):651–9. https://doi.org/10.1016/j.jclinepi.2011.10.014 .

Lokker C, McKibbon KA, Wilczynski NL, Haynes RB, Ciliska D, Dobbins M, et al. Finding knowledge translation articles in CINAHL. Stud Health Technol Inform. 2010;160(Pt 2):1179–83.

Implementation Science. About. 2016. http://implementationscience.biomedcentral.com/about. Accessed 19 Jun 2016.

Kirkpatrick DL. Evaluating human relations programs for industrial foremen and supervisors. Madison: University of Wisconsin; 1954.

Covidence. 2016. https://www.covidence.org/ . Accessed 18 Nov 2016.

Higgins JP, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, et al. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

NHLBI. Background: development and use of study quality assessment tools. US Department of Health and Human Services; 2014. http://www.nhlbi.nih.gov/health-pro/guidelines/in-develop/cardiovascular-risk-reduction/tools/background.

CASP. NHS Critical Appraisal Skills Programme (CASP): appraisal tools. NHS Public Health Resource Unit. 2017. http://www.casp-uk.net/casp-tools-checklists .

Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. 2008;8:45. https://doi.org/10.1186/1471-2288-8-45 .

Bradley EH, Curry LA, Devers KJ. Qualitative data analysis for health services research: developing taxonomy, themes, and theory. Health Serv Res. 2007;42(4):1758–72.

Janesick V. The choreography of qualitative research: minuets, improvisations, and crystallization. In: Denzin N, Lincoln Y, editors. Strategies of qualitative inquiry. 2nd ed. Ed. Thousand Oaks: Sage Publications; 2003. p. 46–79.

Beynon P, Chapoy C, Gaarder M, Masset E. What difference does a policy brief make? Full report of an IDS, 3ie, Norad study. Institute of Development Studies and the International Initiative for Impact Evaluation (3ie): New Delhi, India. 2012.  http://www.3ieimpact.org/media/filer_public/2012/08/22/fullreport_what_difference_does_a_policy_brief_make__2pdf_-_adobe_acrobat_pro.pdf .

Brownson RC, Ballew P, Brown KL, Elliott MB, Haire-Joshu D, Heath GW, et al. The effect of disseminating evidence-based interventions that promote physical activity to health departments. Am J Public Health. 2007;97(10):1900–7.

Bullock A, Morris ZS, Atwell C. Exchanging knowledge through healthcare manager placements in research teams. Serv Ind J. 2013;33(13–14):1363–80.

Campbell D, Donald B, Moore G, Frew D. Evidence check: knowledge brokering to commission research reviews for policy. Evid Policy. 2011;7 https://doi.org/10.1332/174426411x553034 .

Kitson A, Silverston H, Wiechula R, Zeitz K, Marcoionni D, Page T. Clinical nursing leaders’, team members’ and service managers’ experiences of implementing evidence at a local level. J Nurs Manag. 2011;19(4):542–55. https://doi.org/10.1111/j.1365-2834.2011.01258.x .

Waqa G, Mavoa H, Snowdon W, Moodie M, Schultz J, McCabe M. Knowledge brokering between researchers and policymakers in Fiji to develop policies to reduce obesity: a process evaluation. Implement Sci. 2013;8 https://doi.org/10.1186/1748-5908-8-74 .

Dagenais C, Some TD, Boileau-Falardeau M, McSween-Cadieux E, Ridde V. Collaborative development and implementation of a knowledge brokering program to promote research use in Burkina Faso, West Africa. Glob Health Action. 2015;8:26004. doi: 10.3402/gha.v8.26004 .

Flanders SA, Kaufman SR, Saint S, Parekh VI. Hospitalists as emerging leaders in patient safety: lessons learned and future directions. J Patient Saf. 2009;5(1):3–8.

Uneke CJ, Ndukwe CD, Ezeoha AA, Uro-Chukwu HC, Ezeonu CT. Implementation of a health policy advisory committee as a knowledge translation platform: the Nigeria experience. Int J Health Policy Manag. 2015;4(3):161–8. 10.15171/ijhpm.2015.21 .

Döpp CM, Graff MJ, Rikkert MGO, van der Sanden MWN, Vernooij-Dassen MJ. Determinants for the effectiveness of implementing an occupational therapy intervention in routine dementia care. Implement Sci. 2013;8(1):1.

Gagliardi AR, Fraser N, Wright FC, Lemieux-Charles L, Davis D. Fostering knowledge exchange between researchers and decision-makers: exploring the effectiveness of a mixed-methods approach. Health Policy. 2008;86(1):53–63.

Traynor R, DeCorby K, Dobbins M. Knowledge brokering in public health: a tale of two studies. Public Health. 2014;128(6):533–44.

Chambers D, Grant R, Warren E, Pearson S-A, Wilson P. Use of evidence from systematic reviews to inform commissioning decisions: a case study. Evid Policy. 2012;8(2):141–8.

Champagne F, Lemieux-Charles L, Duranceau M-F, MacKean G, Reay T. Organizational impact of evidence-informed decision making training initiatives: a case study comparison of two approaches. Implement Sci. 2014;9(1):53.

Courtney KO, Joe GW, Rowan-Szal GA, Simpson DD. Using organizational assessment as a tool for program change. J Subst Abus Treat. 2007;33(2):131–7.

Dobbins M, Cockerill R, Barnsley J, Ciliska D. Factors of the innovation, organization, environment, and individual that predict the influence five systematic reviews had on public health decisions. Int J Technol Assess Health Care. 2001;17(4):467–78.

Waqa G, Mavoa H, Snowdon W, Moodie M, Nadakuitavuki R, Mc Cabe M. Participants’ perceptions of a knowledge-brokering strategy to facilitate evidence-informed policy-making in Fiji. BMC Public Health. 2013;13 https://doi.org/10.1186/1471-2458-13-725 .

Moat KA, Lavis JN, Clancy SJ, El-Jardali F, Pantoja T. Evidence briefs and deliberative dialogues: perceptions and intentions to act on what was learnt. Bull World Health Organ. 2014;92(1):20–8.

Wilson MG, Grimshaw JM, Haynes RB, Hanna SE, Raina P, Gruen R, et al. A process evaluation accompanying an attempted randomized controlled trial of an evidence service for health system policymakers. Health Res Policy Syst. 2015;13(1):78.

Kotter JR. Leading change—why transformation efforts fail. Harv Bus Rev. 2007;85(1):96–103.

Whelan-Berry KS, Somerville KA. Linking change drivers and the organizational change process: a review and synthesis. J Chang Manag. 2010;10(2):175–93.

Trice HM, Beyer JM. Cultural leadership in organizations. Organ Sci. 1991;2(2):149–69.

Downs A, Besson D, Louart P, Durant R, Taylor-Bianco A, Schermerhorn Jr J. Self-regulation, strategic leadership and paradox in organizational change. J Organ Chang Manag. 2006;19(4):457–70.

Jones N, Walsh C. Policy briefs as a communication tool for development research: Overseas development institute (ODI); 2008.

Schneider B, Gunnarson SK, Niles-Jolly K. Creating the climate and culture of success. Organ Dyn. 1994;23(1):17–29.

Whelan-Berry K, Alexander P. Creating a culture of excellent service: a scholar and practitioner explore a case of successful change. Paper presented at the Academy of Management. Honolulu, August; 2005.

Bennett JB, Lehman WE, Forst JK. Change, transfer climate, and customer orientation a contextual model and analysis of change-driven training. Group Org Manag. 1999;24(2):188–216.

Mansouri M, Lockyer J. A meta-analysis of continuing medical education effectiveness. J Contin Educ Heal Prof. 2007;27(1):6–15. https://doi.org/10.1002/chp.88 .

Marinopoulos SS, Dorman T, Ratanawongsa N, Wilson LM, Ashar BH, Magaziner JL, et al. Effectiveness of continuing medical education. Evid Rep Technol Assess. 2007;149:1–69.

Chambers D, Wilson PM, Thompson CA, Hanbury A, Farley K, Light K. Maximizing the impact of systematic reviews in health care decision making: a systematic scoping review of knowledge-translation resources. Milbank Q. 2011;89(1):131–56.

Armstrong R, Waters E, Dobbins M, Lavis JN, Petticrew M, Christensen R. Knowledge translation strategies for facilitating evidence-informed public health decision making among managers and policy-makers. Cochrane Database of Systematic Reviews. 2011;(6):CD009181. doi: 10.1002/14651858.CD009181 .

Treweek S, Oxman AD, Alderson P, Bossuyt PM, Brandt L, Brożek J, et al. Developing and evaluating communication strategies to support informed decisions and practice based on evidence (DECIDE): protocol and preliminary results. Implement Sci. 2013;8(1):6.

Phillips SJ. Piloting knowledge brokers to promote integrated stroke care in Atlantic Canada, Evidence in action, acting on evidence; 2008. p. 57.

Lyons R, Warner G, Langille L, Phillips S. Evidence in action, acting on evidence: a casebook of health services and policy research knowledge translation stories: Canadian Institutes of Health Research; 2006.

Merlin T, Weston A, Tooher R. Extending an evidence hierarchy to include topics other than treatment: revising the Australian 'levels of evidence'. BMC Med Res Methodol. 2009;9:34. https://doi.org/10.1186/1471-2288-9-34.

Acknowledgements

The authors would like to acknowledge the expertise provided by Jenni White and the support provided by the Monash University library staff and the Monash University and Monash Health Allied Health Research Unit.

Funding

No funding.

Availability of data and materials

Data are available from the corresponding author on reasonable request.

Author information

Authors and Affiliations

Kingston Centre, Monash University and Monash Health Allied Health Research Unit, 400 Warrigal Road, Heatherton, VIC, 3202, Australia

Mitchell N. Sarkies, Elizabeth H. Skinner, Romi Haas, Haylee Lane & Terry P. Haines

Monash University Department of Community Emergency Health and Paramedic Practice, Building H McMahons Road, Frankston, VIC, 3199, Australia

Kelly-Ann Bowles

Contributions

MS was responsible for the conception, organisation and completion of this systematic review. MS developed the research question and search strategy, conducted the search, screened the retrieved studies, extracted the data, performed the analysis and quality appraisal, and prepared the manuscript. KAB was responsible for the oversight and management of the review. KAB contributed to the development of the inclusion and exclusion criteria; resolved screening, quality, and data extraction discrepancies between reviewers; and assisted with the manuscript preparation. ES also was responsible for the oversight and helped develop the final research question and inclusion criteria. ES assisted with selecting and using the quality appraisal tool, developing the data extraction tool, and preparing the manuscript. RH and HL were responsible for performing independent screening of identified studies and deciding upon inclusion or exclusion from the review. RH and HL also performed independent quality appraisal and data extraction for half of the included studies and contributed to the manuscript preparation. TH was responsible for the oversight and management of the review, assisted with data analysis and interpretation, and contributed to the manuscript preparation. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Mitchell N. Sarkies .

Ethics declarations

Authors’ information

Mitchell Sarkies is a Physiotherapist from Melbourne, Victoria, Australia, with an interest in translating research into practice. He is currently a Ph.D. candidate at Monash University.

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

PRISMA 2009 checklist. (DOCX 26 kb)

Additional file 2:

Search Strategy. (DOCX 171 kb)

Additional file 3:

Data extraction 1 and 2. (XLSX 884 kb)

Additional file 4:

Full list of 96 articles and reasons for full-text exclusion. (DOCX 125 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Sarkies, M.N., Bowles, KA., Skinner, E.H. et al. The effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare: a systematic review. Implementation Sci 12, 132 (2017). https://doi.org/10.1186/s13012-017-0662-0

Received: 20 February 2017

Accepted: 01 November 2017

Published: 14 November 2017

DOI: https://doi.org/10.1186/s13012-017-0662-0

Keywords

  • Implementation
  • Translation

