• Open access
  • Published: 12 February 2018

Engaging policy-makers, health system managers, and policy analysts in the knowledge synthesis process: a scoping review

  • Andrea C. Tricco   ORCID: orcid.org/0000-0002-4114-8971 1 , 2 ,
  • Wasifa Zarin 1 ,
  • Patricia Rios 1 ,
  • Vera Nincic 1 ,
  • Paul A. Khan 1 ,
  • Marco Ghassemi 1 ,
  • Sanober Diaz 1 ,
  • Ba’ Pham 1 ,
  • Sharon E. Straus 3 &
  • Etienne V. Langlois 4  

Implementation Science volume 13, Article number: 31 (2018)


A Correction to this article was published on 16 April 2018


Background

It is unclear how to engage a wide range of knowledge users in research. Through a scoping review, we aimed to map the evidence on engaging knowledge users, with an emphasis on policy-makers, health system managers, and policy analysts, in the knowledge synthesis process.

Methods

We used the Joanna Briggs Institute guidance for scoping reviews. Nine electronic databases (e.g., MEDLINE), two grey literature sources (e.g., OpenSIGLE), and reference lists of relevant systematic reviews were searched from 1996 to August 2016. We included any type of study describing strategies, barriers and facilitators, or assessing the impact of engaging policy-makers, health system managers, and policy analysts in the knowledge synthesis process. Screening and data abstraction were conducted by two reviewers independently, with a third reviewer resolving discrepancies. Frequency and thematic analyses were conducted.

Results

After screening 8395 titles and abstracts followed by 394 full-texts, 84 unique documents and 7 companion reports fulfilled our eligibility criteria. All 84 documents were published in the last 10 years, and half were prepared in North America. The most common type of knowledge synthesis with knowledge user engagement was a systematic review (36%). The knowledge synthesis most commonly addressed an issue at the level of a national healthcare system (48%) and focused on health services delivery (17%) in high-income countries (86%).

Policy-makers were the most common (64%) knowledge users, followed by healthcare professionals (49%) and government agencies as well as patients and caregivers (34%). Knowledge users were engaged in conceptualization and design (49%), literature search and data collection (52%), data synthesis and interpretation (71%), and knowledge dissemination and application (44%). Knowledge users were most commonly engaged as key informants through meetings and workshops, as well as surveys, focus groups, and interviews, conducted either in person or by telephone and email. Knowledge user content expertise/awareness was a common facilitator (18%), while lack of time or opportunity to participate was a common barrier (12%).

Conclusions

Knowledge users were most commonly engaged during the data synthesis and interpretation phase of the knowledge synthesis process. Researchers should document and evaluate knowledge user engagement in knowledge synthesis.

Registration details

Open Science Framework (https://osf.io/4dy53/).


Background

An estimated 85% of investment in health and biomedical research is wasted every year due to redundancies, failure to establish priorities based on the needs of stakeholders (particularly end-users of knowledge), poorly designed research methods, and incomplete reporting of study results, leading to billions of dollars lost globally [1, 2, 3]. Stakeholders include those who are affected by, or have an interest or stake in, research [4], while knowledge users are a subgroup of stakeholders who are likely to use research findings to make informed decisions about health systems and practices [5]. Knowledge users include, but are not limited to, patients and their informal caregivers or surrogate decision-makers (e.g., family, friends), healthcare providers (e.g., physicians, occupational therapists), policy-makers (e.g., Minister of Health, health officer), health system managers (e.g., hospital administrators, health unit managers), and policy analysts.

The overarching goal of knowledge user engagement in health research is to co-produce knowledge that is relevant and useful to those making real-world health decisions [ 6 ]. Early engagement of knowledge users in the research process may help establish research priorities and increase relevance of findings [ 7 , 8 ]. To facilitate the use of research in decision-making, health systems and research funders are encouraging the engagement of knowledge users and other stakeholders in research [ 9 ].

Knowledge synthesis, such as a systematic review or a scoping review, is particularly useful for decision-makers because these research products provide a summary of the expansive evidence on a particular topic to inform decisions based on the totality of evidence [ 10 , 11 ]. Co-production of evidence whereby researchers and knowledge users work together to conduct research increases the policy-relevance of research questions and fosters integration of findings into policy and practice [ 12 , 13 , 14 ]. However, the opportunities and approaches to engaging a wide range of knowledge users remain largely unexplored. Evidence is required to guide the process of engaging knowledge users in knowledge synthesis to identify engagement approaches that are effective, efficient, and meaningful. In addition, co-production of research by researchers and knowledge users requires additional time and funding, and it is imperative that the limited resources available for health research are used appropriately.

We undertook a scoping review to map the literature on engaging knowledge users in the knowledge synthesis process. Engagement of policy-makers, policy analysts, and health system managers were of particular interest, as these knowledge users are increasingly commissioning knowledge synthesis research products to meet their decision-making needs. The research questions (RQs) for our scoping review are provided below and outlined in our published protocol [ 15 ]:

(RQ1) In what context were policy-makers, health system managers, and policy analysts engaged (e.g., health system setting, high-income countries (HICs) versus low- and middle-income countries (LMICs))?

(RQ2) What strategies exist to engage policy-makers, health system managers, and policy analysts in the knowledge synthesis process?

(RQ3) In studies describing strategies for engaging policy-makers, health system managers, and policy analysts, what outcomes do they measure to evaluate engagement mechanisms (e.g., attitudes, beliefs, knowledge), and what are the results (e.g., benefits, unintended consequences)?

(RQ4) What are the barriers and facilitators in engaging policy-makers, health system managers, and policy analysts in the knowledge synthesis process?

Methods

Commissioning agency

As part of a project to strengthen research capacity in LMICs, we were commissioned to conduct this scoping review by the Alliance for Health Policy and Systems Research (hereafter the Alliance), an international partnership hosted by the World Health Organization (WHO). We engaged with members of the Alliance throughout the review conduct.

Study design

We selected the scoping review method [ 16 ] because we were interested in mapping the concepts relevant to engaging knowledge users in knowledge synthesis [ 16 , 17 ]. The scoping review methodology is particularly useful when exploring an emerging and diverse knowledge-base, which makes the method well-matched to our RQs.

We drafted a scoping review protocol following the methods outlined by the Joanna Briggs Institute Methods Manual for scoping reviews [18] and reported it using the elements provided in the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) [19]. Our protocol was revised by the research team, registered with the Open Science Framework [20], and published in BMJ Open [15]. Since our full methods are available in our protocol, they are outlined only briefly below.

Eligibility criteria

Our eligibility criteria were conceptualized using the Population, Intervention, Comparator, Outcome, and Study design components [ 21 ], as follows:

Population

At minimum, the paper must mention at least one of the three knowledge user types specified in our RQs: policy-makers, policy analysts, and health system managers. Policy-makers are individuals at some level of government or decision-making institution, including but not limited to international organizations, non-governmental agencies, or professional associations, who have responsibility for making recommendations to others [22]. Policy analysts are individuals at some level of government or decision-making institution, including but not limited to international organizations, non-governmental agencies, or professional associations, who are responsible for analyzing data and informing decisions and recommendations [22]. Health system managers are individuals in a health system with management or supervisory mandates, including implementers and public health officials [22].

Intervention

Papers that described any engagement strategy for policy-makers, health system managers, and policy analysts in the knowledge synthesis process were included. Engagement can be defined as “an iterative process of actively soliciting the knowledge, experience, judgment and values of individuals selected to represent a broad range of direct interests in a particular issue, for the dual purposes of: creating a shared understanding [and] making relevant, transparent and effective decisions” [ 8 ]. This scoping review limits knowledge user engagement to those opportunities that allow a meaningful interaction of the knowledge users in the research process from conception to design and completion and/or interpretation and uptake of results.

Comparators

Papers with or without a comparator group were eligible for inclusion.

Outcomes

Outcomes of interest were strategies, barriers, facilitators, and contextual factors for engaging health policy-makers, health system managers, or policy analysts in the conduct and use of knowledge synthesis. We also explored whether engagement strategies were evaluated with respect to researcher and knowledge user attitudes, beliefs, and knowledge of engagement, as well as the impact and effectiveness of engagement.

Study designs

We included any type of study design (e.g., qualitative or quantitative methods).

Time periods

To increase feasibility and timeliness of review completion, we restricted inclusion of the literature to the past 20 years.

Settings

All settings were eligible for inclusion.

Language

To increase feasibility and timeliness of review completion, only papers written in English were included.

Our full list of eligibility criteria can be found in Additional file 1: Appendix 1.

Information sources and search strategy

The following electronic databases were searched by an experienced librarian (Dr. Jessie McGowan) from 1996 to August 15, 2016: MEDLINE, Embase, ERIC, PsycINFO, Joanna Briggs, The Cochrane Library, EBM Reviews, The Campbell Library, and Social Work Abstracts. The MEDLINE search strategy was peer-reviewed by a second librarian (Dr. Elise Cogo) using the PRESS statement [23] and has been published in our protocol [15]. The main literature search was supplemented by searching GreyNet International [24] and OpenSIGLE [25] to locate unpublished (or grey) literature, such as conference abstracts and dissertations. All literature searches and full-text retrievals were executed by an experienced library technician (Ms. Alissa Epworth) and managed using EndNote [26]. Additionally, references from relevant review articles were scanned, and experts in the field were identified and contacted via email by the Alliance to identify additional sources of evidence.

Study selection process

Literature search results were screened using our online Synthesi.SR software [27]. For level 1 screening of titles and abstracts, 3 pilot-tests were conducted on a total of 125 records. Once 80% agreement was achieved, pairs of reviewers (BP, MG, PR, PK, SD, VN, and WZ) independently screened the remaining titles and abstracts. There were 403 (5%) discrepancies at level 1 screening, which were resolved by a third reviewer (WZ). For level 2 screening of potentially relevant full-text articles, 2 pilot-tests were conducted. When 70% agreement was achieved, pairs of reviewers (MG, PR, PK, SD, VN, and WZ) independently screened the full-text articles. There were 62 (18%) discrepancies at level 2 screening, which were resolved by a third reviewer (WZ).
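For illustration only, the pilot-test agreement thresholds can be expressed as simple proportion agreement between paired reviewers. The sketch below is a hypothetical Python example (not part of the review's actual tooling, which was the Synthesi.SR platform) showing how the 80% level 1 or 70% level 2 threshold might be checked before independent screening proceeds.

```python
# Hypothetical sketch: proportion agreement between two reviewers on a pilot screening set.
# This is not the review's tooling; screening in the study was managed in Synthesi.SR.

def proportion_agreement(reviewer_a, reviewer_b):
    """Fraction of records on which two reviewers made the same include/exclude decision."""
    if len(reviewer_a) != len(reviewer_b):
        raise ValueError("Both reviewers must screen the same set of records")
    matches = sum(a == b for a, b in zip(reviewer_a, reviewer_b))
    return matches / len(reviewer_a)

# Example pilot test on 10 records (True = include, False = exclude).
reviewer_a = [True, False, False, True, True, False, False, True, False, False]
reviewer_b = [True, False, True, True, True, False, False, True, False, False]

agreement = proportion_agreement(reviewer_a, reviewer_b)
print(f"Agreement: {agreement:.0%}")  # 90%

# Proceed to independent screening only once the pre-specified threshold is met
# (80% for level 1 title/abstract screening, 70% for level 2 full-text screening).
THRESHOLD = 0.80
print("Proceed to independent screening" if agreement >= THRESHOLD else "Repeat the pilot test")
```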

Data items and data abstraction process

We abstracted data on article characteristics (e.g., country of origin, funder), engagement characteristics and contextual factors (e.g., type of knowledge user, country income level [ 28 ], type of engagement activity, frequency and intensity of engagement, use of a framework [ 29 ] to inform the intervention), barriers and facilitators to engagement, and results of any formal assessment of engagement (e.g., attitudes, beliefs, knowledge, benefits, unintended consequences).

Data abstraction was conducted using a standardized Excel form that was developed a priori and pilot-tested on a sample of 5 included papers. After the team conducted 2 pilot-tests, data were abstracted by one reviewer and verified by another (MG, PK, SD, VN). One of two experienced reviewers (WZ or PR) then quality checked the data for consistency and accuracy.

Risk of bias assessment

We did not conduct a risk of bias assessment, which is consistent with the Joanna Briggs Institute Scoping Review Methods Manual [18] and with scoping reviews on health-related topics [17].

Synthesis of results

Results were synthesized using frequencies and thematic analysis [ 30 ]. Thematic analysis of open-text data was performed by one reviewer and verified by a second reviewer (PR, WZ). We used previously established nomenclature in our thematic analysis of barriers and facilitators to engaging knowledge users in health research [ 31 , 32 ]. Engagement was coded and defined based on the framework established by Keown et al. [ 33 ]. Meta-analysis was not performed.

Results

Literature search

After screening 8395 titles and abstracts and 394 full-text documents, 84 unique documents and 7 companion reports [34, 35, 36, 37, 38, 39, 40] (i.e., follow-up reports to the main documents included in our review) fulfilled our eligibility criteria (Fig. 1). The full citations can be found in Additional file 1: Appendix 2. Four documents were excluded because they were written in languages other than English, and 79 were excluded because they were conference abstracts, commentaries, or protocols that did not include relevant data. Six documents were identified through contacting experts in the field [11, 18, 24, 25, 26, 27]. Six were unpublished reports, which were identified through our literature searches as well as through expert contact [41, 42, 43, 44, 45, 46]. No relevant documents were identified through scanning reference lists.

Fig. 1 PRISMA flow diagram

Characteristics of included documents (n = 84)

All documents were published in the last 10 years (Table 1) and originated predominantly in Canada (37%), the USA (17%), the UK (18%), and Australia (13%) (Fig. 2). The funding source was mainly public (79%), and Health Care Sciences & Services was the most common publishing journal discipline (31%). The documents were classified as application papers (87%), which are knowledge synthesis papers that described knowledge user engagement in the conduct of their research; descriptive papers (10%), which provided details of knowledge user engagement strategies developed by a research center or program; and methodology papers (4%), which studied knowledge user engagement in the knowledge synthesis process. The most common types of knowledge synthesis with knowledge user engagement were systematic reviews (36%), literature reviews with a systematic literature search (19%), scoping reviews (14%), rapid reviews (12%), and realist reviews (6%) (Table 1).

Fig. 2 Choropleth of document distribution by geographic region

RQ1: Contextual factors of included documents (n = 84)

The research most commonly addressed an issue at the level of a national healthcare system (48%), followed by applied research settings (19%) and local healthcare systems (6%; Table 2). The knowledge synthesis product most commonly focused on health services delivery (17%), followed by knowledge translation (16%) and public health (10%). Most of the documents were produced in the context of high-income countries (86%), while 12% were produced in the context of low- and middle-income countries (see Table 3 for more details on LMICs).

RQ2: Methodology documents (n = 3)

One methodology document reported 18 key informant interviews with policy-makers and systematic review producers to identify institutional mechanisms to increase demand for, and facilitate the conduct of, policy-relevant systematic reviews [47]. The authors proposed four models for achieving policy-relevant systematic reviews with an emphasis on policy-maker engagement, based on knowledge user needs and timelines as well as the complexity of the research question. The authors concluded that early engagement with managers and policy-makers can improve clarity and consensus of definitions and maximize the relevance of systematic reviews.

Another methodology document reported a systematic literature search on stakeholder engagement and 13 key informant interviews on prioritizing research [48]. The authors included 56 papers that used mixed qualitative/quantitative approaches to engaging stakeholders through in-person, online, or teleconference modalities. Prioritization of research was often achieved using structured ranking or Delphi methods. Ten factors for successful engagement were recommended and are outlined in Table 4.

A third methodology document examined the benefits of engaging a range of stakeholders in systematic reviews through a review of 24 papers and 34 key informant interviews [ 46 ]. The authors noted that although a number of benefits and challenges to engaging stakeholders were identified, none of the studies formally evaluated engagement.

RQ2: Descriptive documents (n = 8)

The eight descriptive documents [33, 49, 50, 51, 52, 53, 54, 55] described engagement approaches used by the following organizations: Healthcare Improvement Scotland [53], the Greater London Authority (UK) [55], the Agency for Healthcare Research and Quality (US) [49], the Samueli Institute (US) [51], the Institute for Work and Health (Canada), the Ottawa Hospital Research Institute (Canada) [52], and various funding and governmental agencies within Canada [33, 50, 54]. Most of the approaches involved consultations. One paper [33] described the engagement of knowledge users as part of the review team. Further results can be found in Table 5.

RQ2: Application papers (n = 73)

Seventy-three knowledge synthesis documents reported knowledge user engagement in the research process. Policy-makers were the most common (64%) type of knowledge user engaged in the knowledge synthesis process, followed by healthcare professionals and organizations (49%) and government agencies as well as patient organizations and caregivers (34%; Fig. 3).

Fig. 3 Types of knowledge users

Engagement occurred at the onset of the review, during conceptualization and planning of the research (49%), where knowledge users helped select the research topic or refine the research questions (40%), develop the study proposal or protocol (29%), or define the study selection criteria (27%) (Fig. 4). Knowledge users were also involved at the literature search or data collection phase (52%), where they assisted with the literature search (26%), helped with study selection (8%), provided input on the data collection form (18%), helped with data collection (5%), or provided experiential data to supplement the data obtained from the literature searches (32%). At the data synthesis and interpretation stage (71%), knowledge users informed the data analysis (32%) or helped interpret the results (66%). During the knowledge dissemination and application phase (44%), knowledge users assisted with report writing (10%), reviewed and provided feedback on the draft report (18%), helped develop key messages (4%), developed practice or policy recommendations (15%), or established the future research agenda (4%).

Fig. 4 Distribution of knowledge user engagement by steps in the knowledge synthesis process

Knowledge users were most commonly engaged as key informants across the four stages of the knowledge synthesis process (Fig. 5). Other roles included advisors (i.e., knowledge users who provide high-level recommendations and advice on the design and methods and who are typically engaged at various stages of a review), expert panel members (i.e., knowledge users who provide specialized input or opinion on the topic and who are typically engaged at a specific stage of a review), steering group members (i.e., knowledge users who make strategic decisions on the direction of the research project and who are consulted at various stages of a review), and team members (i.e., knowledge users who are included as part of the review team). Full definitions for all terms can be found in Additional file 1: Appendix 3.

Fig. 5 Engagement strategy framework

Frequently used methods of engagement across all four stages were structured meetings or workshops and information gathering by means of surveys, focus groups, or interviews. Other methods included nominal group techniques or Delphi approaches, used to problem-solve and reach decisions in a group setting, as well as circulating documents for feedback and sending regular updates to relevant knowledge users. Knowledge users were most commonly engaged in person or by telephone across all four stages of a knowledge synthesis. Other forums for engagement included online platforms and email discussions.

The frequency of knowledge user engagement varied across the 73 application documents (Fig. 6). Knowledge users were engaged only once during the knowledge synthesis process in two-fifths of the documents, twice in one-quarter, three times in one-tenth, and in all four stages in nearly one-quarter of the documents.

Fig. 6 Frequency of engagement

RQ2: Frameworks used to inform engagement strategy

One document reported the use of a framework for engagement in research [56] called the 7Ps of Stakeholder Engagement and Six Stages of Research [57]. The 7Ps are (1) patients and the public, (2) providers, (3) purchasers, (4) payers, (5) policy-makers, (6) product makers, and (7) principal investigators. The six stages of research are (1) evidence prioritization, (2) evidence generation, (3) evidence synthesis, (4) evidence integration, (5) dissemination and application, and (6) feedback and assessment [57]. The authors of this framework recommended the following: prioritizing engagement through funding opportunities and other initiatives; adopting a common taxonomy when working with knowledge users; experimenting with different engagement strategies and evaluating them on an ongoing basis; and reporting outcomes and pursuing continuous quality improvement to implement changes when required.

Another document provided a conceptual framework on the models and mechanisms for engaging policy-makers in systematic reviews that focus on health policy and systems research [47]. Mechanisms that can be used to bolster engagement with policy-makers included the following: securing ongoing funding so that researchers can answer questions posed by policy-makers, building the capacity of researchers and policy-makers to support engagement, and having team members with experience working closely with policy-makers.

RQ3: Outcomes of engagement (n = 84)

None of the included documents conducted a formal evaluation of engagement, and measurement tools specific to engagement were not identified. The authors of one paper asked participating knowledge users to complete an anonymous survey, and 100% reported that the information provided in the review was "very" or "somewhat" useful in their decision-making [58]. One study [46] suggested ways to measure engagement in future research, including tracking how the research question, eligibility criteria, or other aspects of the review were modified after engagement; comparing reviews on the same topic conducted with and without engagement; retrospectively evaluating reviews that were conducted without engagement to determine their impact; or deliberately phasing in engagement at different parts of the process to measure how the engagement affected the review.

RQ4: Barriers and facilitators to engagement (n = 31)

Thirty-one documents reported on 16 factors that were considered barriers or facilitators to engagement (Table 6). The most common facilitators were content expertise/awareness of the knowledge user (17%), establishing partnerships with knowledge users early in the research process (8%), and having forums for ongoing interaction (7%). The most commonly reported barriers were lack of time or opportunity for engagement (11%) and knowledge users lacking expertise in or awareness of the topic (6%).

Discussion

The included documents were predominantly conducted at the level of a national healthcare system and focused on health services delivery in the context of high-income countries. We did not identify any distinguishing trends in engagement when we compared knowledge user engagement across country income groups and other contextual factors. We also did not identify differences in results over time, across settings, across phases of engagement, or by how the engagement was conducted. This might be because the practice of engaging knowledge users in knowledge synthesis is still relatively new.

Knowledge users were most commonly engaged as key informants through structured meetings or workshops and through surveys, focus groups, or interviews. Knowledge users were engaged only once during the knowledge synthesis process in two-fifths of the documents, twice in one-quarter, three times in one-tenth, and across all four stages in nearly one-quarter of the documents. None of the documents conducted a formal evaluation of engagement, and measurement tools specific to engagement were not identified. Sixteen barriers and facilitators were identified. The most common facilitator was content expertise/awareness of the knowledge user, whereas the most common barrier was lack of time or opportunity for engagement.

There are numerous perceived benefits to engaging policy-makers, policy analysts, and health system managers in knowledge synthesis. Examples include more comprehensive literature searches, improved rigor of knowledge synthesis findings, and greater clarity of results [59], as well as greater relevance, uptake, and usefulness of results. However, the results of our scoping review suggest that very little research has been conducted in this area. The research that has been conducted is purely descriptive in nature, and no formal evaluation of engagement approaches and outcomes was identified. A future study could evaluate engagement using a variety of methods, such as documenting how the knowledge synthesis process and results were modified after engagement or testing engagement at different points of the knowledge synthesis process to see how engagement influences research impact.

We identified several factors within the researcher's control that may enhance engagement of knowledge users in the knowledge synthesis process, for example: engaging knowledge users before the synthesis begins; clearly outlining expectations regarding stakeholders' roles and time commitments; identifying funding opportunities to work closely with policy-makers; providing time for question and answer opportunities; conducting ice-breaker activities; providing materials in advance of meetings; considering knowledge user comments as equal to those received from researchers; being sensitive to knowledge users' time; presenting results to knowledge users; and using a neutral facilitator. As none of these have been formally evaluated, we cannot comment on the effectiveness of any of these approaches. As such, the type and intensity of engagement should be meaningful and tailored to available resources, including time and funding. To better define knowledge user engagement in knowledge synthesis, researchers should clearly identify the desired benefits, impacts, and effectiveness of engagement and develop systematic and reproducible methods and indicators for formal evaluation.

Engagement took place in four main phases: conception and design of the research, literature search and data collection, data synthesis and interpretation, and knowledge dissemination and application.

Knowledge users were most often engaged as key informants across the four stages of the knowledge synthesis process to obtain advice, feedback, and opinions. However, there is increasing interest globally in co-design and co-development of research with knowledge users and using research to inform public policy [ 60 , 61 ]. Co-creation of science is gaining momentum to integrate research and decision-making cycles and incorporate knowledge generation in complex policy planning and implementation, ultimately enhancing the usability and impact of research [ 12 , 62 , 63 ]. It will be important to test the utility of the co-design and co-creation of knowledge synthesis in the future.

Two conceptual frameworks were identified that provide a structure and mechanisms to facilitate knowledge user engagement in knowledge synthesis: the 7Ps of Stakeholder Engagement and Six Stages of Research framework [58] and a conceptual framework on the models and mechanisms for engaging policy-makers in systematic reviews that focus on health policy and systems research [47]. An additional tool can also be used: the patient and public engagement questionnaire (PPEQ), an online survey [64]. Members of our research team are currently conducting a study to test the level of engagement of knowledge users in a systematic review using the PPEQ [65], which will provide clarity to the field.

Two additional frameworks [ 66 , 67 ] to engage stakeholders in knowledge synthesis were published after the literature search date and completion of our scoping review. Haddaway and colleagues [ 66 ] discussed a framework including approaches for engaging stakeholders during systematic reviews in the field of environmental management. Land and colleagues [ 67 ] described an empirically tested five-step approach for stakeholder engagement in prioritization and planning of environmental evidence syntheses that the Mistra Council for Evidence-based Environmental Management has been using. These frameworks may also be of relevance to knowledge synthesis within health and should be examined more closely in the future.

The strengths of our scoping review include a comprehensive literature search of multiple electronic databases as well as unpublished sources. We also followed the rigorous scoping review methods suggested by the Joanna Briggs Institute. We engaged the principal knowledge user (EVL) throughout the review process, who provided input on our research questions, review protocol, and eligibility criteria; reviewed this manuscript; and helped interpret our findings. In terms of dissemination, in addition to publication of this manuscript, we will prepare a 1-page policy brief, which will be made available on our website (https://knowledgetranslation.net/), and present the results at international conferences. Team members will also use their networks to encourage broad dissemination of the results.

There are limitations to our scoping review process. To increase feasibility, we limited inclusion to documents made available in the past 20 years. However, this is likely not a substantial limitation, as all of the included documents were made available in the past 10 years. We also limited inclusion to studies written in English, which may have resulted in the exclusion of eligible studies from LMIC settings for RQ1. Given the large number of documents included, the data were abstracted by one reviewer and verified by a second reviewer. However, the data are likely valid, as a pilot test was conducted with the entire team prior to data abstraction, and a second reviewer, an experienced research coordinator on the team, verified all data. Often the included documents did not distinguish between stakeholders (i.e., those who are affected by or have an interest or stake in research [4]) and knowledge users (i.e., a subgroup of stakeholders who are likely to use research findings to make informed decisions about health systems and practices [5]), which is likely due to inconsistent use of these terms in the literature. As such, our results are likely applicable to both stakeholder and knowledge user participants. Furthermore, the reporting of knowledge user engagement methods varied considerably in completeness across the literature, and as such, our data are limited by the details described in the included documents. For example, most papers described the steps taken to engage knowledge users but did not provide details on non-response or unsuccessful engagement.

Conclusions

Engaging policy-makers, policy analysts, and health system managers in knowledge synthesis usually occurs at the beginning or end of the knowledge synthesis process. However, ongoing engagement throughout the review process may lead to more relevant and user-friendly results. The type and intensity of engagement should be meaningful and tailored to available resources, including time and funding. Researchers should document and evaluate engagement activities in knowledge synthesis on an ongoing basis; doing so will be important to advance the field.

Change history

16 April 2018

Following the publication of the original article [1], it was brought to our attention that the letter ‘l’ was unfortunately omitted from the word ‘health’ in the article’s title.

Abbreviations

HICs: High-income countries

LMICs: Low- and middle-income countries

PPEQ: Patient and public engagement questionnaire

PRISMA-P: Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols

RQs: Research questions

WHO: World Health Organization

Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Grant J, Gulmezoglu AM, et al. How to increase value and reduce waste when research priorities are set. Lancet. 2014;383(9912):156–65.


Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.

Macleod MR, Michie S, Roberts I, Dirnagl U, Chalmers I, Ioannidis JP, et al. Biomedical research: increasing value, reducing waste. Lancet. 2014;383(9912):101–4.

CIHR. Glossary of funding-related terms Canada: Canadian Institutes for Health Research; 2017. http://www.cihr-irsc.gc.ca/e/34190.html .


CIHR. Knowledge user engagement: Canadian Institutes of Health Research; 2016. http://www.cihr-irsc.gc.ca/e/49505.html

CIHR. Integrated knowledge translation (iKT): Canadian Institutes of Health Research; 2015. http://www.cihr-irsc.gc.ca/e/45321.html#a3 .

Bragge P, Clavisi O, Turner T, Tavender E, Collie A, Gruen RL. The global evidence mapping initiative: scoping research in broad topic areas. BMC Med Res Methodol. 2011;11:92.


Deverka PA, Lavallee DC, Desai PJ, Esmail LC, Ramsey SD, Veenstra DL, et al. Stakeholder participation in comparative effectiveness research: defining a framework for effective engagement. J Comp Eff Res. 2012;1(2):181–94.

WHO. WHO strategy on health policy and systems research: changing mindsets. 2012. http://www.who.int/alliance-hpsr/alliancehpsr_changingmindsets_strategyhpsr.pdf .

Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med. 2010;7(9):e1000326.

Murad MH, Montori VM. Synthesizing evidence: shifting the focus from individual studies to the body of evidence. JAMA. 2013;309(21):2217–8.

Langlois EV, Becerril Montekio V, Young T, Song K, Alcalde-Rabanal J, Tran N. Enhancing evidence informed policymaking in complex health systems: lessons from multi-site collaborative approaches. Health Res Policy Syst. 2016;14:20.

Vindrola-Padros C, Pape T, Utley M, Fulop NJ. The role of embedded research in quality improvement: a narrative review. BMJ Qual Saf. 2017;26(1):70–80.

Ghaffar A, Langlois EV, Rasanathan K, Peterson S, Adedokun L, Tran NT. Strengthening health systems through embedded research. Bull World Health Organ. 2017;95(2):87.

Tricco AC, Zarin W, Rios P, Pham B, Straus SE, Langlois EV. Barriers, facilitators, strategies and outcomes to engaging policymakers, healthcare managers and policy analysts in knowledge synthesis: a scoping review protocol. BMJ Open. 2016;6(12):e013929.

Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.


Tricco AC, Lillie E, Zarin W, O'Brien K, Colquhoun H, Kastner M, et al. A scoping review on the conduct and reporting of scoping reviews. BMC Med Res Methodol. 2016;16:15.

Peters M, Godfrey C, McInerney P, Soares C, Hanan K, Parker D. The Joanna Briggs Institute Reviewers’ Manual 2015: methodology for JBI scoping reviews. 2015.

Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2015;349:g7647.

Tricco AC, Zarin W, Rios P, Pham B, Straus SE, Langlois E. Barriers, facilitators, strategies and outcomes to engaging policy-makers, healthcare managers, and policy analysts in knowledge synthesis: a scoping review protocol. Open Science Framework; 2016. https://osf.io/4dy53/

Stone PW. Popping the (PICO) question in research and evidence-based practice. Appl Nurs Res. 2002;15(3):197–8.

Langlois EV, Straus SE, Mijumbi-Deve R, Lewin S, Tricco AC. Chapter 1: the need for rapid reviews to inform health policy and systems. In: Tricco AC, Langlois EV, Straus SE, editors. Rapid reviews to strengthen health policy and systems: a practical guide. Geneva: World Health Organization; 2017.

McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS peer review of electronic search strategies: 2015 guideline statement. J Clin Epidemiol. 2016;75:40–6.

Grey Literature Report [Internet]. 2016. http://www.greylit.org/ .

Open Grey [Internet]. 2017. http://www.opengrey.eu/ .

EndNote. X7.7 ed: Thomson Reuters; 2016.

Synthesi.SR [Internet]. Knowledge Translation Program, St. Michael's Hospital. 2012. http://www.breakthroughkt.ca/login.php .

The World Bank. World Bank Country and Lending Groups. https://datahelpdesk.worldbank.org/knowledgebase/articles/906519 .

Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53.

Tricco AC, Cardoso R, Thomas SM, Motiwala S, Sullivan S, Kealey MR, et al. Barriers and facilitators to uptake of systematic reviews by policy makers and health care managers: a scoping review. Implement Sci. 2016;11:4.

Gagliardi AR, Berta W, Kothari A, Boyko J, Urquhart R. Integrated knowledge translation (IKT) in health care: a scoping review. Implement Sci. 2016;11:38.

Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14(1):1–12.

Keown K, Van Eerd D, Irvin E. Stakeholder engagement opportunities in systematic reviews: knowledge transfer for policy and practice. J Contin Educ Heal Prof. 2008;28(2):67–72.

Carter N, Martin-Misener R, Kilpatrick K, Kaasalainen S, Donald F, Bryant-Lukosius D, et al. The role of nursing leadership in integrating clinical nurse specialists and nurse practitioners in healthcare delivery in Canada. Nurs Leadersh (Tor Ont). 2010;23 Spec No 2010:167–85.

Nieuwlaat R, Connolly SJ, Mackay JA, Weise-Kelly L, Navarro T, Wilczynski NL, et al. Computerized clinical decision support systems for therapeutic drug monitoring and dosing: a decision-maker-researcher partnership systematic review. Implement Sci. 2011;6:90.

Roshanov PS, You JJ, Dhaliwal J, Koff D, Mackay JA, Weise-Kelly L, et al. Can computerized clinical decision support systems improve practitioners’ diagnostic test ordering behavior? A decision-maker-researcher partnership systematic review. Implement Sci. 2011;6:88.

Sahota N, Lloyd R, Ramakrishna A, Mackay JA, Prorok JC, Weise-Kelly L, et al. Computerized clinical decision support systems for acute care management: a decision-maker-researcher partnership systematic review of effects on process of care and patient outcomes. Implement Sci. 2011;6:91.

Souza NM, Sebaldt RJ, Mackay JA, Prorok JC, Weise-Kelly L, Navarro T, et al. Computerized clinical decision support systems for primary preventive care: a decision-maker-researcher partnership systematic review of effects on process of care and patient outcomes. Implement Sci. 2011;6:87.

Scotto Rosato N, Correll CU, Pappadopulos E, Chait A, Crystal S, Jensen PS. Treatment of maladaptive aggression in youth: CERT guidelines II. Treatments and ongoing management. Pediatrics. 2012;129(6):e1577–86.

Leonard MM, Agar M, Spiller JA, Davis B, Mohamad MM, Meagher DJ, et al. Delirium diagnostic and classification challenges in palliative care: subsyndromal delirium, comorbid delirium-dementia, and psychomotor subtypes. J Pain Symptom Manag. 2014;48(2):199–214.

Akl E, Fadlallah R, Ghandour L, Kdouh O, Langlois E, Lavis J, et al. The SPARK tool for prioritizing questions for systematic reviews in health policy and systems research: development and initial validation. Health Res Policy Syst. 15(1):77.

Hatziandreu E, Archontakis F, Daly A. The potential cost savings of greater use of home and hospice-based end of life care in England: RAND Corporation; 2008.

Committee on Integrating Primary Care and Public Health. Primary care and public health: exploring integration to improve population health. 2012.

Pakes B. Ethical analysis in public health practice—Heterogeny, Discensus and the Man-on-the-Clapham Omnibus Toronto: University of Toronto; 2014.

Rycroft-Malone J, Burton CR, Williams L, Edwards S, Fisher D, Hall B, et al. Health services and delivery research. Improving skills and care standards in the support workforce for older people: a realist synthesis of workforce development interventions. Southampton: NIHR Journals Library; 2016.

Cottrell E, Whitlock E, Kato E, Uhl S, Belinson S, Chang C, et al. AHRQ methods for effective health care. Defining the benefits of stakeholder engagement in systematic reviews. Rockville: Agency for Healthcare Research and Quality (US); 2014.

Oliver S, Dickson K. Policy-relevant systematic reviews to strengthen health systems: models and mechanisms to support their production. Evidence & Policy: J Res, Debate Pract. 2016;12(2):235–59.

Guise JM, O'Haire C, McPheeters M, Most C, Labrant L, Lee K, et al. A practice-based tool for engaging stakeholders in future research: a synthesis of current practices. J Clin Epidemiol. 2013;66(6):666–74.

Atkins D, Fink K, Slutsky J. Better information for better health care: the evidence-based practice center program and the Agency for Healthcare Research and Quality. Ann Intern Med. 2005;142(12 Pt 2):1035–41.

Best A, Terpstra JL, Moor G, Riley B, Norman CD, Glasgow RE. Building knowledge integration systems for evidence-informed decisions. J Health Organ Manag. 2009;23(6):627–41.

Crawford C, Boyd C, Jain S, Khorsan R, Jonas W. Rapid Evidence Assessment of the Literature (REAL©): streamlining the systematic review process and creating utility for evidence-based health care. BMC Res Notes. 2015;8:631.

Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev. 2012;1:10.

McIntosh HM, Calvert J, Macpherson KJ, Thompson L. The healthcare improvement Scotland evidence note rapid review process: providing timely, reliable evidence to inform imperative decisions on healthcare. Int J Evid Based Healthc. 2016;14(2):95–101.

Saul JE, Willis CD, Bitz J, Best A. A time-responsive tool for informing policy making: rapid realist review. Implement Sci. 2013;8:103.

Mindell J, Bowen C, Herriot N, Findlay G, Atkinson S. Institutionalizing health impact assessment in London as a public health tool for increasing synergy between policies in other areas. Public Health. 2010;124(2):107–14.


Concannon TW, Fuster M, Saunders T, Patel K, Wong JB, Leslie LK, et al. A systematic review of stakeholder engagement in comparative effectiveness and patient-centered outcomes research. J Gen Intern Med. 2014;29(12):1692–701.

Concannon TW, Meissner P, Grunbaum JA, McElwee N, Guise JM, Santa J, et al. A new taxonomy for stakeholder engagement in patient-centered outcomes research. J Gen Intern Med. 2012;27(8):985–91.

Hayden JA, Killian L, Zygmunt A, Babineau J, Martin-Misener R, Jensen JL, et al. Methods of a multi-faceted rapid knowledge synthesis project to inform the implementation of a new health service model: collaborative emergency centres. Syst Rev. 2015;4:7.

Tricco AC, Langlois EV, Straus SE, editors. Rapid reviews to strengthen health policy and systems: a practical guide. Geneva: World Health Organization; 2017. Licence: CC BY-NC-SA 3.0 IGO

Goodyear-Smith F, Jackson C, Greenhalgh T. Co-design and implementation research: challenges and solutions for ethics committees. BMC Med Ethics. 2015;16:78.

Oliver K, Lorenc T, Innvaer S. New directions in evidence-based policy research: a critical analysis of the literature. Health Res Policy Syst. 2014;12:34.

Jackson CL, Greenhalgh T. Co-creation: a new approach to optimising research impact? Med J Aust. 2015;203(7):283–4.

Greenhalgh T, Jackson C, Shaw S, Janamian T. Achieving research impact through co-creation in community-based health services: literature review and case study. Milbank Q. 2016;94(2):392–429.

Moore A, Brouwers M, Straus SE, Tonelli M. Advancing patient and public involvement in guideline development. Toronto: Canadian Taskforce on Preventative Health Care; 2015.

Soobiah C, Daly C, Blondal E, Ewusie J, Ho J, Elliott MJ, et al. An evaluation of the comparative effectiveness of geriatrician-led comprehensive geriatric assessment for improving patient and healthcare system outcomes for older adults: a protocol for a systematic review and network meta-analysis. Syst Rev. 2017;6(1):65.

Haddaway N, Kohl C, da Silva NR, Schiemann J, Spök A, Stewart R, et al. A framework for stakeholder engagement during systematic reviews and maps in environmental management. Environmental Evidence. 2017;6(1):11.

Land M, Macura B, Bernes C, Johansson S. A five-step approach for stakeholder engagement in prioritisation and planning of environmental evidence syntheses. Environmental Evidence. 2017;6(1):25.

Agweyu A, Opiyo N, English M. Experience developing national evidence-based clinical guidelines for childhood pneumonia in a low-income setting—making the GRADE? BMC Pediatr. 2012;12:1.

Buchan J, Fronteira I, Dussault G. Continuity and change in human resources policies for health: lessons from Brazil. Hum Resour Health. 2011;9:17.

Clarke D, Duke J, Wuliji T, Smith A, Phuong K, San U. Strengthening health professions regulation in Cambodia: a rapid assessment. Hum Resour Health. 2016;14:9.

Higashi H, Khuong TA, Ngo AD, Hill PS. The development of Tobacco Harm Prevention Law in Vietnam: stakeholder tensions over tobacco control legislation in a state owned industry. Subst Abuse Treat Prev Policy. 2011;6:24.

Muller L, Flisher A. Standards for the mental health care of people with severe psychiatric disorders in South Africa: part 2. Methodology and results. South African Psychiatr Rev. 2005;8:146–52.

Orem JN, Mafigiri DK, Marchal B, Ssengooba F, Macq J, Criel B. Research, evidence and policymaking: the perspectives of policy actors on improving uptake of evidence in health policy development and implementation in Uganda. BMC Public Health. 2012;12:109.

Sidibe S, Pack AP, Tolley EE, Ryan E, Mackenzie C, Bockh E, et al. Communicating about microbicides with women in mind: tailoring messages for specific audiences. J Int AIDS Soc. 2014;17(3 Suppl 2):19151.


Teerawattananon Y, Kingkaew P, Koopitakkajorn T, Youngkong S, Tritasavit N, Srisuwan P, et al. Development of a health screening package under the universal health coverage: the role of health technology assessment. Health Econ. 2016;25(Suppl 1):162–78.

Wiysonge CS, Ngcobo NJ, Jeena PM, Madhi SA, Schoub BD, Hawkridge A, et al. Advances in childhood immunisation in South Africa: where to now? Programme managers’ views and evidence from systematic reviews. BMC Public Health. 2012;12:578.


Acknowledgements

We thank Dr. Jessie McGowan for developing the search strategy and Dr. Elise Cogo for peer-reviewing it. We also thank Ms. Alissa Epworth for all library support, including performing the database and grey literature searches and obtaining full-text papers. Finally, we thank Ms. Theshani De Silva for formatting the manuscript and Ms. Susan Le for formatting the manuscript as well as providing all logistical and administrative support.

Funding

This study was funded by the Alliance for Health Policy and Systems Research, WHO, Geneva, with support from the Norwegian Government Agency for Development Cooperation (Norad), the Swedish International Development Cooperation Agency (Sida), and the UK Department for International Development (DFID).

ACT is funded by a Tier 2 Canada Research Chair in Knowledge Synthesis, and SES is funded by a Tier 1 Canada Research Chair in Knowledge Translation.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Author information

Authors and Affiliations

Li Ka Shing Knowledge Institute of St. Michael’s Hospital, 209 Victoria Street, Toronto, Ontario, M5B 1T8, Canada

Andrea C. Tricco, Wasifa Zarin, Patricia Rios, Vera Nincic, Paul A. Khan, Marco Ghassemi, Sanober Diaz & Ba’ Pham

Epidemiology Division, Dalla Lana School of Public Health, University of Toronto, 6th Floor, 155 College St, Toronto, Ontario, M5T 3M7, Canada

Andrea C. Tricco

Department of Geriatric Medicine, Faculty of Medicine, University of Toronto, 27 King’s College Circle, Toronto, Ontario, M5S 1A1, Canada

Sharon E. Straus

Alliance for Health Policy and Systems Research, World Health Organization, Avenue Appia 20, 1211, Geneva, Switzerland

Etienne V. Langlois


Contributions

ACT obtained funding, conceptualized the research, participated in all pilot-testing of screening and data abstraction forms, helped conceptualize the analysis, and wrote the first draft of the manuscript. WZ coordinated the study, screened citations and articles, resolved all discrepancies, cleaned the data, coded and verified the open-text data, performed all data analysis, created the figures, and drafted sections of the manuscript. PR helped coordinate the study, screened the citations, abstracted the data, cleaned the data, coded the open-text data, and reviewed the manuscript. BP, MG, PK, SD, and VN helped with screening the citations and articles, abstracted and verified the data, and reviewed the manuscript. SES obtained funding, helped conceptualize the research, and reviewed the manuscript. EVL developed the research idea, helped conceptualize the study, and reviewed the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Andrea C. Tricco .

Ethics declarations

Ethics approval and consent to participate

Since this is a scoping review, ethics approval was not required.

Consent for publication

Not applicable.

Competing interests

SES is an Associate Editor of the journal but was not involved with the submission process or decision to publish. All other authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional information

The original version of the article has been revised to correct a typographical error in the title.

Additional files

Additional file 1:

Appendices. The appendices include all supplemental information. Appendix 1. Inclusion and exclusion criteria. Appendix 2. List of included papers. Appendix 3. Definitions. (DOCX 37 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Tricco, A.C., Zarin, W., Rios, P. et al. Engaging policy-makers, health system managers, and policy analysts in the knowledge synthesis process: a scoping review. Implementation Sci 13 , 31 (2018). https://doi.org/10.1186/s13012-018-0717-x


Received : 27 September 2017

Accepted : 25 January 2018

Published : 12 February 2018

DOI : https://doi.org/10.1186/s13012-018-0717-x


Keywords

  • Knowledge user
  • Stakeholder
  • Knowledge translation
  • Health policy
  • Policy-relevant
  • Health system
  • Policy-maker
  • Knowledge synthesis


Research Engagement with Policy Makers: a practical guide to writing policy briefs

Profile image of Jan Lecouturier

Effective communication between academics and policy makers plays an important role in informing political decision making and creating impact for researchers. Policy briefs are short evidence summaries written by researchers to inform the development or implementation of policy. This guide has been developed to support researchers to write effective policy briefs. It is jointly produced by the NIHR Policy Research Unit in Behavioural Science (BehSciPRU) and the UCL Centre for Behaviour Change (CBC). It has been written in consultation with policy advisers and synthesises current evidence and expert opinion on what makes an effective policy brief. It is for any researcher who wishes to increase the impact of their work by activity that may influence the process of policy formation, implementation or evaluation. Whilst the guide has been written primarily for a UK audience, it is hoped that it will be useful to researchers in other countries.

Related Papers

Toronto, Ontario: Policy Bench, Fraser Mustard Institute of Human Development, University of Toront

Sima Sajedinejad

Objective: This toolkit provides a guide for researchers on the development of effective policy briefs to communicate research findings to policymakers to support evidence-informed decision-making. Key Points: • A policy brief is one component of a comprehensive policy impact plan. It typically provides a concise summary of a specific issue, the policy options to address it, and some recommendations for action. • Policy briefs are often based on a larger evidence synthesis or research study which provides more technical details. • Policy briefs generally target an informed, non-specialist audience. Therefore, language should be clear and concise; academic and technical jargon should be drilled down into plain and nonacademic language. • There are two types of policy briefs: an objective brief that gives balanced information about the policy options and allows policymakers to make their own decision, and an advocacy brief favouring one suggested policy option. • The process of developing a policy brief includes planning, writing, designing, and revising, and disseminating. Typically, visual aids are used to present the results in a simplified way to capture readers' attention with clear takeaways. • The outline of a policy brief usually includes the following sections: title, background, research results, policy options, implications, recommendations, and conclusions. • Results and proposed solutions or policy options to address the issue should be presented in a neutral, objective manner and should consider important dimensions such as feasibility, costs, and other pros and cons. Make sure your argument flows clearly based on appropriate structure and data. • The implications section draws from the results of the research to discuss the broader implications of the findings from a policy perspective. • Recommendations should be easy to find, clear and easy to understand, short, specific, realistic, relevant, attainable, and usually start with action words.


Kizito Ndihokubwayo

Lindsey Pike

Access to reliable and timely information ensures that decision-makers can operate effectively. The motivations and challenges of parliamentarians and policy-makers in accessing evidence have been well documented in the policy literature. However, there has been little focus on research-providers. Understanding both the demand- and the supply-side of research engagement is imperative to enhancing impactful interactions. This study reports on a recent online consultation with research professionals on their policy experience, including motivations and barriers to engage with decision-makers.

Policy Design and Practice

Shannon Guillot-Wright, PhD

Bridging the "Know–Do" Gap: Knowledge brokering to improve child wellbeing

Meredith Edwards

Health Economics & Decision Science (HEDS) Discussion Paper Series

Erika E Atienzo

Over the last two decades, there has been an emphasis on the concept of evidence-based policy. However, evidence-based policy remains a major challenge and a gap exists in the systematic translation of scientific knowledge into policies. The awareness of this evidence-policy gap has led to a proliferation of research. As the demand for evidence-informed policy-making escalates, so does the need to unveil the mechanisms by which we can influence the process of research uptake. In this paper, we present a protocol for a systematic review. We aim to conduct an umbrella review/overview of reviews about factors affecting the use of research by policy-makers and/or decision-makers in health, education and social services areas. The results of this review could contribute to improving the utilisation of research in the policy-making process, by identifying factors that are most important in influencing the uptake of research by policy-makers.

International Journal of Behavioral Nutrition and Physical Activity

Heather Yeatman

Australia and New Zealand Health Policy

Anthony Zwi

Tamlin Gorter

Heaven Crawley

The concept of ‘evidence-based policy’ making has become something of a mantra within government circles. Within academia too, there is a growing emphasis on the ‘relevance’ of research to ‘real world’ issues and problems. For those of us who have been directly engaged in what might be described as ‘policy’ or ‘applied’ research for many years, this shift in emphasis is welcome and very much overdue. But the increased recognition afforded to research evidence in the policy making process belies a complex and difficult relationship between academics and policy-makers whose modus operandi is very different and who may have widely divergent motivations, objectives, methods and measures of ‘success’. Attempts to bring these two worlds together are not without their problems, which are explored in this short chapter.

  • Open access
  • Published: 06 March 2021

How to bring research evidence into policy? Synthesizing strategies of five research projects in low-and middle-income countries

  • Séverine Erismann 1 , 2 ,
  • Maria Amalia Pesantes 3 ,
  • David Beran 4 ,
  • Andrea Leuenberger 1 , 2 ,
  • Andrea Farnham 1 , 2 ,
  • Monica Berger Gonzalez de White 1 , 2 , 5 ,
  • Niklaus Daniel Labhardt 1 , 2 , 6 ,
  • Fabrizio Tediosi 1 , 2 ,
  • Patricia Akweongo 7 ,
  • August Kuwawenaruwa 1 , 2 , 8 ,
  • Jakob Zinsstag 1 , 2 ,
  • Fritz Brugger 9 ,
  • Claire Somerville 10 ,
  • Kaspar Wyss 1 , 2 &
  • Helen Prytherch 1 , 2  

Health Research Policy and Systems volume  19 , Article number:  29 ( 2021 ) Cite this article

24k Accesses

17 Citations

13 Altmetric

Metrics details

Addressing the uptake of research findings into policy-making is increasingly important for researchers who ultimately seek to contribute to improved health outcomes. The aims of the Swiss Programme for Research on Global Issues for Development (r4d Programme) initiated by the Swiss National Science Foundation and the Swiss Agency for Development and Cooperation are to create and disseminate knowledge that supports policy changes in the context of the 2030 Agenda for Sustainable Development. This paper reports on five r4d research projects and shows how researchers engage with various stakeholders, including policy-makers, in order to assure uptake of the research results.

Eleven in-depth interviews were conducted with principal investigators and their research partners from five r4d projects, using a semi-structured interview guide. The interviews explored the process of how stakeholders and policy-makers were engaged in the research project.

Three key strategies were identified as fostering research uptake into policies and practices: (S1) stakeholders directly engaged with and sought evidence from researchers; (S2) stakeholders were involved in the design and throughout the implementation of the research project; and (S3) stakeholders engaged in participatory and transdisciplinary research approaches to co-produce knowledge and inform policy. In the first strategy, research evidence was directly taken up by international stakeholders as they were actively seeking new evidence on a very specific topic to update international guidelines. In the second strategy, examples from two r4d projects show that collaboration with stakeholders from early on in the projects increased the likelihood of translating research into policy, but that the latter was more effective in a supportive and stable policy environment. The third strategy adopted by two other r4d projects demonstrates the benefits of promoting co-learning as a way to address potential power dynamics and to work effectively across the local policy landscape through robust research partnerships.

Conclusions

This paper provides insights into the different strategies that facilitate collaboration and communication between stakeholders, including policy-makers, and researchers. However, it remains necessary to increase our understanding of the interests and motivations of the different actors involved in the process of influencing policy, identify clear policy-influencing objectives and provide more institutional support to engage in this complex and time-intensive process.

Peer Review reports

Increasingly, research funders are asking their grantees to address the uptake of research findings into decision-making processes and policy-making [ 1 , 2 ]. This growing trend is a response to a need for real-world and context-sensitive evidence to respond to and address complex health system and health service delivery bottlenecks faced by policy-makers, health practitioners, communities and other actors, bottlenecks that require more than single interventions to induce large-scale change [ 3 ]. Moreover, there is growing pressure for applied and implementation research to be relevant, demonstrate value for money and result in high-impact publications. The relevance of ensuring the translation of research into practice is also reflected in growing support for research projects with concrete requirements regarding the evaluation of the impact of science on society [ 4 ].

One example of the above is the Swiss Programme for Research on Global Issues for Development (r4d Programme) initiated by the Swiss National Science Foundation (SNSF) and the Swiss Agency for Development and Cooperation (SDC) covering the period 2012–2022. The r4d Programme is aimed at researchers in Switzerland and low-and middle-income countries (LMICs) conducting projects that specifically focus on poverty reduction and the protection of public goods in developing countries. Its specific objectives are to create and disseminate knowledge that supports policy-making in the area of global development and foster research on global issues in the context of the 2030 Agenda for Sustainable Development [ 5 , 6 ].

While the linkage of research to policy is strongly encouraged by research funding agencies, the uptake of research evidence by policy-makers to establish new laws and regulations or to improve policies to solve a problem or enhance implementation effectiveness, especially in LMICs, remains weak [ 2 , 7 ]. This is often referred to as the gap between research and policy [ 8 ]. One factor identified in previous studies as contributing to this dearth of research uptake is a lack of evidence that is context-sensitive, timely and relevant for policy-makers; other factors include difficulties in accessing existing evidence, challenges in correctly interpreting and using existing evidence [ 7 , 9 ], and a lack of interest from policy-makers in the use and uptake of evidence [ 10 ]. Drawing on the SNSF r4d funding scheme, our aim is to show how researchers have engaged with stakeholders, including policy-makers, from the onset of a research project, in order to identify strategies for evidence uptake and use.

As part of the r4d Programme, several synthesis initiatives have been launched to disseminate the research evidence from the r4d projects and increase its impact ( http://www.r4d.ch/r4d programme/synthesis ). The aim of one of these synthesis initiatives is to support knowledge translation and exchange, as well as knowledge diffusion and dissemination among 15 r4d projects focusing on public health. More specifically, the aim is to facilitate the uptake of findings for the benefit of societies in LMICs, especially with regards to social inclusion and gender equity in the drive towards universal health coverage (UHC) and the 2030 Agenda for Sustainable Development [ 6 ]. The present study and resulting article are part of this synthesis initiative.

In this article, we present—through five case studies—strategies to translate and bridge evidence emerging from research into policy-making and decision-making. We rely on the experiences of five public health projects within the r4d research initiative. This paper describes these experiences, reports on the lessons learnt and outlines important features and challenges of engaging in this process using the researchers’ perspectives. This paper contributes to the body of literature on research translation by highlighting concrete examples and successful strategies for the uptake of research evidence in policy formulation.

Invitations were sent out to researchers working on projects within the r4d Programme to share their experiences with the project. Based on the interest shown by researchers, five projects were selected by the authors to demonstrate the different approaches and strategies used in the r4d projects with the aim to influence policy. Researchers were asked to share descriptions of the different approaches used in seeking to influence the uptake of research results by policy-makers. Each project represents a case study with emphasis on the main features of their translational approaches and the challenges, enablers and successes encountered.

The different research–policy engagement strategies were identified through data analysis of the interviews conducted within the framework of the five r4d case studies and were inspired by the work conducted by Uzochukwu and colleagues in Nigeria [ 2 ], who described four detailed strategies to support evidence-informed policy-making: (1) policy-makers and stakeholders seeking evidence from researchers; (2) involving stakeholders in designing objectives of a research project and throughout the research period; (3) facilitating policy-maker–researcher engagement in optimizing ways of using research findings to influence policy and practice; (4) active dissemination of own research findings to relevant stakeholders and policy-makers (see Table 1 ).

In using the term stakeholder, we apply the following definition by Brinkerhoff and Crosby [ 11 ]: “A stakeholder is an individual or group that makes a difference or that can affect or be affected by the achievement of the organization’s objectives”. Hence, individual stakeholders can include politicians (heads of state and legislators), government bureaucrats and technocrats from various sectors (e.g. health), but also representatives of civil society organizations and support groups [ 12 ].

Data collection

Eleven in-depth interviews with principal investigators and their research partners from five r4d projects were conducted by the first author, using a semi-structured interview guide. The interview guide covered the following themes: (1) How were stakeholders involved in the research project? (2) Was there uptake of research evidence in national/international policies? (3) How were research results disseminated? (4) What were the challenges or obstacles encountered in disseminating and translating evidence from research to policy? The interview duration was between 30 and 45 min. Seven interviews were conducted with researchers based in Switzerland and four with researchers in LMICs. At least two interviews were conducted for each r4d case study.

Data management and analysis

Of the 11 interviews, nine were audio recorded and notes taken. Audio files were transcribed verbatim by the same researcher. Two interviews were not recorded, but detailed notes were taken during the interview.

A qualitative content analysis method was used to organize and structure both the manifest and latent content [ 13 ]. Aligned with the overall study questions, essential content was identified by the first author; this involved generating a provisional list of themes of interest based on the study objectives, including stakeholder involvement in the generation of research questions, the research process, the generation of results and the dissemination of research findings, as well as challenges to research dissemination and policy uptake. In the next step, the transcripts were sorted and grouped by the first author according to this coding scheme for analysis. This involved using the content summary analysis method, which consists of reducing the textual content and preserving only the essential content in order to produce a short text [ 14 ]. As several co-authors were interviewed, they validated that their perspectives were not misinterpreted or misrepresented.
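For readers who organize such coding work programmatically rather than in qualitative analysis software, the following is a minimal, purely illustrative sketch, not part of the authors' methodology: it shows how coded interview excerpts might be grouped under a provisional theme list. The theme labels, interview identifiers and excerpts are hypothetical stand-ins.

```python
from collections import defaultdict

# Provisional themes derived from the study objectives (hypothetical labels).
THEMES = [
    "stakeholder_involvement",
    "research_process",
    "results_generation",
    "dissemination",
    "challenges_to_uptake",
]

# Hypothetical coded excerpts: (interview_id, theme, excerpt text).
coded_excerpts = [
    ("R1", "dissemination", "We shared the results confidentially with WHO."),
    ("R2", "stakeholder_involvement", "They were involved since the very beginning of the project."),
    ("R4", "challenges_to_uptake", "Strategies employed to influence policy vary according to the country."),
]

def group_by_theme(excerpts):
    """Group coded excerpts under each provisional theme, skipping unknown codes."""
    grouped = defaultdict(list)
    for interview_id, theme, text in excerpts:
        if theme in THEMES:
            grouped[theme].append((interview_id, text))
    return grouped

if __name__ == "__main__":
    for theme, items in group_by_theme(coded_excerpts).items():
        print(f"{theme}: {len(items)} excerpt(s)")
```

Whatever the tool, the underlying structure is the same: each excerpt is tagged with an interview identifier and one theme from the provisional list, which then allows content to be summarized theme by theme.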

Three key strategies were identified for research uptake into policy and practice throughout the data collection of this synthesis initiative: (S1) stakeholders directly engaged with and sought evidence from researchers; (S2) stakeholders were involved in the design and throughout the implementation of the research project; and (S3) stakeholders engaged in participatory and transdisciplinary research approaches to co-produce knowledge and inform policy. The first two strategies (S1, S2) are in line with Uzochukwu and colleagues’ work [ 2 ], and the third strategy (S3) is an additional category based on the experiences of researchers in r4d projects [ 2 ]. Each r4d project is described in more detail as a case study in one of these three strategies (Table 2 ).

S1: stakeholders directly engaged with and sought evidence from researchers

In this strategy, international stakeholders requested evidence from the research team. This is a unique (and rare) strategy, as stated by Uzochukwu et al. [ 2 ], and often involves a policy window of opportunity in which stakeholders, including policy-makers, are looking to solve a particular problem, which coincides with the publishing of a scientific report or paper and the interests of these same groups [ 15 , 16 ].

Improving the HIV care cascade in Lesotho: towards 90-90-90—a research collaboration with the Ministry of Health of Lesotho

In this r4d project, the research team was approached by the International Aids Society (IAS) and the World Health Organization (WHO) in Geneva, based on the publication of their study protocol [ 17 ], introducing their innovative research approach of same-day antiretroviral therapy (ART) initiation in rural communities in Lesotho:

“They [international stakeholders] were all keen of getting the results out and requested evidence of the randomized controlled trials. We shared the results confidentially with WHO as soon as we had the data and thereafter published the results in a journal with a wide reach. WHO as well as other international guidelines and policy committees took up the recommendation of same-day ART initiation and informed global guidelines” (Researcher 1).

As a result, many HIV programmes in sub-Saharan Africa as well as in the global north have adopted the practice of offering rapid-start ART to persons who test HIV positive even outside a health facility. In this example, the policy window and direct stakeholder engagement was crucial for the effective translation and uptake of research evidence.

Furthermore, by closely collaborating with national policy-makers, the research team advocated for the setting up of a research database and of knowledge management units within the Ministry of Health (MoH) of Lesotho, which have been successfully established. The members of the research project consortia have also initiated a national research symposium on a bi-annual basis, which is chaired by the MoH with the aim of facilitating the dissemination and uptake of research findings.

S2: stakeholders were involved in the design and throughout the implementation of the research project

In this strategy, policy uptake is facilitated through stakeholder engagement from the beginning as well as during the conduct of research activities, through participation in workshops or through roles in the governance of the projects. Two r4d projects illustrate this strategy.

Health system governance for an inclusive and sustainable social health protection in Ghana and Tanzania

This project established a Country Advisory Group (CAG) at the start that included representatives of the main stakeholders of the social health protection systems. The CAGs were involved in all phases of the project, from the definition of the research plans to the dissemination of the results. The specific research questions addressed by the project emerged from the interactions with these main stakeholders, i.e. national policy-makers, healthcare providers and members of the social health protection schemes (the NHIS and the Livelihood Empowerment Against Poverty schemes in Ghana; and the National Health Insurance Fund, the Community Health Funds and the Tanzania Social Action Fund in Tanzania). Specifically in Ghana, the following stakeholders played a major role in shaping the research plan: the Ministry of Gender Children and Social Protection (MGCSP), the Ghana Health Service (Policy Planning and Monitoring and Evaluation Division; Research and Development Division), the National Health Insurance Authority (NHIA) and the Associations of Private Health Care Providers. In Tanzania, a major role was played by the Ministry of Health, Community, Development, Gender, Elderly and Children, the President’s Office—Regional Administration and Local Government, by representatives of civil society organizations, such as Sikika, by the SDC (Swiss Agency for Development and Cooperation) Health Promotion and System Strengthening project and by the SDC-supported development programme.

These stakeholders were subsequently involved in steering the research, as captured by a researcher:

“In Ghana, it was a balanced relationship. They were involved since the very beginning of the project in articulating what the information gap at policy level is, formulating the research questions and understanding the methods/what is feasible. In Tanzania, where the policy landscape is more fragmented, it was very important to listen to the voices of several different stakeholders” (Researcher 2).

The stakeholder consultations in Ghana and Tanzania initially involved discussions on the relevance of the research plans to address the existing gaps in strengthening the social health protection scheme, the synergies with other research initiatives and the feasibility of implementing the proposed research. Later on in the project, the consultation process involved reviewing and discussing the focus of the research and the appropriateness of the research aims in light of decisions and reforms that were under discussion by the government but not in the public domain. This led to revision of the research questions as they would have become redundant when such reforms were made public, especially in Ghana. These consultation processes were more formal in Ghana and more informal in Tanzania, but they were very informative and had a tangible impact on the research plans, which were revised according to the feedback received. However, the research teams were always independent in deciding on the research methodology and in interpreting the results. The in-country dissemination of the results at the end of the first phase of the project informed the decisions to be made on the research plan for the second phase and provided the opportunity to discuss policy implications based on the results of the first phase. Because of this close collaboration and engagement with stakeholders, the results of the studies were widely disseminated in Ghana. Two of the main findings of the project were particularly considered by these stakeholders. According to the researcher:

“First, the study results showed that even though people registered with the NHIS they continued to pay out of pocket for health services. The reasons for this were delays in reimbursement by NHIS, escalating prices of drugs and medical products, low tariffs, lack of trust between providers and NHIA and inefficiencies. Secondly, the results showed that the current system of targeting the poor is not working properly, with more than half of people registered in the NHIS as indigents being in the non-poor socio-economic groups. These results contributed to inform decisions regarding the revision of the NHIA reimbursement tariffs, and to improve the identification of the poor to be exempted from paying the NHIS premium, in collaboration with the MGCSP” (Researcher 3).

In Tanzania, research was conducted to assess the effects of the public–private partnership, referred to as the Jazia Prime Vendor System (Jazia PVS), on improving access to medicines in the Dodoma and Morogoro regions. This is one of the reforms in the area of supply chain management taking place in the country. Results showed that a number of accountability mechanisms (inventory and financial auditing, close monitoring of standard operating procedures) implemented in conjunction with Jazia PVS contributed positively to its performance. Participants' acceptability of Jazia PVS was influenced by the increased availability of essential medicines at the facilities, higher order-fulfilment rates and timely delivery of consignments [ 18 , 19 , 20 ].

The findings from this study were disseminated during a national meeting attended by various stakeholders, including CAG members, government officials and policy-makers. In addition, the findings were used to inform the national scale-up of the Jazia PVS intervention, as the government of Tanzania decided to scale up the Jazia PVS to all 23 regions in 2018. Moreover, the results were published in or submitted to peer-reviewed journals [ 18 , 19 , 20 ], enabling other countries intending to adopt such innovative public–private partnerships for improvement of the in-country pharmaceutical supply chain to learn from Jazia PVS in Tanzania.

Health impact assessment for engaging natural resource extraction projects in sustainable development in producer regions (HIA4SD)

In this r4d project, stakeholders were involved from the outset through their participation in the project launch meeting and in regular consortium meetings. The project is a collaboration between the Swiss Tropical and Public Health Institute (Swiss TPH), the Center for Development and Cooperation (NADEL) at the Swiss Federal Institute of Technology in Zurich, Switzerland, and national research institutes, namely the Institut de Recherches en Sciences de la Santé in Burkina Faso, the University of Health and Allied Sciences in Ghana, the Centro de Investigação em Saúde de Manhiça in Mozambique and the Ifakara Health Institute in Tanzania [ 21 ]. The involvement of key stakeholders from the government, civil society, private sector and research community in an engaged dialogue from the beginning is of central importance in this project, as in most cases mining is a highly politicized topic. To promote the immediate integration of research findings into policy, the project is organized into two parallel streams, an “impact research stream” and a “governance stream”. While the impact research stream is focused on evidence generation to support the uptake of health impact assessment (HIA) in Africa, the governance stream is focused on understanding the policy terrain and consequently the pathways that need to be utilized to support translation of the evidence into policy and practice. The second phase of the study is devoted to the dissemination of research findings into policy at the national and local levels, including capacity-building activities for national stakeholders. As the HIA4SD project examines operational questions of relevance for guiding both policy-making and decision-making, team members sought to engage regularly with and inform the national stakeholders. According to the researcher:

“Strategies employed to influence policy vary according to the country, but included regular stakeholder workshops, participation in a new national platform launched to discuss issues around mining in Mozambique, development of policy briefs, strengthened collaborations with national ministries of health, discussion of results and advocacy with policy makers, and conference presentation of findings” (Researcher 4).

In these two case examples, continuous stakeholder engagement was considered essential to translate and disseminate research evidence. Thus, beyond the stage of setting the objectives, contact with stakeholders was active and maintained on a regular basis through regular exchanges with stakeholder groups during workshops or meetings, which facilitated the dissemination and uptake of the research results. While the time and level of meaningful interaction varied across the countries and workshops, all meetings were well attended by participants from varied levels of government, MoHs, nongovernmental organizations and private industry, prompting spirited discussion and insight from these groups. All stakeholders were willing to attend these workshops as part of the scope of their professional duties.

S3: stakeholders engaged in participatory and transdisciplinary research approaches to co-produce knowledge and inform policy

In the two examples presented in this section, the research questions and approaches arose through community and stakeholder participation in the research and intervention design itself. The methodology adopted allowed the projects to engage, design research, act, share and sustain partnerships between the communities, the involved stakeholders and researchers [ 22 ]. These participatory research approaches facilitated grassroots-level policy and practice changes that were led neither by researchers nor by policy-makers, and they show promise for developing culturally aligned solutions [ 23 ]. Policy-makers at both the regional and national levels were invited to be part of the participatory research approach: they were interviewed during the initial stage, the research results were then presented and discussed with them, and thereafter several meetings were held to co-create potential interventions to address the identified problems, with the aim of engaging them directly in the research and intervention design itself, in partnership with community stakeholders, including local leaders, and the researchers.

Surveillance and response to zoonotic diseases in Maya communities of Guatemala: a case for OneHealth

The research was embedded in a collaboration between the Universidad del Valle in Guatemala, the MoH, the Ministry of Animal Production and Health, the Maya Qéqchi’ Council of Elders, TIGO Telecommunications Foundation and the community development councils. The objective of this r4d programme was to set up integrated animal–human disease surveillance (OneHealth) in Maya communities in Guatemala. The research approach arose from a context of medical pluralism, where communities have access to and use two different medical systems: (1) the modern Western medical system and (2) traditional Maya medicine [ 24 ].

Researchers and community members collaborated at all stages of the research process, including the planning stage. Even before the grant proposal was finalized, researchers met with the communities that, should the funding come through, would be invited to participate in the research. According to the researchers:

“The project was set up through a transdisciplinary process, with academic and non-academic actors—including national, local and traditional authorities—involved in the problem through a collaborative design, analysis, dissemination and research translation. It was a co-producing transformative process—transferring knowledge between academic and non-academic stakeholders in plenary sessions and through group work. These meetings were held every year to continuously follow up the progress of the process” (Researcher 7).

The active engagement and collaboration by the community and stakeholders facilitated the acceptability of the study results and hence its dissemination, captured by the researchers as follows:

“The main result was that they allowed a frank discussion between Maya medical exponents in human–animal health and Western medicine, which allowed the patients and the animal holders to avoid the cognitive dissonance and so that the patients or the animal holders can choose freely what they want. Cognitive dissonance exists if one system dominates the other—or refutes the other” (Researcher 7).
“After all stakeholders discussed the research evidence produced jointly, an unprecedented process of collaboration between Government authorities and communities followed to develop three joint responses: a) education campaigns led by local teachers in tandem with the Ministry of Education, b) communication strategies at regional levels led by the Human and Animal Health authorities along with traditional Maya Ajilonel (medicine specialists), and c) a policy framework for producing a OneHealth approach led by Central Government authorities” (Researcher 8).

The process of mutual learning throughout the project produced a new level of awareness, facilitating culturally pertinent and socially robust responses that overcame a historical tendency towards unilateral policy-making based solely on Western values and preferences. As the project implemented a new approach to monitoring animal and human populations, the involvement of regional teams from the different ministries (Health, Livestock and Agriculture) throughout all the phases of methodological design, data collection, subsequent data analysis and design of specific interventions for the local population (transformation of scientific results into actions for public health improvement) was essential to ensuring that the approach used secured the regional authorities’ commitment to defining new policies for immediate application in their territory. Accordingly, this also contributed towards the development of a OneHealth national strategy for Guatemala, in which ministries have started to cooperate on priority issues.

Addressing the double burden of disease: improving health systems for non-communicable and neglected tropical diseases (Community Health System Innovation [COHESION])

Together with three Swiss academic partners, this r4d project examined the challenges of a double burden of non-communicable and neglected tropical diseases at the primary healthcare level in vulnerable populations in Mozambique, Nepal and Peru. Community participation and co-creation were key elements of the project’s approach. The work conducted in Peru illustrates this approach:

“At the beginning, the people who were involved were respondents, but then they became active participants. So it was this active engagement and the changing of roles, giving feedback not just from the research responses but also from being involved in the process, which helped to design and create interventions together with the research team” (Researcher 5).

This participatory approach to co-creation actively sought a diverse range of stakeholders, including community members, primary healthcare workers, and regional and national health authorities. The co-creation approach to participatory research enables context-specific variation in methodological design, a critical element when studying three very different countries and health systems. Central to all aspects was a feedback loop whereby early findings were shared with research participants for further elaboration and iteration.

As active co-creators of the research process, local communities developed high levels of trust in the methodology and data, with the result that researchers achieved deeper “buy-in” which in turn is known to enhance the uptake of findings by decision-makers [ 25 ] as communities in which research is being undertaken play a central role in the decision-making process [ 26 ].

Challenges to research uptake in health policy identified by r4d researchers

During the interviews, r4d researchers identified several challenges to research utilization and uptake into policy. These challenges are summarized and highlighted in Table 3 .

Three key strategies identified for research uptake in policy and practice are described in this paper, namely: (S1) stakeholders directly engaged with and sought evidence from researchers; (S2) stakeholders were involved in the design and throughout the implementation of the research project; and (S3) stakeholders engaged in participatory and transdisciplinary research approaches to co-produce knowledge and inform policy. These strategies are in line with the overall objectives of the r4d projects, which are to generate scientific knowledge and research-based solutions to reduce poverty and global risks in LMICs, and also to offer national and international stakeholders integrated approaches to solving problems [ 5 ]. In the course of our synthesis work, we found that several lessons could be learned from the three strategies identified for research uptake in policy and practice.

S1: raising awareness of planned research to attract stakeholder involvement

The actual uptake of research findings in policy was most direct in the case of the first strategy (S1), in which IAS and WHO stakeholders wanted new knowledge on HIV and same-day ART initiation and were actively seeking new evidence on these specific topics. The findings published in peer-reviewed journals were then taken up by these stakeholders to update international policies and guidelines on rapid ART initiation [ 27 ]. This was also found in other studies, which highlight the timeliness and relevance of findings and the production of credible and trustworthy reports, among other factors, as key to promoting the use of research evidence in policy [ 2 , 28 ].

S2: sustainable collaborations in a supportive policy environment with stakeholder engagement from early on and throughout the research process

With regard to the second strategy (S2), we found that constant collaboration with an advisory and steering group composed of diverse stakeholders, including policy-makers, from early on promotes the uptake and use of research evidence. In line with findings from other studies [ 2 ], the experiences of the r4d public health projects show that early involvement of stakeholders in identifying the research problem and setting priorities facilitated the continuous exchange of information that might ultimately influence policy. The r4d project on social governance mechanisms in Ghana highlights that the evidence produced influenced policy documents (identification of the poor and tariff adjustments), but that frequent changes in government officials made it difficult to maintain a close relationship between the researchers and the governmental agencies/policy stakeholders. From this, we conclude that research approaches need to be more adaptive and flexible to succeed in an unsupportive or unstable policy environment and to ensure continuity in promoting the dissemination and uptake of research evidence in policy-making. One possible way to secure this continuity is for researchers to apply for additional funding after the grant has ended. Other studies have come to the same conclusion, demonstrating the key role of a supportive and effective policy environment that includes some degree of independence in governance and financing, strong links to stakeholders that facilitate trust and influence, and the capacity within the government workforce to process and apply policy advice derived from research findings [ 29 ]. By involving stakeholders in the process of identifying research objectives and designing the project, as seen particularly in the r4d case studies on social health protection in Ghana and Tanzania and in the HIA4SD project, but also in the HIV care cascade in Lesotho, the research approach responded to the need for locally led and demand-driven research in these countries, strengthening local research capacities and institutions and investing in research that is aligned with national research priorities. As highlighted by other authors, advantages of this “demand-driven” approach are that it tailors research questions to local needs, helps to strengthen local individual and organizational capacities, and provides a stringent framework within which a research project should deliver outcomes [ 30 , 31 ].

S3: co-creation and equal partnerships

The third strategy, with its strong participatory approach, as adopted by two r4d projects (OneHealth in Guatemala and COHESION), demonstrates the benefits of promoting co-learning as a way to minimize the impact of unequal power dynamics and to work effectively across the local policy landscape through equal partnerships. It also facilitates the identification of solutions that are culturally pertinent, socially more robust and implementable.

The approaches of co-creation, equal participation and stakeholder involvement used in the research projects raise questions of ‘governance’, that is, the way rules, norms and actions are structured, sustained and regulated by public and para-public actors to condition the engagement and impact of public involvement activities [ 32 , 33 ]. Through stakeholder involvement in setting the agenda and designing the research projects, as shown in the case studies on social protection in Ghana and Tanzania and in the HIA4SD project, but particularly in the two projects using a co-creation approach, the engagement of a range of stakeholders makes the health research system a participant in the endeavour, one that then has the capacity to promote changes in the healthcare system it aims to serve. By establishing a shared vision with a public involvement agenda and through the collaborative efforts of various stakeholders, as we found particularly in the co-creation approach, supportive health research systems are established. This leads to greater public advancement through collaborative actions, thereby tackling the stated problems of the health systems [ 34 ].

The respondents mentioned four key challenges to research uptake in policy-making during the interviews. The first was the time investment required of researchers to translate the results and develop policy advocacy products for different audiences. This challenge is all the more difficult because research evidence and tangible products only become available towards the end of a research project, leaving only a short window of opportunity for exchange and engagement. There seems to be a need for wider discussion on the role of researchers in influencing policy. The concerns raised included whether influencing policy is actually a role for researchers and whether researchers have the right skills to be effective in persuasion or network formation [ 35 ]. Conversely, researchers may be in a good position to engage in the policy process if they enjoy finding solutions to complex problems while working with diverse and collaborative groups in partnerships [ 36 , 37 ]. The rationale for engaging in such a process needs to be clarified in advance: is the aim to frame an existing problem, or is it simply to measure the issues at stake and provide sound evidence according to an existing frame? Regarding the former, how far should researchers go to be useful and influential in the policy process or to present challenges faced by vulnerable populations [ 37 ]? While fully engaging in the policy process may be the best approach for researchers to achieve credibility and impact, there may also be significant consequences, such as the risk of political interests undermining the methodological rigour of academic research (being considered an academic ‘lightweight’ among one’s peer group) [ 38 , 39 , 40 , 41 ]. For researchers there are also considerable opportunity costs, because engaging in the policy-influencing process is a time-consuming activity [ 35 ] with no clear guarantee of impact or success [ 37 ]. It is therefore crucial to consider the investment and overall time researchers may have to spend to engage [ 35 ], and how this time and investment can be distributed between actual research and the production of outreach products, such as policy briefs, presentation of research findings as policy narratives [ 35 ] and the setting-up of alliances, building of networks and exploitation of windows of opportunity for policy change [ 37 ].

The second challenge concerned the issue of scale and objectivity, as most of the projects are not at-scale or national-level studies and are thus highly context-specific. The difficulty of measuring the contribution of a single research project or study in terms of policy outcomes was also highlighted, particularly in view of the different understandings among researchers and funders of the possible policy impacts of the research, which can range from helping policy-makers understand a situation or problem (awareness raising) to influencing a particular course of action by establishing new policies or revising existing ones. This has also been emphasized in the Evidence Peter Principle [ 42 ], which shows that single studies are often inappropriately used to make global policy statements for which they are not suitable. To make global policy statements, an assessment of the global evidence in systematic reviews is needed [ 42 , 43 ].

The third challenge mentioned was the frequent changes in staff at the governmental level, which demanded continuous interaction between r4d researchers and stakeholders and highlighted the need for more adaptive and flexible research approaches. These should include a thorough analysis, prior to implementation, of historical, sociopolitical and economic aspects, power differentials and context; backward planning exercises to check assumptions; and conflict transformation and negotiation skills in order to be able to adapt constantly to changing contexts. In line with our research findings, when researchers make the time investment needed to engage in the policy-influencing process, they gain an opportunity to get to know the stakeholders involved, to improve their understanding of the policy world in practice, to build diverse and longer-term networks [ 37 , 44 ] and to identify policy problems and the appropriate stakeholders to work with [ 45 , 46 ]. Engaging a diverse range of stakeholders through co-designing the research is widely held to be practically the best way to guarantee the uptake and use of evidence in policy through a more dynamic research approach [ 47 ]. However, developing networks, contacts for collaboration and the skills to do so takes time and effort and is an ongoing process [ 48 ], factors which need to be acknowledged more widely.

Lastly, the fourth challenge related to research uptake was the diverging interests of researchers, research funding bodies and stakeholders. Time was identified as a limiting factor from the perspective of the design of the research project. Most research projects, including the r4d projects, are funded for 3–4 years [ 5 ]. It takes a considerable amount of time to generate new research results, and these often only become available for further use at the end of a project. If researchers are to engage more fully in the policy process and secure meaningful impact, it is critical to discuss the extent to which they have the skills, resources and institutional support to do so [ 37 ], as well as how projects could be set up differently. This could be done either by the funders, by providing the support that gives researchers the means to influence policy, or by the researchers, by designing their projects to take on board the different strategies for influencing evidence use and uptake. Moving forward, defining shared goals between funders and researchers from the outset might translate into more achievable milestones in terms of which policy issue, theme or process a research project aims to change in order to effectively influence policy [ 49 ]. This would also help to identify the resources and budget that funders need to provide in order for researchers to engage in this process with more resources over a longer time span.

Limitations

Interviews were limited to researchers of the r4d projects and did not include local stakeholders. Therefore, the synthesis work, including the analysis and results, reflects solely the perspective of researchers. We are aware that had we included a range of stakeholders, including policy-makers, in the sample, we would have potentially been able to identify additional factors relating to social, cultural and political barriers to the use and uptake of research findings in politics and practice. However, constraints such as access to local stakeholders, language barriers and time zones drove our decision to focus on researchers. A future synthesis effort would need to include the other voices.

There is ever-growing awareness of how critical it is to close the gap between policy-makers, practitioners and researchers. Using the researchers’ perspectives, in this article we give insight into three different strategies that can facilitate this process: the first requires proactive searching for the latest findings on the part of well-informed policy-makers; the second requires researchers to take steps to ensure an active exchange of ideas and information with diverse stakeholders when designing the research project and to ensure their involvement throughout; and the third uses a transdisciplinary and/or co-creation approach to establish equal partnerships and trust among all involved stakeholders.

The five case studies reported here also show some of the difficulties that prevail for research to be taken up into policy and practice, despite everyone’s best intentions and efforts. Researchers may not always be best placed for communication, dissemination and advocacy work, all activities which are very time-intensive or become important only towards the end of a research project, when clear and high-quality evidence is produced. Moreover, it takes a strong body of evidence, advocacy and coalition building with appropriate stakeholders to influence policy, and then a further major investment of resources to see policy followed through into practice. It is through experiences such as this synthesis initiative that valuable insights and learning can be gained for the common good of all involved moving forward, and it is crucial that funders continue to support and/or adapt their funding schemes to ensure that some of these strategies are implemented.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

ART: Antiretroviral therapy

CAG: Country Advisory Group

COHESION: Community Health System Innovation

HIA: Health impact assessment

HIA4SD: Health impact assessment for engaging natural resource extraction projects in sustainable development in producer regions

HIV: Human immunodeficiency virus

IAS: International Aids Society

Jazia PVS: Jazia Prime Vendor System

LMICs: Low- and middle-income countries

MoH: Ministry of Health

MGCSP: Ministry of Gender Children and Social Protection

NADEL: Center for Development and Cooperation at the Swiss Federal Institute of Technology

NHIA: National Health Insurance Authority

NHIS: National Health Insurance Scheme

r4d Programme: Swiss Programme for Research on Global Issues for Development

SDC: Swiss Agency for Development and Cooperation

SNSF: Swiss National Science Foundation

Swiss TPH: Swiss Tropical and Public Health Institute

UHC: Universal health coverage

WHO: World Health Organization

Court J, Young J. Bridging research and policy in international development: an analytical and practical framework. J Dev Pract. 2006;16(1):85–90.


Uzochukwu B, Onwujekwe O, Mbachu C, Okwuosa C, Etiaba E, Nyström ME, Gilson L. The challenge of bridging the gap between researchers and policy makers: experiences of a Health Policy Research Group in engaging policy makers to support evidence informed policy making in Nigeria. Glob Health. 2016;12(1):67.

Di Ruggiero E, Edwards N. The interplay between participatory health research and implementation research: Canadian research funding perspectives. Biomed Res Int. 2018;2018:1519402.

Rau H, Goggins G, Fahy F. From invisibility to impact: recognising the scientific and societal relevance of interdisciplinary sustainability research. Res Policy. 2018;47(1):266–76.

Swiss National Science Foundation: Swiss Programme for Research on Global Issues for Development (r4d programme). http://www.r4d.ch/r4d programme/portrait . Accessed 20 Jan 2020.

United Nations: Sustainable Development Goals. https://sustainabledevelopment.un.org/ . Accessed 17 Dec 2019.

Shroff ZC, Javadi D, Gilson L, Kang R, Ghaffar A. Institutional capacity to generate and use evidence in LMICs: current state and opportunities for HPSR. Health Res Policy Syst. 2017;15(1):94.

McKee M. Bridging the gap between research and policy and practice. Comment on “CIHR health system impact fellows: reflections on ‘driving change’ within the health system”. Int J Health Policy Manag. 2019;8(9):557–9.

World Health Organization. Sound choices: enhancing capacity for evidence-informed health policy. In: Bennett S, Green A, editors. Geneva: WHO; 2007.

Stoker G, Evans M. Evidence-based policy making in the social sciences: methods that matter. Bristol: Policy Press; 2016.


Brinkerhoff DW, Crosby B. Managing policy reform: concepts and tools for decision-makers in developing and transitioning countries. Sterling: Kumarian Press; 2002.


Hardee KFI, Boezwinkle J, Clark B. A framework for analyzing the components of family planning, reproductive health, maternal health, and HIV/AIDS policies. Wilmette: The Policy Circle; 2004.

Mayring P. Qualitative Inhaltsanalyse. In: Mey G, Mruck K, editors. Handbuch qualitative Forschung in der Psychologie. Wiesbaden: Springer; 2010. pp. 601–613.

Mayring P, Fenzl T. Qualitative Inhaltsanalyse. In: Baur N, Blasius J, editors. Handbuch Methoden der empirischen Sozialforschung. Wiesbaden: Springer; 2014. pp. 543–556.

Rose DC, Amano T, González-Varo JP, Mukherjee N, Robertson RJ, Simmons BI, Wauchope HS, Sutherland WJ. Calling for a new agenda for conservation science to create evidence-informed policy. Biol Conserv. 2019;238:108222.

Shiffman J, Smith S. Generation of political priority for global health initiatives: a framework and case study of maternal mortality. Lancet. 2007;370(9595):1370–9.

Labhardt ND, Ringera I, Lejone TI, Klimkait T, Muhairwe J, Amstutz A, Glass TR. Effect of offering same-day ART vs usual health facility referral during home-based HIV testing on linkage to care and viral suppression among adults with HIV in Lesotho: the CASCADE randomized clinical trial. JAMA. 2018;319(11):1103–12.

Kuwawenaruwa A, Wyss K, Wiedenmayer K, Metta E, Tediosi F. The effects of medicines availability and stock-outs on household’s utilization of healthcare services in Dodoma region, Tanzania. Health Policy Plan. 2020;35(3):323–33.

Kuwawenaruwa A, Tediosi F, Metta E, Obrist B, Wiedenmayer K, Msamba V, Wyss K. Acceptability of a prime vendor system in public healthcare facilities in Tanzania. Int J Health Policy Manag. 2020. (in press).

Kuwawenaruwa A, Tediosi F, Obrist B, Metta E, Chiluda F, Wiedenmayer K, Wyss K. The role of accountability in the performance of Jazia prime vendor system in Tanzania. J Pharm Policy Pract. 2020;13:25.

Farnham A, Cossa H, Dietler D, Engebretsen R, Leuenberger A, Lyatuu I, Nimako B, Zabre HR, Brugger F, Winkler MS. Investigating health impacts of natural resource extraction projects in Burkina Faso, Ghana, Mozambique, and Tanzania: protocol for a mixed methods study. JMIR Res Protoc. 2020;9(4):e17138.

Beran D, Lazo-Porras M, Cardenas MK, Chappuis F, Damasceno A, Jha N, Madede T, Lachat S, Perez Leon S, Aya Pastrana N, Pesantes MA, Singh SB, Sharma S, Somerville C, Suggs LS, Miranda JJ. Moving from formative research to co-creation of interventions: insights from a community health system project in Mozambique, Nepal and Peru. BMJ Glob Health. 2018;3(6):e001183.

Mertens DM. Advancing social change in South Africa through transformative research. S Afr Rev Sociol. 2016;47(1):5–17.

Berger-González M, Stauffacher M, Zinsstag J, Edwards P, Krütli P. Transdisciplinary research on cancer-healing systems between biomedicine and the Maya of Guatemala: a tool for reciprocal reflexivity in a multi-epistemological setting. Qual Health Res. 2016;26(1):77–91.

Theron LC. Using research to influence policy and practice: the case of the pathways-to-resilience study (South Africa). In: Abubakar A, van de Vijver FJR, editors. Handbook of applied developmental science in sub-Saharan Africa. New York: Springer; 2017. pp. 373–87.

Baum F, MacDougall C, Smith D. Participatory action research. J Epidemiol Community Health. 2006;60(10):854–7.

WHO. Guidelines for managing advanced HIV disease and rapid initiation of antiretroviral therapy. Geneva: World Health Organization; 2017.

Lavis JN, Oxman AD, Lewin S, Fretheim A. SUPPORT Tools for evidence-informed health policymaking (STP) 3: setting priorities for supporting evidence-informed policymaking. Health Res Policy Syst. 2009;7(S1):I3.

Bennett S, Corluka A, Doherty J, Tangcharoensathien V, Patcharanarumol W, Jesani A, Kyabaggu J, Namaganda G, Hussain AMZ, de-Graft Aikins A. Influencing policy change: the experience of health think tanks in low- and middle-income countries. Health Policy Plan. 2011;27(3):194–203.

Kok MO, Gyapong JO, Wolffers I, Ofori-Adjei D, Ruitenberg EJ. Towards fair and effective North–South collaboration: realising a programme for demand-driven and locally led research. Health Res Policy Syst. 2017;15(1):96.

Wolffers I, Adjei S. Research-agenda setting in developing countries. Lancet. 1999;353(9171):2248–9.

Article   CAS   Google Scholar  

Dodgson R, Lee K, Drager N. Global health governance: a conceptual review. London: Centre on Global Change and Health, London School of Hygiene & Tropical Medicine/World Health Organization; 2018.

Saltman RB, Ferroussier-Davis O. The concept of stewardship in health policy. Bull World Health Organ. 2000;78(6):732–9.

CAS   PubMed   PubMed Central   Google Scholar  

Miller FA, Patton SJ, Dobrow M, Marshall DA, Berta W. Public involvement and health research system governance: a qualitative study. Health Res Policy Syst. 2018;16(1):87.

Lloyd J. Should academics be expected to change policy? Six reasons why it is unrealistic for research to drive policy change. https://blogs.lse.ac.uk/impactofsocialsciences/2016/05/25/should-academics-be-expected-to-change-policy-six-reasons-why-it-is-unrealistic/ . Accessed 28 May 2020.

Petes LE, Meyer MD. An ecologist’s guide to careers in science policy advising. Front Ecol Environ. 2018;16(1):53–4.

Oliver KCP. The dos and don’ts of influencing policy: a systematic review of advice to academics. Palgrave Commun. 2019;5(1):21.

Hutchings JA, Stenseth NC. Communication of science advice to government. Trends Ecol Evol. 2016;31(1):7–11.

Maynard A. Is public engagement really career limiting?. https://www.timeshighereducation.com/blog/public-engagement-really-career-limiting . Accessed 27 May 2020.

Haynes AS, Derrick GE, Chapman S, Redman S, Hall WD, Gillespie J, Sturk H. From “our world” to the “real world”: exploring the views and behaviour of policy-influential Australian public health researchers. Soc Sci Med. 2011;72(7):1047–55.

Crouzat E, Arpin I, Brunet L, Colloff MJ, Turkelboom F, Lavorel S. Researchers must be aware of their roles at the interface of ecosystem services science and policy. Ambio. 2018;47(1):97–105.

White H. The Evidence Peter Principle: the misuse and abuse of evidence Reflections on the evidence architecture. https://www.campbellcollaboration.org/blog/the-evidence-peter-principle-the-misuse-and-abuse-of-evidence.html?utm_source=Campbell+Collaboration+newsletters&utm_campaign=4dfa01ec7d-Newsletter+September+2019&utm_medium=email&utm_term=0_ab55bacb0c-4dfa01ec7d-199138457 . Accessed 29 Jan 2020.

Caird J, Sutcliffe K, Kwan I, Dickson K, Thomas J. Mediating policy-relevant evidence at speed: are systematic reviews of systematic reviews a useful approach? Evid Policy J Res Debate Pract. 2015;11(1):81–97.

Evans MC, Cvitanovic C. An introduction to achieving policy impact for early career researchers. Palgrave Commun. 2018;4(1):88.

Echt L. Context matters:” a framework to help connect knowledge with policy in government institutions. LSE Impact Blog. https://blogs.lse.ac.uk/impactofsocialsciences/2017/12/19/context-matters-a-framework-to-help-connect-knowledge-with-policy-in-government-institutions/ . Accessed 13 July 2020.

Lucey JM, Palmer G, Yeong KL, Edwards DP, Senior MJM, Scriven SA, Reynolds G, Hill JK. Reframing the evidence base for policy-relevance to increase impact: a case study on forest fragmentation in the oil palm sector. J Appl Ecol. 2017;54(3):731–6.

Green D: How academics and NGOs can work together to influence policy: insights from the InterAction report. LSE Impact blog https://blogs.lse.ac.uk/impactofsocialsciences/2016/09/23/how-academics-and-ngos-can-work-together-to-influence-policy-insights-from-the-interaction-report/ . Accessed 13 July 2020.

Boaz A, Hanney S, Borst R, O’Shea A, Kok M. How to engage stakeholders in research: design principles to support improvement. Health Res Policy Syst. 2018;16(1):60.

Tilley HSL, Rea J, Ball L, Young J. 10 things to know about how to influence policy with research. London: Overseas Development Institute; 2017.


Acknowledgements

The authors would like to acknowledge the contribution of Dr Claudia Rutte from the r4d programme/SNSF for her input on the history and background of the r4d programme.

The r4d synthesis initiative is implemented by the Swiss Tropical and Public Health Institute, which funded the costs of publishing this paper.

Author information

Authors and Affiliations

Swiss Tropical and Public Health Institute, Basel, Switzerland

Séverine Erismann, Andrea Leuenberger, Andrea Farnham, Monica Berger Gonzalez de White, Niklaus Daniel Labhardt, Fabrizio Tediosi, August Kuwawenaruwa, Jakob Zinsstag, Kaspar Wyss & Helen Prytherch

University of Basel, Basel, Switzerland

CRONICAS Centre of Excellence in Chronic Diseases, Universidad Peruana Cayetano Heredia, Lima, Peru

Maria Amalia Pesantes

Division of Tropical and Humanitarian Medicine, University of Geneva and Geneva University Hospitals, Geneva, Switzerland

David Beran

Centro de Estudios en Salud, Universidad del Valle de Guatemala, Guatemala, Guatemala

Monica Berger Gonzalez de White

Department of Infectious Diseases and Hospital Epidemiology, University Hospital Basel, Basel, Switzerland

Niklaus Daniel Labhardt

School of Public Health, College of Health Sciences, University of Ghana, Accra, Ghana

Patricia Akweongo

Ifakara Health Institute, Plot 463, Kiko Avenue Mikocheni, Dar es Salaam, Tanzania

August Kuwawenaruwa

Swiss Federal Institute of Technology Zurich, Zurich, Switzerland

Fritz Brugger

Gender Centre, Graduate Institute of International and Development Studies, Geneva, Switzerland

Claire Somerville


Contributions

All authors contributed to the writing of this manuscript. Each author contributed with synthesizing their project experiences and with the discussion and recommendations. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Séverine Erismann or Helen Prytherch .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Erismann, S., Pesantes, M.A., Beran, D. et al. How to bring research evidence into policy? Synthesizing strategies of five research projects in low-and middle-income countries. Health Res Policy Sys 19 , 29 (2021). https://doi.org/10.1186/s12961-020-00646-1


Received: 15 July 2020

Accepted: 15 October 2020

Published: 06 March 2021

DOI: https://doi.org/10.1186/s12961-020-00646-1


Keywords

  • Evidence-based policy-making
  • Research for development




What can we learn from interventions that aim to increase policy-makers’ capacity to use research? A realist scoping review

Abby Haynes

1 Sax Institute, 235 Jones Street, Ultimo, NSW 2007 Australia

2 Sydney School of Public Health, Edward Ford Building (A27), University of Sydney, Camperdown, NSW 2006 Australia

Samantha J. Rowbotham

3 Menzies Centre for Health Policy, University of Sydney, Sydney, Australia

4 The Australian Prevention Partnership Centre, Ultimo, NSW 2007 Australia

Sally Redman

Sue Brennan

5 Australasian Cochrane Centre, School of Public Health and Preventive Medicine, Monash University, Clayton, VIC 3800 Australia

Anna Williamson

Gabriel Moore

Abstract

Health policy-making can benefit from more effective use of research. In many policy settings there is scope to increase capacity for using research individually and organisationally, but little is known about what strategies work best in which circumstances. This review addresses the question: What causal mechanisms can best explain the observed outcomes of interventions that aim to increase policy-makers’ capacity to use research in their work?

Articles were identified from three available reviews and two databases (PAIS and WoS; 1999–2016). Using a realist approach, articles were reviewed for information about contexts, outcomes (including process effects) and possible causal mechanisms. Strategy + Context + Mechanism = Outcomes (SCMO) configurations were developed, drawing on theory and findings from other studies to develop tentative hypotheses that might be applicable across a range of intervention sites.

We found 22 studies that spanned 18 countries. There were two dominant design strategies (needs-based tailoring and multi-component design) and 18 intervention strategies targeting four domains of capacity, namely access to research, skills improvement, systems improvement and interaction. Many potential mechanisms were identified as well as some enduring contextual characteristics that all interventions should consider. The evidence was variable, but the SCMO analysis suggested that tailored interactive workshops supported by goal-focused mentoring, and genuine collaboration, seem particularly promising. Systems supports and platforms for cross-sector collaboration are likely to play crucial roles. Gaps in the literature are discussed.

This exploratory review tentatively posits causal mechanisms that might explain how intervention strategies work in different contexts to build capacity for using research in policy-making.

Electronic supplementary material

The online version of this article (10.1186/s12961-018-0277-1) contains supplementary material, which is available to authorized users.

Background

There is widespread agreement that the use of research in policy-making could be improved, with potentially enormous social gains [1, 2]. There are disputes about the extent to which research can inform policy decision-making, and about what forms of research knowledge should be most valued and how they should be applied. However, these are underpinned by a shared belief that the effective use of relevant and robust research within policy processes is a good thing [3–6]; specifically, that research-informed policies can help prevent harm, maximise resources, tackle the serious challenges facing contemporary healthcare, and otherwise contribute to improved health outcomes [7–10]. Ensuring that health policy-makers have the capacity to access, generate and use research in decision-making is a priority.

Despite a rapidly growing body of literature about the use of research in policy-making, we have a limited understanding of how best to help policy-makers use research in their day-to-day work, partly because most of the literature is either descriptive or theoretical. Descriptive studies often struggle to identify findings that are transferable to other settings, which can limit their utility for informing intervention design [ 11 ]. Theoretical studies have produced many models, concepts and frameworks, but these are often hard to operationalise [ 12 ]. For example, Field et al. [ 13 ] found that one such framework, while frequently cited, was used with varying levels of completeness and seldom guided the actual design, delivery or evaluation of intervention activities. The authors conclude that prospective, primary research is needed to establish the real value of theoretical models and tools. Yet, testing specific strategies to increase or otherwise improve the use of research in policy processes is relatively underdeveloped [ 14 , 15 ]. Consequently, we have a plethora of ideas about what may or may not support research-informed policy-making, but little robust empirical knowledge about what strategies are effective in which contexts.

This paper brings together information about interventions designed to build capacity for using research in policy processes and explores possible transferable lessons for future interventions.

Using research in policy-making

The concept of research-informed policy has emerged from multiple disciplines with different paradigmatic influences, leading to debates about what we should expect of policy-making and the role of research within it [ 4 , 16 – 18 ]. In summary, many reject what they see as inaccurate assumptions about the extent to which policy-making is a linear technical–rational process in which ‘objective’ academic research can be used instrumentally (i.e. to direct policy decision-making) [ 5 , 16 ]. This argument is premised on policy being a rhetorical arena [ 19 ] where, often, facts are uncertain, values are contested, stakes high and decisions urgent [ 20 ]. Such views challenge the expectation that improved access to research, or greater capacity to use it, will result in increased use [ 21 ]. Indeed, several studies show that individual attributes and contextual factors frequently prevent the use of apparently helpful evidence (e.g. [ 22 – 25 ]). Some go further, questioning the assumption that there is a policy-research ‘gap’ that needs ‘bridging’, or that more use of research in policy-making will necessarily produce better health outcomes [ 26 ].

Counter arguments highlight the enormous number of policy-makers who are actively and effectively engaged (and often qualified) in using research, the success of strategies for improving research use such as policy dialogues, rapid review programs and partnership approaches [ 27 – 33 ], and the many cases where research has demonstrably influenced policy-making with positive results [ 2 , 33 – 35 ]. From this perspective, research is only one source of evidence amongst many, but it is a critical source that has the potential to guide agendas, maximise the functioning of programs, support governments and service providers to act in the public interest, and hold them accountable when they get it wrong [ 2 , 6 , 36 , 37 ]. As Lomas puts it, “ …the goal here is not for the imperial forces of research to vanquish all other inputs. Rather it is to have the role of research optimized in the context of other perfectly reasonable considerations… ” ([ 38 ], p. xiii).

Building capacity to use research in policy-making

Capacity-building is conceptualised as a suite of strategies that seek to “ increase the self-sustaining ability of people to recognize, analyse and solve their problems by more effectively controlling and using their own and external resources ” ([ 39 ], p. 100). Thus, effective capacity-building interventions facilitate not merely technical skills development, but increased agency and agility [ 40 ].

Capacity is a multi-dimensional concept spanning different levels – individual, interpersonal, organisational and environmental [ 41 , 42 ] – each of which is likely to require quite different intervention strategies and evaluation methods [ 43 ]. Capacity-building in policy agencies uses “ a variety of strategies that have to do with increasing the efficiency, effectiveness and responsiveness of government ” ([ 40 ], p. 212). In this context, performance and accountability are essential [ 4 , 42 , 44 ]. Greater capacity to use research can enhance the former by increasing the effectiveness of public policies and programs, and the latter by providing an independent and scientifically verified rationale for decision-making [ 42 , 45 , 46 ]. In this study, we use the term ‘research’ broadly to include collections or analyses of data, or theory, found in peer-reviewed papers, technical monographs or books, or in grey literature such as internal evaluations and reports on authoritative websites, or presentations or advice from researchers.

Multiple dimensions of capacity affect the use of research in policy-making. At the most concrete level, policy-makers must be able to get hold of useful research. This is research that (1) addresses policy problems, including costs and benefits, and produces findings that can be applied to local decision-making; (2) is accessible, i.e. readable, timely and easy to get hold of (not buried in information tsunamis or located behind firewalls); and (3) which has policy credibility, i.e. is conducted with sufficient scientific rigour to render it reliable, but is also methodologically fit-for-purpose and communicates the findings persuasively [ 47 – 49 ]. Thus, there is a scope to enhance the conduct, presentation and synthesis of research itself, as well as the means by which policy-makers access it [ 50 ].

Policy-makers may also need specialist knowledge and skills to access, appraise, generate and apply research in their work. Although many have substantial skills and experience in these areas, others do not [ 51 ]; they lack confidence and want training [ 46 , 52 ]. Individuals’ beliefs about the value of research and requirements of different policy roles are also considered to be important mediators of use [ 50 , 53 ].

Organisational capacity can constrain or enhance research use, irrespective of individual capabilities [ 54 ]. Institutional infrastructure, resourcing and systems, leadership and the underlying workplace culture, have an enormous impact on practice norms and expectations, and opportunities for skills development and application [ 50 , 54 – 57 ]. Organisational culture is notoriously hard to access and transform [ 58 ], but is considered to be a fundamental indicator, and facilitator, of research-informed practice [ 42 , 59 ]. This meso level is, in turn, impacted by the wider institutional systems in which policy organisations operate [ 51 ]. For instance, dominant views about what forms of evidence are valued and how they are sanctioned or incentivised in policy processes shape what capacities are needed and how they operate, but this sphere remains largely outside the scope of intervention trials [ 41 , 42 ].

The quality of the relationships between policy-makers and researchers is also (and increasingly) seen as critical for improving the development and uptake of useful research for policy-making [ 32 , 60 – 62 ]. Here, capacity-building focuses on forging or enhancing connections across a spectrum of interactivity from information exchange forums to formal partnerships and the co-production of research [ 63 – 65 ]. Individual, organisational and institutional capacity have crucial roles to play in forming and sustaining interpersonal networks [ 42 , 50 ].

These dimensions of capacity indicate the breadth and complexity of the capabilities that interventions can address, concerned as they are with products, resources, skills, beliefs, values, systems, institutional structures, boundaries and relationships, most of which have interdependencies.

This review explores a range of interventions designed to build capacity for research use in policy processes. Outcomes of interest are those related to capacity to use research, including capacity to access and apply research, to work productively with researchers and intermediaries such as knowledge brokers, the establishment of workforce and infrastructure supports, intention to use research and actual use.

Our purpose is two-fold – first, to describe the main characteristics of the interventions, namely the study designs, intervention goals and strategies, implementation settings, participants and outcomes. Second, to consider how process effects and outcomes were generated in those settings (see next section for definitions of these terms) drawing on theory from other studies to develop tentative hypotheses that might be applicable across varied intervention sites. Understanding context and theory is essential for understanding how interventions function in general [ 66 – 68 ], and how research is mobilised [ 69 – 71 ]. Our aim is to provide a first step towards developing practical and theoretically grounded guidance for future interventions of this type. The research question addressed is: What causal mechanisms can best explain the observed outcomes of interventions that aim to increase policy-makers’ capacity to use research in their work? Note, by ‘intervention’ we mean a purposeful attempt to bring about some identified change, this may involve the use of one or more ‘strategies’. We use the term ‘theory’ broadly to encompass a range of formally investigated and informal hypotheses about how intervention strategies bring about change, or why they do not.

Our approach: a realist scoping review

This is a realist scoping review. In general, scoping reviews are “…a form of knowledge synthesis that addresses an exploratory research question aimed at mapping key concepts, types of evidence, and gaps in research related to a defined area or field by systematically searching, selecting, and synthesizing existing knowledge” ([72], p. 129–4). In this case, the research question and synthesis were strongly informed by realist philosophy and realist review methods; however, it does not fully adhere to the current criteria for conducting a realist (or rapid realist) review – hence the hybrid term (see Additional file 1 for a comparison of scoping reviews, realist reviews and our methodology).

A realist approach is used because we aim to produce findings with potentially transferable implications for the design, implementation and evaluation of other interventions in this field. Realist reviews attempt to identify patterns that are articulated at the level of ‘middle range’ or program theory. This is thought to be most useful for our purposes because it is specific enough to generate propositions that can be tested, but general enough to apply across different interventions and settings [ 73 – 75 ]. Realist approaches are methodologically flexible, enabling reviews of broad scope suited to addressing the complexity of interventions in organisational systems [ 76 ], and for addressing questions about how interventions work (or not) for different people in different circumstances [ 77 ]. This inclusive approach enabled us to capture studies that used innovative and opportunistic strategies for enhancing research use in policy, and diverse methods for evaluating these strategies.

Mechanisms, process effects and outcomes

Realist evaluations and syntheses develop and test hypothesised relationships between intervention strategies, implementation contexts, causal mechanisms and observed outcomes. However, contexts, mechanisms and outcomes are not fixed entities in a tidy causal chain, but can shift position depending on the focus of the inquiry, i.e. they function as a context, mechanism or outcome at different levels of analysis and in different parts of a program [ 31 ]. For example, if the outcome of interest is research utilisation, then increased capacity is likely to be an important mechanism; however, if the inquiry takes capacity itself as the outcome of interest (as we do in this review), the focus will be on the more granular mechanisms that built this capacity. Therefore, in this review, we are looking for mechanisms that cause, and thus precede, capacity development. Different foci, and the corresponding shift in where we look for elements of causality, have implications for the review and synthesis of intervention studies, as we now explain.

Where interventions are new, or newly adapted, causal relationships are often examined at a relatively detailed level of granularity that emphasises specific points in the causal pathway. Broadly, we see process evaluation, formative evaluation and much of the qualitative research that is conducted in trials as trying to identify and explain how intervention strategies bring about process effects. These are the range of immediate responses (ways of interacting with and ‘translating’ the intervention) that shape distal responses to the intervention (final outcomes).

Intervention designers have hypotheses about what process effects are needed to achieve desired outcomes. Often, these are not articulated, sometimes because they are obvious (e.g. if no one attends a workshop then it clearly cannot be successful), or because these interactions are subsumed in blanket terms like ‘engagement’ and ‘participation’, which mask crucial details about how people engaged and participated, or why they did not. For example, intervention studies often report on the importance of ‘champions’, i.e. members of an organisation who actively promote an intervention or practice change. When looking at an intervention study as a whole, championing may function as a causal mechanism in that it helps to bring about change, but from a granular perspective, championing can be conceptualised as a process effect because (1) effective championing mediates intervention outcomes (the influence of champions on organisational change initiatives is well documented), and (2) it is generated by interactions between the intervention, participants and context (i.e. it is caused – people make conscious judgements about acting as champions). Consequently, in order to inform implementation improvements, and program improvement and adaptation, it may be useful to understand the causes of championing in more detail, for example, by asking ‘In this intervention and context, what perceptions and considerations influenced who became a champion and who did not?’. At this level of analysis, process effects are treated as proximal outcomes. We explore this in more detail elsewhere [ 78 ].

Figure  1 depicts how these two levels of focus might be applied to a research utilisation capacity-building intervention. Figure  1a illustrates a granular approach where the evaluation focuses on the relationship between immediate perceptions and experiences of the intervention (which function as mechanisms in this scenario) and how they lead to process effects (capacity-related responses such as participation in skills development programs, relationship development with researchers, or managers funding access to research databases and other workplace supports). This contrasts with Fig.  1b , which depicts an evaluation that is more focused on distal outcomes and thus takes a higher-level perspective, collapsing the causal detail and blurring the distinction between process effects and mechanisms. From this perspective, many process effects are indeed mechanisms.

Fig. 1 Different levels of focus depending on the evaluation purpose and outcomes in studies of research utilisation capacity-building interventions. The black dotted lines reflect the focus of enquiry. In (a), the focus is on immediate responses to the intervention, namely process effects and the mechanisms through which these are brought about. In (b), where the focus of the inquiry is on more distal outcomes, using a higher level of analysis to investigate causality, mechanisms and process effects are functionally the same thing, i.e. proximal responses to the intervention.

In practice, this distinction between granular and high-level foci is usually a question of emphasis rather than a demarcation, and it is not always clear where the focus of an evaluation lies. Although realist findings are often tabulated, which can imply that phenomena exist in strict compartments, the propositions that describe causal pathways tend to suggest greater fluidity and often incorporate process effects and the mechanisms that generate them.

We highlight this distinction here because our findings depend on both the models above. This reflects the different evaluative emphases of the reviewed studies, and their diverse outcome measures. Consequently, we present findings that include intervention strategies, contexts, mechanisms, process effects and outcomes.

Focus of the review: what and who?

This review is founded on the realist assumption that the success or failure of an intervention depends on the interactions between the intervention strategies and the implementation context. Identifying patterns in these interactions enables us to develop tentative theories about what may plausibly occur in other interventions [ 73 , 79 ]. Thus, in reviewing the literature, close attention must be paid to the characteristics of the intervention, the people it targets and the settings in which it takes place. To this end, we differentiate between two types of intervention that are often conflated in the literature. This review focuses on research utilisation interventions that aim to increase the effectiveness with which professionals engage with research and use it in their decision-making. We see these interventions as having very different goals to research translation interventions that attempt to modify clinical practice in line with research-informed standards or guidelines. The former will generally attempt to build capacity in some form (e.g. by providing resources, training, reflective forums or partnership opportunities designed to enhance professional practice) and may increase individuals’ agency and critical engagement with research, whereas the latter more often seek to institutionalise adherence to a specific practice protocol that may constrain autonomy and critical reflection. Harrison and McDonald [ 80 ] make a similar distinction between the “ critical appraisal model ” of research-informed practice, where participants are encouraged to critique and incorporate concepts in their practice, and the “ scientific-bureaucratic model ” that attempts to regulate practice. The contrasting ‘politics of knowledge’ inherent in these two models are likely to trigger quite different responses in relation to professional identity, self-determination and organisational accountability [ 38 , 80 ].

Second, we differentiate between research utilisation capacity-building interventions targeting policy-makers in government agencies and those targeting health practitioners based in service organisations. While both tackle complex systems with distributed decision-making, we believe that the contextual characteristics of bureaucracies differ from those of clinical practice in ways that may affect the functioning of interventions. This is especially pertinent for interventions that attempt to build research utilisation capacity because the forms of research that are most useful in these contexts are likely to differ. For example, biomedical and clinical research (including randomised controlled trials) may be more useful in healthcare settings, whereas health policy-makers might be better informed by research that incorporates scaled analyses of effectiveness, reach and costs [ 47 ]. Thus, studies included in this review are limited to research utilisation capacity-building interventions that target policy-makers.

Search strategy

The literature encompassed in this review was identified in three ways:

  • From three existing reviews of published peer-reviewed papers that reported on the evaluation of strategies aimed at increasing the use of research by decision-makers in public policy and program development. The first two reviews are complementary; one [81] captured papers published between 1999 and 2009, while the other [82] captured papers published between 2009 and 2015. These reviews focus on identifying the strategies employed to increase research use and the “factors associated with these strategies that are likely to influence the use of research” [82]. The third review [42], conducted for the United Kingdom’s Building Capacity to Use Research Evidence program, focused on primary intervention studies aimed at developing capacity for research use in public sector decision-making. It included studies published between 2003 and 2014. These sources were deemed to have identified most of the relevant peer-reviewed papers pertaining to the testing of research utilisation interventions in policy agencies between 2001 and 2015.
  • From two searches of academic databases: one on PAIS and the other on Web of Science (WoS), covering articles published between 2001 and 2016. See Additional file 1 for the search syntax and filters used, and the rationale for selecting these databases.
  • From citation snowballing of relevant articles identified through the sources above.

Fig. 2 Review search strategy

Inclusion criteria

Studies were included provided they met the following criteria:

  • Interventions. The study reported on an intervention designed to improve or increase policy-makers’ capacity to use research in their work. We took a broad view of capacity-building that included strategies for supporting research use as well as strategies for advancing it, so studies were included if they were designed to enhance access to research; skills in appraising research, generating research (commissioning or conducting it) or applying research; organisational systems for developing and supporting research use; and/or connections with researchers (including partnership work and co-production). Any strategy that employed one or more capacity-building activities aimed at individual-level or organisational structures and systems was eligible, irrespective of whether the strategy was part of a research study or initiated/implemented by policy agencies, or a combination. Studies were excluded if they focused on policy development capabilities in which the use of research was a minor component (e.g. [ 83 ]); evaluated one-off attempts to get policy-makers to use research for a specific policy initiative; or focused on specific fields other than health such as education or climate change (e.g. [ 84 ]). Studies that addressed the use of research by policy-makers in general were included (e.g. [ 41 ]).
  • Populations. The intervention targeted policy-makers, by which we mean either (1) non-elected civil servants working in some aspect of policy or program development, funding, implementation or evaluation within government agencies such as Departments of Health, and/or (2) senior health services decision-makers, e.g. executives in regional or federal health authorities who have responsibility for large scale service planning and delivery, and/or (3) elected government ministers. We included studies where participants included policy-makers and other groups (e.g. frontline clinicians, NGO officers or researchers) and where some disaggregated data for the different groups was reported, but not those in which intervention effects/outcomes were reported in aggregated form, for instance, health staff at all levels took part in the intervention but the data for senior health services decision-makers was not reported separately from that of frontline staff (e.g. [ 25 , 85 – 91 ]).
  • Study design. The intervention was evaluated. This includes process evaluations and reports of ‘soft’ proximal outcomes such as satisfaction and awareness, which we conceptualise in our analysis as mechanisms. As mentioned, opportunistic evaluations of initiatives that were planned outside the auspices of a research study were included, but studies were excluded if they described achievements or ‘lessons learnt’ but did not explain how such information was captured as part of an evaluation strategy (e.g. [ 32 , 84 , 92 – 94 ]).
  • Publication. The evaluation results were published in English between 1999 and 2016. This date range was judged by the authors as likely to encompass the vast majority of relevant publications in this field, and included the earliest relevant study of which we were aware [ 95 ].
  • Settings. We included studies from all countries, including low- and middle-income countries. These settings are likely to differ considerably from those of high-income countries (e.g. less developed infrastructure, fewer resources, different health priorities and a poorer local evidence-base [ 96 ]) but, together, the studies provide insights into creative interventions and produce findings that may have implications across the country income divide in both directions.
  • Quality. Studies were excluded if they were “ fatally flawed ” according to the appraisal criteria used in critical interpretive synthesis [ 97 ]. Following Dixon-Woods et al. [ 97 ], we used a low threshold for quality pre-analysis due to the diversity of methodologies in our final sample, and because our method of synthesising data would include judgements about the credibility and contribution of studies.

We took an inductive approach to analysis (e.g. [ 45 ]) guided by realist thinking rather than starting with an a priori framework or conceptual categories. This was due to the breadth of strategies we were investigating and their diverse theoretical implications. As befits an exploratory realist review, our aim was to identify “ initial rough theories ” that might explain how and why different strategies worked (or did not) within those intervention settings [ 75 ]. While none of the studies are realist themselves, many pay close attention to context, process and interaction, so there is some rich information with which to start developing tentative hypotheses about how and why the interventions had the effects they did [ 75 , 98 ].

These terms are used in accordance with the realist movement associated with the United Kingdom’s RAMESES projects, which aim to produce quality and publication standards for realist research [ 99 ].

Following Best et al. [100], a six-step process was used in which we (1) read and reread the 22 articles to gain familiarity with the data; (2) extracted details about what the intervention comprised, contextual features and the studies’ empirical findings, including any clues about interactions and causality; (3) extracted program theory articulated by the authors of these studies, i.e. explicit or inferred hypotheses, concepts or principles about what the intervention was expected to do or how it was expected to work [75]; (4) reviewed the intervention strategies, findings and theories of related intervention studies that had similar aims but did not meet our inclusion criteria; (5) identified further relevant literature that might help develop new theories and/or additional detail, including papers cited by authors of the primary studies and literature from fields that were likely to shed light on or challenge our findings (time and resource constraints limited this step, so it was pragmatic rather than comprehensive, drawing on materials known to the authors as well as found articles); and (6) summarised the connections we inferred between intervention strategies, implementation contexts, underlying causal mechanisms and observed outcomes in strategy + context + mechanism = outcome (SCMO) configurations (this was an iterative process that repeatedly cycled back through the activities described above). Table 1 describes how the main concepts were defined and identified in this process.

Definition and identification of concepts in SCMO configurations

AH led this process. SB read the synthesis of studies and critiqued the draft SCMO configurations. SJR independently read the included studies, made notes in relation to steps 2 and 6, and critiqued the revised configurations. AH and SJR iteratively workshopped draft findings to reach agreement, drawing on feedback from our co-authors to resolve remaining areas of debate and to refine the final SCMO configurations.

Search results

As Fig.  2 shows, 5, 14 and 18 articles, respectively, were identified from the three reviews. Of these 37 articles, 15 met our eligibility criteria. Another 3 articles were identified from the PAIS search, and 3 from the WoS search. A further article was identified from citation snowballing, resulting in 22 included studies. One article from the PAIS search was excluded on quality criteria as it provided so little detail of the intervention strategies and evaluation methods that we could not see how the conclusions were reached. Additional file  2 presents a tabular overview of the included studies’ aims, design, intervention strategies, participant numbers and characteristics, context, evaluation methods, outcome measures, and key findings. This table includes theories, models or frameworks mentioned in the article as informing the design of the intervention or evaluation.

Study design

Study design terminology is often inconsistent. Here, we use three terms to describe the overarching study design – ‘experimental’ indicates that the research team provided the intervention and evaluated it using some form of randomisation and control groups; ‘interventional’ that the research team provided the intervention and evaluated it, but not using experimental methods; and ‘observational’ that the intervention or initiative being evaluated was not designed as part of a research study. Based on these definitions, 12 of the studies appeared to be observational, seven interventional, and three experimental. These included process evaluations and opportunistic evaluations of projects, services or strategies that had been initiated by others.

Outcomes of interest were diverse, ranging from activities that can be measured objectively, such as increased use of systematic reviews [ 95 ], to more conceptual outcomes such as greater strategic use of data [ 101 , 102 ], ‘catalysing’ research use [ 103 ] and fostering a culture of critical thinking [ 104 ].

Domains of capacity-building and support

Eighteen strategies for supporting or increasing capacity were identified within four non-exclusive domains of research use capacity. These were ‘Access to research’ (16 studies); ‘Skills improvement’ in accessing, appraising and/or applying research in policy work (13 studies); ‘Systems improvement’ tackling organisational or cross-organisational infrastructure, processes and/or resources (8 studies); and ‘Interaction’ with researchers (11 studies) (Table  2 ).

Research utilisation domains and strategies used within the reviewed studies

Most evaluations used case study methodologies. The main data collection methods were interviews (14 studies), focus groups (6 studies) and questionnaires (11 studies: 6 cross-sectional post-intervention and 5 pre/post); therefore, outcomes were largely self-reported. Two studies reported on the validity of their survey instruments. Eight studies also reviewed relevant documents, two conducted a network analysis, and one used observations of the intervention activities. Three studies used independent experts to assess intervention outputs. Participation data such as attendance rates at workshops and numbers of participants in the intervention and evaluation were reported to varying extents. The nine studies that included statistical analyses had diverse outcome measures, and used different sampling, data collection and modelling methods. See Additional file  2 for further details on aspects of study design.

Two whole-of-intervention design strategies were also widely used, namely needs-based tailoring (10 studies) and multi-component programs (17 studies) (Table  3 ).

Focus of intervention studies targeting research utilisation in policy-making 2001–2016

Intervention participants and settings

All studies targeted capacity in policy-makers and/or policy agencies in that they were attempting to support, increase or otherwise improve policy-makers’ access to research; policy-makers’ skills in accessing and/or using research; the capacity of systems in policy organisations to support research use; and/or interactions between researchers and policy-makers that were intended to facilitate knowledge exchange or partnership work. Many studies had more than one category of participant, e.g. a mix of policy-makers, practitioners and researchers. The majority included bureaucrats in government departments of health or equivalent at the regional level (11 studies) or national/international level (9 studies). Eleven studies targeted government employees running regional health services, and two included elected (ministerial) policy-makers.

The intervention settings spanned 18 countries. There were 17 single-country studies conducted in Canada (n = 5), Australia (n = 3), Nigeria (n = 3), The Netherlands (n = 2), Burkina Faso (n = 1), Ethiopia (n = 1), Fiji (n = 1) and the United States of America (n = 1). Four multi-country studies were conducted, respectively, in Bangladesh, Gambia, India and Nigeria; Cameroon and South Africa; Bolivia, Cameroon, Mexico and the Philippines; and Argentina, Bangladesh, Cameroon, Nigeria and Zambia. Finally, one was an international collaboration run from Canada. Thus, 10 studies took place within one or more low- and middle-income countries [105].

Program theories

The 22 studies draw on a diverse suite of theories and concepts. None present a formal program theory, but many use frameworks to guide intervention development. The RCT conducted by Dobbins et al. [ 106 ] is based on diffusion of innovations [ 107 – 109 ], Brennan et al. [ 110 ] use the theoretical domains framework [ 111 ], and Shroff et al. [ 103 ] adapt a knowledge translation framework [ 45 ]. Others draw eclectically on concepts from a variety of sources, including (1) studies of how research is used in policy-making – both systematic reviews [ 49 , 112 ] and individual studies such as Weiss’s seminal typology of research utilisation [ 113 ]; (2) models for mobilising research such as knowledge transfer frameworks [ 114 – 116 ] and partnership approaches [ 61 , 64 , 117 – 119 ]; (3) analyses of barriers to researcher–policymaker relationships [ 53 , 120 ] and ‘gap-bridging’ solutions such as the linkage and exchange model [ 121 ], and the use of knowledge brokers [ 122 ]; (4) studies of organisational support for research use [ 59 , 123 ]; (5) guidance for facilitating the use of research in policy-making, including in low- and middle-income countries, e.g. the SUPPORT tools developed by Oxman et al. [ 124 , 125 ] and Lavis et al. [ 126 ]; and (6) WHO commissioned reports on building capacity for health policy and systems research [ 50 , 54 ].

A minority of studies developed their own program theory or conceptual framework that guided the intervention design and evaluation (e.g. [ 91 , 102 , 127 ]), and some report using frameworks primarily for the evaluation (e.g. [ 41 , 101 , 104 ]).

Intervention strategies, contexts, causal mechanisms and outcomes

The findings derived from our realist analysis of the 22 studies are now presented. See Additional file  2 for a summary of each study’s design, outcomes and informing theory.

Overarching contextual considerations and their implications

Nearly all the studies in this review conceptualised research use as contextually contingent. They assumed that some degree of responsivity to local needs was required by research providers, and that policy-makers’ judgements about the usefulness of research were flexible, according to shifting circumstances, and based on far broader criteria than academic hierarchies of evidence, e.g. “ Research is only as useful as potential users perceive it to be, irrespective of its methodological rigour or its findings’ power ” ([ 128 ], p. 241). Some pointed to the limitations of technical-rational models of research use in political decision-making [ 95 , 102 , 128 ], and over half emphasised the complexity of policy-making [ 29 , 41 , 62 , 102 – 104 , 106 , 110 , 127 , 129 – 131 ]. Terminology reflected the acceptance that policy will never be based entirely on research [ 5 ]. Indeed, few of the studies used the term ‘evidence based policy’ unquestioningly, preferring more nuanced post-evidence based terms such as ‘evidence-informed policy’ [ 103 , 132 ], ‘evidence-informed decision making’ [ 104 , 106 , 133 ], and ‘research-informed policy’ [ 29 , 110 , 134 ].

From our analysis, it appeared that there were some similar contextual factors in all the studies reviewed, despite their very different settings (e.g. Burkina Faso and Canada). This suggests there may be universal influences on the use of research in policy-making of which virtually every capacity-building intervention should take account. The main contextual factors identified were:

  • Research characteristics: Policy-makers’ use of research was influenced by the degree to which they were able to obtain research that was relevant, applicable and easy to read. This suggests the need for strategies that increase the availability and accessibility of fit-for-purpose research findings. Credibility of research and researchers is also a consideration.
  • Individual characteristics: Policy-makers’ use of research was affected by existing research-related knowledge, skills and self-confidence, and views about the value of research. The former clearly indicates task-appropriate skills development and support, but the latter suggests that, in some cases, it will be necessary to influence beliefs.
  • Interpersonal characteristics: Although neither community is heterogeneous, there were common differences between policy-makers and researchers in terms of language, values, expectations and incentives. This suggests the need for strategies that either bridge these communities, or form more cohesive connections between them, potentially blurring their boundaries.
  • Organisational characteristics: Research use in policy agencies was shaped by organisational culture. Agency remits, resources and constraints further influenced how research was prioritised and used. This suggests the need to activate structural mechanisms that increase expectations of, and facilitate, research use in day-to-day practice. Underlying values and assumptions may need to be influenced. Leadership by managers and opinion leaders is likely to be key.
  • Environmental characteristics: Policy-making environments were complex, political and responsive, affecting what research could be used for what purposes, and the time available for this. The way that research is (or can be) interpreted in this argumentative arena is likely to determine its role in policy processes, thus relevance, applicability and credibility are not fixed research characteristics but determined in relation to circumstances. This suggests that tailored research (both in terms of methodology and presentation), rapid production of findings and responsive dialogue with researchers may be valuable. Methods for supporting this are likely to include commissioning and/or research-policy partnerships and/or internal generation of research by policy agencies.

These overarching contextual factors align with other reviews, which conclude that policy-makers’ capacity and motivation to use research is shaped by forces such as these at micro, meso and macro levels [ 43 , 49 , 112 ].

The next four sections present our findings in relation to the four domains of capacity previously identified. As per the focus of this review, the emphasis in these results is not on the extent to which interventions were successful in effecting change, but on how change was effected or why it was not; consequently, the narrative overview in each of the results sections is on mechanisms. The tables that follow place these mechanisms in context by showing what intervention strategy was used; key contextual factors identified in the studies; possible causal mechanisms including those that appear to have been activated in the studies and those that were apparently required but were not activated; and any reported results relating to that strategy (including process effects and study outcomes) [ 73 , 135 ]. Our hypotheses describe mechanisms that were inferred from multiple studies and/or powerfully evident in one study, and are supported by theoretical or empirical implementation literature. Where low effects are observed, we hypothesise that one or more key mechanisms were not activated, or were activated too weakly to bring about the level of desired change. Note that the contextual factors described above are considered to be ‘givens’ in these tables and so are only reiterated where they seem to be most crucial.

Access to research (Table 4 )

Access to research intervention strategies, context, mechanisms and impacts [ 29 , 41 , 91 , 95 , 128 , 129 , 103 , 104 , 106 , 110 , 133 , 134 , 138 , 202 ]


*In this and subsequent tables, not all the studies that target each domain will necessarily be included, for example, where a study’s strategies for increasing access was based on skills or systems improvement, or interaction, it is not cited in the table above. Calculations of the number of studies where process effects/outcomes were observed (in the last column) are based only on studies cited in the intervention strategies column

Mechanisms that appeared to underpin access to online research included awareness of resources and the relative merits of different kinds of research within them; valuing what was on offer; the efficiency with which research could be obtained; and confidence in using resources and their contents. When research was synthesised, tailored for specific users, and sent to them, ease of access and ease of use aided uptake, likely reinforced by increased policy-relevance and applicability. As with all domains, perceived fit between what was offered and policy needs/priorities was key. The tailored, contextualised and inclusive evidence synthesis provided by evidence briefs ticks many of these boxes. Commissioning rapid reviews maximised policy-makers’ engagement with and control over the research focus, methods and timeliness. The costs and process of commissioning are likely to increase investment in using the end-product.

The value of seminars seemed to be enhanced by tailoring and interactivity, and by the credibility and communicative skills of the presenters who engage policy-makers despite the often dry content. Meeting researchers at these seminars can break the ice and lead to further interaction.

Intermediaries such as knowledge brokers can facilitate access to research by providing navigational support in unfamiliar terrain. They provide a communicative bridge by helping policy-makers articulate needs and expectations and, in some cases, translate these for researchers. The intermediaries’ interpersonal skills, credibility, availability, ability to provide individualised support and perceived neutrality enabled the relationship to work, but this also requires time in less research-orientated settings.

Skills improvement (Table 5 )

Skills improvement intervention strategies, context, mechanisms and impacts [ 41 , 101 – 104 , 110 , 127 , 130 – 134 , 137 ]


Mechanisms for skills improvement in using research appear to include policy-makers believing in the relative advantage of participation, which is affected by the perceived appropriateness/desirability of intervention goals and the relevance, applicability, accessibility and credibility of intervention content. Andragogical principles that emphasise participant-driven learning (manifested in a partnership approach that may include needs consultation, tailored content, informal information exchange and practice opportunities) engage policy-makers. Participants’ active input appears to maintain interest and investment. Strengths-based learning that develops self-efficacy increases motivation and can empower policy-makers to become research champions and train or mentor others. Strong leadership support for the intervention and its goals, including modelling research use practices, is emblematic of wider organisational commitment to using research. Targeted policy-makers will have to find training manageable if they are to attend; this may require pragmatic timing and workarounds.

Mentoring works by providing individualised guidance and support about the real-world application of new knowledge and skills, which in turn increases self-efficacy as abstract learning is turned into concrete practice. Mentors’ credibility, experience and relationship skills are crucial. Participants’ accountability, triggered by the need to present their work and/or have it assessed, increases motivation to develop competence in using new knowledge and skills.

Systems improvement (Table 6 )

Systems improvement intervention strategies, context, mechanisms and impacts [ 41 , 102 – 104 , 127 , 133 , 134 ]


The mechanisms underpinning systems improvement appear to be diverse, reflecting the breadth of strategies. Diffusion of innovations theory [ 108 , 109 , 136 ] helps make sense of findings across the studies in relation to interactions between new infrastructure, tools and processes. It posits that new systems must be compatible with key professional and organisational culture and values, flexible enough to accommodate aspects of existing practice that participants will not relinquish, sufficiently easy to use so that policy-makers are not deterred, and have relative advantage, i.e. seem better than existing systems for the individual and for the organisation, so that adaptation feels worth the effort. Participatory planning and implementation of systems improvement with potential participants may most effectively engage and enthuse them, increasing their readiness for change as well as making the intervention more fit-for-purpose.

Improved systems widen opportunities for using research by increasing ease of access. Where research skills are brought into and developed within the organisation, there is strengthened belief in managers’ commitment to research use. Co-location and control of expertise are likely to increase the policy-relevance, applicability, accessibility and, probably, timeliness of research outputs and advice. In-house research expertise provides opportunities and incentives that policy-makers may find motivating. In general, systems improvements help to embed research use in day-to-day practice and demonstrate managerial commitment, both of which contribute to a research-oriented culture.

Interaction with researchers (Table 7 )

Interaction intervention strategies, context, mechanisms and impacts [ 41 , 62 , 91 , 102 , 103 , 110 , 127 – 129 , 131 , 137 , 138 ]


Mechanisms for productive interactions between policy-makers and researchers appear to include mutual commitment to investing time and effort in interaction, and mutual interest (including the identification of benefits to both parties) in the endeavour. Trust, respect and communicative ease underpin relationship formation, but this takes time to develop and may require repeated interactions. It also helps when researchers are perceived as neutral, dispassionate contributors.

Where positive interaction is underway it can sensitise and upskill both parties through learning from each other about their values, work contexts and practices. Interactions are more sustainable when there is strong organisational support, and where formal arrangements are put in place rather than relying on individuals (who may move on). Leadership and championing from respected ‘insiders’ may motivate staff to engage with the intervention and put it into practice. Known contacts can act as linkage agents, introducing people to networks and keeping them connected. Collaboration increases ownership of and investment in the research process and outputs, but only when it is genuine, i.e. when both parties have the power and ability to shape critical decisions and have input into processes. However, genuine collaboration is often hard to facilitate. Good governance arrangements can help by ensuring that costs and rewards are agreed and shared, roles are clear, and expectations are articulated and met. Reflexivity, namely paying attention to partnership processes, critiquing and seeking to learn from them, perhaps through developmental evaluation approaches, may combat the lure of traditional silos and disciplinary norms that have been found to undermine collaborations.

One of the challenges in evaluating interactive initiatives is the increased entanglement of strategies, mechanisms, process effects and outcomes. For instance, existing positive cross-sector relationships may function as both a context and a mechanism; trust may function as both a mechanism and process effect; and improved relationships may be an outcome while also providing context for further dialogue and partnership work. Thus, there are dual functions and feedback loops implied in much of this theorising.

Whole-of-intervention design strategies (Table 8 )

Key whole-of-intervention strategies, context, mechanisms and impacts

a These outcomes are speculative: there was no clear evidence of outcomes relating to these strategies in the studies

Although many interventions were described as ‘tailored’, only 10 of the reviewed studies both reported using formal needs analyses or consultative/collaborative strategies for determining needs and preferences and gave some indication of how this shaped the intervention. There was very little information about how this might have affected responses to the intervention. Nevertheless, it seems likely that tailoring based on accurate needs assessment will maximise an intervention’s compatibility with local needs and practices, and its ability to build on local strengths. Where participants collaborate in tailoring, they are more likely to feel like respected partners in the intervention and thus to have ownership of and investment in its outcomes.

As shown in Table 3, most studies that employed multiple intervention strategies did so to improve capacity across two or more domains (e.g. access, skills and interaction) in order to address different levels of support and constraint in research use. Only three studies used multiple intervention strategies to improve capacity in a single domain (e.g. a combination of training workshops, mentoring and practice assessment used in conjunction to build individual skills). Possible mechanisms are triggered by the interaction and complementarity of multiple strategies. These effects may be amplified when multiple domains are targeted, because strengthening capacity in one area (e.g. organisational systems) is likely to support capacity growth in other areas (e.g. individual skills), and strategies may function synergistically to shape a conducive environment for research use. As such, they may both represent and facilitate a culture of research use.

The 22 studies in this review display a diverse suite of theories, concepts and frameworks from different disciplines and fields. We identified 18 intervention strategies and two key design strategies targeting four domains of research use capacity, namely Access, Skills improvement, Systems improvement and Interaction. These studies reflect dominant concerns in the literature about the paucity of policy-usable research and difficulties locating it within information-saturated environments; the need for policy-makers to have adequate skills and confidence in using research, and for work processes and infrastructure to support this use; and the benefits of researchers and policy-makers developing relationships that facilitate mutual understanding, information exchange and collaboration. Underpinning much of the above are concerns about how the value of research in policy processes is perceived by policy-makers and how these beliefs are affected by organisational cultures.

Despite drawing on ideas from different traditions, most of these studies rejected linear pipeline models in which universally applicable research findings can be ‘transferred’, and favoured more nuanced notions of both the research product and the policy process. Prominent ideas included policy-making as information bricolage in which research findings are only one component (e.g. [ 41 , 102 , 103 , 106 , 137 ]); the rhetorical and political dimensions of research use (e.g. [ 91 , 128 , 129 , 138 ]); and research use as situated – dependent on myriad fluctuating contextual factors (e.g. [ 103 , 110 , 128 ]). Correspondingly, the findings indicated scant instrumental use of research. Indeed, they showed that even where specific research was valued and understood it was seldom translated directly into policy action [ 29 , 91 ], and that some forms of use did not correspond with established typologies [ 91 ].

Like others, we cannot identify one strategy as superior to others in building the capacity of policy-makers to use research [ 41 , 81 ]. Policy-making is a complex and contingent process, and the various capabilities that facilitate it operate at multiple levels, including the meso and macro levels, where local infrastructures, politics and issue polarisation are likely to impact what is viable [ 41 , 43 ]. A combination of strategies that are responsive to changing conditions is likely to be most appropriate. Further, regardless of the design features of the intervention, it will be interpreted and enacted differently in different settings [ 139 ]. Nevertheless, there are lessons from the 22 studies in this review that have transferable implications for the design and implementation of research utilisation capacity-building interventions in policy agencies, some of which we now discuss.

It is axiomatic that policy-makers cannot use research if they do not know about it. To this end, efficient routes to relevant, clearly presented research findings are a boon. Tailored and contextualised syntheses – either in document form or via presentations, seminars and advice from knowledge brokers or researchers – seem to offer the most helpful means of providing this access. The benefits of tailoring information for specific audiences and using active dissemination are supported by other reviews [ 12 ].

While these studies clearly demonstrated the importance of research being policy relevant, applicable and credible, these concepts raise problems of their own. Regarding credibility, participants did not always judge the merits of research using academic hierarchies of evidence. Weiss suggests this is because policy-makers assess research credibility using ‘truth tests’ (are the findings plausible/legitimate?) and ‘utility tests’ (are proposed solutions feasible in our context?) [ 140 ]. Consequently, local data is often most compelling, and contextualisation is needed for the findings to have leverage in the discursive and rhetorical processes that characterise policy-making [ 5 , 141 ]. This suggests it may be unhelpful for interventions to focus solely on access to untailored systematic reviews and syntheses. Enhancing research for policy purposes involves trade-offs – increasing one attribute (relevance, credibility or accessibility) is likely to be at the expense of another. For example, presenting research findings clearly can enhance accessibility and relevance, but may also neglect important complexities, which decreases credibility. Therefore, solutions may be most effective when tailored on a case-by-case basis [ 142 , 143 ].

Most of the interventions that attempted to increase access appeared to conceptualise access as necessary but insufficient for effective use of research, hence their parallel attempts to address individual, interpersonal and organisational capabilities. They recognise that research use is an intensely social and relational process [ 12 ], and that to increase it we have to understand and work with supporting factors such as organisational culture, professional behaviours, local circumstances and different intervention facilitation approaches [ 57 , 144 – 147 ].

Training workshops – the primary intervention for individual capacity-building – appear to provide a useful starting point, provided they are well tailored (resulting in relevant and appropriately pitched content) and facilitate active input from participants. Workshops are generally well received, with high levels of self-reported improvement in understanding, but as a stand-alone intervention method they seem unlikely to result in substantial practice change. Uneke et al. praise the merits of one-off workshops [ 132 ], but follow-ups of RCTs find workshops alone to be costly and largely ineffective [ 85 ]; indeed, without support structures and systems even the best training will not be translated and sustained in practice [ 25 , 102 , 104 ]. The trade-off between intervention intensity and attendance by busy policy-makers, especially those at higher levels of seniority who also have a role in modelling and championing change, remains problematic.

The use of mentored practice seems to address some of these concerns, and is supported in the wider literature. The hypothesis that mentoring develops knowledge, skills and confidence has been tested in multiple studies with success where the mentor is appropriate and the mentor/mentee relationship is sound [ 148 , 149 ]. Wider benefits include connecting mentees to communities of practice, and inspiring them to become mentors themselves [ 150 ]. Shroff et al. [ 103 ] suggest that, in policy agencies, this requires that the mentor has local knowledge and applied policy expertise. Others note difficulties in identifying mentors and matching them with mentees, and in sufficiently freeing up mentors’ time [ 54 , 151 ].

Combining training and mentoring with performance goals and assessment (as three of the reviewed studies did) may offer the best option for embedding skills. For example, in their multi-country study Pappaioanou et al. conclude that “ without supportive follow-up and supervised application of skills, participants frequently continued to use the same work practices that they had used before they attended the training ” ([ 102 ], p. 1935). A recent meta-analysis found that goal-focused mentoring (otherwise known as coaching), even when short-term, improved individual, skills-based and affective outcomes [ 152 ]. Mentoring may offer greater support to staff who are less engaged in the workforce, so policy-makers who are new employees and/or who particularly lack confidence in using research skills may benefit most [ 151 ]. The terminology in this area is muddled, so it is important to consider the specific tactics and goals of the intervention rather than relying on terms such as knowledge brokering, coaching or mentoring to define them.

Knowledge utilisation is intimately linked to organisational structure and systems [ 38 ], so it is not surprising that these appear to play a key role in supporting individual efforts to access and use research. However, they must be fit-for-purpose, attuned to real practice needs, able to accommodate local adaptations and provide a clear benefit. Those developing and implementing such systems cannot afford to neglect the complex human dynamics within which they must work; consequently, participatory development of systems interventions may offer the best chance of success.

The outcomes of research-focused recruitment and performance management were not generally available in these studies, partly because their effects are often hard to disentangle from those of other strategies; however, they promise proximal and distal benefits. In-house research experts such as knowledge brokers may be more able to provide highly relevant, applicable, accessible and timely findings, and can also help to build wider capacity by supporting their colleagues’ skills development and contributing to a more research-orientated organisational culture. Evaluations of knowledge brokering in Scottish government departments and in Canadian healthcare organisations show a positive impact on research use [ 153 , 154 ], and their use has been found to strengthen the clarity of research commissioned by policy-makers [ 155 ]. Our review found that the use of onsite knowledge brokers had mixed results, possibly because of the time needed to build productive working relationships with policy staff [ 133 ]. Thus, longer-term use may be most beneficial. These findings concur with descriptive and other empirical studies about the importance of knowledge brokers’ interpersonal skills and credibility [ 156 – 158 ].

Interaction

There is little doubt that interaction between policy-makers and researchers – when it is positive and productive – tends to operate as a virtuous circle that increases trust and confidence in the benefits of further dialogue, and builds the capacity of both parties to understand and work with the other. For example, strategies that modelled respect for policy-makers as ‘knowers’ as well as ‘doers’ may have increased engagement, e.g. having senior policy-makers co-facilitate deliberative forums [ 132 ]. Mutual respect and commitment seemed to be crucial mechanisms, suggesting that participants in these initiatives should be chosen carefully where possible, and that enthusiastic but sensitive facilitation might be helpful in the early stages. Results also suggest that reflexivity and continual adjustment may be crucial in dealing with the inevitable challenges of collaboration. There are tools available to help parties prepare for partnership work (e.g. [ 159 ]), and to monitor and evaluate its functioning [ 61 , 62 ].

The extent to which interaction translates into research-informed policy-making is less certain. Neither increased understanding nor collaborative outputs necessarily influence policy decision-making; however, where sound relationships are formed, they do appear to support the ‘social life’ of research, helping findings and ideas from research move into, within and between networks [ 160 – 163 ]. Empirical studies repeatedly find that professionals, including policy-makers, are more likely to seek and use research obtained via trusted interpersonal channels rather than from formal sources [ 38 , 164 , 165 ]. Interaction can build relationships that enable researchers to operate within this sphere of trust and familiarity [ 60 , 165 ].

Despite disappointing outcomes in three of the five collaboration-focused studies, co-production remains a worthy goal, particularly in the light of a recent review that found an association between clinician involvement in research and improved healthcare performance [ 166 ]. The sticking point appears to be the capacity of individuals and organisations to facilitate genuine collaboration in which roles and tasks, resources and outputs are negotiated, and leadership is distributed across boundaries, resulting in shared expectations and mutually satisfying returns on investment. Early robust dialogue and fair but firm governance arrangements, underpinned by institutional support, seem to play an important role. The extent to which these policy-makers experienced a sense of ownership in the research process is likely to have been just as vital. As Zimmerman argues [ 167 ], ownership and buy-in are opposite concepts – ownership means collaborative development of ideas, decision-making and action, whereas buy-in means agreeing to someone else’s proposal. For example, in Kothari et al.’s [ 91 ] intervention the policy-makers’ involvement seems to have been limited to articulating the research questions and commenting on draft versions of the report. This consultative role places them closer to ‘endorsers’ than ‘co-researchers’ in the spectrum of co-production [ 65 ]. As Senge [ 168 ] puts it, people feel ownership of a shared vision not when they are playing according to the rules of the game, but when they feel responsible for the game. These findings align with other studies, including a systematic review which concluded that knowledge exchange in policy-making depends on the establishment of a viable cost-sharing equilibrium and institutionalised communication networks [ 43 ]. Evaluations of the CLAHRC partnerships concur and draw attention to the benefits of leveraging existing relationships when starting partnerships [ 30 , 169 , 170 ].

Key considerations in intervention design

The lack of detail about needs/situation analysis and how it was used to tailor interventions makes it hard to draw conclusions about the mechanisms that were or were not triggered. However, others argue strongly that generalised capacity-building interventions are seldom successful; rather, they should be designed in response to accurate analysis of existing capacity and concerns, research needs and local conditions, derived from consultation or – better still – collaboration with potential participants [ 171 ]. This is supported by calls more generally for collaborative needs assessment as a precursor to local tailoring of interventions [ 172 , 173 ], as this is likely to identify goals that are locally meaningful, make implementation plans more actionable, actively engage participants in translating data, and increase their investment in outcomes [ 174 , 175 ]. Understanding existing capacity is also vital for tapping into local practice strengths [ 70 ]. As Trostle et al. argue, “ capacity can often be increased more effectively by reinforcing existing structures than by building new ones ” ([ 176 ], p. 63).

Findings emphasised the power of organisational culture to shape receptivity to intervention ideas and resources; this suggests that tailoring should take account of these dynamics. Where the existing culture is not perceived by staff to value research, the intervention may need to target values, beliefs and leadership prior to (or in parallel with) the other strategies. Attention to an organisation’s history and self-narrative is essential for crafting strategies that will resonate in current circumstances [ 100 , 171 ].

Two-thirds of the studies used multiple strategies and targeted multiple domains of capacity. This is unsurprising given that supports and constraints in one area of capacity are known to influence capacity in other areas [ 39 , 50 ]. It is outside the scope of this review to discuss the relative merits of single- versus multi-component interventions – and others have dealt with this effectively elsewhere [ 177 ] – but it does seem that the degree to which intervention strategies are selected, tailored and implemented for local needs and practices may be more important than how many strategies are used, or in what combination [ 178 ]. Further, treating capacity-building as a participative endeavour is most likely to generate relevant, locally owned strategies [ 40 , 176 , 179 ]. We note that a 2011 meta-review of interventions designed to increase the use of research in clinical practice found that systematic reviews of multifaceted interventions reported greater effect sizes than those of single-component interventions such as audit and feedback [ 180 ]. However, the extent to which these interventions were tailored for local needs, or developed collaboratively, is not reported.

The cumulative findings of these studies are a reminder that interventions are complex systems thrust into complex systems [ 77 ]. Research utilisation interventions, like other interventions, succeed or fail via their interaction with context – people, places, circumstances and processes will determine what works where, and for whom. Thus, the ‘best’ strategies for effecting change are those that are most fit-for-purpose at the local level [ 181 ]. We are warned of the considerable challenges that attempts to build capacity present. For example, that, “ … capacity-building is a risky, messy business, with unpredictable and unquantifiable outcomes, uncertain methodologies, contested objectives, many unintended consequences, little credit to its champions and long time lags ” ([ 56 ], p. 2). Nevertheless, this review shows that there are successes, and informative failures, so we can continue to develop our understanding of how to foster capacity in further interventions.

Implications for future interventions

We note some areas that might be addressed fruitfully in further research-use capacity-building interventions in policy agencies:

  • Understanding research use in context: Several studies in our review concluded that they had insufficient understanding of the local practices and contexts that were being addressed. This aligns with wider arguments that we continue to have a limited understanding of how policy-makers engage with research ideas and integrate them with other forms of evidence [ 26 ], which affects how we conceive of and design interventions, and interpret findings. Designing interventions that are “ close to practice ” [ 182 ] in terms of fit with local research needs and context seems to be essential, but we may also require further investigation of the ‘irrational’ (aka differently rational [ 6 ]) and non-linear uses of research that dominate the use of research in policy more generally. One of the reviewers of this paper pointed out that the consistency of contextual factors between our findings and other studies (which we attribute here to enduring contextual regularities in policy-makers’ use of research) may, in fact, be an artefact of an enduring research paradigm. The reviewer states, “ it could also be because much of the research into evidence use is conducted from identical premises (more research should be used) and using identical methods (surveys or interviews asking why more research isn't used). ” This is an important reminder of the need to ensure that the theories, models and methods we use in investigating research use are sensitive to real world policy practices and contexts rather than perpetuating a ‘barriers and enablers’ framework that risks masking complex interactions, identities and processes [ 183 – 185 ].
  • Researchers’ capacity: Research-informed policy-making requires that researchers have the skills to produce policy-relevant research, present findings accessibly, and work productively with policy-makers, but these skills are often lacking [ 6 ]. Six of the reviewed studies attempted to build some aspect of researchers’ capacity in conjunction with that of policy-makers [ 62 , 127 , 131 , 132 , 137 , 138 ]; however, a cursory scan of the literature suggests that capacity-building for researchers in this field is less developed than for policy-makers, with very few intervention trials; the onus remains on policy-makers. This appears to be a gap that would benefit from further attention. It would likely require that policy-makers are involved in designing the content of such interventions. Many of the mechanisms suggested in our analysis are likely to be relevant.
  • Leadership: Findings reinforced the role of impassioned and strategic leadership as a crucial driver of organisational change (e.g. [ 103 , 104 , 128 , 133 ]), including leadership by high profile external experts in the wider policy environment [ 102 ]. However, there seemed to be few attempts to target or harness internal leadership in intervention activities. The pivotal role of organisational leaders, champions and opinion leaders in driving change is well established both in practice settings [ 57 , 186 – 188 ] and within policy agencies [ 18 , 44 , 50 , 51 ]; but we know little about how leadership dynamics within hierarchical and procedure-focused policy agencies function and effect change in relation to research utilisation capacity-building. Recent arguments about the strengths of distributed or collective leadership for knowledge mobilisation, including cross-sector partnerships, suggest that our conceptualisation of leadership may need to expand [ 32 , 100 , 170 ]. This area could benefit from further investigation.
  • Audit and feedback: With the exception of Peirson et al. [ 104 ], none of the studies reported using organisational-level progress feedback as a strategy, and none used audit and feedback – a process that gathers information about current practice and presents it to participants to facilitate learning, develop goals, create motivation for change and focus attention on change tasks [ 189 ]. Audit and feedback is well-established as a catalyst for professional practice change [ 189 ], including the uptake and use of research [ 144 , 190 ]. There is mixed evidence for its effectiveness, but a recent systematic review found that it generally leads to small yet potentially important improvements in professional practice [ 191 ], and it may be more successful than change techniques such as persuasion [ 192 ]. It seems a potentially valuable strategy within research utilisation interventions, particularly in the light of systems-influenced implementation frameworks that emphasise the need to establish performance feedback loops in organisational change processes [ 10 , 100 , 108 ].
  • Commissioning research syntheses: Findings of the two studies that looked at commissioned research syntheses suggest that the value policy-makers attribute to syntheses is affected by the commissioning process and/or their involvement in the conduct of the review [ 29 , 91 ]. A contribution mapping review of 30 studies found that research was most likely to be used when it was initiated and conducted by people who were in a position to use the results in their own work [ 193 ]. However, an evaluation of health policy-makers’ use of a briefing service provided by academics found that access to the service did not improve the policy-makers’ capacity, nor their uptake and use of research [ 194 ]. What critical factors are at play in these scenarios? We would benefit from greater understanding of how commissioning models can best support policy-makers’ capacity development and use of research, including the contribution that researchers and knowledge brokers can make.
  • Sustainability: The concept of capacity-building is linked to that of sustainability [ 70 ], but sustainability itself was seldom mentioned in the reviewed studies. As bureaucracies, policy organisations are characterised by their adherence to protocol, but there appeared to be few attempts to embed strategies within existing work systems (with some notable exceptions, e.g. [ 104 , 134 ]). The need for continuous active participation in knowledge mobilisation practices [ 70 ] was evident in few studies. Several tried to embed new knowledge and skills in practice (e.g. via mentored assessment), but this targets individual knowledge rather than organisationally owned processes, which is an important consideration in organisations known for their high turnover [ 40 ]. Sustainability may depend on different mechanisms from those posited here. For example, self-efficacy may be critical for initiating new patterns of behaviour, but have a limited impact on the decision to maintain that behaviour over time [ 195 ]. Greater consideration of organisational learning and the use of measures to prevent capacity initiatives from being ‘washed out’ [ 196 ] may be required. Longer-term evaluation would help, but organisational change, like relationship building, is a lengthy and evolving process, often taking years to reach intended goals [ 62 , 104 ].
  • Underpinning assumptions about research-informed policy-making: Despite the lack of clear theoretical drivers in most studies, the conceptual basis of attempts to address research use in policy-making seems to be maturing – rational linear models of research are being supplanted by ideas from political science, organisational change, systems thinking and other bodies of work that disrupt the evidence-based policy ideal. The field is also making use of opportunities to evaluate capacity-building endeavours that are initiated outside of academia, using creative methods to learn from complex real world projects and refusing to be cowed by the entanglement of change strategies, process indicators and outcomes. However, there is still evidence of “ theoretical naivety ” as described by Oliver et al. [ 26 ]; for example, focusing on research as exemplary evidence rather than on policy-makers’ use of diverse information and ideas within which research must function; the belief that a reconfiguration of barriers and enablers to accessing research would lead to greater impact; and a general lack of understanding about policy processes. Oliver et al. [ 26 ] provide advice about the direction that future research can take to address these issues.

Strengths and limitations

This paper contributes to our understanding of research utilisation interventions in policy agencies by providing an overview of 22 studies, including their change strategies and outcomes, the contextual factors that mediated these effects, and the theoretical perspectives that underpinned them. It also tentatively identifies the mechanisms that can best explain how the intervention strategies achieved their effects, or why they did not. This is an important first step in developing a more theoretically grounded approach to the design and evaluation of such interventions.

The paper may best be described as a realist-informed scoping review [ 197 ]. Unlike an orthodox realist review, it was conducted to inform our own program of research and so was not negotiated with external stakeholders; we did not conduct extensive theory-focused searches; and the analysis was exploratory and inductive rather than an interrogation of program theory [ 75 ]. We took an inclusive approach to study design and quality and, given our aim of identifying causal mechanisms, focused on identifying explanations of why an intervention was more or less successful rather than on quantitative findings [ 198 ]. The realist perspective contributed importantly to this process by enabling us to identify tentative constructs that may be used to inform the development, implementation and evaluation of subsequent research-to-policy capacity-building trials.

The findings are strengthened by independent analyses and critique of draft SCMO configurations, but our limited timeframe prevented the use of strategies that might have strengthened the review’s rigour further, such as more comprehensive searching – this was an exploratory trawl of the literature rather than an exhaustive search – and contacting the authors of studies for missing information. It was not always clear how distinct participant groups were differentiated, or to what extent results could be attributed to particular groups in the findings. Including studies in which evaluations focused on processes and perceptions limits the identification of distal outcomes. The (mostly qualitative) data provided rich clues about contexts and possible mechanisms, but often did not include concrete information about capacity impacts or about any actual use of research in policy processes. Consequently, the findings should be seen as preliminary.

The identification of outcomes is further complicated by the entanglement of intervention strategies and outcomes. For example, improved infrastructure for accessing research, greater advocacy of research by organisational leaders, workforce development, and increased interaction between policy-makers and researchers can be seen as both intervention inputs and outputs – the means and the ends of capacity-building – depending on the focus of the intervention. As such, they tend to be described rather than evaluated. Many of the phenomena being investigated in these studies are complex and evolve over time; a reminder that capacity for using research in policy-making is a work in progress and will never be fully ‘built’.

Lack of shared evaluation frameworks across the studies means that we were not comparing like with like. A theory-driven approach in which we examined each study in relation to a middle range hypothesis could have produced more focused findings. We took an inductive approach in this first attempt, but believe that subsequent reviews would benefit from a theoretically based investigation. Our review might inform the development of causal hypotheses that further reviews could use as an investigative framework. The results tables (in which we present intervention strategies, contextual factors, hypothesised mechanisms and potential process effects/outcomes) reflect this exploratory approach. Not all of the elements are fully connected, lessening their explanatory potential [ 199 ]. We hope that future work will build on these loose connections to produce tighter configurations.

Lastly, mechanisms are “ squishy ” [ 200 ]. They change position in SCMO configurations, morphing into contexts and outcomes depending on the focus of the evaluation and level of analysis [ 31 ]. They can be differently aggregated, their categorisation is limited by vocabulary and interpretation [ 200 ], and their status as causal explanatory devices is uncertain; as Gerring argues, “ mechanisms might also be referred to as a theory, theoretical framework, or model, depending on one’s predilection ” ([ 200 ], p. 1503). Thus, the concepts we have called mechanisms, the level of granularity at which they are expressed, and the terms we use to describe them are all uncertain, and many would likely look quite different in the hands of another team. However, we believe that they offer a starting point for further testing and discussion. Typically, middle range theories develop gradually over time based on the accumulation of insights acquired through a series of studies [ 201 ]. This review is an early step.

Conclusions

This review explores what intervention strategies have been trialled for building capacity to use research in policy-making, and tentatively posits possible mechanisms that might explain how those strategies functioned (or why they did not) in different contexts. The evidence is variable, especially because we included formative and process evaluations that focus more on process effects than measurable outcomes, but our findings suggest that tailored interactive workshops supported by goal-focused mentoring, and genuine collaboration, may be particularly promising strategies. Systems supports (e.g. infrastructure, governance arrangements and workforce development) are likely to play a vital role, but it is very hard to disentangle their effects from other intervention strategies and systems flux. Many potential mechanisms were identified, as well as some contextual factors that appeared to affect the functioning of virtually all intervention strategies. There were some gaps in the reviewed literature that could usefully be addressed in further research.

Additional files

Review characteristics and search strategy. (DOCX 38 kb)

Overview of included studies. (PDF 389 kb)

Acknowledgements

Thanks to Phyllis Butow for comments on earlier drafts, to Danielle Campbell for providing a policy perspective on the approach and findings, and to the reviewers for their helpful suggestions to improve this paper.

This review was funded by the Centre for Informing Policy in Health with Evidence from Research (CIPHER), an Australian National Health and Medical Research Council (NHMRC) Centre for Research Excellence (#1001436), administered by the Sax Institute. CIPHER is a joint project of the Sax Institute; Australasian Cochrane Centre, Monash University; University of Newcastle; University of New South Wales; Research Unit for Research Utilisation, University of St Andrews and University of Edinburgh; Australian National University; and University of South Australia. The Sax Institute receives a grant from the NSW Ministry of Health. The Australasian Cochrane Centre is funded by the Australian Government through the NHMRC. AH is supported by an NHMRC Public Health and Health Services Postgraduate Research Scholarship (#1093096).


Authors’ contributions

AH led this work, including searching for and reviewing the articles, extracting data, developing hypotheses, drafting SCMO configurations and drafting the paper. SJR independently reviewed the articles, developed hypotheses and iteratively critiqued the draft SCMO configurations. SB independently read the article summary information and critiqued the draft SCMO configurations. All authors made substantial contributions to the analysis and interpretation of data, and were involved in critically revising the manuscript for important intellectual content. All authors read and approved the final manuscript.

Authors’ information

Not applicable.

Ethics approval and consent to participate

As a review of existing publications, no ethical approval was required for this study. However, it was conducted as part of a wider program of work that was granted ethical approval by the University of Western Sydney Human Research Ethics Committee, approval numbers H8855 and H9870.

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Abby Haynes, Email: [email protected] .

Samantha J. Rowbotham, Email: [email protected] .

Sally Redman, Email: [email protected] .

Sue Brennan, Email: [email protected] .

Anna Williamson, Email: [email protected] .

Gabriel Moore, Email: [email protected] .


U.S. Policymakers Need to Mind the Gap between Think Tanks and Geographic Seams: Policy Recommendations for Improving Research Studies and Expert Commentary on Africa and the Middle East at American Think Tanks

  • Michael Walsh ,
  • Stephen Porter



  • Open access
  • Published: 18 April 2024

Research ethics and artificial intelligence for global health: perspectives from the global forum on bioethics in research

  • James Shaw 1 , 13 ,
  • Joseph Ali 2 , 3 ,
  • Caesar A. Atuire 4 , 5 ,
  • Phaik Yeong Cheah 6 ,
  • Armando Guio Español 7 ,
  • Judy Wawira Gichoya 8 ,
  • Adrienne Hunt 9 ,
  • Daudi Jjingo 10 ,
  • Katherine Littler 9 ,
  • Daniela Paolotti 11 &
  • Effy Vayena 12  

BMC Medical Ethics volume  25 , Article number:  46 ( 2024 ) Cite this article

920 Accesses

6 Altmetric

Metrics details

The ethical governance of Artificial Intelligence (AI) in health care and public health continues to be an urgent issue for attention in policy, research, and practice. In this paper we report on central themes related to challenges and strategies for promoting ethics in research involving AI in global health, arising from the Global Forum on Bioethics in Research (GFBR), held in Cape Town, South Africa in November 2022.

The GFBR is an annual meeting organized by the World Health Organization and supported by the Wellcome Trust, the US National Institutes of Health, the UK Medical Research Council (MRC) and the South African MRC. The forum aims to bring together ethicists, researchers, policymakers, research ethics committee members and other actors to engage with challenges and opportunities specifically related to research ethics. In 2022 the focus of the GFBR was “Ethics of AI in Global Health Research”. The forum consisted of 6 case study presentations, 16 governance presentations, and a series of small group and large group discussions. A total of 87 participants attended the forum from 31 countries around the world, representing disciplines of bioethics, AI, health policy, health professional practice, research funding, and bioinformatics. In this paper, we highlight central insights arising from GFBR 2022.

We describe the significance of four thematic insights arising from the forum: (1) Appropriateness of building AI, (2) Transferability of AI systems, (3) Accountability for AI decision-making and outcomes, and (4) Individual consent. We then describe eight recommendations for governance leaders to enhance the ethical governance of AI in global health research, addressing issues such as AI impact assessments, environmental values, and fair partnerships.

Conclusions

The 2022 Global Forum on Bioethics in Research illustrated several innovations in ethical governance of AI for global health research, as well as several areas in need of urgent attention internationally. This summary is intended to inform international and domestic efforts to strengthen research ethics and support the evolution of governance leadership to meet the demands of AI in global health research.

Peer Review reports

Introduction

The ethical governance of Artificial Intelligence (AI) in health care and public health continues to be an urgent issue for attention in policy, research, and practice [ 1 , 2 , 3 ]. Beyond the growing number of AI applications being implemented in health care, capabilities of AI models such as Large Language Models (LLMs) expand the potential reach and significance of AI technologies across health-related fields [ 4 , 5 ]. Discussion about effective, ethical governance of AI technologies has spanned a range of governance approaches, including government regulation, organizational decision-making, professional self-regulation, and research ethics review [ 6 , 7 , 8 ]. In this paper, we report on central themes related to challenges and strategies for promoting ethics in research involving AI in global health, arising from the Global Forum on Bioethics in Research (GFBR), held in Cape Town, South Africa in November 2022. Although applications of AI for research, health care, and public health are diverse and advancing rapidly, the insights generated at the forum remain highly relevant from a global health perspective. After summarizing important context for work in this domain, we highlight categories of ethical issues emphasized at the forum for attention from a research ethics perspective internationally. We then outline strategies proposed for research, innovation, and governance to support more ethical AI for global health.

In this paper, we adopt the definition of AI systems provided by the Organization for Economic Cooperation and Development (OECD) as our starting point. Their definition states that an AI system is “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy” [ 9 ]. The conceptualization of an algorithm as helping to constitute an AI system, along with hardware, other elements of software, and a particular context of use, illustrates the wide variety of ways in which AI can be applied. We have found it useful to differentiate applications of AI in research as those classified as “AI systems for discovery” and “AI systems for intervention”. An AI system for discovery is one that is intended to generate new knowledge, for example in drug discovery or public health research in which researchers are seeking potential targets for intervention, innovation, or further research. An AI system for intervention is one that directly contributes to enacting an intervention in a particular context, for example informing decision-making at the point of care or assisting with accuracy in a surgical procedure.

The mandate of the GFBR is to take a broad view of what constitutes research and its regulation in global health, with special attention to bioethics in Low- and Middle- Income Countries. AI as a group of technologies demands such a broad view. AI development for health occurs in a variety of environments, including universities and academic health sciences centers where research ethics review remains an important element of the governance of science and innovation internationally [ 10 , 11 ]. In these settings, research ethics committees (RECs; also known by different names such as Institutional Review Boards or IRBs) make decisions about the ethical appropriateness of projects proposed by researchers and other institutional members, ultimately determining whether a given project is allowed to proceed on ethical grounds [ 12 ].

However, research involving AI for health also takes place in large corporations and smaller scale start-ups, which in some jurisdictions fall outside the scope of research ethics regulation. In the domain of AI, the question of what constitutes research also becomes blurred. For example, is the development of an algorithm itself considered a part of the research process? Or only when that algorithm is tested under the formal constraints of a systematic research methodology? In this paper we take an inclusive view, in which AI development is included in the definition of research activity and within scope for our inquiry, regardless of the setting in which it takes place. This broad perspective characterizes the approach to “research ethics” we take in this paper, extending beyond the work of RECs to include the ethical analysis of the wide range of activities that constitute research as the generation of new knowledge and intervention in the world.

Ethical governance of AI in global health

The ethical governance of AI for global health has been widely discussed in recent years. The World Health Organization (WHO) released its guidelines on ethics and governance of AI for health in 2021, endorsing a set of six ethical principles and exploring the relevance of those principles through a variety of use cases. The WHO guidelines also provided an overview of AI governance, defining governance as covering “a range of steering and rule-making functions of governments and other decision-makers, including international health agencies, for the achievement of national health policy objectives conducive to universal health coverage.” (p. 81) The report usefully provided a series of recommendations related to governance of seven domains pertaining to AI for health: data, benefit sharing, the private sector, the public sector, regulation, policy observatories/model legislation, and global governance. The report acknowledges that much work is yet to be done to advance international cooperation on AI governance, especially related to prioritizing voices from Low- and Middle-Income Countries (LMICs) in global dialogue.

One important point emphasized in the WHO report that reinforces the broader literature on global governance of AI is the distribution of responsibility across a wide range of actors in the AI ecosystem. This is especially important to highlight when focused on research for global health, which is specifically about work that transcends national borders. Alami et al. (2020) discussed the unique risks raised by AI research in global health, ranging from the unavailability of data in many LMICs required to train locally relevant AI models to the capacity of health systems to absorb new AI technologies that demand the use of resources from elsewhere in the system. These observations illustrate the need to identify the unique issues posed by AI research for global health specifically, and the strategies that can be employed by all those implicated in AI governance to promote ethically responsible use of AI in global health research.

RECs and the regulation of research involving AI

RECs represent an important element of the governance of AI for global health research, and thus warrant further commentary as background to our paper. Despite the importance of RECs, foundational questions have been raised about their capabilities to accurately understand and address ethical issues raised by studies involving AI. Rahimzadeh et al. (2023) outlined how RECs in the United States are under-prepared to align with recent federal policy requiring that RECs review data sharing and management plans with attention to the unique ethical issues raised in AI research for health [ 13 ]. Similar research in South Africa identified variability in understanding of existing regulations and ethical issues associated with health-related big data sharing and management among research ethics committee members [ 14 , 15 ]. The effort to address harms accruing to groups or communities as opposed to individuals whose data are included in AI research has also been identified as a unique challenge for RECs [ 16 , 17 ]. Doerr and Meeder (2022) suggested that current regulatory frameworks for research ethics might actually prevent RECs from adequately addressing such issues, as they are deemed out of scope of REC review [ 16 ]. Furthermore, research in the United Kingdom and Canada has suggested that researchers using AI methods for health tend to distinguish between ethical issues and social impact of their research, adopting an overly narrow view of what constitutes ethical issues in their work [ 18 ].

The challenges for RECs in adequately addressing ethical issues in AI research for health care and public health exceed a straightforward survey of ethical considerations. As Ferretti et al. (2021) contend, some capabilities of RECs adequately cover certain issues in AI-based health research, such as the common occurrence of conflicts of interest where researchers who accept funds from commercial technology providers are implicitly incentivized to produce results that align with commercial interests [ 12 ]. However, some features of REC review require reform to adequately meet ethical needs. Ferretti et al. outlined weaknesses of RECs that are longstanding and those that are novel to AI-related projects, proposing a series of directions for development that are regulatory, procedural, and complementary to REC functionality. The work required on a global scale to update the REC function in response to the demands of research involving AI is substantial.

These issues take greater urgency in the context of global health [ 19 ]. Teixeira da Silva (2022) described the global practice of “ethics dumping”, where researchers from high income countries bring ethically contentious practices to RECs in low-income countries as a strategy to gain approval and move projects forward [ 20 ]. Although not yet systematically documented in AI research for health, risk of ethics dumping in AI research is high. Evidence is already emerging of practices of “health data colonialism”, in which AI researchers and developers from large organizations in high-income countries acquire data to build algorithms in LMICs to avoid stricter regulations [ 21 ]. This specific practice is part of a larger collection of practices that characterize health data colonialism, involving the broader exploitation of data and the populations they represent primarily for commercial gain [ 21 , 22 ]. As an additional complication, AI algorithms trained on data from high-income contexts are unlikely to apply in straightforward ways to LMIC settings [ 21 , 23 ]. In the context of global health, there is widespread acknowledgement about the need to not only enhance the knowledge base of REC members about AI-based methods internationally, but to acknowledge the broader shifts required to encourage their capabilities to more fully address these and other ethical issues associated with AI research for health [ 8 ].

Although RECs are an important part of the story of the ethical governance of AI for global health research, they are not the only part. The responsibilities of supra-national entities such as the World Health Organization, national governments, organizational leaders, commercial AI technology providers, health care professionals, and other groups continue to be worked out internationally. In this context of ongoing work, examining issues that demand attention and strategies to address them remains an urgent and valuable task.

The GFBR is an annual meeting organized by the World Health Organization and supported by the Wellcome Trust, the US National Institutes of Health, the UK Medical Research Council (MRC) and the South African MRC. The forum aims to bring together ethicists, researchers, policymakers, REC members and other actors to engage with challenges and opportunities specifically related to research ethics. Each year the GFBR meeting includes a series of case studies and keynotes presented in plenary format to an audience of approximately 100 people who have applied and been competitively selected to attend, along with small-group breakout discussions to advance thinking on related issues. The specific topic of the forum changes each year, with past topics including ethical issues in research with people living with mental health conditions (2021), genome editing (2019), and biobanking/data sharing (2018). The forum is intended to remain grounded in the practical challenges of engaging in research ethics, with special interest in low resource settings from a global health perspective. A post-meeting fellowship scheme is open to all LMIC participants, providing a unique opportunity to apply for funding to further explore and address the ethical challenges that are identified during the meeting.

In 2022, the focus of the GFBR was “Ethics of AI in Global Health Research”. The forum consisted of 6 case study presentations (both short and long form) reporting on specific initiatives related to research ethics and AI for health, and 16 governance presentations (both short and long form) reporting on actual approaches to governing AI in different country settings. A keynote presentation from Professor Effy Vayena addressed the topic of the broader context for AI ethics in a rapidly evolving field. A total of 87 participants attended the forum from 31 countries around the world, representing disciplines of bioethics, AI, health policy, health professional practice, research funding, and bioinformatics. The 2-day forum addressed a wide range of themes. The conference report provides a detailed overview of each of the specific topics addressed while a policy paper outlines the cross-cutting themes (both documents are available at the GFBR website: https://www.gfbr.global/past-meetings/16th-forum-cape-town-south-africa-29-30-november-2022/ ). As opposed to providing a detailed summary in this paper, we aim to briefly highlight central issues raised, solutions proposed, and the challenges facing the research ethics community in the years to come.

In this way, our primary aim in this paper is to present a synthesis of the challenges and opportunities raised at the GFBR meeting and in the planning process, followed by our reflections as a group of authors on their significance for governance leaders in the coming years. We acknowledge that the views represented at the meeting and in our results are a partial representation of the universe of views on this topic; however, the GFBR leadership invested a great deal of resources in convening a deeply diverse and thoughtful group of researchers and practitioners working on themes of bioethics related to AI for global health, including those based in LMICs. We contend that it remains rare to convene such a strong group for an extended time and believe that many of the challenges and opportunities raised demand attention for more ethical futures of AI for health. Nonetheless, our results are primarily descriptive and are thus not explicitly grounded in a normative argument. We make an effort in the Discussion section to contextualize our results by describing their significance and connecting them to broader efforts to reform global health research and practice.

Uniquely important ethical issues for AI in global health research

Presentations and group dialogue over the course of the forum raised several issues for consideration, and here we describe four overarching themes for the ethical governance of AI in global health research. Brief descriptions of each issue can be found in Table  1 . Reports referred to throughout the paper are available at the GFBR website provided above.

The first overarching thematic issue relates to the appropriateness of building AI technologies in response to health-related challenges in the first place. Case study presentations referred to initiatives where AI technologies were highly appropriate, such as in ear shape biometric identification to more accurately link electronic health care records to individual patients in Zambia (Alinani Simukanga). Although important ethical issues were raised with respect to privacy, trust, and community engagement in this initiative, the AI-based solution was appropriately matched to the challenge of accurately linking electronic records to specific patient identities. In contrast, forum participants raised questions about the appropriateness of an initiative using AI to improve the quality of handwashing practices in an acute care hospital in India (Niyoshi Shah), which led to gaming the algorithm. Overall, participants acknowledged the dangers of techno-solutionism, in which AI researchers and developers treat AI technologies as the most obvious solutions to problems that in actuality demand much more complex strategies to address [ 24 ]. However, forum participants agreed that RECs in different contexts have differing degrees of power to raise issues of the appropriateness of an AI-based intervention.

The second overarching thematic issue related to whether and how AI-based systems transfer from one national health context to another. One central issue raised by a number of case study presentations related to the challenges of validating an algorithm with data collected in a local environment. For example, one case study presentation described a project that would involve the collection of personally identifiable data for sensitive group identities, such as tribe, clan, or religion, in the jurisdictions involved (South Africa, Nigeria, Tanzania, Uganda and the US; Gakii Masunga). Doing so would enable the team to ensure that those groups were adequately represented in the dataset, so that the resulting algorithm was not biased against specific community groups when deployed in that context. However, some members of these communities might desire to be represented in the dataset, whereas others might not, illustrating the need to balance autonomy and inclusivity. It was also widely recognized that collecting these data is an immense challenge, particularly when historically oppressive practices have led to a low-trust environment for international organizations and the technologies they produce. It is important to note that in some countries such as South Africa and Rwanda, it is illegal to collect information such as race and tribal identities, re-emphasizing the importance of cultural awareness and of avoiding “one size fits all” solutions.

The third overarching thematic issue is related to understanding accountabilities for both the impacts of AI technologies and governance decision-making regarding their use. Where global health research involving AI leads to longer-term harms that might fall outside the usual scope of issues considered by a REC, who is to be held accountable, and how? This question was raised as one that requires much further attention, with legal provisions varying internationally regarding the mechanisms available to hold researchers, innovators, and their institutions accountable over the longer term. However, it was recognized in breakout group discussion that many jurisdictions are developing strong data protection regimes related specifically to international collaboration for research involving health data. For example, Kenya’s Data Protection Act requires that any internationally funded projects have a local principal investigator who will hold accountability for how data are shared and used [ 25 ]. The issue of research partnerships with commercial entities was raised by many participants in the context of accountability, pointing toward the urgent need for clear principles related to strategies for engagement with commercial technology companies in global health research.

The fourth and final overarching thematic issue raised here is that of consent. The issue of consent was framed by the widely shared recognition that models of individual, explicit consent might not produce a supportive environment for AI innovation that relies on the secondary uses of health-related datasets to build AI algorithms. Given this recognition, approaches such as community oversight of health data uses were suggested as a potential solution. However, the details of implementing such community oversight mechanisms require much further attention, particularly given the unique perspectives on health data in different country settings in global health research. Furthermore, some uses of health data do continue to require consent. One case study of South Africa, Nigeria, Kenya, Ethiopia and Uganda suggested that individual consent remains necessary when health data are transferred across borders from certain countries (Nezerith Cengiz). Broader clarity is necessary to support the ethical governance of health data uses for AI in global health research.

Recommendations for ethical governance of AI in global health research

Dialogue at the forum led to a range of suggestions for promoting ethical conduct of AI research for global health, related to the various roles of actors involved in the governance of AI research broadly defined. The strategies are written for actors we refer to as “governance leaders”, those people distributed throughout the AI for global health research ecosystem who are responsible for ensuring the ethical and socially responsible conduct of global health research involving AI (including researchers themselves). These include RECs, government regulators, health care leaders, health professionals, corporate social accountability officers, and others. Enacting these strategies would bolster the ethical governance of AI for global health more generally, enabling multiple actors to fulfill their roles related to governing research and development activities carried out across multiple organizations, including universities, academic health sciences centers, start-ups, and technology corporations. Specific suggestions are summarized in Table  2 .

First, forum participants suggested that governance leaders, including RECs, should remain up to date on recent advances in the regulation of AI for health. Regulation of AI for health advances rapidly and takes on different forms in jurisdictions around the world. RECs play an important role in governance, but only a partial one; it was deemed important for RECs to acknowledge how they fit within a broader governance ecosystem in order to more effectively address the issues within their scope. Not only RECs but also organizational leaders responsible for procurement, researchers, and commercial actors should commit to remaining up to date about the relevant approaches to regulating AI for health care and public health in jurisdictions internationally. In this way, governance can keep pace with advances in regulation.

Second, forum participants suggested that governance leaders should focus on ethical governance of health data as a basis for ethical global health AI research. Health data are considered the foundation of AI development, being used to train AI algorithms for various uses [ 26 ]. By focusing on ethical governance of health data generation, sharing, and use, multiple actors will help to build an ethical foundation for AI development among global health researchers.

Third, forum participants believed that governance processes should incorporate AI impact assessments where appropriate. An AI impact assessment is the process of evaluating the potential effects, both positive and negative, of implementing an AI algorithm on individuals, society, and various stakeholders, generally over time frames specified in advance of implementation [ 27 ]. Although not all types of AI research in global health would warrant an AI impact assessment, such assessments are especially relevant for studies aiming to implement an AI system as an intervention in health care or public health. Organizations such as RECs can use AI impact assessments to build understanding of potential harms at the outset of a research project, encouraging researchers to consider those harms more deeply as they develop their study.

Fourth, forum participants suggested that governance decisions should incorporate the use of environmental impact assessments, or at least the incorporation of environmental values when assessing the potential impact of an AI system. An environmental impact assessment involves evaluating and anticipating the potential environmental effects of a proposed project to inform ethical decision-making that supports sustainability [ 28 ]. Although a relatively new consideration in research ethics conversations [ 29 ], the environmental impact of building technologies is a crucial consideration for the public health commitment to environmental sustainability. Governance leaders can use environmental impact assessments to boost understanding of potential environmental harms linked to AI research projects in global health over both the shorter and longer terms.

Fifth, forum participants suggested that governance leaders should require stronger transparency in the development of AI algorithms in global health research. Transparency was considered essential in the design and development of AI algorithms for global health to ensure ethical and accountable decision-making throughout the process. Furthermore, whether and how researchers have considered the unique contexts into which such algorithms may be deployed can be surfaced through stronger transparency, for example in describing what primary considerations were made at the outset of the project and which stakeholders were consulted along the way. Sharing information about data provenance and methods used in AI development will also enhance the trustworthiness of the AI-based research process.

Sixth, forum participants suggested that governance leaders can encourage or require community engagement at various points throughout an AI project. It was considered that engaging patients and communities is crucial in AI algorithm development to ensure that the technology aligns with community needs and values. However, participants acknowledged that this is not a straightforward process. Effective community engagement requires lengthy commitments to meeting with and hearing from diverse communities in a given setting, and demands a particular set of skills in communication and dialogue that are not possessed by all researchers. Encouraging AI researchers to begin this process early and build long-term partnerships with community members is a promising strategy to deepen community engagement in AI research for global health. One notable recommendation was that research funders have an opportunity to incentivize and enable community engagement with funds dedicated to these activities in AI research in global health.

Seventh, forum participants suggested that governance leaders can encourage researchers to build strong, fair partnerships between institutions and individuals across country settings. In a context of longstanding imbalances in geopolitical and economic power, fair partnerships in global health demand a priori commitments to share benefits related to advances in medical technologies, knowledge, and financial gains. Although enforcement of this point might be beyond the remit of RECs, commentary on it can encourage researchers to consider stronger, fairer partnerships in global health over the longer term.

Eighth, it became evident that it is necessary to explore new forms of regulatory experimentation given the complexity of regulating a technology of this nature. In addition, the health sector has a series of particularities that make it especially complicated to generate rules that have not been previously tested. Several participants highlighted the desire to promote spaces for experimentation such as regulatory sandboxes or innovation hubs in health. These spaces can have several benefits for addressing issues surrounding the regulation of AI in the health sector, such as: (i) increasing the capacities and knowledge of health authorities about this technology; (ii) identifying the major problems surrounding AI regulation in the health sector; (iii) establishing possibilities for exchange and learning with other authorities; (iv) promoting innovation and entrepreneurship in AI in health; and (v) identifying the need to regulate AI in this sector and update other existing regulations.

Ninth and finally, forum participants believed that the capabilities of governance leaders need to evolve to better incorporate expertise related to AI in ways that make sense within a given jurisdiction. With respect to RECs, for example, it might not make sense for every REC to recruit a member with expertise in AI methods. Rather, it will make more sense in some jurisdictions to consult with members of the scientific community with expertise in AI when research protocols are submitted that demand such expertise. Furthermore, RECs and other approaches to research governance in jurisdictions around the world will need to evolve in order to adopt the suggestions outlined above, developing processes that apply specifically to the ethical governance of research using AI methods in global health.

Research involving the development and implementation of AI technologies continues to grow in global health, posing important challenges for ethical governance of AI in global health research around the world. In this paper we have summarized insights from the 2022 GFBR, focused specifically on issues in research ethics related to AI for global health research. We summarized four thematic challenges for governance related to AI in global health research and nine suggestions arising from presentations and dialogue at the forum. In this brief discussion section, we present an overarching observation about power imbalances that frames efforts to evolve the role of governance in global health research, and then outline two important opportunity areas as the field develops to meet the challenges of AI in global health research.

Dialogue about power is not unfamiliar in global health, especially given recent contributions exploring what it would mean to de-colonize global health research, funding, and practice [ 30 , 31 ]. Discussions of research ethics applied to AI research in global health contexts are deeply infused with power imbalances. The existing context of global health is one in which high-income countries primarily located in the “Global North” charitably invest in projects taking place primarily in the “Global South” while recouping knowledge, financial, and reputational benefits [ 32 ]. With respect to AI development in particular, recent examples of digital colonialism frame dialogue about global partnerships, raising attention to the role of large commercial entities and global financial capitalism in global health research [ 21 , 22 ]. Furthermore, the power of governance organizations such as RECs to intervene in the process of AI research in global health varies widely around the world, depending on the authorities assigned to them by domestic research governance policies. These observations frame the challenges outlined in our paper, highlighting the difficulties associated with making meaningful change in this field.

Despite these overarching challenges of the global health research context, there are clear strategies for progress in this domain. Firstly, AI innovation is rapidly evolving, which means approaches to the governance of AI for health are rapidly evolving too. Such rapid evolution presents an important opportunity for governance leaders to clarify their vision and influence over AI innovation in global health research, boosting the expertise, structure, and functionality required to meet the demands of research involving AI. Secondly, the research ethics community has strong international ties, linked to a global scholarly community that is committed to sharing insights and best practices around the world. This global community can be leveraged to coordinate efforts to produce advances in the capabilities and authorities of governance leaders to meaningfully govern AI research for global health given the challenges summarized in our paper.

Limitations

Our paper includes two specific limitations that we address explicitly here. First, it is still early in the lifetime of the development of applications of AI for use in global health, and as such, the global community has had limited opportunity to learn from experience. For example, far fewer case studies, which detail experiences with the actual implementation of an AI technology, were submitted to GFBR 2022 for consideration than expected. In contrast, many more governance reports were submitted, which detail the processes and outputs of governance processes that anticipate the development and dissemination of AI technologies. This observation represents both a success and a challenge. It is a success that so many groups are engaging in anticipatory governance of AI technologies, exploring evidence of their likely impacts and governing technologies in novel and well-designed ways. It is a challenge that there is little experience to build upon of successfully implementing AI technologies in ways that limit harms while promoting innovation. Further experience with AI technologies in global health will contribute to revising and enhancing the challenges and recommendations we have outlined in our paper.

Second, global trends in the politics and economics of AI technologies are evolving rapidly. Although some nations are advancing detailed policy approaches to regulating AI more generally, including for uses in health care and public health, the impacts of corporate investments in AI and political responses related to governance remain to be seen. The excitement around large language models (LLMs) and large multimodal models (LMMs) has drawn deeper attention to the challenges of regulating AI in any general sense, opening dialogue about health sector-specific regulations. The direction of this global dialogue, strongly linked to high-profile corporate actors and multi-national governance institutions, will strongly influence the development of boundaries around what is possible for the ethical governance of AI for global health. We have written this paper at a point when these developments are proceeding rapidly, and as such, we acknowledge that our recommendations will need updating as the broader field evolves.

Ultimately, coordination and collaboration between many stakeholders in the research ethics ecosystem will be necessary to strengthen the ethical governance of AI in global health research. The 2022 GFBR illustrated several innovations in ethical governance of AI for global health research, as well as several areas in need of urgent attention internationally. This summary is intended to inform international and domestic efforts to strengthen research ethics and support the evolution of governance leadership to meet the demands of AI in global health research.

Data availability

All data and materials analyzed to produce this paper are available on the GFBR website: https://www.gfbr.global/past-meetings/16th-forum-cape-town-south-africa-29-30-november-2022/ .

Clark P, Kim J, Aphinyanaphongs Y. Marketing and US Food and Drug Administration clearance of artificial intelligence and machine learning enabled software in and as medical devices: a systematic review. JAMA Netw Open. 2023;6(7):e2321792.


Potnis KC, Ross JS, Aneja S, Gross CP, Richman IB. Artificial intelligence in breast cancer screening: evaluation of FDA device regulation and future recommendations. JAMA Intern Med. 2022;182(12):1306–12.

Siala H, Wang Y. SHIFTing artificial intelligence to be responsible in healthcare: a systematic review. Soc Sci Med. 2022;296:114782.

Yang X, Chen A, PourNejatian N, Shin HC, Smith KE, Parisien C, et al. A large language model for electronic health records. NPJ Digit Med. 2022;5(1):194.

Meskó B, Topol EJ. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. NPJ Digit Med. 2023;6(1):120.

Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell. 2019;1(9):389–99.

Minssen T, Vayena E, Cohen IG. The challenges for regulating medical use of ChatGPT and other large language models. JAMA. 2023.

Ho CWL, Malpani R. Scaling up the research ethics framework for healthcare machine learning as global health ethics and governance. Am J Bioeth. 2022;22(5):36–8.

Yeung K. Recommendation of the council on artificial intelligence (OECD). Int Leg Mater. 2020;59(1):27–34.

Maddox TM, Rumsfeld JS, Payne PR. Questions for artificial intelligence in health care. JAMA. 2019;321(1):31–2.

Dzau VJ, Balatbat CA, Ellaissi WF. Revisiting academic health sciences systems a decade later: discovery to health to population to society. Lancet. 2021;398(10318):2300–4.

Ferretti A, Ienca M, Sheehan M, Blasimme A, Dove ES, Farsides B, et al. Ethics review of big data research: what should stay and what should be reformed? BMC Med Ethics. 2021;22(1):1–13.

Rahimzadeh V, Serpico K, Gelinas L. Institutional review boards need new skills to review data sharing and management plans. Nat Med. 2023;1–3.

Kling S, Singh S, Burgess TL, Nair G. The role of an ethics advisory committee in data science research in sub-saharan Africa. South Afr J Sci. 2023;119(5–6):1–3.


Cengiz N, Kabanda SM, Esterhuizen TM, Moodley K. Exploring perspectives of research ethics committee members on the governance of big data in sub-saharan Africa. South Afr J Sci. 2023;119(5–6):1–9.

Doerr M, Meeder S. Big health data research and group harm: the scope of IRB review. Ethics Hum Res. 2022;44(4):34–8.

Ballantyne A, Stewart C. Big data and public-private partnerships in healthcare and research: the application of an ethics framework for big data in health and research. Asian Bioeth Rev. 2019;11(3):315–26.

Samuel G, Chubb J, Derrick G. Boundaries between research ethics and ethical research use in artificial intelligence health research. J Empir Res Hum Res Ethics. 2021;16(3):325–37.

Murphy K, Di Ruggiero E, Upshur R, Willison DJ, Malhotra N, Cai JC, et al. Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics. 2021;22(1):1–17.

Teixeira da Silva JA. Handling ethics dumping and neo-colonial research: from the laboratory to the academic literature. J Bioethical Inq. 2022;19(3):433–43.

Ferryman K. The dangers of data colonialism in precision public health. Glob Policy. 2021;12:90–2.

Couldry N, Mejias UA. Data colonialism: rethinking big data’s relation to the contemporary subject. Telev New Media. 2019;20(4):336–49.

World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. 2021.

Metcalf J, Moss E. Owning ethics: corporate logics, silicon valley, and the institutionalization of ethics. Soc Res Int Q. 2019;86(2):449–76.

Data Protection Act - OFFICE OF THE DATA PROTECTION COMMISSIONER KENYA [Internet]. 2021 [cited 2023 Sep 30]. https://www.odpc.go.ke/dpa-act/ .

Sharon T, Lucivero F. Introduction to the special theme: the expansion of the health data ecosystem – rethinking data ethics and governance. Big Data Soc. 2019;6:2053951719852969.

Reisman D, Schultz J, Crawford K, Whittaker M. Algorithmic impact assessments: a practical framework for public agency accountability. AI Now Institute; 2018.

Morgan RK. Environmental impact assessment: the state of the art. Impact Assess Proj Apprais. 2012;30(1):5–14.

Samuel G, Richie C. Reimagining research ethics to include environmental sustainability: a principled approach, including a case study of data-driven health research. J Med Ethics. 2023;49(6):428–33.

Kwete X, Tang K, Chen L, Ren R, Chen Q, Wu Z, et al. Decolonizing global health: what should be the target of this movement and where does it lead us? Glob Health Res Policy. 2022;7(1):3.

Abimbola S, Asthana S, Montenegro C, Guinto RR, Jumbam DT, Louskieter L, et al. Addressing power asymmetries in global health: imperatives in the wake of the COVID-19 pandemic. PLoS Med. 2021;18(4):e1003604.

Benatar S. Politics, power, poverty and global health: systems and frames. Int J Health Policy Manag. 2016;5(10):599.


Acknowledgements

We would like to acknowledge the outstanding contributions of the attendees of GFBR 2022 in Cape Town, South Africa. This paper is authored by members of the GFBR 2022 Planning Committee. We would like to acknowledge additional members Tamra Lysaght, National University of Singapore, and Niresh Bhagwandin, South African Medical Research Council, for their input during the planning stages and as reviewers of the applications to attend the Forum.

This work was supported by Wellcome [222525/Z/21/Z], the US National Institutes of Health, the UK Medical Research Council (part of UK Research and Innovation), and the South African Medical Research Council through funding to the Global Forum on Bioethics in Research.

Author information

Authors and affiliations

Department of Physical Therapy, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada

Berman Institute of Bioethics, Johns Hopkins University, Baltimore, MD, USA

Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA

Department of Philosophy and Classics, University of Ghana, Legon-Accra, Ghana

Caesar A. Atuire

Centre for Tropical Medicine and Global Health, Nuffield Department of Medicine, University of Oxford, Oxford, UK

Mahidol Oxford Tropical Medicine Research Unit, Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand

Phaik Yeong Cheah

Berkman Klein Center, Harvard University, Bogotá, Colombia

Armando Guio Español

Department of Radiology and Informatics, Emory University School of Medicine, Atlanta, GA, USA

Judy Wawira Gichoya

Health Ethics & Governance Unit, Research for Health Department, Science Division, World Health Organization, Geneva, Switzerland

Adrienne Hunt & Katherine Littler

African Center of Excellence in Bioinformatics and Data Intensive Science, Infectious Diseases Institute, Makerere University, Kampala, Uganda

Daudi Jjingo

ISI Foundation, Turin, Italy

Daniela Paolotti

Department of Health Sciences and Technology, ETH Zurich, Zürich, Switzerland

Effy Vayena

Joint Centre for Bioethics, Dalla Lana School of Public Health, University of Toronto, Toronto, Canada


Contributions

JS led the writing, contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. JA contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. CA contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. PYC contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. AE contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. JWG contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. AH contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. DJ contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. KL contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. DP contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. EV contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper.

Corresponding author

Correspondence to James Shaw .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests.

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Shaw, J., Ali, J., Atuire, C.A. et al. Research ethics and artificial intelligence for global health: perspectives from the global forum on bioethics in research. BMC Med Ethics 25 , 46 (2024). https://doi.org/10.1186/s12910-024-01044-w


Received : 31 October 2023

Accepted : 01 April 2024

Published : 18 April 2024

DOI : https://doi.org/10.1186/s12910-024-01044-w


  • Artificial intelligence
  • Machine learning
  • Research ethics
  • Global health


  • Open access
  • Published: 27 August 2019

Creating and communicating social research for policymakers in government

  • Jessica H. Phoenix   ORCID: orcid.org/0000-0002-0427-9286 1 ,
  • Lucy G. Atkinson 1 &
  • Hannah Baker   ORCID: orcid.org/0000-0002-6342-7330 1  

Palgrave Communications volume  5 , Article number:  98 ( 2019 ) Cite this article

10k Accesses

10 Citations

77 Altmetric

Metrics details

  • Politics and international relations
  • Science, technology and society
  • Social policy

Many academics ask ‘How can I use my research to influence policy?’. In this paper, we draw on our first-hand experience as social researchers for the British Government to advise how academics can create and communicate research with policymakers. Specifically, we describe methods of communicating research to policymakers in relation to research we undertook to listen to farmers about their priorities for a new agricultural policy for England following the exit of the UK from the European Union. The main purpose of this research was to ensure farmers’ voices were included in policy development and therefore communication of the research to policymakers was key. We reflect on the effectiveness of the communication methods we employed and summarise our learnings into four practical recommendations: (1) make research relevant to policymakers; (2) invest time to develop and maintain relationships with policymakers; (3) utilise ‘windows of opportunity’; and (4) adapt presentation and communication styles to the audience. We consider that employing these recommendations will help to improve how evidence is communicated between academia and government and therefore the influence of evidence in decision-making processes.


Introduction

We are employed as Government Social Researchers (GSR) and our day-to-day work revolves around communicating evidence to policymakers. We can be understood as ‘knowledge brokers’ as we “effectively construct a bridge between the research and policy communities” (Nutley et al., 2007 , p. 63). Therefore, the practices we employ can be of use to academics who want to share their research with policymakers. Against the background of limited empirical evidence regarding how academics can create impact on policy (Oliver and Cairney, 2019 ), the sharing of our day-to-day work provides important tangible experiences and recommendations for how academics can effectively engage with policymakers. In this paper we draw on our experiences to offer practical recommendations for how academics can engage with policymakers to ensure policy making is evidence-informed (Mayne et al., 2018 ), and consequently to improve policy effectiveness and programme efficiency (UK Government, 2013 ).

In this paper we first outline our role as knowledge brokers and critique the concept of the ‘two communities’ of policy and academia. We then introduce the policy backdrop to our research; future farming policy in the context of EU Exit. We briefly describe an in-house social research project that we undertook to gauge farmers’ views of future farming policy in the context of EU Exit. Subsequently, we describe our methods of communicating the research findings from this project, based upon dialogue and argumentation (Sanderson, 2009 ), with policymakers by involving policymakers early on in the process, inviting policymakers to be directly involved in the research, regularly sharing findings and presenting the findings in simple formats such as posters and slidedecks. Finally, we share key learnings from our work to offer recommendations to academics about effective strategies to both create and communicate research with policymakers.

Rather than providing a full account of the evidence/policy literature (which is covered by Oliver and Cairney, 2019 ; Evans and Cvitanovic, 2018 ; Cairney and Kwiatkowski, 2017 ; Witting, 2017 ), we show how we are making sense of such studies in our day-to-day work. We focus on situating ourselves in the evidence/policy environment and clearly communicating our research. A key advantage of this paper is that recommendations are made as to how to communicate evidence with policymakers based on our first-hand experience. Furthermore, we write in a manner that puts some of these recommendations into practice.

Government social researchers as ‘knowledge brokers’

GSR share evidence with policymakers to ensure policy is evidence-informed. To do this effectively, we first understand the policy issue being worked on, co-design the questions being asked, and attempt to collaboratively answer these questions with suitable evidence. Collecting evidence involves commissioning research, searching for relevant academic literature, undertaking evidence reviews on policy-relevant topics and delivering in-house research. Evidence needs to be suited to the needs of the policymakers and reworked depending on the question being asked. We often streamline, condense and target the evidence to make it relevant to the policy questions being asked, and to make it more understandable. The number of GSR in UK Government has substantially increased in recent years due to the need for evidence to inform policy developments in regard to EU Exit. The employment of evidence professionals means that, in contrast to Newman et al.’s ( 2016 ) findings about the Australian government, the British Government now has increased capacity to engage with diverse forms of evidence.

Despite being positioned at the interface of research and policy making, GSR work is not routinely recognised as academic, and we are overlooked in the literature regarding the binary construct of the ‘two communities’ of policy and academia/research-producers (Caplan, 1979 ; Dunn, 1980 ; and Slavin, 2002 ). The ‘two communities’ construct is widely used “to describe the disconnect between the worlds of academia and policy” (Newman et al., 2015 , p. 24). Academia is conceptualised to be “preoccupied with abstract concepts and theoretical explanations” (Newman and Head, 2015 , p. 384) whilst policymakers are “faced with real-life problems that needed to be resolved in real time” (Newman and Head, 2015 , p. 384). Hence, the construct relies on identifying gaps between policy and academia and assumes that the two communities have a ‘lack of fit’ (Wehrens, 2014 ). Furthermore, Topp et al. ( 2018 ) suggest that the problems identified in the ‘lack of fit’ between the two communities exist even when the two communities belong to the same organisation and are located in the same building.

We contend that the two communities construct problematically assumes heterogeneity between the communities and homogeneity within these communities (Wehrens, 2014 ). Newman ( 2014 ) troubles the assumed heterogeneity in the ‘two communities’ construct in his research with Australian public servants. Newman ( 2014 , p. 614) writes that:

public servants who claim to use academic research in their policy work are more likely to have much in common with academics, including having postgraduate degrees and work experience in the university sector.

We strongly relate to Newman’s challenge of this construct. We are employed by the British Government as Civil Servants on the basis of having specialist skills and knowledge of social research. Hence, if we take the ‘two communities’ construct to be true, we sit as knowledge brokers at the interface between the two (Nutley et al., 2007 ); we are in the policy community and we are producers of research. As a point of comparison, Table 1 displays the key problems identified in the disconnect between the two communities and how they apply or do not apply to GSR.

Table 1 shows that not all of the problems posed by the disconnect between academics and policymakers apply to GSR and policymakers, in large part because of our role as knowledge brokers. Although differences exist between GSR’s and academics’ relationships with policymakers, we have valuable ‘insider’ insight into the policy process, creation of evidence and evidence dissemination that scholars recognise to be critical for academia to understand (Monaghan, 2011 ; Mayne et al., 2018 ; and Cooper, 2016 ).

We recognise that our role as GSR gives us direct access to policymakers, unlike many academics, and therefore some of the research communication methods that we describe in this paper may be difficult for academics to implement. Consequently, we split the paper into two. The first half of the paper is written to help academic knowledge brokers further develop their understanding of policymakers, and how GSR knowledge brokers create and communicate research with policymakers. The second half of the paper distils our experience into practical recommendations for academics that want to improve their engagement with policymakers and increase the likelihood that their evidence impacts policymaking. We draw on our close relationship with policy to aid academics in their communication of evidence to policymakers, with the aim of further developing evidence-based policy.

Farming and EU exit policy environment

We work in a central evidence team for the Future Farming and Countryside Programme in the Department for Environment, Food and Rural Affairs (Defra). This Programme is composed of multiple policy teams, from Environmental Land Management, to Regulation and Enforcement, to Animal and Plant Health. We collate, create and communicate evidence to policymakers in these policy teams to inform policy development. In this paper, we use the term ‘policymakers’ to refer to colleagues in Defra who are involved and responsible for the development of farming policy (Lexico, 2019 ). All of these policy teams focus on issues related to EU Exit and farming, and aim to deliver a smooth agricultural transition away from the EU’s Common Agricultural Policy and into a new approach to policy post EU Exit. EU Exit poses a significant change to policy and challenge to farmers. It is a high profile policy area and Ministers are motivated to create new, and different, policies. It is therefore a high priority work area for the Programme.

Farmers are one of the most important stakeholder groups in the development of this future policy; they manage 70% of the land in England (Defra, 2018 ) and their management decisions are crucial to the achievement of policy outcomes. We therefore recognised the need to understand their viewpoints on, and experience of, agricultural policy to inform new policies for the countryside that deliver successful outcomes for the public and farmers. In light of this need, we undertook a research project to understand farmers’ opinions and ideas for future agricultural policy post EU Exit, and to communicate these opinions with policymakers. Our aims for the delivery of the research project were to ensure the research could evolve in the changing policy environment, to communicate the findings to policy teams and ensure policy teams were invested in the pertinence of future social research.

Methodology for farmer discussions

The complex and dynamic environment of EU Exit required a research approach that was reactive to policy changes and enabled stakeholder insights to be shared with policymakers at pace, hence we undertook the project in-house.

Farmers were initially invited to be involved in this research via Twitter in August 2017. Moreover, leaflets detailing information about the research and our team’s contact details were given to farmers at an existing Defra project meeting. This led to informal snowball recruitment through farming networks which enabled us to understand the viewpoints of ‘hard to reach’ farmers with whom Defra do not normally converse.

The research consisted of unstructured discussions (Fontana, 2007 ; and Morgan, 1997 ) with farmer groups across England to collect farmers’ views on future policy. Discussion participants were self-selecting and groups ranged from less than 10 participants to over 30. Farmers and catchment advisors hosted the discussions and invited other attendees, including their farming neighbours and existing discussion groups. Discussions were held across England (Fig. 1 ) with farmers from all farm sectors between October 2017 and June 2018. We did not aim to gather views that were representative of the English farming population, but rather aimed to listen to the diversity of views in different farming sectors (e.g., livestock, cropping) and different geographies (e.g., hill farming, peri-urban, rural). ‘Farmers’ is used throughout this article as shorthand for all discussion participants.

Fig. 1: Locations of 40 farmer discussions. These discussions took place between October 2017 and June 2018.

Participants discussed issues most pertinent to them, giving their personal context to issues. The unstructured discussions lasted 2 to 4 hours. Usually, at least one government social researcher and one policymaker attended each discussion. Notes taken at the discussions were typed up into summaries as soon as possible after the discussion. Nearing the end of the research project in June 2018, the summaries were compiled and subject to thematic analysis. The findings were organised into three themes, with five issues in each theme (Defra and Government Social Research (2018) Farmers voices, government listening. [Unpublished]).

Methodology for evidence communication with policymakers

As GSR in a centralised evidence team, we were the ‘face’ of this research project. The close association between ourselves and the research helped us to build personal working relationships with policymakers. Four GSR (three female and one male) were involved in the evidence communication strategy, ranging in seniority and experience. The most senior member of the team had worked in Government for over 15 years and therefore had extensive experience of evidence communication to policy teams. The communication methods detailed in this paper were informed by his experience. Another member of the team was both a social researcher in government and a social researcher in academia, hence concurrently sat in and between the “two communities” (Caplan, 1979 ; and Dunn, 1980 ). Her seminar teaching experience in academia informed the delivery of bitesize sessions (see below). All members of the team developed evidence communication methods through on-the-job training and experience.

We communicated the findings to roughly 400 colleagues (including approximately 250 policymakers, 60 evidence colleagues, 20 senior policy managers (hereafter referred to as senior management), 70 officers and advisors, and two ministers across the Defra group Footnote 1 ). We involved policymakers, senior management and a ministerial advisor in our research from its conception to ensure the work was collaborative and gained credibility. At first, some policymakers were hesitant about the research due to uncertainty regarding future policy direction and therefore uncertainty regarding the topics to be raised in the farmer discussions. However, promotion by senior management in Programme-wide meetings gave credibility to the work and developed policy teams’ interest. Consequently, policymakers attended the farmer discussions and became involved in the research. This aided our evidence communication strategy as policymakers recognised our specialism as GSR and therefore invested in the research. Our evidence communication strategy (Fig. 2 ) consisted of five methods:

Email to policymakers after each farmer discussion

Slidedecks presenting the research findings

Discussions and presentations in bitesize sessions

Strategic discussions and presentations in meetings with senior management

A poster displayed in Defra’s main offices

Fig. 2: Research process flow chart. The chart displays Government Social Researchers’ evidence communication strategy and its associated impact.

We decided to communicate the evidence by sending emails to policymakers and hosting discussions (methods 1 and 3) at the beginning of the research project. However, the style of the communication changed as we developed relationships with policymakers and more effective engagement methods. We developed our communication methods based on evidence-based policy making literature, the effectiveness of the methods we implemented and our previous experience.

Following each farmer discussion, we drafted findings into email summaries which were sent to relevant evidence and policy colleagues and senior management in the Programme. These emails contained a brief introduction to the farming group, key findings from the farmer discussion and highlighted repeated themes from previous farmer discussions (Fig. 3 ).

Fig. 3: Example format of summary emails. Identifiable information has been deleted.

Once the research was complete, we created slidedecks to present evidence to policymakers. We created both a short slidedeck containing high level findings for ease of access for senior management, and a slidedeck detailing the full findings (Defra and Government Social Research (2018) Farmers voices, government listening. [Unpublished]), which was shared with policy teams and attendees of the farmer discussions.

We also communicated our findings by delivering 30- and 60-minute bitesize sessions to policymakers across the Programme and the Defra group (see footnote 1 ). We presented 14 lunchtime sessions with 200 attendees, using the slidedecks described above. These included face-to-face bitesize sessions in three offices, as well as a series of webinars; webinars proved to be a useful tool for evidence dissemination across different offices. Each session was repeated at least twice in different locations in order to extend our engagement with Defra staff. We shared a summary of our research findings to highlight the rich qualitative evidence-base that we had available. We conducted two rounds of bitesize sessions with different target audiences. The first round provided general findings from across all the research themes to help policy teams consider their work from a farmer’s perspective. The second round provided tailored findings to specific policy teams.

Furthermore, we undertook targeted engagement with senior management. We presented a summary of our findings relevant to their policy area and included specific examples and stories from farmer discussions.

At the end of the research project we designed a poster to display in Defra’s main offices, which contained detail of the background, method, main findings and next steps for our research (Fig. 4 ). The poster enabled policymakers across the Defra group to quickly understand the main findings from the research.

Fig. 4: ‘Farmers Voices, Government Listening’ poster.

How did we increase the impact of the research project on policy development?

Invite policymakers to take part in the research

Policymakers were invited to attend and be involved in the farmer discussions. This had a number of benefits. First, policy colleagues had a good grasp of the key issues they wanted to gauge farmers’ opinions on. Evidence needs to be relevant to current policy needs (Head, 2010b ), so inviting policymakers to discussions ensured that the research led to insights that were of a ‘good fit’ for their policy team. The use of unstructured focus groups allowed policymakers to ask specific policy-related questions where appropriate within the discussion. Second, leaving the office and meeting farmers gave policymakers time and space to think about their policy area in relation to the practicalities of farming. It enabled them to learn first-hand about farmers’ challenges and ideas for future policy. Oliver and Duncan ( 2019 ) note that bringing together people with different perspectives allows learning from each other, and learning that is new to everyone involved. In light of this, we consider that bringing together policymakers and farmers encouraged policymakers to critically examine their policy ideas as farmers questioned and challenged current policy thinking. This led to new policy ideas being developed. Third, their involvement in discussions meant both policy and evidence colleagues collated personal tales from the research. As noted by Cairney and Oliver ( 2017 ), hearing the evidence through stakeholders’ stories imbued emotion into the findings, thereby helping to prompt colleagues’ attention to the importance of this work. Jones and Crow ( 2017 ) further note that the emotional aspects of a story are more likely to be remembered, therefore leading to a stronger ability to recall the information. For example, some GSR and policymakers told stories from this research project to other Defra colleagues about challenges faced by contract graziers: a type of farmer missing from Defra’s database of farm holdings because they do not own or manage land. Therefore, these stories reflected the diversity of farmers who were otherwise hidden or unaccounted for in aggregated statistics. This approach also offered benefits to farmers, who, by having direct conversations with policymakers, received immediate feedback from those ‘using’ the evidence and felt their views were being heard directly and in a salient manner. Fourth, colleagues who attended discussions became enveloped in the research and further interested in outputs from other discussions held around the country. This resulted in our continued engagement with policymakers about the findings. However, this approach was not without challenges. For example, we had to continuously engage with policy colleagues (using methods described throughout this paper) as they were busy and were themselves dealing with uncertainty in policy developments.

This constant iteration of the materials and format of the farmer discussions with policymakers provided policymakers with ownership over some of the findings.

Email after each discussion

At the beginning of the research project, Future Farming policy post EU Exit was not clearly defined. In light of this ambiguity of future policies, emails sent to policymakers following farmer discussions highlighted farmers’ views and ideas across a broad range of topics. The diversity of farmers’ views presented in the emails challenged policymakers, contributed to their thinking and enabled them to pose further questions about policies. For example, farmers told us that ‘productivity’ is not a term that has resonance with them and they preferred the terms ‘profitable’ and ‘competitive’. To share this finding effectively with policymakers, we emphasised that we were reporting ‘what farmers said’ as opposed to fitting findings into pre-existing policy definitions.

As time passed, policy began to develop. We therefore fine-tuned our email approach so that research findings for particular topics were more strategically aimed at policymakers in order to increase their use (Farley-Ripple, 2012 ). For example, the key findings were reported in order of relevance to each policy area (Fig. 3 ), which enabled time-limited colleagues to easily access the evidence most pertinent to their policy work (Cairney and Kwiatkowski, 2017 ).
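To make the tailoring step concrete, the following is a purely illustrative sketch in Python (our own, not tooling from the project); the policy areas, findings and relevance scores are invented for demonstration.

```python
# Purely illustrative sketch (not the authors' tooling): order key findings by
# relevance to each policy area so that a tailored email update leads with the
# evidence most pertinent to that team. All data below are invented.

findings = [
    {"text": "Farmers prefer 'profitable' and 'competitive' to 'productivity'",
     "relevance": {"productivity": 3, "payments": 1}},
    {"text": "Contract graziers are missing from farm holdings data",
     "relevance": {"land management": 3, "payments": 2}},
    {"text": "Certainty about future support is needed before farmers invest",
     "relevance": {"payments": 3, "productivity": 2}},
]

def email_order(policy_area):
    """Return the findings that touch the policy area, most relevant first."""
    relevant = [f for f in findings if policy_area in f["relevance"]]
    return sorted(relevant, key=lambda f: f["relevance"][policy_area], reverse=True)

for area in ("payments", "productivity"):
    print(f"\nUpdate for the {area} team:")
    for rank, finding in enumerate(email_order(area), start=1):
        print(f"  {rank}. {finding['text']}")
```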

Presenting findings from each farmer discussion in specific emails gave each discussion group a space and voice specific to its geography and circumstances. These ‘micro-updates’ provided an opportunity to highlight issues salient to farmers across policy areas and to translate complex evidence into simple stories (Cairney and Oliver, 2017). Storytelling is increasingly recognised as an important dissemination tool in evidence-informed policy making because it makes simple messages persuasive (Cairney, 2016). Lock (2011) notes that simply presenting evidence often does not persuade the audience to make a particular decision or take a particular action. Rather, Davidson (2017, p. 3) writes that evidence needs to be:

packaged or ‘framed’ in ways that connect to people’s values and take account of the frames, worldviews or narratives in people’s heads of how the world works

Our framing of farmer stories appealed to policymakers’ emotions and therefore increased the resonance of our findings. The stories provided a “face” to anonymous facts (Davidson, 2017, p. 7) and highlighted the diversity of views behind aggregate assumptions. We used stories to inform colleagues about how farmers think policy ideas may play out in specific contexts and to prompt policymakers to consider the potential implications of their policies for farmers. Our storytelling approach increased the attractiveness of the evidence and helped policymakers to remember it (Jones and Crow, 2017). Stories help to engage people, and first-hand examples personalise the evidence to increase resonance (Davidson, 2017). For example, the evidence-based narratives we told were still being recounted around the office six months later.

In addition, the regularity of the emails kept policy teams informed about the research project. We strategically sent emails to align with the timings of policy developments. This increased the likelihood of our findings influencing policy because we identified suitable times to feed into policy development. In other words, we took advantage of ‘windows of opportunity’ (Service et al., 2014; Cairney and Kwiatkowski, 2017; Head, 2010a; Honig and Coburn, 2008). The uncertain timetable and evolving policy in relation to EU Exit meant that policymakers’ questions developed from week to week. The regular emails enabled policymakers to pose further questions they wanted us to ask farmers, which, if relevant, we then posed to discussion groups.

The emails became a channel for discussion between and among policy teams, thereby extending the reach of the work. For example, a ministerial advisor regularly responded to these emails, generating further email discussion among a range of colleagues. These cross-team discussions led to the sharing of further evidence as colleagues fed in their own insights from other work.

Slidedecks

The principal communication method was a slidedeck (Defra and Government Social Research (2018) Farmers voices, government listening. [Unpublished]). Following on from the award-winning publication of the ‘Future Farming and Environment Evidence Compendium’ (Defra and Government Statistical Service, 2018), we created slidedecks in a similar manner to present the evidence to policymakers. Because of our inexperience in graphic design, we drew on the skills of a graphic designer to ensure the slidedecks were visually stimulating. Initially, we created a short slidedeck containing high-level findings for ease of access by senior management. This slidedeck was subsequently sent to Ministers, thereby extending the reach of social research to the highest level in Defra and raising the profile of social research in government. A second slidedeck detailing the full findings was then created (Defra and Government Social Research (2018) Farmers voices, government listening. [Unpublished]). This was shared with policy teams and attendees of the farmer discussions. In comparison with an academic paper, the slidedeck was more concise and easier for policy teams to dip in and out of, which increased the accessibility of the evidence.

Bitesize sessions

We also communicated the research findings in informal lunchtime bitesize sessions. We encouraged people to bring their lunch to the bitesize sessions in order to create an informal atmosphere. This informality provided a ‘safe space’ for policymakers to ask questions that they may not otherwise ask in a more formal setting. This resonates with the work of Mawhinney (2010), who noted that teachers use school lunch times to have informal conversations and share professional knowledge with one another, thereby building trust between colleagues.

We undertook ‘guess the finding’ (Rothstein, 2016) in all bitesize sessions. This involved asking attendees a multiple-choice question related to our research, recording their answers, and then presenting the answer from our findings, for example:

Question: What did farmers think needs to change for future delivery of government-led schemes?
Answer 1: Forward contracts and price certainty
Answer 2: Certainty about the future support they will get
Answer 3: Confidence in return on investment
Answer 4: A good income in the first place; they need to be profitable in order to invest
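
As a minimal, hypothetical sketch (ours, not the tooling used in the sessions), the mechanics of the exercise could be scripted as follows; which option the findings actually supported is not stated above, so the ‘revealed’ answer below is invented for illustration.

```python
# Minimal, hypothetical sketch of a 'guess the finding' poll: show a
# multiple-choice question, tally attendees' guesses, then reveal the finding.
# The question and options come from the example above; the 'correct' option
# index is our invention for demonstration only.

question = ("What did farmers think needs to change for future delivery of "
            "government-led schemes?")
options = [
    "Forward contracts and price certainty",
    "Certainty about the future support they will get",
    "Confidence in return on investment",
    "A good income in the first place; they need to be profitable in order to invest",
]
finding_index = 1  # hypothetical: the option best supported by the discussions

def run_poll(guesses):
    """guesses: 0-based option indices recorded from attendees."""
    tally = {i: guesses.count(i) for i in range(len(options))}
    print(question)
    for i, option in enumerate(options):
        print(f"  [{tally[i]:>2} guesses] {i + 1}. {option}")
    print(f"\nWhat farmers told us: {options[finding_index]}")

# Example: guesses from ten attendees before the reveal.
run_poll([0, 1, 1, 2, 0, 3, 1, 2, 1, 0])
```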

The approach engaged attendees in the session and encouraged them to consider policy from farmers’ perspectives. There is a tendency for policymakers to focus on their own ideas or perceptions when developing policy, and for these ideas to be reinforced by the evidence they read and by their team through ‘groupthink’ (Hallsworth et al., 2018). Invoking Louis XIV and his hall of mirrors in the Palace of Versailles, Rose (1999) posits that we, as researchers, are susceptible to seeing what we want reflected back to us. In a similar manner, we found that policymakers also have a hall of mirrors. The rapid pace of change around EU Exit and associated policies made it difficult for policymakers to keep fully up to date with stakeholder opinions and evidence, and their hall of mirrors could therefore stand unchallenged. Our findings provided a snapshot of farmers’ viewpoints and stories which, for many colleagues, was a new insight to add to their existing evidence portfolio and understanding. Our approach therefore challenged some colleagues’ assumptions and perceptions about farmers by temporarily removing them from their hall of mirrors. This helped to build up both GSR and policy colleagues’ understanding of the evidence, which Head (2010b) states is crucial for evidence-informed policy.

The bitesize sessions generated further interest in the research and requests for more sessions to reach other policymakers. Reflecting Walter et al. (2003), our tailoring of evidence to policy teams led to in-depth, active discussions, which improved the likelihood of the evidence informing policy development. We used these sessions to ask policymakers how else we might effectively communicate our findings to their teams and were subsequently invited to policy-specific events, such as away days. However, some sessions were difficult to facilitate because policy teams were prone to talking about issues beyond our findings. We worked hard to bring the discussion back to the topic at hand and provided a digital copy of the slidedeck for policymakers’ reference after the bitesize session so that they could contact us with any follow-up questions.

These bitesize sessions facilitated the establishment of multiple networks between ourselves and the policy teams. These networks helped us to understand the constraints within which policy teams operate, the motivations and influences on policy colleagues besides research evidence, and in turn the value they attach to the different evidence sources that we presented to them (Cairney et al., 2016; Head, 2010b). In summary, the bitesize sessions helped us to develop trusted, long-term relationships with policy teams, which improved our influence and engagement in the policy making process.

Strategic meetings with senior management

We held seven meetings with senior management personnel who were responsible for entire policy areas and teams. We introduced ourselves as their policy team’s ‘go-to’ GSR and asked about their evidence gaps. This opened up further opportunities to share our findings more widely. For example, one senior manager provided the following feedback and request:

“I found last week’s briefing on the farmer engagement events really useful. I was wondering if you could repeat this for the colleagues in my team?”

The discussions enabled us to develop a strategic focus for our work and our priorities for social research moving forward. We fostered relations through sincere, direct engagement, which built trust and raised both our personal profiles and the profile of evidence. This approach helped us to disseminate evidence during a period of policy change and uncertainty, which Head (2010b) states is a difficult time for evidence-informed policy to operate. We used this engagement to contribute farmers’ voices to policy discussions and to highlight areas where evidence was missing from the policy making process, including where evidence conflicted with policy colleagues’ thinking. Senior management recognised that they needed to be aware of the evidence, understand its strengths and limitations, and reconcile this with the wider policy context. The meetings ensured evidence could be balanced with other competing perspectives in the “fuzzy, political and conflictual” policy process (Head, 2010b, p. 83). It was beneficial to attend these meetings in person because we were able to adapt our communication based on senior management’s body language (Opdenakker, 2006), provide more nuanced answers to questions, and create a more informal setting for discussing the research.

Inevitably, senior management raised connections between their team’s work and other policy areas in the Defra group. This enabled the identification of any overlap or potential clashes with other policy areas. It brought policy teams closer together and helped different policy teams to consider cross-cutting issues. This shows how periods of policy complexity can provide opportunities for evidence to be used and for siloed ways of working to be overcome (Head, 2010b).

These strategic meetings with senior management involved more time investment than other engagement methods, but they ensured farmers’ voices were heard by people involved at all levels of policy development.

Posters

The creation of posters provided a space to convert qualitative findings into eye-catching information. Evidence suggests that using a variety of dissemination methods can improve knowledge transfer, engagement with the research and, in turn, its impact (Walter et al., 2003; Ilic and Rowe, 2013; Witting, 2017). In particular, Ilic and Rowe (2013) state that posters provide a concise overview of research findings which can stimulate informal discussion. We used the poster to rouse interest in the research project and encourage evidence teams to contact us for more information. We sent the poster to evidence colleagues across the Defra Group via an online newsletter and displayed it in Defra offices across England. Sharing the poster with evidence colleagues created further requests for information about our research: for example, requests for a fuller report of the findings, requests to extract tailored evidence for specific evidence teams, and requests to deliver bitesize sessions for different policy teams. The poster improved the ‘poor fit’ (Head, 2010b) between how evidence is assembled by researchers and the practical needs of policy colleagues. We consider that evidence needs to be packaged and communicated in a well-written, targeted, and accessible way because even robust and reliable research is not always utilised in policy making.

Recommendations for evidence communication with policymakers

We have themed the recommendations arising from our experience of creating and communicating evidence in the research project described above. A summary of the recommendations is shown in Box 1.

Box 1: Relevant learnings from our work for academics who are trying to communicate their research to policymakers

Make the research relevant to policymakers

Involve policymakers early on in the research process and invite them to shape the research, for example defining policy questions for interviews and focus groups.

Be prepared to adapt a presentation to the situation depending on questions or comments made by policymakers. A bank of relevant stories specific to the policy area may help academics to respond to questions with memorable answers.

Have flexibility in methods to increase the attractiveness and relevance of research to policymakers.

Invest time to develop and maintain relationships with policymakers

Use relationships with policymakers to map out the policy landscape in order to quickly target communications to relevant civil servants; Government Social Researchers can potentially help academics to navigate the policy landscape. Ask senior management about the most appropriate engagement methods for their specific policy teams.

Network and engage with policymakers prior to beginning research to establish links and demonstrate how the research is of use for policy.

Utilise ‘windows of opportunity’

Find opportunities to regularly present research to policymakers, using tailored stories to present evidence from a specific voice and context.

Group research findings into policy specific themes so these are ready to disseminate quickly on request.

Provide regular updates to senior management and offer help when ‘windows of opportunity’ are identified.

Adapt presentation and communication styles to the audience

Summarise the research in a visually attractive format, with a variety of information such as repeated themes, quotes and key insights.

Use a consistent template and adapt presentations to the policy area.

Use different communication methods and types to extend the reach of the research, such as round-table discussions in face-to-face presentations, and a ‘Question and Answer’ format for online webinars. Interactive approaches such as ‘guess the finding’ can engage policymakers and reveal their assumptions and opinions.

Provide a digital copy of the presentation including contact details for any follow up.

Allow plenty of time for discussion of the research when organising presentations.

Use engaging publication methods to share findings with policymakers. We suggest using slidedecks and recommend involving a graphic designer in their creation.

Of course, evidence should be appropriate to policy needs (Parkhurst, 2017). Parkhurst (2017) notes that evidence is appropriate if it addresses the key policy concerns at hand, is applicable to the local context and is constructed in ways that are useful for addressing those concerns. To determine the appropriateness of evidence, and if desired to make evidence more appropriate to policy, we encourage academics to network and engage with policymakers before beginning the research. We recommend involving policymakers early on in the research to establish commitment, provide context to policy development and put a face to the evidence (Davidson, 2017). Commitment can be furthered by inviting policymakers to be involved in the research, if data collection methods and ethics allow. Academics can further allow policymakers to shape research by encouraging them to feed into methods, for example by defining questions for interviews and/or focus groups. This will enable academics to establish links with the policy making process, demonstrate how the research can help to answer the key question(s) being asked and establish to whom their research is relevant (Evans and Cvitanovic, 2018).

We suggest academics invest time in creating and maintaining relationships with policymakers by providing regular updates to senior management and offering evidence when ‘windows of opportunity’ are identified (see below). It is a challenge to engage continuously with policymakers (using methods described throughout this paper) because they are busy and often deal with uncertainty in policy developments. However, we consider it a necessary part of effective engagement.

In addition, we recommend academics find opportunities to regularly present research to policymakers so they become interested and invested in the work. When presenting information, use tailored stories to present evidence from a specific voice and context. The tailoring of research will help policymakers to recognise and remember the points pertinent to their work. It is essential to adapt presentations to the policy area. However, also be prepared to adapt to the situation depending on the questions or comments made. It is beneficial to have a bank of relevant stories specific to the policy area so academics can respond to questions with memorable answers and enhance the richness of the discussion. We recommend that sessions are organised to last 60 minutes to ensure there is enough time for academics and policymakers to have detailed conversations.

We recommend that academics do not hold on to evidence until it is finished and published, but rather share it with civil servants whenever it is relevant to a policy question. Taking advantage of ‘windows of opportunity’ increases the likelihood of evidence being considered in the policy making process. Periods of policy ambiguity, such as that created by EU Exit, offer greater scope for decisions to be evidence-informed than matters that are tightly constrained by politics (Head, 2010a). Thus, it is important to be aware of topics of current policy interest to ensure evidence is timely.

Multiple communication methods may need to be employed with different policymakers in different scenarios to meet their evidence needs, such as round-table discussions in face-to-face sessions and a ‘Question and Answer’ format for online webinars. The use of multiple methods can develop an evidence-based dialogue among policymakers and therefore encourage the sharing of evidence. We suggest using interactive approaches such as ‘guess the finding’ to engage policymakers and uncover their assumptions and opinions. We recommend academics provide a digital copy of any presentations for the audience’s reference, including contact details for any follow-up.

In periods of policy ambiguity, civil servants are often stretched for time and are unlikely to invest time in reading a complex journal article. We recommend that, instead of sending journal articles to policymakers, academics send a slidedeck or poster and offer to meet with the relevant policy team to share evidence face-to-face, for example in lunchtime sessions. These communication methods make it easier for policymakers to understand the evidence and help to form relationships between academics and the policy team. These relationships may be long-lasting and lead to the co-production of evidence in the future.

This paper is based on a particular research project and therefore does not cover every possible method of communicating research with policy. We suggest that academics consider our recommendations, which arise from first-hand experience, alongside those made in Oliver and Cairney’s (2019) systematic review of ‘how to’ advice in the academic peer-reviewed and grey literatures. In addition to employing the communication methods described in this paper, we recommend that more academics look to bridge the gap between the ‘two communities’ of academia and policy. For example, academics can undertake fellowships and/or secondments with government to learn about policy making and how to effectively share research with policymakers, thereby increasing the impact of their research. We are keen for more academics to provide commentary on their experiences of working with policymakers and for GSR to share ‘lessons learned’, with the aim of improving the links between academia and policy making.

We recognise that our recommendations take time and effort to implement and do not always lead to the immediate results that may be desired. For example, policymakers may drop out of meetings at the last minute, it can be difficult to establish commitment from policy teams, and academics may not have the flexibility to develop the research method as policy questions and problems evolve. To increase the likelihood of evidence being considered in decision making, we recommend that academics draw on knowledge brokers to connect with policy-relevant knowledge networks and become aware of opportunities for new thinking. Knowledge brokers include academics working on research projects for government departments, individuals working in both government and academia, departmental social research expert groups (for example, the Defra Social Science Expert Group; see Note 2) and GSR. Academics can contact the Government Social Research profession (Government Social Research profession, 2019) and ask to be put in touch with GSR in the relevant field of work, who can help them navigate the policy landscape. Academics can then invite GSR and/or policymakers to be involved in their research from its inception and co-design the research together. This will likely lead to the research being discussed by policy teams, develop civil servants’ investment in the research and may increase the pertinence of the research to policy.

In summary, in this paper we have shared our practical experience of creating and communicating research with policymakers. For academics wanting to increase the utility of research for policy, we recommend making the research relevant to policymakers, investing time to develop and maintain relationships with policymakers, utilising ‘windows of opportunity’, and adapting presentation and communication styles to the audience. We consider that employing these recommendations will help to improve how evidence is communicated between academia and government, and the influence of evidence in decision-making processes.

Data availability

The materials generated and/or analysed during the current study are not currently publicly available, but are available from the corresponding author on reasonable request.

Notes

1. The Defra group includes: Defra; Natural England; the Environment Agency; the Animal and Plant Health Agency; the Rural Payments Agency; the Forestry Commission; and the Veterinary Medicines Directorate.

2. The Social Science Expert Group is a sub-group of Defra’s Science Advisory Council (Science Advisory Council, 2019).

References

Cairney P (2016) The politics of evidence-based policy making. Palgrave Macmillan UK, London


Cairney P, Kwiatkowski R (2017) How to communicate effectively with policymakers: combine insights from psychology and policy studies. Palgrave Commun 3:37. https://doi.org/10.1057/s41599-017-0046-8


Cairney P, Oliver K (2017) Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy? Health Res Policy Syst 15:35. https://doi.org/10.1186/s12961-017-0192-x


Cairney P, Oliver K, Wellstead A (2016) To bridge the divide between evidence and policy: reduce ambiguity as much as uncertainty. Public Adm Rev 76(3):399–402. https://doi.org/10.1111/puar.12555

Caplan N (1979) The two-communities theory and knowledge utilization. Am Behav Sci 22(3):459–470. https://doi.org/10.1177/000276427902200308

Cooper ACG (2016) Exploring the scope of science advice: social sciences in the UK government. Palgrave Commun 2:16044. https://doi.org/10.1057/palcomms.2016.44

Davidson B (2017) Storytelling and evidence-based policy: lessons from the grey literature. Palgrave Commun 3:17093. https://doi.org/10.1057/palcomms.2017.93

Defra (2018) Health and Harmony: the future for food, farming and the environment in a Green Brexit–policy statement. Department for Environment Food and Rural Affairs. https://www.gov.uk/government/publications/the-future-for-food-farming-and-the-environment-policy-statement-2018/health-and-harmony-the-future-for-food-farming-and-the-environment-in-a-green-brexit-policy-statement . Accessed 17 Jan 2019

Defra and Government Statistical Service (2018) The Future Farming and Environment Evidence Compendium. Department for Environment Food and Rural Affairs. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/683972/future-farming-environment-evidence.pdf . Accessed 17 Jan 2019

Dunn W (1980) The two‐communities metaphor and models of knowledge use: an exploratory case survey. Sci Commun 1(4):515–536. https://doi.org/10.1177/107554708000100403

Evans MC, Cvitanovic C (2018) An Introduction to achieving policy impact for early career researchers. Palgrave Commun 4:88. https://doi.org/10.1057/s41599-018-0144-2

Farley-Ripple L (2012) Research use in school district central office decision making: a case study. Educ Manag Adm Leadersh 40(6):786–806. https://doi.org/10.1177/1741143212456912

Fontana A (2007) Interviewing, structured, unstructured, and postmodern. The Blackwell Encyclopaedia of Sociology. https://doi.org/10.1002/9781405165518.wbeosi070 . Accessed 17 Jan 2019

Government Social Research profession (2019) Government Social Research profession. https://www.gov.uk/government/organisations/civil-service-government-social-research-profession . Accessed 26 June 2019

Hallsworth M, Egan M, Rutter J, McCrae J (2018) Behavioural Government: using behavioural science to improve how governments make decisions. The Behavioural Insights Team. https://www.behaviouralinsights.co.uk/publications/behavioural-government/ . Accessed 17 Jan 2019

Head B (2010a) Evidence-based policy: principles and requirements. In: Australian Government Productivity Commission (ed). Productivity Commission (2010) Strengthening evidence-based policy in the Australian Federation, 1, Roundtable Proceedings, Productivity Commission, Canberra, Australia, pp. 13–26

Head B (2010b) Reconsidering evidence-based policy: key issues and challenges. Policy Soc 29(2):77–94. https://doi.org/10.1016/j.polsoc.2010.03.001

Honig M, Coburn C (2008) Evidence-based decision making in school district central offices. Educ Policy 22(4):578–608. https://doi.org/10.1177/0895904807307067

Jones MD, Crow A (2017) How can we use the ‘science of stories’ to produce persuasive scientific stories? Palgrave Commun 3:53. https://doi.org/10.1057/s41599-017-0047-7

Lexico (2019) Lexico: dictionary. https://www.lexico.com/en/definition/policymaker . Accessed 28 June 2019

Ilic D, Rowe N (2013) What is the evidence that poster presentations are effective in promoting knowledge transfer? A state of the art review. Health Inf Libr J 30(1):4–12. https://doi.org/10.1111/hir.12015

Lock SJ (2011) Deficits and dialogues: science communication and the public understanding of science in the UK. In: Bennett DJ, Jennings RC (eds) Successful science communication. Cambridge University Press, Cambridge, p 17–30


Mawhinney L (2010) Let’s lunch and learn: professional knowledge sharing in teachers’ lounges and other congregational spaces. Teach Teach Educ 26(4):972–978. https://doi.org/10.1177/0192636515602330


Mayne R, Green D, Gujit I, Walsh M, English R, Cairney P (2018) Using evidence to influence policy: Oxfam’s experience. Palgrave Commun 3:122. https://doi.org/10.1057/s41599-018-0176-7

Monaghan M (2011) Evidence versus politics: exploiting research in UK drug policy making? The Policy Press, Bristol


Morgan D (1997) The focus group guidebook. Sage Publications, London

Newman J (2014) Revisiting the “two communities” metaphor of research utilisation. Int J Public Sect Manag 27(7):614–627. https://doi.org/10.1108/IJPSM-04-2014-0056

Newman J, Head BW (2015) Beyond the two communities: a reply to Mead’s “why government often ignores research”. Policy Sci 48(3):383–393. https://doi.org/10.1007/s11077-015-9226-9

Newman J, Cherney A, Head BW (2015) Do policy makers use academic research? Re-examining the “Two communities” theory of research utilisation. Public Adm Rev 76(1):24–32. https://doi.org/10.1111/puar.12464

Newman J, Cherney A, Head BW (2016) Policy capacity and evidence-based policy in the public service. Public Manag Rev 19(2):157–174. https://doi.org/10.1080/14719037.2016.1148191

Nutley S, Walter I, Davies H (2007) Using evidence: how research can inform public services. Policy Press, Bristol

Oliver K, Cairney P (2019) The dos and don’ts of influencing policy: a systematic review of advice to academics. Palgrave Commun 5:21. https://doi.org/10.1057/s41599-019-0232-y

Oliver S, Duncan S (2019) Editorial: looking through the Johari window. Res All 3(1):1–6. https://doi.org/10.18546/RFA.03.1.01

Opdenakker R (2006) Advantages and disadvantages of four interview techniques in qualitative research. Forum Qual Sozialforschung/Forum Qual Soc Res 7(4):Art. 11. https://doi.org/10.17169/fqs-7.4.175

Parkhurst J (2017) The politics of evidence: from evidence-based policy to the good governance of evidence. Routledge, London

Rose DB (1999) Indigenous ecologies and an ethic of connection. In: Low N (ed) Global ethics and environment. Routledge, London, p 175–187

Rothstein T (2016) 3 workshop ideas for sharing your research findings. https://medium.com/@tessrothstein/make-your-findings-interactive-d83a2204b11e . Accessed 17 Jan 2019

Sanderson I (2009) Intelligent policy making for a complex world: pragmatism, evidence and learning. Political Stud 57(4):699–719. https://doi.org/10.1111/j.1467-9248.2009.00791.x

Science Advisory Council (2019) Science Advisory Council. https://www.gov.uk/government/organisations/science-advisory-council . Accessed 26 June 2019

Service O, Hallsworth M, Halpern D, Algate F, Gallagher R, Nguyen S, Ruda S, Sanders M, Pelenur M, Gyani A, Harper H Reinhard J, Kirkman E (2014) EAST: four simple ways to apply behavioural insights. https://www.behaviouralinsights.co.uk/wp-content/uploads/2015/07/BIT-Publication-EAST_FA_WEB.pdf . Accessed 17 Jan 2019

Slavin RE (2002) Evidence-based education policies: transforming educational practice and research. Educ Res 31(7):15–21. https://doi.org/10.3102/0013189X031007015

Topp L, Mair D, Smillie L, Cairney P (2018) Knowledge management for policy impact: the case of the European Commission’s Joint Research Centre. Palgrave Commun 4:87. https://doi.org/10.1057/s41599-018-0143-3

UK Government (2013) What works: evidence centres for social policy. UK Cabinet Office, London

Walter I, Nutley S, Davies H (2003) Research impact: a cross sector literature review. Part of a wider project entitled ‘Models of Research Impact: a cross sector review’, funded by the Learning and Skills Development Agency (LSDA). https://www.researchgate.net/profile/Huw_Davies5/publication/265218078_Research_Impact_A_Cross_Sector_Literature_Review/links/56013a2808aeba1d9f84f180.pdf . Accessed 17 Jan 2019

Wehrens R (2014) Beyond two communities–from research utilization and knowledge translation to co-production? Public Health 128(6):545–551. https://doi.org/10.1016/j.puhe.2014.02.004


Witting A (2017) Insights from ‘policy learning’ on how to enhance the use of evidence by policymakers. Palgrave Commun 3:49. https://doi.org/10.1057/s41599-017-0052-x


Acknowledgements

We thank Tony Pike for establishing the research project on which this paper is based. Thanks to Jenny Kemp for designing and creating the slidedecks and the process chart shown in Fig. 2.

Author information

Authors and affiliations

UK Government, Department for Environment Food and Rural Affairs, London, UK

Jessica H. Phoenix, Lucy G. Atkinson & Hannah Baker


Corresponding author

Correspondence to Jessica H. Phoenix.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Phoenix, J.H., Atkinson, L.G. & Baker, H. Creating and communicating social research for policymakers in government. Palgrave Commun 5 , 98 (2019). https://doi.org/10.1057/s41599-019-0310-1


Received : 24 January 2019

Accepted : 05 August 2019

Published : 27 August 2019

DOI : https://doi.org/10.1057/s41599-019-0310-1




