
When big companies fund academic research, the truth often comes last


Lisa Bero, Chair Professor, University of Sydney

Disclosure statement

Lisa Bero receives funding from the National Health and Medical Research Council to study bias in research. She has served on a number of university, national and international committees related to conflicts of interest, academic-industry relations, or academic freedom.

University of Sydney provides funding as a member of The Conversation AU.


This article is part of a series on academic freedom where leading academics from around the world write on the state of free speech and inquiry in their region.

Over the last two decades, industry funding for medical research has increased globally, while government and non-profit funding has decreased. By 2011, industry accounted for two-thirds of medical research funding worldwide, far outstripping public sources.

Research funding from other industries is increasing too, including food and beverage, chemical, mining, computer and automobile companies. And as a result, academic freedom suffers.

Industry sponsors suppress publication

An early career academic recently sought my advice about her industry-funded research. Under the funding contract, which her supervisor had signed, she would not be able to publish the results of her clinical trial.

Another researcher, a doctoral student, asked for help with her dissertation. Her work falls under the scope of her PhD supervisor’s research funding agreement with a company. This agreement prevented the publication of any work deemed commercial-in-confidence by the industry funder. So, she will not be allowed to submit the papers to fulfil her dissertation requirements.

Read more: Influential doctors aren't disclosing their drug company ties

I come across such stories often and they all have one thing in common. The blocked publications present the sponsoring companies’ products in an unfavourable way. While the right to publish is a mainstay of academic freedom, research contracts often include clauses that give the funder the final say on whether the research can be published.

Early career researchers are particularly vulnerable to publication restrictions when companies fund their research. Scientific publication is vital to their career advancement, but their supervisors may control the research group’s relationship with industry.


Senior researchers can also be vulnerable to industry suppressing their research. In the 1980s, a pharmaceutical company funded a researcher to compare its brand-name thyroid drug with generic equivalents. The researcher found the generics were as effective as the branded product.

The funder then went to great lengths to suppress the publication of her findings, including taking legal action against her and her university.

And there is little institutional oversight. A 2018 study found that, among 127 academic institutions in the United States, only one-third required their faculty to submit research consulting agreements for review by the institution.

And 35% of academic institutions did not think it was necessary for the institution to review such agreements. When consulting agreements were reviewed, only 23% of academic institutions looked at publication rights. And only 19% looked for inappropriate confidentiality provisions, such as prohibiting communication about any aspect of the funded work.

Industry sponsors manipulate evidence

The definition of academic freedom boils down to freedom of inquiry, investigation, research, expression and publication (or dissemination).

Read more: Freedom of speech: a history from the forbidden fruit to Facebook

Internal industry documents obtained through litigation have revealed many examples of industry sponsors influencing the design and conduct of research, as well as the partial publication of research where only findings favourable to the funder were published.

For instance, in 1981 an influential Japanese study showed an association between passive smoking and lung cancer. It concluded that wives of heavy smokers had up to twice the risk of developing lung cancer as wives of non-smokers, and that the risk was dose-related.

Tobacco companies then funded academic researchers to create a study that would refute these findings. The tobacco companies were involved in every step of the funded work, but kept the extent of their involvement hidden for decades. They framed the research questions, designed the study, collected and provided data, and wrote the final publication.


This publication was used as “evidence” that tobacco smoke is not harmful. It concluded there was no direct evidence passive smoke exposure increased risk of lung cancer. The tobacco industry cited the study in government and regulatory documents to refute the independent data on the harms of passive smoking.

Industry sponsors influence research agendas

The biggest threat to academic freedom may be the influence industry funders have on the very first stage in the research process: establishing research agendas. This means industry sponsors get unprecedented control over the research questions that get studied.

We recently reviewed research studies that looked at corporate influence on the research agenda. We found industry funding drives researchers to study questions that aim to maximise benefits and minimise harms of their products, distract from independent research that is unfavourable, decrease regulation of their products, and support their legal and policy positions.


In another tobacco-related example, three tobacco companies created and funded the Center for Indoor Air Research to conduct research that would “distract” from evidence of the harms of second-hand smoke. Throughout the 1990s, this centre funded dozens of research projects suggesting that components of indoor air, such as carpet off-gases or dirty air filters, were more harmful than tobacco smoke.

The sugar industry also attempted to shift the focus away from evidence showing an association between sugar and heart disease. It was only recently revealed that, in the 1960s, the sugar industry paid scientists at Harvard University to minimise the link between sugar and heart disease, and to shift the blame for the heart disease epidemic from sugar to fat.

Read more: Essays on health: how food companies can sneak bias into scientific research

The paper’s authors suggested many of today’s dietary recommendations may have been largely shaped by the sugar industry. And some experts have since questioned whether such misinformation could have contributed to today’s obesity crisis.

Coca-Cola and Mars have also funded university research on physical activity to divert attention away from the association of their products with obesity.

How do we protect academic freedom?

In a climate where relations between academia and industry are encouraged and industry funding for research continues to grow, academics must guard against threats to academic freedom posed by industry support.

Academic freedom means industry funding must come with no strings attached. Researchers must ask themselves if accepting industry funding contributes to the mission of discovering new knowledge or to an industry research agenda aimed at increasing profits.

Governments or independent consortia of multiple funders, including government and industry, must ensure support for research that meets the needs of the public.

When research is supported by industry, funders should not dictate the design, conduct or publication of the research. Many universities have and enforce policies that prevent such restrictions, but this is not universal. Open science, including publication of protocols and data, can expose industry interference in research.

Scientists should never sign, or let their institution sign, an agreement that gives a funder power to prevent dissemination of their research findings. Universities and scientific journals must protect emerging researchers and support all academics in fending off industry influence and preserving academic freedom.

Read the first article in the academic freedom series here



Open Access

Peer-reviewed

Research Article

Getting to the bottom of research funding: Acknowledging the complexity of funding dynamics

Kaare Aagaard (Department of Political Science, Danish Centre for Studies in Research and Research Policy, Aarhus University, Aarhus, Denmark). Roles: Conceptualization, Data curation, Methodology, Project administration, Writing – original draft, Writing – review & editing. * E-mail: [email protected]

Philippe Mongeon (Department of Political Science, Danish Centre for Studies in Research and Research Policy, Aarhus University, Aarhus, Denmark; School of Information Management, Faculty of Management, Dalhousie University, Halifax, Canada). Roles: Data curation, Formal analysis, Software, Visualization.

Irene Ramos-Vielba. Roles: Conceptualization, Methodology, Writing – original draft, Writing – review & editing.

Duncan Andrew Thomas.

  • Published: May 12, 2021
  • https://doi.org/10.1371/journal.pone.0251488

Abstract

Research funding is an important factor for public science. Funding may affect which research topics get addressed, and what research outputs are produced. However, funding has often been studied simplistically, using top-down or system-led perspectives. Such approaches often restrict analysis to confined national funding landscapes or single funding organizations and instruments in isolation. This overlooks interlinkages, broader funding researchers might access, and trends of growing funding complexity. This paper instead frames a ‘bottom-up’ approach that analytically distinguishes between increasing levels of aggregation of funding instrument co-use. Funding of research outputs is selected as one way to test this approach, with levels traced via funding acknowledgements (FAs) in papers published 2009–18 by researchers affiliated to Denmark, the Netherlands or Norway, in two test research fields (Food Science, Renewable Energy Research). Three funding aggregation levels are delineated: at the bottom, ‘funding configurations’ of funding instruments co-used by individual researchers (from single-authored papers with two or more FAs); a middle, ‘funding amalgamations’ level, of instruments co-used by collaborating researchers (from multi-authored papers with two or more FAs); and a ‘co-funding network’ of instruments co-used across all researchers active in a research field (all papers with two or more FAs). All three levels are found to include heterogeneous funding co-use from inside and outside the test countries. There is also co-funding variety in terms of instrument ‘type’ (public, private, university or non-profit) and ‘origin’ (domestic, foreign or supranational). Limitations of the approach are noted, as well as its applicability for future analyses not using paper FAs to address finer details of research funding dynamics.

Citation: Aagaard K, Mongeon P, Ramos-Vielba I, Thomas DA (2021) Getting to the bottom of research funding: Acknowledging the complexity of funding dynamics. PLoS ONE 16(5): e0251488. https://doi.org/10.1371/journal.pone.0251488

Editor: Cassidy R. Sugimoto, Indiana University Bloomington, UNITED STATES

Received: July 29, 2020; Accepted: April 27, 2021; Published: May 12, 2021

Copyright: © 2021 Aagaard et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the paper and its two Supporting Information files. Further details of the core/expansion approach used to produce the publication dataset, i.e. core keywords, journals, and article-level clusters for ‘Renewable Energy Research’ and ‘Food Science’ publications, are in S1. The data used in the paper, in anonymised form with no personal data included, are in S2.

Funding: This work was funded by the Novo Nordisk Foundation under the project title, 'Promoting the socio-economic impact of research - the role of funding practices (PROSECON)', grant number NNF18OC0034422 (received by authors KA, PM, IRV, DT). The funder website URL is: https://novonordiskfonden.dk/en/ . The funder had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Ways and means to allocate research funding are considered one of the most influential elements in any attempt to govern contemporary public science. Funding is assumed likely to affect which topics get addressed, and the scope, content, direction, outputs and even potential impacts of public research [e.g. 1 – 4 ]. Funding secures the livelihood of researchers and is an indispensable prerequisite for almost all research [ 5 ]. Yet knowledge of how funding affects research remains fragmented and inconclusive. This stems in part from how research funding has been studied. In this paper we develop and test an approach that attempts to broaden how such studies are framed, by focusing on how researchers co-use funding at multiple different levels of aggregation.

Limitations of existing research funding studies mainly stem from insufficient attention to recent dynamics, i.e. to growing heterogeneity, complexity and related dynamic trends in contemporary research funding [ 5 , 6 ]. There have been few attempts to acknowledge and identify the wide variety of potentially interlinked sets of funders now operating in scientific fields. Similarly, there has been insufficient exploration of the characteristics of the assorted funding instruments researchers may co-use to do research. Instead, most literature has typically studied single funding instruments and single funding organizations [see e.g. 2 , 7 ] or has analysed strictly-confined national funding landscapes from a top-down/system-led perspective [ 4 , 8 – 10 ]. However, the underlying assumptions justifying these approaches may no longer be adequate. They may even prevent deeper understanding of how researchers’ research funding can be composed and co-used, and how it influences research [ 11 ].

Existing approaches are particularly challenged by increasing developments towards more complicated and dynamic, border-crossing funding landscapes, where regional, national and supranational funders have proliferated from public, private and non-profit sectors [ 3 , 5 , 11 , 12 ]. Despite recent calls for studies of this emerging research funding reality [ 5 , 13 ] most scholarly claims remain based on general observations rather than on systematic, empirical studies [ 14 ]. An accurate understanding of how funding works, however, is a precondition for more realistic study of potential effects upon research of different kinds and combinations of funding. Improved understanding might also benefit research governance, revealing perhaps unsuspected funding synergies for policy action or discouraging use of possibly problematic research funding designs.

Research funding can also embody signals about needs that government(s), their agencies, industries and societies expect funded research to address [ 15 ]. An indicative example is research funded to address Grand Societal Challenges (SCs). For instance, in Europe, dedicated SC-related funding is expected to support research aiming to address pressing global problems (e.g. food security, energy security, public health, impacts of a changing climate). Policy or funder attempts to (re)direct such research to address these challenges is of course mediated by what research funding researchers actually mobilise to do their research. This may now include co-using more than one funding source, potentially leading researchers to face multiple, even conflicting signals from multiple funders.

The funding individual researchers actually encounter and (co-)use therefore is a crucial element. This is especially so because researchers are also an acknowledged obligatory passage point between research funding and research practice [ 16 ]. Approaches to study research funding, therefore, must explore the roles of funding from their viewpoint, rather than only from a coarse, system viewpoint. For this reason, our approach takes a researcher-led perspective. This leads to a ‘bottom-up approach’, entering the science system at the grassroots level of individual researchers and their funding, followed by increasing levels of aggregation of co-use of funding of collaborating researchers, and of a field of active researchers. In this paper, this approach is tested for two SC-related fields where multiple funders are known to operate, and are likely to provide varied funding instruments: Food Science and Renewable Energy Research.

For this approach, there are, however, different points in the research process through which funding could be studied. Funding could be studied as a research process input, as enabling certain research practices, or as associated with particular research outputs. To study funding as an input, for instance, document analysis of funding instrument characteristics could be attempted. To study funding of research practices, researchers could be surveyed or interviewed about their funding uses. Funding of varied forms of research outputs can instead be analysed. To demonstrate the broad applicability of a bottom-up approach, ideally several of these points would be studied together. For the approach-testing purposes of this paper, however, we study funding aggregation levels only through research outputs in one of these forms: research publications. This selection is advantageous because funding of research outputs is reported in a standardized fashion. Studying non-paper formats of research output could run into difficulties, e.g. variation in funding reporting protocols. By contrast, in publications, researchers have for over a decade routinely self-reported certain funding details, as funding acknowledgements (FAs). To provide a manageable scale for the test, FAs in papers are selected only for researchers affiliated to research organizations (ROs) in three countries (i.e. Denmark, Norway, the Netherlands). This selection offers both sufficient variety, as these are small, similar yet still research-intensive funding contexts, and manageable scale.

Overall this paper aims, first, to frame a bottom-up, researcher-led perspective on research funding dynamics. Second, it will empirically test an analysis of research funding using this perspective through the specific example of FAs in researchers’ publications. The approach is based upon three analytical levels (a short sketch of the mapping follows the list):

  • ‘Funding configurations’ of funding instruments co-used at the level of individual researchers.
  • ‘Funding amalgamations’ of funding instruments co-used at the level of collaborating researchers.
  • ‘Co-funding networks’ of funding instruments co-used at the level of all researchers active in the field.
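To make the three levels concrete, here is a minimal sketch (our illustration, not code from the paper) of how a single publication record, given only its author count and its number of acknowledged funding instruments, would map onto the levels. The Paper structure and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    authors: list  # disambiguated author names
    fas: list      # acknowledged funding instruments (FAs)

def co_use_level(paper):
    """Return which level of funding co-use a paper evidences, if any."""
    if len(paper.fas) < 2:
        return None                      # zero or one instrument: no co-use observable
    if len(paper.authors) == 1:
        return "funding configuration"   # co-use by an individual researcher
    return "funding amalgamation"        # co-use by collaborating researchers

# The field-level 'co-funding network' aggregates every paper for which
# co_use_level() is not None, across all researchers active in the field.
print(co_use_level(Paper(authors=["A"], fas=["grant-1", "grant-2"])))
# -> funding configuration
```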

The paper is structured as follows: the next section justifies why adding a bottom-up perspective is needed and could improve understanding of research funding dynamics. This is followed by a section presenting the bottom-up approach. The strengths and weaknesses of FAs as a data source to illustrate the approach are then reviewed, and the added value of using FAs for a multi-level approach to study research funding dynamics is described. The subsequent section describes the test case selection, research field delineations, FA data collection, cleaning and coding. Findings are then presented for funding configurations, funding amalgamations and field co-funding networks based on the FA data. Finally, we reflect on how the bottom-up, researcher-led approach to funding co-use contributes to understanding research funding dynamics, and conclude with implications for science policy and further research directions.

Why add a bottom-up perspective on research funding?

To understand the value of adding a bottom-up perspective, it should be noted that most previous approaches to research funding studies have relied on a top-down or system-led perspective. This means that to observe funding, studies have focused on features of funding aggregated for national or otherwise geographically confined funding landscapes [e.g. 8 , 9 , 17 , 18 ]. Alternatively, they have examined effects of single funding organizations or single funding instruments within the science system, and attempted to isolate effects for just one funding source [e.g. 10 ].

There are sound, pragmatic, methodological and conceptual reasons for this situation. Clear research field delineations and geographical boundaries for funders’ assumed spheres of operation and influence, drawn from the top down, can isolate effects of specific funding organizations or instruments from many other extraneous, mediating factors. They also likely serve the interests of funders aiming to document and evaluate effects of their own specific investments. Likewise, a national focus may suit vested interests of national policy makers and stakeholders [ 11 ] in legitimising particular uses of public resources. Such analyses may be valid for specific research objectives, e.g. to delineate the portfolio of a single funding organization. However, additional, complementary approaches are needed to generate broader understandings of researcher-level funding dynamics.

Traditional positions have also generated important insights. Top-down perspectives – typically based on data from OECD Main Science and Technology Indicators (MSTI), Eurostat, national statistics or large cross-country studies – afford broad overviews of central components of national funding systems. These enable comparative insights into important characteristics at country level, such as volume of R&D funding, balances between institutional funding and project funding, distribution of funding between disciplines, and allocation mechanisms for institutional funding [e.g. 4 , 9 , 19 ]. Studies of individual funders and single funding instruments have also provided insight into key characteristics and mechanisms, like how selection procedures for funding instruments might enable support of exceptional research [e.g. 2 , 20 ] or how funding properties, and properties of the research funded, become interdependent for some funding instrument classes [e.g. 7 , 21 , 22 ].

However, such studies also build on two key assumptions that may have become increasingly questionable. First, they assume national funding landscapes mapped and studied from the top down provide a relatively complete picture of funding actually mobilised by researchers working within a given national system. Second, they assume individual funders and funding sources mainly operate in isolation – and hence their operations and effects for researchers using their funding can and should be studied in a kind of vacuum. These assumptions may once have been appropriate, but they are now challenged by multiple, interrelated developments in science and science policy.

First, funding has shifted from internal/institutional to external/project-based in most countries [ 2 , 23 – 26 ]. Second, the scope of research policy has broadened to direct funding towards more diverse goals, e.g. promoting ‘excellence’ in science, providing solutions to economic and social problems, and fostering technology and innovation [ 6 ]. Project funding instruments and funders have thus multiplied and become increasingly differentiated [ 9 ]. In addition, public knowledge production has become more integrated into society, creating an ‘extended peer community’ [ 27 , 28 ] where non-governmental actors are involved as funders, collaborators or stakeholders. Third, increased globalisation of science has made international research collaboration far more widespread [ 11 ], increasing the importance of non-national funding. This includes different types of EU allocations, funding from public, non-profit and private sources, and funding from other countries either directly or indirectly (i.e. through collaboration with international colleagues). All three of these challenges are potentially in play simultaneously. Consequently, the organization of science increasingly ceases to follow disciplinary and/or national borders, and instead features trans-national, trans-sectoral, multi/inter/trans-disciplinary academic communities or fields, and is characterised by both pervasive geographic and research field boundary-crossing [ 11 , 29 ].

Contemporary research at individual, collaborative and field levels, therefore, also is likely to rely on multiple, heterogeneous funding, and funding co-use. Blends of funding instruments mobilised from the bottom up are a part of this development of increasingly multi-level, multi-actor governance systems, and will differ substantially from traditional perceptions of research funding driven by national, public authorities [ 3 , 5 , 6 ]. For these reasons, studying funding dynamics must now consider not only how national funding systems are designed from the top down, but also how funding is mobilised and even co-used by researchers from the bottom up.

While these challenges are increasingly acknowledged, most of the literature has not yet addressed them. However, some recent studies have begun to call for a ‘reality check’ [see e.g. 5 , 12 , 13 , 30 ]. It is becoming recognised that these research funding dynamics require new approaches, and the limitations of traditional approaches are increasingly highlighted [ 6 , 12 , 13 ]. It is argued that research funding might be better treated as sets of interlinked geographical and/or research field spaces of interaction between different layers of research funders and performers [ 12 ]. These spaces are being recognised as not exogenously defined by organizational or geographical distinctions. Instead they need to be empirically observed [ 12 ]. Additionally, the increasing importance of charities and supranational funding agencies [ 20 , 31 ] and the emergence of new funding schemes for science [ 32 ] all need more explicit consideration. New approaches tailored to capture these more complex funding dynamics are, therefore, seen to be necessary [ 5 ].

Most previous studies are further challenged by the analytical level that is adopted. The typical emphasis on aggregating at national level may mask important field differences even within countries. These can include, e.g., large differences across scientific fields in the balance between institutional and project funding [ 18 ]. Similarly, the roles of non-public or non-national funders may vary across fields within the same national system. Public sources may dominate in some fields, whereas private foundations, patient organizations or companies may dominate in others. Field-specific funding organizations or funding instruments may be marginal in the national picture but could nevertheless play substantial roles at lower levels of aggregation. Similarly, the importance of non-national funding may vary across fields.

National or international research-related statistical organizations are also seldom able to capture the full extent of research fields, to provide usable field delineations for funding studies. Empirically-delineated research fields often will not correspond to the organizational or disciplinary categories used by nation-oriented bodies. Even statistical data at disaggregated levels, if originally framed by national levels, may introduce demarcation problems. Therefore, as science organization and funding have become more complex over time, unsurprisingly the availability of appropriate research funding statistics has struggled to keep pace [ 6 ].

Overall, how science and policy have developed has increasingly challenged traditional top-down, system-led perspectives on research funding. Coupled with field delineation and statistics shortcomings, this generates numerous, problematic assumptions about research funding. The available research funding literature has seemingly not yet fully embraced the implications of these changes. This justifies exploration of a new perspective and approach better suited to tackle these current funding dynamics, and to complement the strengths more traditional approaches can still provide (e.g. regularity, standardization).

A bottom-up approach to research funding dynamics

To begin, the bottom-up approach to research funding dynamics considers funding of the smallest knowledge producer in any research field, i.e. individual researchers, and their research funding instruments. A ‘funding instrument’ is understood as the lowest identifiable level, discrete resource unit provided by any funder (e.g. a specific grant from an internal or external funder). Individual researchers work at a research organization and, considered within a specific window of time, may sometimes co-use a set of funding instruments, i.e. a ‘funding configuration’. An individual researcher’s configuration could fund, e.g., writing a paper, building equipment, conducting fieldwork, disseminating research results and so on. The next, middle level of aggregation considers funding of knowledge production where researchers collaborate, i.e. funding instruments co-used by collaborating researchers. Co-use here may not mean researchers use each other’s funding, simply that funding has somehow been used to undertake a collaborative research activity. This could include, but is not limited to, co-authoring a research publication. This is a ‘funding amalgamation’. The top level considers funding of knowledge production at the level of a research field. This aggregates co-used funding instruments identified in the previous levels, considering now all the researchers active in a research field. This delineates the ‘co-funding network’ of that research field. Scientometrics-based study of co-funding typically presents only this level, and does not frame configurations or amalgamations to study funding co-use by individual or collaborating researchers.

Table 1 contrasts the shift in perspective these three levels of the bottom-up approach provide, relative to the traditional, top-down approach to research funding. Separately, it should be noted that this approach also differs from existing ‘bottom-up’ approaches in scientometrics used to assess research performance [see e.g. 33 ].

Table 1. https://doi.org/10.1371/journal.pone.0251488.t001

Fig 1 visualises the bottom-up approach’s three levels – i.e. funding configurations, funding amalgamations, and field co-funding networks. The bottom, ‘configurations’ level registers the distinct sets of funding instruments co-used by individual researchers; the middle, ‘amalgamations’ level registers sets of multiple funding instruments co-used, in some way, by collaborating researchers; and the top level is the ‘co-funding network’, registering all funding instrument co-use instances by all researchers active in the field (i.e. all instances when two or more instruments have been co-used to support individual or collaborative research activity, thus aggregating all forms of configurations, and all forms of amalgamations, within the given research field). The network nodes are instruments of different classes that can be characterised using appropriate codes (e.g. public, private, national, supranational). The thickness of ties between instruments counts the occurrences of those specific linked instrument classes across the field. This is based on all instances of instrument co-use by all researchers active in the given field (i.e. all configurations of two or more instruments, and all amalgamations of two or more instruments). Particular instrument classes may then be observed as co-used together more than others (e.g. public-public instrument co-use may be the most common form in some fields; public-private-non-profit in others), or instruments can be grouped by the funder that provided them, to study that aspect. The approach overall captures three different levels of funding instrument co-use. It enables study of how different kinds of funding are blended, in various ways, by individual researchers, by collaborating researchers, and across an entire field of active researchers. (To reiterate, this approach does not study isolated uses of single funding instruments by researchers – i.e. not involving co-use. This would be a separate design, and is not undertaken in our multi-level approach, which focuses exclusively on co-use.)
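As an illustration of how such a field co-funding network could be assembled from paper-level FA data, the following sketch (ours, not the authors’ implementation) counts pairwise co-use of coded instrument classes with networkx; the input lists are invented. Two instruments of the same class in one paper yield a self-loop, which is how, e.g., public-public co-use would be tallied.

```python
from itertools import combinations
import networkx as nx

# Each inner list holds the coded instrument classes acknowledged in one paper
# (illustrative data only).
papers = [
    ["public-domestic", "public-domestic"],                         # a configuration
    ["public-domestic", "private-foreign", "university-domestic"],  # an amalgamation
    ["public-supranational", "non-profit-domestic"],
]

G = nx.Graph()
for fas in papers:
    if len(fas) < 2:
        continue  # only co-use (two or more instruments) enters the network
    for a, b in combinations(fas, 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1  # a thicker tie: more frequent co-use
        else:
            G.add_edge(a, b, weight=1)

# Edge weights now correspond to the tie thickness described for Fig 1.
```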

Fig 1. Line thickness between instruments (boxes) of different classes (e.g. type, origin, providing funder) in the co-funding network represents occurrences of instrument classes that have been co-used. https://doi.org/10.1371/journal.pone.0251488.g001

Funding acknowledgements to study funding co-use dynamics

To empirically test our bottom-up approach in the study of funding of researchers at different levels of aggregation, we look at funding of research outputs based on funding acknowledgements (FAs) in publications. We review previous uses of FAs in the literature, and emphasise that the approach developed in this study uses FAs to validate its multi-level framing of funding, and not to study research performance, e.g. from a scientometrics perspective, or to advocate that these funding levels can only be studied via publication FAs. Functional and behavioural-related limitations to using FAs as a data source to test our approach are noted. This is followed by a description of the added value of using FAs to study funding co-use dynamics from the bottom-up.

Previous FA-based studies.

Numerous FA-related studies have been produced since FA data became systematically available in large-scale publication databases from late-2008. Some of these have highlighted co-use dynamics. FAs have previously been used as a data source to demonstrate co-funding patterns [e.g. in nanotechnology; see 34 , 35 ]. Analysis of FAs has also shown multiple funding instruments can be co-acknowledged in the same paper, e.g. a publication can acknowledge support from mixes of both public and private funders [ 36 ]. FAs have also been used to map citation scores of research outputs affiliated to particular research organizations within a geographical boundary, e.g. the continent of Africa, to rank relative performance of individual research funders and/or performers [ 37 ].

Even when such studies attempt to delineate apparent ‘funding landscapes’ derived from FAs, however, these strictly speaking constitute research performance landscapes, i.e. mapped primarily to assess research performance, and to address which individual research organizations, countries or funders perform or fund the highest-cited research. Alternatively, FAs have been used to delineate which research themes and topics are funded across a particular field or geographical area [ 38 ] to inform research prioritisation and policy. In other words, such research typically uses FAs to assess performance or priorities – i.e. for evaluative or descriptive purposes [ 33 ] – and does not typically disaggregate multiple levels of funding co-use, which is our approach.

Limitations of using FAs as a data source.

Previous literature has also revealed some limitations of FAs as a data source. Reviewing these studies, a first class of limitations can be seen to be functional in nature (Table 2).

Table 2. [See 34–36, 39–50.] https://doi.org/10.1371/journal.pone.0251488.t002

A second class of limitations to using FAs as a data source stems from researchers’ acknowledging behaviours. These exist because, although funders and authors increasingly consider providing FAs in papers to be mandatory or at least good practice, compliance can vary from researcher to researcher, across researcher country and language groups, or across fields. Consequently, FAs can under- or over-represent funding data. FAs may also not reveal certain kinds of funding information, or can equate all funding so it seems to be of equal importance for the published research, when in fact it is not (see Table 3).

Table 3. [See 35, 36, 39, 40, 43, 44, 47–49, 51–53.] https://doi.org/10.1371/journal.pone.0251488.t003

Strengths of using FAs as a data source.

Despite these limitations, using FAs for a papers-based test of the bottom-up approach usefully enables delineation of funding configurations (funding of outputs produced by individual researchers), funding amalgamations (funding of outputs produced by collaborating researchers) and co-funding networks (funding of outputs by all researchers active in a field). These would otherwise be difficult to map without using FAs [ 40 , 54 ]. Other methods could have challenges regarding resource intensity, intrusiveness and coverage. For example, instead of using publication FAs, researchers in a field could be surveyed and asked to self-report their funding instruments or configurations. Research groups, teams or networks could be surveyed about funding amalgamations. Such surveys, however, could have issues of memory recall and data completeness, and could be resource-intensive.

Selecting a FA-based method primarily to test the bottom-up approach instead has clear advantages. First, for the test, compared to other methods, e.g. surveys or interviews, FAs are pre-existing paratexts in papers and can be automatically extracted, at scale, from publication databases. This means using FAs is cost-effective. It also enables large-scale, unobtrusive study of funding across all three levels of the bottom-up approach. Second, FAs are funding data self-reported by researchers themselves, rather than via third-party sources, such as funder databases or other repositories. These may not be as direct or comprehensive as FAs. Third, a FA-based approach is scalable and repeatable longitudinally. For studies beyond our current test, they could be used to sample multiple time periods with little marginal cost, if relevant periods are covered by publication datasets. Fourth, as publishers and publication datasets adapt to and advocate their increasing use, FAs may become more available, consistent and stable. As a data source they may become more significant for using a bottom-up approach over time. The primary drawback, however, is that using FAs artificially restricts analysis of funding co-use dynamics to study of funded papers. It excludes studying other co-use instances, which would need separate research, exploring beyond just publication-related research activity.

Using FAs for a bottom-up approach to research funding.

To operationalize use of FAs to test study of research funding dynamics from the bottom up, the starting assumption is that individual researchers can co-use multiple funding instruments, which they acknowledge via FAs, even in single-authored research (i.e. these would then be funding configurations). Funding instruments can become more aggregated within collaborative research activities. These mix – to an unknown greater or lesser extent – some or all of the different funding instruments brought in by every researcher (i.e. funding amalgamations). These are evidenced via FAs in multi-authored publications, where co-use includes researchers using separate funding instruments to co-author the paper and/or researchers sharing funding instruments. Finally, co-use of funding instruments across a field of researchers becomes highly aggregated. This can be delineated from FA data as a field co-funding network. To produce each level, associated analytical steps are required. These are careful research field delineation, name disambiguation, cleaning of all acknowledged funding instruments, and analytically informed coding of FAs to classify funding instruments.

To conceptualise this coding, the bottom-up operationalization approaches FAs as traces of complex, adaptive, global contemporary science, potentially involving many funding actors. This acknowledges that paper FAs as a data source report actual funding (co-)used by researchers in a way that is indifferent to boundary/border issues. That is, they naturally accommodate the reality that researchers can and do reach both below and above national levels to obtain funding instruments, and to ‘configure’ or ‘amalgamate’ them together in ways that can transcend geographically-constrained, topic or sector-specific funding patterns [ 11 ]. Nevertheless, an ability to study any national system-related patterns can still be retained. This is done by registering the author affiliation(s) of researchers. For this test, this will be to one or more of the three selected countries (Denmark, Netherlands, Norway), taken from reported author affiliation(s) in publications, in the two test research fields (Food Science, Renewable Energy Research).

There are further issues to note for the operationalization. These concern attribution. For funding configurations, FAs in single-authored publications do provide directly attributable data on funding instrument co-use by individual researchers. In a paper with only one author, if multiple instruments are reported, they unequivocally have been co-used. However, funding amalgamations are operationalized via funding of multi-authored publications. FAs usually provide an undifferentiated data string, without funding-author attribution. They may not attribute each instrument (FA) to an author. It cannot be robustly determined if all the acknowledged funding belongs to only one author (e.g. other authors acknowledge or have no funding). Funding amalgamations can therefore vary. They may involve fusion, where all authors producing the paper are supported by one or more shared funding instrument. They may feature juxtaposition, where authors do not mix funding, but use their own, separate instruments. The amalgamation does not aim to describe how acknowledged funding instruments have been mixed; this needs exploration via other methods. It registers only an analytical level of funding co-use–broadly understood–as funding contributing, in some as yet unknown way, to the act of collaborating to produce a paper.

Publication FAs also typically provide only funder and/or funding instrument names. Therefore, the operationalization needs to code funding instruments. This requires selecting instrument characteristics that are analytically meaningful to label. Based on previous literature, ‘type’ and ‘origin’ were selected to be coded for our test (although others could be developed in future). Type distinguishes whether funding instruments are from a public, private, non-profit or university funder. This can classify the often distinctive kinds of funding instruments provided by these bodies [see 5 , 10 , 55 – 57 ]. Deliberately, both internal and external funding ‘types’ are coded. The role of university/internal funding (e.g. institutional/block grant funding of salaried research time, specific internally funded research activities) is quite underexplored [see 30 ], so should be included.

Coding origin follows established practices for FAs [e.g. 36 ]. It distinguishes domestic from foreign or supranational funding instruments. This enables specific characteristics to be flagged, e.g. domestic (i.e. national or sub-national) funding instruments can differ from foreign funding (non-domestic national or sub-national) and from supranational funding in analytically relevant ways. This may be due to these funders’ geographical scope of operation or focus on particular challenges, economic and/or societal missions (e.g. contrasting the EU, multinational corporations, and international non-governmental organizations).

Selecting the test cases

Our two test cases, the Renewable Energy Research and Food Science fields, were selected because they have varied funding and assorted funders to delineate, owing to their wide breadth of research topic/keyword coverage [e.g. see 58 ]. They also feature both fundamental science and applied research, and long traditions of involving industry, public authorities, research users and other collaborators. The two fields are also thematically oriented. This means they can engage grand SC themes, e.g. United Nations Sustainable Development Goals (SDGs) and related policies and politics. These test fields do not represent the global science system in general. They are, however, appropriate, dynamic and multi-faceted candidates for testing the bottom-up approach. They represent a confirmatory case selection [ 59 ]. The two fields provide similar heterogeneity, yet sufficient variation, to enable testing of the approach.

Researcher author affiliation(s) were selected to be research organizations (ROs) in Denmark, the Netherlands and/or Norway. This retains an ability to explore the role of national contexts, even when adopting a bottom-up, country-crossing perspective. These three selected countries are all small, advanced, Western European economies. They have similar Humboldt-inspired university systems, relatively high shares of institutional funding, and traditional public research councils [ 17 ]. At the same time, they have country-specific funder/funding variations. For example, Denmark has a relatively clear demarcation between scholarly and societally oriented funders, while Norway is more centralized, with a broad, unified research council. All three affiliation countries also have active Food Science and Renewable Energy Research sectors.

Field delineations and time horizon selection.

To delineate the two test research fields, Whitley’s notion was adopted of ‘intellectual fields’ as a ‘broader and more general social unit of knowledge production and co-ordination’ than disciplines [ 60 ]. Research ‘fields’ were considered as fairly broad units of knowledge production. They can engage activities outside academia, knowledge co-production with industry or other societal collaborators, and still involve public research organization-related social and organizational structures.

To develop the data collection process, all Web of Science (WoS) publications were retrieved with at least one author with any affiliation to a research organization located in Denmark, the Netherlands or Norway. Renewable Energy Research and Food Science publication datasets were created using a core/expansion approach. The core datasets for the two fields included all papers meeting all three criteria of: containing at least one of our search terms; being published in a relevant journal; and being part of a relevant article-level cluster.

The list of search terms used for each field can be found in S1 File . The relevant journals from each field were selected by going through the list of journals included in relevant WoS subject categories (i.e. “agriculture, dairy & animal science”, “agricultural economics & policy”, “agriculture, multidisciplinary”, “agricultural experiment station reports”, “food science & technology” and “agricultural engineering” for Food Science; and “energy & fuels” and “green & sustainable science & technology” for Renewable Energy Research; previous energy research [ 58 ] was also consulted) or by containing one of our search terms in their title. In the Centre for Science and Technology Studies (CWTS, Leiden, Netherlands) database used for this paper (that included high resolution funding data and author-name disambiguation [ 61 ]) articles were clustered based on citation relationships [c.f. 62 , 63 ]. Clusters with at least five publications containing one of our keywords were screened to identify those clearly related to the fields. The relevant journals and article-level clusters are also included in S1 File .

These two core datasets were then expanded by including all papers meeting any three of the following five criteria:

  • contained at least one of the search terms;
  • published in a relevant journal;
  • part of a relevant article-level cluster;
  • cited or is cited by a publication in the core set;
  • shared an author with a publication in the core set.

Finally, in each field dataset, all papers in the micro-clusters for which more than one third of publications had already been included at that point were also included.
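The core/expansion selection logic described above can be summarised in a small sketch (ours, with hypothetical flag names; the final micro-cluster step is omitted):

```python
def in_core(flags):
    """Core set: a paper must meet all three core criteria."""
    return flags["term"] and flags["journal"] and flags["cluster"]

def in_expansion(flags):
    """Expanded set: any three of the five criteria suffice."""
    keys = ["term", "journal", "cluster", "citation_link", "shared_author"]
    return sum(bool(flags[k]) for k in keys) >= 3

# Illustrative record: has a search term, is in a relevant cluster, and has a
# citation link to the core set, but is not in a relevant journal.
paper = {"term": True, "journal": False, "cluster": True,
         "citation_link": True, "shared_author": False}
print(in_core(paper), in_expansion(paper))  # -> False True
```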

To include publications and their FA data for a reasonable period, the dataset time horizon was selected to be 10 years, i.e. all field publications 2009–18. This had the advantage of starting the datasets soon after WoS began to systematize and make available suitable FA metadata. No specific time criterion was used for author inclusion, i.e. for whether a particular author-researcher had to appear multiple times within the period to be considered ‘active’ in a field. This process produced a publications dataset, covering all six field/affiliation cases, which could be separately studied by filtering fields and/or countries. Gephi software was used for network visualizations.

Disambiguating and coding funding instruments from FA data.

All FA data in this dataset of publications was cleaned and coded. FAs were taken from the ‘funding agencies’ metadata field of WoS records. This provided funder organization names, names of sub-divisions of these funders, and sometimes funding grant numbers or funded project names (i.e. specific details of funding instruments).

Initially, 25,522 distinct funder strings were automatically retrieved from the FAs. Funder strings that occurred at least twice in the dataset were cleaned and disambiguated. Sub-divisions of a funder found in that set were grouped under the main named funder (e.g. diverse funding instruments of the European Commission were grouped). This process merged 19,140 (75%) of the 25,522 original funder strings into a cleaned list of 2,363 unique funding organizations to analyse. Throughout, data errors encountered during the extraction process were addressed, e.g. cleaning funder name variants.
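A cleaning and disambiguation step of the kind described here might look like the following sketch: normalise each raw funder string, then fold name variants and sub-divisions into a parent funder via a hand-curated alias table. The example strings and alias entries are invented, not the authors’ actual mapping.

```python
import re

ALIASES = {  # hand-curated; illustrative entries only
    "european commission": "European Commission",
    "european commission dg research": "European Commission",
    "ec fp7": "European Commission",
}

def normalise(raw):
    s = re.sub(r"[^a-z0-9 ]", " ", raw.lower())  # strip punctuation
    return re.sub(r"\s+", " ", s).strip()        # collapse whitespace

def disambiguate(raw):
    # Unmatched strings pass through unchanged for later manual review.
    return ALIASES.get(normalise(raw), raw.strip())

print(disambiguate("European Commission, DG Research"))  # -> European Commission
```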

Given the limitations of FA data we have already noted, funders had to be disambiguated and classified in order to code their associated, acknowledged funding instruments. This was done using the selected data labels of type and origin (a coding sketch follows the two lists below). Four values were used to code type of instruments:

  • Public–typically instruments from research councils and public/state authorities like ministries and agencies (but including supranational organizations like the EU, and regional and local authorities).
  • Private–predominantly instruments from companies/corporations.
  • Non-Profit–instruments from private non-profit organizations, patient organizations, national and supranational non-governmental organizations (NGOs) and related others.
  • University–internal funding directly from a university (most likely either institutional funding or competitively awarded internal university funds).

Three values were coded for origin:

  • Domestic–instruments originating from funders in Denmark, Norway or the Netherlands (respective to author affiliation).
  • Foreign–instruments originating from all other countries, except the domestic country.
  • Supranational–instruments from funders like the EU, OECD, UN agencies.
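Coding each cleaned funder with these type and origin values could then reduce to a lookup, as in this sketch; the table entries are our illustrative guesses, and origin is judged relative to the author’s affiliation country, here assumed to be Denmark.

```python
FUNDER_CODES = {
    # funder -> (type, origin), judged for a Denmark-affiliated author
    "Danish Council for Independent Research": ("public", "domestic"),
    "Novo Nordisk Foundation": ("non-profit", "domestic"),
    "European Commission": ("public", "supranational"),
    "Deutsche Forschungsgemeinschaft": ("public", "foreign"),
}

def code_instrument(funder):
    """Return (type, origin) for a funder, or None if it cannot be coded."""
    return FUNDER_CODES.get(funder)

print(code_instrument("European Commission"))  # -> ('public', 'supranational')
```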

Table 4 shows the final total number of articles collected in the full dataset, the number and percentage of papers with at least one acknowledged funder (i.e. instrument detail), and the share of FAs able to be coded for type and/or origin (i.e. between 67.9% and 77.8% for type; between 70.7% and 81.2% for origin). There was an average of 2.3 FAs per paper, for the full dataset of papers with FAs (i.e. 55,089 FAs across the ~23,000 papers with FAs). Non-coded FAs were sampled to check for systematic bias, and their characteristics were found to be acceptably similar to the coded proportion.

Table 4. https://doi.org/10.1371/journal.pone.0251488.t004

The sub-sample used to obtain configurations, amalgamations and co-funding networks was derived after exploring the number of acknowledged funders across all publications (see Fig 2 ). The aim was to exclude articles with no FAs or only one FA. Across the cases, this left 35–45% of the articles from the full dataset with two or more FAs (i.e. instruments). Of these papers, 0.7% were single-authored (funding configurations) and 99.3% were multi-authored (funding amalgamations). Co-funding networks were obtained by including both single and multi-authored papers with multiple FAs.
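Deriving that sub-sample amounts to a filter-and-split over the full dataset, sketched below with hypothetical dict records analogous to the earlier Paper sketch; the percentage comments echo the shares reported above.

```python
def split_subsample(papers):
    """Keep papers with two or more FAs, then split by author count."""
    kept = [p for p in papers if len(p["fas"]) >= 2]              # 35-45% of the full dataset
    configurations = [p for p in kept if len(p["authors"]) == 1]  # 0.7% of kept papers
    amalgamations = [p for p in kept if len(p["authors"]) > 1]    # 99.3% of kept papers
    # The co-funding network is built from the union of both groups.
    return configurations, amalgamations
```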

Fig 2. https://doi.org/10.1371/journal.pone.0251488.g002

Illustrative findings

Findings from testing the approach are now presented. For funding configurations, this presents patterns of co-used (i.e. co-acknowledged) funding instruments at individual researcher level from single-authored publications. For funding amalgamations, patterns of co-used funding instruments, number of FAs, and number of authors for collaborating researchers, are presented to demonstrate dynamics for multi-authored papers. Finally, field co-funding networks are presented for Renewable Energy Research and Food Science, respectively. This combines all possible affiliation countries and co-use funding patterns from both single and multi-authored papers. Country affiliation-specific networks are also presented, to demonstrate the approach can still illustrate nation-related features for study. Across all three levels, funding instrument type (public, private, non-profit, university) and origin (domestic, foreign, supranational) are explored. This presents our approach to co-use of funding by researchers, conceived of as three increasing levels of aggregation–funding co-use by individual researchers, co-use by collaborating researchers, and co-use by a field of researchers. This illustrates co-use of funding by researchers through the specific lens of papers as a research output. As stated earlier, funding co-use could be explored for different research activities, but still aggregated at these three levels. Throughout, it should also be stressed these are illustrative findings to test the approach, not a comprehensive overview of Renewable Energy Research or Food Science.

Funding configurations.

The bottom level illustration is obtained from single-authored papers with two or more FAs, so presents individual researcher level, funding configurations. Table 5 shows funding configuration patterns. This is in terms of number of funding instruments being configured. From this observation, two findings become evident. First, funding configurations exist in the data. There is acknowledged co-use of funding instruments even in individual researchers’ lone publication outputs (FAs were checked to confirm that each FA did in fact refer to a distinct instrument). Second, in both fields, funding configuration scales vary–even for this small sub-sample. Most configurations featured two instruments (62 out of 90 papers). Others had three, four or even five or more instruments reportedly co-used for a single-authored publication.

Table 5. https://doi.org/10.1371/journal.pone.0251488.t005

This bottom level, for this illustrative test, is only a small sub-sample of the full dataset. It has only 75 authors and 62 single-authored publications, acknowledging two or more funding instruments. It should be stressed this does not demonstrate that some representative portion of researchers in these two fields have funding configurations. Instead, the data verify that it is valid and important to consider, and to focus upon dynamics of co-use of funding, at individual researcher level–i.e. it is valid to study this analytical level, when exploring uses and roles of funding for research.

A limitation of our FA-based test of the approach is that funding configurations are only observable here for single-authored papers. In highly collaborative fields, single-authored papers are likely rare and multi-authored papers more common. Separate studies could use the insight that funding is co-used at the individual researcher level to examine individual researchers via a different research activity: for example, individual researchers co-using funding to build equipment, undertake fieldwork, or support policy engagement. Even in a small sub-sample there is configuration variety, in terms of funding instrument type and origin, and we argue these findings warrant such additional explorations. Separate qualitative work on single-author researchers in this dataset has also highlighted variety in the reasons for, and mobilizations of, co-used funding instruments; see [ 64 ].

The main finding here is not that these particular configurations are representative of individual researcher funding co-use in these specific fields. Rather, it is that there are potentially interesting dynamics at this level that warrant the inclusion of funding configurations in future studies of funding. Most importantly, these dynamics go unexplored if this level of analysis is excluded. However, separate study of research activity, not limited to publications/FAs, would be needed to understand exactly how and why individual researchers co-use funding instruments for various research purposes. Similarly, exploring which patterns of funding configurations are prevalent in any given field would require alternative methods, because FAs cannot capture individual researcher co-use dynamics that happen anywhere other than in single-authored papers.

Within these limitations, and for our specific FA-based findings, we see that configurations of ‘public’ instruments are the most common individual researcher co-use in terms of funding instrument ‘type’. However, there are also ‘public-private’ and ‘public-university’ configurations, and ‘non-profit’ instruments configured with all three other types by individual researchers. Similarly, there are configurations with mixed ‘origins’. The most common is ‘domestic-domestic’, an interesting indication that even single-authored research may need to draw upon multiple domestic instruments (e.g. multiple national funders). However, there are also ‘domestic-foreign’, ‘domestic-supranational’ and ‘foreign-supranational’ configurations.

Overall, the sub-sample shows that funding configurations exist, and that they can vary in type and origin, even when only a small number of cases is explored. Taken together, this suggests configurations are a valid analytical level to consider in broader studies of funding dynamics (eventually moving beyond FA/publication-based data collection).

Funding amalgamations.

The middle level in our bottom-up approach presents amalgamations of instruments co-acknowledged in multi-authored papers (two or more authors, and two or more FAs). These funding amalgamations indicate that collaborating researchers co-use a wide range of different instruments, seemingly broader than the co-use seen in the funding configurations of individual researchers. As stated, FAs alone cannot determine whether these amalgamations are fused or juxtaposed instrument co-use. However, these illustrative findings are promising for validating the approach and its emphasis upon this level of funding aggregation. They suggest more funding variety is visible in amalgamations than in configurations, which tentatively indicates that including this level of aggregation offers interesting insights, and that potentially important funding dynamics may arise from co-use of instruments at the level of collaborating researchers (here addressed only for authors co-authoring a paper).

Patterns of funding amalgamations are presented by type of instrument in Table 6. The most prevalent amalgamation type is different public funders being co-acknowledged. In Renewable Energy Research, this ‘public-public’ amalgamation type accounted for around 60% of FA cases in Netherlands- and Denmark-affiliated publications (60.9% and 58.8%, respectively), and for 43.4% in Norway-affiliated publications. For Food Science, across the same three affiliations, the pattern differed: 44.9% for Netherlands-affiliated publications, 44.2% for Denmark, and 50.4% for Norway. Such patterns might suggest certain co-influences of these types of funders on these collaborative research outputs, and that such influences may vary by field.

Table 6. Patterns of funding amalgamations by instrument type.
https://doi.org/10.1371/journal.pone.0251488.t006

Many ‘public-public’ amalgamations included the EU (coded as a supranational, public instrument) plus a national public instrument. Public-private (i.e. with companies) and public-university amalgamations were also present; Norway-affiliated articles had the highest prevalence of these, Netherlands-affiliated articles the lowest. Amalgamations with no public funding instrument at all were rare: less than 10% of amalgamations for Denmark-affiliated articles, and less than 20% for Netherlands- and Norway-affiliated articles.
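As an illustration of how such type patterns can be tallied from FA data, consider the sketch below. The per-paper type lists are invented, and alphabetical sorting is just one possible way to normalise the labels (the paper writes them as, e.g., ‘public-private’).

```python
from collections import Counter

def pattern_label(instrument_types):
    # Sort so the label is order-independent: ['private', 'public'] and
    # ['public', 'private'] both become 'private-public'.
    return "-".join(sorted(instrument_types))

# Invented type lists for three multi-authored papers' acknowledged instruments.
papers = [
    ["public", "public"],
    ["public", "private"],
    ["public", "public", "university"],
]

counts = Counter(pattern_label(p) for p in papers)
shares = {label: n / len(papers) for label, n in counts.items()}
print(shares)
# {'public-public': 0.33..., 'private-public': 0.33..., 'public-public-university': 0.33...}
```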

Patterns of funding amalgamations by instrument origin are presented in Table 7. Across all three affiliations, in both fields, the most prevalent amalgamation was ‘domestic-domestic’ (32.5% for Renewable Energy Research; 34.2% for Food Science). There were also amalgamations with no domestic funding acknowledged, indicating papers funded exclusively by foreign and/or supranational funders. Amalgamations in papers with Netherlands-affiliated authors reported the fewest domestic funders.

Table 7. Patterns of funding amalgamations by instrument origin.
https://doi.org/10.1371/journal.pone.0251488.t007

There were also ‘domestic-foreign’ and ‘domestic-supranational’ amalgamations. These are country-crossing co-funded publications, funded by funders operating in different countries or at different geographic scales. Further research could determine whether this reflects intentional action (e.g. funder-to-funder coordination efforts or international research projects) or separate dynamics, such as researcher collaborations not attributable to the acknowledged funding.

Co-funding networks.

Co-funding networks for the two fields present dynamics that stretch beyond national landscapes. They differ from what might be visualised via nationally-supplied research funding statistics or perspectives. They present an overall view of co-use of funding at the level of researchers active in a field, built upon funding co-use by discrete researcher collaborations (amalgamations) and co-use by individual researchers (configurations). At this level, co-used instruments can also be grouped according to the funders providing them, in addition to labelling of instrument type and origin.

Fig 3 presents the Renewable Energy Research co-funding network across all author country affiliations; Fig 4 shows the Food Science co-funding network for all affiliations. Both networks have many instances of multiple, rather than simple, ties between funding instrument types (indicated by node shape; with funder names also shown) and origins (shown by node colour).
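Mechanically, such a co-funding network can be assembled from per-paper FA lists. Below is a minimal networkx sketch of that construction, under our own assumptions: the instrument names and their (type, origin) attributes are invented for illustration and are not taken from the actual dataset.

```python
import itertools
import networkx as nx

# Invented instruments with (type, origin) attributes.
instrument_attrs = {
    "EU H2020": ("public", "supranational"),
    "Research Council of Norway": ("public", "domestic"),
    "Novo Nordisk Foundation": ("non-profit", "domestic"),
    "Equinor": ("private", "domestic"),
}
# Each inner list: instruments co-acknowledged in one paper's FAs.
papers = [
    ["EU H2020", "Research Council of Norway"],
    ["EU H2020", "Novo Nordisk Foundation", "Equinor"],
    ["Research Council of Norway", "Equinor"],
]

G = nx.Graph()
for paper in papers:
    for instrument in paper:
        if G.has_node(instrument):
            G.nodes[instrument]["n_mentions"] += 1  # drives node size in the figures
        else:
            itype, origin = instrument_attrs[instrument]
            G.add_node(instrument, type=itype, origin=origin, n_mentions=1)
    # One tie per pair of instruments co-occurring within the same article's FAs.
    for u, v in itertools.combinations(paper, 2):
        weight = G[u][v]["weight"] + 1 if G.has_edge(u, v) else 1
        G.add_edge(u, v, weight=weight)

print(G.nodes["EU H2020"]["n_mentions"], G.number_of_edges())  # 2 5
```

Node shape and colour in Figs 3 and 4 would then be derived from the stored type and origin attributes.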

Fig 3. Renewable Energy Research co-funding network, all author country affiliations. Origin colours: Blue = affiliated-Denmark; green = affiliated-Netherlands; orange = affiliated-Norway; purple = affiliated other countries; red = supranational. Type shapes: Triangle = non-profit; square = public; circle = private; pentagon = university. Node size = number of FAs mentioning that funding instrument; ties = instrument co-occurrences within the same article’s FAs.
https://doi.org/10.1371/journal.pone.0251488.g003

Fig 4. Food Science co-funding network, all author country affiliations. Origin colours: Green = affiliated-Denmark; orange = affiliated-Netherlands; blue = affiliated-Norway; purple = affiliated other countries; red = supranational. Other labels as per Fig 3.
https://doi.org/10.1371/journal.pone.0251488.g004

Diverse associations emerge from these visual representations. In Fig 3, a first noticeable feature is the centrality of the EU within the field’s co-funding network. This focal position is emphasized by its node size, indicating the number of FAs mentioning any EU funding instrument, and by the multiple links converging at this node. The distribution of the most acknowledged funders is also visible, to some extent, in the highly interconnected sub-network areas, which are populated by funder names from each of the three study countries. Among them, particular funding agencies, such as the Danish Council for Strategic Research, seem to play a significant role. Research funding from other countries (e.g. China, USA, Sweden or Spain) may stem from internationalization patterns in publication co-authorships (this would require further study). Collaborative knowledge production is reflected in the co-use of funding instruments of different types and origins to generate scientific outputs in a field. Varied and mixed combinations (e.g. public-public/supranational-foreign features) are also evident, and similarly in Fig 4. This reveals a fine-grained view of funding at the top level of our approach. Separate methods could in future be used to determine the degree to which the network positions and apparent roles of each funder are strategic, i.e. a conscious result of funder actions or deliberate funding instrument designs. The co-funding network could be used to begin such an exploration, e.g. via interviews with funders, perhaps combined with insights from the observed funding amalgamations and configurations.

Similarly, assorted type-based ties are present in both Figs 3 and 4. Specific private companies and non-profit organizations are identifiable among the variety of research funding acknowledged across all publications in both fields: energy-related organizations in Fig 3, and food-related ones in Fig 4. Within the Fig 3 network, for example, numerous university-type instruments (internal funding) are linked together, and simultaneously to multiple varieties of public, private and non-profit instruments (e.g. Technical University of Denmark instruments linked to those of Aarhus University, the Novo Nordisk and Carlsberg foundations, and Danish or Norwegian public agencies). In other words, researchers here engage in heterogeneous funding co-use (as in their configurations and amalgamations, but now at a higher level).

The full dataset can also be filtered to present other related funding dynamics, i.e. to show field co-funding networks filtered by author country affiliation. These country-filtered co-funding networks still present highly heterogeneous funding dynamics, because they are built bottom-up from configurations and amalgamations rather than top-down. For our cases, traditional public funding sources, like the EU, remain prevalent once the networks are filtered by affiliation country, but they clearly operate as parts of large networks of interlinked funding instruments, not in isolation.

Specific features also emerge when the networks are filtered by affiliation. For instance, when filtered for Norway-affiliated researchers, the Norwegian Research Council is shown to have a ‘hub’ function. This may be expected for a centralised public funder, but here it can be visualised as arising from researchers actually co-using its funding, in some way, to collaborate on research outputs (irrespective of whether such activity was strategically intended by funders). However, even when filtered this way, ties between traditional national public funders and non-public, foreign or supranational funders remain present, emphasizing the interconnected nature of the funding. Field-specific funders stand out in certain cases, e.g. oil companies in the Norway-affiliated Renewable Energy Research case. Additionally, for all six cases, the filtered co-funding networks show smaller, niche funders that might be overlooked from a top-down perspective.
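Such affiliation filtering can happen upstream of the same network construction: keep only papers with at least one author affiliated to the focal country, then rebuild the network as above. A small sketch, with invented affiliation sets:

```python
def papers_for_country(papers_with_affils, country):
    """Keep the FA lists of papers with at least one author affiliated to `country`."""
    return [fas for fas, affils in papers_with_affils if country in affils]

# Invented (FA list, affiliation country set) pairs.
papers_with_affils = [
    (["EU H2020", "Research Council of Norway"], {"NO"}),
    (["EU H2020", "Novo Nordisk Foundation"], {"DK", "NL"}),
    (["Research Council of Norway", "Equinor"], {"NO", "DK"}),
]

print(papers_for_country(papers_with_affils, "NO"))
# [['EU H2020', 'Research Council of Norway'], ['Research Council of Norway', 'Equinor']]
```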

This filtering underlines that the bottom-up approach need not discard the notion of countries entirely. National funding remains present and can still be studied, and national funders appear important across all cases, even given the noted general emergence of multi-level, multi-actor ‘global science’ [c.f. 11 ]. The bottom-up approach can also provide more nuance here, showing that traditional funders nevertheless operate in heterogeneous, variously interlinked ways with other funders’ instruments, due to multiple levels of co-use by researchers.

Overall, the bottom-up approach enables potentially useful insights into seemingly highly complex funding situations. Future research using the same approach could go further, e.g. to explore field-nexus contexts such as a two-field, energy-and-food co-funding network. This would make sense primarily where research topics overlap between fields, and for such cases the approach could reveal unexpected linkages. Similarly, the co-funding networks could be filtered according to author affiliation(s), enabling study of the ranked prevalence and variety of named funders/funding instruments relevant to country affiliation.

Discussion and conclusion

Overall, the findings presented above support the key assumption in this paper: taking a researcher-led perspective to get to the ‘bottom’ of research funding can reveal important, varied funding co-use dynamics. There are also several other implications from testing this bottom-up approach.

First, for the test fields of Renewable Energy Research and Food Science, the configuration, amalgamation and co-funding network levels present funding co-use dynamics one would expect from the globalisation of science or ‘global science’ [ 11 ], but add nuance and granularity. The bottom-up perspective, as tested here, supports recent assertions that knowledge production is increasingly no longer (if indeed it ever was) the ‘product’ of any particular, single country or national science system. This holds across co-use of funding by individual researchers (configurations), collaborating researchers (amalgamations), and fields of collaborating researchers (co-funding networks). It implies not only that individual researchers and research organizations navigate an increasingly complex globalised science funding system, but also that funders need to understand and adapt to this complicated operational context, featuring highly interlinked, multi-level, border-crossing and seemingly interdependent funding co-uses [see 12 ]. Further research using the bottom-up approach for more fields, and for collaborative research activities not limited to publications (and funding registered via FAs), could generate more general insights about funding dynamics throughout the global science system.

Second, the findings support the assertion that a bottom-up, researcher-led perspective on research funding is valid and necessary. The approach provides useful analytical units that aggregate funding at meaningful levels (researchers, collaborators, fields). These add nuance and help to explain the nature of contemporary, interlinked research funding realities in various fields and contexts. They have been tested here using FAs and focusing on publications, but need not be limited to this. At the same time, the framing and test may open new opportunities to explore how bottom-up findings could complement top-down/system-led studies.

Third, the test revealed high heterogeneity of funding co-use. Domestic, foreign and supranational funders from public, private, university and non-profit sectors were configured, amalgamated and networked in various ways. The implication for understanding funding is that researchers (whether individual, collaborating, or within a field) can co-use assorted funding instruments to do research. Certain national and sectoral specificities remain, but geographic boundary-spanning dynamics now seem inherent. This implies future studies of research funding not only need to assign continued importance to national funding, but must also address funding co-use that can be present right from the bottom up, for individual researchers and beyond. In other words, this bottom-up approach is not intended simply as a descriptive analysis of funding/funders, similar to what is already possible with scientometric approaches. It is an attempted re-framing, aiming to stress that there are additional, important levels at which co-use of funding by researchers needs to be considered, given the increasing complexity of contemporary funding dynamics.

Fourth, the bottom-up approach could be used to generate intelligence for funders, policymakers, university research managers and so on. The approach is grounded in funding as it is actually co-used. This is independent of whether such use has been strategically coordinated, intended or expected by various stakeholders [c.f. 35 , 47 , 48 , 65 ]. This perspective could potentially enable funders to look more inclusively and pragmatically at their funding portfolios, to assess and adjust their bi- or multi-lateral coordination efforts, and perhaps to understand better their potential co-influence throughout the science system [c.f. 39 , 44 , 52 ].

At the same time, two limitations need to be highlighted. First, the fields selected to test the approach are highly applied and interdisciplinary, so these cases may have limited generalisability. Specifically, they might feature idiosyncratic knowledge production and collaborations that are not relevant elsewhere (e.g. in mono-disciplinary fields). Their similarity was useful in this paper to supply the funding variety needed to test the multi-level funding co-use approach; for wider applicability, the approach should be tested not only with more, but also with different, fields. Similarly, the configuration, amalgamation and co-funding network patterns presented here are obviously specific to the two illustrative fields and to Denmark-, Netherlands- or Norway-affiliated researchers (i.e. researchers in small but open economies that already have strong international ties). These case choices likely affect how complex and country boundary-spanning the co-use appears here.

A second limitation concerns the selected collaborative research activity of research outputs/publications, and the use of FAs as the data source for the test and operationalization of our approach. Publication FAs do provide relevant data, but they do not afford the fine detail needed to fully unpack funding dynamics. Improved guidelines around author use of FAs could help here, e.g. journals could mandate clear, robust author-funding attribution. The bottom-up approach could then better study the co-use dynamics of funding amalgamations using this additional funding-author attribution data. Nevertheless, it would remain challenging to attribute particular discrete research content or insights within any given article to particular funding, even with direct author-funding attribution. Further research would still be needed to understand whether collaborating authors used their own funding in isolation (amalgamations juxtaposing independent instruments) or collectively (amalgamations fusing interdependent instruments).

The three levels of the bottom-up analysis could also be explored differently. Future research could use additional methods to study, for instance, what funding configurations exist in particular fields and contexts, or how different types and origins of funding instruments are co-used within configurations. As we have stated, this aggregation level of funding co-use has been largely overlooked in previous studies. And yet, as we have illustrated even for a small sub-sample, configurations can exist as a distinct level of funding co-use, and they can be varied, warranting further research attention. This clearly requires a different method than tracing funding via FAs with a sole focus on papers, which can yield insufficient numbers of configurations to study (i.e. too few single-authored papers), particularly in fields where publication is primarily collaborative (multi-authored).

Additionally, for funding amalgamations, their composition and how instruments are co-used are other potential research interests. In particular, the inability of an FA-based empirical exploration to distinguish between fused and juxtaposed amalgamations invites further research using additional methods. Likewise, variation in the patterns of co-funding networks in different field types, in field-spanning contexts, or in emerging research areas located in-between existing fields could also be investigated; for instance, whether network patterns correlate with deliberate funder strategies to achieve synergy through funding co-use within a given field. There are also separate but related questions, including whether and how researchers enact agency to (co-)shape the availability and characteristics of the funding in a field that they later assemble into funding configurations, amalgamations, and co-funding networks. This could explore whether researchers attempt to influence funders so as to affect the funding provision they need to co-use to do their research in particular ways, in their field [c.f. 5 , 14 ].

Expanding the approach could involve additional methods such as interviews, case studies, bibliometrics, altmetrics, surveys and/or network analysis. The approach presented in this paper clearly remains instrumental: an initial means to these kinds of analytical ends, not a final goal in itself. Next steps could add further funding variables or characteristics to move beyond the descriptive, technical classifications of type and origin. Further research could attempt to determine what is distinctive about funding instruments apart from these labels in a given context, which was beyond the scope of this paper. Examples could be the autonomy and flexibility of instruments, their duration, amount, or focus on impact/users, and whether these matter more in some fields than others. Additionally, whether and how researchers respond to such characteristics of funding provision through configuration- and amalgamation-related behaviours could also be explored.

For this current paper, the bottom-up, co-use of funding by researchers approach has provided both a ‘proof-of-concept’ of the perspective, and a foundation for such future studies. For the approach to provide more insights, and for it to be generalisable across the science system, it should ideally comprise part of a broader, multi-faceted framework and mixed methods toolkit. This could then facilitate detailed study and better understanding of a wide range of the effects of contemporary research funding dynamics upon researchers and their research.

Supporting information

S1 File. Dataset core/expansion further details.

List of core keywords, relevant journals, and article-level clusters for the dataset of Renewable Energy Research and Food Science publications.

https://doi.org/10.1371/journal.pone.0251488.s001

S2 File. Dataset.

Listing anonymised unique article identifiers, author affiliations, numbers of authors, fields, numbers of funding acknowledgements, and origin and type data used in this paper.

https://doi.org/10.1371/journal.pone.0251488.s002

Acknowledgments

We thank three anonymous reviewers for their helpful comments, and are grateful for insights and comments on an earlier version of this work from Carter Bloch, Arlette Jappe, Maria Nedeva and Corina Balaban.

References
  • 6. Lepori B, Reale E. The changing governance of research systems. Agencification and organizational differentiation in research funding organizations. Handbook on Science and Public Policy: Edward Elgar Publishing; 2019.
  • 11. Wagner CS. Global science for global challenges. In: Simon D, Kuhlmann S, Stamm J, Canzler W, editors. Handbook on Science and Public Policy. Cheltenham, UK: Edward Elgar Publishing Limited; 2019. p. 92–103.
  • 14. Gläser J. How can governance change research content? Linking science policy studies to the sociology of science. In: Simon D, Kuhlmann S, Stamm J, Canzler W, editors. Handbook on Science and Public Policy. Cheltenham, UK: Edward Elgar Publishing; 2019. p. 419–47.
  • 19. Aagaard K. Kampen om basismidlerne: Historisk institutionel analyse af basisbevillingsmodellens udvikling på universitetsområdet i Danmark: Dansk Center for Forskninganalyse; 2011. Danish.
  • 21. Lal B, Hughes ME, Shipp S, Lee EC, Richards AM, Zhu A. Outcome Evaluation of the National Institutes of Health (NIH) Director’s Pioneer Award (NDPA), FY 2004–2005 Final Report. 2011.
  • 27. Nowotny H, Scott PB, Gibbons MT. Re-thinking science: Knowledge and the public in an age of uncertainty. Cambridge, UK: Polity Press; 2001.
  • 29. Klein JT. Crossing boundaries: Knowledge, disciplinarities, and interdisciplinarities: University of Virginia Press; 1996.
  • 30. OECD. Effective operation of competitive research funding systems. OECD Science, Technology and Industry Policy Papers. Paris: OECD Publishing; 2018.
  • 37. Kozma C, Calero Medina C, Costa R. Research funding landscapes in Africa. In: Beaudry C, Mouton J, Prozesky H, editors. The next generation of scientists in Africa. Cape Town, South Africa: African Minds; 2018. p. 26–42.
  • 38. Research on Research Institute (RoRI), Waltman L, Rafols I, van Eck NJ, Yegros Yegros A. Supporting priority setting in science using research funding landscapes. 2019. Available from: https://rori.figshare.com/articles/Supporting_priority_setting_in_science_using_research_funding_landscapes/9917825/1 .
  • 49. Sirtes D. Funding acknowledgements for the German research foundation (DFG): the dirty data of the Web of Science database and how to clean it up. 2013. Available from: https://www.wissenschaftsmanagement-online.de/beitrag/funding-acknowledgements-german-research-foundation-dfg-dirty-data-web-science-database-and .
  • 57. Lepori B. Analysis of national public research funding (PREF) Handbook for data collection and indicators production. Luxembourg: Publications Office of the European Union, 2017.
  • 60. Whitley R. The intellectual and social organization of the sciences: Oxford University Press on Demand; 2000.
  • 61. Caron E, van Eck NJ, editors. Large scale author name disambiguation using rule-based scoring and clustering. Proceedings of the 19th international conference on science and technology indicators; 2014: CWTS-Leiden University Leiden.

What is research funding, how does it influence research, and how is it recorded? Key dimensions of variation

  • Open access
  • Published: 16 September 2023
  • Volume 128, pages 6085–6106 (2023)


  • Mike Thelwall, ORCID: orcid.org/0000-0001-6065-205X (1, 2)
  • Subreena Simrick, ORCID: orcid.org/0000-0002-0170-6940 (3)
  • Ian Viney, ORCID: orcid.org/0000-0002-9943-4989 (4)
  • Peter Van den Besselaar, ORCID: orcid.org/0000-0002-8304-8565 (5, 6)


Evaluating the effects of some or all academic research funding is difficult because of the many different and overlapping sources, types, and scopes. It is therefore important to identify the key aspects of research funding so that funders and others assessing its value do not overlook them. This article outlines 18 dimensions through which funding varies substantially, as well as three facets of funding records. For each dimension, a list of common or possible variations is suggested. The main dimensions include the type of funder of time and equipment, any funding sharing, the proportion of costs funded, the nature of the funding, any collaborative contributions, and the amount and duration of the grant. In addition, funding can influence what is researched, how, and by whom. The funding can also be recorded in different places and has different levels of connection to outputs. The many variations, and the lack of a clear divide between “unfunded” and funded research because internal funding can be implicit or unrecorded, greatly complicate assessing the value of funding quantitatively at scale. The dimensions listed here should nevertheless help funding evaluators to consider as many differences as possible and list the remainder as limitations. They also serve as suggested information to collect for those compiling funding datasets.


Introduction

Academic research grants account for billions of pounds in many countries and so the funders may naturally want to assess their value for money in the sense of financing desirable outcomes at a reasonable cost (Raftery et al., 2016 ). Since many of the benefits of research are long term and difficult to identify or quantify financially, it is common to benchmark against previous results or other funders to judge progress and efficiency. This is a complex task because academic funding has many small and large variations and is influenced by, and may influence, many aspects of the work and environment of the funded academics (e.g., Reale et al., 2017 ). The goal of this article is to support future analyses of the effectiveness or influence of grant funding by providing a typology of the important dimensions to be considered in evaluations (or otherwise acknowledged as limitations). The focus is on grant funding rather than block funding.

The ideal way to assess the value of a funding scheme would be a counterfactual analysis showing its contribution by identifying what would have happened without the funding. Unfortunately, counterfactual analyses are usually impossible because of the large number of alternative funding sources. Similarly, comparisons between successful and unsuccessful bidders face major confounding factors, including groups that fail to win one grant winning another (Neufeld, 2016 ), and complex research projects attracting funding of different kinds from multiple sources (Langfeldt et al., 2015 ; Rigby, 2011 ). Even analyses with effective control groups, such as a study of funded vs. unfunded postdocs (Schneider & van Leeuwen, 2014 ), cannot separate the effect of the funding from the success of the grant selection process: were better projects funded, or did the funding or reviewer feedback improve the projects? Although qualitative analyses of individual projects help to explain what happened to the money and what it achieved, large scale analyses are sometimes needed to inform management decision making. For example: would a funder get more value for money from larger or smaller, longer or shorter, more specific or more general grants? For such analyses, many simplifying assumptions need to be made. The same is true for checks of the peer review process of research funders. For example, a funder might compute the average citation impact of publications produced by its grants and compare it to a reference set. This reference set might be the outputs of rejected applications or the outputs of a comparable funder. The selection of the reference set is crucial for any attempt to identify the added value of any funding, however defined. For example, comparing the work of grant winners with that of high-quality unsuccessful applicants (e.g., those that just failed to be funded) would help to detect the added value of the money rather than the success of the procedure for selecting winners, assuming that there is little difference in potential between winners and narrow losers (Van den Besselaar & Leydesdorff, 2009 ). Because of the need to make comparisons between groups of outputs based on the nature of their funding, it is important to know the major variations in academic research funding types.
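As a toy version of the benchmark logic sketched above, the following compares the mean field-normalized citation score of funded outputs against a ‘narrow loser’ reference set. All numbers are invented, and a real analysis would need far more careful handling of confounders than this.

```python
from statistics import mean

# Invented field-normalized citation scores (1.0 = world average for the field).
funded_outputs = [1.8, 0.9, 2.4, 1.1, 0.7]
# Reference set: outputs of high-quality applicants that just failed to be funded.
narrow_loser_outputs = [1.2, 1.0, 0.8, 1.5]

ratio = mean(funded_outputs) / mean(narrow_loser_outputs)

# Under the assumption that winners and narrow losers had similar potential,
# a ratio above 1 hints at added value from the money rather than from selection.
print(round(ratio, 2))  # 1.23
```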

The dimensions of funding analysed in previous evaluations can point to how the above issues have been tackled. Unfortunately, most evaluations of the effectiveness, influence, or products of research funding (however defined) have probably been private reports for or by research funders, but some are in the public domain. Two non-funder studies have analysed whether funding improves research in specific contexts: peer review scores for Scoliosis conference submissions (Roach et al., 2008 ), and the methods of randomised controlled trials in urogynecology (Kim et al., 2018 ). Another compared research funded by China with that funded by the EU (Wang et al., 2020 ). An interesting view on the effect of funding on research output suggests that a grant does not necessarily result in increased research output compared to participation in a grant competition (Ayoubi et al., 2019 ; Jonkers et al., 2017 ). Finally, a science-wide study of funding for journal articles from the UK suggested that funding was associated with higher quality research in at least some, and possibly all, fields (the last figure in: Thelwall et al., 2023 ).

From a different perspective, at least two studies have investigated whether academic funding has commercial value. The UK Medical Research Council (MRC) has analysed whether medical spinouts fared better if they were from teams that received MRC funding rather than from unsuccessful applicants, suggesting that funding helped spin-outs to realise commercial value from their health innovations (Annex A2.7 of: MRC, 2019 ). Also in the UK, firms participating in UK research council funded projects tended to grow faster afterwards compared to comparator firms (ERC, 2017 ).

Discussing the main variations in academic research funding types, to inform analyses of the value of research funding, is the purpose of the current article. Few prior studies seem to have attempted to systematically characterise the key dimensions of research funding, although some have listed several different types (e.g., four in: Garrett-Jones, 2000 ; three in: Paulson et al., 2011 ; nine in: Versleijen et al., 2007 ). The focus of the current paper is on grant-funded research conducted at least partly by people employed by an academic institution, rather than by people researching as part of their job in a business, government, or other non-academic organisation. The latter are presumably usually funded by their employer, although they may sometimes conduct collaborative projects with academics or win academic research funding. The focus is also on research outputs, such as journal articles, books, patents, performances, or inventions, rather than research impacts or knowledge generation. Nevertheless, many of the options apply to the more general case. The list of dimensions relevant to evaluating the value of research funding has been constructed from a literature review of academic research about funding, insights from discussions with funders, and analyses of funding records. The influence of funding on individual research projects is analysed, rather than systematic effects of funding, such as at the national level (e.g., for this, see: Sandström & Van den Besselaar, 2018 ; Van den Besselaar & Sandström, 2015 ). The next sections discuss dimensions of difference in the funding awarded, the influence of the funding on the research, and the way in which the funding is recorded.

Funding sources

There are many types of funders of academic research (Hu, 2009 ). An effort to distinguish between types of funding schemes, based on a detailed analysis of the Dutch government budget and the annual reports of the main research funders in the Netherlands, found the following nine types of funding instruments (Versleijen et al., 2007 ); the remainder of this section gives a finer-grained breakdown of types, and a sketch of the dimensions underlying the list follows it. The current paper is primarily concerned with all of these except the basic funding category, which includes the block grants that many universities receive for general research support. Block grants were originally uncompetitive but now may also be fully competitive, as in the UK where they depend on Research Excellence Framework scores, or partly competitive, as in the Netherlands, where they partly depend on performance-based parameters like PhD completions (see also: Jonkers & Zacharewicz, 2016 ).

Contract research (project—targeted—small scale)

Open competition (project—free—small scale)

Thematic competition (project—targeted—small scale)

Competition between consortia (project—targeted—large scale)

Mission oriented basic funding (basic—targeted—large scale)

Funding of infrastructure and equipment (basic—targeted—diverse)

Basic funding for universities and public research institutes (basic—free—large scale)

International funding of programs and institutes (basic, both, mainly large scale)

EU funding (which can be subdivided in the previous eight types)
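The labels in this list pack together three dimensions (project vs. basic funding, targeted vs. free, and scale). One way to make that explicit is to encode the typology as a small data structure; this is purely our illustrative encoding of the Versleijen et al. (2007) list, omitting the two mixed/EU categories:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FundingInstrumentType:
    name: str
    mode: str       # "project" or "basic"
    targeting: str  # "targeted" or "free"
    scale: str      # "small", "large" or "diverse"

VERSLEIJEN_TYPES = [
    FundingInstrumentType("Contract research", "project", "targeted", "small"),
    FundingInstrumentType("Open competition", "project", "free", "small"),
    FundingInstrumentType("Thematic competition", "project", "targeted", "small"),
    FundingInstrumentType("Competition between consortia", "project", "targeted", "large"),
    FundingInstrumentType("Mission oriented basic funding", "basic", "targeted", "large"),
    FundingInstrumentType("Funding of infrastructure and equipment", "basic", "targeted", "diverse"),
    FundingInstrumentType("Basic funding for universities and institutes", "basic", "free", "large"),
]

# Example query: all targeted project funding types.
targeted_projects = [t.name for t in VERSLEIJEN_TYPES
                     if t.mode == "project" and t.targeting == "targeted"]
print(targeted_projects)
# ['Contract research', 'Thematic competition', 'Competition between consortia']
```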

Many studies of the influence of research funding have focused on individual funders (Thelwall et al, 2016 ) and funding agencies’ (frequently unpublished) internal analyses presumably often compare between their own funding schemes, compare overall against a world benchmark, or check whether a funding scheme performance has changed over time (BHF, 2022 ). Public evaluations sometimes analyse individual funding schemes, particularly for large funders (e.g., Defazio et al., 2009 ). The source of funding for a project could be the employing academic institution, academic research funders, or other organisations that sometimes fund research. There are slightly different sets of possibilities for equipment and time funding.

Who funded the research project (type of funder)?

A researcher may be funded by their employer, a specialist research funding organisation (e.g., government-sponsored or non-profit) or an organisation that needs the research. Commercial funding seems likely to have different requirements and goals from academic funding (Kang & Motohashi, 2020 ), such as a closer focus on product or service development, different accounting rules, and confidentiality agreements. The source of funding is an important factor in funding analysis because funders have different selection criteria and methods to allocate and monitor funding. This is a non-exhaustive list.

Self-funded or completely unfunded (individual). Although the focus of this paper is on grant funding, this (and the item below) may be useful to record because it may partly underpin projects with other sources and may form parts of comparator sets (e.g., for the research of unfunded highly qualified applicants) in other contexts.

University employer. This includes funding reallocated from national competitive (e.g., performance-based research funding: Hicks, 2012 ) or non-competitive block research grants, from teaching income, investments and other sources that are allocated for research in general rather than equipment, time, or specific projects.

Other university (e.g., as a visiting researcher on a collaborative project).

National academic research funder (e.g., the UK’s Economic and Social Research Council: ESRC).

International academic research funder (e.g., European Union grants).

Government (contract, generally based on a tender and not from a pot of academic research funding)

Commercial (contract or research funding), sometimes called industry funding.

NGO (contract or research funding, e.g., Cancer Research charity). Philanthropic organisations not responsible to donors may have different motivations to charities, so it may be useful to separate the two sometimes.

Who funded the time needed for the research?

Research typically needs both people and equipment, and these two are sometimes supported separately. The funding for a researcher, if any, might be generic and implicit (it is part of their job to do research) or explicit in terms of a specified project that needs to be completed. Clinicians can have protected research time too: days that are reserved for research activities as part of their employment, including during advanced training (e.g., Elkbuli et al., 2020 ; Voss et al., 2021 ). For academics, research time is sometimes “borrowed” from teaching time (Bernardin, 1996 ; Olive, 2017 ). Time for a project may well be funded differently between members, such as the lead researcher being institutionally supported but using a grant to hire a team of academic and support staff. Inter-institutional research may also have a source for each team. The following list covers a range of different common arrangements.

Independent researcher, own time (e.g., not employed by but emeritus or affiliated with a university).

University researcher, own time (e.g., holidays, evenings, weekends).

University, percentage of the working time of academic staff devoted to research. In some countries this is largely related to the amount of block funding versus project funding (Sandström & Van den Besselaar, 2018 ).

University, time borrowed from other activities (e.g., teaching, clinical duties, law practice).

Funder, generic research time funding (e.g., Gates chair of neuropsychology, long term career development funding for a general research programme).

University/Funder, specific time allocated for research programme (e.g., five years to develop cybersecurity research expertise).

University/Funder, employed for specific project (e.g., PhD student, postdoc supervised by member of staff).

University/Funder, specific time allocated for specific study (e.g., sabbatical to write a book).

Who funded the equipment or other non-human resources used in the research?

The resources needed for a research project might be funded as part of the project by the main funder, it may be already available to the researcher (e.g., National Health Service equipment that an NHS researcher could expect to access), or it may be separately funded and made available during the project (e.g., Richards, 2019 ). Here, “equipment” includes data or samples that are access-controlled as well as other resources unrelated to pay, such as travel. These types can be broken down as follows.

Researcher’s own equipment (e.g., a musician’s violin for performance-based research or composition; an archaeologist’s Land Rover to transport equipment to a dig).

University equipment, borrowed/repurposed (e.g., PC for teaching, unused library laptop).

University equipment, dual purpose (e.g., PC for teaching and research, violin for music teaching and research).

University/funder equipment for generic research (e.g., research group’s shared microbiology lab).

University/funder equipment research programme (e.g., GPU cluster to investigate deep learning).

University/funder equipment for specific project (e.g., PCs for researchers recruited for project; travel time).

University/funder equipment for single study (e.g., travel for interviews).

Of course, a funder may only support the loan or purchase of equipment on the understanding that the team will find other funding for research projects using it (e.g., “Funding was provided by the Water Research Commission [WRC]. The Covidence software was purchased by the Water Research fund”: Deglon et al., 2023 ). Getting large equipment working for subsequent research (e.g., a space telescope, a particle accelerator, a digitisation project) might also be the primary goal of a project.

How many funders contributed?

Although many projects are funded by a single source, some have multiple funders sharing the costs by agreement or by chance (Davies, 2016 ), and the following seem to be the logical possibilities for cost sharing.

Partially funded from one source, partly unfunded.

Partially funded from multiple sources, partly unfunded.

Fully funded from multiple sources.

Fully funded from a single source.

As an example of unplanned cost sharing, a researcher might have their post funded by one source and then subsequently bid for funding for equipment and support workers to run a large project. This project would then be part funded by the two sources, but not in a coordinated way. It seems likely that a project with a single adequate source of funding might be more efficient than a project with multiple sources that need to be coordinated. Conversely, a project with multiple funders may have passed through many different quality control steps or shown relevance to a range of different audiences. Those funded by multiple sources may also be less dependent on individual funders and therefore more able to autonomously follow their own research agenda, potentially leading to more innovative research.

How competitive was the funding allocation process?

Whilst government and charitable funding is often awarded on a competitive basis, the degree of competition (e.g., success rate) clearly varies between countries and funding calls and changes over time. In contrast, commercial funding may be gained without transparent competition (Kang & Motohashi, 2020 ), perhaps as part of ongoing work in an established collaboration or even due to a chance encounter. In between these, block research grants and prizes may be awarded for past achievements, so they are competitive, but the recipients are relatively free to spend on any type of research and do not need to write proposals (Franssen et al., 2018 ). Similarly, research centre grants may be won competitively but give the freedom to conduct a wide variety of studies over a long period. This gives the following three basic dimensions.

The success rate from the funding call (i.e., the percentage of initial applicants that were funded) OR

The success rate based on funding awarded for past performance (e.g., prize or competitive block grant, although this may be difficult to estimate) OR

The contract or other funding was allocated non-competitively (e.g., non-competitive block funding).

How was the funding decision made?

Who decides which researchers receive funding, and through which processes, is also relevant (Van den Besselaar & Horlings, 2011 ). This is perhaps one of the most important considerations for funders.

The procedure for grant awarding: who decided and how?

There is a lot of research into the relative merits of different selection criteria for grants, such as a recent project to assess whether randomisation could be helpful (Fang & Casadevall, 2016 ; researchonresearch.org/experimental-funder). Peer review, triage, and deliberative committees are common, but not universal, components (Meadmore et al., 2020 ) and sources of variation include whether non-academic stakeholders are included within peer review teams (Luo et al., 2021 ), whether one or two stage submissions are required (Gross & Bergstrom, 2019 ) and whether sandpits are used (Meadmore et al., 2020 ). Although each procedure may be unique in personnel and fine details, broad information about it would be particularly helpful in comparisons between funders or schemes.

What were the characteristics of the research team?

The characteristics of successful proposals or applicants are relevant to analyses of competitive calls (Grimpe, 2012 ), although there are too many to list individually. Some deserve some attention here.

What are the characteristics of the research team behind the project or output (e.g., gender, age, career status, institution)?

What is the track record of the research team (e.g., citations, publications, awards, previous grants, service work)?

Gender bias is an important consideration and whether it plays a role is highly disputed in the literature. Recent findings suggest that there is gender bias in reviews, but not success rates (Bol et al., 2022 ; Van den Besselaar & Mom, 2021 ). Some funding schemes have team requirements (e.g., established vs. early career researcher grants) and many evaluate applicants’ track records. Applicants’ previous achievements may be critical to success for some calls, such as those for established researchers or funding for leadership, play a minor role in others, or be completely ignored (e.g., for double blind grant reviewing). In any case, research team characteristics may be important for evaluating the influence of the funding or the fairness of the selection procedure.

What were the funder’s goals?

Funding streams or sources often have goals that influence what type of research can be funded. Moreover, researchers can be expected to modify their aspirations to align with the funding stream. The funder may have different types of goal, from supporting aspects of the research process to supporting relevant projects or completing a specific task (e.g., Woodward & Clifton, 1994 ), to generating societal benefits (Fernández-del-Castillo et al., 2015 ).

A common distinction is between basic and applied research, and the category “strategic research” has also been used to capture basic research aiming at long term societal benefits (Sandström, 2009 ). The Frascati Manual uses Basic Research, Applied Research and Experimental Development instead (OECD, 2015 ), but this is more relevant for analyses that incorporate industrial research and development.

Research funding does not necessarily have the goal of funding research, because some streams support network formation in the expectation that the network will access other resources to support studies (Aagaard et al., 2021 ). European Union COST (European Cooperation in Science and Technology) Actions are an example (cost.eu). Others may have indirect goals, such as capacity building, creating a strong national research base that helps industry or attracts international business research investment (Cooksey, 2006 ), or promoting a topic (e.g., educational research: El-Sawi et al., 2009 ). As a corollary to the last point, some topics may be of little interest to most funders, for example because they would mainly benefit marginalised communities (Woodson & Williams, 2020 ).

Since the early 2000s, many countries have also issued so-called career grants, which have become prestigious. At the European level, career grants started in 2009 with the European Research Council (ERC) grants. These grants have a career effect (Bloch et al., 2014 ; Danell & Hjerm, 2013 ; Schroder et al., 2021 ; Van den Besselaar & Sandström, 2015 ), but this dimension, and the longer-term effects of funding other than on specific outputs, are not considered here. A funding scheme may also have several of the following goals.

Basic research (e.g., the Malaysia Toray Science Foundation supports basic research by young scientists to boost national capacity: www.mtsf.org ).

Strategic research (e.g., the UK Natural Environment Research Council’s strategic research funding targets areas of important environmental concern, targeting long term solutions: www.ukri.org/councils/nerc/ ).

Applied research (e.g., the Dutch NWO [Dutch Research Council] applied research fund to develop innovations supporting food security: www.nwo.nl/en/researchprogrammes/food-business-research ).

Technology transfer (i.e., applying research knowledge or skills to a non-research problem) or translational research.

Researcher development and training (including career grants).

Capacity building (e.g., to support research in resource-poor settings).

Collaboration formation (e.g., industry-academia, international, inter-university).

Research within a particular field.

Research with a particular application area (e.g., any research helping Alzheimer’s patients, including a ring-fenced proportion of funding within a broader call).

Tangible academic outputs (e.g., articles, books).

Tangible non-academic outputs (e.g., policy changes, medicine accreditation, patents, inventions).

Extent of the funding

The extent of funding of a project can vary substantially from a small percentage, such as for a single site visit, to 100%. A project might even make a surplus if it is allowed to keep any money left over, its equipment survives the project, or it generates successful intellectual property. The financial value of funding is clearly an important consideration because a cheaper project delivering similar outcomes to a more expensive one would have performed better. Nevertheless, grant size is often ignored in academic studies of the value of funding (e.g., Thelwall et al., 2023 ) because it is difficult to identify the amount and to divide it amongst grant outputs. This section covers four dimensions of the extent of a grant.

What proportion of the research was funded?

A research project might be fully funded, funded for the extras needed above what is already available, or deliberately partly funded (Comins, 2015 ). This last approach is sometimes called “cost sharing”. A grant awarded on the Full Economic Cost (FEC) model would pay for the time and resources used by the researchers as well as the administrative support and accommodation provided by their institution. The following seem to be the main possibilities.

Partly funded.

Fully funded, but on a partial FEC or sub-FEC cost sharing model.

FEC plus surplus.

The Frascati Manual on collecting research and development statistics distinguishes between funding internal or external to a unit of analysis (OECD, 2015 ), but here the distinction is between explicit and implicit funding, with the latter being classed as “Unfunded”.

How was the funding delivered?

Whilst a research grant would normally be financial, a project might be supported in kind by the loan or gift of equipment or time. For instance, agricultural research might be supported with access to relevant land or livestock (Tricarico et al., 2022 ). Here are three common approaches for delivering funding.

In kind—lending time or loaning/giving equipment or other resources.

Fixed amount of money.

A maximum amount of money, with actual spending justified by receipts.

How much funding did the project receive?

Project funding can be tiny, such as a few pounds for a trip or travel expenses, or enormous, such as for a particle accelerator. Grants of a few thousand pounds can also be common in some fields and for some funders (e.g., Gallo et al., 2014 ; Lyndon, 2018 ). In competitive processes, the funder normally indicates the grant size range that it is prepared to fund. The amount of funding for research has increased over time (Bloch & Sørensen, 2015 ).

The money awarded and/or claimed by the project.

How long was the funding for?

Funded projects can be short term, such as for a one-day event, or very long term, such as a 50-year nuclear fusion reactor programme. There seems to be a trend for longer term and larger amounts of funding, such as for centres of excellence that can manage multiple different lines of research (Hellström, 2018 ; OECD, 2014 ).

The intended or actual (e.g., due to costed or non-costed extensions) duration of the project.

Influence of the funding on the research project

A variety of aspects of the funding system were discussed in the previous sections; this section and the next switch to the effects of funding on what research is conducted and how. Whilst some grant schemes explicitly try to direct research (e.g., funding calls to build national artificial intelligence research capacity), even open calls may have indirect influences on team formation, goals, and broader research directions. This section discusses three different ways in which funding can influence a research project.

Influence on what the applicant did

Whilst funding presumably has a decisive influence on whether a study occurs most of the time because of the expense of the equipment or effort (e.g., to secure ethical approval for medical studies: Jonker et al., 2011 ), there may be exceptions. For example, an analysis of unfunded medical research found that it was often hospital-based (Álvarez-Bornstein et al., 2019 ), suggesting that it was supported by employers. Presumably the researcher applying for funding would usually have done something else research-related if they did not win the award, such as conducting different studies or applying for other funding. The following seem to be the main dimensions of variation here.

No influence (the study would have gone ahead without the funding).

Improved existing study (e.g., more time to finish, more/better equipment, more collaborators, constructive ideas from the peer review process). An extreme example of the latter is the Medical Research Council’s Developmental Pathway Funding Scheme (DPFS), which has expert input and decision making throughout a project.

Made the study possible, replacing other research-related activities (e.g., a different type of investigation, supporting another project, PhD mentoring).

Made the study possible, replacing non-research activities (e.g., teaching, clinical practice).

Researchers may conduct unfunded studies if financing is not essential and they would like to choose their own goals (Edwards, 2022; Kayrooz et al., 2007), or if their research time can be subsidised by teaching revenue (Olive, 2017). Some types of research are also inherently cheaper than others, such as secondary data analysis (Vaduganathan et al., 2018) and reviews in medical fields, so may not need funding. At the other extreme, large funding sources may redirect the long-term goals of an entire research group (Jeon, 2019). In between these two, funding may improve the quality of a study that would have gone ahead anyway, such as by improving its methods, including the sample size or the range of analyses used (Froud et al., 2015). Alternatively, it may have changed a study without necessarily improving it, such as by incorporating funder-relevant goals, methods, or target groups. Scholars with topics that do not match the major funding sources may struggle to do research at all (Laudel, 2005).

Influence on research goals or methods

In addition to supporting the research, the influence of the funding source can be minor or major from the perspective of the funded researcher. It seems likely that most funding requires some changes to what a self-funded researcher might otherwise do, if only to give reassurance that the proposed research will deliver tangible outputs (Serrano Velarde, 2018), or to fit specific funder requirements (Luukkonen & Thomas, 2016). Funding influence can perhaps be split into the following broad types, although they are necessarily imprecise, with considerable overlaps.

No influence (the applicant did not modify their research goals for the funder, or ‘relabelled’ their research goals to match the funding scheme).

Partial influence (the applicant modified their research goals for the funder).

Strong influence (the applicant developed new research goals for the funder, such as in response to a recent call for non-AI researchers to retrain to adopt AI).

Full determination (the funder specified the project, such as a pharmaceutical industry contract to test a new vaccine).

Focusing on more substantial changes only, the funding has no influence if the academic did not need to consider funder-related factors when proposing their study, or could select a funder that fully aligned with their goals. On the other hand, the influence is substantial if the researcher changed their goals to fit the funder requirements (Currie-Alder, 2015; Tellmann, 2022). In between, a project's goals may be tailored to a funder or funding requirements (Woodward & Clifton, 1994). An indirect way in which health-related funders often influence research is by requiring Patient and Public Involvement (PPI) at all levels of a project, including strategy development (e.g., Brett et al., 2014). Funding initiatives may also aim to change researchers' goals, such as to encourage the growth of a promising new field (Gläser et al., 2016). The wider funding environment may also effectively block some research types or topics if they are not in scope for most grants (Laudel & Gläser, 2014).

It seems likely that funding sources have the greatest influence on researchers' goals in resource-intensive areas, presumably including most science and health research, and especially areas where funders routinely issue topic-focused calls (e.g., Laudel, 2006; Woelert et al., 2021). The perceived likelihood of receiving future funding may also influence research methods, such as by encouraging researchers to hoard resources (e.g., perform fewer laboratory experiments for a funded paper) when future access may be at risk (Laudel, 2023).

Influence on research team composition

The funder's call may list eligibility requirements of various types. For example, the UK national funders specify that applicants must be predominantly UK academics. One common type of specification seems to be team size and composition, since many funders (e.g., the EU) specify or encourage collaborative projects. Funding may also encourage commercial participants or end-user partnerships, which may affect team composition (e.g., Gaughan & Bozeman, 2002). Four different approaches may be delineated as follows.

No influence (the funder allows any team size).

Partial influence (the applicant chooses a team size to enhance their perceived success rate).

Funder parameters (the funder specifies parameters, such as a requirement for collaboration, partners from at least three EU countries, a particular disciplinary composition, or an interdisciplinarity mandate).

Full determination (the funder specifies the team size, such as individual applicants only for career-related grants).

The influence of funders on research team composition is unlikely to be strict, even if they fully determine grant applicant team sizes, because funded researchers may still choose to collaborate with others who are supported by their own grants or who are unfunded.

Influence of the funding on the research outputs

The above categories cover how research funding helps or influences research studies. This section focuses on what may change in the outputs of researchers or projects due to the receipt of funding. This is important to consider because research outputs are the most visible and countable outcomes of research projects, although they are not always required (e.g., for funding for training or equipment) and different types can be encouraged. Four relevant dimensions of influence are discussed below.

Influence of funding on the applicant’s productivity

Funding can normally be expected to support the production of new outputs by an academic or team (Bloch et al., 2014; Danell & Hjerm, 2013), but this may be field dependent. For example, one study of the factors affecting productivity found that DFG grants had a positive effect on the productivity of German political scientists (Habicht et al., 2021). However, in some cases funding may produce fewer tangible outputs because of the need to collaborate with end users or conduct activities of value to them (Hottenrott & Thorwarth, 2011), or because the funding is for long-term high-risk investigations. In areas where funding is inessential or where core/block funding provides some baseline capability, academics who choose not to apply for it can devote all their research time to research rather than grant writing, which may increase their productivity (Thyer, 2011). Although simplistic, the situation may therefore be characterised by three possibilities.

Reduction in the number or size of outputs of relevant types by the applicant(s) during and/or after the project.

No change in the number or size of outputs of relevant types by the applicant(s) during and/or after the project.

Increase in the number or size of outputs of relevant types by the applicant(s) during and/or after the project.

Funding can also have the long-term indirect effect of improving productivity through career benefits for those funded, such as making them more likely to attract collaborators and future funding (Defazio et al., 2009; Heyard & Hottenrott, 2021; Hussinger & Carvalho, 2022; Saygitov, 2018; Shimada et al., 2017). Writing grant applications may also provide an intensive learning process, which may help careers (Ayoubi et al., 2019; Jonkers et al., 2017).

Influence of funding on the applicant’s research output types

Funding may change what a researcher or research team produces. For example, a commercial component of grants may reduce the number of journal articles produced (Hottenrott & Lawson, 2017). Funder policies may have other influences on what a researcher does, such as conditions to disseminate the results in a certain way. These may include open access publishing, providing accessible research data, or writing briefings for policy makers or the public. Whilst these may be considered good practice, some may be an additional overhead for the researcher. The possibilities may be summarised as follows, although the distinctions are qualitative.

No change in the nature of the outputs produced.

Partial change in the nature of the outputs produced.

Complete change in the nature of the outputs produced (e.g., patents instead of articles).

Influence of funding on the impact or quality of the research

Although cause-and-effect may be difficult to prove (e.g., Aagaard & Schneider, 2017), funding seems likely to change the citation, scholarly, societal, or other impacts of what a researcher or research team produces. For example, a reduction in citation impact may occur if the research becomes more application-focused, and an increase may occur if the funding improves the quality of the research.

Most studies have focused on citation impact, finding that funded research, or research funded by a particular funder, tends to be more cited than other research (Álvarez-Bornstein et al., 2019; Gush et al., 2018; Heyard & Hottenrott, 2021; Rigby, 2011; Roshani et al., 2021; Thelwall et al., 2016; Yan et al., 2018), albeit with a few exceptions (Alkhawtani et al., 2020; Jowkar et al., 2011; Muscio et al., 2017). Moreover, in some fields, unfunded work, or work that does not explicitly declare funding sources, can occasionally be highly cited (Sinha et al., 2016; Zhao, 2010). Logically, however, there are three broad types of influence on the overall impacts of the outputs produced, in addition to changes in the nature of the impacts.

Reduction in the citation/scholarly/societal/other impact of the outputs produced.

No change in the citation/scholarly/societal/other impact of the outputs produced.

Increase in the citation/scholarly/societal/other impact of the outputs produced.

The quality of the research produced is also important and could be assessed with a similar list to the one above. Research quality is normally thought to encompass three aspects: methodological rigour, innovativeness, and societal/scientific impact (Langfeldt et al., 2020). Considering quality overall therefore entails also attempting to assess the rigour and innovativeness of research. These seem likely to correlate positively with research impact and are difficult to assess on a large scale. Whilst rigour might be equated with passing journal peer review in some cases, innovation has no simple proxy indicator and is a particular concern for funding decisions (Franssen et al., 2018; Whitley et al., 2018).

The number and types of outcomes supported by a grant

When evaluating funding, it is important to consider the nature and number of the outputs and other outcomes produced specifically from it. Research projects often deliver multiple products, such as journal articles, scholarly talks, public-facing talks, and informational websites. There may also be more applied outputs, such as health policy changes, spin-out companies, and new drugs (Ismail et al., 2012). Since studies evaluating research funding often analyse only the citation impact of the journal articles produced (because of the ease of benchmarking), it is important to at least acknowledge that other outputs are also produced by researchers, even if it is difficult to take them into account in quantitative analyses.

The number and type of outcomes or outputs associated with a grant.

Of course, the non-citation impacts of research, such as policy changes or drug development, are notoriously difficult to track down even for individual projects (Boulding et al., 2020; Raftery et al., 2016), although there have been systematic attempts to identify policy citations (Szomszor & Adie, 2022). Thus, most types of impacts cannot be analysed on a large scale, and individual qualitative analyses are the only option for detailed impact assessments (Guthrie et al., 2015). In parallel with this, studies that compare articles funded by different sources should also consider the number of outputs per grant, since a grant producing more outputs would tend to be more successful. This adjustment does not seem to be made when average citation impact is compared, which is a limitation.
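As a minimal sketch of the per-grant adjustment suggested above, the following Python code compares funders by mean citations per article and per grant; the records and funder names are invented for illustration.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical article records: (funder, grant_id, citations).
articles = [
    ("FunderA", "A-1", 10), ("FunderA", "A-1", 2), ("FunderA", "A-2", 4),
    ("FunderB", "B-1", 12),
]

# Sum the citations of each grant's outputs, grouped by funder.
grant_totals = defaultdict(lambda: defaultdict(int))
for funder, grant, citations in articles:
    grant_totals[funder][grant] += citations

for funder, grants in grant_totals.items():
    per_article = mean(c for f, _, c in articles if f == funder)
    per_grant = mean(grants.values())
    print(f"{funder}: {per_article:.1f} citations/article, "
          f"{per_grant:.1f} citations/grant")
```

A grant with many outputs then counts once, rather than once per article, when funders are compared.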

A pragmatic issue for studies of grants: funding records

Finally, from a pragmatic data collection perspective, the funding for a research output can be recorded in different places, not all of which are public. A logical place to look for this information is within the output itself, although it may also be recorded within databases maintained by the funder or employer. Related to this, it is not always clear how much of an output can be attributed to an acknowledged funding source. Whilst the location of a funding record presumably has no influence on the effectiveness of the funding, and so is not directly relevant to the goals of this article, it is included here as an important practical consideration that all studies of grant funding must cope with. Three relevant dimensions of this ostensibly simple issue are discussed below.

Where the funding is recorded inside the output

Funding can be acknowledged explicitly in journal articles (Aagaard et al., 2021) and other research outputs, whether to thank the funder or to record possible conflicts of interest. This information may be omitted because the authors forget or do not want to acknowledge some or all funders. Here is a list of common locations.

A Funding section.

An Acknowledgements section.

A Notes section.

A Declaration of Interests section.

The first footnote.

The last footnote.

The last paragraph of the conclusions.

Elsewhere in the output.

Not recorded in the output.

The compulsory funding declaration sections of an increasing minority of journals are the ideal place for funder information. These force corresponding authors to declare funding, although they may not be able to track down all sources for large, multiply-funded teams. This section is also probably the main place where a clear statement that a study was unfunded could be found. A Declaration of Interests section may also announce an absence of funding, although this cannot be inferred from the more usual statement that the authors have no competing interests. Funding statements in other places are unsystematic in the sense that it seems easy for an author to forget them. Nevertheless, field norms may dictate a specific location for funding information (e.g., always a first-page footnote), and this seems likely to reduce the chance that this step is overlooked.
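To illustrate how this variety of locations complicates automated extraction, here is a minimal Python sketch that checks the more systematic heading-based locations first; the heading list and keyword heuristic are our own illustration rather than an established standard.

```python
import re

# Headings that commonly introduce funding information, ordered
# from most to least systematic (see the list above).
HEADINGS = ["Funding", "Acknowledgements", "Notes", "Declaration of Interests"]

def find_funding_statement(full_text: str) -> str | None:
    """Return the first paragraph under a likely funding heading, if any."""
    for heading in HEADINGS:
        # Match the heading on its own line, then capture the paragraph below.
        pattern = rf"^{re.escape(heading)}\s*\n+(.+?)(?:\n\s*\n|\Z)"
        match = re.search(pattern, full_text, re.MULTILINE | re.DOTALL)
        if match and re.search(r"fund|grant|support", match.group(1), re.I):
            return match.group(1).strip()
    return None  # footnotes and in-text statements need other heuristics

text = "Methods\n...\n\nFunding\nThis work was supported by grant X/123.\n\n"
print(find_funding_statement(text))  # This work was supported by grant X/123.
```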

Where the funding is recorded outside the output

Large funders are likely to keep track of the outputs from their funded research, and research institutions may also keep systematic records (Clements et al., 2017). These may be completed by researchers or administrators and may be mandatory or optional. Funders usually also record descriptive qualitative information about funded projects that is not essential for typical large-scale analyses of funded research but is important for keeping track of individual projects. It may also be used in large-scale descriptive analyses of changes in grant portfolios over time. For example, the UKRI Gateway to Research information includes project title, abstract (lay and technical), value (amount awarded by UKRI, so usually 80% FEC), funded period (start and end), project status (whether still active), category (broad research grant type, e.g., Fellowship), grant reference, Principal Investigator (PI) (and all co-Investigators), research classifications (e.g., Health Research Classification System [HRCS] for MRC grants), research organisations involved (whether as proposed collaborators or funding recipients/partners), and, as the project progresses, any outputs reported via Researchfish.
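As one way to picture such a record, the sketch below models the fields just listed as a Python data structure; the class and field names are illustrative paraphrases, not the Gateway to Research's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class GrantRecord:
    """Illustrative model of a funder's project record, paraphrasing
    the UKRI Gateway to Research fields listed above (not its schema)."""
    title: str
    abstract: str                   # lay and/or technical summary
    value_gbp: float                # amount awarded (often 80% of FEC)
    start: str                      # funded period start (ISO date)
    end: str                        # funded period end (ISO date)
    active: bool                    # project status
    category: str                   # broad grant type, e.g. "Fellowship"
    reference: str                  # grant reference
    principal_investigator: str
    co_investigators: list[str] = field(default_factory=list)
    classifications: list[str] = field(default_factory=list)  # e.g. HRCS codes
    organisations: list[str] = field(default_factory=list)    # collaborators/partners
    outputs: list[str] = field(default_factory=list)          # e.g. via Researchfish
```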

Academic employers may also track the outputs and funding of their staff in a current research information system or within locally designed databases or spreadsheets. Dimensions for Funders (Dimensions, 2022), for example, compiles funding information from a wide range of sources. Other public datasets include the UKRI Gateway to Research (extensive linkage to outputs), the Europe PMC grant lookup tool (good linkage to publications), the UKCDR covid funding tracker (some linkage to publications via Europe PMC), the occasional UK Health Research Analysis (hrcsonline.net), and the European Commission CORDIS dataset. There are also some initiatives to comprehensively catalogue who funds what in particular domains, such as for UK non-commercial health research (UKCRC, 2020). Of course, there are ad-hoc funding statements too, such as in narrative claims of research impact on university websites or as part of evaluations (Grant & Hinrichs, 2015), but these may be difficult to harvest systematically. The following list includes a range of common locations.

In a university/employer public/private funding record.

In the academic’s public/private CV.

In the funder’s public/private record.

In a shared public/private research funding system used by the funder (e.g., Researchfish).

In publicity for the grant award (if the output is mentioned specifically enough).

In publicity for the output (e.g., a theatre programme for a performance output).

Elsewhere outside the output.

Not recorded outside the output.

From the perspective of third parties seeking information about funding for outputs, if the employer and/or funder databases are private, or public but difficult to search, then online publicity about the outputs or funding may provide an alternative record.
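As a small illustration of such third-party reconstruction, the following Python sketch merges funding attributions harvested from two of the locations above, keyed by output DOI; the record layouts are hypothetical.

```python
def merge_funding_records(*sources):
    """Union the funding attributions from several record sets,
    keyed by the output's DOI (hypothetical record layout)."""
    merged: dict[str, set[str]] = {}
    for source in sources:
        for record in source:
            merged.setdefault(record["doi"], set()).update(record["funders"])
    return merged

# One record harvested from a funder database, one from an employer CRIS.
funder_db = [{"doi": "10.1/a", "funders": {"MRC"}}]
employer_cris = [{"doi": "10.1/a", "funders": {"MRC", "Wellcome"}}]
print(merge_funding_records(funder_db, employer_cris))
# {'10.1/a': {'MRC', 'Wellcome'}}
```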

What is the connection between outputs and their declared funders?

Some outputs have a clear identifiable funder or set of funders. For example, a grant may be awarded to write a book, and the book would therefore clearly be the primary output of the project. Similarly, a grant to conduct a specified randomised controlled trial seems likely to produce an article reporting the results; this, after passing review, would presumably be the primary research output, even though an unpublished statistical summary of the results might suffice in some cases, especially when time is a factor. More loosely, a grant may specify a programme of research and promise several unspecified or vaguely specified outputs. In this case there may be outputs related to the project but not essential to it that might be classed as being part of it. It is also possible that outputs with little connection to a project are recorded as part of it for strategic reasons, such as to satisfy a project quota or gain a higher end-of-project grade. For example, Researchfish (Reddick et al., 2022) allows grant holders to select which publications on their CVs to associate with each grant. There are also genuine mistakes in declaring funding (e.g., Elmunim et al., 2022). The situation may be summarised with the following logical categories (a sketch of how they might be assigned automatically follows the list).

Direct, clear connection (e.g., the study is a named primary output of a project).

Indirect, clear connection (e.g., the study is a writeup of a named project outcome).

Indirect, likely connection (e.g., the study is an output of someone working on the project and the output is on the project topic).

Tenuous connection (e.g., the study was completed before the project started, by personnel not associated with the project, or by project personnel on an unrelated topic).

No connection at all (such as due to a recording error; presumably rare).
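The sketch below suggests how these categories might be assigned automatically from simple project metadata; the rules and field names are illustrative heuristics rather than a validated method, and the "indirect, clear" category is omitted for brevity.

```python
from datetime import date

def connection_category(output: dict, project: dict) -> str:
    """Assign a rough output-to-grant connection category using
    simple, illustrative heuristics on hypothetical metadata."""
    if output["id"] in project["named_outputs"]:
        return "direct, clear"
    personnel_overlap = set(output["authors"]) & set(project["personnel"])
    if output["completed"] < project["start"] or not personnel_overlap:
        return "tenuous or none"
    if output["topic"] == project["topic"]:
        return "indirect, likely"
    return "tenuous"

project = {"named_outputs": set(), "personnel": {"A. Smith"},
           "start": date(2020, 1, 1), "topic": "methane mitigation"}
output = {"id": "doi:10.1/x", "authors": ["A. Smith"],
          "topic": "methane mitigation", "completed": date(2021, 6, 1)}
print(connection_category(output, project))  # indirect, likely
```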

Conclusions

This paper has described dimensions along which research funding differs between projects, with a focus on grant funding. This includes dimensions that are important to consider when analysing the value of research funding quantitatively. This list is incomplete, and not all aspects will be relevant to all future analyses of funding. Most qualitative and rarer dimensions of difference associated with funding are omitted, including the exact nature of any societal impact, support for researcher development, and support for wider social, ethical or scientific issues (e.g., promoting open science).

Organisations that compile funding datasets or otherwise record funding information may also consult the lists above when considering which records are desirable to collect. Of course, the providers of large datasets, such as the Dimensions for Funders system, may often be unable to find this information (e.g., if it is not provided by funders) or to process it adequately (e.g., because there are too many variations in funding types and no straightforward way to present the data to users).

When comparing funding sources or evaluating the impact of funding, it is important to consider as many dimensions as practically possible to ensure that comparisons are as fair as achievable, whilst acknowledging the remaining sources of variation as limitations. Even at the level of funding schemes, each has unique features; since comparisons must nevertheless be made for management purposes, it is important to account for these differences, or at least be aware of them, when making comparisons.

Aagaard, K., Mongeon, P., Ramos-Vielba, I., & Thomas, D. A. (2021). Getting to the bottom of research funding: Acknowledging the complexity of funding dynamics. PLoS ONE, 16 (5), e0251488.


Aagaard, K., & Schneider, J. W. (2017). Some considerations about causes and effects in studies of performance-based research funding systems. Journal of Informetrics, 11 (3), 923–926.

Alkhawtani, R. H., Kwee, T. C., & Kwee, R. M. (2020). Funding of radiology research: Frequency and association with citation rate. American Journal of Roentgenology, 215 , 1286–1289.

Álvarez-Bornstein, B., Díaz-Faes, A. A., & Bordons, M. (2019). What characterises funded biomedical research? Evidence from a basic and a clinical domain. Scientometrics, 119 (2), 805–825.

Ayoubi, C., Pezzoni, M., & Visentin, F. (2019). The important thing is not to win, it is to take part: What if scientists benefit from participating in research grant competitions? Research Policy, 48 (1), 84–97.

Bernardin, H. J. (1996). Academic research under siege: Toward better operational definitions of scholarship to increase effectiveness, efficiencies and productivity. Human Resource Management Review, 6 (3), 207–229.

BHF. (2022). Research evaluation report—British Heart Foundation. Retrieved from https://www.bhf.org.uk/for-professionals/information-for-researchers/managing-your-grant/research-evaluation

Bloch, C., Graversen, E., & Pedersen, H. (2014). Competitive grants and their impact on career performance. Minerva, 52 , 77–96.

Bloch, C., & Sørensen, M. P. (2015). The size of research funding: Trends and implications. Science and Public Policy, 42 (1), 30–43.

Bol, T., de Vaan, T., & van de Rijt, A. (2022). Gender-equal funding rates conceal unequal evaluations. Research Policy, 51 (2022), 104399.

Boulding, H., Kamenetzky, A., Ghiga, I., Ioppolo, B., Herrera, F., Parks, S., & Hinrichs-Krapels, S. (2020). Mechanisms and pathways to impact in public health research: A preliminary analysis of research funded by the National Institute for health research (NIHR). BMC Medical Research Methodology, 20 (1), 1–20.

Brett, J. O., Staniszewska, S., Mockford, C., Herron-Marx, S., Hughes, J., Tysall, C., & Suleman, R. (2014). Mapping the impact of patient and public involvement on health and social care research: A systematic review. Health Expectations, 17 (5), 637–650.

Clements, A., Reddick, G., Viney, I., McCutcheon, V., Toon, J., Macandrew, H., & Wastl, J. (2017). Let’s Talk-Interoperability between university CRIS/IR and Researchfish: A case study from the UK. Procedia Computer Science, 106 , 220–231.

Comins, J. A. (2015). Data-mining the technological importance of government-funded patents in the private sector. Scientometrics, 104 (2), 425–435.

Cooksey, D. (2006). A review of UK health research funding. Retrieved from https://www.jla.nihr.ac.uk/news-and-publications/downloads/Annual-Report-2007-08/Annexe-8-2007-2008-CookseyReview.pdf

Currie-Alder, B. (2015). Research for the developing world: Public funding from Australia, Canada, and the UK . Oxford University Press.


Danell, R., & Hjerm, R. (2013). The importance of early academic career opportunities and gender differences in promotion rates. Research Evaluation, 22 , 2010–2214.

Davies, J. (2016). Collaborative funding for NCDs—A model of research funding. The Lancet Diabetes & Endocrinology, 4 (9), 725–727.

Defazio, D., Lockett, A., & Wright, M. (2009). Funding incentives, collaborative dynamics and scientific productivity: Evidence from the EU framework program. Research Policy, 38 (2), 293–305.

Deglon, M., Dalvie, M. A., & Abrams, A. (2023). The impact of extreme weather events on mental health in Africa: A scoping review of the evidence. Science of the Total Environment, 881 , 163420.

Dimensions. (2022). Dimensions for funders. Retrieved from https://www.dimensions.ai/who/government-and-funders/dimensions-for-funders/

Edwards, R. (2022). Why do academics do unfunded research? Resistance, compliance and identity in the UK neo-liberal university. Studies in Higher Education, 47 (4), 904–914.

Elkbuli, A., Zajd, S., Narvel, R. I., Dowd, B., Hai, S., Mckenney, M., & Boneva, D. (2020). Factors affecting research productivity of trauma surgeons. The American Surgeon, 86 (3), 273–279.

Elmunim, N. A., Abdullah, M., & Bahari, S. A. (2022). Correction: Elnumin et al. Evaluating the Performance of IRI-2016 Using GPS-TEC measurements over the equatorial region: Atmosphere 2021, 12, 1243. Atmosphere, 13 (5), 762.

El-Sawi, N. I., Sharp, G. F., & Gruppen, L. D. (2009). A small grants program improves medical education research productivity. Academic Medicine, 84 (10), S105–S108.

ERC. (2017). Assessing the business performance effects of receiving publicly-funded science, research and innovation grants. Retrieved from https://www.enterpriseresearch.ac.uk/publications/accessing-business-performance-effects-receiving-publicly-funded-science-research-innovation-grants-research-paper-no-61/

Fang, F. C., & Casadevall, A. (2016). Research funding: The case for a modified lottery. Mbio, 7 (2), 10–1128.

Fernández-del-Castillo, E., Scardaci, D., & García, Á. L. (2015). The EGI federated cloud e-infrastructure. Procedia Computer Science, 68 , 196–205.

Franssen, T., Scholten, W., Hessels, L. K., & de Rijcke, S. (2018). The drawbacks of project funding for epistemic innovation: Comparing institutional affordances and constraints of different types of research funding. Minerva, 56 (1), 11–33.

Froud, R., Bjørkli, T., Bright, P., Rajendran, D., Buchbinder, R., Underwood, M., & Eldridge, S. (2015). The effect of journal impact factor, reporting conflicts, and reporting funding sources, on standardized effect sizes in back pain trials: A systematic review and meta-regression. BMC Musculoskeletal Disorders, 16 (1), 1–18.

Gallo, S. A., Carpenter, A. S., Irwin, D., McPartland, C. D., Travis, J., Reynders, S., & Glisson, S. R. (2014). The validation of peer review through research impact measures and the implications for funding strategies. PLoS ONE, 9 (9), e106474.

Garrett-Jones, S. (2000). International trends in evaluating university research outcomes: What lessons for Australia? Research Evaluation, 9 (2), 115–124.

Gaughan, M., & Bozeman, B. (2002). Using curriculum vitae to compare some impacts of NSF research grants with research center funding. Research Evaluation, 11 (1), 17–26.

Gläser, J., Laudel, G., & Lettkemann, E. (2016). Hidden in plain sight: The impact of generic governance on the emergence of research fields. The local configuration of new research fields: On regional and national diversity, 25–43.

Grant, J., & Hinrichs, S. (2015). The nature, scale and beneficiaries of research impact: An initial analysis of the Research Excellence Framework (REF) 2014 impact case studies. Retrieved from https://kclpure.kcl.ac.uk/portal/files/35271762/Analysis_of_REF_impact.pdf

Grimpe, C. (2012). Extramural research grants and scientists’ funding strategies: Beggars cannot be choosers? Research Policy, 41 (8), 1448–1460.

Gross, K., & Bergstrom, C. T. (2019). Contest models highlight inherent inefficiencies of scientific funding competitions. PLoS Biology, 17 (1), e3000065.

Gush, J., Jaffe, A., Larsen, V., & Laws, A. (2018). The effect of public funding on research output: The New Zealand Marsden Fund. New Zealand Economic Papers, 52 (2), 227–248.

Guthrie, S., Bienkowska-Gibbs, T., Manville, C., Pollitt, A., Kirtley, A., & Wooding, S. (2015). The impact of the national institute for health research health technology assessment programme, 2003–13: A multimethod evaluation. Health Technology Assessment, 19 (67), 1–291.

Habicht, I. M., Lutter, M., & Schröder, M. (2021). How human capital, universities of excellence, third party funding, mobility and gender explain productivity in German political science. Scientometrics, 126 , 9649–9675.

Hellström, T. (2018). Centres of excellence and capacity building: From strategy to impact. Science and Public Policy, 45 (4), 543–552.

Heyard, R., & Hottenrott, H. (2021). The value of research funding for knowledge creation and dissemination: A study of SNSF research grants. Humanities and Social Sciences Communications, 8 (1), 1–16.

Hicks, D. (2012). Performance-based university research funding systems. Research Policy, 41 (2), 251–261.

Hottenrott, H., & Lawson, C. (2017). Fishing for complementarities: Research grants and research productivity. International Journal of Industrial Organization, 51 (1), 1–38.

Hottenrott, H., & Thorwarth, S. (2011). Industry funding of university research and scientific productivity. Kyklos, 64 (4), 534–555.

Hu, M. C. (2009). Developing entrepreneurial universities in Taiwan: The effects of research funding sources. Science, Technology and Society, 14 (1), 35–57.

Hussinger, K., & Carvalho, J. N. (2022). The long-term effect of research grants on the scientific output of university professors. Industry and Innovation, 29 (4), 463–487.

Ismail, S., Tiessen, J., & Wooding, S. (2012). Strengthening research portfolio evaluation at the medical research council: Developing a survey for the collection of information about research outputs. Rand Health Quarterly , 1 (4). Retrieved from https://www.rand.org/pubs/technical_reports/TR743.html

Jeon, J. (2019). Invisibilizing politics: Accepting and legitimating ignorance in environmental sciences. Social Studies of Science, 49 (6), 839–862.

Jonker, L., Cox, D., & Marshall, G. (2011). Considerations, clues and challenges: Gaining ethical and trust research approval when using the NHS as a research setting. Radiography, 17 (3), 260–264.

Jonkers, K., & Zacharewicz, T. (2016). Research performance based funding systems: A comparative assessment. European Commission. Retrieved from https://ec.europa.eu/jrc/en/publication/eur-scientific-and-technical-research-reports/research-performance-based-funding-systems-comparative-assessment

Jonkers, K., Fako P., Isella, L., Zacharewicz, T., Sandstrom, U., & Van den Besselaar, P. (2017). A comparative analysis of the publication behaviour of MSCA fellows. Proceedings STI conference . Retrieved from https://www.researchgate.net/profile/Ulf-Sandstroem-2/publication/319547178_A_comparative_analysis_of_the_publication_behaviour_of_MSCA_fellows/links/59b2ae00458515a5b48d133f/A-comparative-analysis-of-the-publication-behaviour-of-MSCA-fellows.pdf

Jowkar, A., Didegah, F., & Gazni, A. (2011). The effect of funding on academic research impact: A case study of Iranian publications. Aslib Proceedings, 63 (6), 593–602.

Kang, B., & Motohashi, K. (2020). Academic contribution to industrial innovation by funding type. Scientometrics, 124 (1), 169–193.

Kayrooz, C., Åkerlind, G. S., & Tight, M. (Eds.). (2007). Autonomy in social science research, volume 4: The View from United Kingdom and Australian Universities . Emerald Group Publishing Limited.


Kim, K. S., Chung, J. H., Jo, J. K., Kim, J. H., Kim, S., Cho, J. M., & Lee, S. W. (2018). Quality of randomized controlled trials published in the international urogynecology journal 2007–2016. International Urogynecology Journal, 29 (7), 1011–1017.

Langfeldt, L., Bloch, C. W., & Sivertsen, G. (2015). Options and limitations in measuring the impact of research grants—Evidence from Denmark and Norway. Research Evaluation, 24 (3), 256–270.

Langfeldt, L., Nedeva, M., Sörlin, S., & Thomas, D. A. (2020). Co-existing notions of research quality: A framework to study context-specific understandings of good research. Minerva, 58 (1), 115–137.

Laudel, G. (2005). Is external research funding a valid indicator for research performance? Research Evaluation, 14 (1), 27–34.

Laudel, G. (2006). The art of getting funded: How scientists adapt to their funding conditions. Science and Public Policy, 33 (7), 489–504.

Laudel, G. (2023). Researchers’ responses to their funding situation. In: B. Lepori & B. Jongbloed (Eds.), Handbook of public funding of research (pp. 261–278).

Laudel, G., & Gläser, J. (2014). Beyond breakthrough research: Epistemic properties of research and their consequences for research funding. Research Policy, 43 (7), 1204–1216.

Luo, J., Ma, L., & Shankar, K. (2021). Does the inclusion of non-academic reviewers make any difference for grant impact panels? Science and Public Policy, 48 (6), 763–775.

Lutter, M., Habicht, I. M., & Schröder, M. (2022). Gender differences in the determinants of becoming a professor in Germany: An event history analysis of academic psychologists from 1980 to 2019. Research Policy, 51 , 104506.

Luukkonen, T., & Thomas, D. A. (2016). The ‘negotiated space’ of university researchers’ pursuit of a research agenda. Minerva, 54 (1), 99–127.

Lyndon, A. R. (2018). Influence of the FSBI small research grants scheme: An analysis and appraisal. Journal of Fish Biology, 92 (3), 846–850.

Meadmore, K., Fackrell, K., Recio-Saucedo, A., Bull, A., Fraser, S. D., & Blatch-Jones, A. (2020). Decision-making approaches used by UK and international health funding organisations for allocating research funds: A survey of current practice. PLoS ONE, 15 (11), e0239757.

MRC. (2019). MRC 10 year translational research evaluation report 2008 to 2018. Retrieved from https://www.ukri.org/publications/mrc-translational-research-evaluation-report/

Muscio, A., Ramaciotti, L., & Rizzo, U. (2017). The complex relationship between academic engagement and research output: Evidence from Italy. Science and Public Policy, 44 (2), 235–245.

Neufeld, J. (2016). Determining effects of individual research grants on publication output and impact: The case of the Emmy Noether Programme (German Research Foundation). Research Evaluation, 25 (1), 50–61.

OECD. (2014). Promoting research excellence: new approaches to funding. OECD. Retrieved from https://www.oecd-ilibrary.org/science-and-technology/promoting-research-excellence_9789264207462-en

OECD. (2015). Frascati manual 2015. Retrieved from https://www.oecd.org/innovation/frascati-manual-2015-9789264239012-en.htm

Olive, V. (2017). How much is too much? Cross-subsidies from teaching to research in British Universities . Higher Education Policy Institute.

Paulson, K., Saeed, M., Mills, J., Cuvelier, G. D., Kumar, R., Raymond, C., & Seftel, M. D. (2011). Publication bias is present in blood and marrow transplantation: An analysis of abstracts at an international meeting. Blood, the Journal of the American Society of Hematology, 118 (25), 6698–6701.

Raftery, J., Hanley, S., Greenhalgh, T., Glover, M., & Blotch-Jones, A. (2016). Models and applications for measuring the impact of health research: Update of a systematic review for the health technology assessment programme. Health Technology Assessment, 20 (76), 1–254. https://doi.org/10.3310/hta20760

Reale, E., Lepori, B., & Scherngell, T. (2017). Analysis of national public research funding-pref. JRC-European Commission. Retrieved from https://core.ac.uk/download/pdf/93512415.pdf

Reddick, G., Malkov, D., Sherbon, B., & Grant, J. (2022). Understanding the funding characteristics of research impact: A proof-of-concept study linking REF 2014 impact case studies with Researchfish grant agreements. F1000Research, 10 , 1291.

Richards, H. (2019). Equipment grants: It’s all in the details. Journal of Biomolecular Techniques: JBT, 30 (Suppl), S49.

Rigby, J. (2011). Systematic grant and funding body acknowledgement data for publications: New dimensions and new controversies for research policy and evaluation. Research Evaluation, 20 (5), 365–375.

Roach, J. W., Skaggs, D. L., Sponseller, P. D., & MacLeod, L. M. (2008). Is research presented at the scoliosis research society annual meeting influenced by industry funding? Spine, 33 (20), 2208–2212.

Roshani, S., Bagherylooieh, M. R., Mosleh, M., & Coccia, M. (2021). What is the relationship between research funding and citation-based performance? A comparative analysis between critical disciplines. Scientometrics, 126 (9), 7859–7874.

Sandström, U. (2009). Research quality and diversity of funding: A model for relating research money to output of research. Scientometrics, 79 (2), 341–349.

Sandström, U., & Van den Besselaar, P. (2018). Funding, evaluation, and the performance of national research systems. Journal of Informetrics, 12 , 365–384.

Saygitov, R. T. (2018). The impact of grant funding on the publication activity of awarded applicants: A systematic review of comparative studies and meta-analytical estimates. Biorxiv , 354662.

Schneider, J. W., & van Leeuwen, T. N. (2014). Analysing robustness and uncertainty levels of bibliometric performance statistics supporting science policy: A case study evaluating Danish postdoctoral funding. Research Evaluation, 23 (4), 285–297.

Schroder, M., Lutter, M., & Habicht, I. M. (2021). Publishing, signalling, social capital, and gender: Determinants of becoming a tenured professor in German political science. PLoS ONE, 16 (1), e0243514.

Serrano Velarde, K. (2018). The way we ask for money… The emergence and institutionalization of grant writing practices in academia. Minerva, 56 (1), 85–107.

Shimada, Y. A., Tsukada, N., & Suzuki, J. (2017). Promoting diversity in science in Japan through mission-oriented research grants. Scientometrics, 110 (3), 1415–1435.

Sinha, Y., Iqbal, F. M., Spence, J. N., & Richard, B. (2016). A bibliometric analysis of the 100 most-cited articles in rhinoplasty. Plastic and Reconstructive Surgery Global Open, 4 (7), e820. https://doi.org/10.1097/GOX.0000000000000834

Szomszor, M., & Adie, E. (2022). Overton: A bibliometric database of policy document citations. arXiv preprint arXiv:2201.07643 .

Tellmann, S. M. (2022). The societal territory of academic disciplines: How disciplines matter to society. Minerva, 60 (2), 159–179.

Thelwall, M., Kousha, K., Abdoli, M., Stuart, E., Makita, M., Font-Julián, C. I., Wilson, P., & Levitt, J. (2023). Is research funding always beneficial? A cross-disciplinary analysis of UK research 2014–20. Quantitative Science Studies, 4 (2), 501–534. https://doi.org/10.1162/qss_a_00254

Thelwall, M., Kousha, K., Dinsmore, A., & Dolby, K. (2016). Alternative metric indicators for funding scheme evaluations. Aslib Journal of Information Management, 68 (1), 2–18. https://doi.org/10.1108/AJIM-09-2015-0146

Thyer, B. A. (2011). Harmful effects of federal research grants. Social Work Research, 35 (1), 3–7.

Tricarico, J. M., de Haas, Y., Hristov, A. N., Kebreab, E., Kurt, T., Mitloehner, F., & Pitta, D. (2022). Symposium review: Development of a funding program to support research on enteric methane mitigation from ruminants. Journal of Dairy Science, 105 , 8535–8542.

UKCRC. (2020). UK health research analysis 2018. Retrieved from https://hrcsonline.net/reports/analysis-reports/uk-health-research-analysis-2018/

Vaduganathan, M., Nagarur, A., Qamar, A., Patel, R. B., Navar, A. M., Peterson, E. D., & Butler, J. (2018). Availability and use of shared data from cardiometabolic clinical trials. Circulation, 137 (9), 938–947.

Van den Besselaar, P., & Horlings, E. (2011). Focus en massa in het wetenschappelijk onderzoek. De Nederlandse onderzoeksportfolio in internationaal perspectief. (In Dutch: Focus and mass in research: The Dutch research portfolio from an international perspective). Den Haag, Rathenau Instituut.

Van den Besselaar, P. & Mom, C. (2021). Gender bias in grant allocation, a mixed picture . Preprint.

Van den Besselaar, P., & Leydesdorff, L. (2009). Past performance, peer review, and project selection: A case study in the social and behavioral sciences. Research Evaluation, 18 (4), 273–288.

Van den Besselaar, P., & Sandström, U. (2015). Early career grants, performance and careers; a study of predictive validity in grant decisions. Journal of Informetrics, 9 , 826–838.

Versleijen, A., van der Meulen, B., van Steen, J., Kloprogge, P., Braam, R., Mamphuis, R., & van den Besselaar, P. (2007). Dertig jaar onderzoeksfinanciering—trends, beleid en implicaties. (In Dutch: Thirty years research funding in the Netherlands—1975–2005). Den Haag: Rathenau Instituut 2007.

Voss, A., Andreß, B., Pauzenberger, L., Herbst, E., Pogorzelski, J., & John, D. (2021). Research productivity during orthopedic surgery residency correlates with pre-planned and protected research time: A survey of German-speaking countries. Knee Surgery, Sports Traumatology, Arthroscopy, 29 , 292–299.

Wang, L., Wang, X., Piro, F. N., & Philipsen, N. J. (2020). The effect of competitive public funding on scientific output: A comparison between China and the EU. Research Evaluation, 29 (4), 418–429.

Whitley, R., Gläser, J., & Laudel, G. (2018). The impact of changing funding and authority relationships on scientific innovations. Minerva, 56 , 109–134.

Woelert, P., Lewis, J. M., & Le, A. T. (2021). Formally alive yet practically complex: An exploration of academics’ perceptions of their autonomy as researchers. Higher Education Policy, 34 , 1049–1068.

Woodson, T. S., & Williams, L. D. (2020). Stronger together: Inclusive innovation and undone science frameworks in the Global South. Third World Quarterly, 41 (11), 1957–1972.

Woodward, D. K., & Clifton, G. D. (1994). Development of a successful research grant application. American Journal of Health-System Pharmacy, 51 (6), 813–822.

Yan, E., Wu, C., & Song, M. (2018). The funding factor: A cross-disciplinary examination of the association between research funding and citation impact. Scientometrics, 115 (1), 369–384.

Zhao, D. (2010). Characteristics and impact of grant-funded research: A case study of the library and information science field. Scientometrics, 84 (2), 293–306.


No funding was received for conducting this study.

Author information

Authors and Affiliations

Statistical Cybermetrics and Research Evaluation Group, University of Wolverhampton, Wolverhampton, UK

Mike Thelwall

Information School, University of Sheffield, Sheffield, UK

MRC Secondee, Evaluation and Analysis Team, Medical Research Council, London, UK

Subreena Simrick

Evaluation and Analysis Team, Medical Research Council, London, UK

Ian Viney

Department of Organization Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands

Peter Van den Besselaar

German Centre for Higher Education Research and Science Studies (DZHW), Berlin, Germany


Corresponding author

Correspondence to Mike Thelwall .

Ethics declarations

Competing interests

The first and fourth authors are members of the Distinguished Reviewers Board of Scientometrics. The second and third authors work for research funders.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Thelwall, M., Simrick, S., Viney, I. et al. What is research funding, how does it influence research, and how is it recorded? Key dimensions of variation. Scientometrics 128 , 6085–6106 (2023). https://doi.org/10.1007/s11192-023-04836-w


Received : 12 February 2023

Accepted : 05 September 2023

Published : 16 September 2023

Issue Date : November 2023

DOI : https://doi.org/10.1007/s11192-023-04836-w


  • Research funding
  • Academic research funding
  • Research funding typology
  • Funding effects

October 1, 2018

Science Funding Is Broken

The way we pay for science does not encourage the best results

By John P. A. Ioannidis


With millions of scientific papers published every year and more than $2 trillion invested annually in research and development, scientists make plenty of progress. But could we do better? There is increasing evidence that some of the ways we conduct, evaluate, report and disseminate research are miserably ineffective. A series of papers in 2014 in the Lancet, for instance, estimated that 85 percent of investment in biomedical research is wasted. Many other disciplines have similar problems. Here are some of the ways our reward and incentives systems fail and some proposals for fixing the problems.

We Fund Too Few Scientists

Funding is largely concentrated in the hands of a few investigators. There are many talented scientists, and major success is largely the result of luck, as well as hard work. The investigators currently enjoying huge funding are not necessarily genuine superstars; they may simply be the best connected.


Use a lottery to decide which grant applications to fund (perhaps after they pass a basic review). This scheme would eliminate the arduous effort and expenditure that now goes into reviewing proposals and would give a chance to many more investigators.

A proposed cap to the maximum funding that any single investigator can receive was fiercely shot down by the prestigious institutions that gain the most from this overconcentration. Shifting the funds from senior people to younger researchers, perhaps even in the same laboratory, however, would not affect these institutions and would also make the cohort of principal investigators more open to innovation.

We Do Not Reward Transparency

Many scientific protocols, analysis methods, computational processes and data are opaque. When researchers try to crack open these black boxes, they often discover that many top findings cannot be reproduced. That is the case for two out of three top psychology papers, one out of three top papers in experimental economics and more than 75 percent of top papers identifying new cancer drug targets. Most important, scientists are not rewarded for sharing their techniques. These good scientific citizenship activities take substantial effort. In competitive environments, many scientists even think, Why offer ammunition to competitors? Why share?

Create better infrastructure for enabling transparency, openness and sharing.

Make transparency a prerequisite for funding.

Universities and research institutes could preferentially hire, promote or tenure those who are champions of transparency.

We Do Not Encourage Replication

Under continuous pressure to deliver new discoveries, researchers in many fields have little incentive and plenty of counterincentives to try replicating results of previous studies. Yet replication is an indispensable centerpiece of the scientific method. Without it, we run the risk of flooding scientific journals with false information that never gets corrected.

Funding agencies must pay for replication studies.

Scientists’ advancement should be based not only on their discoveries but also on their replication track record.

We Do Not Fund Young Investigators

The average age of biomedical scientists receiving their first substantial grant is 46 and is increasing over time. The average age for a full professor in the U.S. is 55 and growing. Only 1.6 percent of funding in the NIH’s Research Project Grant program went to principal investigators younger than 36 in 2017, but 13.2 percent went to those 66 and older. Similar aging is seen in other sciences, and it is not explained simply by life-expectancy improvement. Werner Heisenberg, Albert Einstein, Paul Dirac and Wolfgang Pauli made their top contributions in their mid-20s. Imagine telling them it would be another 25 years before they could receive funding. Some of the best minds may quit rather than wait.

A larger proportion of funding should be earmarked for young investigators.

Universities should try to shift the aging distribution of their faculty by hiring more young investigators.

We Use Biased Funding Sources

Most funding for research and development in the U.S. comes not from the government but from private, for-profit sources, raising unavoidable conflicts of interest and pressure to deliver results favorable to the sponsor. Clinical trials funded by the pharmaceutical industry, for instance, have 27 percent higher odds of reaching favorable results than publicly funded trials. Some of the sponsors are improbable champions of scientific truth. For example, Philip Morris (the manufacturer of Marlboro cigarettes) recently announced it would contribute $960 million over 12 years to establish the Foundation for a Smoke-Free World, a nonprofit initiative that aims to eliminate smoking. Disclosure of conflicts of interest has improved in many fields, but in-depth detective work suggests that it is still far from complete.

Restrict or even ban funding that has overt conflicts of interest. Journals should not accept research with such conflicts.

For less conspicuous conflicts, at a minimum ensure transparent and thorough disclosure.

We Fund the Wrong Fields

Much like Mafia clans, some fields and families of ideas have traditionally been more powerful. Well-funded fields attract more scientists to work for them, which increases their lobbying reach, fueling a vicious circle. Some entrenched fields absorb enormous funding even though they have clearly demonstrated limited yield or uncorrectable flaws. Further investment in them is futile.

Independent, impartial assessment of output is necessary for lavishly funded fields.

More funds should be earmarked for new fields and fields that are high risk.

Researchers should be encouraged to switch fields, whereas currently they are incentivized to focus in one area.

We Do Not Spend Enough

In many countries, public funding has stagnated and is under increasing threat from competing budget items. The budget for U.S. military spending ($886 billion) is 24 times the budget of the NIH ($37 billion). The value of a single soccer team such as Manchester United ($4.1 billion) is larger than the annual research budget of any university. Investment in science benefits society at large, yet attempts to convince the public often make matters worse when otherwise well-intentioned science leaders promise the impossible, such as promptly eliminating all cancer or Alzheimer’s disease. When these promises do not deliver, support for science can flag.

We need to communicate how science funding is used by making the process of science clearer, including the number of scientists it takes to make major accomplishments. Universities, science museums and science journalism can help get this message out.

We would also make a more convincing case for science if we could show that we do work hard on improving how we run it.

We Reward Big Spenders

Hiring, promotion and tenure decisions primarily rest on a researcher’s ability to secure high levels of funding. But the expense of a project does not necessarily correlate with its importance. Such reward structures select mostly for politically savvy managers who know how to absorb money.

We should reward scientists for high-quality work, reproducibility and social value rather than for securing funding.

Excellent research can be done with little to no funding other than protected time. Institutions should provide this time and respect scientists who can do great work without wasting tons of money.

We Do Not Fund High-Risk Ideas

Review panels, even when they are made up of excellent scientists, are allergic to risky ideas. The pressure that taxpayer money be “well spent” leads government funders to back projects most likely to pay off with a positive result, even if riskier projects might lead to more important, but less assured, advances. Industry also avoids investing in high-risk projects, waiting for start-ups to try (and often fail with) out-of-the-box ideas. As a result, nine out of the 10 largest pharmaceutical companies spend more on marketing than on R&D. Public funding agencies contend that they cherish “innovation” when they judge grant applications. This is nonsense. Innovation is extremely difficult, if not impossible, to predict in advance. Any idea that survives the scrutiny of 20 people reviewing it (the typical NIH study section) has little chance of being truly disruptive or innovative. It must be mainstream, if not plain mediocre, to be accepted by everyone.

Fund excellent scientists rather than projects and give them freedom to pursue research avenues as they see fit. Some institutions such as Howard Hughes Medical Institute already use this model with success.

Communicate to the public and policy makers that science is a cumulative investment. Of 1,000 projects, 999 may fail, and we cannot know which one will succeed ahead of time. We must judge success on the total agenda, not a single experiment or result.

We Lack Good Data

There is relatively limited evidence about which scientific practices work best. We need more research on research (“meta-research”) to understand how to best perform, evaluate, review, disseminate and reward science.

We should invest in studying how to get the best science and how to choose and reward the best scientists. We should not trust opinion (including my own) without evidence.

The 7 biggest problems facing science, according to 270 scientists

by Julia Belluz, Brad Plumer, and Brian Resnick


Science is in big trouble. Or so we’re told.

In the past several years, many scientists have become afflicted with a serious case of doubt — doubt in the very institution of science.

As reporters covering medicine, psychology, climate change, and other areas of research, we wanted to understand this epidemic of doubt. So we sent scientists a survey asking this simple question: If you could change one thing about how science works today, what would it be and why?

We heard back from 270 scientists all over the world, including graduate students, senior professors, laboratory heads, and Fields Medalists. They told us that, in a variety of ways, their careers are being hijacked by perverse incentives. The result is bad science.

The scientific process, in its ideal form, is elegant: Ask a question, set up an objective test, and get an answer. Repeat. Science is rarely practiced to that ideal. But Copernicus believed in that ideal. So did the rocket scientists behind the moon landing.

But nowadays, our respondents told us, the process is riddled with conflict. Scientists say they’re forced to prioritize self-preservation over pursuing the best questions and uncovering meaningful truths.

“I feel torn between asking questions that I know will lead to statistical significance and asking questions that matter,” says Kathryn Bradshaw, a 27-year-old graduate student of counseling at the University of North Dakota.

Today, scientists’ success often isn’t measured by the quality of their questions or the rigor of their methods. It’s instead measured by how much grant money they win, the number of studies they publish, and how they spin their findings to appeal to the public.

“Is the point of research to make other professional academics happy, or is it to learn more about the world?” —Noah Grand, former lecturer in sociology, UCLA

Scientists often learn more from studies that fail. But failed studies can mean career death. So instead, they’re incentivized to generate positive results they can publish. And the phrase “publish or perish” hangs over nearly every decision. It’s a nagging whisper, like a Jedi’s path to the dark side.

“Over time the most successful people will be those who can best exploit the system,” Paul Smaldino, a cognitive science professor at University of California Merced, says.

To Smaldino, the selection pressures in science have favored less-than-ideal research: “As long as things like publication quantity, and publishing flashy results in fancy journals are incentivized, and people who can do that are rewarded … they’ll be successful, and pass on their successful methods to others.”

Many scientists have had enough. They want to break this cycle of perverse incentives and rewards. They are going through a period of introspection, hopeful that the end result will yield stronger scientific institutions. In our survey and interviews, they offered a wide variety of ideas for improving the scientific process and bringing it closer to its ideal form.

Before we jump in, some caveats to keep in mind: Our survey was not a scientific poll. For one, the respondents disproportionately hailed from the biomedical and social sciences and English-speaking communities.

Many of the responses did, however, vividly illustrate the challenges and perverse incentives that scientists across fields face. And they are a valuable starting point for a deeper look at dysfunction in science today.

The place to begin is right where the perverse incentives first start to creep in: the money.

1. Academia has a huge money problem

To do most any kind of research, scientists need money: to run studies, to subsidize lab equipment, to pay their assistants and even their own salaries. Our respondents told us that getting — and sustaining — that funding is a perennial obstacle.

Their gripe isn’t just with the quantity, which, in many fields, is shrinking. It’s the way money is handed out that puts pressure on labs to publish a lot of papers, breeds conflicts of interest, and encourages scientists to overhype their work.

In the United States, academic researchers in the sciences generally cannot rely on university funding alone to pay for their salaries, assistants, and lab costs. Instead, they have to seek outside grants. “In many cases the expectations were and often still are that faculty should cover at least 75 percent of the salary on grants,” writes John Chatham, a professor of medicine studying cardiovascular disease at University of Alabama at Birmingham.

Grants also usually expire after three or so years, which pushes scientists away from long-term projects. Yet as John Pooley, a neurobiology postdoc at the University of Bristol, points out, the biggest discoveries usually take decades to uncover and are unlikely to occur under short-term funding schemes.

Outside grants are also in increasingly short supply. In the US, the largest source of funding is the federal government, and that pool of money has been plateauing for years, while young scientists enter the workforce at a faster rate than older scientists retire.

Take the National Institutes of Health, a major funding source. Its budget rose at a fast clip through the 1990s, stalled in the 2000s, and then dipped with sequestration budget cuts in 2013. All the while, rising costs for conducting science meant that each NIH dollar purchased less and less. Last year, Congress approved the biggest NIH spending hike in a decade. But it won’t erase the shortfall.

The consequences are striking: In 2000, more than 30 percent of NIH grant applications got approved. Today, it’s closer to 17 percent. “It’s because of what’s happened in the last 12 years that young scientists in particular are feeling such a squeeze,” NIH Director Francis Collins said at the Milken Global Conference in May.

Some of our respondents said that this vicious competition for funds can influence their work. Funding “affects what we study, what we publish, the risks we (frequently don’t) take,” explains Gary Bennett, a neuroscientist at Duke University. It “nudges us to emphasize safe, predictable (read: fundable) science.”

Truly novel research takes longer to produce, and it doesn’t always pay off. A National Bureau of Economic Research working paper found that, on the whole, truly unconventional papers tend to be less consistently cited in the literature. So scientists and funders increasingly shy away from them, preferring short-turnaround, safer papers. But everyone suffers from that: the NBER report found that novel papers also occasionally lead to big hits that inspire high-impact follow-up studies.

“I think because you have to publish to keep your job and keep funding agencies happy, there are a lot of (mediocre) scientific papers out there ... with not much new science presented,” writes Kaitlyn Suski, a chemistry and atmospheric science postdoc at Colorado State University.

Another worry: When independent, government, or university funding sources dry up, scientists may feel compelled to turn to industry or interest groups eager to generate studies to support their agendas. “With funding from NIH, USDA, and foundations so limited ... researchers feel obligated — or willingly seek — food industry support. The frequent result? Conflicts of interest.” —Marion Nestle, food politics professor, New York University

Already, much of nutrition science, for instance, is funded by the food industry — an inherent conflict of interest. And the vast majority of drug clinical trials are funded by drugmakers. Studies have found that private industry–funded research tends to yield conclusions that are more favorable to the sponsors.

Finally, all of this grant writing is a huge time suck, taking resources away from the actual scientific work. Tyler Josephson, an engineering graduate student at the University of Delaware, writes that many professors he knows spend 50 percent of their time writing grant proposals. “Imagine,” he asks, “what they could do with more time to devote to teaching and research?”

It’s easy to see how these problems in funding kick off a vicious cycle. To be more competitive for grants, scientists have to have published work. To have published work, they need positive (i.e., statistically significant) results. That puts pressure on scientists to pick “safe” topics that will yield a publishable conclusion — or, worse, may bias their research toward significant results.

“When funding and pay structures are stacked against academic scientists,” writes Alison Bernstein, a neuroscience postdoc at Emory University, “these problems are all exacerbated.”

Fixes for science’s funding woes

Right now there are arguably too many researchers chasing too few grants. Or, as a 2014 piece in the Proceedings of the National Academy of Sciences put it: “The current system is in perpetual disequilibrium, because it will inevitably generate an ever-increasing supply of scientists vying for a finite set of research resources and employment opportunities.”

“As it stands, too much of the research funding is going to too few of the researchers,” writes Gordon Pennycook, a PhD candidate in cognitive psychology at the University of Waterloo. “This creates a culture that rewards fast, sexy (and probably wrong) results.”

One straightforward way to ameliorate these problems would be for governments to simply increase the amount of money available for science. (Or, more controversially, decrease the number of PhDs, but we’ll get to that later.) If Congress boosted funding for the NIH and National Science Foundation, that would take some of the competitive pressure off researchers.

But that only goes so far. Funding will always be finite, and researchers will never get blank checks to fund the risky science projects of their dreams. So other reforms will also prove necessary.

One suggestion: Bring more stability and predictability into the funding process. “The NIH and NSF budgets are subject to changing congressional whims that make it impossible for agencies (and researchers) to make long term plans and commitments,” M. Paul Murphy, a neurobiology professor at the University of Kentucky, writes. “The obvious solution is to simply make [scientific funding] a stable program, with an annual rate of increase tied in some manner to inflation.”

“Bitter competition leads to group leaders working desperately to get any money just to avoid closing their labs, submitting more proposals, overwhelming the grant system further. It’s all kinds of vicious circles on top of each other.” —Maximilian Press, graduate student in genome science, University of Washington

Another idea would be to change how grants are awarded: Foundations and agencies could fund specific people and labs for a period of time rather than individual project proposals. (The Howard Hughes Medical Institute already does this.) A system like this would give scientists greater freedom to take risks with their work.

Alternatively, researchers in the journal mBio recently called for a lottery-style system. Proposals would be measured on their merits, but then a computer would randomly choose which get funded.

“Although we recognize that some scientists will cringe at the thought of allocating funds by lottery,” the authors of the mBio piece write, “the available evidence suggests that the system is already in essence a lottery without the benefits of being random.” Pure randomness would at least reduce some of the perverse incentives at play in jockeying for money.
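To make the mechanics concrete, here is a minimal sketch of how a merit-threshold-plus-lottery allocation could work. This is an illustration only, not the mBio authors’ implementation; the scores, threshold, slot count, and proposal IDs are all invented.

```python
import random

# Sketch of a lottery-style funding model: proposals are first screened on
# merit by reviewers, then the funded subset is drawn at random from the
# qualifying pool. All numbers here are invented for illustration.

def lottery_fund(proposals, merit_threshold=7.0, budget_slots=2, seed=42):
    """Keep proposals that pass review, then randomly pick which get funded."""
    qualified = [p for p in proposals if p["merit"] >= merit_threshold]
    rng = random.Random(seed)  # fixed seed only so the sketch is repeatable
    return rng.sample(qualified, min(budget_slots, len(qualified)))

proposals = [
    {"id": "P-001", "merit": 8.2},
    {"id": "P-002", "merit": 6.1},  # screened out: below the merit bar
    {"id": "P-003", "merit": 7.9},
    {"id": "P-004", "merit": 7.4},
    {"id": "P-005", "merit": 9.0},
]

for winner in lottery_fund(proposals):
    print(winner["id"], "funded")
```

The design point is that peer review still filters out weak proposals, but chance, rather than fine-grained (and noisy) ranking, decides among the fundable ones.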

There are also some ideas out there to minimize conflicts of interest from industry funding. Recently, in PLOS Medicine, Stanford epidemiologist John Ioannidis suggested that pharmaceutical companies ought to pool the money they use to fund drug research, to be allocated to scientists who then have no exchange with industry during study design and execution. This way, scientists could still get funding for work crucial for drug approvals — but without the pressures that can skew results.

These solutions are by no means complete, and they may not make sense for every scientific discipline. The daily incentives facing biomedical scientists to bring new drugs to market are different from the incentives facing geologists trying to map out new rock layers. But based on our survey, funding appears to be at the root of many of the problems facing scientists, and it’s one that deserves more careful discussion.

2. Too many studies are poorly designed. Blame bad incentives.

Scientists are ultimately judged by the research they publish. And the pressure to publish pushes scientists to come up with splashy results, of the sort that get them into prestigious journals. “Exciting, novel results are more publishable than other kinds,” says Brian Nosek, who co-founded the Center for Open Science at the University of Virginia.

The problem here is that truly groundbreaking findings simply don’t occur very often, which means scientists face pressure to game their studies so they turn out to be a little more “revolutionary.” (Caveat: Many of the respondents who focused on this particular issue hailed from the biomedical and social sciences.)

Some of this bias can creep into decisions that are made early on: choosing whether or not to randomize participants, including a control group for comparison, or controlling for certain confounding factors but not others. (Read more on study design particulars here.)

Many of our survey respondents noted that perverse incentives can also push scientists to cut corners in how they analyze their data.

“I have incredible amounts of stress that maybe once I finish analyzing the data, it will not look significant enough for me to defend,” writes Jess Kautz, a PhD student at the University of Arizona. “And if I get back mediocre results, there’s going to be incredible pressure to present it as a good result so they can get me out the door. At this moment, with all this in my mind, it is making me wonder whether I could give an intellectually honest assessment of my own work.”

“Novel information trumps stronger evidence, which sets the parameters for working scientists.” —Jon-Patrick Allem, postdoctoral social scientist, USC Keck School of Medicine

Increasingly, meta-researchers (who conduct research on research) are realizing that scientists often do find little ways to hype up their own results — and they’re not always doing it consciously. Among the most famous examples is a technique called “p-hacking,” in which researchers test their data against many hypotheses and only report those that have statistically significant results.
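A small simulation shows why this works so well for the p-hacker. In the sketch below, every comparison is pure noise — there is no real effect anywhere — yet testing against many outcome measures nearly guarantees that something crosses the significance threshold. The sample size, outcome count, and threshold are arbitrary choices for this illustration.

```python
import math
import random

# Toy p-hacking demonstration: both groups are drawn from the same
# distribution, so every "significant" result below is a false positive.
rng = random.Random(1)
N, OUTCOMES, ALPHA = 100, 100, 0.05

def p_value(a, b):
    """Two-sided z-test on the difference in means (adequate at n = 100)."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    z = (mean_a - mean_b) / math.sqrt(var_a / n + var_b / n)
    # Two-sided tail probability under the standard normal distribution.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

hits = []
for outcome in range(OUTCOMES):
    treatment = [rng.gauss(0, 1) for _ in range(N)]  # no true effect
    control = [rng.gauss(0, 1) for _ in range(N)]
    p = p_value(treatment, control)
    if p < ALPHA:
        hits.append((outcome, round(p, 4)))

print(f"{len(hits)} of {OUTCOMES} null comparisons came out 'significant':", hits)
```

With 100 null comparisons at a 0.05 threshold, roughly five “significant” findings are expected by chance alone. Reporting only those hits, and not the other 95 tests, is exactly the practice described above.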

In a recent study, which tracked the misuse of p-values in biomedical journals, meta-researchers found “an epidemic” of statistical significance: 96 percent of the papers that included a p-value in their abstracts boasted statistically significant results.

That seems awfully suspicious. It suggests the biomedical community has been chasing statistical significance, potentially giving dubious results the appearance of validity through techniques like p-hacking — or simply suppressing important results that don’t look significant enough. Fewer studies share effect sizes (which arguably gives a better indication of how meaningful a result might be) or discuss measures of uncertainty.

“The current system has done too much to reward results,” says Joseph Hilgard, a postdoctoral research fellow at the Annenberg Public Policy Center. “This causes a conflict of interest: The scientist is in charge of evaluating the hypothesis, but the scientist also desperately wants the hypothesis to be true.”

The consequences are staggering. An estimated $200 billion — or the equivalent of 85 percent of global spending on research — is routinely wasted on poorly designed and redundant studies, according to meta-researchers who have analyzed inefficiencies in research. We know that as much as 30 percent of the most influential original medical research papers later turn out to be wrong or exaggerated.

Fixes for poor study design

Our respondents suggested that the two key ways to encourage stronger study design — and discourage positive results chasing — would involve rethinking the rewards system and building more transparency into the research process.

“I would make rewards based on the rigor of the research methods, rather than the outcome of the research,” writes Simine Vazire, a journal editor and a social psychology professor at UC Davis. “Grants, publications, jobs, awards, and even media coverage should be based more on how good the study design and methods were, rather than whether the result was significant or surprising.”

Likewise, Cambridge mathematician Tim Gowers argues that researchers should get recognition for advancing science broadly through informal idea sharing — rather than only getting credit for what they publish.

“We’ve gotten used to working away in private and then producing a sort of polished document in the form of a journal article,” Gowers said. “This tends to hide a lot of the thought process that went into making the discoveries. I’d like attitudes to change so people focus less on the race to be first to prove a particular theorem, or in science to make a particular discovery, and more on other ways of contributing to the furthering of the subject.”

When it comes to published results, meanwhile, many of our respondents wanted to see more journals put a greater emphasis on rigorous methods and processes rather than splashy results.

“Science is a human activity and is therefore prone to the same biases that infect almost every sphere of human decision-making.” —Jay Van Bavel, psychology professor, New York University

“I think the one thing that would have the biggest impact is removing publication bias: judging papers by the quality of questions, quality of method, and soundness of analyses, but not on the results themselves,” writes Michael Inzlicht, a University of Toronto psychology and neuroscience professor.

Some journals are already embracing this sort of research. PLOS One, for example, makes a point of accepting negative studies (in which a scientist conducts a careful experiment and finds nothing) for publication, as does the aptly named Journal of Negative Results in Biomedicine.

More transparency would also help, writes Daniel Simons, a professor of psychology at the University of Illinois. Here’s one example: ClinicalTrials.gov, a site run by the NIH, allows researchers to register their study design and methods ahead of time and then publicly record their progress. That makes it more difficult for scientists to hide experiments that didn’t produce the results they wanted. (The site now holds information for more than 180,000 studies in 180 countries.)

Similarly, the AllTrials campaign is pushing for every clinical trial (past, present, and future) around the world to be registered, with the full methods and results reported. Some drug companies and universities have created portals that allow researchers to access raw data from their trials.

The key is for this sort of transparency to become the norm rather than a laudable outlier.

3. Replicating results is crucial. But scientists rarely do it.

Replication is another foundational concept in science. Researchers take an older study that they want to test and then try to reproduce it to see if the findings hold up.

Testing, validating, retesting — it’s all part of a slow and grinding process to arrive at some semblance of scientific truth. But this doesn’t happen as often as it should, our respondents said. Scientists face few incentives to engage in the slog of replication. And even when they attempt to replicate a study, they often find they can’t do so. Increasingly it’s being called a “crisis of irreproducibility.”

The stats bear this out: A 2015 study looked at 83 highly cited studies that claimed to feature effective psychiatric treatments. Only 16 had ever been successfully replicated. Another 16 were contradicted by follow-up attempts, and 11 were found to have substantially smaller effects the second time around. Meanwhile, nearly half of the studies (40) had never been subject to replication at all.

More recently, a landmark study published in the journal Science demonstrated that only a fraction of recent findings in top psychology journals could be replicated. This is happening in other fields too, says Ivan Oransky, one of the founders of the blog Retraction Watch, which tracks scientific retractions.

As for the underlying causes, our survey respondents pointed to a couple of problems. First, scientists have very few incentives to even try replication. Jon-Patrick Allem, a social scientist at the Keck School of Medicine of USC, noted that funding agencies prefer to support projects that find new information instead of confirming old results.

Journals are also reluctant to publish replication studies unless “they contradict earlier findings or conclusions,” Allem writes. The result is to discourage scientists from checking each other’s work. “Novel information trumps stronger evidence, which sets the parameters for working scientists.”

The second problem is that many studies can be difficult to replicate. Sometimes their methods are too opaque. Sometimes the original studies had too few participants to produce a replicable answer. And sometimes, as we saw in the previous section, the study is simply poorly designed or outright wrong.

Again, this goes back to incentives: When researchers have to publish frequently and chase positive results, there’s less time to conduct high-quality studies with well-articulated methods.

Fixes for underreplication

Scientists need more carrots to entice them to pursue replication in the first place. As it stands, researchers are encouraged to publish new and positive results and to allow negative results to linger in their laptops or file drawers.

This has plagued science with a problem called “publication bias” — not all studies that are conducted actually get published in journals, and the ones that do tend to have positive and dramatic conclusions.

If institutions started to award tenure or make hires based on the quality of a researcher’s body of work, instead of its quantity, this might encourage more replication and discourage positive results chasing.

“The key that needs to change is performance review,” writes Christopher Wynder, a former assistant professor at McMaster University. “It affects reproducibility because there is little value in confirming another lab’s results and trying to publish the findings.”

“Replication studies should be incentivized somehow, and journals should be incentivized to publish ‘negative’ studies. All results matter, not just the flashy, paradigm-shifting results.” —Stephanie Thurmond, biology graduate student, University of California Riverside

The next step would be to make replication of studies easier. This could include more robust sharing of methods in published research papers. “It would be great to have stronger norms about being more detailed with the methods,” says University of Virginia’s Brian Nosek.

He also suggested more regularly adding supplements at the end of papers that get into the procedural nitty-gritty, to help anyone wanting to repeat an experiment. “If I can rapidly get up to speed, I have a much better chance of approximating the results,” he said.

Nosek has detailed other potential fixes that might help with replication — all part of his work at the Center for Open Science.

A greater degree of transparency and data sharing would enable replications, said Stanford’s John Ioannidis. Too often, anyone trying to replicate a study must chase down the original investigators for details about how the experiment was conducted.

“It is better to do this in an organized fashion with buy-in from all leading investigators in a scientific discipline,” he explained, “rather than have to try to find the investigator in each case and ask him or her in detective-work fashion about details, data, and methods that are otherwise unavailable.”

Researchers could also make use of new tools, such as open source software that tracks every version of a data set, so that they can share their data more easily and have transparency built into their workflow. One sketch of the idea appears below.
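As a rough illustration of what “tracking every version of a data set” means in practice, here is a minimal content-addressed versioning sketch. It is not the API of any particular tool — real projects such as DVC do this far more robustly — and the directory layout and helper name are invented.

```python
import hashlib
import json
import time
from pathlib import Path

# Minimal content-addressed versioning for a data file: each snapshot is
# stored under the SHA-256 hash of its bytes, and a small JSON log records
# the history. Layout and function name are invented for this sketch.

def commit_dataset(path, store=Path("data_versions")):
    """Snapshot a file under its content hash and append an entry to the log."""
    store.mkdir(exist_ok=True)
    data = Path(path).read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    (store / digest).write_bytes(data)  # identical data maps to the same version
    log_file = store / "log.json"
    log = json.loads(log_file.read_text()) if log_file.exists() else []
    log.append({"file": str(path), "sha256": digest, "time": time.ctime()})
    log_file.write_text(json.dumps(log, indent=2))
    return digest

# Usage: call after every change; each call yields a citable version ID.
Path("results.csv").write_text("subject,score\n1,0.93\n")
print(commit_dataset("results.csv"))
```

Because the version ID is derived from the data itself, anyone attempting a replication can verify they are analyzing exactly the bytes the original authors committed.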

Some of our respondents suggested that scientists engage in replication prior to publication. “Before you put an exploratory idea out in the literature and have people take the time to read it, you owe it to the field to try to replicate your own findings,” says John Sakaluk, a social psychologist at the University of Victoria.

For example, he has argued, psychologists could conduct small experiments with a handful of participants to form ideas and generate hypotheses. But they would then need to conduct bigger experiments, with more participants, to replicate and confirm those hypotheses before releasing them into the world. “In doing so,” Sakaluk says, “the rest of us can have more confidence that this is something we might want to [incorporate] into our own research.”

4. Peer review is broken

Peer review is meant to weed out junk science before it reaches publication. Yet over and over again in our survey, respondents told us this process fails. It was one of the parts of the scientific machinery to elicit the most rage among the researchers we heard from.

Normally, peer review works like this: A researcher submits an article for publication in a journal. If the journal accepts the article for review, it’s sent off to peers in the same field for constructive criticism and eventual publication — or rejection. (The level of anonymity varies; some journals have double-blind reviews, while others have moved to triple-blind review, where the authors, editors, and reviewers don’t know who one another are.)

It sounds like a reasonable system. But numerous studies and systematic reviews have shown that peer review doesn’t reliably prevent poor-quality science from being published.

“I think peer review is, like democracy, bad, but better than anything else.” —Timothy Bates, psychology professor, University of Edinburgh

The process frequently fails to detect fraud or other problems with manuscripts, which isn’t all that surprising when you consider researchers aren’t paid or otherwise rewarded for the time they spend reviewing manuscripts. They do it out of a sense of duty — to contribute to their area of research and help advance science.

But this means it’s not always easy to find the best people to peer-review manuscripts in their field, that harried researchers delay doing the work (leading to publication delays of up to two years), and that when they finally do sit down to peer-review an article they might be rushed and miss errors in studies.

“The issue is that most referees simply don’t review papers carefully enough, which results in the publishing of incorrect papers, papers with gaps, and simply unreadable papers,” says Joel Fish, an assistant professor of mathematics at the University of Massachusetts Boston. “This ends up being a large problem for younger researchers to enter the field, since that means they have to ask around to figure out which papers are solid and which are not.”

“Science is fluid; publishing isn’t. It takes forever for research to make it to print, there is little benefit to try [to] replicate studies or publish insignificant results, and it is expensive to access the research.” —Amanda Caskenette, aquatic science biologist, Fisheries and Oceans Canada

That’s not to mention the problem of peer review bullying. Since the default in the process is that editors and peer reviewers know who the authors are (but authors don’t know who the reviewers are), biases against researchers or institutions can creep in, opening the opportunity for rude, rushed, and otherwise unhelpful comments. (Just check out the popular #SixWordPeerReview hashtag on Twitter.)

These issues were not lost on our survey respondents, who said peer review amounts to a broken system, which punishes scientists and diminishes the quality of publications. They want to not only overhaul the peer review process but also change how it’s conceptualized.

Fixes for peer review

On the question of editorial bias and transparency, our respondents were surprisingly divided. Several suggested that all journals should move toward double-blinded peer review, whereby reviewers can’t see the names or affiliations of the person they’re reviewing and authors don’t know who reviewed them. The main goal here was to reduce bias.

“We know that scientists make biased decisions based on unconscious stereotyping,” writes Pacific Northwest National Lab postdoc Timothy Duignan. “So rather than judging a paper by the gender, ethnicity, country, or institutional status of an author — which I believe happens a lot at the moment — it should be judged by its quality independent of those things.”

Yet others thought that more transparency, rather than less, was the answer: “While we correctly advocate for the highest level of transparency in publishing, we still have most reviews that are blinded, and I cannot know who is reviewing me,” writes Lamberto Manzoli, a professor of epidemiology and public health at the University of Chieti, in Italy. “Too many times we see very low quality reviews, and we cannot understand whether it is a problem of scarce knowledge or conflict of interest.”

“We need to recognize academic journals for what they are: shop windows for incomplete descriptions of research, that make semi-arbitrary editorial [judgments] about what to publish and often have harmful policies that restrict access to important post-publication critical appraisal of published research.”—Ben Goldacre, epidemiology researcher, physician, and author

Perhaps there is a middle ground. For example, eLife, a new open access journal that is rapidly rising in impact factor, runs a collaborative peer review process. Editors and peer reviewers work together on each submission to create a consolidated list of comments about a paper. The author can then reply to what the group saw as the most important issues, rather than facing the biases and whims of individual reviewers. (Oddly, this process is faster — eLife takes less time to accept papers than Nature or Cell.)

Still, those are mostly incremental fixes. Other respondents argued that we might need to radically rethink the entire process of peer review from the ground up.

“The current peer review process embraces a concept that a paper is final,” says Nosek. “The review process is [a form of] certification, and that a paper is done.” But science doesn’t work that way. Science is an evolving process, and truth is provisional. So, Nosek said, science must “move away from the embrace of definitiveness of publication.”

Some respondents wanted to think of peer review as more of a continuous process, in which studies are repeatedly and transparently updated and republished as new feedback changes them — much like Wikipedia entries. This would require some sort of expert crowdsourcing.

“The scientific publishing field — particularly in the biological sciences — acts like there is no internet,” says Lakshmi Jayashankar, a senior scientific reviewer with the federal government. “The paper peer review takes forever, and this hurts the scientists who are trying to put their results quickly into the public domain.”

One possible model already exists in mathematics and physics, where there is a long tradition of “pre-printing” articles. Studies are posted on an open website called arXiv.org, often before being peer-reviewed and published in journals. There, the articles are sorted and commented on by a community of moderators, providing another chance to filter problems before they make it to peer review.

“Posting preprints would allow scientific crowdsourcing to increase the number of errors that are caught, since traditional peer-reviewers cannot be expected to be experts in every sub-discipline,” writes Scott Hartman, a paleobiology PhD student at the University of Wisconsin.

And even after an article is published, researchers think the peer review process shouldn’t stop. They want to see more “post-publication” peer review on the web, so that academics can critique and comment on articles after they’ve been published. Sites like PubPeer and F1000Research have already popped up to facilitate that kind of post-publication feedback.

“We do this a couple of times a year at conferences,” writes Becky Clarkson, a geriatric medicine researcher at the University of Pittsburgh. “We could do this every day on the internet.”

The bottom line is that traditional peer review has never worked as well as we imagine it to — and it’s ripe for serious disruption.

5. Too much science is locked behind paywalls

After a study has been funded, conducted, and peer-reviewed, there’s still the question of getting it out so that others can read and understand its results.

Over and over, our respondents expressed dissatisfaction with how scientific research gets disseminated. Too much is locked away in paywalled journals, difficult and costly to access, they said. Some respondents also criticized the publication process itself for being too slow, bogging down the pace of research.

On the access question, a number of scientists argued that academic research should be free for all to read. They chafed against the current model, in which for-profit publishers put journals behind pricey paywalls.

A single article in Science will set you back $30; a year-long subscription to Cell will cost $279. Elsevier publishes 2,000 journals that can cost up to $10,000 or $20,000 a year for a subscription.

“My problem is one that many scientists have: It’s overly simplistic to count up someone’s papers as a measure of their worth.” —Lex Kravitz, investigator, neuroscience of obesity, National Institutes of Health

Many US institutions pay those journal fees for their employees, but not all scientists (or other curious readers) are so lucky. In a recent issue of Science, journalist John Bohannon described the plight of a PhD candidate at a top university in Iran. He calculated that the student would have to spend $1,000 a week just to read the papers he needed.

As Michael Eisen, a biologist at UC Berkeley and co-founder of the Public Library of Science (or PLOS), put it, scientific journals are trying to hold on to the profits of the print era in the age of the internet. Subscription prices have continued to climb, as a handful of big publishers (like Elsevier) have bought up more and more journals, creating mini knowledge fiefdoms.

“Large, publicly owned publishing companies make huge profits off of scientists by publishing our science and then selling it back to the university libraries at a massive profit (which primarily benefits stockholders),” Corina Logan, an animal behavior researcher at the University of Cambridge, noted. “It is not in the best interest of the society, the scientists, the public, or the research.” (In 2014, Elsevier reported a profit margin of nearly 40 percent and revenues close to $3 billion.)

“It seems wrong to me that taxpayers pay for research at government labs and universities but do not usually have access to the results of these studies, since they are behind paywalls of peer-reviewed journals,” added Melinda Simon, a postdoc microfluidics researcher at Lawrence Livermore National Lab.

Fixes for closed science

Many of our respondents urged their peers to publish in open access journals (along the lines of PeerJ or PLOS Biology). But there’s an inherent tension here. Career advancement can often depend on publishing in the most prestigious journals, like Science or Nature, which still have paywalls.

There’s also the question of how best to finance a wholesale transition to open access. After all, journals can never be entirely free. Someone has to pay for the editorial staff, maintaining the website, and so on. Right now, open access journals typically charge fees to those submitting papers, putting the burden on scientists who are already struggling for funding.

One radical step would be to abolish for-profit publishers altogether and move toward a nonprofit model. “For journals I could imagine that scientific associations run those themselves,” suggested Johannes Breuer, a postdoctoral researcher in media psychology at the University of Cologne. “If they go for online only, the costs for web hosting, copy-editing, and advertising (if needed) can be easily paid out of membership fees.”

As a model, Cambridge’s Tim Gowers has launched an online mathematics journal called Discrete Analysis. The nonprofit venture is owned and published by a team of scholars; it has no publisher middlemen, and access will be completely free for all.

“I personally spend a lot of time writing scientific Wikipedia articles because I believe that advances the cause of science far more than my professional academic articles.” —Ted Sanders, magnetic materials PhD student, Stanford University

Until wholesale reform happens, however, many scientists are going a much simpler route: illegally pirating papers.

Bohannon reported that millions of researchers around the world now use Sci-Hub, a site set up by Alexandra Elbakyan, a Russia-based neuroscientist, that illegally hosts more than 50 million academic papers. “As a devout pirate,” Elbakyan told us, “I think that copyright should be abolished.”

One respondent had an even more radical suggestion: that we abolish the existing peer-reviewed journal system altogether and simply publish everything online as soon as it’s done.

“Research should be made available online immediately, and be judged by peers online rather than having to go through the whole formatting, submitting, reviewing, rewriting, reformatting, resubmitting, etc etc etc that can take years,” writes Bruno Dagnino, formerly of the Netherlands Institute for Neuroscience. “One format, one platform. Judge by the whole community, with no delays.”

A few scientists have been taking steps in this direction. Rachel Harding, a genetic researcher at the University of Toronto, has set up a website called Lab Scribbles, where she publishes her lab notes on the structure of huntingtin proteins in real time, posting data as well as summaries of her breakthroughs and failures. The idea is to help share information with other researchers working on similar issues, so that labs can avoid needless overlap and learn from each other’s mistakes.

Not everyone might agree with approaches this radical; critics worry that too much sharing might encourage scientific free riding. Still, the common theme in our survey was transparency. Science is currently too opaque, research too difficult to share. That needs to change.

6. Science is poorly communicated to the public

“If I could change one thing about science, I would change the way it is communicated to the public by scientists, by journalists, and by celebrities,” writes Clare Malone, a postdoctoral researcher in a cancer genetics lab at Brigham and Women’s Hospital.

She wasn’t alone. Quite a few respondents in our survey expressed frustration at how science gets relayed to the public. They were distressed by the fact that so many laypeople hold on to completely unscientific ideas or have a crude view of how science works.

They griped that misinformed celebrities like Gwyneth Paltrow have an outsize influence over public perceptions about health and nutrition. (As the University of Alberta’s Timothy Caulfield once told us, “It’s incredible how much she is wrong about.”)

They have a point. Science journalism is often full of exaggerated, conflicting, or outright misleading claims. If you ever want to see a perfect example of this, check out “Kill or Cure,” a site where Paul Battley meticulously documents all the times the Daily Mail reported that various items — from antacids to yogurt — either cause cancer, prevent cancer, or sometimes do both.

“Far too often, there are less than 10 people on this planet who can fully comprehend a single scientist’s research.” —Michael Burel, PhD student, stem cell biology, New York University School of Medicine

Sometimes bad stories are peddled by university press shops. In 2015, the University of Maryland issued a press release claiming that a single brand of chocolate milk could improve concussion recovery. It was an absurd case of science hype.

Indeed, one review in BMJ found that one-third of university press releases contained either exaggerated claims of causation (when the study itself only suggested correlation), unwarranted implications about animal studies for people, or unfounded health advice.

But not everyone blamed the media and publicists alone. Other respondents pointed out that scientists themselves often oversell their work, even if it’s preliminary, because funding is competitive and everyone wants to portray their work as big and important and game-changing.

“You have this toxic dynamic where journalists and scientists enable each other in a way that massively inflates the certainty and generality of how scientific findings are communicated and the promises that are made to the public,” writes Daniel Molden, an associate professor of psychology at Northwestern University. “When these findings prove to be less certain and the promises are not realized, this just further erodes the respect that scientists get and further fuels scientists’ desire for appreciation.”

Fixes for better science communication

Opinions differed on how to improve this sorry state of affairs — some pointed to the media, some to press offices, others to scientists themselves.

Plenty of our respondents wished that more science journalists would move away from hyping single studies. Instead, they said, reporters ought to put new research findings in context, and pay more attention to the rigor of a study’s methodology than to the splashiness of the end results.

“On a given subject, there are often dozens of studies that examine the issue,” writes Brian Stacy of the US Department of Agriculture. “It is very rare for a single study to conclusively resolve an important research question, but many times the results of a study are reported as if they do.”

“Being able to explain your work to a non-scientific audience is just as important as publishing in a peer-reviewed journal, in my opinion, but currently the incentive structure has no place for engaging the public.” —Crystal Steltenpohl, PhD student, community psychology, DePaul University

But it’s not just reporters who will need to shape up. The “toxic dynamic” of journalists, academic press offices, and scientists enabling one another to hype research can be tough to change, and many of our respondents pointed out that there were no easy fixes — though recognition was an important first step.

Some suggested the creation of credible referees that could rigorously distill the strengths and weaknesses of research. (Some variations of this are starting to pop up: The Genetic Expert News Service solicits outside experts to weigh in on big new studies in genetics and biotechnology.) Other respondents suggested that making research free to all might help tamp down media misrepresentations.

Still other respondents noted that scientists themselves should spend more time learning how to communicate with the public — a skill that tends to be under-rewarded in the current system.

“Being able to explain your work to a non-scientific audience is just as important as publishing in a peer-reviewed journal, in my opinion, but currently the incentive structure has no place for engaging the public,” writes Crystal Steltenpohl, a graduate assistant at DePaul University.

Reducing the perverse incentives around scientific research itself could also help reduce overhype. “If we reward research based on how noteworthy the results are, this will create pressure to exaggerate the results (through exploiting flexibility in data analysis, misrepresenting results, or outright fraud),” writes UC Davis’s Simine Vazire. “We should reward research based on how rigorous the methods and design are.”

Or perhaps we should focus on improving science literacy. Jeremy Johnson, a project coordinator at the Broad Institute, argued that bolstering science education could help ameliorate a lot of these problems. “Science literacy should be a top priority for our educational policy,” he said, “not an elective.”

7. Life as a young academic is incredibly stressful

When we asked researchers what they’d fix about science, many talked about the scientific process itself, about study design or peer review. These responses often came from tenured scientists who loved their jobs but wanted to make the broader scientific project even better.

But on the flip side, we heard from a number of researchers — many of them graduate students or postdocs — who were genuinely passionate about research but found the day-to-day experience of being a scientist grueling and unrewarding. Their comments deserve a section of their own.

Today, many tenured scientists and research labs depend on small armies of graduate students and postdoctoral researchers to perform their experiments and conduct data analysis.

These grad students and postdocs are often the primary authors on many studies. In a number of fields, such as the biomedical sciences, a postdoc position is a prerequisite before a researcher can get a faculty-level position at a university.

This entire system sits at the heart of modern-day science. (A new card game called Lab Wars pokes fun at these dynamics.)

But these low-level research jobs can be a grind. Postdocs typically work long hours and are relatively low-paid for their level of education — salaries are frequently pegged to stipends set by NIH National Research Service Award grants, which start at $43,692 and rise to $47,268 in year three.

Postdocs tend to be hired on for one to three years at a time, and in many institutions they are considered contractors, limiting their workplace protections. We heard repeatedly about extremely long hours and limited family leave benefits.

“End the PhD or drastically change it. There is a high level of depression among PhD students. Long hours, limited career prospects, and low wages contribute to this emotion.” —Don Gibson, PhD student in plant genetics, UC Davis

“Oftentimes this is problematic for individuals in their late 20s and early to mid-30s who have PhDs and who may be starting families while also balancing a demanding job that pays poorly,” wrote one postdoc, who asked for anonymity.

This lack of flexibility tends to disproportionately affect women — especially women planning to have families — which helps contribute to gender inequalities in research. (A 2012 paper found that female job applicants in academia are judged more harshly and are offered less money than males.) “There is very little support for female scientists and early-career scientists,” noted another postdoc.

“There is very little long-term financial security in today’s climate, very little assurance where the next paycheck will come from,” wrote William Kenkel, a postdoctoral researcher in neuroendocrinology at Indiana University. “Since receiving my PhD in 2012, I left Chicago and moved to Boston for a post-doc, then in 2015 I left Boston for a second post-doc in Indiana. In a year or two, I will move again for a faculty job, and that’s if I’m lucky. Imagine trying to build a life like that.”

This strain can also adversely affect the research that young scientists do. “Contracts are too short term,” noted another researcher. “It discourages rigorous research as it is difficult to obtain enough results for a paper (and hence progress) in two to three years. The constant stress drives otherwise talented and intelligent people out of science also.”

Because universities produce so many PhDs but have way fewer faculty jobs available, many of these postdoc researchers have limited career prospects. Some of them end up staying stuck in postdoc positions for five or 10 years or more.

“In the biomedical sciences,” wrote the first postdoc quoted above, “each available faculty position receives applications from hundreds or thousands of applicants, putting immense pressure on postdocs to publish frequently and in high impact journals to be competitive enough to attain those positions.”

Many young researchers pointed out that PhD programs do fairly little to train people for careers outside of academia. “Too many [PhD] students are graduating for a limited number of professor positions with minimal training for careers outside of academic research,” noted Don Gibson, a PhD candidate studying plant genetics at UC Davis.

Laura Weingartner, a graduate researcher in evolutionary ecology at Indiana University, agreed: “Few universities (specifically the faculty advisors) know how to train students for anything other than academia, which leaves many students hopeless when, inevitably, there are no jobs in academia for them.”

Add it up and it’s not surprising that we heard plenty of comments about anxiety and depression among both graduate students and postdocs. “There is a high level of depression among PhD students,” writes Gibson. “Long hours, limited career prospects, and low wages contribute to this emotion.”

A 2015 study at the University of California Berkeley found that 47 percent of PhD students surveyed could be considered depressed. The reasons for this are complex and can’t be solved overnight. Pursuing academic research is already an arduous, anxiety-ridden task that’s bound to take a toll on mental health.

But as Jennifer Walker explored recently at Quartz, many PhD students also feel isolated and unsupported, exacerbating those issues.

Fixes to keep young scientists in science

We heard plenty of concrete suggestions. Graduate schools could offer more generous family leave policies and child care for graduate students. They could also increase the number of female applicants they accept in order to balance out the gender disparity.

But some respondents also noted that workplace issues for grad students and postdocs were inseparable from some of the fundamental issues facing science that we discussed earlier. The fact that university faculty and research labs face immense pressure to publish — but have limited funding — makes it highly attractive to rely on low-paid postdocs.

“There is little incentive for universities to create jobs for their graduates or to cap the number of PhDs that are produced,” writes Weingartner. “Young researchers are highly trained but relatively inexpensive sources of labor for faculty.”

“There is substantial bias against women and ethnic minorities, and blind experiments have shown that removing names and institutional affiliations can radically change important decisions that shape the careers of scientists.” —Terry McGlynn, professor of biology, California State University Dominguez Hills

Some respondents also pointed to the mismatch between the number of PhDs produced each year and the number of academic jobs available.

A recent feature by Julie Gould in Nature explored a number of ideas for revamping the PhD system. One idea is to split the PhD into two programs: one for vocational careers and one for academic careers. The former would better train and equip graduates to find jobs outside academia.

This is hardly an exhaustive list. The core point underlying all these suggestions, however, was that universities and research labs need to do a better job of supporting the next generation of researchers. Indeed, that’s arguably just as important as addressing problems with the scientific process itself. Young scientists, after all, are by definition the future of science.

Weingartner concluded with a sentiment we saw all too frequently: “Many creative, hard-working, and/or underrepresented scientists are edged out of science because of these issues. Not every student or university will have all of these unfortunate experiences, but they’re pretty common. There are a lot of young, disillusioned scientists out there now who are expecting to leave research.”

Science needs to correct its greatest weaknesses

Science is not doomed.

For better or worse, it still works. Look no further than the novel vaccines to prevent Ebola, the discovery of gravitational waves, or new treatments for stubborn diseases. And it’s getting better in many ways. See the work of meta-researchers who study and evaluate research — a field that has gained prominence over the past 20 years.


But science is conducted by fallible humans, and it hasn’t been human-proofed to protect against all our foibles. The scientific revolution began just 500 years ago. Only over the past 100 has science become professionalized. There is still room to figure out how best to remove biases and align incentives.

To that end, here are some broad suggestions:

One: Science has to acknowledge and address its money problem. Science is enormously valuable and deserves ample funding. But the way incentives are set up can distort research.

Right now, small studies with bold results that can be quickly turned around and published in journals are disproportionately rewarded. By contrast, there are fewer incentives to conduct research that tackles important questions with robustly designed studies over long periods of time. Solving this won’t be easy, but it is at the root of many of the issues discussed above.

Two: Science needs to celebrate and reward failure. Accepting that we can learn more from dead ends in research and studies that failed would alleviate the “publish or perish” cycle. It would make scientists more confident in designing robust tests and not just convenient ones, in sharing their data and explaining their failed tests to peers, and in using those null results to form the basis of a career (instead of chasing those all-too-rare breakthroughs).

Three: Science has to be more transparent. Scientists need to publish their methods and findings more fully, and share their raw data in ways that are easily accessible and digestible for those who may want to reanalyze or replicate their findings.

There will always be waste and mediocre research, but as Stanford’s Ioannidis explains in a recent paper, a lack of transparency creates excess waste and diminishes the usefulness of too much research.

Again and again, we also heard from researchers, particularly in the social sciences, who felt that cognitive biases in their own work, influenced by pressures to publish and advance their careers, caused science to go off the rails. If more human-proofing and de-biasing were built into the process — through stronger peer review, cleaner and more consistent funding, and more transparency and data sharing — some of these biases could be mitigated.

These fixes will take time, grinding along incrementally — much like the scientific process itself. But the gains humans have made so far using even imperfect scientific methods would have been unimaginable 500 years ago. The gains from improving the process could prove just as staggering, if not more so.

Correction: An earlier version of this story misstated Noah Grand’s title. At the time of the survey he was a lecturer in sociology at UCLA, not a professor.

Editor: Eliza Barclay

Visuals: Javier Zarracina (charts), Annette Elizabeth Allen (illustrations)

Readers: Steven J. Hoffman, Konstantin Kakaes


The NIH Has the Opportunity to Address Research Funding Disparities

By Leah Pierson

The Biden administration plans to greatly increase funding for the National Institutes of Health (NIH) in 2022, presenting the agency with new opportunities to better align research funding with public health needs.

The NIH has long been criticized for disproportionately devoting its research dollars to the study of conditions that affect a small and advantaged portion of the global population.

For instance, three times as many people have sickle cell disease — which disproportionately affects Black people — as have cystic fibrosis — which disproportionately affects white people. Despite this, the NIH devotes comparable research funding to both diseases. These disparities are further compounded by differences in research funding from non-governmental organizations, with philanthropies spending seventy-five times more per patient on cystic fibrosis research than on sickle cell disease research.

Diseases that disproportionately affect men also receive more NIH funding than those that primarily affect women. This disparity can be seen in the lagging funding for research on gynecologic cancers. The NIH presently spends eighteen times as much on prostate cancer as on ovarian cancer per person-year of life lost, and although this difference is partly explained by prostate cancer being far more prevalent than ovarian cancer, the disparity persists even after prevalence is accounted for. Making matters worse, funding for research on gynecological cancers has fallen even as overall NIH funding has increased.
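These comparisons all rest on the same normalization: dividing research dollars by a measure of disease burden, such as patients affected or person-years of life lost. A minimal sketch of that arithmetic, using hypothetical placeholder figures rather than actual NIH budget lines:

```python
# Normalizing research funding by disease burden.
# All figures are hypothetical placeholders, not actual NIH data.

def funding_per_unit_burden(annual_funding_usd: float, burden: float) -> float:
    """Dollars of research funding per unit of burden
    (patients affected, or person-years of life lost)."""
    return annual_funding_usd / burden

# Two hypothetical diseases with equal budgets but unequal patient counts:
per_patient_a = funding_per_unit_burden(100e6, burden=90_000)  # ~$1,111 per patient
per_patient_b = funding_per_unit_burden(100e6, burden=30_000)  # ~$3,333 per patient

print(f"Disease A: ${per_patient_a:,.0f} per patient")
print(f"Disease B: ${per_patient_b:,.0f} per patient")
# Equal absolute budgets still conceal a threefold per-patient disparity.
```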

Disparities in what research is funded are further compounded by disparities in who gets funded. Black scientists are about half as likely to receive NIH funding as white scientists, and this discrepancy holds constant across academic ranks (e.g., between Black and white scientists who are full professors). The disparity is partly driven by topic choice, with grant applications from Black scientists focusing more frequently on “health disparities and patient-focused interventions,” topics that are less likely to be funded. Recent calls to address structural racism in research funding have led the NIH to commit $90 million to combating health disparities and researching the health effects of discrimination, although this would represent less than two percent of the Biden administration’s proposed NIH budget.

The disconnect between research funding and public health needs is also driven by the fact that the NIH funds relatively little social science research. Police violence, for instance, is a pressing public health problem: in 2019, more American men were killed by police violence than by Hodgkin lymphoma or testicular cancer. But unlike Hodgkin lymphoma and testicular cancer, which receive tens of millions of dollars of NIH research funding every year plus additional funding from non-governmental organizations and private companies, police violence receives little NIH-funded research. In 2021, only six NIH-funded projects mentioned “police violence,” “police shooting,” or “police force” in their title, abstract, or project terms, while 119 mentioned “Hodgkin lymphoma” and 24 mentioned “testicular cancer.”
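Counts like these can in principle be reproduced against the NIH RePORTER database. The sketch below assumes the v2 RePORTER search API; the endpoint, payload fields, and response layout are assumptions to verify against the current API documentation, not guaranteed details:

```python
# Counting NIH-funded projects whose title, abstract, or project terms
# mention a phrase, via the NIH RePORTER search API.
# Endpoint and payload shape are assumptions based on the v2 API docs;
# verify against https://api.reporter.nih.gov before relying on them.
import requests

def count_projects(phrase: str, fiscal_year: int) -> int:
    payload = {
        "criteria": {
            "fiscal_years": [fiscal_year],
            "advanced_text_search": {
                "operator": "and",
                "search_field": "projecttitle,abstracttext,terms",
                "search_text": phrase,
            },
        },
        "limit": 1,  # only the total count from the response metadata is needed
    }
    resp = requests.post(
        "https://api.reporter.nih.gov/v2/projects/search",
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["meta"]["total"]

for phrase in ("police violence", "Hodgkin lymphoma", "testicular cancer"):
    print(phrase, count_projects(phrase, fiscal_year=2021))
```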

While many view the NIH as an organization focused exclusively on basic science research, its mandate is much broader. Indeed, the NIH’s mission is “to seek fundamental knowledge about the nature and behavior of living systems and the application of that knowledge to enhance health, lengthen life, and reduce illness and disability.” Epidemiologists, health economists, and other social science researchers studying how societies promote or undermine health should thus receive NIH funding that is more proportionate to the magnitude of the health problems they research.

Research funding disparities have multiple causes and warrant different solutions, from prioritizing work conducted by scientists from underrepresented backgrounds, to ensuring that there is gender parity in the size of NIH grants awarded to first-time Principal Investigators. To address the broader problem of scientific priorities not reflecting the size of health problems, the NIH should instruct grant reviewers to consider how many people are affected by a health problem, how serious that health problem is for each person affected by it, and whether a disease primarily affects marginalized populations. In addition, the NIH should commit to funding more research on public health problems — like police violence — that cause substantial harm but receive relatively little attention from the health research enterprise.
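To make the reviewer instruction concrete, imagine a burden-weighted score sitting alongside scientific merit. The rubric below is a minimal sketch: the factors mirror the three questions in the text, while the weights and inputs are invented purely for illustration.

```python
# A hypothetical burden-weighted priority score for grant review.
# Factors mirror the text: how many people are affected, how serious the
# problem is per person, and whether it primarily affects marginalized
# populations. The weights and inputs are illustrative only.
from dataclasses import dataclass

@dataclass
class HealthProblem:
    name: str
    people_affected: float   # e.g., annual prevalence
    severity: float          # 0-1, per-person seriousness
    affects_marginalized: bool

def priority_score(p: HealthProblem, equity_weight: float = 1.5) -> float:
    score = p.people_affected * p.severity
    if p.affects_marginalized:
        score *= equity_weight   # assumed up-weighting for equity
    return score

# Hypothetical inputs, not epidemiological estimates:
problems = [
    HealthProblem("condition X", people_affected=100_000, severity=0.9,
                  affects_marginalized=True),
    HealthProblem("condition Y", people_affected=300_000, severity=0.3,
                  affects_marginalized=False),
]
for p in sorted(problems, key=priority_score, reverse=True):
    print(f"{p.name}: {priority_score(p):,.0f}")
```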

As the NIH prepares for a massive influx of funding, it must follow through on its commitment to address health research funding disparities.


Indian Dermatology Online Journal, 12(1), Jan–Feb 2021

Research Funding—Why, When, and How?

Shekhar Neema

Department of Dermatology, Armed Forces Medical College, Pune, Maharashtra, India

Laxmisha Chandrashekar

Department of Dermatology, Jawaharlal Institute of Postgraduate Medical Education and Research (JIPMER), Dhanvantari Nagar, Puducherry, India

Research funding is defined as a grant obtained for conducting scientific research, generally through a competitive process. Applying for grants and securing research funding is an essential part of conducting research. In this article, we discuss why one should apply for research grants, what the avenues for obtaining them are, and how to go about it in a step-wise manner. We also discuss how to write a research grant proposal and what to do after funding is received.

Introduction

The two most important components of any research project are the idea and its execution. Successful execution depends not only on the effort of the researcher but also on the infrastructure available to conduct the research. Conducting a research project entails expenses on manpower and materials, and funding is essential to meet these requirements. Many research projects can be conducted without external funding if the researcher or institution already has the necessary infrastructure. It is also unethical to order tests for research purposes when they do not benefit the patient directly or are not part of the standard of care; research funding is required to meet such expenses and to ensure the smooth execution of research projects. Securing funding for a research project is a topic rarely discussed during postgraduate training or the subsequent academic career, especially in medical science, and many good ideas never materialize into good research projects for lack of funding.[ 1 ] Grant writing is an art that can be learnt only by practising, and here we highlight the major hurdles faced in securing research funding.

Why Do We Need the Funds for Research?

It is possible to publish papers without external funding: observational and small experimental studies can be conducted within a department's own resources and can result in meaningful papers such as case reports, case series, observational studies, or small experimental studies. However, for multi-centric studies, randomized controlled trials, or experimental and observational studies with large sample sizes, it may not be possible to work within the resources of the department or institution, and a source of external funding is required.

Basic Requirements for Research Funding

The most important requirement is an interest in the particular subject, thorough knowledge of it, and identification of a gap in existing knowledge. The second requirement is knowing whether your research can be completed with internal resources or requires external funding. The next steps are finding the funding agencies that support work in your subject, preparing the grant proposal, and submitting it on time.

What Are the Sources of Research Funding? – Details of Funding Agencies

Many local, national, and international funding bodies provide grants for research. However, funding agencies differ in the types of research they prioritize, and this needs to be kept in mind while planning a grant proposal. They also differ in their timelines for proposal submission and in their funding limits. Details of funding bodies are tabulated in Table 1; these details are indicative, not comprehensive.

Table 1. Details of funding agencies

Funding agency | Timeline | Key thrust areas
Institute | Variable, depends on institute | Not defined; mostly student research
University Grants Commission (UGC) | Any time of year; evaluation in January and July | Retired or working teachers in colleges and universities under sections 2(f) and 12(b) of the UGC Act 1956 (list available on the UGC website). Major research project: up to 12 lacs; minor research project: 1 lac
Indian Association of Dermatologists, Venereologists and Leprologists (IADVL) | March–April | Basic science, clinical, laboratory-based, epidemiological, or quality-of-life studies; up to Rs. 500,000 per project. Applicants must be IADVL life members; one of the few grants open to private practitioners. IADVL also offers a Post Graduate thesis grant and the L’Oreal research grant
Indian Council of Medical Research (ICMR) | Oct–Nov | Basic science, communicable and non-communicable disease, nutrition
– ICMR short term studentship | – | Facilitates undergraduate research; funding is 25,000 per student
– ICMR ad-hoc extramural research | – | Up to 30 lacs per project
– ICMR task force research project | – | Multicentric projects
– ICMR financial support for thesis | Within 12 months of registration of MD | Antimicrobial resistance, tuberculosis, HIV/AIDS, malaria, diabetes, maternal and child health; total assistance of Rs. 50,000
Department of Science and Technology: core research grant (extramural research grant) | Apr–May; notification on serbonline.in | Life sciences
Department of Science and Technology: early career research award | Notification on serbonline.in | Life sciences; maximum funding of 50 lacs per proposal; upper age limit of 37 years
Department of Biotechnology | Notification on dbtindia.gov.in | Vaccine research, nutrition and public health, stem cells and regenerative medicine, infectious and chronic disease biology
Council of Scientific and Industrial Research (CSIR) | Any time of the year; evaluation twice a year | Projects in collaboration with CSIR institutes are given priority
Defence Research and Development Organisation (DRDO), Life Sciences Research Board | Any time of year; calls for proposals specify the key thrust areas | Projects of national/defence interest
Department of Health Research (DHR) grant-in-aid scheme | Any time of the year | Public health; translational research projects; cost-effectiveness analysis of health technologies
National Psoriasis Foundation (NPF) | Call for proposals on website | Psoriasis research grants, including the Psoriasis Prevention Initiative, Milestone to a Cure, Discovery, Translational, Early Career Research, and Bridge grants
National Institutes of Health (NIH) | Call for proposals online | Limited research grants open to researchers outside the USA
LEO Foundation | Call for proposals on website | Understanding the underlying medicinal, biological, chemical, or pharmacological mechanisms of dermatological diseases and their symptoms

Application for the Research Grant

Applying for a research grant is a time-consuming but rewarding task. It not only provides an opportunity to design a good study but also exposes the researcher to the administrative aspects of conducting research. For a publication, peer review happens after the paper is submitted; for a research grant, peer review happens at the proposal stage, which helps the researcher improve the study design even if the proposal is unsuccessful. Funds available for research are generally limited, so grant proposals are reviewed on merit by a peer group before approval. It is important to be on the lookout for calls for proposals and their deadlines. Ideally, the draft proposal should be ready well before the call, and every step should be meticulously planned to avoid a rush just before the deadline. The steps of applying for a research grant are listed below; every step is essential, but they need not occur in this exact order.

  • Idea: The most important aspect of research is the idea. Once you have an idea, refine it by going through the literature to find what has already been done in the subject and where the gaps in research lie. The FINER framework should be used while framing the research question: FINER stands for feasible, interesting, novel, ethical, and relevant
  • Designing the study: A well-designed study is the first step of a well-executed research project. A flawed study design is difficult to correct once the project is advanced, so the study should be planned well and discussed with co-workers; the help of an expert epidemiologist can be sought at this stage
  • Collaboration: The facilities available within a single department are often limited, and inter-departmental and inter-institutional collaboration is the key to good research. Having a subject expert on board improves the quality of the project and makes acceptance of the grant easier. The availability of facilities in the department and institution should be ascertained before planning the project
  • Scientific and ethical committee approval: Most research grants require the project to be approved by the institutional ethics committee (IEC) before submission. IEC meetings usually happen once a quarter, so pre-planning is essential. Some institutes also require scientific committee clearance before a proposal can be submitted for funding. An unscientific study is by definition unethical, so a research proposal must pass both committees’ scrutiny
  • Writing the research grant: A well-written grant proposal decides whether research funding is secured, so we discuss this part in detail below.

How to Write a Research Grant Proposal[ 13 , 14 , 15 ]

The steps in writing a research grant proposal are as follows:

  • Identify the idea and design the study. The study design should include the type of study, methodology, sampling, blinding, inclusion and exclusion criteria, outcome measurements, and statistical analysis
  • Identify prospective grants: the timing of applications, the specific requirements of each grant, and the budget available under it
  • Discuss with collaborators (co-investigators) the requirements for consumables and equipment
  • Prepare a budget proposal. The two most important parts of any research proposal are the methodology and the budget; budgeting is discussed separately below
  • Prepare the specific proposal as outlined in the grant document. It should contain details of the study, including a brief review of the literature, why you want to conduct the study, the implications of the study, the budget requirement, and the timeline
  • A timeline or Gantt chart should always accompany a research proposal; it shows the major milestones of the project and how the project will be executed (a minimal sketch follows this list)
  • Be ready to revise the grant proposal: after going through the initial version, committee members may suggest changes to the methodology and budgetary outlay
  • The committee that scrutinizes grant proposals may be composed of varied specialities, so the proposal should be written in language that even a layperson can understand. It is also a good idea to have the proposal peer reviewed before submission.
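For the timeline, even a plain table of milestones works as a first Gantt chart. Below is a minimal sketch for a hypothetical two-year project; the milestones and durations are illustrative, not a prescribed template:

```python
# A minimal text Gantt chart for a hypothetical two-year project.
# Each entry is (milestone, start month, end month); values are illustrative.
milestones = [
    ("Ethics and scientific committee approval",  1,  3),
    ("Recruitment and data collection",           4, 15),
    ("Laboratory analysis",                      10, 18),
    ("Statistical analysis",                     19, 21),
    ("Report writing and publication",           22, 24),
]

TOTAL_MONTHS = 24
for name, start, end in milestones:
    bar = " " * (start - 1) + "#" * (end - start + 1)
    print(f"{name:<44}|{bar:<{TOTAL_MONTHS}}|")
```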

Budgeting for the Research Grant

Budgeting is as important to a grant proposal as the methodology. The first step is to find out the monetary limit of the grant and the fund requirements of your project; if these do not match, even a good project may be rejected on budgetary grounds. The budgetary layout should be prepared with prudence, asking only for the amount necessary to conduct the research. The administrative cost of conducting the project, which varies with the type of project, should also be included in the proposal.

Research funds can generally be used for the following requirements, among others; it is helpful to know the subheads under which budgetary planning is done (a sample layout follows the list). Funds are generally allotted in a graded manner as per the projected requirement, and to the institution, not to the researcher.

  • Equipment: purchase of equipment not available in the institution (some funding bodies do not allow equipment to be procured from research funds). Equipment procured from a research fund is owned by the institute/department
  • Consumables: items required for the conduct of research, such as medicines for investigator-initiated trials and laboratory consumables
  • Trained personnel: hiring of a research assistant or data entry operator for the smooth conduct of research. Remuneration details can be obtained from the Indian Council of Medical Research (ICMR) website and used while planning the budget
  • Stationery: printing of forms and similar expenses
  • Travel: if the researcher has to travel to present findings, or for some other purpose necessary to the research, a travel grant can be part of the research grant
  • Publication: some funding bodies cover publication expenses, which can help the author make the findings open access and give the research wider visibility
  • Contingency: miscellaneous expenditure during the conduct of research
  • Miscellaneous: expenses such as auditing of the fund account and other essential costs.
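To make these subheads concrete, the sample layout below arranges a budget under them. Every figure is a hypothetical placeholder rather than a recommended amount, and the 10% administrative cost is our assumption:

```python
# A hypothetical budget layout using the subheads described above.
# All amounts are placeholder figures in rupees, purely for illustration.
budget = {
    "Equipment":         600_000,
    "Consumables":       450_000,
    "Trained personnel": 360_000,   # e.g., one research assistant
    "Stationery":         20_000,
    "Travel":             50_000,
    "Publication":        60_000,
    "Contingency":        75_000,
}
# Assumed administrative cost of 10% of the direct costs:
budget["Administrative cost"] = round(0.10 * sum(budget.values()))

for head, amount in budget.items():
    print(f"{head:<20} Rs. {amount:>9,}")
print(f"{'Total':<20} Rs. {sum(budget.values()):>9,}")
```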

Once research funding is granted, the allotted funds have to be spent as planned in the budget. Transparency, integrity, fairness, and competition are the cornerstones of public procurement and should be remembered while spending grant money; the hiring of trained staff on contract is based on similar principles, and details of procurement and hiring can be read on the ICMR website.[ 4 ] During the conduct of the study, many grant guidelines mandate quarterly or half-yearly progress reports covering both expenditure against the budgetary layout and the scientific progress of the project. These reports should be prepared and sent on time.

Completion of a Research Project

Once the research project is completed, a completion report has to be sent to the funding agency. Most funding agencies also require periodic progress reports, and the project should ideally progress as per the Gantt chart. The completion report has two parts. The first is a scientific report, written like a research paper with all the usual subheads (review of literature, materials and methods, results, and conclusions including the implications of the research). The second is an expense report: how the money was spent, whether spending followed the budgetary layout, and the reasons for any deviation. Any unutilized funds have to be returned to the funding agency. Ideally, the allotted funds should be audited by a professional (a chartered accountant), and the audit report along with the original bills of expenditure should be preserved in case of any discrepancy. This is an essential part of any funded project and protects the researcher from accusations of impropriety.

Sharing scientific findings, and thereby advancing science, is the ultimate goal of any research project. Publication of findings is part of any research grant, and many funding agencies place conditions on publications and presentations arising from funded projects. For example, Indian Association of Dermatologists, Venereologists and Leprologists (IADVL) research projects must be presented at a national conference on completion, and the same is true of most funding agencies. The researcher must mention the source of funding in every presentation and publication.

Research funding is an essential part of conducting research. Securing a research grant is a matter of prestige for a researcher and also helps in the advancement of their career.

Financial Support and Sponsorship

Conflicts of Interest

There are no conflicts of interest.

Public Praises Science; Scientists Fault Public, Media

Section 3: Funding Scientific Research


There is broad agreement among scientists that a lack of funding currently represents the biggest impediment to conducting high-quality scientific research. Nearly half (46%) cite a lack of funding for basic research as a very serious impediment to high-quality research, while another 41% say it is a serious impediment.

A majority of scientists (56%) say that visa and immigration problems facing foreign scientists or students who want to work or study in the United States present either a very serious (17%) or serious (39%) obstacle to high-quality scientific research in this country. This view is particularly widespread among scientists who are not U.S. citizens: 78% of non-citizens see visa problems as a serious impediment to research, with 43% saying it is a very serious obstacle. By comparison, a smaller majority of U.S. citizens (54%) say visa problems for foreign scientists and students are a serious impediment to high-quality research, with just 14% calling it very serious.

Far fewer scientists see other factors as presenting serious obstacles to high-quality research. Just 27% say that regulations on the use of animals in research are very serious (6%) or serious (21%) impediments to research; more than half (59%) say these regulations are not serious impediments. Even among researchers who have worked on projects involving animal subjects in the past five years – roughly a third of the scientists interviewed – only about three-in-ten (31%) see restrictions on animal research as a serious impediment.

Just 21% of scientists say that regulations to prevent U.S. technology from being misused overseas are a serious impediment to high-quality research. Physicists and astronomers are far more likely than those in other disciplines to see these regulations as a serious barrier to research (40%).

About one-in-five scientists (19%) say the way that institutional review boards implement rules on human subjects is a serious impediment to high-quality research. Scientists who have worked on a research project with human subjects in the past five years are about twice as likely as those who have not worked with human subjects (31% vs. 16%) to see this as a serious impediment.

Funders’ Priorities

In general, scientists say that most of the funders of scientific research in their field emphasize low-risk, low-reward projects over high-risk projects that have the potential for scientific breakthroughs.

Comparable shares of scientists working in applied (62%) and basic (60%) research say that most research funders in their fields emphasize lower risk projects expected to make incremental progress. Across scientific disciplines, those working in the biological and medical sciences are more likely than others to say that most funders stress low-risk projects.

Most Decry Funding Chase

Half of scientists (50%) say that political groups or officials have too much influence on the direction of research in their specialty, while 47% disagree. Scientists who primarily address applied research questions (55%) are more likely than those involved in basic research (45%) to say that political groups or officials have too much influence. In addition, more scientists working in government (62%) and industry (56%) say political groups or officials have too much influence than do those in non-profits (45%) or academia (45%).

The Color of Money

Roughly two-thirds (68%) of scientists working in industry say that possible financial rewards lead some in their specialty to pursue projects that yield marketable products, but do little to advance science. By comparison, only about four-in-ten of those working in government (43%), academia (43%) or for non-profits (42%) say this.

For the most part, scientists – those in industry and elsewhere – do not see the prospect of personal financial gain leading colleagues to cut corners on research quality or to violate ethical standards. Overall, about a quarter (26%) say the possibility of making a lot of money leads colleagues to cut corners in research, while 11% say it has led scientists in their specialty to pursue research that violates ethical standards.

Government Dominates Research Funding

Overwhelming percentages of scientists working in basic (91%) and applied research (81%) cite federal government sources as among the most important in their specialty, as do more than eight-in-ten across all scientific disciplines.

Nearly half of scientists (49%) specify the National Institutes of Health (NIH) among the most important sources funding their research area; roughly the same number (47%) cite the National Science Foundation (NSF). The shares mentioning each of these government agencies nearly equal the proportion (50%) citing any kind of non-government funding source as most important.

As might be expected, NIH is particularly important in funding biological and medical sciences; nearly two-thirds of the scientists in that field (65%) name NIH as among the most important funding sources in their specialty. A majority of chemists (59%) also name NIH as among the most important funders in their discipline.

The NSF is cited most frequently by geoscientists (70%) and physicists and astronomers (62%) and by a majority of chemists (56%). The Department of Energy, mentioned by 13% of scientists overall, is a particularly important funding source in physics and astronomy (45%). In addition, a third of physicists and astronomers (33%) cite the Department of Defense among the most important funding sources in their field, far more than do scientists working in other specialties.

Half of all scientists (50%) cite one or more non-government funding source – including foundations, non-profits and industry – as among the most important for their specialty. Scientists working in applied research (57%) are more likely than those working in basic research (46%) to mention a non-government funding source as most important. Among scientific specialties, a majority of those working in biological and medical sciences (55%) cites non-government sources as among the most important, as do 53% of chemists. Far fewer of those working in geosciences (35%) and in physics and astronomy (28%) point to non-government funding sources as most important.

Even among scientists who themselves work for business or industry employers, the government is seen as a significant source of funding. Nearly two-thirds (64%) list one or more government sources as among the most important to their field of scientific specialty, with 26% explicitly mentioning NIH and 22% mentioning NSF. Roughly half (52%) list industry sources as most important within their field.

Public’s View: Government Funding Needed

As is often the case with opinions about the role of government, there is a substantial partisan divide in views of government investment in scientific research. Fewer than half of conservative Republicans (44%) say that government investment in research is essential for scientific progress; 48% of conservative Republicans say private investment will ensure that scientific progress is made. By comparison, 56% of moderate and liberal Republicans, 59% of independents and a much larger majority of Democrats (71%) say that government investment in research is essential.

Opinions about these investments vary little across political and demographic groups. Eight-in-ten Democrats (80%) say that government investments in basic science research pay off in the long run, as do 72% of independents and 68% of Republicans. Views about whether government engineering and technological investments pay off largely mirror those about basic science investments.

Stable Support for Science Spending

The public’s support for increased spending has declined for many policy areas, but opinions about government spending on scientific research have changed little since 2001.

Currently, 39% say they would increase spending on scientific research; about the same share (40%) say they would keep spending the same; 14% say they would decrease the budget for scientific research. In April 2001, 41% said they would increase spending, 46% favored keeping spending the same, while 10% favored less spending for scientific research.

In April 2001, there was little difference in partisan opinions about spending on science. Roughly four-in-ten independents (43%), Democrats (38%) and Republicans (37%) favored increased spending. Today, about half (51%) of Democrats favor increasing spending on science, up 13 points from 2001; among Republicans, just 25% support increasing the budget for scientific research, down 12 points over the same period. Opinion among independents has changed little (40% favor increased spending today, 43% in 2001).


Enago Academy

Grant Funding: Known Problem Areas & Likely Solutions


A recent article published in PLOS One raises questions about the need for competitive research funding . There are many problems with how funding agencies currently operate: academic researchers might not receive funding because of gender, affiliation, or ethnicity biases; deciding who receives federal funding is an expensive process; and the process appears unreliable, with luck playing a bigger role in the grant award process than it should.

Remove the Competition

There have been suggestions to fix the process. Funding agencies could more carefully select their reviewers. They could also award grants by assessing academic researchers instead of grant proposals. Making the peer review process transparent could force reviewers to be more thorough. Funding agencies could also combine a simpler review process with a lottery.

A shift in the funding process could mean giving money to researchers and not their projects. There are many ways that this could happen. Funding agencies could give grants based on merit. This would involve assessing a researcher’s track record. However, this does not work well for young researchers. Therefore, a funding lottery could be used. This would mean researchers would be randomly funded. All federal funding could also be evenly distributed among scientists.
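The three alternatives sketched here (track-record merit, a lottery, and an even split) are easy to state precisely. Below is a minimal simulation sketch; the researcher pool, the merit metric, and the pot size are all invented for illustration:

```python
# Three ways to allocate a fixed pot among researchers, as described above:
# by track record, by lottery, or evenly. All inputs are illustrative.
import random

def by_merit(researchers, pot):
    total = sum(r["track_record"] for r in researchers)
    return {r["name"]: pot * r["track_record"] / total for r in researchers}

def by_lottery(researchers, pot, n_winners=2, seed=0):
    winners = random.Random(seed).sample(researchers, n_winners)
    return {r["name"]: pot / n_winners if r in winners else 0.0
            for r in researchers}

def egalitarian(researchers, pot):
    return {r["name"]: pot / len(researchers) for r in researchers}

researchers = [
    {"name": "A", "track_record": 9},  # "track_record" is a made-up merit metric
    {"name": "B", "track_record": 3},
    {"name": "C", "track_record": 1},
]
pot = 1_500_000
for scheme in (by_merit, by_lottery, egalitarian):
    print(scheme.__name__, scheme(researchers, pot))
```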

This concept of egalitarian funding is the focus of the PLOS One paper . This approach would end bias. It could also reduce the incentive to commit academic fraud. Having steady funding could also keep talented academic researchers from leaving their labs. It would be much cheaper to administer egalitarian grants. The data suggest that researchers with large grants generate less impact per dollar. Awarding grant money equally could result in more effective grant usage.

Study Reveals the Possibilities

The study focused on the Netherlands, the United States of America, and the United Kingdom. The authors assumed that the research project being funded would last for five years. Based on the amount of Dutch federal funding available, each professor would get €390,000 or $507,000. If the researchers formed groups of five, each group would have $2.5 million. Dutch institutions usually pay their researchers’ salaries. This means that all of this money could be spent solely on research.

The authors assumed that current PhD student and postdoctoral fellow rates would not change. This means a Dutch researcher would have about $160,000 left to spend on equipment and travel. In research groups of five, that works out to $800,000 to spend over five years.
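The Dutch arithmetic is worth spelling out, since figures like these are easy to mis-scale. A quick check under the article's stated assumptions:

```python
# Checking the Dutch figures as reported: each professor receives $507,000
# over five years; after assumed PhD-student and postdoc costs, roughly
# $160,000 per researcher remains for equipment and travel.
per_researcher_total    = 507_000
per_researcher_flexible = 160_000
group_size = 5

print(group_size * per_researcher_total)     # 2,535,000 -> "about $2.5 million" per group
print(group_size * per_researcher_flexible)  # 800,000 for equipment and travel
```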

In the United States, each researcher would get about $553,000 over five years. This would allow them to pay PhD students and postdoctoral fellows. A research team of five would then have about $2.1 million to spend on travel and equipment. This would mean each professor would have about $418,000 in their research budget. (American professors’ salaries are also paid by their institutions).

In the United Kingdom, each researcher would have $364,000. The authors assumed that the UK and the Netherlands had similar employment rates. In this case, each researcher would have about $87,000 over five years. A five-member research team would have $717,000 at their disposal. In the United Kingdom, universities can choose how to spend the grant money. It is possible that some of the grant money received is used to pay staff salaries.

Pros and Cons

This egalitarian model is very different from the current way of awarding grants. The paper suggests that this could be a useful way to keep research labs afloat. There would be enough money for students, postdocs, equipment, and travel for most researchers. However, this depends on the nature of the research. Some experiments are significantly more expensive than others. These budgets could be supplemented by the resources currently spent on grant review.

One criticism of the paper is that the grant award process currently controls the number of researchers. Under the egalitarian model, scientists would instead compete for faculty positions, since holding one would qualify them to receive their share of federal funding. And since there would be no small grants, funding would become an all-or-nothing proposition.

Another criticism is the automated way of assessing an applicant’s research track record. Any metric that is used to determine who should be funded could be manipulated . Scientists have been known to commit research fraud in search of prestigious publications. There have also been instances of fake peer review. Scientists have even formed groups to artificially inflate their citation rates.

The current way of allocating research funding has some flaws. It is an expensive and time-consuming process. There are also biases in the way reviewers assess applicants. It has been suggested that funding agencies change the way they operate to improve the way grants are awarded. One fairly radical suggestion is to evenly divide federal funding among all scientists. This might be one way to help research groups move forward. It would also ease the burden on grant reviewers. Under this system, researchers in more expensive areas may need supplementary funding.



National Institutes of Health (NIH) - Turning Discovery into Health


Grants & Funding

The National Institutes of Health is the largest public funder of biomedical research in the world. In fiscal year 2022, NIH invested most of its $45 billion appropriation in research seeking to enhance life and to reduce illness and disability. NIH-funded research has led to breakthroughs and new treatments that help people live longer, healthier lives, and has built the research foundation that drives discovery.


Grants Home Page

NIH’s central resource for grants and funding information.


Find Funding

NIH offers funding for many types of grants, contracts, and even programs that help repay loans for researchers.


Grant applications and associated documents (e.g., reference letters) are due by 5:00 PM local time of the application organization on the specified due date.
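Because "local time of the application organization" differs from applicant to applicant, it can help to pin a deadline to an explicit time zone. A minimal sketch using Python's standard zoneinfo module; the date and zone are examples only:

```python
# Expressing an NIH-style due date (5:00 PM local time of the applicant
# organization) unambiguously. The date and time zone are examples.
from datetime import datetime
from zoneinfo import ZoneInfo

due = datetime(2025, 2, 5, 17, 0, tzinfo=ZoneInfo("America/Chicago"))

print(due.isoformat())                   # 2025-02-05T17:00:00-06:00
print(due.astimezone(ZoneInfo("UTC")))   # the same instant in UTC
```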


How to Apply

Instructions for submitting a grant application to NIH and other Public Health Service agencies.


About Grants

An orientation to NIH funding, grant programs, how the grants process works, and how to apply.


Policy & Compliance

By accepting a grant award, recipients agree to comply with the requirements in the NIH Grants Policy Statement unless the notice of award states otherwise.


Grants News/Blog

News, updates, and blog posts on NIH extramural grant policies, processes, events, and resources.


Explore opportunities at NIH for research and development contract funding.


Loan Repayment

The NIH Loan Repayment Programs repay up to $50,000 annually of a researcher’s qualified educational debt in return for a commitment to engage in NIH mission-relevant research.


Division of Academic Affairs

Office of Research and Sponsored Programs

Extramural Funding

ORSP is committed to supporting faculty and staff as they seek extramural funding opportunities to carry out research or programs that reflect the university’s mission and strategic plan. Types of extramural funding sources include federal, state or local governments, as well as corporate and private foundations. Faculty, staff and students are encouraged to explore past extramural awards, funding opportunities and the Grant Life Cycle. 

Extramural Awards

Explore previous extramural funding awards granted to CSUF faculty, staff, and students. 

View Past Recipients

Find information on how to locate the best opportunities for your specific project.

Grant Life Cycle

The Grant Life Cycle provides a brief overview of the research proposal process at Cal State Fullerton.


Published: October 2003

Research funding: the problem with priorities

Nature Materials, volume 2, page 639 (2003)


Who is best placed to decide which blue-sky research projects to fund: government, or academia? This issue is at the heart of a debate between UK physical scientists and the Engineering and Physical Sciences Research Council (EPSRC), a government body responsible for allocating funding to the physics, materials science, chemistry, engineering and mathematics communities. In an open letter to the UK government's Science and Technology Committee[1], the physics community[2] (represented by the Institute of Physics; IOP) has expressed its concerns about the low success rate and lack of transparency associated with the prioritizing and funding of curiosity-driven research by the EPSRC. In particular, the selection criteria for basic science grants are very hazy to most researchers, who only know that their applications are being rejected, even after receiving very positive comments from referees.

Certainly UK physical sciences research is not in a happy state. Staff numbers in physics and engineering continue to decline. The amount of funding for physical sciences and engineering as a proportion of the total funding for all research areas also continues to drop. In the past year, physical-science researchers have seen the success rate of grant proposals funded by the EPSRC fall to new lows. The IOP estimates that the success rate has fallen to just 10–15% for so-called responsive grants used for funding curiosity-driven basic research, compared to around 30% for the managed programmes. Researchers are asking the question: is the recent decline in the success rate of curiosity-driven grant proposals just unfortunate, or part of a deliberate strategy?
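One way to see what a fall from roughly 30% to 10-15% means in practice: if each submission were an independent draw with a fixed success rate p, the expected number of submissions before success would be 1/p. A back-of-envelope sketch (the independence assumption is ours, purely for illustration):

```python
# Expected number of grant submissions until one succeeds, treating each
# submission as an independent draw with fixed success rate p. This is a
# strong simplification, used only to make the decline concrete.
for p in (0.30, 0.15, 0.10):
    print(f"success rate {p:.0%}: ~{1 / p:.1f} submissions expected")
# 30% -> ~3.3 submissions; 15% -> ~6.7; 10% -> ~10.0 per funded project.
```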

What is clear is that when it comes to funding science, governments are not interested in providing a pool of money simply to satisfy researchers' curiosity. Rather, they like to think in broad strategic terms about which research areas are most likely to lead to future advances in technology and wider societal benefits. This issue is by no means confined to the UK: there is a general trend in Europe and the US for basic research to be directed towards the same areas, nanotechnology, materials for energy and photonics to name a few. Many of these areas will undoubtedly be important for the future development of science and technology in the UK. But many researchers are concerned that funding for these managed programmes is eating into the funding available for bottom-up blue-sky research. The UK excels in a few key fields (organic semiconductors, photonics and carbon electronics, for example), and these fields are held up by the EPSRC as examples of past success. But by and large, these were unanticipated successes rather than the products of a deliberate effort. Meanwhile, the UK is falling further behind in other research areas.[3] This situation will surely not be helped by further concentration of funding in a few 'strategic' programmes.

In the end, this is not an argument about increasing the funding of basic physical sciences research. It is a question of whether funding priorities should be so heavily skewed towards a few so-called strategic areas. Should researchers be forced, even indirectly, to change the aims of their research so that they better comply with these strategic priorities? Interdisciplinary initiatives such as the Life Sciences Interface Programme, a joint EPSRC/Medical Research Council initiative, are drawing a larger proportion of government funding. But because of their interdisciplinary nature, many of these proposals are scientifically weak in one of the component fields and do not make it through the refereeing process; as the interdisciplinarity of basic science research increases, this problem will only grow. The criteria for funding grants within and between the different research councils urgently need to be standardized. The EPSRC should not ignore the concerns of the physical sciences and engineering community, but should work with it to increase transparency and broaden the base of research topics on which it receives advice before funds are doled out the next time around.

References

1. UK Science and Technology Committee reports: http://www.parliament.the-stationery-office.co.uk/pa/cm/cmsctech.htm
2. IOP response to the Science and Technology Committee's scrutiny of the EPSRC: http://policy.iop.org/Policy/EPSRC%20scrutiny-final.doc
3. International Panel Review of UK Physics and Astronomy Research: http://policy.iop.org/Policy/Intrev.html


SC Daily Gazette


USC ends teacher-training program on ‘culturally relevant’ K-12 lessons, citing funding issues

By Jessica Holdman | August 23, 2024, 4:24 p.m.


The Horseshoe of the University of South Carolina campus, Monday, Oct. 30, 2023, in Columbia, S.C. (File/Mary Ann Chastain/Special to the SC Daily Gazette)

COLUMBIA – The University of South Carolina is ending a teacher-training program that sought to improve Black students’ success in school by incorporating their life experiences in classroom lessons.

The governing board for the state’s largest university system voted Friday to end the Center for the Education and Equity of African American Students — a division of USC’s College of Education — stating it was not financially viable, a requirement for all university programs. The program’s cost and potential shortfalls were not discussed.

USC Provost Donna Arnett told the board the center’s executive director agreed to its closure, which immediately ends all related training seminars for K-12 teachers.

The program’s discontinuation falls against a backdrop of political pressure surrounding diversity, equity and inclusion programs and attempts to restrict how teachers discuss race in K-12 classrooms in South Carolina and across the country.

USC spokesman Jeff Stensland said the decision to end the program is in no way related to Statehouse debates over DEI initiatives or the research conducted by the center or its director.

The center’s director did not respond to messages from the SC Daily Gazette.

Mark Minett, a professor and president of the USC chapter of the American Association of University Professors, said funding issues are common for academic centers. He was not familiar with the financial health of this particular initiative but said he hopes the USC board judged it objectively, especially given the pushback on diversity and equity programs.

“Now is the time to be vigilant and ask questions and make sure the standards are fairly applied,” he said.

USC College of Education professor and researcher Gloria Boutte founded the center in 2017 in an effort to help school districts improve attendance rates and close learning gaps.

“There are documented examples of schools across the country that are effectively teaching African American students — even students from lower socioeconomic statuses. So, it can be done,” Boutte said in a statement announcing the center. “Through partnerships with public schools across the state and through numerous outreach programs, the center aims to improve academic and cultural outcomes for Black students. By drawing from research about the most effective ways to instruct, educators can teach African American students in culturally relevant ways.”

But her work also made her a target for politicians as conservatives nationwide pushed bans on so-called “critical race theory.”

Boutte’s research is focused on culturally relevant teaching, an educational practice which seeks to connect students’ cultures, languages, and life experiences to what they learn in school. It’s a different concept from critical race theory, which recognizes systemic racism in society and how laws and policies, even those not explicitly about race, can cause or worsen racial disparities, according to the Anti-Defamation League .

Still, Boutte was among university professors attacked in online posts starting in 2021. Two years later, the South Carolina General Assembly started debating legislation requiring “fact-based” classroom discussions on history and amending existing state law that bans race-based curriculum in public schools.

The legislation, which also would have created a statewide process for parental complaints, ultimately failed. But the list of what’s banned, which legislators initially inserted in the state budget in 2021, remains unchanged. Banned concepts include any race being “inherently superior” to another, anyone being responsible for past atrocities because of their race, and that traits such as hard work are oppressive and racist.

The law bans school districts from using state aid to train teachers or buy materials incorporating the banned concepts. It does not ban training related to unconscious bias or issues related to historical discriminatory policies. It also doesn’t apply to colleges.


A separate bill banning public colleges from factoring applicants’ political stances into hiring, firing and admission decisions passed the state House but died with the end of session without a vote in the Senate.

But USC had already struck the terminology “diversity, equity and inclusion” from a cabinet-level office in August 2023. It also changed the title of its leader, Julian Williams, to vice president of access, civil rights and community engagement.

Four months later, Clemson University renamed its own equity and inclusion office, changing it to the Division of Community Engagement, Belonging and Access.


Boutte herself served as associate dean for diversity, equity and inclusion within USC’s College of Education in 2023. But in further scrubbing that language from university roles, USC changed the title of the job to associate dean for democracy, education, and inclusivity.

Elsewhere, the University of South Alabama eliminated its diversity, equity and inclusion offices amid a new state law limiting the use of public funds for such offices. And in North Carolina, the University of North Carolina system repealed its policy on diversity, equity and inclusion. UNC Chapel Hill struck DEI-related funding from its budget.

Editor’s Note: This story has been updated to reflect the offerings impacted by the ending of the Center for the Education and Equity of African American Students program.




  11. Practical Problems Related to Health Research Funding Decisions

    In the commentary, I will describe some practical problems that contribute to the complexity of health research funding decisions. The first practical problem is that the relationship between research funding and health outcomes is much more complex and uncertain than Pierson and Millum recognize. The road from research funding related to a ...

  12. The 7 biggest problems facing science, according to 270 scientists

    The place to begin is right where the perverse incentives first start to creep in: the money. 1. Academia has a huge money problem. To do most any kind of research, scientists need money:to run ...

  13. The ripple effects of funding on researchers and output

    The largest effects of funding on research output are ripple effects on publications that do not include PIs. While funders focus on research output from projects, they would be well advised to consider how funding ripples through the wide range of people, including trainees and staff, employed on projects. NIH funding stimulates research by ...

  14. Concentration or dispersal of research funding?

    Abstract. The relationship between the distribution of research funding and scientific performance is a major discussion point in many science policy contexts. Do high shares of funding handed out to a limited number of elite scientists yield the most value for money, or is scientific progress better supported by allocating resources in smaller portions to more teams and individuals? In this ...

  15. The NIH Has the Opportunity to Address Research Funding Disparities

    Epidemiologists, health economists, and other social science researchers studying how societies promote or undermine health should thus receive NIH funding that is more proportionate to the magnitude of the health problems they research. Research funding disparities have multiple causes and warrant different solutions, from prioritizing work ...

  16. Research Funding—Why, When, and How?

    Research funding is defined as a grant obtained for conducting scientific research generally through a competitive process. To apply for grants and securing research funding is an essential part of conducting research. In this article, we will discuss why should one apply for research grants, what are the avenues for getting research grants ...

  17. Section 3: Funding Scientific Research

    Section 3: Funding Scientific Research. There is broad agreement among scientists that a lack of funding currently represents the biggest impediment to conducting high-quality scientific research. Nearly half (46%) cite a lack of funding for basic research as a very serious impediment to high-quality research, while another 41% say it is a ...

  18. Grant Funding: Known Problem Areas & Likely Solutions

    The authors assumed that the research project being funded would last for five years. Based on the amount of Dutch federal funding available, each professor would get €390,000 or $507,000. If the researchers formed groups of five, each group would have $2.5 million. Dutch institutions usually pay their researchers' salaries.

  19. The value of research funding for knowledge creation and ...

    This study investigates the effect of competitive project funding on researchers' publication outputs. Using detailed information on applicants at the Swiss National Science Foundation and their ...

  20. Grants & Funding

    Grants & Funding. The National Institutes of Health is the largest public funder of biomedical research in the world. In fiscal year 2022, NIH invested most of its $45 billion appropriations in research seeking to enhance life, and to reduce illness and disability. NIH-funded research has led to breakthroughs and new treatments helping people ...

  21. 7 Research Challenges (And how to overcome them)

    Complete the sentence: "The purpose of this study is …". Formulate your research questions. Let your answers guide you. Determine what kind of design and methodology can best answer your research questions. If your questions include words such as "explore," "understand," and "generate," it's an indication that your study is ...

  22. Past, present, and future of global health financing: a review of

    Financing for global health has increased steadily over the past two decades and is projected to continue increasing in the future, although at a slower pace of growth and with persistent disparities in per-capita health spending between countries. Out-of-pocket spending is projected to remain substantial outside of high-income countries. Many low-income countries are expected to remain ...

  23. Extramural Funding

    ORSP is committed to supporting faculty and staff as they seek extramural funding opportunities to carry out research or programs that reflect the university's mission and strategic plan. Types of extramural funding sources include federal, state or local governments, as well as corporate and private foundations.

  24. Research funding: the problem with priorities

    The amount of funding for physical sciences and engineering as a proportion of the total funding for all research areas also continues to drop. In the past year, physical-science researchers have ...

  25. USC ends teacher-training program on 'culturally relevant' K-12 lessons

    Mark Minett, a professor and president of the USC chapter of the American Association of University Professors, said funding issues are common for academic centers. He was not familiar with the financial health of this particular initiative but said he hopes the USC board judged it objectively, especially given the pushback on diversity and ...

  26. Heifer-in-trust, Social Protection and Graduation: Conceptual Issues

    The imagery of movement is deeply engrained in development discourse, and particularly in relation to poverty: we commonly talk, for example, of people moving 'out of poverty' or 'up the asset ladder'. Nevertheless, these simple images hide what are now widely understood to be complex, non-linear and dynamic processes that are impacted by a bewildering array of factors from human ...