Research Outputs

Scholars circulate and share research in a variety of ways and in numerous genres. Below you'll find a few common examples. Keep in mind there are many other ways to circulate knowledge: factsheets, software, code, government publications, clinical guidelines, and exhibitions, just to name a few.

Outputs Defined

Original Research Article

An article published in an academic journal can go by several names: original research, an article, a scholarly article, or a peer-reviewed article. This format is an important output for many fields and disciplines. Original research articles are written by one or more authors who typically advance a new argument or idea in their field.

Conference Presentations or Proceedings

Conferences are organized events, usually centered on one field or topic, where researchers gather to present and discuss their work. Typically, presenters submit abstracts, or short summaries of their work, before a conference, and a group of organizers select a number of researchers who will present. Conference presentations are frequently transcribed and published in written form after they are given.

Edited Books and Book Chapters

Books are often composed of a collection of chapters, each written by a different author. Usually, these kinds of books are organized by theme, with each author's chapter presenting a unique argument or perspective. Books with individually authored chapters are often curated and organized by one or more editors, who may contribute a chapter or foreword themselves.

Datasets

Often, when researchers perform their work, they produce or work with large amounts of data, which they compile into datasets. Datasets can contain information about a wide variety of topics, from genetic code to demographic information. These datasets can then be published either independently or as an accompaniment to another scholarly output, such as an article. Many scientific grants and journals now require researchers to publish datasets.

Artwork

For some scholars, artwork is a primary research output. Scholars' artwork can come in diverse forms and media, such as paintings, sculptures, musical performances, choreography, or literary works like poems.

Reports

Reports can come in many forms and may serve many functions. They can be authored by one or more people and are frequently commissioned by government or private agencies. Some examples are market reports, which analyze and predict a sector of an economy; technical reports, which explain to researchers or clients how to complete a complex task; and white papers, which inform or persuade an audience about a wide range of complex issues.

Digital Scholarship

Digital scholarship is a research output that significantly incorporates or relies on digital methodologies, authoring, and presentation. Digital scholarship often complements and adds to more traditional research outputs, and may be presented in a multimedia format. Some examples include mapping projects; multimodal projects composed of text, visual, and audio elements; or digital, interactive archives.

Books

Researchers from every field and discipline produce books as a research output. Because of this, books can vary widely in content, length, form, and style, but they often provide a broader overview of a topic than research outputs that are more limited in length, such as articles or conference proceedings. Books may be written by one or many authors, and researchers may contribute to a book in a number of ways: they could author an entire book, write a foreword, or collect and organize existing works in an anthology, among others.

Interviews

Scholars may be called upon by media outlets to share their knowledge about the topic they study. Interviews can provide an opportunity for researchers to teach a more general audience about the work that they perform.

Article in a Newspaper or Magazine

While most of researchers' work is intended for a scholarly audience, researchers occasionally publish in popular newspapers or magazines. Articles in these popular genres may be intended to inform a general audience about an issue in which the researcher is an expert, or to persuade an audience about an issue.

Blogs

In addition to other scholarly outputs, many researchers also write blogs about the work they do. Unlike books or articles, blogs are often shorter, more general, and more conversational, which makes them accessible to a wider audience. Blogs, again unlike other formats, can be published almost in real time, which allows scholars to share current developments in their work.

  • University of Colorado Boulder Libraries
  • Research Guides
  • Research Strategies
  • Last Updated: May 14, 2024 2:33 PM
  • URL: https://libguides.colorado.edu/products
  • © Regents of the University of Colorado

Outputs from Research

A research output is the product of research. It can take many different forms or types. See here for a full glossary of output types.

The tables below set out the generic criteria for assessing outputs and the definitions of the starred levels, as used during the REF2021 exercise.

Definitions

'World-leading', 'internationally' and 'nationally' in this context refer to quality standards. They do not refer to the nature or geographical scope of particular subjects, nor to the locus of research, nor its place of dissemination.

Definitions of Originality, Rigour and Significance

Supplementary output criteria – understanding the thresholds:

The 'Panel criteria' explains in more detail how the sub-panels apply the assessment criteria and interpret the thresholds:

  • Main Panel A: Medicine, health and life sciences
  • Main Panel B: Physical sciences, engineering and mathematics
  • Main Panel C: Social sciences
  • Main Panel D: Arts and humanities

Definition of Research for the REF

1. For the purposes of the REF, research is defined as a process of investigation leading to new insights, effectively shared.

2. It includes work of direct relevance to the needs of commerce, industry, culture, society, and to the public and voluntary sectors; scholarship; the invention and generation of ideas, images, performances, and artefacts, including design, where these lead to new or substantially improved insights; and the use of existing knowledge in experimental development to produce new or substantially improved materials, devices, products and processes, including design and construction. It excludes routine testing and routine analysis of materials, components and processes, such as for the maintenance of national standards, as distinct from the development of new analytical techniques.

It also excludes the development of teaching materials that do not embody original research.

3. It includes research that is published, disseminated or made publicly available in the form of assessable research outputs, and confidential reports.

Output FAQs

Q. What is a research output?

A research output is the product of research. An underpinning principle of the REF is that all forms of research output will be assessed on a fair and equal basis. Sub-panels will not regard any particular form of output as of greater or lesser quality than another per se. You can access the full list of eligible output types here.

Q. When is the next Research Excellence Framework?

The next exercise will be REF 2029, with results published in 2029. It is therefore likely that we will make our submission towards the end of 2028, but the actual timetable has not been confirmed yet.

A sector-wide consultation is currently underway to help refine the detail of the next exercise. You can learn more about the emerging REF 2029 here.

Q.  Why am I being contacted now, if we don't know the final details for a future assessment?

Although we don't know all of the detail, we know that some of the core components of the previous exercise will be retained.  This will include the assessment of research outputs. 

To make the internal process more manageable and avoid a rush at the end of the REF cycle, we will conduct an output review process annually to spread the workload.

Furthermore, regardless of any external assessment frameworks, it is also important for us to understand the quality of research being produced at Edinburgh Napier University and to introduce support mechanisms that will enhance the quality of the research conducted.  This is of benefit to the University and to you and your career development.

Q. I haven't produced any REF-eligible outputs as yet, what should I do?

We recognise that not everyone contacted this year will have produced a REF-eligible output so early in a new REF cycle. If this is the case, you can respond with a nil return, and you may be contacted again in a future annual review.

If you need additional support to help you deliver on your research objectives, please contact your line manager and/or Head of Research to discuss.

Q. I was contacted last year to identify an output, but I have not received a notification for the 2024 annual cycle. Why not?

Due to administrative capacity in RIE and the lack of detail on the REF 2029 rules relating to staff and outputs, we are restricting this year's scoring activity to a manageable volume based on a set of pre-defined, targeted criteria.

An output review process will be repeated annually. If an output is not reviewed in the current year, we anticipate that it will be included in a future review process if it remains in your top selection.

Once we know more about the shape of future REF, we will adapt the annual process to meet the new eligibility criteria and aim to increase the volume of outputs being reviewed.

Q. I am unfamiliar with the REF criteria and do not feel well enough equipped to provide a score or qualitative statement for my output(s). What should I do?

The output self-scoring field is optional. We appreciate that some staff may not be familiar with the criteria and are therefore unable to provide a reliable score.

The REF team has been working with Schools to develop a programme of REF awareness and output quality enhancement which aims to promote understanding of REF criteria and enable staff to score their work in future.  We aim to deliver quality enhancement training in all Schools by the end of the 2023-24 academic cycle.

Please look out for further communications on this.

For those staff who do wish to provide a score and commentary, please refer specifically to the REF main panel output criteria:

  • Main Panel A: Medicine, health and life sciences
  • Main Panel B: Physical sciences, engineering and mathematics
  • Main Panel C: Social sciences
  • Main Panel D: Arts and humanities

Q. Can I refer to journal impact factors or other metrics as a basis of output quality?

An underpinning principle of the REF is that journal impact factors, any hierarchy of journals, and journal-based metrics (including ABS ratings, journal rankings and total citations) should not be used in the assessment of outputs. No output is privileged or disadvantaged on the basis of the publisher, where it is published, or the medium of its publication.

An output should be assessed on its content and contribution to advancing knowledge in its own right and in the context of the REF quality threshold criteria, irrespective of the ranking of the journal or publication outlet in which it appears.

If you are adding the optional self-score and commentary, you should refer only to the REF output quality criteria (see the definitions above) and should not refer to any journal ranking sources.

Q. What is Open Access Policy and how does it affect my outputs?

Under current rules, to be eligible for future research assessment exercises, higher education institutions (HEIs) are required to implement processes and procedures to comply with the REF Open Access policy. 

It is a requirement for all journal articles and conference proceedings with an International Standard Serial Number (ISSN), accepted for publication after 1 April 2016, to be made open access.  This can be achieved by either publishing the output in an open access journal outlet or by depositing an author accepted manuscript version in the University's repository within three months of the acceptance date.

Although the current Open Access policy applies only to journal and conference proceedings with an ISSN, Edinburgh Napier University expects staff to deposit all forms of research output in the University research management system, subject to any publishers' restrictions.

You can read the University's Open Access Policy here.

Q. My Output is likely to form part of a portfolio of work (multi-component output), how do I collate and present this type of output for assessment?

The REF team will be working with relevant School research leadership teams to develop platforms to present multi-component/portfolio submissions. In the meantime, please use the commentary section to describe how your output could form part of a multi-component submission and provide any useful contextual information about the research question your work is addressing.

Q. How will the information I provide about my outputs be used and for what purpose?

In the 2024 output cycle, a minimum of one output identified by each author will be reviewed by a panel of internal and external subject experts.

The information provided will be used to enable us to report on research quality measures as identified in the University R&I strategy.

Output quality data will be recorded centrally in the University's REF module in Worktribe. Access to this data is restricted to a core team of REF staff based in the Research, Innovation and Enterprise Office and key senior leaders in the School.

The data will not be used for any purpose other than monitoring REF-related preparations.

Q. Who else will be involved in reviewing my output/s?

Outputs will be reviewed by an expert panel of internal and external independent reviewers.

Q. Will I receive feedback on my Output/s?

The REF team encourages open and transparent communication relating to output review and feedback.  We will be working with senior research leaders within the School to promote this.

Q. I have identified more than one output. Will all of my identified outputs be reviewed this year?

In the 2024 cycle, we are committed to reviewing at least one output from each contacted author via an internal, external and moderation review process.

Once we know more about the shape of a future REF, we will adapt the annual process to meet the new eligibility criteria.


Edinburgh Napier University is a registered Scottish charity. Registration number SC018373

Outputs Versus Outcomes

  • First Online: 02 October 2020


Jacqui Ewart and Kate Ames


This chapter explores what we mean by research project deliverables—particularly the difference between outputs and outcomes. This is an increasingly important distinction to funding bodies. Research outputs, which are key performance indicators for academics, are not always the same as project outcomes. Setting expectations amongst team members and between researchers and funders is critical in the early stages of research project management, and can make the difference between whether a team is willing to work together, and/or able to be funded in an ongoing capacity. We also examine issues we can encounter when reporting for industry and government.



Author information

Authors and Affiliations

Griffith University, Nathan, QLD, Australia

Prof. Jacqui Ewart

Central Queensland University, Brisbane, QLD, Australia


Corresponding author

Correspondence to Jacqui Ewart.


Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this chapter

Ewart, J., Ames, K. (2020). Outputs Versus Outcomes. In: Managing Your Academic Research Project. Springer, Singapore. https://doi.org/10.1007/978-981-15-9192-1_7


DOI: https://doi.org/10.1007/978-981-15-9192-1_7

Published: 02 October 2020

Publisher Name: Springer, Singapore

Print ISBN: 978-981-15-9191-4

Online ISBN: 978-981-15-9192-1

eBook Packages: Education (R0)


Becker Medical Library
Research Impact: Outputs and Activities


What are Scholarly Outputs and Activities?

Scholarly/research outputs and activities are the various products created, and activities carried out, by scholars and investigators in the course of their academic and research efforts.

One common output is in the form of scholarly publications which are defined by Washington University as:

". . . articles, abstracts, presentations at professional meetings and grant applications, [that] provide the main vehicle to disseminate findings, thoughts, and analysis to the scientific, academic, and lay communities. For academic activities to contribute to the advancement of knowledge, they must be published in sufficient detail and accuracy to enable others to understand and elaborate the results. For the authors of such work, successful publication improves opportunities for academic funding and promotion while enhancing scientific and scholarly achievement and repute."

Examples of activities include editorial board memberships, leadership in professional societies, meeting organization, consultative efforts, contributions to successful grant applications, invited talks and presentations, administrative roles, and contributions of service to a clinical laboratory program, to name a few. For more examples of activities, see Washington University School of Medicine Appointments & Promotions Guidelines and Requirements or the "Examples of Outputs and Activities" box below. Also of interest is Table 1 in "Research impact: We need negative metrics too".

Tracking your research outputs and activities is key to documenting the impact of your research. One starting point for telling a story about your research impact is your publications. Advances in digital technology afford numerous avenues for scholars not only to disseminate research findings but also to document the diffusion of their research. The capacity to measure and report tangible outcomes can be used for a variety of purposes and tailored for audiences ranging from laypersons and physicians to investigators, organizations, and funding agencies. Publication data can be used to craft a compelling narrative about your impact. See Quantifying the Impact of My Publications for examples of how to tell a story using publication data.

Another tip is to utilize various means of disseminating your research. See Strategies for Enhancing Research Impact for more information.

Examples of Outputs and Activities

  • Last Updated: Mar 29, 2024 9:16 AM
  • URL: https://beckerguides.wustl.edu/impact


Quality Criteria of a Research Output

John KE Mubazi*

*Corresponding author: John KE Mubazi, Kyambogo and Makerere Universities, Kampala, Uganda.

Received: June 08, 2019; Published: June 12, 2019

DOI: 10.34297/AJBSR.2019.03.000674

An autonomous scholar, whether engaged in teaching, research, or professional service, produces work that clearly demonstrates four criteria: originality, rigour, significance, and coherence. The first three are measures of relative quality, while the last demands that the work have a unifying theme. In the subsequent subsections we discuss what each criterion could take into account.

Originality

Originality means different things in different disciplines. Nonetheless, it can mean applying existing stances, methodologies or theories to new data; finding new ways of analysing or theorising existing data; proposing new methods or theories for old problems; or reinterpreting existing data or theories and revising old views. It can also mean new knowledge, new theories, or new connections with previously unrelated materials [1]. [2] argues that the traditional PhD "privileges the creation of new knowledge over the application, extension, interpretation or questioning of existing knowledge".

It is possible to claim originality in terms of approach, presentation or topic. Original approach includes the use of a new research technique or testing ideas or being the first to try an approach in a particular region or country or disciplinary area. Original topics include subjects which have not previously been researched.

The element of originality in most research is usually small. At Makerere University, "to qualify for a doctorate, there should be strong evidence that the subject is thoroughly understood, with some original thinking" [3].

Rigour

Different subject disciplines may offer different definitions, but rigour is usually linked to robustness of argument and method. It may also refer to methodological advances. However, [4] argues that "when somebody does something for the first time, she may do it brilliantly, but she cannot do it rigorously".

In subject disciplines, economists, for example, like their data hard and methods stiff, and call this rigour. Benchmarks include scientific rigour with regard to design, methods, and analysis. For languages, it could be ‘intellectual coherence, methodological precision and analytical power; accuracy and depth of scholarship’. In art, drama, and music, it could be ‘the degree of intellectual precision and/or systemic method and/or integrity embodied in the research’.

Significance

Indicators of significance include the achievement of goals, adding consequentially to the field, and opening additional areas for further exploration.

Coherence

Coherence may be regarded as the provision of a convincing critical narrative about the overall unifying intellectual position of the work. Here "coherence" means "unification" and "cohesion", terms intended to indicate that the work can be seen, and can be shown, to form an integrated whole. On analysis, one hopes to find "integration and cohesion" in the work in order to conclude that it demonstrates coherence. On that basis, vague phrases such as "everything fits together as it should" can be used to describe work that qualifies as coherent.

[5] includes the following coherence descriptors:

I. Displaying coherence of structure when conclusions clearly follow from the data.

II. Skillful organisation of a number of different angles.

III. Cogent organisation and expression.

IV. Possessing a definite agenda and an explicit structure.

V. Presenting a sense of the research as a journey, as a structured incremental progress through a process of both argument and discovery.

It is argued that:

The key terms here are argument, coherence, discovery, learning, process, progress, organisation and structure. Perhaps it is the final descriptor – of a research project as a completed journey – which best conveys an overall notion of integration and coherence, since completed journeys can be said to signify and summarise intellectual processes of planning, travelling (actually or virtually), stopping (addressing, analysing, reflecting on the issues raised in the places visited), overcoming difficulties en route, and arriving at a real or imagined destination [1].

In conclusion, we also note that the criteria of publishability can be identified as criticality, contextualisation, impact, originality, rigour, scale, significance, and topicality. Many of these criteria are subjective and vague, and they often overlap. For instance, works can be evaluated as original because they are significant, significant because they are original, and paradigm-shifting because they are original and significant [6]. In the final analysis, what unifies the activities of a scholar is an approach to each task as a novel situation, a voyage of exploration into the partially unknown.

  • [1] Badley G (2009) "Publish and be Doctor-rated: the PhD by published work". Quality Assurance in Education 17(4): 331-342.
  • [2] Park C (2005) "New Variant PhD: the changing nature of the doctorate in the UK". Journal of Higher Education Policy and Management 27(2): 189-207.
  • [3] (2011) Directorate of Research and Graduate Training (DRGT) Students Handbook, Makerere University, Kampala, Uganda.
  • [4] Rorty R (1998) Truth and Progress: Philosophical Papers. Cambridge University Press, UK.
  • [5] Winter R, Griffiths M, Green K (2000) "The Academic Qualities of Practice: what are the criteria for a practice-based PhD?". Studies in Higher Education 25(1): 25-37.
  • [6] Johnson R (2008) "On structuring subjective judgements: originality, significance, and rigour in Research Assessment Exercise (RAE) 2008". Higher Education Quarterly 62(1-2): 120-147.


Library for Staff: Types of Research Outputs


  • Last Updated: May 16, 2024 9:06 AM
  • URL: https://libguides.wintec.ac.nz/staff

Outcome-Based Research: Directing Research Towards the Desired Goal

Have you ever heard of or read about outcome-based research (OBR)? If not, this article explains what OBR means.

What is outcome-based research? How does it work?

The truth is, outcome-based research takes its name from outcome-based education (OBE), a popular theory that emphasizes the outcomes or goals of an educational system. That is, the focus is not on content but on the object of the training: the student. OBE does not follow the rigid dictates of a particular methodology; it treats the outcome or goal as the ultimate measure of a curriculum's effectiveness. OBR works the same way.

The Principle of Outcome-Based Research

The idea of OBR occurred to me as I read and heard about outcome-based education. Why not adopt the same principle in doing research? Make it goal-oriented, just like OBE.


As the university research director, I put the idea into practice by holding a three-day research planning workshop two weeks ago. The e-tools I used were the free version of Vensim®, a systems analysis tool, and XMind, a mind-mapping application.

I believe the university's research performance will be boosted further by the innovative approach of outcome-based research. The focus is on the goal of the study, founded on the university's research agenda, which guides researchers toward university-identified topics of investigation.

How the Tools Were Used in Identifying Issues

Vensim was used to identify which specific issues need to be addressed by research programs, projects, or activities. The tool helped unravel which variable or variables of the whole chain of related events or resource states mattered. It also allowed the participants to discern whether they had the relevant expertise to research an issue or problem identified in the systems analysis.

Once a specific issue or problem had been identified, the workshop participants formulated a desired research goal to help address it. That goal became the head of the fishbone diagram created using XMind.

How Outcome-Based Research Works

Outcome-based research starts at the goal, then works back to identify the steps required to achieve the pre-set goal. It is an incremental process, linking one stage to the next.

Examples are always great at bringing forth understanding. Hence, I provide below an example of the OBR approach:

[Figure: fishbone diagram for low-cost pollution mitigation technologies]

The fishbone diagram shows the steps a researcher may follow to create the desired output; in this instance, the production of low-cost pollution-mitigating technologies. The research outcome is not easily arrived at.

Working backward from the stated outcome, the required activities may include designing pollution prevention strategies, generating technologies, and designing information and education materials, among others. It is a complex process.
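The backward-chaining logic described above can be sketched in code. Here is a minimal, hypothetical Python sketch; the goal and step names are illustrative placeholders, not the actual workshop outputs:

```python
# A minimal sketch of outcome-based (backward) planning.
# Each step maps to the prerequisite steps that must be completed first.
# All goal and step names below are illustrative, not from the workshop.
prerequisites = {
    "produce low-cost pollution-mitigating technologies": [
        "generate candidate technologies",
        "design information and education materials",
    ],
    "generate candidate technologies": ["design pollution prevention strategies"],
    "design pollution prevention strategies": [],
    "design information and education materials": [],
}

def plan_backward(goal, prereqs):
    """Start from the desired outcome and work back through its
    prerequisites, returning the steps in execution order."""
    ordered, seen = [], set()

    def visit(step):
        if step in seen:
            return
        seen.add(step)
        for dep in prereqs.get(step, []):
            visit(dep)
        ordered.append(step)

    visit(goal)
    return ordered

for step in plan_backward("produce low-cost pollution-mitigating technologies",
                          prerequisites):
    print(step)
```

Like the fishbone diagram, the plan is read from the goal backward, but it is executed from the earliest prerequisite forward.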

Outcome-based Research is Goal-oriented

Notice that by clearly stating the goal of the research, everything falls into place. Research is no longer done merely for its own sake; it becomes an exercise that can help resolve an issue or problem. The steps required to carry out the research venture are also identified, so that all efforts converge toward the desired goal. It is a step-by-step process.

Outcome-based research is therefore a new approach that raises the value of a study to a higher level. It is responsive to the needs of society: it does not stop at publication as the outcome of research but pursues the much higher goal of making life better for everyone. Research is not just for the sake of personal gain but for the sake of humanity.

© August 2, 2015 P. A. Regoniel


About the Author: Patrick Regoniel

Dr. Regoniel, a faculty member of the graduate school, has served as a consultant to various environmental research and development projects covering climate change, coral reef resources and management, economic valuation of environmental and natural resources, mining, and waste management and pollution. He has extensive experience in applied statistics and systems modelling and analysis, is an avid practitioner of LaTeX, and is a multidisciplinary web developer. He leverages pioneering AI-powered content creation tools to produce unique and comprehensive articles on this website.


IResearchNet

Input-Process-Output Model

Much of the work in organizations is accomplished through teams. It is therefore crucial to determine the factors that lead to effective as well as ineffective team processes and to better specify how, why, and when they contribute. Substantial research has been conducted on the variables that influence team effectiveness, yielding several models of team functioning. Although these models differ in a number of aspects, they share the commonality of being grounded in an input-process-output (IPO) framework. Inputs are the conditions that exist prior to group activity, whereas processes are the interactions among group members. Outputs are the results of group activity that are valued by the team or the organization.

The input-process-output model has historically been the dominant approach to understanding and explaining team performance and continues to exert a strong influence on group research today. The framework is based on classic systems theory, which states that the general structure of a system is as important in determining how effectively it will function as its individual components. Similarly, the IPO model has a causal structure, in that outputs are a function of various group processes, which are in turn influenced by numerous input variables. In its simplest form, the model is depicted as the following:

Input —> Process —> Output

Inputs reflect the resources that groups have at their disposal and are generally divided into three categories: individual-level factors, group-level factors, and environmental factors. Individual-level factors are what group members bring to the group, such as motivation, personality, abilities, experiences, and demographic attributes. Examples of group-level factors are work structure, team norms, and group size. Environmental factors capture the broader context in which groups operate, such as reward structure, stress level, task characteristics, and organizational culture.

Processes are the mediating mechanisms that convert inputs to outputs. A key aspect of the definition is that processes represent interactions that take place among team members. Many different taxonomies of teamwork behaviors have been proposed, but common examples include coordination, communication, conflict management, and motivation.
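The causal structure of the framework can be sketched as a simple composition in code. This is only an illustrative toy model; the factor names and weights are invented for the example, not drawn from the literature:

```python
# Toy sketch of the Input -> Process -> Output causal structure.
# Inputs exist before group activity; processes mediate them; outputs
# are a function of the processes. All names and numbers are invented.

inputs = {
    "member_ability": 0.8,   # individual-level factor
    "group_size": 5,         # group-level factor
    "task_difficulty": 0.6,  # environmental factor
}

def process(inp):
    """Stand-in for mediating mechanisms such as coordination:
    larger groups are assumed (for illustration) to coordinate less easily."""
    coordination = 1.0 / inp["group_size"]
    return inp["member_ability"] * (1 - inp["task_difficulty"]) + coordination

def output(process_result):
    """Outputs depend on the process result, not on the inputs directly."""
    return {"performance": round(process_result, 2)}

result = output(process(inputs))  # inputs -> process -> output
```

The point of the composition is the causal ordering: the inputs never reach the output except through the process function, mirroring the fully mediated form of the model.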

In comparison with inputs and outputs, group processes are often more difficult to measure, because a thorough understanding of what groups are doing and how they complete their work may require observing members while they actually perform a task. This may lead to a more accurate reflection of the true group processes, as opposed to relying on members to self-report their processes retrospectively. In addition, group processes evolve over time, which means that they cannot be adequately represented through a single observation. These difficult methodological issues have caused many studies to ignore processes and focus only on inputs and outputs. Empirical group research has therefore been criticized as treating processes as a “black box” (loosely specified and unmeasured), despite how prominently featured they are in the IPO model. Recently, however, a number of researchers have given renewed emphasis to the importance of capturing team member interactions, emphasizing the need to measure processes longitudinally and with more sophisticated measures.

Indicators of team effectiveness have generally been clustered into two general categories: group performance and member reactions. Group performance refers to the degree to which the group achieves the standard set by the users of its output. Examples include quality, quantity, timeliness, efficiency, and costs. In contrast, member reactions involve perceptions of satisfaction with group functioning, team viability, and personal development. For example, although the group may have been able to produce a high-quality product, mutual antagonism may be so high that members would prefer not to work with one another on future projects. In addition, some groups contribute to member well-being and growth, whereas others block individual development and hinder personal needs from being met.

Both categories of outcomes are clearly important, but performance outcomes are especially valued in the teams literature. This is because they can be measured more objectively (because they do not rely on team member self-reports) and make a strong case that inputs and processes affect the bottom line of group effectiveness.

Steiner’s Formula

Consistent with the IPO framework, Ivan Steiner derived the following formula to explain why teams starting off with a great deal of promise often end up being less than successful:

Actual productivity = potential productivity – process loss

Although potential productivity is the highest level of performance attainable, a group’s actual productivity often falls short of its potential because of the existence of process loss. Process loss refers to the suboptimal ways that groups operate, resulting in time and energy spent away from task performance. Examples of process losses include group conflict, communication breakdown, coordination difficulty, and social loafing (group members shirking responsibility and failing to exert adequate individual effort). Consistent with the assumptions of the IPO model, Steiner’s formula highlights the importance of group processes and reflects the notion that it is the processes and not the inputs (analogous to group potential) that create the group’s outputs. In other words, teams are a function of the interaction of team members and not simply the sum of individuals who perform tasks independently.
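Steiner's formula translates directly into code. The team size and loss figures below are hypothetical, chosen only to show how process losses such as social loafing erode potential:

```python
def actual_productivity(potential, process_loss):
    """Steiner: actual productivity = potential productivity - process loss."""
    return potential - process_loss

# Hypothetical five-person team: each member could contribute 10 units,
# but coordination difficulty and social loafing cost 12 units overall.
potential = 5 * 10
process_loss = 12
print(actual_productivity(potential, process_loss))  # → 38, short of the potential 50
```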

Limitations of the IPO Model

The major criticism that has been leveled against the IPO model is the assumption that group functioning is static and follows a linear progression from inputs through outputs. To incorporate the reality of dynamic change, feedback loops were added to the original IPO model, emanating primarily from outputs and feeding back to inputs or processes. However, the single-cycle, linear IPO path has been emphasized in most of the empirical research. Nevertheless, in both theory and measurement, current team researchers are increasingly invoking the notion of cyclical causal feedback, as well as nonlinear or conditional relationships.

Although the IPO framework is the dominant way of thinking about group performance in the teams literature, relatively few empirical studies have been devoted to the validity of the model itself. In addition, research directly testing the input-process-output links has frequently been conducted in laboratory settings, an approach that restricts the number of relevant variables that would realistically occur in an organization. However, although the IPO model assumes that process fully mediates the association between inputs and outputs, some research has suggested that a purely mediated model may be too limited. Therefore, alternative models have suggested that inputs may directly affect both processes and outputs.

Without question, the IPO model reflects the dominant way of thinking about group performance in the groups literature. As such, it has played an important role in guiding research design and encouraging researchers to sample from the input, process, and output categories in variable selection. Recent research is increasingly moving beyond a strictly linear progression and incorporating the reality of dynamic change. In addition, alternatives to the traditional IPO model have been suggested in which processes are not purely mediated.

References:

  • Hackman, J. R. (1987). The design of work teams. In J. Lorsch (Ed.), Handbook of organizational behavior (pp. 315-342). New York: Prentice Hall.
  • Ilgen, D. R., Hollenbeck, J. R., Johnson, M., & Jundt, D. (2005). Teams in organizations: From input-process-output models to IMOI models. Annual Review of Psychology, 56, 517-543.
  • Steiner, I. D. (1972). Group process and productivity. New York: Academic Press.


J Korean Med Sci. 2022 Apr 25; 37(16)

A Practical Guide to Writing Quantitative and Qualitative Research Questions and Hypotheses in Scholarly Articles

Edward Barroga

1 Department of General Education, Graduate School of Nursing Science, St. Luke’s International University, Tokyo, Japan.

Glafera Janet Matanguihan

2 Department of Biological Sciences, Messiah University, Mechanicsburg, PA, USA.

The development of research questions and the subsequent hypotheses are prerequisites to defining the main research purpose and specific objectives of a study. Consequently, these objectives determine the study design and research outcome. The development of research questions is a process based on knowledge of current trends, cutting-edge studies, and technological advances in the research field. Excellent research questions are focused and require a comprehensive literature search and in-depth understanding of the problem being investigated. Initially, research questions may be written as descriptive questions which could be developed into inferential questions. These questions must be specific and concise to provide a clear foundation for developing hypotheses. Hypotheses are more formal predictions about the research outcomes. These specify the possible results that may or may not be expected regarding the relationship between groups. Thus, research questions and hypotheses clarify the main purpose and specific objectives of the study, which in turn dictate the design of the study, its direction, and outcome. Studies developed from good research questions and hypotheses will have trustworthy outcomes with wide-ranging social and health implications.

INTRODUCTION

Scientific research is usually initiated by posing evidence-based research questions which are then explicitly restated as hypotheses. 1, 2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results. 3, 4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the inception of novel studies and the ethical testing of ideas. 5, 6

It is crucial to have knowledge of both quantitative and qualitative research, 2 as both types of research involve writing research questions and hypotheses. 7 However, these crucial elements of research are sometimes overlooked or, if not overlooked, framed without the forethought and meticulous attention they need. Planning and careful consideration are needed when developing quantitative or qualitative research, particularly when conceptualizing research questions and hypotheses. 4

There is a continuing need to support researchers in the creation of innovative research questions and hypotheses, as well as for journal articles that carefully review these elements. 1 When research questions and hypotheses are not carefully thought of, unethical studies and poor outcomes usually ensue. Carefully formulated research questions and hypotheses define well-founded objectives, which in turn determine the appropriate design, course, and outcome of the study. This article then aims to discuss in detail the various aspects of crafting research questions and hypotheses, with the goal of guiding researchers as they develop their own. Examples from the authors and peer-reviewed scientific articles in the healthcare field are provided to illustrate key points.

DEFINITIONS AND RELATIONSHIP OF RESEARCH QUESTIONS AND HYPOTHESES

A research question is what a study aims to answer after data analysis and interpretation. The answer is written at length in the discussion section of the paper. Thus, the research question gives a preview of the different parts and variables of the study meant to address the problem posed in the research question. 1 An excellent research question clarifies the research writing while facilitating understanding of the research topic, objective, scope, and limitations of the study. 5

On the other hand, a research hypothesis is an educated statement of an expected outcome. This statement is based on background research and current knowledge. 8 , 9 The research hypothesis makes a specific prediction about a new phenomenon 10 or a formal statement on the expected relationship between an independent variable and a dependent variable. 3 , 11 It provides a tentative answer to the research question to be tested or explored. 4

Hypotheses employ reasoning to predict a theory-based outcome. 10 These can also be developed from theories by focusing on components of theories that have not yet been observed. 10 The validity of hypotheses is often based on the testability of the prediction made in a reproducible experiment. 8

Conversely, hypotheses can also be rephrased as research questions. Several hypotheses based on existing theories and knowledge may be needed to answer a research question. Developing ethical research questions and hypotheses creates a research design that has logical relationships among variables. These relationships serve as a solid foundation for the conduct of the study. 4 , 11 Haphazardly constructed research questions can result in poorly formulated hypotheses and improper study designs, leading to unreliable results. Thus, the formulations of relevant research questions and verifiable hypotheses are crucial when beginning research. 12

CHARACTERISTICS OF GOOD RESEARCH QUESTIONS AND HYPOTHESES

Excellent research questions are specific and focused. These integrate collective data and observations to confirm or refute the subsequent hypotheses. Well-constructed hypotheses are based on previous reports and verify the research context. These are realistic, in-depth, sufficiently complex, and reproducible. More importantly, these hypotheses can be addressed and tested. 13

There are several characteristics of well-developed hypotheses. Good hypotheses are 1) empirically testable; 7, 10, 11, 13 2) backed by preliminary evidence; 9 3) testable by ethical research; 7, 9 4) based on original ideas; 9 5) grounded in evidence-based logical reasoning; 10 and 6) predictive. 11 Good hypotheses can infer ethical and positive implications, indicating the presence of a relationship or effect relevant to the research theme. 7, 11 These are initially developed from a general theory and branch into specific hypotheses by deductive reasoning. In the absence of a theory on which to base the hypotheses, inductive reasoning based on specific observations or findings forms more general hypotheses. 10

TYPES OF RESEARCH QUESTIONS AND HYPOTHESES

Research questions and hypotheses are developed according to the type of research, which can be broadly classified into quantitative and qualitative research. We provide a summary of the types of research questions and hypotheses under quantitative and qualitative research categories in Table 1 .

Research questions in quantitative research

In quantitative research, research questions inquire about the relationships among variables being investigated and are usually framed at the start of the study. These are precise and typically linked to the subject population, dependent and independent variables, and research design. 1 Research questions may also attempt to describe the behavior of a population in relation to one or more variables, or describe the characteristics of variables to be measured (descriptive research questions). 1, 5, 14 These questions may also aim to discover differences between groups within the context of an outcome variable (comparative research questions), 1, 5, 14 or elucidate trends and interactions among variables (relationship research questions). 1, 5 We provide examples of descriptive, comparative, and relationship research questions in quantitative research in Table 2.

Hypotheses in quantitative research

In quantitative research, hypotheses predict the expected relationships among variables. 15 Relationships among variables that can be predicted include 1) between a single dependent variable and a single independent variable (simple hypothesis) or 2) between two or more independent and dependent variables (complex hypothesis). 4, 11 Hypotheses may also specify the expected direction to be followed and imply an intellectual commitment to a particular outcome (directional hypothesis). 4 On the other hand, hypotheses may not predict the exact direction and are used in the absence of a theory, or when findings contradict previous studies (non-directional hypothesis). 4 In addition, hypotheses can 1) define interdependency between variables (associative hypothesis); 4 2) propose an effect on the dependent variable from manipulation of the independent variable (causal hypothesis); 4 3) state that there is no relationship between two variables (null hypothesis); 4, 11, 15 4) replace the working hypothesis if rejected (alternative hypothesis); 15 5) explain the relationship of phenomena to possibly generate a theory (working hypothesis); 11 6) involve quantifiable variables that can be tested statistically (statistical hypothesis); 11 or 7) express a relationship whose interlinks can be verified logically (logical hypothesis). 11 We provide examples of simple, complex, directional, non-directional, associative, causal, null, alternative, working, statistical, and logical hypotheses in quantitative research, as well as the definition of quantitative hypothesis-testing research, in Table 3.
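To make the null and statistical hypothesis types concrete, the following standard-library Python sketch tests a null hypothesis of no difference between two groups with Welch's t statistic. The data are fabricated for illustration only:

```python
import math
from statistics import mean, variance

# Fabricated outcome scores for two illustrative groups.
# H0 (null): there is no difference between the group means.
# H1 (alternative): the group means differ.
group_a = [5.1, 4.9, 5.3, 5.0, 5.2]
group_b = [4.4, 4.6, 4.3, 4.7, 4.5]

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    standard_error = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / standard_error

t = welch_t(group_a, group_b)
# A large |t| (judged against the t distribution) is evidence
# against the null hypothesis of no difference.
```

In practice the statistic would be compared with a t distribution (or computed with a statistics package) to obtain a p value.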

Research questions in qualitative research

Unlike research questions in quantitative research, research questions in qualitative research are usually continuously reviewed and reformulated. The central question and associated subquestions are stated more than the hypotheses. 15 The central question broadly explores a complex set of factors surrounding the central phenomenon, aiming to present the varied perspectives of participants. 15

There are varied goals for which qualitative research questions are developed. These questions can function in several ways, such as to 1) identify and describe existing conditions (contextual research questions); 2) describe a phenomenon (descriptive research questions); 3) assess the effectiveness of existing methods, protocols, theories, or procedures (evaluation research questions); 4) examine a phenomenon or analyze the reasons or relationships between subjects or phenomena (explanatory research questions); or 5) focus on unknown aspects of a particular topic (exploratory research questions). 5 In addition, some qualitative research questions provide new ideas for the development of theories and actions (generative research questions) or advance specific ideologies of a position (ideological research questions). 1 Other qualitative research questions may build on a body of existing literature and become working guidelines (ethnographic research questions). Research questions may also be broadly stated without specific reference to the existing literature or a typology of questions (phenomenological research questions), may be directed towards generating a theory of some process (grounded theory questions), or may address a description of the case and the emerging themes (qualitative case study questions). 15 We provide examples of contextual, descriptive, evaluation, explanatory, exploratory, generative, ideological, ethnographic, phenomenological, grounded theory, and qualitative case study research questions in qualitative research in Table 4, and the definition of qualitative hypothesis-generating research in Table 5.

Qualitative studies usually pose at least one central research question and several subquestions starting with How or What . These research questions use exploratory verbs such as explore or describe . These also focus on one central phenomenon of interest, and may mention the participants and research site. 15

Hypotheses in qualitative research

Hypotheses in qualitative research are stated in the form of a clear statement concerning the problem to be investigated. Unlike in quantitative research where hypotheses are usually developed to be tested, qualitative research can lead to both hypothesis-testing and hypothesis-generating outcomes. 2 When studies require both quantitative and qualitative research questions, this suggests an integrative process between both research methods wherein a single mixed-methods research question can be developed. 1

FRAMEWORKS FOR DEVELOPING RESEARCH QUESTIONS AND HYPOTHESES

Research questions followed by hypotheses should be developed before the start of the study. 1 , 12 , 14 It is crucial to develop feasible research questions on a topic that is interesting to both the researcher and the scientific community. This can be achieved by a meticulous review of previous and current studies to establish a novel topic. Specific areas are subsequently focused on to generate ethical research questions. The relevance of the research questions is evaluated in terms of clarity of the resulting data, specificity of the methodology, objectivity of the outcome, depth of the research, and impact of the study. 1 , 5 These aspects constitute the FINER criteria (i.e., Feasible, Interesting, Novel, Ethical, and Relevant). 1 Clarity and effectiveness are achieved if research questions meet the FINER criteria. In addition to the FINER criteria, Ratan et al. described focus, complexity, novelty, feasibility, and measurability for evaluating the effectiveness of research questions. 14

The PICOT and PEO frameworks are also used when developing research questions. 1 The following elements are addressed in these frameworks. PICOT: P-population/patients/problem, I-intervention or indicator being studied, C-comparison group, O-outcome of interest, and T-timeframe of the study. PEO: P-population being studied, E-exposure to preexisting conditions, and O-outcome of interest. 1 Research questions are also considered good if they meet the “FINERMAPS” framework: Feasible, Interesting, Novel, Ethical, Relevant, Manageable, Appropriate, Potential value/publishable, and Systematic. 14
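The PICOT elements can be treated as a fill-in template. A hypothetical sketch follows; the clinical content is invented for illustration, not taken from the article:

```python
# Hypothetical sketch: composing a research question from PICOT elements.
picot = {
    "P": "adults with type 2 diabetes",     # population/patients/problem
    "I": "a nurse-led education program",   # intervention or indicator
    "C": "usual care",                      # comparison group
    "O": "HbA1c levels",                    # outcome of interest
    "T": "six months",                      # timeframe of the study
}

def picot_question(elements):
    return (f"In {elements['P']}, does {elements['I']}, compared with "
            f"{elements['C']}, improve {elements['O']} over {elements['T']}?")

print(picot_question(picot))
```

Filling every slot before writing the question is a quick check that no framework element has been left unaddressed.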

As we indicated earlier, research questions and hypotheses that are not carefully formulated result in unethical studies or poor outcomes. To illustrate this, we provide some examples of ambiguous research questions and hypotheses that result in unclear and weak research objectives in quantitative research (Table 6) 16 and qualitative research (Table 7), 17 and show how to transform these ambiguous research questions and hypotheses into clear and good statements.

a These statements were composed for comparison and illustrative purposes only.

b These statements are direct quotes from Higashihara and Horiuchi. 16

a This statement is a direct quote from Shimoda et al. 17

The other statements were composed for comparison and illustrative purposes only.

CONSTRUCTING RESEARCH QUESTIONS AND HYPOTHESES

To construct effective research questions and hypotheses, it is very important to 1) clarify the background and 2) identify the research problem at the outset of the research, within a specific timeframe. 9 Then, 3) review or conduct preliminary research to collect all available knowledge about the possible research questions by studying theories and previous studies. 18 Afterwards, 4) construct research questions to investigate the research problem. Identify variables to be assessed from the research questions 4 and make operational definitions of constructs from the research problem and questions. Thereafter, 5) construct specific deductive or inductive predictions in the form of hypotheses. 4 Finally, 6) state the study aims. This general flow for constructing effective research questions and hypotheses prior to conducting research is shown in Fig. 1.

[Fig. 1 image: jkms-37-e121-g001.jpg]

Research questions are used more frequently in qualitative research than objectives or hypotheses. 3 These questions seek to discover, understand, explore, or describe experiences by asking “What” or “How.” The questions are open-ended to elicit a description rather than to relate variables or compare groups. The questions are continually reviewed, reformulated, and changed during the qualitative study. 3 In quantitative research, research questions are used more frequently in survey projects, whereas hypotheses are used more frequently in experiments, to compare variables and their relationships.

Hypotheses are constructed based on the variables identified and as an if-then statement, following the template, ‘If a specific action is taken, then a certain outcome is expected.’ At this stage, some ideas regarding expectations from the research to be conducted must be drawn. 18 Then, the variables to be manipulated (independent) and influenced (dependent) are defined. 4 Thereafter, the hypothesis is stated and refined, and reproducible data tailored to the hypothesis are identified, collected, and analyzed. 4 The hypotheses must be testable and specific, 18 and should describe the variables and their relationships, the specific group being studied, and the predicted research outcome. 18 Hypotheses construction involves a testable proposition to be deduced from theory, and independent and dependent variables to be separated and measured separately. 3 Therefore, good hypotheses must be based on good research questions constructed at the start of a study or trial. 12
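The if-then template described above can likewise be made mechanical. A hypothetical sketch; the variables are invented examples, not from the article:

```python
# Hypothetical sketch: stating a hypothesis as an if-then statement
# from a manipulated (independent) and an influenced (dependent) variable.
def if_then_hypothesis(action, expected_outcome):
    """Follows the template: 'If a specific action is taken,
    then a certain outcome is expected.'"""
    return f"If {action}, then {expected_outcome}."

hypothesis = if_then_hypothesis(
    "daily walking time is increased by 30 minutes",    # independent variable
    "resting heart rate will decrease within 8 weeks",  # dependent variable
)
```

Separating the two arguments forces the writer to name the manipulated and the influenced variable explicitly before the hypothesis is stated.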

In summary, research questions are constructed after establishing the background of the study. Hypotheses are then developed based on the research questions. Thus, it is crucial to have excellent research questions to generate superior hypotheses. In turn, these would determine the research objectives and the design of the study, and ultimately, the outcome of the research. 12 Algorithms for building research questions and hypotheses are shown in Fig. 2 for quantitative research and in Fig. 3 for qualitative research.


EXAMPLES OF RESEARCH QUESTIONS FROM PUBLISHED ARTICLES

  • EXAMPLE 1. Descriptive research question (quantitative research)
  • - Presents research variables to be assessed (distinct phenotypes and subphenotypes)
  • “BACKGROUND: Since COVID-19 was identified, its clinical and biological heterogeneity has been recognized. Identifying COVID-19 phenotypes might help guide basic, clinical, and translational research efforts.
  • “RESEARCH QUESTION: Does the clinical spectrum of patients with COVID-19 contain distinct phenotypes and subphenotypes?” 19
  • EXAMPLE 2. Relationship research question (quantitative research)
  • - Shows interactions between dependent variable (static postural control) and independent variable (peripheral visual field loss)
  • “Background: Integration of visual, vestibular, and proprioceptive sensations contributes to postural control. People with peripheral visual field loss have serious postural instability. However, the directional specificity of postural stability and sensory reweighting caused by gradual peripheral visual field loss remain unclear.
  • Research question: What are the effects of peripheral visual field loss on static postural control?” 20
  • EXAMPLE 3. Comparative research question (quantitative research)
  • - Clarifies the difference among groups with an outcome variable (patients enrolled in COMPERA with moderate PH or severe PH in COPD) and another group without the outcome variable (patients with idiopathic pulmonary arterial hypertension (IPAH))
  • “BACKGROUND: Pulmonary hypertension (PH) in COPD is a poorly investigated clinical condition.
  • RESEARCH QUESTION: Which factors determine the outcome of PH in COPD?
  • STUDY DESIGN AND METHODS: We analyzed the characteristics and outcome of patients enrolled in the Comparative, Prospective Registry of Newly Initiated Therapies for Pulmonary Hypertension (COMPERA) with moderate or severe PH in COPD as defined during the 6th PH World Symposium who received medical therapy for PH and compared them with patients with idiopathic pulmonary arterial hypertension (IPAH).” 21
  • EXAMPLE 4. Exploratory research question (qualitative research)
  • - Explores areas that have not been fully investigated (perspectives of families and children who receive care in clinic-based child obesity treatment) to have a deeper understanding of the research problem
  • “Problem: Interventions for children with obesity lead to only modest improvements in BMI and long-term outcomes, and data are limited on the perspectives of families of children with obesity in clinic-based treatment. This scoping review seeks to answer the question: What is known about the perspectives of families and children who receive care in clinic-based child obesity treatment? This review aims to explore the scope of perspectives reported by families of children with obesity who have received individualized outpatient clinic-based obesity treatment.” 22
  • EXAMPLE 5. Relationship research question (quantitative research)
  • - Defines interactions between dependent variable (use of ankle strategies) and independent variable (changes in muscle tone)
  • “Background: To maintain an upright standing posture against external disturbances, the human body mainly employs two types of postural control strategies: “ankle strategy” and “hip strategy.” While it has been reported that the magnitude of the disturbance alters the use of postural control strategies, it has not been elucidated how the level of muscle tone, one of the crucial parameters of bodily function, determines the use of each strategy. We have previously confirmed using forward dynamics simulations of human musculoskeletal models that an increased muscle tone promotes the use of ankle strategies. The objective of the present study was to experimentally evaluate a hypothesis: an increased muscle tone promotes the use of ankle strategies. Research question: Do changes in the muscle tone affect the use of ankle strategies?” 23

EXAMPLES OF HYPOTHESES IN PUBLISHED ARTICLES

  • EXAMPLE 1. Working hypothesis (quantitative research)
  • - A hypothesis that is initially accepted for further research to produce a feasible theory
  • “As fever may have benefit in shortening the duration of viral illness, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response when taken during the early stages of COVID-19 illness.” 24
  • “In conclusion, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response. The difference in perceived safety of these agents in COVID-19 illness could be related to the more potent efficacy to reduce fever with ibuprofen compared to acetaminophen. Compelling data on the benefit of fever warrant further research and review to determine when to treat or withhold ibuprofen for early stage fever for COVID-19 and other related viral illnesses.” 24
  • EXAMPLE 2. Exploratory hypothesis (qualitative research)
  • - Explores particular areas deeper to clarify subjective experience and develop a formal hypothesis potentially testable in a future quantitative approach
  • “We hypothesized that when thinking about a past experience of help-seeking, a self distancing prompt would cause increased help-seeking intentions and more favorable help-seeking outcome expectations.” 25
  • “Conclusion
  • Although a priori hypotheses were not supported, further research is warranted as results indicate the potential for using self-distancing approaches to increasing help-seeking among some people with depressive symptomatology.” 25
  • EXAMPLE 3. Hypothesis-generating research to establish a framework for hypothesis testing (qualitative research)
  • “We hypothesize that compassionate care is beneficial for patients (better outcomes), healthcare systems and payers (lower costs), and healthcare providers (lower burnout).” 26
  • “Compassionomics is the branch of knowledge and scientific study of the effects of compassionate healthcare. Our main hypotheses are that compassionate healthcare is beneficial for (1) patients, by improving clinical outcomes, (2) healthcare systems and payers, by supporting financial sustainability, and (3) HCPs, by lowering burnout and promoting resilience and well-being. The purpose of this paper is to establish a scientific framework for testing the hypotheses above. If these hypotheses are confirmed through rigorous research, compassionomics will belong in the science of evidence-based medicine, with major implications for all healthcare domains.” 26
  • EXAMPLE 4. Statistical hypothesis (quantitative research)
  • - An assumption is made about the relationship among several population characteristics (gender differences in sociodemographic and clinical characteristics of adults with ADHD). Validity is tested by statistical experiment or analysis (chi-square test, Student's t-test, and logistic regression analysis)
  • “Our research investigated gender differences in sociodemographic and clinical characteristics of adults with ADHD in a Japanese clinical sample. Due to unique Japanese cultural ideals and expectations of women's behavior that are in opposition to ADHD symptoms, we hypothesized that women with ADHD experience more difficulties and present more dysfunctions than men . We tested the following hypotheses: first, women with ADHD have more comorbidities than men with ADHD; second, women with ADHD experience more social hardships than men, such as having less full-time employment and being more likely to be divorced.” 27
  • “Statistical Analysis
  • ( text omitted ) Between-gender comparisons were made using the chi-squared test for categorical variables and Student's t-test for continuous variables…( text omitted ). A logistic regression analysis was performed for employment status, marital status, and comorbidity to evaluate the independent effects of gender on these dependent variables.” 27

EXAMPLES OF HYPOTHESIS AS WRITTEN IN PUBLISHED ARTICLES IN RELATION TO OTHER PARTS

  • EXAMPLE 1. Background, hypotheses, and aims are provided
  • “Pregnant women need skilled care during pregnancy and childbirth, but that skilled care is often delayed in some countries …( text omitted ). The focused antenatal care (FANC) model of WHO recommends that nurses provide information or counseling to all pregnant women …( text omitted ). Job aids are visual support materials that provide the right kind of information using graphics and words in a simple and yet effective manner. When nurses are not highly trained or have many work details to attend to, these job aids can serve as a content reminder for the nurses and can be used for educating their patients (Jennings, Yebadokpo, Affo, & Agbogbe, 2010) ( text omitted ). Importantly, additional evidence is needed to confirm how job aids can further improve the quality of ANC counseling by health workers in maternal care …( text omitted )” 28
  • “This has led us to hypothesize that the quality of ANC counseling would be better if supported by job aids. Consequently, a better quality of ANC counseling is expected to produce higher levels of awareness concerning the danger signs of pregnancy and a more favorable impression of the caring behavior of nurses.” 28
  • “This study aimed to examine the differences in the responses of pregnant women to a job aid-supported intervention during ANC visit in terms of 1) their understanding of the danger signs of pregnancy and 2) their impression of the caring behaviors of nurses to pregnant women in rural Tanzania.” 28
  • EXAMPLE 2. Background, hypotheses, and aims are provided
  • “We conducted a two-arm randomized controlled trial (RCT) to evaluate and compare changes in salivary cortisol and oxytocin levels of first-time pregnant women between experimental and control groups. The women in the experimental group touched and held an infant for 30 min (experimental intervention protocol), whereas those in the control group watched a DVD movie of an infant (control intervention protocol). The primary outcome was salivary cortisol level and the secondary outcome was salivary oxytocin level.” 29
  • “We hypothesize that at 30 min after touching and holding an infant, the salivary cortisol level will significantly decrease and the salivary oxytocin level will increase in the experimental group compared with the control group.” 29
  • EXAMPLE 3. Background, aim, and hypothesis are provided
  • “In countries where the maternal mortality ratio remains high, antenatal education to increase Birth Preparedness and Complication Readiness (BPCR) is considered one of the top priorities [1]. BPCR includes birth plans during the antenatal period, such as the birthplace, birth attendant, transportation, health facility for complications, expenses, and birth materials, as well as family coordination to achieve such birth plans. In Tanzania, although increasing, only about half of all pregnant women attend an antenatal clinic more than four times [4]. Moreover, the information provided during antenatal care (ANC) is insufficient. In the resource-poor settings, antenatal group education is a potential approach because of the limited time for individual counseling at antenatal clinics.” 30
  • “This study aimed to evaluate an antenatal group education program among pregnant women and their families with respect to birth-preparedness and maternal and infant outcomes in rural villages of Tanzania.” 30
  • “The study hypothesis was if Tanzanian pregnant women and their families received a family-oriented antenatal group education, they would (1) have a higher level of BPCR, (2) attend antenatal clinic four or more times, (3) give birth in a health facility, (4) have less complications of women at birth, and (5) have less complications and deaths of infants than those who did not receive the education.” 30

Research questions and hypotheses are crucial components to any type of research, whether quantitative or qualitative. These questions should be developed at the very beginning of the study. Excellent research questions lead to superior hypotheses, which, like a compass, set the direction of research, and can often determine the successful conduct of the study. Many research studies have floundered because the development of research questions and subsequent hypotheses was not given the thought and meticulous attention needed. The development of research questions and hypotheses is an iterative process based on extensive knowledge of the literature and insightful grasp of the knowledge gap. Focused, concise, and specific research questions provide a strong foundation for constructing hypotheses which serve as formal predictions about the research outcomes. Research questions and hypotheses are crucial elements of research that should not be overlooked. They should be carefully thought out and constructed when planning research. This avoids unethical studies and poor outcomes by defining well-founded objectives that determine the design, course, and outcome of the study.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Barroga E, Matanguihan GJ.
  • Methodology: Barroga E, Matanguihan GJ.
  • Writing - original draft: Barroga E, Matanguihan GJ.
  • Writing - review & editing: Barroga E, Matanguihan GJ.


  • Data Descriptor
  • Open access
  • Published: 03 May 2024

A dataset for measuring the impact of research data and their curation

  • Libby Hemphill   ORCID: orcid.org/0000-0002-3793-7281 1 , 2 ,
  • Andrea Thomer 3 ,
  • Sara Lafia 1 ,
  • Lizhou Fan 2 ,
  • David Bleckley   ORCID: orcid.org/0000-0001-7715-4348 1 &
  • Elizabeth Moss 1  

Scientific Data volume 11, Article number: 442 (2024)


Subjects: Research data, Social sciences

Science funders, publishers, and data archives make decisions about how to responsibly allocate resources to maximize the reuse potential of research data. This paper introduces a dataset developed to measure the impact of archival and data curation decisions on data reuse. The dataset describes 10,605 social science research datasets, their curation histories, and reuse contexts in 94,755 publications that cover 59 years from 1963 to 2022. The dataset was constructed from study-level metadata, citing publications, and curation records available through the Inter-university Consortium for Political and Social Research (ICPSR) at the University of Michigan. The dataset includes information about study-level attributes (e.g., PIs, funders, subject terms); usage statistics (e.g., downloads, citations); archiving decisions (e.g., curation activities, data transformations); and bibliometric attributes (e.g., journals, authors) for citing publications. This dataset provides information on factors that contribute to long-term data reuse, which can inform the design of effective evidence-based recommendations to support high-impact research data curation decisions.


Background & Summary

Recent policy changes in funding agencies and academic journals have increased data sharing among researchers and between researchers and the public. Data sharing advances science and provides the transparency necessary for evaluating, replicating, and verifying results. However, many data-sharing policies do not explain what constitutes an appropriate dataset for archiving or how to determine the value of datasets to secondary users 1 , 2 , 3 . Questions about how to allocate data-sharing resources efficiently and responsibly have gone unanswered 4 , 5 , 6 . For instance, data-sharing policies recognize that not all data should be curated and preserved, but they do not articulate metrics or guidelines for determining what data are most worthy of investment.

Despite the potential for innovation and advancement that data sharing holds, the best strategies to prioritize datasets for preparation and archiving are often unclear. Some datasets are likely to have more downstream potential than others, and data curation policies and workflows should prioritize high-value data instead of being one-size-fits-all. Though prior research in library and information science has shown that the “analytic potential” of a dataset is key to its reuse value 7 , work is needed to implement conceptual data reuse frameworks 8 , 9 , 10 , 11 , 12 , 13 , 14 . In addition, publishers and data archives need guidance to develop metrics and evaluation strategies to assess the impact of datasets.

Several existing resources have been compiled to study the reuse of scholarly products, such as datasets (Table  1 ); however, none of these resources include explicit information on how curation processes are applied to data to increase their value, maximize their accessibility, and ensure their long-term preservation. The CCex (Curation Costs Exchange) provides models of curation services along with cost-related datasets shared by contributors but does not make explicit connections between them or include reuse information 15 . Analyses on platforms such as DataCite 16 have focused on metadata completeness and record usage, but have not included related curation-level information. Analyses of GenBank 17 and FigShare 18 , 19 citation networks do not include curation information. Related studies of Github repository reuse 20 and Softcite software citation 21 reveal significant factors that impact the reuse of secondary research products but do not focus on research data. RD-Switchboard 22 and DSKG 23 are scholarly knowledge graphs linking research data to articles, patents, and grants, but largely omit social science research data and do not include curation-level factors. To our knowledge, other studies of curation work in organizations similar to ICPSR – such as GESIS 24 , Dataverse 25 , and DANS 26 – have not made their underlying data available for analysis.

This paper describes a dataset 27 compiled for the MICA project (Measuring the Impact of Curation Actions) led by investigators at ICPSR, a large social science data archive at the University of Michigan. The dataset was originally developed to study the impacts of data curation and archiving on data reuse. The MICA dataset has supported several previous publications investigating the intensity of data curation actions 28 , the relationship between data curation actions and data reuse 29 , and the structures of research communities in a data citation network 30 . Collectively, these studies help explain the return on various types of curatorial investments. The dataset that we introduce in this paper, which we refer to as the MICA dataset, has the potential to address research questions in the areas of science (e.g., knowledge production), library and information science (e.g., scholarly communication), and data archiving (e.g., reproducible workflows).

We constructed the MICA dataset 27 using records available at ICPSR. Dataset creation involved: collecting and enriching metadata for articles indexed in the ICPSR Bibliography of Data-related Literature against the Dimensions AI bibliometric database; gathering usage statistics for studies from ICPSR’s administrative database; processing data curation work logs from ICPSR’s project tracking platform, Jira; and linking data in social science studies and series to citing analysis papers (Fig.  1 ).

Figure 1: Steps to prepare the MICA dataset for analysis. External sources are red, primary internal sources are blue, and internal linked sources are green.

Enrich paper metadata

The ICPSR Bibliography of Data-related Literature is a growing database of literature in which data from ICPSR studies have been used. Its creation was funded by the National Science Foundation (Award 9977984), and for the past 20 years it has been supported by ICPSR membership and multiple US federally-funded and foundation-funded topical archives at ICPSR. The Bibliography was originally launched in the year 2000 to aid in data discovery by providing a searchable database linking publications to the study data used in them. The Bibliography collects the universe of output based on the data shared in each study, which is made available through each ICPSR study’s webpage. The Bibliography contains both peer-reviewed and grey literature, which provides evidence for measuring the impact of research data. For an item to be included in the ICPSR Bibliography, it must contain an analysis of data archived by ICPSR or contain a discussion or critique of the data collection process, study design, or methodology 31 . The Bibliography is manually curated by a team of librarians and information specialists at ICPSR who enter and validate entries. Some publications are supplied to the Bibliography by data depositors, and some citations are submitted to the Bibliography by authors who abide by ICPSR’s terms of use requiring them to submit citations to works in which they analyzed data retrieved from ICPSR. Most of the Bibliography is populated by Bibliography team members, who create custom queries for ICPSR studies performed across numerous sources, including Google Scholar, ProQuest, SSRN, and others. Each record in the Bibliography is one publication that has used one or more ICPSR studies. The version we used was captured on 2021-11-16 and included 94,755 publications.

To expand the coverage of the ICPSR Bibliography, we searched exhaustively for all ICPSR study names, unique numbers assigned to ICPSR studies, and DOIs 32 using a full-text index available through the Dimensions AI database 33 . We accessed Dimensions through a license agreement with the University of Michigan. ICPSR Bibliography librarians and information specialists manually reviewed and validated new entries that matched one or more search criteria. We then used Dimensions to gather enriched metadata and full-text links for items in the Bibliography with DOIs. We matched 43% of the items in the Bibliography to enriched Dimensions metadata including abstracts, field of research codes, concepts, and authors’ institutional information; we also obtained links to full text for 16% of Bibliography items. Based on licensing agreements, we included Dimensions identifiers and links to full text so that users with valid publisher and database access can construct an enriched publication dataset.
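The DOI-based portion of this enrichment can be sketched as a simple left join. The frames and column names below are invented for illustration; the actual Bibliography and Dimensions schemas differ, and the real workflow also matched on study names and ICPSR study numbers through Dimensions' full-text index.

```python
import pandas as pd

# Hypothetical Bibliography records; not every item has a DOI.
biblio = pd.DataFrame({
    "PAPER_ID": [1, 2, 3],
    "DOI": ["10.1000/aaa", "10.1000/bbb", None],
})

# Hypothetical Dimensions metadata keyed by DOI.
dimensions = pd.DataFrame({
    "DOI": ["10.1000/aaa", "10.1000/ccc"],
    "ABSTRACT": ["Study of...", "Analysis of..."],
    "FOR_CODES": ["1608", "1117"],
})

# Left join: unmatched items simply keep missing enrichment fields.
enriched = biblio.merge(dimensions, on="DOI", how="left")
match_rate = enriched["ABSTRACT"].notna().mean()
print(f"matched {match_rate:.0%} of Bibliography items")
```

A left join preserves the full Bibliography, which matters here because only 43% of items matched enriched metadata.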

Gather study usage data

ICPSR maintains a relational administrative database, DBInfo, that organizes study-level metadata and information on data reuse across separate tables. Studies at ICPSR consist of one or more files collected at a single time or for a single purpose; studies in which the same variables are observed over time are grouped into series. Each study at ICPSR is assigned a DOI, and its metadata are stored in DBInfo. Study metadata follows the Data Documentation Initiative (DDI) Codebook 2.5 standard. DDI elements included in our dataset are title, ICPSR study identification number, DOI, authoring entities, description (abstract), funding agencies, subject terms assigned to the study during curation, and geographic coverage. We also created variables based on DDI elements: total variable count, the presence of survey question text in the metadata, the number of author entities, and whether an author entity was an institution. We gathered metadata for ICPSR’s 10,605 unrestricted public-use studies available as of 2021-11-16 ( https://www.icpsr.umich.edu/web/pages/membership/or/metadata/oai.html ).

To link study usage data with study-level metadata records, we joined study metadata from DBinfo on study usage information, which included total study downloads (data and documentation), individual data file downloads, and cumulative citations from the ICPSR Bibliography. We also gathered descriptive metadata for each study and its variables, which allowed us to summarize and append recoded fields onto the study-level metadata such as curation level, number and type of principal investigators, total variable count, and binary variables indicating whether the study data were made available for online analysis, whether survey question text was made searchable online, and whether the study variables were indexed for search. These characteristics describe aspects of the discoverability of the data to compare with other characteristics of the study. We used the study and series numbers included in the ICPSR Bibliography as unique identifiers to link papers to metadata and analyze the community structure of dataset co-citations in the ICPSR Bibliography 32 .
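This join can be sketched with pandas; the column names below are hypothetical stand-ins for DBInfo fields, not ICPSR's actual schema.

```python
import pandas as pd

# Hypothetical study-level metadata extracted from DBInfo.
studies = pd.DataFrame({
    "STUDY": [1234, 5678],
    "TITLE": ["National Survey A", "Panel Study B"],
    "CURATION_LEVEL": [1, 3],
})

# Hypothetical usage counts for the same studies.
usage = pd.DataFrame({
    "STUDY": [1234, 5678],
    "TOTAL_DOWNLOADS": [1520, 310],
    "DATA_DOWNLOADS": [980, 145],
    "BIB_CITATIONS": [42, 7],
})

# Left join on the study number keeps every study,
# even those without recorded usage.
merged = studies.merge(usage, on="STUDY", how="left")
print(merged[["STUDY", "CURATION_LEVEL", "TOTAL_DOWNLOADS", "BIB_CITATIONS"]])
```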

Process curation work logs

Researchers deposit data at ICPSR for curation and long-term preservation. Between 2016 and 2020, more than 3,000 research studies were deposited with ICPSR. Since 2017, ICPSR has organized curation work into a central unit that provides levels of curation varying in the intensity and complexity of the data enhancement they provide. While the levels of curation are standardized as to effort (level one = least effort, level three = most effort), the specific curatorial actions undertaken for each dataset vary. The specific curation actions are captured in Jira, a work tracking program, which data curators at ICPSR use to collaborate and communicate their progress through tickets. We obtained access to a corpus of 669 completed Jira tickets corresponding to the curation of 566 unique studies between February 2017 and December 2019 28 .

To process the tickets, we focused only on their work log portions, which contained free text descriptions of work that data curators had performed on a deposited study, along with the curators’ identifiers, and timestamps. To protect the confidentiality of the data curators and the processing steps they performed, we collaborated with ICPSR’s curation unit to propose a classification scheme, which we used to train a Naive Bayes classifier and label curation actions in each work log sentence. The eight curation action labels we proposed 28 were: (1) initial review and planning, (2) data transformation, (3) metadata, (4) documentation, (5) quality checks, (6) communication, (7) other, and (8) non-curation work. We note that these categories of curation work are very specific to the curatorial processes and types of data stored at ICPSR, and may not match the curation activities at other repositories. After applying the classifier to the work log sentences, we obtained summary-level curation actions for a subset of all ICPSR studies (5%), along with the total number of hours spent on data curation for each study, and the proportion of time associated with each action during curation.
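A minimal sketch of this classification step might look like the following, using scikit-learn's multinomial Naive Bayes over a bag-of-words representation. The training sentences below are invented stand-ins; the real labeled work logs are confidential, and ICPSR's actual feature pipeline may differ.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented work-log sentences labeled with six of the eight
# curation action categories described above.
sentences = [
    "Reviewed the deposit and planned curation work",
    "Recoded missing values and converted files to ASCII",
    "Added subject terms and updated the study description",
    "Drafted the codebook and user documentation",
    "Ran consistency checks on variable ranges",
    "Emailed the PI about undocumented variables",
]
labels = [
    "initial review and planning",
    "data transformation",
    "metadata",
    "documentation",
    "quality checks",
    "communication",
]

# Bag-of-words features feeding a multinomial Naive Bayes classifier.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(sentences, labels)

# Label a new work-log sentence.
print(clf.predict(["Converted SPSS files and recoded values"]))
```

In practice each work-log sentence would be labeled this way and then aggregated per study into proportions of time spent on each action.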

Data Records

The MICA dataset 27 connects records for each of ICPSR’s archived research studies to the research publications that use them and related curation activities available for a subset of studies (Fig.  2 ). Each of the three tables published in the dataset is available as a study archived at ICPSR. The data tables are distributed as statistical files available for use in SAS, SPSS, Stata, and R as well as delimited and ASCII text files. The dataset is organized around studies and papers as primary entities. The studies table lists ICPSR studies, their metadata attributes, and usage information; the papers table was constructed using the ICPSR Bibliography and Dimensions database; and the curation logs table summarizes the data curation steps performed on a subset of ICPSR studies.

Studies (“ICPSR_STUDIES”): 10,605 social science research datasets available through ICPSR up to 2021-11-16 with variables for ICPSR study number, digital object identifier, study name, series number, series title, authoring entities, full-text description, release date, funding agency, geographic coverage, subject terms, topical archive, curation level, single principal investigator (PI), institutional PI, the total number of PIs, total variables in data files, question text availability, study variable indexing, level of restriction, total unique users downloading study data files and codebooks, total unique users downloading data only, and total unique papers citing data through November 2021. Studies map to the papers and curation logs tables through ICPSR study numbers as “STUDY”. However, not every study in this table will have records in the papers and curation logs tables.

Papers (“ICPSR_PAPERS”): 94,755 publications collected from 2000-08-11 to 2021-11-16 in the ICPSR Bibliography and enriched with metadata from the Dimensions database with variables for paper number, identifier, title, authors, publication venue, item type, publication date, input date, ICPSR series numbers used in the paper, ICPSR study numbers used in the paper, the Dimensions identifier, and the Dimensions link to the publication’s full text. Papers map to the studies table through ICPSR study numbers in the “STUDY_NUMS” field. Each record represents a single publication, and because a researcher can use multiple datasets when creating a publication, each record may list multiple studies or series.

Curation logs (“ICPSR_CURATION_LOGS”): 649 curation logs for 563 ICPSR studies (although most studies in the subset had one curation log, some studies were associated with multiple logs, with a maximum of 10) curated between February 2017 and December 2019 with variables for study number, action labels assigned to work description sentences using a classifier trained on ICPSR curation logs, hours of work associated with a single log entry, and total hours of work logged for the curation ticket. Curation logs map to the study and paper tables through ICPSR study numbers as “STUDY”. Each record represents a single logged action, and future users may wish to aggregate actions to the study level before joining tables.
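Since a paper can list several study numbers and a study can have several log entries, joining the published tables typically involves aggregating the logs and exploding the multi-valued field. A minimal pandas sketch, with invented values and an assumed delimiter for STUDY_NUMS:

```python
import pandas as pd

# Invented rows shaped like the papers and curation logs tables.
papers = pd.DataFrame({
    "PAPER_ID": [1, 2],
    "STUDY_NUMS": ["1234;5678", "5678"],  # delimiter is illustrative
})
logs = pd.DataFrame({
    "STUDY": [1234, 1234, 5678],
    "ACTION": ["metadata", "quality checks", "data transformation"],
    "HOURS": [1.5, 0.5, 3.0],
})

# Aggregate logged hours to the study level before joining.
hours_per_study = logs.groupby("STUDY", as_index=False)["HOURS"].sum()

# Explode the multi-valued study field to one study per row.
links = papers.assign(STUDY=papers["STUDY_NUMS"].str.split(";")).explode("STUDY")
links["STUDY"] = links["STUDY"].astype(int)

paper_hours = links.merge(hours_per_study, on="STUDY", how="left")
print(paper_hours[["PAPER_ID", "STUDY", "HOURS"]])
```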

Figure 2: Entity-relation diagram.

Technical Validation

We report on the reliability of the dataset’s metadata in the following subsections. To support future reuse of the dataset, curation services provided through ICPSR improved data quality by checking for missing values, adding variable labels, and creating a codebook.

All 10,605 studies available through ICPSR have a DOI and a full-text description summarizing what the study is about, the purpose of the study, the main topics covered, and the questions the PIs attempted to answer when they conducted the study. Personal names (i.e., principal investigators) and organizational names (i.e., funding agencies) are standardized against an authority list maintained by ICPSR; geographic names and subject terms are also standardized and hierarchically indexed in the ICPSR Thesaurus 34 . Many of ICPSR’s studies (63%) are in a series and are distributed through the ICPSR General Archive (56%), a non-topical archive that accepts any social or behavioral science data. While study data have been available through ICPSR since 1962, the earliest digital release date recorded for a study was 1984-03-18, when ICPSR’s database was first employed, and the most recent date is 2021-10-28 when the dataset was collected.

Curation level information was recorded starting in 2017 and is available for 1,125 studies (11%); approximately 80% of studies with assigned curation levels received curation services, equally distributed between Levels 1 (least intensive), 2 (moderately intensive), and 3 (most intensive) (Fig.  3 ). Detailed descriptions of ICPSR’s curation levels are available online 35 . Additional metadata are available for a subset of 421 studies (4%), including whether the study has a single PI or an institutional PI, the total number of PIs involved, the total number of variables recorded, and whether the study is available for online analysis, has searchable question text, has variables that are indexed for search, contains one or more restricted files, and is completely restricted. We provide this additional metadata for this subset of ICPSR studies because they were released within the past five years and detailed curation and usage information were available for them. Usage statistics, including total downloads and data file downloads, are available for this subset of studies as well; citation statistics are available for 8,030 studies (76%). Most ICPSR studies have fewer than 500 users, as indicated by total downloads or citations (Fig.  4 ).

Figure 3: ICPSR study curation levels.

Figure 4: ICPSR study usage.

A subset of 43,102 publications (45%) available in the ICPSR Bibliography had a DOI. Author metadata were entered as free text, meaning that variations exist and may require additional normalization and pre-processing prior to analysis; for example, individual names may appear in different sort orders (e.g., “Earls, Felton J.” versus “Stephen W. Raudenbush”). Most of the items in the ICPSR Bibliography as of 2021-11-16 were journal articles (59%), reports (14%), conference presentations (9%), or theses (8%) (Fig.  5 ). The number of publications collected in the Bibliography has increased each decade since the inception of ICPSR in 1962 (Fig.  6 ). Most ICPSR studies (76%) have one or more citations in a publication.

Figure 5: ICPSR Bibliography citation types.

Figure 6: ICPSR citations by decade.

Usage Notes

The dataset consists of three tables that can be joined using the “STUDY” key as shown in Fig.  2 . The “ICPSR_PAPERS” table contains one row per paper with one or more cited studies in the “STUDY_NUMS” column. We manipulated and analyzed the tables as CSV files with the Pandas library 36 in Python and the Tidyverse packages 37 in R.
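A hedged Pandas sketch of that join, assuming the multi-valued “STUDY_NUMS” column is semicolon-delimited (the actual delimiter in the released CSVs may differ):

```python
import pandas as pd

# Toy versions of two MICA tables; only "STUDY" and "STUDY_NUMS" are
# column names documented in the text, and the delimiter is an assumption.
papers = pd.DataFrame({
    "PAPER_ID": [1, 2],
    "STUDY_NUMS": ["100;200", "200"],  # one or more cited studies per paper
})
studies = pd.DataFrame({"STUDY": [100, 200], "TITLE": ["Study A", "Study B"]})

# Unnest the multi-valued citation column to one row per (paper, study)
# pair, then join on the shared "STUDY" key.
pairs = (papers.assign(STUDY=papers["STUDY_NUMS"].str.split(";"))
               .explode("STUDY")
               .astype({"STUDY": int}))
linked = pairs.merge(studies, on="STUDY")
```

After the explode step, `linked` holds one row per paper–study citation, which is the shape needed for the network and regression analyses described below.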

The present MICA dataset can be used independently to study the relationship between curation decisions and data reuse. Evidence of reuse for specific studies is available in several forms: usage information, including downloads and citation counts; and citation contexts within papers that cite data. Analysis may also be performed on the citation network formed between datasets and papers that use them. Finally, curation actions can be associated with properties of studies and usage histories.
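For instance, the paper–study citation network is bipartite, and a study’s degree in it is simply its citation count; a minimal sketch with illustrative edges:

```python
from collections import Counter

# (paper_id, study_number) edges from the citation network; values are
# illustrative, not drawn from the dataset.
edges = [(1, 100), (1, 200), (2, 200), (3, 200)]

# A study's degree in the bipartite network equals the number of papers
# citing it.
study_degree = Counter(study for _paper, study in edges)
most_cited, n_cites = study_degree.most_common(1)[0]
```

Libraries such as NetworkX would be the natural next step for the community-structure and clique analyses cited below, but the degree computation itself needs only the standard library.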

This dataset has several limitations of which users should be aware. First, Jira tickets can only be used to represent the intensiveness of curation for activities undertaken since 2017, when ICPSR started using both curation levels and Jira. Studies published before 2017 were all curated, but documentation of the extent of that curation was not standardized and therefore could not be included in these analyses. Second, the measure of publications relies upon the authors’ clarity of data citation and the ICPSR Bibliography staff’s ability to discover citations of varying formality and clarity. Thus, there is always a chance that some secondary-data-citing publications have been left out of the Bibliography. Finally, there may be some cases in which a paper in the ICPSR Bibliography did not actually obtain data from ICPSR. For example, PIs have often written about or even distributed their data prior to its archival at ICPSR. Such publications would not have cited ICPSR, but they are still collected in the Bibliography as being directly related to the data that were eventually deposited at ICPSR.

In summary, the MICA dataset contains relationships between two main types of entities – papers and studies – which can be mined. The tables in the MICA dataset have supported network analysis (community structure and clique detection) 30 ; natural language processing (NER for dataset reference detection) 32 ; visualizing citation networks (to search for datasets) 38 ; and regression analysis (on curation decisions and data downloads) 29 . The data are currently being used to develop research metrics and recommendation systems for research data. Given that DOIs are provided for ICPSR studies and articles in the ICPSR Bibliography, the MICA dataset can also be used with other bibliometric databases, including DataCite, Crossref, OpenAlex, and related indexes. Subscription-based services, such as Dimensions AI, are also compatible with the MICA dataset. In some cases, these services provide abstracts or full text for papers from which data citation contexts can be extracted for semantic content analysis.
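As one illustration of linking to those external indexes, a DOI can be resolved against the public Crossref and OpenAlex REST APIs; the sketch below only constructs the request URLs and does not make a network call:

```python
# DOI of this data descriptor, used as an example lookup key.
doi = "10.1038/s41597-024-03303-2"

# Crossref's works endpoint takes the bare DOI; OpenAlex accepts a full
# DOI URL as the work identifier. Both patterns follow the services'
# public REST documentation.
crossref_url = f"https://api.crossref.org/works/{doi}"
openalex_url = f"https://api.openalex.org/works/https://doi.org/{doi}"
```

Fetching those URLs (e.g., with `requests`) returns JSON metadata records that can be merged with the MICA tables on DOI.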

Code availability

The code 27 used to produce the MICA project dataset is available on GitHub at https://github.com/ICPSR/mica-data-descriptor and through Zenodo with the identifier https://doi.org/10.5281/zenodo.8432666 . Data manipulation and pre-processing were performed in Python. Data curation for distribution was performed in SPSS.

He, L. & Han, Z. Do usage counts of scientific data make sense? An investigation of the Dryad repository. Library Hi Tech 35 , 332–342 (2017).

Brickley, D., Burgess, M. & Noy, N. Google dataset search: Building a search engine for datasets in an open web ecosystem. In The World Wide Web Conference - WWW ‘19 , 1365–1375 (ACM Press, San Francisco, CA, USA, 2019).

Buneman, P., Dosso, D., Lissandrini, M. & Silvello, G. Data citation and the citation graph. Quantitative Science Studies 2 , 1399–1422 (2022).

Chao, T. C. Disciplinary reach: Investigating the impact of dataset reuse in the earth sciences. Proceedings of the American Society for Information Science and Technology 48 , 1–8 (2011).

Parr, C. et al . A discussion of value metrics for data repositories in earth and environmental sciences. Data Science Journal 18 , 58 (2019).

Eschenfelder, K. R., Shankar, K. & Downey, G. The financial maintenance of social science data archives: Four case studies of long–term infrastructure work. J. Assoc. Inf. Sci. Technol. 73 , 1723–1740 (2022).

Palmer, C. L., Weber, N. M. & Cragin, M. H. The analytic potential of scientific data: Understanding re-use value. Proceedings of the American Society for Information Science and Technology 48 , 1–10 (2011).

Zimmerman, A. S. New knowledge from old data: The role of standards in the sharing and reuse of ecological data. Sci. Technol. Human Values 33 , 631–652 (2008).

Cragin, M. H., Palmer, C. L., Carlson, J. R. & Witt, M. Data sharing, small science and institutional repositories. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 368 , 4023–4038 (2010).

Fear, K. M. Measuring and Anticipating the Impact of Data Reuse . Ph.D. thesis, University of Michigan (2013).

Borgman, C. L., Van de Sompel, H., Scharnhorst, A., van den Berg, H. & Treloar, A. Who uses the digital data archive? An exploratory study of DANS. Proceedings of the Association for Information Science and Technology 52 , 1–4 (2015).

Pasquetto, I. V., Borgman, C. L. & Wofford, M. F. Uses and reuses of scientific data: The data creators’ advantage. Harvard Data Science Review 1 (2019).

Gregory, K., Groth, P., Scharnhorst, A. & Wyatt, S. Lost or found? Discovering data needed for research. Harvard Data Science Review (2020).

York, J. Seeking equilibrium in data reuse: A study of knowledge satisficing . Ph.D. thesis, University of Michigan (2022).

Kilbride, W. & Norris, S. Collaborating to clarify the cost of curation. New Review of Information Networking 19 , 44–48 (2014).

Robinson-Garcia, N., Mongeon, P., Jeng, W. & Costas, R. DataCite as a novel bibliometric source: Coverage, strengths and limitations. Journal of Informetrics 11 , 841–854 (2017).

Qin, J., Hemsley, J. & Bratt, S. E. The structural shift and collaboration capacity in GenBank networks: A longitudinal study. Quantitative Science Studies 3 , 174–193 (2022).

Acuna, D. E., Yi, Z., Liang, L. & Zhuang, H. Predicting the usage of scientific datasets based on article, author, institution, and journal bibliometrics. In Smits, M. (ed.) Information for a Better World: Shaping the Global Future. iConference 2022 ., 42–52 (Springer International Publishing, Cham, 2022).

Zeng, T., Wu, L., Bratt, S. & Acuna, D. E. Assigning credit to scientific datasets using article citation networks. Journal of Informetrics 14 , 101013 (2020).

Koesten, L., Vougiouklis, P., Simperl, E. & Groth, P. Dataset reuse: Toward translating principles to practice. Patterns 1 , 100136 (2020).

Du, C., Cohoon, J., Lopez, P. & Howison, J. Softcite dataset: A dataset of software mentions in biomedical and economic research publications. J. Assoc. Inf. Sci. Technol. 72 , 870–884 (2021).

Aryani, A. et al . A research graph dataset for connecting research data repositories using RD-Switchboard. Sci Data 5 , 180099 (2018).

Färber, M. & Lamprecht, D. The data set knowledge graph: Creating a linked open data source for data sets. Quantitative Science Studies 2 , 1324–1355 (2021).

Perry, A. & Netscher, S. Measuring the time spent on data curation. Journal of Documentation 78 , 282–304 (2022).

Trisovic, A. et al . Advancing computational reproducibility in the Dataverse data repository platform. In Proceedings of the 3rd International Workshop on Practical Reproducible Evaluation of Computer Systems , P-RECS ‘20, 15–20, https://doi.org/10.1145/3391800.3398173 (Association for Computing Machinery, New York, NY, USA, 2020).

Borgman, C. L., Scharnhorst, A. & Golshan, M. S. Digital data archives as knowledge infrastructures: Mediating data sharing and reuse. Journal of the Association for Information Science and Technology 70 , 888–904, https://doi.org/10.1002/asi.24172 (2019).

Lafia, S. et al . MICA Data Descriptor. Zenodo https://doi.org/10.5281/zenodo.8432666 (2023).

Lafia, S., Thomer, A., Bleckley, D., Akmon, D. & Hemphill, L. Leveraging machine learning to detect data curation activities. In 2021 IEEE 17th International Conference on eScience (eScience) , 149–158, https://doi.org/10.1109/eScience51609.2021.00025 (2021).

Hemphill, L., Pienta, A., Lafia, S., Akmon, D. & Bleckley, D. How do properties of data, their curation, and their funding relate to reuse? J. Assoc. Inf. Sci. Technol. 73 , 1432–44, https://doi.org/10.1002/asi.24646 (2021).

Lafia, S., Fan, L., Thomer, A. & Hemphill, L. Subdivisions and crossroads: Identifying hidden community structures in a data archive’s citation network. Quantitative Science Studies 3 , 694–714, https://doi.org/10.1162/qss_a_00209 (2022).

ICPSR. ICPSR Bibliography of Data-related Literature: Collection Criteria. https://www.icpsr.umich.edu/web/pages/ICPSR/citations/collection-criteria.html (2023).

Lafia, S., Fan, L. & Hemphill, L. A natural language processing pipeline for detecting informal data references in academic literature. Proc. Assoc. Inf. Sci. Technol. 59 , 169–178, https://doi.org/10.1002/pra2.614 (2022).

Hook, D. W., Porter, S. J. & Herzog, C. Dimensions: Building context for search and evaluation. Frontiers in Research Metrics and Analytics 3 , 23, https://doi.org/10.3389/frma.2018.00023 (2018).

ICPSR. ICPSR Thesaurus. https://www.icpsr.umich.edu/web/ICPSR/thesaurus (2002).

ICPSR. ICPSR Curation Levels. https://www.icpsr.umich.edu/files/datamanagement/icpsr-curation-levels.pdf (2020).

McKinney, W. Data Structures for Statistical Computing in Python. In van der Walt, S. & Millman, J. (eds.) Proceedings of the 9th Python in Science Conference , 56–61 (2010).

Wickham, H. et al . Welcome to the Tidyverse. Journal of Open Source Software 4 , 1686 (2019).

Fan, L., Lafia, S., Li, L., Yang, F. & Hemphill, L. DataChat: Prototyping a conversational agent for dataset search and visualization. Proc. Assoc. Inf. Sci. Technol. 60 , 586–591 (2023).

Acknowledgements

We thank the ICPSR Bibliography staff, the ICPSR Data Curation Unit, and the ICPSR Data Stewardship Committee for their support of this research. This material is based upon work supported by the National Science Foundation under grant 1930645. This project was made possible in part by the Institute of Museum and Library Services LG-37-19-0134-19.

Author information

Authors and Affiliations

Inter-university Consortium for Political and Social Research, University of Michigan, Ann Arbor, MI, 48104, USA

Libby Hemphill, Sara Lafia, David Bleckley & Elizabeth Moss

School of Information, University of Michigan, Ann Arbor, MI, 48104, USA

Libby Hemphill & Lizhou Fan

School of Information, University of Arizona, Tucson, AZ, 85721, USA

Andrea Thomer

Contributions

L.H. and A.T. conceptualized the study design, D.B., E.M., and S.L. prepared the data, S.L., L.F., and L.H. analyzed the data, and D.B. validated the data. All authors reviewed and edited the manuscript.

Corresponding author

Correspondence to Libby Hemphill .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Hemphill, L., Thomer, A., Lafia, S. et al. A dataset for measuring the impact of research data and their curation. Sci Data 11 , 442 (2024). https://doi.org/10.1038/s41597-024-03303-2

Received : 16 November 2023

Accepted : 24 April 2024

Published : 03 May 2024

DOI : https://doi.org/10.1038/s41597-024-03303-2
