FiggeritsAnswers.com

Scientific work based on research

If you already solved this puzzle and are looking for other definitions from the same level then head over to Figgerits Level 160 Answers

Gamer Digest

Figgerits Scientific work based on research Answer

Figgerits is a mobile puzzle game developed by Hitapps, and it’s available on iOS and Android. This game will test your vocabulary and general knowledge if you’re looking for a challenging brain teaser. You might come across a clue that stumps you, and we’re here to help. Below is the Figgerits “Scientific work based on research” answer, in case you’re stuck and can’t figure it out.

The answer to the Figgerits clue Scientific work based on research is THESIS.

Other Clues from this Puzzle

  • Lack of anything in particular
  • Someone who prefers to live in isolation
  • Fully developed, no longer young
  • Oil conduit
  • To polish metal
  • (3rd person) To necessitate, require
  • Female protagonist
  • Don’t try to __ the truth with pretty lies
  • (syn.) Boring, dull, insipid
  • Spot for vows
  • Sorry for the crimes
  • Almost transparent
  • A must-have for a White Christmas

The Figgerits Scientific work based on research answer above is from level 159. Visit the Figgerits level 159 answers page if you need help with any other clues in this particular puzzle to help you figure out the cryptogram.

As new levels are added to the game, we’ll update our master list of Figgerits answers, so you can reference it if you ever need help solving a puzzle.

Figgerits is available to download on the App Store and Google Play.

Check out our site’s word games section for more challenging word puzzles and answers.

Puzzle Game Master

Puzzle Game Answers and Solutions

Scientific work based on research: Figgerits Answer + Phrase

The Figgerits “Scientific work based on research” answer, along with its phrase, is provided on this page. The game is developed by Hitapps and is available on the Google Play Store and Apple App Store. Figgerits is a kind of cross-logic word puzzle game for adults that will blow your mind and train brainpower. Play IQ logic games, solve brain puzzles, and complete top word games to win. Use clues to decrypt the message and decipher the cryptogram. A Figgerit is a word-connect brain puzzle: when the mind task is completed, it yields a little truism written onto the solution dashes.

Note: Visit PuzzleGameMaster.com to support our hard work whenever you get stuck at any level. See the links below for all other levels.

Scientific work based on research: THESIS
Phrase: THAT WILL BE HIS PUNISHMENT – AS WELL AS THE PRISON
  • Instrument to announce things to soldiers: Figgerits Answer + Phrase
  • (plural) A medieval instrument for writing: Figgerits Answer + Phrase
  • Slow-witted and dull: Figgerits Answer + Phrase

Thank you for visiting this page. If you need more Figgerits answers, click the link above; if an answer is wrong, please leave a comment and our team will update it as soon as possible.

Words Answers

Scientific work based on research Figgerits Answers

Greetings, player! You’ve come to the right place: all the answers for the Figgerits game are published here. Stuck on a level? Sooner or later you will need help getting through this challenging game, and our website is here to equip you with the Figgerits “Scientific work based on research” answer and other useful information like tips, solutions, and cheats. In addition to Figgerits, the developer Hitapps has created other amazing games.

You will find more tips for other clues on the Figgerits Level 160 answers page.

Scientific work based on research

My Word Games

Scientific work based on research Figgerits Answer

A few minutes ago, I was playing the clue “Scientific work based on research” in the game Figgerits, and I was able to find its answer. Now I can reveal the word to help all upcoming players. The Figgerits answers here will be kept up to date throughout the lifetime of the game.

Answer of Figgerits Scientific work based on research: THESIS

Please remember that I’ll always mention the master topic of the game (Figgerits Answers), the link to the previous clue (Spot for vows Figgerits), and the link to the main level (Figgerits answers level 159). You may want to know the content of nearby topics, so these links will tell you about it!

Please let us know your thoughts; they are always welcome. Have you thought about leaving a comment to correct a mistake or to add extra value to the topic? I’m all ears.

Figgerits

Figgerits: What's a scientific research space?

Figgerits answers

Hi, and thank you for visiting our website, where you will be able to find all the answers for the Figgerits game. Figgerits is a wonderful new word game developed by Hitapps, known for its puzzle word games on the Android and Apple stores. Figgerits isn’t only a logic puzzle and smart game; it’s a kind of cross-logic word puzzle game for adults that will blow your mind and train brainpower. Sounds interesting, right? Play IQ logic games, solve brain puzzles, and complete top word games to win. Use clues to decrypt the message and decipher the cryptogram. It is a letter-guessing game where you have to find phrases: you are given the definition of the hidden words and you have to correctly find the solution. Choose from a range of topics like Elementary, Great Minds, Proverbs, Historical facts, Space, Fauna, Economics & Politics, Planet Earth, Sports, and more! Give your brain some exercise and solve your way through brilliant levels! The answers are divided into several pages to keep it clear.

What's a scientific research space?

More questions from same level

  • Involving X-rays
  • Solidified or thickened
  • The __ ceremony celebrated the union of two hearts
  • Insufficient body mass
  • State of unpredictability
  • Physical manifestation
  • Commanding attention
  • Negligent behavior
  • The bridge was __ sound, ensuring safety for all
  • Notable or substantial
  • The __ of restrictions brought relief to the community
  • Not clear or recognizable
  • Related to life sciences (adv.)
  • Ally or partner in a union

Scientific work based on research 

Here you go for the “Scientific work based on research” Figgerits answer. This clue was last seen on Figgerits Level 160. The answer we have for it in our database has a total of 6 letters. At Answers.org, we understand that sometimes even the most seasoned puzzle solvers encounter challenges they can’t overcome. That’s why we’ve compiled a complete guide to Figgerits, providing answers and solutions for every level. This resource ensures that no player gets stuck for too long, offering a safety net for those moments when a puzzle seems just out of reach.

POSSIBLE ANSWER: THESIS

Our website is a reliable companion for your Figgerits journey, ensuring you have all the tools you need to conquer any puzzle. The easy-to-use interface allows you to navigate through solutions effortlessly, giving you the assistance you need without spoiling the fun of the game.

Related Solutions

Here is the list with other clues from the same Level 160 puzzle:

  • Lack of anything in particular
  • Someone who prefers to live in isolation
  • Fully developed, no longer young
  • Oil conduit
  • To polish metal
  • (3rd person) To necessitate, require
  • Female protagonist
  • Don't try to __ the truth with pretty lies
  • (syn.) Boring, dull, insipid
  • Spot for vows
  • Sorry for the crimes
  • Almost transparent
  • A must-have for a White Christmas

Gamer Digest

Figgerits Scientific investigation Answer

Figgerits is a mobile puzzle game developed by Hitapps, and it’s available on iOS and Android. This game will test your vocabulary and general knowledge if you’re looking for a challenging brain teaser. You might come across a clue that stumps you, and we’re here to help. Below is the Figgerits “Scientific investigation” answer, in case you’re stuck and can’t figure it out.

The answer to the Figgerits clue Scientific investigation is RESEARCH.

Other Clues from this Puzzle

  • Firefighters connect hoses to it
  • It can stay behind after a chemical reaction
  • This episode is a __ on his reputation
  • Lacking depth or substance
  • The process of gaining knowledge
  • To back off
  • Insolent and cheeky
  • We had a __ meal yesterday
  • Lingerie or briefs
  • Every person in a group
  • (syn.) Must, ought to
  • Magicians use this often

The Figgerits Scientific investigation answer above is from level 76. Visit the Figgerits level 76 answers page if you need help with any other clues in this particular puzzle to help you figure out the cryptogram.

As new levels are added to the game, we’ll update our master list of Figgerits answers, so you can reference it if you ever need help solving a puzzle.

Figgerits is available to download on the App Store and Google Play.

Check out our site’s word games section for more challenging word puzzles and answers.

Are You Ready to Rack Your Brain and Be Astonished?

If so, let’s play Figgerits! It is a unique word game where you reveal an interesting fact or quote by solving definition puzzles. Figgerits is more than just a crossword — each level is a small discovery.

Figgerits Features

  • 10 000+ words to figure out
  • 600+ curious facts that will blow your mind
  • 15 regular categories
  • 25 additional rare categories
  • Two types of difficulty

Get the Benefits

  • Lots of fun facts will entertain you
  • Myriads of puzzles will train your brain
  • Tons of words will improve your vocabulary
  • Plenty of curious information will provide you with topics for small talk

Improving open and rigorous science: ten key future research opportunities related to rigor, reproducibility, and transparency in scientific research

Danny Valdez, Colby J. Vorland, Andrew W. Brown, Evan Mayo-Wilson, Justin Otten, Richard Ball, Rachel Levy, Dubravka Svetina Valdivia, David B. Allison

1 Indiana University School of Public Health, Bloomington, IN, 47403, USA
2 Project TIER, Haverford College, Haverford, Pennsylvania, 19041, USA
3 Indiana University Purdue University Indianapolis Fairbanks School of Public Health, Indianapolis, IN, 46223, USA
4 Mathematical Association of America, 1529 18th St. NW, Washington, DC, 20036, USA
5 Indiana University School of Education, Bloomington, IN, 47401, USA

Associated data

No data are associated with this article.

All participants have provided their permission to be named in this article.

Peer Review Summary

Reviewer name(s) | Review status
Sheenah M Mische | Approved
Christopher A Mebane | Approved
Judith A Hewitt | Approved

Background: As part of a coordinated effort to expand research activity around rigor, reproducibility, and transparency (RRT) across scientific disciplines, a team of investigators at the Indiana University School of Public Health-Bloomington hosted a workshop in October 2019 with international leaders to discuss key opportunities for RRT research.

Objective: The workshop aimed to identify research priorities and opportunities related to RRT.

Design: Over two days, workshop attendees gave presentations and participated in three working groups: (1) Improving Education & Training in RRT, (2) Reducing Statistical Errors and Increasing Analytic Transparency, and (3) Looking Outward: Increasing Truthfulness and Accuracy of Research Communications. Following small-group discussions, the working groups presented their findings, and participants discussed the research opportunities identified. The investigators compiled a list of research priorities, which was circulated to all participants for feedback.

Results: Participants identified the following priority research questions: (1) Can RRT-focused statistics and mathematical modeling courses improve statistics practice?; (2) Can specialized training in scientific writing improve transparency?; (3) Does modality (e.g., face-to-face, online) affect the efficacy of RRT-related education?; (4) How can automated programs help identify errors more efficiently?; (5) What is the prevalence and impact of errors in scientific publications (e.g., analytic inconsistencies, statistical errors, and other objective errors)?; (6) Do error prevention workflows reduce errors?; (7) How do we encourage post-publication error correction?; (8) How does ‘spin’ in research communication affect stakeholder understanding and use of research evidence?; (9) Do tools to aid writing research reports increase comprehensiveness and clarity of research reports?; and (10) Is it possible to inculcate scientific values and norms related to truthful, rigorous, accurate, and comprehensive scientific reporting?

Conclusion: Participants identified important and relatively unexplored questions related to improving RRT. This list may be useful to the scientific community and investigators seeking to advance meta-science (i.e. research on research).

Introduction

Rigor, reproducibility, and transparency (RRT) are scientific cornerstones that promote truthful, accurate, and objective science ( McNutt, 2014 ). In the context of scientific research, rigor is defined as a thorough, careful approach that enhances the veracity of findings ( Casadevall & Fang, 2012 ). There are several types of reproducibility , which include the ability to evaluate and follow the same procedures as previous studies, obtain comparable results, and draw similar inferences ( Goodman et al ., 2016 ; National Academies of Sciences, 2019 ). Transparency is a process by which methodology, experimental design, coding, and data analysis tools are reported clearly and openly shared ( Nosek et al ., 2015 ; Prager et al ., 2019 ). Together, these scientific norms represent the best means of obtaining objective knowledge of the world ( Anderson et al ., 2010 ; Allison et al ., 2016 ). The science concerning these norms is a specific branch of meta-science, or “research on research”, led by scientists who promote these values through the education of early career scientists, identifying areas of concern for scientific validity, and postulating paths toward stronger, more credible science ( Ioannidis et al ., 2015 ).

Several factors compete with the pursuit of rigorous, reproducible, and transparent research. For example, the rate of scientific publication has risen dramatically in the last two decades. Although this is indicative of many important scientific breakthroughs ( Van Noorden, 2014 ), the rate of manuscript retractions due to either researcher error or malfeasance has also increased ( Steen et al ., 2013 ). A survey found between 40% and 70% of scientists agreed that factors including fraud, selective reporting, and pressure to publish contribute to the irreproducibility of scientific findings ( Fanelli, 2018 ). These concerns also have the potential to decrease public trust in science, although research on this question is needed ( National Academies of Sciences, 2017 ).

Basic and applied science are undermined when scientists fail to uphold high standards of conduct ( Prager et al ., 2019 ). Given that many authors have identified issues or concerns in science, the emerging challenge for scholars in this area is to find workable solutions to improve RRT, rather than simply continuing to illustrate problems related to RRT ( Allen & Mehler, 2019 ). To this end, in October 2019, Indiana University School of Public Health-Bloomington hosted a multidisciplinary meeting of leading scholars to discuss ongoing RRT-related challenges. The purpose of the meeting, which was funded by the Alfred P. Sloan Foundation, was to identify new opportunities to advance sound scientific practice, from the early stages of planning a study, through to execution and the communication of findings. This paper presents findings from that meeting.

The meeting was structured around three areas:

  • (1) Improving education & training in RRT.
  • (2) Reducing statistical errors and increasing analytic transparency.
  • (3) Looking outward: increasing truthfulness and accuracy of research communications.

Participants

We invited participants based on prior contributions to RRT research. Participants included representatives from several leading organizations, as well as Indiana University (IU) faculty, staff, and graduate students who were invited to join the meeting and proceedings ( Table 1 ). Invited guests who were not federal or IU employees received a $1,000 honorarium for their participation in the meeting.

Table 1. Meeting participants, affiliations, and subgroup assignments.

Name | Affiliation | Subgroup
Richard Ball, Ph.D. | Project TIER [Teaching Integrity in Empirical Research] | Improving Education & Training in rigor, reproducibility, and transparency [RRT]
Rachel Levy, Ph.D. | Mathematical Association of America | Improving Education & Training in RRT
Keith Baggerly, Ph.D. | University of Texas | Reducing Statistical Errors and Increasing Analytic Transparency
John Ioannidis, M.D., DSc | METRICS [Meta-Research Innovation Center at Stanford] | Reducing Statistical Errors and Increasing Analytic Transparency
Brian Nosek, Ph.D. | Center for Open Science | Reducing Statistical Errors and Increasing Analytic Transparency
Phillipe Ravaud, M.D., Ph.D. | Paris Descartes University | Looking Outward: Increasing Truthfulness and Accuracy of Research Communications
Machell Town, Ph.D. | Centers for Disease Control and Prevention | Looking Outward: Increasing Truthfulness and Accuracy of Research Communications
Matt Vassar, MBA, Ph.D. | Oklahoma State University | Looking Outward: Increasing Truthfulness and Accuracy of Research Communications
David B. Allison, Ph.D. | Dean of the School of Public Health | Improving Education & Training
Dubravka Svetina, Ph.D. | School of Education | Improving Education & Training
Elizabeth Housworth, Ph.D. | Mathematics | Improving Education & Training
Emily Meanwell, Ph.D. | Social Science Research Commons | Improving Education & Training
Roger Zoh, MS, Ph.D. | School of Public Health | Improving Education & Training
Andrew W. Brown, Ph.D. | School of Public Health | Reducing Statistical Errors and Increasing Analytic Transparency
Stephanie Dickinson, MS | School of Public Health | Reducing Statistical Errors and Increasing Analytic Transparency
Mandy Mejia, Ph.D. | Statistics | Reducing Statistical Errors and Increasing Analytic Transparency
Carmen Tekwe, MS, Ph.D. | School of Public Health | Reducing Statistical Errors and Increasing Analytic Transparency
Evan Mayo-Wilson, DPhil | School of Public Health | Looking Outward: Increasing Truthfulness and Accuracy of Research Communications
Ana Bento, Ph.D. | School of Public Health | Looking Outward: Increasing Truthfulness and Accuracy of Research Communications
Jutta Schickore, Ph.D. | History and Philosophy of Science and Medicine | Looking Outward: Increasing Truthfulness and Accuracy of Research Communications
Jamie Wittenberg, M.B.S, MSLIS | Library System | Looking Outward: Increasing Truthfulness and Accuracy of Research Communications

Additional attendees: Lilian Golzarri Arroyo, MS (IU School of Public Health); Chris Bogert, Ph.D. (IU Applied Pharmacology & Toxicology); Sean Grant, DPhil (IUPUI School of Public Health); Stasa Milojevic, Ph.D. (IU Informatics); Luis Mestre, MS (IU School of Public Health); Justin Otten, Ph.D. (IU School of Public Health); Danny Valdez, Ph.D. (IU School of Public Health); Colby J. Vorland, Ph.D. (IU School of Public Health)

Meeting format

The two-day meeting comprised nine prepared research talks, moderated panel discussions, and small-group, open-forum-style sessions related to each of the three previously stated goals.

Day one. On the first day, participants presented 10–12 minute research talks, each of which was followed by a moderated question-and-answer period. Participants discussed questions pertaining to RRT and sought to identify emerging areas of research, including novel approaches, testable outcomes, and potential limitations. During the afternoon session, participants were divided into three small groups to discuss potential research opportunities, each moderated by an IU faculty representative charged with compiling notes for record keeping and dissemination.

Day two. On the second day, one representative from each group summarized major points through a brief presentation, which was followed by a question-and-answer session with all participants. This dialogue was intended to clarify ideas raised and to identify fundable research opportunities. The meeting concluded with a call to action by the Dean of the School of Public Health-Bloomington and Co-Principal Investigator of the project (DA), to continue promoting interdisciplinary RRT Science.

Subgroup 1: improving education & training in RRT

We asked the first subgroup to discuss research opportunities related to implementing and testing RRT-guided academic curricula. The group identified elements of current undergraduate and graduate education that contribute to problematic data practices, including possible underlying causes and potential solutions (see Table 2 ). Three primary education-related questions guided the discussion:

Table 2. The two most salient challenges identified for each research question.

Improving education & training in rigor, reproducibility, and transparency (RRT)
  • Question 1 (RRT-focused statistics and mathematical modeling courses): 1. It would be difficult to isolate and to evaluate the effects of changes to existing curricula. 2. Proximal measures related to technical skills might not translate into improved research practices.
  • Question 2 (specialized training in scientific writing): 1. Writing is an abstract science, which would make measuring outcomes challenging. 2. There are currently limited existing graduate level curricula that pertain exclusively to writing.
  • Question 3 (modality of RRT-related education): 1. Feasibility concerns including cost, time, and other additional resources needed to facilitate an intervention. 2. Examining heterogeneity requires large and diverse populations, and is practically difficult.

Reducing statistical errors and increasing analytic transparency
  • Question 4 (automated error detection): 1. Automation may be technically possible for only certain types of errors. 2. New programs intended to automate error correction require a certain level of computer programming expertise.
  • Question 5 (prevalence and impact of errors): 1. It would be difficult to generalize the prevalence of errors, because many common errors have field-specific names. 2. Assessing the impact of errors is largely subjective, unless strict guidelines are agreed upon and adopted.
  • Question 6 (error prevention workflows): 1. It would be difficult to determine if workflows are entirely responsible for reduced error and improved research practice. 2. It may be challenging to identify generalizable workflows that logically function across disciplines.
  • Question 7 (post-publication error correction): 1. It would be difficult to implement standard post-publication error correction guidelines that function effectively across disciplines. 2. There is a hesitancy to embrace error correction as a normal component of the editorial process.

Looking outward: increasing truthfulness and accuracy of research communications
  • Question 8 (‘spin’ in research communication): 1. The effects of spin in controlled research settings might not generalize to real-world decisions. 2. Reviewing and categorizing text is both subjective and time consuming.
  • Question 9 (tools to aid writing research reports): 1. Although tools could be developed for testing, implementation challenges could mitigate their effectiveness in practice. 2. Previous guidelines have had minimal impact on reporting quality.
  • Question 10 (inculcating scientific values and norms): 1. There are few model interventions to form self-identity. 2. There may be limited opportunities and enthusiasm to integrate values-based education in classes that focus on technical skills.

Note: We present here only two of the most salient challenges per question.

  • (1) Can RRT-focused statistics and mathematical modeling courses improve statistical practice?
  • (2) Can specialized training in scientific writing improve transparency?
  • (3) Does modality affect the efficacy of RRT-related education?

With respect to each question, participants discussed existing and entrenched practices, the feasibility of change, and the proper audience for interventions.

1. Can RRT-focused statistics and mathematical modeling courses improve statistical practice?

Incorrect analyses are some of the most common, preventable errors in science ( Resnik, 2012 ). Scholars attribute mistakes to gaps in statistics education ( Thompson, 2006 ). With the rise of data science as a component of scientific exploration, students need more exposure to evidence-based pedagogical approaches to statistics and mathematical modeling ( GAISE, 2016 ; GAIMME, 2016 ; NASEM, 2018 ). Many introductory data science courses include topics from statistics (e.g., contingency tables [chi-square tests], multiple regression, analysis of variance, and the broader general linear model) ( Gorsuch, 2015 ), as well as mathematical modeling approaches and computational algorithms. These topics can be reframed through an RRT lens as modules/domains within existing mathematics or data-science courses, or structured as entirely new data-driven courses.

Indeed, participants noted that to improve RRT practices, there are opportunities to design new courses with a direct RRT focus at the undergraduate, graduate, and postdoctoral levels ( Willig et al ., 2018 ). Courses could include modules related to the identification of errors in published research, proposing solutions to these errors, addressing real-world contexts, and demonstrating the importance of careful methodological decision-making ( Peng, 2015 ). Specific assignments could test for and reinforce RRT principles, such as research compendia (i.e. sharable electronic folders containing code and other exploratory information to validate reported results) ( Ball & Medeiros, 2012 ; King, 1995 ; Stodden et al ., 2015 ), workflows, which are described later in this paper, and other research projects related to communication and computational reproducibility. The learning practices could be assessed to ensure that students appropriately apply concepts rather than demonstrate rote-formula memorization ( Thompson, 2002 ; Ware et al ., 2013 ). Integrating learning into stages of education where students are concurrently engaged in research can help improve both retention and transfer of the RRT ideas to future scientific settings.

2. Can specialized training in scientific writing improve transparency?

Clear scientific writing is necessary to reproduce and build on research findings. To facilitate better writing, scholars have developed curricula to help academics improve writing practice and quality (e.g., Goodson, 2016 ). However, many academic writing programs focus on personal habit building and development of linguistic mechanics to craft more powerful prose ( Elbow, 1998 ; Kellogg & Whiteford, 2009 ). In such courses, RRT-related dimensions of writing (such as writing transparently or minimizing ‘spin’) may not be emphasized. Thus, the subgroup discussed how existing writing curricula could incorporate RRT principles, what new writing courses guided by RRT would entail, and research opportunities to test the efficacy of new writing curricula.

Participants identified several RRT-specific writing principles and discussed how a deeper understanding of the extent to which writing and research are intertwined may increase transparency. Examples included learning about methodological reporting guidelines, writing compelling post-publication peer reviews, and other transparent writing practices. The group also discussed how courses could be developed or redesigned specifically to center on RRT principles. One theme of the discussion was the need for rigorous testing of student learning outcomes associated with novel writing content. However, a primary concern was the identification of the appropriate outcome measures for writing-specific interventions ( Barnes et al ., 2015 ) given the subjective and nebulous nature of constructs like writing quality, individual improvement, and writing-related self-efficacy.

3. Does modality affect the efficacy of RRT-related education?

Another research opportunity discussed by the subgroup related to instructional modality, which refers to the manner in which a curriculum or intervention is experienced by the learner ( Perry & Pilati, 2011 ). These may include traditional face-to-face instruction, synchronous or asynchronous online meetings/trainings, and various hybrid formats ( Beall et al ., 2014 ). Understanding the relative benefits of each modality is important in choosing an appropriate intervention. Indeed, educational needs vary among learner groups; for example, what is most effective for undergraduate students may not be effective or feasible for post-doctoral researchers with full-time professional commitments. Broad research questions identified by the group included:

  • a) What modalities exist beyond face-to-face, online, or hybrid instruction?
  • b) How can technology push modality beyond online courses and other Massive Open Online Course formats?
  • c) Which modality is most effective, and among which audiences?

In the context of previously discussed coursework in statistics and writing, participants explored the strengths and weaknesses of various modalities and how interventions could be conducted to test them empirically. There are logistical considerations, such as cost, space, and faculty time, that further complicate the feasibility of these interventions. For example, a face-to-face intervention may offer more tailored instruction to individual learners, while an online intervention may better deliver content to a wider audience. Thus, the subgroup identified several areas for future research, including comparisons of student learning across modalities, strategies for scaling educational content to institutional constraints, and the moderating effects of learner demographics on intervention efficacy.

Subgroup 2: reducing statistical errors and increasing analytical transparency

Errors are “actions or conclusions that are demonstrably and unequivocally incorrect from a logical or epistemological point of view” ( Brown et al ., 2018 ). Despite the adage that science is self-correcting, uncorrected errors are prevalent in the scientific literature ( Brown et al ., 2018 ; Ioannidis, 2012 ). Subgroup 2 discussed questions related to reducing and mitigating such errors, including:

  • (4) Can automation help identify errors more efficiently?
  • (5) What is the prevalence and impact of errors within disciplines?
  • (6) Do standardized procedures (i.e., workflows) prevent errors?
  • (7) How do we encourage post-publication error correction?

The costs and benefits associated with each question were also discussed (see Table 2 ).

4. Can automation help identify errors more efficiently?

Various automated and manual methods have been developed and applied to assess analytic inconsistencies, statistical errors and improbabilities, and other errors (e.g., Anaya, 2016 ; Baggerly & Coombes, 2009 ; Brown & Heathers, 2017 ; Georgescu & Wren, 2018 ; Labbé et al ., 2019 ; Monsarrat & Vergnes, 2018 ). An increase in automation (i.e., producing more user-friendly tools and algorithms) has the potential for surveilling the prevalence, prevention, and correction of errors. However, more work is needed to determine the most efficient use of such tools, including their collective abilities to detect field-specific issues that require subject matter expertise ( Lakens & Debruine, 2020 ). For example, the automatic recomputation of some p-values is possible using the program ‘Statcheck’, but only for articles that utilize the American Psychological Association’s (APA) in-text citation style for statistical reporting ( Nuijten et al ., 2017 ). Other examples require statistical ratios ( Georgescu & Wren, 2018 ), or integer-based data and sample sizes (e.g., Brown & Heathers, 2017 ; Heathers et al ., 2018 ), which are both challenging to automate and not recurrent across all fields.
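To make the flavor of such checks concrete, the following Python sketch recomputes two-tailed p-values from APA-style t-test reports, the same class of consistency check Statcheck performs. It is a minimal illustration rather than Statcheck itself; the regular expression and the tolerance threshold are assumptions:

```python
import re

from scipy import stats

# Matches APA-style reports such as "t(28) = 2.20, p = .036".
APA_T = re.compile(
    r"t\((?P<df>\d+)\)\s*=\s*(?P<t>-?\d+\.?\d*)\s*,\s*p\s*=\s*(?P<p>\d*\.?\d+)"
)

def check_t_reports(text: str, tolerance: float = 0.005) -> list:
    """Recompute two-tailed p-values from reported t and df; flag mismatches."""
    flags = []
    for match in APA_T.finditer(text):
        df = int(match.group("df"))
        t = float(match.group("t"))
        p_reported = float(match.group("p"))
        p_recomputed = 2 * stats.t.sf(abs(t), df)  # two-tailed p from the t distribution
        if abs(p_recomputed - p_reported) > tolerance:
            flags.append((match.group(0), p_reported, round(p_recomputed, 4)))
    return flags

# The reported p is inconsistent: t(28) = 2.20 corresponds to p ≈ .036, not .003.
print(check_t_reports("The groups differed, t(28) = 2.20, p = .003."))
```

Extending even this toy checker beyond a single reporting style immediately runs into the format heterogeneity noted above, which is why broader automation remains an open research question.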

Automated error detection is currently limited to a narrow range of errors. Other types of errors might be detected by careful readers, such as the ignoring of clustering in cluster-randomized trials ( Brown et al ., 2015 ; Heo et al ., 2018 ), misinterpretation of differences in nominal significance, and post-hoc fallacies ( Brown et al ., 2019 ; George et al ., 2016 ). The subgroup discussed opportunities to define, and possibly automate, diagnostic checklists, advanced natural language processing, or other computational informatics approaches that would facilitate the detection of these errors. These novel automated measures could be tested empirically for effectiveness.
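One of those reader-detectable errors, the misinterpretation of differences in nominal significance, is easy to reproduce in simulation. In this sketch the effect sizes, sample sizes, and random seed are arbitrary assumptions chosen for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Two treatment arms drawn with the SAME true effect relative to control.
control = rng.normal(0.0, 1.0, 40)
arm_a = rng.normal(0.5, 1.0, 40)
arm_b = rng.normal(0.5, 1.0, 40)

p_a = stats.ttest_ind(arm_a, control).pvalue   # may land below .05 by chance
p_b = stats.ttest_ind(arm_b, control).pvalue   # may land above .05 by chance
p_ab = stats.ttest_ind(arm_a, arm_b).pvalue    # the test that actually compares A and B

print(f"A vs. control: p = {p_a:.3f}")
print(f"B vs. control: p = {p_b:.3f}")
print(f"A vs. B:       p = {p_ab:.3f}")
```

Claiming that arm A “worked” and arm B “did not” because only one comparison crosses p < .05 skips the direct A-versus-B test; a checklist item demanding that direct test is the kind of automatable rule discussed here.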

5. What is the prevalence and impact of errors?

Different errors will have varying impacts on study conclusions. While some errors can be easily corrected and reported, others fundamentally invalidate study conclusions. Some general statistical errors have occurred repeatedly across disciplines for decades (e.g., mistaken differences due to “regression to the mean” since at least 1886 [ Thomas et al ., 2020 ] and “differences in nominal significance” for decades [ Altman, 2002 ; Thompson, 2002 ]). Automated methods, such as those outlined above, have been used almost exclusively to illuminate problems but not necessarily to correct them ( Georgescu & Wren, 2018 ; Monsarrat & Vergnes, 2018 ; Nuijten et al ., 2017 ).
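Regression to the mean, the oldest error on that list, can likewise be demonstrated in a few lines; the distribution parameters and the selection cutoff below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# No intervention occurs between the two measurements; only the noise differs.
true_score = rng.normal(50, 10, 10_000)
baseline = true_score + rng.normal(0, 5, 10_000)   # measurement error at baseline
follow_up = true_score + rng.normal(0, 5, 10_000)  # independent error at follow-up

# Enroll only the most extreme scorers at baseline, as many studies do.
selected = baseline > 65

print(f"Baseline mean of selected group:  {baseline[selected].mean():.1f}")
print(f"Follow-up mean of selected group: {follow_up[selected].mean():.1f}")
# The follow-up mean drifts back toward the population mean, an apparent
# change produced by selection and noise alone.
```

Any pre-post comparison that enrolls participants on extreme baseline values will show this apparent “effect”, which is one reason the error keeps resurfacing across fields.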

To achieve the goal of error reduction, one must first know how pervasive errors are. Yet, it remains challenging to generalize the detection and correction of scientific errors across disciplines because of field specificity (i.e. the unique nuances and methodological specificities inherent to a specific field of study) ( Lohse et al ., 2020 ), the various terminologies used for describing the same models (e.g. ‘Hierarchical Linear’ models vs ‘Multilevel’ models), as well as the seeming need to repackage the same problem as new disciplines arise (e.g. ongoing multiple comparison issues raised anew with the advent of genome-wide association studies, microarray, microbiome, and functional magnetic resonance imaging methods). Thus, this subgroup discussed the value of longitudinal, discipline-specific error surveillance and error frequency estimation to collect empirical evidence about error rate differences among disciplines. Other issues discussed were the identification of better prevalence estimates across fields, and how simulation studies can modify our confidence in the understanding of the prevalence of errors and their generalizability across disciplines.

6. Do error prevention workflows reduce errors?

Workflows are the various approaches for accomplishing scientific objectives, usually expressed as tasks and dependencies ( Ludäscher et al., 2009 ). The implementation of clear, logical workflows can potentially prevent errors and improve research transparency. Workflows may be of value to catch errors at various stages of the research process, from planning, to data collection and handling procedures, and reporting/manuscript screening ( Cohen-Boulakia et al ., 2017 ). Error detection processes within scientific workflows may serve as mechanisms to prevent errors before publication, akin to how text duplication software (e.g. iThenticate) is used prophylactically to catch inadvertent plagiarism. Separately, some research groups implement workflows that require two independent scientists to verify data, analyses, and statistical reporting prior to manuscript publication, with at least one of those individuals being a professional statistician ( George et al ., 2016 ). A similar workflow is to establish “red teams”, consisting of methodologists, statisticians, and subject-matter experts, to critique the study design and analysis for errors, offering incentives akin to “bug bounty” programs in computer software development ( Lakens, 2020 ).

The development and dissemination of research workflows could be modeled after those outlined above, or in other ways, such as the use of checklists to complete work systematically. Registrations, reporting guidelines, and other workflow approaches essentially serve as checklists of the plan for a study and what should be reported. Although this subgroup agreed about the importance of preventive versus post-publication workflows and the integration of automated methods to detect errors, questions regarding their efficacy remained. For example, how might workflows be generalized across academic disciplines? At what stage should standardized data collection and handling be taught to scientists to maintain data provenance (e.g., Long, 2009 )? And can workflows be tested empirically? What is the cost of automated versus manual workflows, versus none at all, at detecting and preventing errors? How do workflows impact productivity?
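One step of such a workflow is cheap to automate: a pre-submission script that recomputes the manuscript’s reported summary statistics directly from the raw data, in the spirit of the dual-verification practice described above. The sketch below is hypothetical; the file name, column name, reported values, and tolerance are all assumptions for illustration:

```python
from pathlib import Path

import pandas as pd

# Hypothetical manifest of numbers the manuscript reports.
REPORTED = {"outcome_mean": 12.4, "outcome_sd": 3.1, "n": 208}
DATA_FILE = Path("data/trial_outcomes.csv")

def verify_reported_stats(tolerance: float = 0.05) -> list:
    """Recompute the reported summary statistics from raw data; list mismatches."""
    df = pd.read_csv(DATA_FILE)
    recomputed = {
        "outcome_mean": df["outcome"].mean(),
        "outcome_sd": df["outcome"].std(),
        "n": len(df),
    }
    return [
        f"{name}: reported {REPORTED[name]}, recomputed {value:.2f}"
        for name, value in recomputed.items()
        if abs(value - REPORTED[name]) > tolerance
    ]

if __name__ == "__main__":
    mismatches = verify_reported_stats()
    print(mismatches or "all reported statistics verified")
```

Kept alongside the analysis code, such a manifest makes any later change to data or code that silently alters a reported number fail the check before resubmission, though whether workflows like this measurably reduce published errors is exactly the empirical question posed here.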

7. How do we encourage post-publication error correction?

Science cannot self-correct without processes that facilitate correction ( Firestein, 2012 ). Unfortunately, errors in science may be tied to perceived reputational costs, yet it is unclear whether correcting errors actually harms a researcher’s reputation ( Azoulay et al ., 2015 ). Thus, destigmatizing error correction, and likewise embracing the importance of scientific failures, may be of value for individual scientists and for editors overseeing content through the peer-review process ( Teixeira da Silva & Al-Khatib, 2019 ). Journals and their editors, as gatekeepers of science, are key stakeholders in this culture shift. They may also require practical guidelines to facilitate judgement-free corrections that would be acceptable to editors and reviewers.

Error correction should be done in a fair and efficient manner (e.g., Vorland et al ., 2020 ). Although there are several existing standards for publication ethics and norms (e.g., the Committee on Publication Ethics [COPE] and the International Committee of Medical Journal Editors [ICMJE]), few have been tested empirically. The subgroup debated how journals and their editors could take part in empirical trials of the best approaches to facilitate correction while minimizing additional costs. For example, based on our experiences, journals have few procedures for handling errors separate from typical scholarly dialogue. We believe it is important to examine which procedures are more efficient and fair to authors, whether such procedures can be standardized to enable editors to handle different types of errors consistently and transparently, whether existing correction mechanisms are sufficient or require additional innovation (e.g., whether retraction and republication suffice, or whether versioning is needed), and how authors can be supported and encouraged in the process. Three such costs that require further study are the actual cost of post-publication error correction across all parties involved (e.g., page charges, salary), how those costs to the scientific enterprise compare with the costs of implementing prevention strategies, and the cost-benefit of salvaging a publication containing an error (depending on the quality of the collected data) versus simply retracting it.

Subgroup 3: looking outward - increasing truthfulness and accuracy of research communications

The third working group discussed opportunities for research related to research reporting and dissemination, primarily highlighting the importance of accuracy and truthfulness when communicating research findings (see Table 2 ). Specifically, this group identified research opportunities tied to the following questions:

  • (8) How does ‘spin’ in research communication affect stakeholders’ understanding and use of research evidence?
  • (9) Do tools to aid writing research reports increase the comprehensiveness and clarity of research reports?
  • (10) Is it possible to inculcate scientific values and norms related to truthful, rigorous, accurate, and comprehensive scientific reporting?

8. How does “spin” in research communication affect stakeholders’ understanding and use of research evidence?

In addition to conducting research rigorously, investigators should describe their research comprehensively and interpret their findings by balancing the strengths and limitations of their methods and results ( Brown et al., 2017 ). By contrast, researchers might ‘spin’ their results through misleading reporting, misleading interpretation, and inappropriate extrapolation ( Fletcher & Black, 2007 ; Yavchitz et al ., 2016 ). Some evidence suggests that spin is common in reports of clinical trials and meta-analyses ( Boutron et al ., 2019 ; Lazarus et al ., 2015 ) and that authors in a variety of research disciplines often draw inappropriate causal inferences ( Bleske-Rechek et al ., 2015 ; Casazza et al ., 2013 ; Chiu et al ., 2017 ; Knight et al ., 1996 ; Ochodo et al ., 2013 ). Moreover, spin in popular media (e.g., newspapers) appears to stem from spin in scientific reports (e.g., journal articles) and associated press releases ( de Semir et al ., 1998 ; Schwartz et al ., 2012 ; Schwitzer, 2008 ).

Spin is unscientific, and could have implications for policy and practice ( Adams et al ., 2016 ; Boutron et al ., 2019 ; Matthews et al ., 2016 ). Workshop participants discussed the need for more evidence to determine whether and how spin in scientific reports affects other stakeholders such as healthcare and social service providers, service users, policymakers, and payers. Evidence concerning the ways in which stakeholders use and interpret research evidence could inform future efforts to improve research communication ( Boutron et al ., 2019 ; Lazarus et al ., 2015 ).

9. Do tools to aid writing research reports increase the comprehensiveness and clarity of research reports?

Research reports (e.g., journal articles) should describe what was done and what was found ( von Elm et al ., 2007 ). Stakeholders need comprehensive and accurate information about research methods and results to assess risk of bias, interpret the generalizability of study results, and reproduce the conditions (e.g., interventions) described ( Moher et al ., 2011 ). Reporting guidelines describe the minimum information that should be included in reports of different types of research, yet much evidence suggests that scientific reports do not include this information (e.g., Grant et al ., 2013 ). Some tools have been developed to help authors write better reports, such as the CONSORT-based WEB tool (COBWEB) ( Barnes et al ., 2015 ); some preliminary evaluations suggest that these tools could help authors write better reports.

Workshop participants identified a need for research to develop and to test tools that could help authors write reports that adhere to existing guidelines. Some tools could be used when writing scientific manuscripts ( Turner et al ., 2012 ) while other tools could be used in graduate education (e.g. class assignments, dissertation writing) or continuing education. Guidelines designed to increase authors’ and reviewers’ knowledge of reporting requirements are not commonly adhered to and, thus, have minimal impact on reporting quality ( Capers et al ., 2015 ). Participants emphasized the need for new interventions and implementation research that promote guideline adherence.

10. Is it possible to inculcate scientific values and norms related to truthful, rigorous, accurate, and comprehensive scientific reporting?

In the 1940s, Robert Merton proposed that communism/communality, universalism, disinterestedness, and organized skepticism constitute the ethos of modern science ( Merton, 1942 ). As the National Research Council stated in their report “Scientific Research in Education”, these fundamental principles are enforced by the community of researchers that shape scientific understanding ( Shavelson & Towne, 2003 ). Evidence suggests that most scientists endorse these positive values and norms, but fewer scientists believe that their colleagues behave in accordance with these positive norms ( Anderson et al ., 2007 ). Better incentives ( Begley et al ., 2017 ; Fanelli, 2010 ; Nosek et al ., 2012 ) and better methods for detecting scientific errors, might improve scientific practice and communication; yet fundamentally, we will always have to place some trust in the veracity of our fellow scientists ( Jamieson et al ., 2017 ).

Participants agreed that ethics and responsibility are vital across scientific disciplines, yet graduate research often neglects the philosophy of science and the formation of professional identity as a scientist. Instead, training tends to focus on the technical skills needed to conduct experiments and analyze data in specific disciplines ( Bosch, 2018 ; Bosch & Casadevall, 2017 ). Technical skills are essential to produce good science; to apply them ethically and responsibly, however, it is paramount that scientists also endorse scientific values and norms. Participants identified a need for research to determine how these scientific values could be inculcated in scientists and how scientists should be taught to enact those values in their research.

Conclusions

Scientists slow the pursuit of truth when research is not rigorous, reproducible, or transparent ( Collins & Tabak, 2014 ). To improve the state of science, RRT leaders have long raised concerns about many of the current challenges the scientific enterprise faces and have identified novel strategies intended to uphold and improve scientific validity. Discussions among RRT leaders at Indiana University Bloomington reinforce the value and importance of promoting accurate, objective, and truthful science. The proposal, execution, and evaluation of the ideas presented herein showcase how the collective and interdisciplinary efforts of those investing in the future of science can solve problems in unique and exciting ways.

Data availability

[version 1; peer review: 3 approved]

Funding Statement

This work was funded by the Alfred P. Sloan Foundation (G-2019-11438) and awarded to David B. Allison.

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

  • Adams SA, Choi SK, Eberth JM, et al.: Adams et al. Respond. Am J Public Health. 2016;106(6):e8–9. doi:10.2105/AJPH.2016.303231
  • Allen C, Mehler DMA: Open science challenges, benefits and tips in early career and beyond. PLoS Biol. 2019;17(5):e3000246. doi:10.1371/journal.pbio.3000246
  • Allison DB, Brown AW, George BJ, et al.: Reproducibility: A tragedy of errors. Nature. 2016;530(7588):27–9. doi:10.1038/530027a
  • Altman DG: Poor-Quality Medical Research: What Can Journals Do? JAMA. 2002;287(21):2765–2767. doi:10.1001/jama.287.21.2765
  • Anaya J: The GRIMMER test: A method for testing the validity of reported measures of variability (e2400v1). PeerJ Inc. 2016.
  • Anderson MS, Martinson BC, De Vries R, et al.: Normative Dissonance in Science: Results from a National Survey of U.S. Scientists. J Empir Res Hum Res Ethics. 2007;2(4):3–14. doi:10.1525/jer.2007.2.4.3
  • Anderson MS, Ronning EA, DeVries R, et al.: Extending the Mertonian Norms: Scientists’ Subscription to Norms of Research. J Higher Educ. 2010;81(3):366–393. doi:10.1353/jhe.0.0095
  • Azoulay P, Bonatti A, Krieger JL: The Career Effects of Scandal: Evidence from Scientific Retractions. Working Paper No. 21146, National Bureau of Economic Research. 2015. doi:10.3386/w21146
  • Baggerly KA, Coombes KR: Deriving chemosensitivity from cell lines: Forensic bioinformatics and reproducible research in high-throughput biology. Ann Appl Stat. 2009;3(4):1309–1334. doi:10.1214/09-AOAS291
  • Ball R, Medeiros N: Teaching Integrity in Empirical Research: A Protocol for Documenting Data Management and Analysis. J Econ Educ. 2012;43(2):182–189. doi:10.1080/00220485.2012.659647
  • Barnes C, Boutron I, Giraudeau B, et al.: Impact of an online writing aid tool for writing a randomized trial report: The COBWEB (Consort-based WEB tool) randomized controlled trial. BMC Med. 2015;13(1):221. doi:10.1186/s12916-015-0460-y
  • Beall RF, Baskerville N, Golfam M, et al.: Modes of delivery in preventive intervention studies: A rapid review. Eur J Clin Invest. 2014;44(7):688–696. doi:10.1111/eci.12279
  • Begley EB, Ware JM, Hexem SA, et al.: Personally Identifiable Information in State Laws: Use, Release, and Collaboration at Health Departments. Am J Public Health. 2017;107(8):1272–1276. doi:10.2105/AJPH.2017.303862
  • Bleske-Rechek A, Morrison KM, Heidtke LD: Causal Inference from Descriptions of Experimental and Non-Experimental Research: Public Understanding of Correlation-Versus-Causation. J Gen Psychol. 2015;142(1):48–70. doi:10.1080/00221309.2014.977216
  • Bosch G: Train PhD students to be thinkers not just specialists. Nature. 2018;554(7692):277. doi:10.1038/d41586-018-01853-1
  • Bosch G, Casadevall A: Graduate Biomedical Science Education Needs a New Philosophy. mBio. 2017;8(6):e01539-17. doi:10.1128/mBio.01539-17
  • Boutron I, Haneef R, Yavchitz A, et al.: Three randomized controlled trials evaluating the impact of “spin” in health news stories reporting studies of pharmacologic treatments on patients’/caregivers’ interpretation of treatment benefit. BMC Med. 2019;17(1):105. doi:10.1186/s12916-019-1330-9
  • Brown AW, Altman DG, Baranowski T, et al.: Childhood obesity intervention studies: A narrative review and guide for investigators, authors, editors, reviewers, journalists, and readers to guard against exaggerated effectiveness claims. Obes Rev. 2019;20(11):1523–1541. doi:10.1111/obr.12923
  • Brown AW, Kaiser KA, Allison DB: Issues with data and analyses: Errors, underlying themes, and potential solutions. Proc Natl Acad Sci U S A. 2018;115(11):2563–2570. doi:10.1073/pnas.1708279115
  • Brown NJL, Heathers JAJ: The GRIM Test: A Simple Technique Detects Numerous Anomalies in the Reporting of Results in Psychology. Soc Psychol Personal Sci. 2017;8(4):363–369. doi:10.1177/1948550616673876
  • Brown AW, Li P, Bohan MMB, et al.: Best (but oft-forgotten) practices: designing, analyzing, and reporting cluster randomized controlled trials. Am J Clin Nutr. 2015;102(2):241–248. doi:10.3945/ajcn.114.105072
  • Brown AW, Mehta TS, Allison DB: Publication bias in science: what is it, why is it problematic, and how can it be addressed? In: The Oxford Handbook of the Science of Science Communication. 2017;93–101. doi:10.1093/oxfordhb/9780190497620.013.10
  • Capers PL, Brown AW, Dawson JA, et al.: Double sampling with multiple imputation to answer large sample meta-research questions: introduction and illustration by evaluating adherence to two simple CONSORT guidelines. Front Nutr. 2015;2:6. doi:10.3389/fnut.2015.00006
  • Carver R, Everson M, Gabrosek J, et al.: Guidelines for assessment and instruction in statistics education (GAISE) college report 2016. 2016.
  • Casadevall A, Fang FC: Reforming Science: Methodological and Cultural Reforms. Infect Immun. 2012;80(3):891–896. doi:10.1128/IAI.06183-11
  • Casazza K, Fontaine KR, Astrup A, et al.: Myths, Presumptions, and Facts about Obesity. N Engl J Med. 2013;368(5):446–454. doi:10.1056/NEJMsa1208051
  • Chiu K, Grundy Q, Bero L: ‘Spin’ in published biomedical literature: A methodological systematic review. PLoS Biol. 2017;15(9):e2002173. doi:10.1371/journal.pbio.2002173
  • Cohen-Boulakia S, Belhajjame K, Collin O, et al.: Scientific workflows for computational reproducibility in the life sciences: Status, challenges and opportunities. Future Gener Comput Syst. 2017;75:284–298. doi:10.1016/j.future.2017.01.012
  • Collins FS, Tabak LA: Policy: NIH plans to enhance reproducibility. Nature. 2014;505(7485):612–613. doi:10.1038/505612a
  • Consortium for Mathematics and its Applications & Society for Industrial and Applied Mathematics: Guidelines for assessment and instruction in mathematical modeling education (GAIMME). 2016.
  • de Semir V, Ribas C, Revuelta G: Press releases of science journal articles and subsequent newspaper stories on the same topic. JAMA. 1998;280(3):294–295. doi:10.1001/jama.280.3.294
  • Elbow P: Writing With Power: Techniques for Mastering the Writing Process. Oxford University Press, 1998.
  • Fanelli D: “Positive” Results Increase Down the Hierarchy of the Sciences. PLoS One. 2010;5(4):e10068. doi:10.1371/journal.pone.0010068
  • Fanelli D: Opinion: Is science really facing a reproducibility crisis, and do we need it to? Proc Natl Acad Sci U S A. 2018;115(11):2628–2631. doi:10.1073/pnas.1708272114
  • Firestein S: Ignorance: How It Drives Science. Oxford University Press, 2012.
  • Fletcher RH, Black B: “Spin” in Scientific Writing: Scientific Mischief and Legal Jeopardy. Med Law. 2007;26(3):511–525.
  • George BJ, Beasley TM, Brown AW, et al.: Common scientific and statistical errors in obesity research. Obesity (Silver Spring). 2016;24(4):781–790. doi:10.1002/oby.21449
  • Georgescu C, Wren JD: Algorithmic identification of discrepancies between published ratios and their reported confidence intervals and P-values. Bioinformatics. 2018;34(10):1758–1766. doi:10.1093/bioinformatics/btx811
  • Goodman SN, Fanelli D, Ioannidis JPA: What does research reproducibility mean? Sci Transl Med. 2016;8(341):341ps12. doi:10.1126/scitranslmed.aaf5027
  • Goodson P: Becoming an Academic Writer: 50 Exercises for Paced, Productive, and Powerful Writing . SAGE Publications,2016. Reference Source [ Google Scholar ]
  • Gorsuch RL: Enhancing the Teaching of Statistics by Use of the Full GLM. Journal of Methods and Measurement in the Social Sciences. 2015; 6 ( 2 ):60–69. Reference Source [ Google Scholar ]
  • Grant SP, Mayo-Wilson E, Melendez-Torres GJ, et al.: Reporting Quality of Social and Psychological Intervention Trials: A Systematic Review of Reporting Guidelines and Trial Publications. PLoS One. 2013; 8 ( 5 ):e65442. 10.1371/journal.pone.0065442 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Heathers JA, Anaya J, Zee T, et al.: Recovering data from summary statistics: Sample Parameter Reconstruction via Iterative TEchniques (SPRITE) . PeerJ Inc.,2018;e26968v1 10.7287/peerj.preprints.26968 [ CrossRef ] [ Google Scholar ]
  • Heo M, Nair SR, Wylie-Rosett J, et al.: Trial characteristics and appropriateness of statistical methods applied for design and analysis of randomized school-based studies addressing weight-related issues: a literature review. J Obes. 2018; 2018 :8767315. 10.1155/2018/8767315 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ioannidis JPA: Why Science Is Not Necessarily Self-Correcting. Perspect Psychol Sci. 2012; 7 ( 6 ):645–54. 10.1177/1745691612464056 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ioannidis JPA, Fanelli D, Dunne DD, et al.: Meta-research: evaluation and improvement of research methods and practices. PLoS Biol. 2015; 13 ( 10 ): e1002264. 10.1371/journal.pbio.1002264 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Jamieson KH, Kahan DM, Scheufele DA: The Oxford Handbook of the Science of Science Communication . Oxford University Press.2017. 10.1093/oxfordhb/9780190497620.001.0001 [ CrossRef ] [ Google Scholar ]
  • Kellogg RT, Whiteford AP: Training advanced writing skills: The case for deliberate practice. Educational Psychologist. 2009; 44 ( 4 ):250–266. 10.1080/00461520903213600 [ CrossRef ] [ Google Scholar ]
  • King G: Replication, Replication. PS: Political Science and Politics. 1995; 28 ( 3 ):444–452. 10.2307/420301 [ CrossRef ] [ Google Scholar ]
  • Knight GP, Fabes RA, Higgins DA: Concerns about drawing causal inferences from meta-analyses: An example in the study of gender differences in aggression. Psychol Bull. 1996; 119 ( 3 ):410–421. 10.1037/0033-2909.119.3.410 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Labbé C, Grima N, Gautier T, et al.: Semi-automated fact-checking of nucleotide sequence reagents in biomedical research publications: The Seek & Blastn tool. PLoS One. 2019; 14 ( 3 ):e0213266. 10.1371/journal.pone.0213266 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Lakens D: Pandemic researchers - recruit your own best critics. Nature. 2020; 581 ( 7807 ):121. 10.1038/d41586-020-01392-8 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Lakens D, DeBruine L: Improving transparency, falsifiability, and rigour by making hypothesis tests machine readable .2020. 10.31234/osf.io/5xcda [ CrossRef ] [ Google Scholar ]
  • Lazarus C, Haneef R, Ravaud P, et al.: Classification and prevalence of spin in abstracts of non-randomized studies evaluating an intervention. BMC Med Res Methodol. 2015; 15 :85. 10.1186/s12874-015-0079-x [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Lohse K, Sainani K, Taylor JA, et al.: Systematic Review of the use of “Magnitude-Based Inference” in Sports Science and Medicine . [Preprint]. SportRxiv,2020. 10.31236/osf.io/wugcr [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Long JS: The workflow of data analysis using Stata . Stata Press.2009. Reference Source [ Google Scholar ]
  • Ludäscher B, Weske M, McPhillips T, et al.: Scientific workflows: Business as usual? In: International Conference on Business Process Management. Springer, Berlin, Heidelberg.2009; 5701 :31–47. 10.1007/978-3-642-03848-8_4 [ CrossRef ] [ Google Scholar ]
  • Matthews DD, Smith JC, Brown AL, et al.: Reconciling Epidemiology and Social Justice in the Public Health Discourse Around the Sexual Networks of Black Men Who Have Sex With Men. Am J Public Health. 2016; 106 ( 5 ):808–814. 10.2105/AJPH.2015.303031 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • McNutt M: Reproducibility. Science. 2014; 343 ( 6168 ):229. 10.1126/science.1250475 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Merton RK: A Note on Science and Democracy. Journal of Legal and Political Sociology. 1942; 1 :115 Reference Source [ Google Scholar ]
  • Moher D, Weeks L, Ocampo M, et al.: Describing reporting guidelines for health research: a systematic review. J Clin Epidemiol. 2011; 64 ( 7 ):718–742. 10.1016/j.jclinepi.2010.09.013 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Monsarrat P, Vergnes JN: Data mining of effect sizes from PubMed abstracts: a cross-study conceptual replication. Bioinformatics. 2018; 34 ( 15 ):2698–2700. 10.1093/bioinformatics/bty153 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • National Academies of Sciences, Engineering, and Medicine: Data science for undergraduates: Opportunities and options . National Academies Press.2018. 10.17226/25104 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • National Academies of Sciences, Engineering, and Medicine, Policy and Global Affairs, Government-University-Industry Research Roundtable: Examining the Mistrust of Science: Proceedings of a Workshop—in Brief .2017. 10.17226/24819 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • National Academies of Sciences, Engineering, and Medicine: Reproducibility and Replicability in Science .2019. 10.17226/25303 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Nosek BA, Spies JR, Motyl M: Scientific Utopia: II. Restructuring incentives and practices to promote truth over publishability. ArXiv: 1205.4251 [Physics]. 2012. Reference Source [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Nuijten MB, Assen MAL, van Hartgerink CHJ, et al.: The Validity of the Tool “statcheck” in Discovering Statistical Reporting Inconsistencies. PsyArXiv. 2017. 10.17605/OSF.IO/TCXAJ [ CrossRef ] [ Google Scholar ]
  • Ochodo EA, de Haan MC, Reitsma JB, et al.: Overinterpretation and misreporting of diagnostic accuracy studies: evidence of "spin". Radiology. 2013; 267 ( 2 ):581–588. 10.1148/radiol.12120527 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Nosek, et al.: Open Science Collaboration: PSYCHOLOGY. Estimating the reproducibility of psychological science. Science. 2015; 349 ( 6251 ):aac4716. 10.1126/science.aac4716 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Peng R: The reproducibility crisis in science: A statistical counterattack. Significance. 2015; 12 ( 3 ):30–32. 10.1111/j.1740-9713.2015.00827.x [ CrossRef ] [ Google Scholar ]
  • Perry EH, Pilati ML: Online learning. New Directions for Teaching and Learning. 2011; 128 :95–104. Reference Source [ Google Scholar ]
  • Prager EM, Chambers KE, Plotkin JL, et al.: Improving transparency and scientific rigor in academic publishing. J Neurosci Res. 2019; 97 ( 4 ):377–390. 10.1002/jnr.24340 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Resnik DB: Ethical virtues in scientific research. Account Res. 2012; 19 ( 6 ):329–343. 10.1080/08989621.2012.728908 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Schwartz LM, Woloshin S, Andrews A, et al.: Influence of medical journal press releases on the quality of associated newspaper coverage: retrospective cohort study. BMJ. 2012; 344 :d8164. 10.1136/bmj.d8164 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Schwitzer G: How do US journalists cover treatments, tests, products, and procedures? An evaluation of 500 stories. PLoS Med. 2008; 5 ( 5 ):e95. 10.1371/journal.pmed.0050095 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Shavelson RJ, Towne L: Committee on Scientific Principles for Education Research .2003;204. [ Google Scholar ]
  • Steen RG, Casadevall A, Fang FC: Why has the number of scientific retractions increased? PLoS One. 2013; 8 ( 7 ):e68397. 10.1371/journal.pone.0068397 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Stodden V, Miguez S, Seiler J: ResearchCompendia.org: Cyberinfrastructure for reproducibility and collaboration in computational science. Computing in Science and Engineering. 2015; 17 ( 1 ):12–19. 10.1109/MCSE.2015.18 [ CrossRef ] [ Google Scholar ]
  • Teixeira da Silva JA, Al-Khatib A: Ending the retraction stigma: Encouraging the reporting of errors in the biomedical record. Research Ethics. 2019. 10.1177/1747016118802970 [ CrossRef ] [ Google Scholar ]
  • Thomas DM, Clark N, Turner D, et al.: Best (but oft-forgotten) practices: identifying and accounting for regression to the mean in nutrition and obesity research. Am J Clin Nutr. 2020; 111 ( 2 ):256–265. 10.1093/ajcn/nqz196 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Thompson B: “Statistical,” “Practical,” and “Clinical”: How Many Kinds of Significance Do Counselors Need to Consider? Journal of Counseling Development. 2002; 80 ( 1 ):64–71. 10.1002/j.1556-6678.2002.tb00167.x [ CrossRef ] [ Google Scholar ]
  • Thompson B: Foundations of Behavioral Statistics: An Insight-Based Approach . Guilford Press.2006. Reference Source [ Google Scholar ]
  • Turner L, Shamseer L, Altman DG, et al.: Does use of the CONSORT Statement impact the completeness of reporting of randomised controlled trials published in medical journals? A Cochrane review. Syst Rev. 2012; 1 ( 1 ):60. 10.1186/2046-4053-1-60 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Van Noorden R: Online collaboration: Scientists and the social network. Nature. 2014; 512 ( 7513 ):126–9. 10.1038/512126a [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • von Elm E, Altman DG, Egger M, et al.: Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. BMJ. 2007; 335 ( 7624 ):806–808. 10.1136/bmj.39335.541782.AD [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Vorland CJ, Brown AW, Ejima K, et al.: Toward fulfilling the aspirational goal of science as self-correcting: A call for editorial courage and diligence for error correction. Eur J Clin Invest. 2020; 50 ( 2 ):e13190. 10.1111/eci.13190 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ware WB, Ferron JM, Miller BM: Introductory Statistics: A Conceptual Approach Using R. Routledge .2013. Reference Source [ Google Scholar ]
  • Willig J, Croker J, Wallace B, et al.: 2440 Teaching rigor, reproducibility, and transparency using gamification. J Clin Transl Sci. 2018; 2 ( S1 ):61 10.1017/cts.2018.227 [ CrossRef ] [ Google Scholar ]
  • Yavchitz A, Ravaud P, Altman DG, et al.: A new classification of spin in systematic reviews and meta-analyses was developed and ranked according to the severity. J Clin Epidemiol. 2016; 75 :56–65. 10.1016/j.jclinepi.2016.01.020 [ PubMed ] [ CrossRef ] [ Google Scholar ]

Reviewer response for version 1

Sheenah M. Mische

1 Department of Pathology, New York University (NYU) Langone Medical Center, New York, NY, USA

This manuscript is a concise summary of a two-day workshop held at Indiana University School of Public Health - Bloomington on identifying key opportunities for rigor, reproducibility & transparency (RRT) in research. This is not a research report, but rather a report on the status of scientific research. Meeting attendance was by invitation only, and IU faculty, staff, and graduate students were joined by invited participants with recognized expertise in RRT, as reflected in the extensive references. Opportunities were focused in three key areas: 1) education and training, 2) reducing statistical errors while increasing analytical transparency, and 3) improving transparency (truthfulness) and accuracy of research communications to promote accurate, objective, and truthful science. The article reads well, with a focus on biomedical research.

Specific Comments:

This manuscript provided an excellent summary of numerous and important challenges facing the research enterprise, though it offered no applicable outcomes or solutions. Of particular note:

  • Education and training: instructional modality and understanding the relative benefits of various hybrid formats: there is no debate about the importance of RRT education and training. Both formal and informal forums are critical to research-integrity issues. Bringing scientific integrity issues into the open provides practical guidance for everyone from graduate students to faculty members. Faced with the pandemic, we have all adapted modalities to virtual platforms, underscoring the importance of instruction regardless of format. Furthermore, formal instruction in rigorous experimental design and transparency is now required for NIH training, career development, and fellowship applications.
  • Reducing statistical errors while increasing analytical transparency, and improving transparency (truthfulness) and accuracy of research communications: sharing knowledge is what drives scientific progress: each new advance or innovation in biomedical research builds on previous observations. Experimental reports must contain sufficient information for the original results to be validated and verified by other researchers if they are to be broadly accepted as credible by the scientific community. While statistics is necessary for data interpretation by clinical researchers, psychologists, and epidemiologists whose conclusions depend wholly on statistics, the interpretation of data in papers published in the biological sciences does not always require sophisticated statistical analyses; rather, diligent data reporting and transparency are essential.

Conclusion:

The authors summarize with “proposal, execution, and evaluation of the ideas presented herein showcases how the collective and interdisciplinary efforts of those investing in the future of science can solve problems in unique and exciting ways”. While appreciating this forward-looking statement, the message is clear: the issue of reproducibility in science is complex and will continue to be debated and discussed in the coming years in workshops such as the one this manuscript describes. In response to well-publicized allegations of the inability to reproduce published biomedical research, there have been numerous declarations of the components of trustworthy research and research integrity, such as the Singapore Statement in 2010, the Montreal Statement in 2013, the European Code of Conduct for Research Integrity in 2017, the Hong Kong Principles in 2019, and U.S. NIH and NSF federal RRT policies. Ultimately, we are all responsible for careful assessment of the rigor of prior research; for rigorous experimental design that yields robust and unbiased results through application of the scientific method; for consideration of relevant biological variables and authentication of key biological and/or chemical resources used to conduct research; and for the use of numerical identifiers and the RRID syntax to improve communication of critical experimental details within the research community and to the public.

Is the topic of the opinion article discussed accurately in the context of the current literature?

Are arguments sufficiently supported by evidence from the published literature?

Are all factual statements correct and adequately supported by citations?

Are the conclusions drawn balanced and justified on the basis of the presented arguments?

Reviewer Expertise:

Pathology, Technology, Shared Research Resource

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

Christopher A Mebane

1 Idaho Water Science Center, US Geological Survey, Boise, Idaho, USA

The article “Improving open and rigorous science....” is a report out on a workshop intended to make recommendations on improving rigor, reproducibility, and transparency (RRT) in interdisciplinary science. The idea of peer reviewing a workshop report is a bit of a curious assignment. What’s a reviewer to say?  No, those weren’t the best topics to debate at your workshop, please reconvene and discuss something else? Raise questions about whether the article faithfully reports the workshop deliberations and consensus, when the reviewer wasn’t there?  As such this review is rather limited. The article reads well and has clearly been well vetted by the authors. The workshop and paper are interdisciplinary, although the focus is strongly slanted toward biomedical research and the health sciences.

Not all errors are mistakes

My only criticism of substance is the use of the term “statistical errors.” Consider replacing it with “statistical mistakes” throughout the manuscript. In many fields, including mine (environmental science), the word “error” could refer to variability in the data, such as “the standard error of the mean.” In other contexts, the word error is often used to describe the limits of precision. DNA and cells replicate with small errors, which over time lead to aging and senescence. In analytical chemistry, deviations from instrument values for calibration or quality control samples may be termed measurement error. Measurement error might refer to the inherent limits of a sensor in the instrument or the combined errors of the method. For example, in a bathymetric survey, errors accrue from inherent limits in measuring distance as a function of sound through water; temperature changes in the water introduce error; a breeze adding motion to the boat introduces error; plants growing on the bottom muddy the signal, increasing error; imprecision in the Earth’s spheroid and canyon walls interfere with the GPS; and on and on. The hydrologist tries to reflect the accumulated error with a margin-of-error statement on overall accuracy. Those are examples of error: something the scientist always seeks to reduce while accurately reporting the uncertainties associated with measurements, modeling, and so forth, but the presence of error is unavoidable. A mistake, on the other hand, is a blunder: attaching the bathymetric sensor backwards, entering the wrong units into the calculations, using a long-wave, deep-ocean sensor in shallow water, using the wrong datum, using a poorly suited method, neglecting calibrations, and so on. So it is with statistical mistakes, the topic of the article: while there are often several appropriate methods of measurement for just about any scientific setting, some methods are controversial or debatable, and some are just plain wrong. The focus of the authors is on the latter: helping scientists avoid statistical blunders that are just plain wrong. I strongly urge you to call these “statistical mistakes,” which is less ambiguous than “errors,” given that these are supposed to be interdisciplinary RRT recommendations.

Minor suggestions

On p. 7, in the subsection titled “5. What is the prevalence and impact of errors,” I thought the second paragraph was particularly dense and probably impenetrable to those not already in the know:

“Thus, [Subgroup 2] discussed the value of longitudinal, discipline-specific error surveillance and error frequency estimation to collect empirical evidence about error rate differences among disciplines. Other issues discussed were the identification of better prevalence estimates across fields, and how simulation studies can modify our confidence in the understanding of the prevalence of errors and their generalizability across disciplines.”

I think if you could expand on these points with examples, or examples with citations, readers might have a better understanding of what is being recommended.

That’s all. This was a tightly written report out of the workshop. Thank you for considering my rant about mistakes versus errors, where depending on the field and context, the latter is often a neutral descriptor of uncertainty.

I am an environmental scientist who has published on related topics of research rigor, bias, and transparency in the environmental sciences.

Judith A Hewitt

1 Office of Biodefense, Research Resources and Translational Research, Division of Microbiology and Infectious Diseases, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Bethesda, MD, USA

This is a concise and well-written summary of a meeting of ~30 people on the vitally important topic of rigor, reproducibility & transparency. The meeting discussion questions were very well formulated, though given the small size of the meeting and the limited number of invited participants from outside the university host, it is difficult to say whether the discussions, presented in a very succinct format of key challenges, are representative of all of the issues or viewpoints on the topic. Nevertheless, this appears to have been a good discussion that raised significant challenges. I would have preferred to see a bit more focus on solutions, as the challenges raised are all daunting.

Introduction:

Regarding the statement that 40-70% of scientists agreed on factors contributing to irreproducibility, the original citation should be used (Baker, 2016; added as reference 1). Also, the reference to the funder of the meeting is very much appreciated - but it is "Alfred P Sloan" not "Afred". In the last sentence, "through to execution" is unwieldy - either "through" or "to" works, but there is no need for both.

I very much appreciate the list of participants and the acknowledgement of honorariums - kudos on the transparency! I also appreciate knowing who participated in the small groups, but it would have been nice to see the agenda or the titles of the Day One research presentations. Were those research or meta-research presentations? Also, "small-groups" should not be hyphenated; in fact, you could just say three groups and let the reader come to their own conclusion about size; "breakout" is another useful term.

Subgroup 1, first paragraph: the following wording could be more precise by changing "three primary education-related questions" (where primary modifies education and not questions) to "three primary questions, education-related," or something similar. Precision of language is one of the articulated goals of training and communication in this article!

Q5, 2nd paragraph: I disagree with the first sentence, "To achieve the goal of error reduction, one must first know how pervasive errors are." I think any reduction in errors is a win, even without understanding the entire landscape, and needing to fully understand the landscape before attempting solutions is just kicking the can down the road. It's the "measurement" of error reduction or assessing progress toward a particular goal (which is not articulated) that requires knowing the pervasiveness first, and I agree that is extremely difficult to measure.

Q7, 2nd paragraph, last sentence: I question whether understanding "salary" costs of error correction is a valid pursuit, whether it's a case of pay now or pay later; page charges are a different matter.

Since the Methods section stated that the meeting ended with a "call to action" to continue promoting interdisciplinary RRT science, I wonder if that call to action is accurately summarized? I found a great summary of the discussion but didn't walk away with a clearly articulated call to action in the very brief conclusion.

General Comments:

I tend to agree that the challenges are many and difficult, though the small group discussions are distilled down to two challenges per question. They are mainly framed in negative terms, which is hard to read as a "call to action" without more detail. Nonetheless, the challenges raised are important and should be addressed, I'm just left scratching my head on what the next step is for many of these, given how they are stated.

I note that many of the references are from participants at the meeting, which may reflect the meeting content (difficult to judge without seeing the agenda), but does not necessarily instill in others an unbiased approach; this is perhaps a limitation of a small-meeting-by-invitation and could be formally recognized in the paper. This is not a value judgement on the references, indeed there is some balance, but it is a selected view that focuses on the meeting participants.

Reviewer Expertise: infectious diseases; animal models of infectious diseases; translational research; scientific rigor; reproducibility


Open access | Published: 13 July 2024

Towards an Interoperability Landscape for a National Research Data Infrastructure for Personal Health Data

  • Carina Nina Vorisek   ORCID: orcid.org/0000-0001-7499-8115 1 ,
  • Sophie Anne Inès Klopfenstein 1 , 2 ,
  • Matthias Löbe 3 ,
  • Carsten Oliver Schmidt   ORCID: orcid.org/0000-0001-5266-9396 4 ,
  • Paula Josephine Mayer 1 ,
  • Martin Golebiewski 5 &
  • Sylvia Thun   ORCID: orcid.org/0000-0002-3346-6806 1  

Scientific Data volume 11, Article number: 772 (2024)


Subjects: Epidemiology, Public health

The German initiative “National Research Data Infrastructure for Personal Health Data” (NFDI4Health) focuses on research data management in health research. It aims to foster and develop harmonized informatics standards for public health, epidemiological studies, and clinical trials, facilitating access to relevant data and metadata standards. This publication lists syntactic and semantic data standards of potential use for NFDI4Health and beyond, based on interdisciplinary meetings and workshops, mappings of study questionnaires and the NFDI4Health metadata schema, and a literature search. Included are 7 syntactic, 32 semantic and 9 combined syntactic and semantic standards. In addition, 101 ISO standards from ISO/TC 215 Health Informatics and ISO/TC 276 Biotechnology were identified as potentially relevant. The work emphasizes the utilization of standards for epidemiological and health research data to ensure interoperability as well as compatibility with NFDI4Health, its use cases, and (inter-)national efforts within these sectors. The goal is to foster collaborative and inter-sectoral work in health research and to initiate a debate around the potential of using common standards.


Introduction

The amount of health data has been growing rapidly over the past years. To search, find, (re-)use, analyze and exchange these huge amounts of data, the FAIR guiding principles – findable, accessible, interoperable and reusable – were established 1. However, in healthcare as well as in health and epidemiological research, data often complies with none of these principles: it is frequently unstructured and stored in different, decentralized silos. This work focuses on problems related to deficient interoperability. When interoperability is lacking, data cannot be exchanged in a structured and meaningful manner across different institutions and software systems without substantial additional effort.

International standards are needed to enable interoperability. Standards developing organizations (SDOs) focus on the development, maintenance and promotion of standards for a specific group of users or for industry needs. The work of SDOs is performed mainly by volunteers collaborating over many years in small working groups. Proposals of each working group are usually presented to a much larger audience to achieve consensus. The number of standards created differs from one SDO to another, depending on the focus of each organization 2. Semantic standards involve the use of structured vocabularies, terminologies and classification systems to represent healthcare concepts 3. These standards ensure that health information is accurately and consistently represented across different systems, facilitating clear and precise communication within the healthcare sector. Syntactic standards define the structure or format of data exchange, ensuring that the meaning of data is preserved during transmission 4. Further definitions of terms used in this manuscript are provided in additional file 1 in our GitHub repository ( https://github.com/nfdi4health/IdentifiedStandards.git ).
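To make the distinction concrete, here is a minimal sketch in Python (an editorial illustration, not an NFDI4Health artifact): a single measurement expressed as an HL7 FHIR Observation, where the FHIR resource structure supplies the syntax and a LOINC code (29463-7, body weight) plus a UCUM unit supplies the semantics. The measurement values are invented; the resource structure, the LOINC code and the terminology URLs follow the published standards.

import json

# A minimal FHIR R4 Observation: the *syntactic* standard dictates the
# JSON structure (resourceType, status, code, valueQuantity, ...), while
# the *semantic* standards dictate what the codes mean: LOINC 29463-7 is
# "Body weight", and UCUM supplies the unit semantics ("kg").
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",   # semantic standard: LOINC
            "code": "29463-7",
            "display": "Body weight",
        }]
    },
    "valueQuantity": {
        "value": 72.5,                       # invented example value
        "unit": "kg",
        "system": "http://unitsofmeasure.org",  # semantic standard: UCUM
        "code": "kg",
    },
}

# Any FHIR-capable system can parse this structure (syntactic
# interoperability) and interpret the LOINC code consistently
# (semantic interoperability).
print(json.dumps(observation, indent=2))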

This work was performed within the NFDI4Health initiative 5, a German research initiative developing a national research data infrastructure for personal health data. NFDI4Health represents an interdisciplinary research community that develops harmonized informatics standards for public health, epidemiological studies, and clinical trials to improve their FAIRness. We focus on the standardization of health research data to foster collaboration across these three domains. This work includes a comprehensive yet non-exhaustive list of standardization projects and initiatives at both global and national levels, along with syntactic and semantic standards. These can be utilized by the research community to describe metadata, data types, and formats from clinical, epidemiological, and public health research in a structured manner. Further standards, ontologies and terminologies might be applicable. We present an initial overview of collaborative standardization efforts and the current use of standards within a national infrastructure project for epidemiological, public health and clinical studies. Through the dissemination of these insights, we aim to empower the research community to leverage standardized practices, thereby advancing the pursuit of breakthroughs in health and medical sciences.

Standardization efforts in health research

The independent International Organization for Standardization (ISO) is a non-governmental organization focusing on the development and publication of international standards. To date, 171 national standards bodies are members, facilitating the exchange of expert knowledge to tackle global challenges and foster innovation by developing relevant consensus-based, voluntary standards 6. The Research Data Alliance (RDA) collects, develops and refines standards and information to enable interoperability between research data repositories 7. One example is the RDA COVID-19 Recommendations and Guidelines on Data Sharing 8, which can also serve as a model for data sharing guidelines for other research studies in the health sector. In the US and Canada, the Accredited Standards Committee (ASC) is the prevailing SDO. At the European level, three SDOs are responsible for defining and developing voluntary standards: the Comité Européen de Normalisation (CEN; for various kinds of services, processes, products and materials), the Comité Européen de Normalisation Electrotechnique (CENELEC; for electrotechnical standardization) 9, and the European Telecommunications Standards Institute (ETSI; for information and communication technologies) 10.

In the healthcare domain, nine global initiatives have worked together since 2007 within the Joint Initiative Council (JIC) on solving real-world problems: Clinical Data Interchange Standards Consortium (CDISC), Digital Imaging and Communications in Medicine (DICOM), CEN/TC 251, GS1 Healthcare, Health Level 7 (HL7) International, Integrating the Healthcare Enterprise (IHE) International, ISO/Technical Committee 215, Logical Observation Identifiers Names and Codes (LOINC) and Systematized Nomenclature of Medicine (SNOMED) International. They enable real-time information exchange in healthcare by using standards based on full interoperability of information and processes 11. The Global Alliance for Genomics & Health (GA4GH) 12 unites a growing number of public and private institutions from healthcare delivery and (health) research, companies, societies, funders, agencies and NGOs, with the overarching goal of allowing responsible sharing of genomic data while respecting human rights. GA4GH frames policies and develops and/or refines technical standards 13. The Global Digital Health Partnership (GDHP), an international collaboration on digital health, was established in 2018 by several governments, government agencies, territories, multinational organizations and the World Health Organization (WHO). The alliance currently comprises 36 members and advocates for the best, evidence-backed use of digital technologies to improve well-being and health 14. GDHP regularly publishes white papers on interoperability, clinical and consumer engagement, cybersecurity, policy environments, and evidence and evaluation topics 15, 16. Further collaborations include the Personal Connected Health Alliance (PCHA) 17 and the collaborations between the American Office of the National Coordinator for Health Information Technology (ONC) 18 and the European Union 19 or the United Kingdom 20. ONC also serves as the lead US representative to the GDHP 21.

The ISO committee for standards in biotechnology (ISO/TC 276) 22 and its working group ISO/TC 276/WG 5 “Data Processing and Integration” are working on standards for data in the life sciences that can and should be considered for health data (Table 3). Initial releases include a guideline standard for data publication (ISO/TR 3985) 23 and requirements for data formatting and description in the life sciences (ISO 20691) 24. Additionally, a series of standards for provenance information models for biological material and data (ISO 23494) is currently under development in ISO/TC 276/WG 5 and will be published progressively in the coming years. Moreover, in ISO/TC 215 as well as in ISO/TC 276/WG 5, several standard drafts are currently being developed for data and metadata in personalized medicine.

Identified standards

We identified 7 syntactic, 32 semantic and 9 combined syntactic and semantic standards that are potentially relevant to NFDI4Health (Fig. 1). In addition, we identified a further 101 ISO standards (Table 3) from ISO/TC 215 Health Informatics and ISO/TC 276 Biotechnology, which are presented in additional file 2. Features of the syntactic and semantic standards are presented in Table 1 and Table 2, respectively.

Figure 1: Identified syntactic and semantic standards in health research. We categorized health research and interoperability standards into three types: semantic, syntactic, or both. Semantic standards focus on the meaning and interpretation of data, including terminologies, vocabularies, and ontologies (e.g., SNOMED CT, LOINC, ICD). Syntactic standards focus on the structure and format of data exchange, defining how data is formatted and transmitted (e.g., HL7 CDA). Combined standards include elements of both, defining data structure and format while also ensuring consistent meaning with value sets or terminologies (e.g., HL7 FHIR).

Current standards in NFDI4Health

Within NFDI4Health, a tailored metadata schema (MDS) was created to collect information from German clinical, epidemiological and public health studies, covering both the studies themselves and the resources they comprise (e.g., study documents, instruments, data collections, etc.) 25, 26. To ensure the syntactic and semantic interoperability of the register based on the MDS, the MDS elements were mapped to FHIR and the feasibility of this mapping was analyzed 27. In addition, the metadata included in the re3data 28 schema and in clinicaltrials.gov were compared to the NFDI4Health MDS, as were the metadata from ECRIN 29 and DDI 30. SNOMED CT, HL7 Terminology, NCIt, MeSH, ISO and ICD were used for value sets in the NFDI4Health MDS. The suitability of SNOMED CT for annotating variables from questionnaires originating from clinical as well as epidemiological and public health studies was evaluated by performing mappings to SNOMED CT. The results of the annotation were implemented on a test basis in Opal/Mica 31, 32, open-source software solutions built for managing and harmonizing epidemiological data 33. With these mapping activities we evaluated the suitability of different standards for the NFDI4Health use cases.
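As a rough illustration of what such an element-level mapping looks like in practice, the sketch below translates a flat metadata record into a minimal FHIR ResearchStudy resource. The MDS element names and their FHIR targets here are hypothetical stand-ins chosen for readability, not the published NFDI4Health mapping; ResearchStudy and its title, status and description fields are, however, real FHIR R4 elements.

# Hypothetical sketch of an MDS-to-FHIR element mapping; the element
# names and FHIR target fields below are illustrative stand-ins,
# not the official NFDI4Health mapping.
MDS_TO_FHIR = {
    "study_title":       ("ResearchStudy", "title"),
    "study_status":      ("ResearchStudy", "status"),
    "study_description": ("ResearchStudy", "description"),
}

def to_fhir(mds_record: dict) -> dict:
    """Translate a flat MDS record into a minimal FHIR ResearchStudy."""
    resource = {"resourceType": "ResearchStudy"}
    for element, value in mds_record.items():
        target = MDS_TO_FHIR.get(element)
        if target is None:
            continue  # unmapped elements would need FHIR extensions
        _, fhir_field = target
        resource[fhir_field] = value
    return resource

print(to_fhir({"study_title": "Example cohort", "study_status": "active"}))

A real mapping additionally has to bind coded elements to value sets (e.g., SNOMED CT or MeSH codes), which is where the semantic standards discussed above come in.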

In addition to the several SDOs responsible for developing standards in health research, we found in total 7 syntactic, 32 semantic and 9 combined syntactic and semantic standards that may be pertinent to epidemiological, public health and clinical research. Furthermore, we identified an additional 101 ISO standards sourced from ISO/TC 215 Health Informatics and ISO/TC 276 Biotechnology (Table 3).

While there is literature and guidance on the use of standards in health records 34 , our literature review revealed a notable lack of comprehensive overviews to guide the selection of standards for health research studies.

In our use case for NFDI4Health, we specifically focused on metadata describing studies and study questionnaires. Our ultimate goal is to make these metadata elements exchangeable and comparable across clinical, epidemiological, and public health studies.

The World Health Organization’s “International Standards for Clinical Trial Registries” does not specifically recommend any semantic or syntactic standards. It merely advises that “in addition to free text, controlled vocabularies may be used,” citing SNOMED, ICD, and MeSH as examples and recommending controlled vocabularies that can be mapped to the Unified Medical Language System (UMLS) Metathesaurus, as used by the ICTRP Search Portal 35. In our current MDS, we implemented SNOMED and MeSH alongside NCIt, LOINC and ISO, and provided concept maps to UMLS 36, 37.

The “Second Joint Action Towards the European Health Data Space – TEHDAS2” project provided a list of relevant standards for harmonizing semantic and syntactic interoperability in the European Health Data Space 38. Our list includes all the semantic and syntactic standards identified by this working group. However, they also provided a list of metadata standards, which we did not explore in this manuscript. When developing the MDS, we performed mappings to several of these metadata standards, such as ECRIN, which we will report on in the future.

Harmonization of retrospective data is only one goal of NFDI4Health. There is a need to increase global awareness of the importance of standards and to incentivize the prospective use of internationally recognized standards in studies, starting in the planning phase. As NFDI4Health targets health data, it is crucial to apply standards used in the healthcare system, such as SNOMED CT, ICD and LOINC. By doing so, the entire community may benefit from improved data exchange possibilities.

Due to the heterogeneity of studies, multiple standards might need to be combined based on the specific needs and variables assessed. Evaluating the mappings between these standards is also essential. However, it is important to avoid creating further data silos by installing an excessive number of standards, which could itself hinder interoperability. Relying on existing concepts and aligning with other projects can benefit the entire community by improving data exchange possibilities. The vast array of health standards highlights the need for interoperability at an organizational level, implementing standards on a consensus basis. It is therefore essential to consider existing guidelines and established standards at both national and international levels. For NFDI4Health, this means considering data models and standards already established in the German healthcare system as well as in other projects, such as the Medical Informatics Initiative 39. The transport and content standard HL7 FHIR is part of several new requirements in the European healthcare system aimed at relying on one common standard 40. FHIR is easy to use and adaptable, and it builds on existing web technologies that can be used in web and mobile applications; we therefore decided to use it as the exchange standard for our MDS 36. Other standards are not to be overlooked and will be identified according to the requirements of the use cases. This work serves as a basis for future (meta-)data repositories, establishing the services necessary to harmonize and standardize (meta-)data, enabling analysis of and access to those (meta-)data, and introducing relevant guidelines for the entire NFDI4Health consortium and beyond.
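Because FHIR builds on ordinary web technologies, exchanging a resource reduces to an HTTP call against a server's REST endpoint. A minimal sketch follows, assuming a placeholder base URL; the POST [base]/[resourceType] pattern and the application/fhir+json media type are standard FHIR REST conventions, but the server address and the resource being sent are illustrative.

import json
from urllib.request import Request, urlopen

FHIR_BASE = "https://example.org/fhir"  # placeholder server URL (assumption)

def create_resource(resource: dict) -> bytes:
    """POST a resource to [base]/[resourceType] - the standard FHIR 'create'."""
    url = f"{FHIR_BASE}/{resource['resourceType']}"
    req = Request(
        url,
        data=json.dumps(resource).encode("utf-8"),
        headers={"Content-Type": "application/fhir+json"},
        method="POST",
    )
    with urlopen(req) as response:
        # A FHIR server echoes back the stored resource with its new id.
        return response.read()

# Example: registering a minimal study record on a hypothetical server.
# create_resource({"resourceType": "ResearchStudy", "status": "active"})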

Identification of standards

To identify relevant SDOs and standards for health(care) data within NFDI4Health, we conducted a literature search and searches in community-driven portals such as FAIRsharing, BioPortal, and the EMBL-EBI Ontology Lookup Service (OLS Ontology Search), as well as on the ISO website. In addition, over a period of three years, we gathered information from use cases and from domain experts in the fields of health research and interoperability. To this end, interviews were conducted with each of the five use cases at least once and up to three times. We held a workshop on metadata standards with the entire community, discussing community needs and identifying relevant standards. We performed mappings of study instruments and of the developed metadata schema to international standards, analysed these for their suitability, and finally developed value sets to be used in NFDI4Health's MDS 27, 32. Standards such as terminologies, ontologies and vocabularies were considered semantic standards. We included requirements from the user community and their use cases, feedback and experiences, existing guidelines, and recommended (inter-)national standards. Each activity was reviewed in interdisciplinary biweekly meetings and commented on by the general assembly of NFDI4Health. All authors have significant expertise and/or practical experience in the field of interoperability and/or health research.
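Searches of community-driven portals such as OLS can also be scripted for repeatability. The sketch below queries the EMBL-EBI OLS search endpoint; the exact path and response layout are assumptions based on the public OLS REST API and should be checked against the current OLS documentation before use.

import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Assumed OLS search endpoint; verify against current OLS documentation.
OLS_SEARCH = "https://www.ebi.ac.uk/ols4/api/search"

def search_terms(query: str, rows: int = 5) -> list:
    """Return candidate ontology term labels matching a free-text query."""
    url = f"{OLS_SEARCH}?{urlencode({'q': query, 'rows': rows})}"
    with urlopen(url) as response:
        payload = json.load(response)
    # OLS wraps Solr-style results under response/docs (assumed layout).
    return [doc.get("label") for doc in payload["response"]["docs"]]

print(search_terms("body weight"))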

Categorization of standards

In this study, we categorized standards used in health research into three categories: semantic, syntactic, or both. The categorization was based on specific criteria related to the nature and application of each standard. Standards were classified as semantic if they primarily focused on the meaning and interpretation of data. This included terminologies, vocabularies, and ontologies; examples are SNOMED CT, LOINC, and ICD. These standards provide a structured way to describe data and ensure consistent interpretation across different systems and contexts. Standards were classified as syntactic if they focused on the structure and format of data exchange. These standards define how data is formatted, encoded, and transmitted between systems; an example is HL7 CDA. They ensure that data can be correctly parsed and understood at a structural level by receiving systems. Some standards encompass both semantic and syntactic elements: they not only define the structure and format of data but also include value sets or terminologies to ensure consistent meaning. An example of such a standard is HL7 FHIR, which includes both a syntactic framework for data exchange and its own value sets for semantic consistency.
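The decision rule reduces to two boolean questions per standard: does it define meaning, and does it define structure? A toy encoding of that rule follows; the category assignments repeat the examples named above, and everything else is illustrative.

from enum import Enum

class Category(Enum):
    SEMANTIC = "semantic"    # meaning: terminologies, vocabularies, ontologies
    SYNTACTIC = "syntactic"  # structure: message and document formats
    BOTH = "both"            # structure plus bound value sets

def categorize(defines_meaning: bool, defines_structure: bool) -> Category:
    """Apply the two criteria to one standard (assumes at least one is True)."""
    if defines_meaning and defines_structure:
        return Category.BOTH
    return Category.SEMANTIC if defines_meaning else Category.SYNTACTIC

# Examples named in the text:
assert categorize(True, False) == Category.SEMANTIC    # SNOMED CT, LOINC, ICD
assert categorize(False, True) == Category.SYNTACTIC   # HL7 CDA
assert categorize(True, True) == Category.BOTH         # HL7 FHIR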

Results are presented in tables, and Fig. 1 was created using R statistical software (version 2024.04.1; R Foundation for Statistical Computing) 41 and the VennDiagram package.
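For readers without an R setup, the published counts (7 syntactic-only, 32 semantic-only, 9 combined) can be redrawn with the Python matplotlib-venn package; this is a reproduction sketch, not the authors' original VennDiagram code.

# Reproduction sketch of Fig. 1 using matplotlib-venn (not the original R code).
import matplotlib.pyplot as plt
from matplotlib_venn import venn2

# venn2 subsets = (only in set A, only in set B, intersection)
venn2(subsets=(7, 32, 9), set_labels=("Syntactic", "Semantic"))
plt.title("Identified syntactic and semantic standards in health research")
plt.savefig("figure1_sketch.png")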

All identified standards can be found in our GitHub repository ( https://github.com/nfdi4health/IdentifiedStandards.git ).

Code availability

The code for generating Fig.  1 is provided in our GitHub repository ( https://github.com/nfdi4health/IdentifiedStandards.git ).

References

1. Wilkinson, M. D. et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data 3, 160018 (2016).
2. Chapter 2 - SDO Education Playbook. https://www.healthit.gov/playbook/sdo-education/chapter-2/
3. Terminology Standards | HIMSS. https://www.himss.org/terminology-standards
4. Umberfield, E. E. et al. Syntactic interoperability and the role of syntactic standards in health information exchange. In: Health Information Exchange: Navigating and Managing a Network of Health Information Systems, 217–236. https://doi.org/10.1016/B978-0-323-90802-3.00004-6 (2023).
5. Home - NFDI4Health. https://www.nfdi4health.de/
6. ISO/IEC Guide 2:2004 - Standardization and related activities — General vocabulary. https://www.iso.org/standard/39976.html
7. RDA | Research Data Sharing without barriers. https://www.rd-alliance.org/
8. RDA COVID-19 Working Group. RDA COVID-19 Recommendations and Guidelines on Data Sharing. https://doi.org/10.15497/RDA00052 (2020).
9. CEN-CENELEC. https://www.cencenelec.eu/
10. ETSI - Welcome to the World of Standards! https://www.etsi.org/
11. Welcome to Joint Initiative Council. http://www.jointinitiativecouncil.org/
12. GA4GH. https://www.ga4gh.org/
13. Global Alliance for Genomics and Health (GA4GH).
14. Global Digital Health Partnership. https://gdhp.health/
15. GDHP white paper (2019). https://gdhp.nhp.gov.in/home/index/white-paper-2019
16. GDHP white paper (2020). https://gdhp.nhp.gov.in/home/index/white-paper-2020
17. Personal Connected Health Alliance. https://www.pchalliance.org/
18. About ONC | HealthIT.gov. https://www.healthit.gov/topic/about-onc
19. Collaboration with the European Union | HealthIT.gov. https://www.healthit.gov/topic/collaboration-european-union
20. Collaboration with the United Kingdom | HealthIT.gov. https://www.healthit.gov/topic/collaboration-united-kingdom
21. The Global Digital Health Partnership | HealthIT.gov. https://www.healthit.gov/topic/global-digital-health-partnership
22. ISO/TC 276 - Biotechnology. https://www.iso.org/committee/4514241.html
23. ISO/TR 3985:2021 - Biotechnology — Data publication — Preliminary considerations and concepts. https://www.iso.org/standard/79690.html
24. ISO 20691:2022 - Biotechnology — Requirements for data formatting and description in the life sciences. https://www.iso.org/standard/68848.html
25. Schmidt, C. O. et al. [Making COVID-19 research data more accessible - building a nationwide information infrastructure]. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz 64, 1084–1092 (2021).
26. NFDI4Health Task Force COVID-19 Metadata Schema. https://fairdomhub.org/data_files/3972?version=1
27. Klopfenstein, S. A. I. et al. Fast Healthcare Interoperability Resources (FHIR) in a FAIR Metadata Registry for COVID-19 Research. Stud Health Technol Inform 287, 73–77 (2021).
28. Home | re3data.org. https://www.re3data.org/
29. ECRIN | Facilitating European Clinical Research. https://ecrin.org/
30. Data Documentation Initiative. https://ddialliance.org/
31. Vorisek, C. N. et al. Evaluating Suitability of SNOMED CT in Structured Searches for COVID-19 Studies. Stud Health Technol Inform 281, 88–92 (2021).
32. Vorisek, C. N. et al. Implementing SNOMED CT in Open Software Solutions to Enhance the Findability of COVID-19 Questionnaires. Stud Health Technol Inform 294, 649–653 (2022).
33. Doiron, D., Marcon, Y., Fortier, I., Burton, P. & Ferretti, V. Software Application Profile: Opal and Mica: open-source software solutions for epidemiological data management, harmonization and dissemination. Int J Epidemiol 46, 1372–1378 (2017).
34. de Mello, B. H. et al. Semantic interoperability in health records standards: a systematic literature review. Health Technol (Berl) 12, 255 (2022).
35. World Health Organization. International Standards for Clinical Trial Registries (2018).
36. NFDI4Health Metadata Schema - SIMPLIFIER.NET. https://simplifier.net/NFDI4Health-Metadata-Schema/~introduction
37. ART-DECOR®. https://art-decor.org/ad/#/nfdhtfcov19-/project/overview
38. Bernal-Delgado, E., Estupiñán-Romero, F. & Launa-Garces, R. Identification of relevant standards and data models for semantic harmonization (2021).
39. Medical Informatics Initiative. https://www.medizininformatik-initiative.de/en/start
40. Gesundheitsdaten: FHIR wird europaweiter Standard. https://www.aerzteblatt.de/nachrichten/142159/Gesundheitsdaten-FHIR-wird-europaweiter-Standard
41. R: The R Project for Statistical Computing. https://www.r-project.org
42. ISO/TR 12300:2014 - Health informatics — Principles of mapping between terminological systems. https://www.iso.org/standard/51344.html
43. ISO 13119:2022 - Health informatics — Clinical knowledge resources — Metadata. https://www.iso.org/standard/78392.html
44. ISO 13940:2015 - Health informatics — System of concepts to support continuity of care. https://www.iso.org/standard/58102.html
45. ISO 13972:2022 - Health informatics — Clinical information models — Characteristics, structures and requirements. https://www.iso.org/standard/79498.html
46. ISO 14199:2015 - Health informatics — Information models — Biomedical Research Integrated Domain Group (BRIDG) Model. https://www.iso.org/standard/66767.html
47. ISO 16278:2016 - Health informatics — Categorial structure for terminological systems of human anatomy. https://www.iso.org/standard/56047.html
48. ISO/TS 21526:2019 - Health informatics — Metadata repository requirements (MetaRep). https://www.iso.org/standard/71041.html
49. ISO/TS 21564:2019 - Health informatics — Terminology resource map quality measures (MapQual). https://www.iso.org/standard/71088.html
50. ISO/HL7 21731:2014 - Health informatics — HL7 version 3 — Reference information model — Release 4. https://www.iso.org/standard/61454.html
51. ISO 27269:2021 - Health informatics — International patient summary. https://www.iso.org/standard/79491.html
52. ISO/HL7 27931:2009 - Data Exchange Standards — Health Level Seven Version 2.5 — An application protocol for electronic data exchange in healthcare environments. https://www.iso.org/standard/44428.html
53. ISO/HL7 27932:2009 - Data Exchange Standards — HL7 Clinical Document Architecture, Release 2. https://www.iso.org/standard/44429.html
54. ISO/HL7 27951:2009 - Health informatics — Common terminology services, release 1. https://www.iso.org/standard/44437.html
55. ISO/TS 23494-1:2023 - Biotechnology — Provenance information model for biological material and data — Part 1: Design concepts and general requirements. https://www.iso.org/standard/80715.html
56. ISO/TR 3985:2021 - Biotechnology — Data publication — Preliminary considerations and concepts. https://www.iso.org/standard/79690.html
57. ISO/CD TS 6201 - Health informatics — Personalized digital health — Framework. https://www.iso.org/standard/82107.html
58. ISO/PWI TS 6203. https://iss.rs/en/project/show/iso:proj:82109
59. ISO/TR 11147:2023 - Health informatics — Personalized digital health — Digital therapeutics health software systems. https://www.iso.org/standard/83767.html
60. ISO/CD 9472-10000 - Health informatics — Personalized health navigation — Part 10000: Architecture. https://www.iso.org/standard/83497.html
61. ISO/AWI TR 24305 - Health informatics — Guidelines for implementation of HL7/FHIR based on ISO 13940 and ISO 13606. https://www.iso.org/standard/78390.html
62. ISO 29585 (2023-06). https://www.beuth.de/de/norm/iso-29585/370015864
63. ISO 10781:2023 - Health informatics — HL7 Electronic Health Record-System Functional Model, Release 2.1 (EHR FM). https://www.iso.org/standard/84722.html
64. ISO 4454:2022 - Genomics informatics — Phenopackets: A format for phenotypic data exchange. https://www.iso.org/standard/79991.html


Acknowledgements

This work was done as part of the NFDI4Health Consortium ( www.nfdi4health.de ). We gratefully acknowledge the financial support of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – project number 442326535.

Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and Affiliations

Berlin Institute of Health at Charité – Universitätsmedizin Berlin, Core Facility Digital Medicine and Interoperability, Charitéplatz 1, 10117, Berlin, Germany

Carina Nina Vorisek, Sophie Anne Inès Klopfenstein, Paula Josephine Mayer & Sylvia Thun

Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institut für Medizinische Informatik, Charitéplatz 1, 10117, Berlin, Germany

Sophie Anne Inès Klopfenstein

Institut für Medizinische Informatik, Statistik und Epidemiologie (IMISE), Universität Leipzig, Leipzig, Germany

Matthias Löbe

Institut für Community Medicine, Universitätsmedizin Greifswald, Greifswald, Germany

Carsten Oliver Schmidt

Heidelberg Institute for Theoretical Studies (HITS gGmbH), Schloss-Wolfsbrunnenweg 35, 69118, Heidelberg, Germany

Martin Golebiewski


Contributions

S.A.I.K. and C.N.V. performed the literature search and analysis of possible standards, participating and conducting workshops, interviews and meetings, interpreting the data as well as writing and revising the manuscript. S.T. was part of the literature search and analysis and revised the manuscript. P.J.M. supported interpretation of the data as well as writing the manuscript. C.O.S. and M.G. collaborated on the development of the MDS and initial standards comparisons. All authors provided feedback on the manuscript.

Corresponding author

Correspondence to Carina Nina Vorisek.

Ethics declarations

Competing interests

S.T. is chair of HL7 Deutschland e.V. M.G. is convenor of ISO/TC 276/WG 5. The remaining authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Vorisek, C.N., Klopfenstein, S.A.I., Löbe, M. et al. Towards an Interoperability Landscape for a National Research Data Infrastructure for Personal Health Data. Sci Data 11 , 772 (2024). https://doi.org/10.1038/s41597-024-03615-3

Download citation

Received : 22 December 2023

Accepted : 05 July 2024

Published : 13 July 2024

DOI : https://doi.org/10.1038/s41597-024-03615-3


The scientific nature of work-based learning and research: an introduction to first principles

Higher Education, Skills and Work-Based Learning

ISSN : 2042-3896

Article publication date: 4 October 2019

Issue publication date: 20 January 2020

Purpose

The purpose of this paper is to explore the scientific nature of work-based learning (WBL) and research as operationalized in Professional Studies by examining first principles of scientific inquiry.

Design/methodology/approach

This paper introduces a Professional Studies program as it has been implemented at University of Southern Queensland in Australia and examines it from the perspective of five first principles of scientific inquiry: systematic exploration and reporting, use of models, objectivity, testability and applicability. The authors do so not to privilege the meritorious qualities of science or to legitimise WBL or its example in Professional Studies by conferring on them the status of science, but to highlight their systematised approach to learning and research.

Findings

If the authors define Professional Studies to mean the systematic inquiry of work-based people, processes and phenomena, evidence affirmatively suggests that it is scientific “in nature”.

Originality/value

WBL has been well documented, but its orientation to research, particularly mixed methods (MM) research through Professional Studies, and its adherence to first principles of science have never been explored; this paper begins to uncover the value of work-based pedagogical approaches to learning and research.

Keywords

  • Mixed methods
  • Work-based learning (WBL)
  • Work-based research
  • Professional studies
  • Scientific inquiry

Fergusson, L. , Shallies, B. and Meijer, G. (2020), "The scientific nature of work-based learning and research: An introduction to first principles", Higher Education, Skills and Work-Based Learning , Vol. 10 No. 1, pp. 171-186. https://doi.org/10.1108/HESWBL-05-2019-0060

Emerald Publishing Limited

Copyright © 2019, Emerald Publishing Limited


Rigorous scientific inquiry guided by creativity, curiosity and support: One of the last Renaissance scientists of our time

  • Published: 09 July 2024

  • David C. Muddiman
  • Lucinda R. Hittle

“Most of the work still to be done in science and the useful arts is precisely that which needs the knowledge and cooperation of many scientists . . . that is why it is necessary for scientists and technologists to meet . . . Even in those branches of knowledge which seem to have least relation and connection with one another.” –Antoine Lavoisier, Reflexions sur L’instruction Publique , 1793

David Michael Hercules passed away January 20, 2024, following a battle with cancer. David was born August 10, 1932, and spent his childhood in rural Pennsylvania. He developed his interest in science very early with the gift of a chemistry set and enjoyed tinkering with electronics. In addition to science, David was very interested in music, playing the cornet and French horn through high school and college. He considered pursuing a career in music, but science won out (Fig.  1 ).

Figure 1. Photo of David M. Hercules, 2023.

David received his Bachelor of Science degree at Juniata College in Huntingdon, PA in 1954. He then received his Ph.D. in analytical chemistry from the Massachusetts Institute of Technology (MIT) under the tutelage of Lockhart “Buck” Rogers. David’s academic pedigree can be traced back to Etienne de Clave, professor at the Jardin du Roi in Paris (Fig. 2). After completing his Ph.D., he joined the chemistry faculty at Juniata (1960–63) and then at MIT (1963–69), followed by positions at the University of Georgia, the University of Pittsburgh, and Vanderbilt University. At Pittsburgh and Vanderbilt, David led the chemistry departments, developing a reputation for being fierce yet fair. Throughout his career, David earned many honors and awards, including the Spectroscopy Society of Pittsburgh Award (1996), the American Chemical Society National Award in Surface Chemistry (1993), the American Chemical Society National Award in Analytical Chemistry (1986), and the Alexander von Humboldt Prize (1983), through which he collaborated over a period of several years with Professor Alfred Benninghoven at the University of Münster. He was a prolific writer, publishing over 500 research articles and garnering more than 17,000 citations. He enjoyed teaching as well as research, receiving the Excellence in Teaching Award from the Student Affiliates of the American Chemical Society.

Figure 2. Academic lineage of David M. Hercules (design by Alexandria Sohn, academic granddaughter of David M. Hercules).

David was the quintessential Renaissance scientist, constantly pushing himself and others to explore new avenues of applied science. He taught the scientists around him to challenge assumptions and consider new ways of thinking. He often stressed the importance of embracing and understanding new analytical tools and expanding the capabilities of existing tools to enable solving even more complex questions. His scientific career was ever changing—early in his career he explored chemiluminescence, pioneered bioassays for measuring glucose in blood and was the first to observe electrochemiluminescence, the generation of chemiluminescence from electrochemically generated species ( Science , 1964 , 145, 808–809). He then moved into surface science, materials characterization and catalysis, developing and applying numerous techniques including electron spectroscopy for chemical analysis (ESCA), Mössbauer spectroscopy, ion scattering spectroscopy (ISS), dynamic and static secondary ion mass spectrometry (SIMS), extended X-ray absorption fine structure (EXAFS), and Auger electron spectroscopy (AES). He then pivoted to mass spectrometry techniques including time-of-flight SIMS, laser microprobe mass analysis and matrix-assisted laser desorption/ionization (MALDI). He also ensured that future generations would understand the fundamentals of these techniques by writing educational pieces, all of which have withstood the test of time ( J. Chem. Ed. , 1984 , 61, 592; J. Chem. Ed. , 1984 , 61, 6, 483; J. Chem. Ed. , 1984 , 61, 5, 402). David is often credited (with others) with building the foundation of the burgeoning area of polymer mass spectrometry ( J. Chem. Ed. , 2007 , 84, 81–90). This eventually led him back to his bioassay beginnings, where he developed and applied TOF–SIMS and MALDI-TOF mass spectrometry to address important biological problems ( Mass Spectrom. Rev. , 1995 , 14, 6, 383–429). He would always say his career was a “random walk” through analytical chemistry. However, arguably his most significant accomplishment was the mentoring of more than 130 graduate students and post-docs, teaching them to be scholars and leaders (Fig. 3).

Figure 3. Group meeting, Department of Chemistry, University of Pittsburgh, circa early 1990s.

One of his mantras was that any measurement was worthless without a confirmatory technique as a reference. We often lose sight of this fundamental premise, becoming overly enamored with the wealth of data obtained from sophisticated analytical instrumentation. Dave ignored this glamour, relentlessly focusing on confirming or rejecting the hypothesis in question and moving on to the next.

Due to the breadth of Dave’s scientific interests, he was engaged in a host of industrial collaborations across a variety of areas including chemistry, polymers, clean energy and medicine. He consulted for the Central Intelligence Agency, Exxon Mobil, instrument laboratories, and W. S. Merrill and Company. His research group included graduate students, postdocs, visiting scientists and emeritus professors from academic, industrial and government institutions. Thanks to his Humboldt fellowship and consulting roles, this diversity was truly global, including scientists from Japan to Germany. He encouraged us all to build our own expansive networks, and connected his students to the best opportunities in academia and industry.

Rather than micromanaging, Dave practiced benign neglect to encourage autonomy and independent scientific inquiry in his group members. However, he did hold us accountable to make progress against our objectives, and would critically, and incisively, assess our manuscripts, research proposals and presentations in our weekly group meetings. He held himself to even higher standards—always raising his own bar in relentless pursuit of scientific scholarship and impact.

Dave’s greatest strengths, ultimately, were his authenticity and warmth. Even after his students graduated, he was only a phone call or e-mail away, providing sage advice, humor and compassion to help us navigate various challenges in our careers and personal lives. Of all the faculty and leadership positions he held over the years, he took his role as our academic father most seriously.

His tenacity, scholarship, and raw intellect contributed to numerous and disparate fields. Many of his disciples assembled at Vanderbilt on the occasion of Dave’s 70th birthday in 2002 (Fig. 4) and contributed to a special issue dedicated to his career entitled “Advances in Optical Spectroscopy and Mass Spectrometry” ( Anal. Bioanal. Chem. , 2002 , 373, 7).

Figure 4. 70th birthday celebration and scientific symposium for David M. Hercules, Department of Chemistry, Vanderbilt University, 2002.

FTMS Laboratory for Human Health Research, Department of Chemistry, North Carolina State University, Raleigh, NC, 27695, USA

David C. Muddiman

Discovery and Development Insights, LLC, Scotch Plains, NJ, USA

Lucinda R. Hittle

Correspondence to David C. Muddiman.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Muddiman, D.C., Hittle, L.R. Rigorous scientific inquiry guided by creativity, curiosity and support: One of the last Renaissance scientists of our time. Anal Bioanal Chem (2024). https://doi.org/10.1007/s00216-024-05417-3

Published : 09 July 2024

DOI : https://doi.org/10.1007/s00216-024-05417-3


Method of scientific reasoning Figgerits [ Answers ]

  • by Game Answer, 2024-02-24 (updated 2024-05-21)

Figgerits Answers


Striving for the right answers? Lucky you! You are in the right place and time to meet your ambition. In fact, this topic is meant to untwist the answers of Figgerits Method of scientific reasoning . Accordingly, we provide you with all the hints, cheats and answers needed to accomplish the required crossword and find the final solution phrase.

Figgerits Method of scientific reasoning Answers:

PS: Check out the topic below if you are seeking to solve another level’s answers:

Figgerits Answers

We are pleased to help you find the word you searched for. So, don’t you want to continue this great winning adventure? You can either go back to the main puzzle, Figgerits Special Rare Level 113 , or discover the word of the next clue here: Well-regulated and organized .

If you have any feedback or comments on this, please post it below. Thank you. Michael


Turk J Anaesthesiol Reanim. 2016 Aug; 44(4).

What is Scientific Research and How Can it be Done?

Scientific research consists of studies that should be systematically planned before being performed. In this review, the classification and description of scientific studies, the planning stage, randomisation and bias are explained.

Research conducted in a planned manner for the purpose of contributing to science through the systematic collection, interpretation and evaluation of data is called scientific research, and a researcher is the one who conducts it. The results obtained from a small group through scientific studies are generalised to society, and new information is revealed with respect to the diagnosis and treatment of disease and the reliability of applications. The purpose of this review is to provide information about the definition, classification and methodology of scientific research.

Before beginning scientific research, the researcher should determine the subject, plan the study and specify the methodology. The Declaration of Helsinki states that ‘the primary purpose of medical research on volunteers is to understand the causes, development and effects of diseases and to develop preventive, diagnostic and therapeutic interventions (methods, procedures and therapies). Even the best proven interventions should be evaluated continuously through investigations of their reliability, effectiveness, efficiency, accessibility and quality’ ( 1 ).

The questions, methods of response to questions and difficulties in scientific research may vary, but the design and structure are generally the same ( 2 ).

Classification of Scientific Research

Scientific research can be classified in several ways. Classification can be made according to the data collection technique, causality, the relationship with time and the medium in which the research is applied.

  • Observational
  • Experimental
  • Descriptive
  • Retrospective
  • Prospective
  • Cross-sectional
  • Social descriptive research ( 3 )

Another method is to classify the research according to its descriptive or analytical features. This review is written according to this classification method.

I. Descriptive research

  • Case Report: this is the most common type of descriptive study. It is the examination of a single case having a different quality in the society, e.g. conducting general anaesthesia in a pregnant patient with mucopolysaccharidosis.
  • Case Series: the description of repetitive cases having common features, for instance, case series involving interscapular pain related to neuraxial labour analgesia. Interestingly, malignant hyperthermia cases are not accepted as case series since they are rarely seen during historical development.
  • Surveillance Studies: the results obtained from databases that follow and record a health problem for a certain time, e.g. the surveillance of cross-infections during anaesthesia in the intensive care unit.

II. Analytical research

  • Observational studies: cohort, case-control and cross-sectional research
  • Interventional research: quasi-experimental and clinical research

Moreover, some studies may be experimental: the researcher intervenes, then waits for the result, observes and obtains the data. Experimental studies are, more often, in the form of clinical trials or laboratory animal trials ( 2 ).

Analytical observational research can be classified as cohort, case-control and cross-sectional studies.

  • Cohort Studies: firstly, the participants are examined with regard to the disease under investigation, and those who already have the disease are excluded from the study. The healthy participants are evaluated with regard to their exposure to the effect under study. The group (cohort) is then followed up for a sufficient period of time with respect to the occurrence of the disease, and the progress of the disease is studied. The risk of a healthy participant becoming sick is considered the incidence. In cohort studies, the risk of disease in the groups exposed and not exposed to the effect is calculated and expressed as a ratio. This ratio is called the relative risk, and it indicates the strength of the effect of the exposure on the disease (a worked calculation follows the cohort discussion below).

Cohort research may be observational or experimental. Following up patients prospectively is called a prospective cohort study : the results are obtained after the research starts. The researcher’s following-up of cohort subjects from a certain point towards the past is called a retrospective cohort study . Prospective cohort studies are more valuable than retrospective cohort studies: in the former, the researcher observes and records the data, plans the study beforehand and determines what data will be used, whereas in retrospective studies the research is based on already recorded data and no new data can be added.

In fact, ‘retrospective’ and ‘prospective’ do not describe whether a study is observational; they describe the relationship between the date on which the researcher began the study and the period of disease development. The most critical disadvantage of this type of research is that if the follow-up period is long, participants may leave the study of their own accord or due to physical conditions. Cohort studies that begin after exposure and before disease development are called ambidirectional studies . Public healthcare studies generally fall within this group, e.g. lung cancer development in smokers.
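As a concrete illustration of the relative risk described above, here is a minimal Python sketch; the counts, function name and example numbers are invented purely for illustration.

```python
# Minimal sketch: relative risk (RR) from a cohort study's 2x2 counts.
# All numbers below are hypothetical and for illustration only.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk of disease in the exposed group divided by the risk in the unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# e.g. 30 of 200 exposed participants develop the disease vs. 10 of 200 unexposed
rr = relative_risk(30, 200, 10, 200)
print(f"Relative risk: {rr:.1f}")  # 3.0: exposure triples the risk of disease
```

A relative risk above 1 indicates that the exposure increases the risk of disease, while a value near 1 indicates no association.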

  • Case-Control Studies: these are retrospective studies. They examine the cause-and-effect relationship from the effect to the cause. The detection or determination of data depends on the information recorded in the past. The researcher has no control over the data ( 2 ).

  • Cross-Sectional Studies: these are characterised by timing; the exposure and the result are evaluated simultaneously. They are advantageous since they can be concluded relatively quickly, although it may be difficult to obtain a reliable result for rare diseases. While cross-sectional studies are of limited use in studies involving anaesthesia (since the process of exposure is limited), they can be used in studies conducted in intensive care units.

  • Quasi-Experimental Research: they are conducted in cases in which a quick result is requested and the participants or research areas cannot be randomised, e.g. giving hand-wash training and comparing the frequency of nosocomial infections before and after hand wash.
  • Clinical Research: they are prospective studies carried out with a control group for the purpose of comparing the effect and value of an intervention in a clinical case. Clinical study and research have the same meaning. Drugs, invasive interventions, medical devices and operations, diets, physical therapy and diagnostic tools are relevant in this context ( 6 ).

Clinical studies are conducted by a responsible researcher, generally a physician. The research team may include other healthcare staff besides physicians. Clinical studies may be financed by healthcare institutes, drug companies, academic medical centres, volunteer groups, physicians, healthcare service providers and other individuals. They may be conducted in several places, including hospitals, universities, physicians’ offices and community clinics, based on the researcher’s requirements. The participants are made aware of the duration of the study before their inclusion. Clinical studies should include the evaluation of recommendations (drugs, devices and surgery) for the treatment of a disease or syndrome, the comparison of one or more applications, and the search for different ways of recognising a disease or condition and preventing its recurrence ( 7 ).

Clinical Research

In this review, clinical research is explained in more detail since it is the most valuable type of study in scientific research.

Clinical research starts with forming a hypothesis. A hypothesis can be defined as a claim put forward about the value of a population parameter based on sampling. There are two types of hypotheses in statistics.

  • The H0 hypothesis is called the control or null hypothesis. It is the hypothesis put forward in the research, and it implies that there is no difference between the groups under consideration. If this hypothesis is rejected at the end of the study, it indicates that a difference exists between the two treatments under consideration.
  • The H1 hypothesis is called the alternative hypothesis. It is hypothesised against the null hypothesis, and it implies that a difference exists between the groups under consideration. For example, consider the following hypothesis: drug A has an analgesic effect. The control or null hypothesis (H0) states that there is no difference between drug A and placebo with regard to the analgesic effect; the alternative hypothesis (H1) applies if a difference exists between drug A and placebo with regard to the analgesic effect (a worked sketch follows this list).
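To make the drug A versus placebo example concrete, the following minimal Python sketch tests H0 against H1 with a two-sample t-test; the pain scores are simulated, so the numbers are illustrative only.

```python
# Minimal sketch: testing H0 (no difference between drug A and placebo)
# against H1 (a difference exists) with a two-sample t-test.
# The pain scores are simulated purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
placebo = rng.normal(loc=6.0, scale=1.5, size=30)  # simulated pain scores
drug_a = rng.normal(loc=4.8, scale=1.5, size=30)   # simulated: drug A lowers pain

t_stat, p_value = stats.ttest_ind(drug_a, placebo)  # two-sided test by default
alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject H0; a difference exists")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject H0")
```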

The planning phase comes after the determination of a hypothesis. A clinical research plan is called a protocol . The protocol should state the reasons for the research, the number and qualities of the participants, the tests to be applied, the study duration and the information to be gathered from the participants, and conformity criteria should be developed.

The selection of participant groups to be included in the study is important. Inclusion and exclusion criteria of the study for the participants should be determined. Inclusion criteria should be defined in the form of demographic characteristics (age, gender, etc.) of the participant group and the exclusion criteria as the diseases that may influence the study, age ranges, cases involving pregnancy and lactation, continuously used drugs and participants’ cooperation.

The next stage is methodology. Methodology can be grouped under subheadings, namely, the calculation of number of subjects, blinding (masking), randomisation, selection of operation to be applied, use of placebo and criteria for stopping and changing the treatment.

I. Calculation of the Number of Subjects

The entire source from which the data are obtained is called a universe or population . A small group selected from a certain universe based on certain rules, and which is accepted to represent the universe from which it is selected to a high degree, is called a sample ; the characteristics on which data are collected are called variables, and a value that describes the entire population is called a parameter . Conducting a study on a sample rather than the entire population is easier and less costly.

Many factors influence the determination of the sample size. Firstly, the type of variable should be determined. Variables are classified as categorical (qualitative, non-numerical) or numerical (quantitative). Individuals in categorical variables are classified according to their characteristics. Categorical variables are indicated as nominal or ordinal (ordered). In nominal variables, the ordering of categories depends on the researcher’s preference: for instance, a female participant can be considered first and then the male participant, or vice versa. An ordinal (ordered) variable is ordered from small to large or vice versa (e.g. ordering obese patients based on their weights, from the lightest to the heaviest or vice versa). A categorical variable with exactly two categories is called binary or dichotomous (e.g. obese/non-obese); note that one participant may be described by several categorical variables at once (e.g. a participant may be both female and obese).

If the variable has numerical (quantitative) characteristics and these characteristics cannot be categorised, then it is called a numerical variable. Numerical variables are either discrete or continuous. For example, the number of operations with spinal anaesthesia represents a discrete variable. The haemoglobin value or height represents a continuous variable.

The statistical analyses to be employed depend on the type of variable. The determination of variable types is necessary for selecting the statistical method as well as the corresponding settings in software such as SPSS. While categorical variables are presented as numbers and percentages, numerical variables are summarised using measures such as the mean and standard deviation. Some cases require particular care: although the Visual Analogue Scale (VAS) appears categorical (qualitative, non-numerical), it yields a numerical value and is therefore classified as a numerical variable, and such values are averaged. A minimal sketch of both kinds of summary follows.
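The sketch below shows the two kinds of summary named above (counts and percentages for categorical variables, mean and standard deviation for numerical ones); the tiny data set and column names are invented for illustration.

```python
# Minimal sketch: summarising categorical vs. numerical variables.
# The data set is invented purely for illustration.
import pandas as pd

df = pd.DataFrame({
    "sex": ["F", "M", "F", "F", "M", "M"],                 # categorical (nominal)
    "asa_class": [1, 2, 2, 3, 1, 2],                       # categorical (ordinal)
    "haemoglobin": [12.1, 13.4, 11.8, 12.9, 14.2, 13.1],   # numerical (continuous)
})

# Categorical variables: numbers and percentages
print(df["sex"].value_counts())
print(df["sex"].value_counts(normalize=True) * 100)

# Numerical variables: mean and standard deviation
print(f"Hb: {df['haemoglobin'].mean():.1f} +/- {df['haemoglobin'].std():.1f} g/dL")
```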

Clinical research is carried out on the sample and generalised to the population. Accordingly, the number of samples should be correctly determined. Different sample size formulas are used on the basis of the statistical method to be used. When the sample size increases, error probability decreases. The sample size is calculated based on the primary hypothesis. The determination of a sample size before beginning the research specifies the power of the study. Power analysis enables the acquisition of realistic results in the research, and it is used for comparing two or more clinical research methods.

Because the formulas used for power analysis and sample size calculation differ according to the design, computer programs are generally used for these calculations.

It is necessary to know certain parameters in order to calculate the number of samples by power analysis.

  • Type-I (α) and type-II (β) error levels
  • Difference between groups (d-difference) and effect size (ES)
  • Allocation ratio of the groups
  • Direction of research hypothesis (H1)

a. Type-I (α) and Type-II (β) Error Levels

Two types of errors can be made while accepting or rejecting the H0 hypothesis in a hypothesis test. The type-I error (α) level is the probability of finding a difference at the end of the research when there is no difference between the two applications; in other words, it is the rejection of H0 when H0 is actually correct. It is known as the α error, and the p value obtained from the data is compared against this level. For instance, when the sample size is determined, the type-I error level is typically set at 0.05 or 0.01.

Another error that can be made during a hypothesis test is the type-II error: the acceptance of H0 when it is actually false. In other words, it is the probability of failing to find a difference when there is a difference between the two applications. The power of a test is the ability of that test to find a difference that actually exists; therefore, it is related to the type-II error level.

Since the type-II error risk is expressed as β, the power of the test is defined as 1−β. When the type-II error is 0.20, the power of the test is 0.80. Type-I (α) and type-II (β) error levels can be set deliberately; the reason for deliberately accepting such an error is the necessity of looking at the events from the opposite perspective. Both quantities can also be estimated empirically, as in the simulation below.
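The following minimal Python sketch estimates the type-I error rate and the power of a two-sample t-test by simulation; the sample size, effect size and number of trials are illustrative assumptions, not values from the text.

```python
# Minimal sketch: estimating alpha (type-I error rate) and power (1 - beta)
# of a two-sample t-test by simulation. All settings are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, alpha, trials = 30, 0.05, 5000

def rejection_rate(true_difference):
    """Fraction of simulated trials in which H0 is rejected."""
    rejections = 0
    for _ in range(trials):
        group_a = rng.normal(0.0, 1.0, n)
        group_b = rng.normal(true_difference, 1.0, n)
        if stats.ttest_ind(group_a, group_b).pvalue < alpha:
            rejections += 1
    return rejections / trials

print("Type-I error rate (no true difference):", rejection_rate(0.0))  # ~0.05
print("Power (true difference of 0.8 SD):", rejection_rate(0.8))       # ~0.85
```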

b. Difference between Groups and ES

ES is defined as the state in which a statistical difference also has clinical significance; ES ≥ 0.5 is desirable. The difference between groups is the absolute difference between the groups compared in the clinical research.

c. Allocation Ratio of Groups

The allocation ratio of the groups affects the required number of samples. To keep the total number of samples at the lowest level, the ratio should be kept at 1:1.

d. Direction of Hypothesis (H1)

The direction of the hypothesis in clinical research may be one-sided or two-sided. While one-sided hypotheses test for a difference in a specified direction, two-sided hypotheses test for a difference regardless of direction. For the same inputs, the power of the test is lower for two-sided hypotheses than for one-sided hypotheses.

After these four variables are determined, they are entered into an appropriate computer program and the number of samples is calculated. Statistical packaged software such as Statistica, NCSS and G-Power may be used for power analysis and for calculating the number of samples (a sketch using a common Python package follows). Other things being equal, the power of a study decreases when α is lowered, when the difference between groups or the ES is smaller, when the number of samples decreases or when the standard deviation increases; power is also lower for two-sided hypotheses. It is ethically appropriate to consider the determination of sample size, particularly in animal experiments, at the beginning of the study. The phase of the study is also important in determining the number of subjects to be included in drug studies. Usually, phase-I studies are used to determine the safety profile of a drug or product, and they are generally conducted on a few healthy volunteers. If no unacceptable toxicity is detected during phase-I studies, phase-II studies may be carried out. Phase-II studies are proof-of-concept studies conducted on a larger number (100–500) of volunteer patients. When the effectiveness of the drug or product is evident in phase-II studies, phase-III studies can be initiated. These are randomised, double-blinded, placebo- or standard-treatment-controlled studies. Volunteer patients are periodically followed up with respect to the effectiveness and side effects of the drug. This phase can generally last 1–4 years and is valuable for licensing and releasing the drug to the general market. Then, phase-IV studies begin, in which long-term safety is investigated (indication, dose, mode of application, safety, effectiveness, etc.) on thousands of volunteer patients.
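As a minimal sketch of such a calculation, the statsmodels Python package can solve for the per-group sample size from the four inputs discussed above; the values chosen here are illustrative.

```python
# Minimal sketch: sample size per group for a two-sample t-test,
# from alpha, power, effect size and allocation ratio. Values are illustrative.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,          # ES (Cohen's d); ES >= 0.5 is desirable per the text
    alpha=0.05,               # type-I error level
    power=0.80,               # 1 - beta, i.e. a type-II error of 0.20
    ratio=1.0,                # 1:1 allocation between the groups
    alternative="two-sided",  # direction of H1
)
print(f"Required sample size: about {n_per_group:.0f} per group")  # ~64
```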

II. Blinding (Masking) and Randomisation Methods

When the methodology of clinical research is prepared, precautions should be taken to prevent bias. For this reason, techniques such as randomisation and blinding (masking) are used. Comparative studies are the most suitable designs in clinical research.

Blinding Method

Keeping the treatments applied to the participants of clinical research unknown is called the blinding method . If the participants do not know which treatment they receive, it is a single-blind study; if the researcher does not know either, it is a double-blind study. When there is a probability of identifying which drug is given from the order of application, uninformed staff administer the drug; this is called in-house blinding. If the study drug is recognisable by its pharmaceutical form, a double-dummy blinding design is used: one group receives the intravenous drug together with a placebo tablet, and the comparison group receives a placebo intravenous preparation together with the active tablet, so that each group receives both the intravenous and the tablet forms. If a third party involved in the study (for example, the statistician) is also kept unaware of the treatment, it is called third-party blinding.

Randomisation Method

The selection of patients for the study groups should be random. Randomisation methods are used for such selection, which prevent conscious or unconscious manipulations in the selection of patients ( 8 ).

No factor pertaining to the patient should favour the assignment of one treatment over the other during randomisation. This characteristic is the most important difference separating randomised clinical studies from prospective and synchronous studies with experimental groups. Randomisation strengthens the study design and enables the determination of reliable scientific knowledge ( 2 ).

The easiest method is simple randomisation, e.g. determining the type of anaesthesia to be administered by tossing a coin. When the number of samples is high, this method creates a balanced distribution; when it is low, an imbalance arises between the groups. In this case, stratification and blocking have to be added to the randomisation. Stratification is the classification of patients one or more times according to prognostic features determined by the researcher, and blocking is the selection of a certain number of patients for each stratification process. The number of stratification processes should be determined at the beginning of the study.

As the number of stratification processes increases, performing the study and balancing the groups become difficult. For this reason, stratification characteristics and limitations should be effectively determined at the beginning of the study. It is not mandatory for the strata to have equal intervals. Despite all precautions, an imbalance might occur between the groups before beginning the research; in such circumstances, post-stratification or restandardisation may be conducted according to the prognostic factors (a minimal sketch of stratified block randomisation follows).
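The following minimal Python sketch illustrates stratified block randomisation: patients are stratified by one prognostic feature (here, ASA class, an assumed example) and assigned within shuffled blocks of four so that the groups stay balanced inside each stratum. The block size and stratum are illustrative choices.

```python
# Minimal sketch: stratified block randomisation with blocks of 4.
# The stratification feature (ASA class) and block size are illustrative choices.
import random

def new_block(block_size=4, arms=("A", "B")):
    """A shuffled block containing each treatment arm an equal number of times."""
    block = list(arms) * (block_size // len(arms))
    random.shuffle(block)
    return block

pending = {}  # stratum -> assignments remaining in the current block

def allocate(stratum):
    """Assign the next patient in this stratum, opening a new block when needed."""
    if not pending.get(stratum):
        pending[stratum] = new_block()
    return pending[stratum].pop()

random.seed(1)  # fixed seed so the example is reproducible
for patient, asa in enumerate(["I", "II", "I", "I", "II", "I"], start=1):
    print(f"patient {patient} (ASA {asa}) -> group {allocate(asa)}")
```

Within each stratum the two arms can never differ by more than half a block, which is the balancing property that simple randomisation alone does not guarantee.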

The main characteristic of applying blinding (masking) and randomisation is the prevention of bias. Therefore, it is worthwhile to comprehensively examine bias at this stage.

Bias and Chicanery

While conducting clinical research, errors can be introduced voluntarily or involuntarily at a number of stages, such as design, population selection, calculation of the number of samples, non-compliance with the study protocol, data entry and selection of the statistical method. Bias is the taking of sides by individuals in line with their own decisions, views and ideological preferences ( 9 ). For an error to lead to bias, it has to be a systematic error. Systematic errors in controlled studies generally cause the results of one group to move in a different direction from the other. It has to be understood that scientific research is generally prone to errors; however, random errors (in other words, ‘the luck factor’), in which taking sides is unintended, do not lead to bias ( 10 ).

Another issue, different from bias, is chicanery. It is defined as voluntarily changing the interventions, results and data of patients in an unethical manner, or copying data from other studies. By contrast, bias is not necessarily conscious.

If unexpected results or outliers are found while the study is analysed, such data should, if possible, be re-included in the study, since the complete exclusion of data from a study endangers its reliability. In such a case, the evaluation needs to be made with and without the outliers. If no difference is found, the issue is insignificant; however, if there is a difference, the results with the outliers are re-evaluated. If there is no error, the outlier is included in the study (as the outlier may itself be a valid result). It should be noted that re-evaluation of data in anaesthesiology is not possible.

Statistical evaluation methods should be determined at the design stage so as not to encounter unexpected results in clinical research. In research that is time-consuming and involves several samples, the data should be evaluated before the end of the study, without going into detail. This is called an interim analysis . The date of the interim analysis should be determined at the beginning of the study. The purpose of an interim analysis is to prevent unnecessary cost and effort, since it may be necessary to conclude the research after the interim analysis, e.g. in studies in which there is no possibility of validating the hypothesis at the end, or when different side effects of the drug occur. The accuracy of the hypothesis and the number of samples are compared. Statistical significance levels in interim analysis are very important: if the data are significant at the interim analysis, the hypothesis is considered validated even if the result turns out to be insignificant after the date of the analysis.

Another important point to be considered is the necessity of concluding the participants’ treatment within the period specified in the study protocol. When the result of the study is achieved earlier or unexpected situations develop, the treatment is concluded earlier. Moreover, a participant may quit the study of their own accord, may die, or unpredictable situations (e.g. pregnancy) may develop. A participant can also quit the study whenever they want, even if the study has not ended ( 7 ).

If the results of a study are contrary to already known or expected results, the study suggesting the contradiction may be held to a higher expected quality level than the studies supporting what is already known on that subject. This type of bias is called confirmation bias. The presence of well-known mechanisms, and logical inference from them, may create problems in the evaluation of data. This is called plausibility bias.

Another type of bias is expectation bias: if a result different from the known results has been achieved and it goes against the editor’s expectations, it can be challenged. Bias may also be introduced during the publication of studies, for example by publishing only positive results, selecting study results in a way that supports a particular view, or preventing publication. Some editors may only publish research that extols positive results or the results they desire.

Bias may be introduced for advertisement or economic reasons. Economic pressure may be applied on the editor, particularly in the cases of studies involving drugs and new medical devices. This is called commercial bias.

In recent years, before beginning a study, it has been recommended to record it on the Web site www.clinicaltrials.gov for the purpose of facilitating systematic interpretation and analysis in scientific research, informing other researchers, preventing bias, provision of writing in a standard format, enhancing contribution of research results to the general literature and enabling early intervention of an institution for support. This Web site is a service of the US National Institutes of Health.

The last stage in the methodology of clinical studies is the selection of the intervention to be conducted. Placebo use has an important place among interventions. In Latin, placebo means ‘I shall please’. In the medical literature, it refers to substances that are not curative, do not have active ingredients and come in various pharmaceutical forms. Although placebos do not have active drug characteristics, they have shown effective analgesic characteristics, particularly in algology applications; further, their use prevents bias in comparative studies. If a placebo has a positive impact on a participant, it is called the placebo effect ; on the contrary, if it has a negative impact, it is called the nocebo effect . Another type of therapy that can be used in clinical research is sham application. Although the researcher does not treat the patient, the researcher may compare those who receive therapy with those who undergo sham. It has been seen that sham therapies also exhibit a placebo effect; in particular, sham therapies are used in acupuncture applications ( 11 ). While a placebo is a substance, sham is a type of clinical application.

Ethically, the patient has to receive appropriate therapy. For this reason, if placebo use prevents effective treatment, it causes great problems with regard to patient health and legality.

Before medical research is conducted with human subjects, predictable risks, drawbacks and benefits must be evaluated for individuals or groups participating in the study. Precautions must be taken for reducing the risk to a minimum level. The risks during the study should be followed, evaluated and recorded by the researcher ( 1 ).

After the methodology for a clinical study is determined, application to the ethics committee forms the next stage. The purpose of the ethics committee is to protect the rights, safety and well-being of the volunteers taking part in the clinical research, considering the scientific method and the concerns of society. The ethics committee examines the studies presented to it in a timely, comprehensive and independent manner, with regard to ethics and science, in line with the Declaration of Helsinki and national and international standards concerning ‘Good Clinical Practice’. The ethics committee should be formed without any kind of prejudice, and it examines applications with regard to ethics and science within the framework of the Regulation on Clinical Trials and Good Clinical Practice ( www.iku.com ). The documents to be presented to the ethics committee are the research protocol, the volunteer consent form, the budget contract, the Declaration of Helsinki, the curricula vitae of the researchers, similar or explanatory literature samples, the supporting institution approval certificate and the patient follow-up form.

Only one member of the same family (sibling, parent, child or spouse) can serve on the same ethics committee. A rector, vice rector, dean, deputy dean, provincial healthcare director or chief physician cannot be a member of the ethics committee.

Members of the ethics committee can work as researchers or coordinators in clinical research. However, during meetings concerning research in which they are researchers or coordinators, they must leave the session, and they cannot sign off on the decisions. If so many members of an ethics committee are involved in a particular research project that it is impossible to take a decision, the clinical research is presented to another ethics committee in the same province. If there is no other ethics committee in the same province, an ethics committee in the closest settlement is approached.

Thereafter, researchers need to inform the participants using an informed consent form. This form should explain the content of clinical study, potential benefits of the study, alternatives and risks (if any). It should be easy, comprehensible, conforming to spelling rules and written in plain language understandable by the participant.

This form assists the participants in deciding whether to participate in the study, and it should aim to protect them. A participant should be included in the study only after signing the informed consent form, and can quit the study whenever they wish, even if the study has not ended ( 7 ).

Peer-review: Externally peer-reviewed.

Author Contributions: Concept - C.Ö.Ç., A.D.; Design - C.Ö.Ç.; Supervision - A.D.; Resource - C.Ö.Ç., A.D.; Materials - C.Ö.Ç., A.D.; Analysis and/or Interpretation - C.Ö.Ç., A.D.; Literature Search - C.Ö.Ç.; Writing Manuscript - C.Ö.Ç.; Critical Review - A.D.; Other - C.Ö.Ç., A.D.

Conflict of Interest: No conflict of interest was declared by the authors.

Financial Disclosure: The authors declared that this study has received no financial support.


Science and the scientific method: Definitions and examples

Here's a look at the foundation of doing science — the scientific method.


Science is a systematic and logical approach to discovering how things in the universe work. It is also the body of knowledge accumulated through the discoveries about all the things in the universe. 

The word "science" is derived from the Latin word "scientia," which means knowledge based on demonstrable and reproducible data, according to the Merriam-Webster dictionary . True to this definition, science aims for measurable results through testing and analysis, a process known as the scientific method. Science is based on fact, not opinion or preferences. The process of science is designed to challenge ideas through research. One important aspect of the scientific process is that it focuses only on the natural world, according to the University of California, Berkeley . Anything that is considered supernatural, or beyond physical reality, does not fit into the definition of science.

The scientific method

When conducting research, scientists use the scientific method to collect measurable, empirical evidence in an experiment related to a hypothesis (often in the form of an if/then statement) that is designed to support or contradict a scientific theory .

"As a field biologist, my favorite part of the scientific method is being in the field collecting the data," Jaime Tanner, a professor of biology at Marlboro College, told Live Science. "But what really makes that fun is knowing that you are trying to answer an interesting question. So the first step in identifying questions and generating possible answers (hypotheses) is also very important and is a creative process. Then once you collect the data you analyze it to see if your hypothesis is supported or not."

[Illustration: the steps in the scientific method.]

The steps of the scientific method go something like this, according to Highline College :

  • Make an observation or observations.
  • Form a hypothesis — a tentative description of what's been observed, and make predictions based on that hypothesis.
  • Test the hypothesis and predictions in an experiment that can be reproduced.
  • Analyze the data and draw conclusions; accept or reject the hypothesis or modify the hypothesis if necessary.
  • Reproduce the experiment until there are no discrepancies between observations and theory. "Replication of methods and results is my favorite step in the scientific method," Moshe Pritsker, a former post-doctoral researcher at Harvard Medical School and CEO of JoVE, told Live Science. "The reproducibility of published experiments is the foundation of science. No reproducibility — no science."

Some key underpinnings to the scientific method:

  • The hypothesis must be testable and falsifiable, according to North Carolina State University . Falsifiable means that there must be a possible negative answer to the hypothesis.
  • Research must involve deductive reasoning and inductive reasoning . Deductive reasoning is the process of using true premises to reach a logical true conclusion while inductive reasoning uses observations to infer an explanation for those observations.
  • An experiment should include an independent variable (the factor the researcher deliberately changes) and a dependent variable (the outcome that is measured in response), according to the University of California, Santa Barbara .
  • An experiment should include an experimental group and a control group. The control group is what the experimental group is compared against, according to Britannica . (A minimal simulation after this list puts these pieces together.)
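The pieces above can be combined in a short simulation: a falsifiable hypothesis ("the fertiliser increases plant growth"), an independent variable (fertiliser), a dependent variable (measured growth) and a control group for comparison. All data are simulated, so this is a sketch rather than a real experiment.

```python
# Minimal sketch: an experimental group vs. a control group for a
# falsifiable hypothesis. The growth data are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
control = rng.normal(10.0, 2.0, 25)       # growth in cm without fertiliser
experimental = rng.normal(12.0, 2.0, 25)  # growth in cm with fertiliser

t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"p = {p_value:.4f}")
print("hypothesis supported" if p_value < 0.05 else "hypothesis not supported")
```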

Hypothesis, theory and law

The process of generating and testing a hypothesis forms the backbone of the scientific method. When an idea has been confirmed over many experiments, it can be called a scientific theory. While a theory provides an explanation for a phenomenon, a scientific law provides a description of a phenomenon, according to The University of Waikato . One example would be the law of conservation of energy, which is the first law of thermodynamics that says that energy can neither be created nor destroyed.

A law describes an observed phenomenon, but it doesn't explain why the phenomenon exists or what causes it. "In science, laws are a starting place," said Peter Coppinger, an associate professor of biology and biomedical engineering at the Rose-Hulman Institute of Technology. "From there, scientists can then ask the questions, 'Why and how?'"

Laws are generally considered to be without exception, though some laws have been modified over time after further testing found discrepancies. For instance, Newton's laws of motion describe everything we've observed in the macroscopic world, but they break down at the subatomic level.

This does not mean theories are not meaningful. For a hypothesis to become a theory, scientists must conduct rigorous testing, typically across multiple disciplines by separate groups of scientists. Saying something is "just a theory" confuses the scientific definition of "theory" with the layperson's definition. To most people a theory is a hunch. In science, a theory is the framework for observations and facts, Tanner told Live Science.

A brief history of science

[Image: A Copernican heliocentric solar system, from 1708, showing the orbit of the moon around the Earth, and the orbits of the Earth and planets round the sun, including Jupiter and its moons, all surrounded by the 12 signs of the zodiac.]

The earliest evidence of science can be found as far back as records exist. Early tablets contain numerals and information about the solar system , which were derived by using careful observation, prediction and testing of those predictions. Science became decidedly more "scientific" over time, however.

1200s: Robert Grosseteste developed the framework for the proper methods of modern scientific experimentation, according to the Stanford Encyclopedia of Philosophy. His works included the principle that an inquiry must be based on measurable evidence that is confirmed through testing.

1400s: Leonardo da Vinci began his notebooks in pursuit of evidence that the human body is microcosmic. The artist, scientist and mathematician also gathered information about optics and hydrodynamics.

1500s: Nicolaus Copernicus advanced the understanding of the solar system with his discovery of heliocentrism. This is a model in which Earth and the other planets revolve around the sun, which is the center of the solar system.

1600s: Johannes Kepler built upon those observations with his laws of planetary motion. Galileo Galilei improved on a new invention, the telescope, and used it to study the sun and planets. The 1600s also saw advancements in the study of physics as Isaac Newton developed his laws of motion.

1700s: Benjamin Franklin discovered that lightning is electrical. He also contributed to the study of oceanography and meteorology. The understanding of chemistry also evolved during this century as Antoine Lavoisier, dubbed the father of modern chemistry, developed the law of conservation of mass.

1800s: Milestones included Alessandro Volta's discoveries regarding the electrochemical series, which led to the invention of the battery. John Dalton also introduced atomic theory, which stated that all matter is composed of atoms that combine to form molecules. The basis of the modern study of genetics advanced as Gregor Mendel unveiled his laws of inheritance. Later in the century, Wilhelm Conrad Röntgen discovered X-rays, while Georg Ohm's law provided the basis for understanding how to harness electrical charges.

1900s: The discoveries of Albert Einstein, who is best known for his theory of relativity, dominated the beginning of the 20th century. Einstein's theory of relativity is actually two separate theories. His special theory of relativity, outlined in the 1905 paper "On the Electrodynamics of Moving Bodies," concluded that the passage of time depends on an object's speed relative to the observer's frame of reference. His second theory, general relativity, which he published as "The Foundation of the General Theory of Relativity," advanced the idea that matter causes space to curve.
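The special-relativistic claim about time has a compact worked form, the time-dilation formula:

```latex
% Time dilation: \Delta t is the interval on a clock moving at speed v,
% \Delta t' is the longer interval measured by a stationary observer,
% and c is the speed of light. As v approaches c, the denominator shrinks
% and the moving clock runs ever slower relative to the observer.
\[
  \Delta t' = \frac{\Delta t}{\sqrt{1 - v^{2}/c^{2}}}
\]
```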

In 1952, Jonas Salk developed the polio vaccine, which reduced the incidence of polio in the United States by nearly 90%, according to Britannica. The following year, James D. Watson and Francis Crick discovered the structure of DNA, a double helix formed by base pairs attached to a sugar-phosphate backbone, according to the National Human Genome Research Institute.

2000s: The 21st century saw the first draft of the human genome completed, leading to a greater understanding of DNA. This advanced the study of genetics, its role in human biology and its use as a predictor of diseases and other disorders, according to the National Human Genome Research Institute.

  • This video from the City University of New York delves into the basics of what defines science.
  • Learn about what makes science science in this book excerpt from Washington State University.
  • This resource from the University of Michigan–Flint explains how to design your own scientific study.

Merriam-Webster Dictionary, "Scientia." 2022. https://www.merriam-webster.com/dictionary/scientia

University of California, Berkeley, "Understanding Science: An Overview." 2022. https://undsci.berkeley.edu/article/0_0_0/intro_01

Highline College, "Scientific method." July 12, 2015. https://people.highline.edu/iglozman/classes/astronotes/scimeth.htm

North Carolina State University, "Science Scripts." https://projects.ncsu.edu/project/bio183de/Black/science/science_scripts.html

University of California, Santa Barbara, "What is an Independent variable?" October 31, 2017. http://scienceline.ucsb.edu/getkey.php?key=6045

Encyclopedia Britannica, "Control group." May 14, 2020. https://www.britannica.com/science/control-group

The University of Waikato, "Scientific Hypothesis, Theories and Laws." https://sci.waikato.ac.nz/evolution/Theories.shtml

Stanford Encyclopedia of Philosophy, "Robert Grosseteste." May 3, 2019. https://plato.stanford.edu/entries/grosseteste/

Encyclopedia Britannica, "Jonas Salk." October 21, 2021. https://www.britannica.com/biography/Jonas-Salk

National Human Genome Research Institute, "Phosphate Backbone." https://www.genome.gov/genetics-glossary/Phosphate-Backbone

National Human Genome Research Institute, "What is the Human Genome Project?" https://www.genome.gov/human-genome-project/What

Live Science contributor Ashley Hamer updated this article on Jan. 16, 2022.


The scientific nature of work-based learning and research: an introduction to first principles

Higher Education, Skills and Work-Based Learning

ISSN: 2042-3896

Article publication date: 4 October 2019

Issue publication date: 20 January 2020

Purpose

The purpose of this paper is to explore the scientific nature of work-based learning (WBL) and research as operationalized in Professional Studies by examining first principles of scientific inquiry.

Design/methodology/approach

This paper introduces a Professional Studies program as it has been implemented at University of Southern Queensland in Australia and examines it from the perspective of five first principles of scientific inquiry: systematic exploration and reporting, use of models, objectivity, testability and applicability. The authors do so not to privilege the meritorious qualities of science or to legitimise WBL or its example in Professional Studies by conferring on them the status of science, but to highlight their systematised approach to learning and research.

Findings

If Professional Studies is defined to mean the systematic inquiry of work-based people, processes and phenomena, the evidence suggests that it is scientific "in nature".

Originality/value

WBL has been well documented, but its orientation to research, particularly mixed methods (MM) research through Professional Studies, and its adherence to first principles of science have never been explored; this paper begins to uncover the value of work-based pedagogical approaches to learning and research.

  • Mixed methods
  • Work-based learning (WBL)
  • Work-based research
  • Professional studies
  • Scientific inquiry

Fergusson, L., Shallies, B. and Meijer, G. (2020), "The scientific nature of work-based learning and research: An introduction to first principles", Higher Education, Skills and Work-Based Learning, Vol. 10 No. 1, pp. 171-186. https://doi.org/10.1108/HESWBL-05-2019-0060

Emerald Publishing Limited

Copyright © 2019, Emerald Publishing Limited


Collaboration-based scientific productivity: evidence from Nobel laureates

  • Published: 15 June 2024
  • Volume 129, pages 3735–3768 (2024)


  • Chih-Hsing Liu
  • Jun-You Lin (ORCID: orcid.org/0000-0002-8579-4457)


Abstract

Nobel laureates offer a range of expertise to researchers interested in generating scientific productivity by capitalizing on their ability to collaborate with other outstanding researchers. However, whether and how a scholar's research areas can be leveraged for scientific productivity has not been examined empirically; there has been scant conceptualization of the underlying processes responsible for utilizing research areas, and the results have been equivocal. We propose and test the intermediate mechanisms of number of collaborations and collaboration diversity as two distinctive capabilities that may explain how a research area drives a scientist's productivity. Our conceptual model posits that the link between research areas and scientific productivity is neither simple nor direct. An empirical test on Nobel laureates demonstrates the complexity of innovation generation. Two pathways from research areas to scientific productivity are revealed: number of collaborations and collaboration diversity both mediate the link, but the role of research areas is negatively moderated by the scholar's dependence on external knowledge in their academic collaborations. Our theory is thereby supported. Finally, the findings and contributions are discussed.
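The mediation logic in this abstract can be illustrated with a minimal regression sketch. Note that the synthetic data, variable names, and coefficients below are hypothetical stand-ins chosen for demonstration; they are not the authors' dataset, model specification, or results.

```python
# Hypothetical sketch of a simple mediation analysis:
# research-area breadth (X) -> number of collaborations (M) -> productivity (Y).
# All data and effect sizes are invented for illustration only.
import numpy as np

rng = np.random.default_rng(7)
n = 1000
X = rng.normal(size=n)                      # research-area breadth
M = 0.6 * X + rng.normal(size=n)            # mediator: number of collaborations
Y = 0.5 * M + 0.1 * X + rng.normal(size=n)  # outcome: scientific productivity

def slopes(y, *cols):
    """Ordinary least squares with an intercept; returns the slope estimates."""
    A = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:]

total = slopes(Y, X)[0]           # step 1: total effect of X on Y
x_to_m = slopes(M, X)[0]          # step 2: effect of X on the mediator M
direct, m_to_y = slopes(Y, X, M)  # step 3: X and M jointly predicting Y

# Mediation shows up as the X coefficient shrinking once M is controlled for:
print(f"total effect of X:  {total:.2f}")
print(f"direct effect of X: {direct:.2f}")
print(f"indirect (via M):   {x_to_m * m_to_y:.2f}")
```

A moderated version of the same idea would add an interaction term to step 3 (for example, X multiplied by a measure of dependence on external knowledge) and inspect its sign.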



Acknowledgements

I would like to thank the Editor-in-Chief of Scientometrics, Professor Wolfgang Glänzel, and the two anonymous reviewers whose constructive criticism led to significant improvements in the paper. Financial support from the Ministry of Science and Technology, R.O.C. (MOST 111-2410-H-180-002) is highly appreciated.

Funding was provided by National Science and Technology Council (Grant No. MOST 111-2410-H-180-002)

Author information

Authors and Affiliations

Department of Tourism Management, National Kaohsiung University of Science and Technology, Kaohsiung, Taiwan

Chih-Hsing Liu

Department of Leisure and Recreation Management, Ming Chuan University, Taipei, Taiwan

Department of Management and Information, National Open University, 172, Chung-Cheng Road, Lu-Chow District, 247, New Taipei City, Taiwan, ROC

Jun-You Lin


Corresponding author

Correspondence to Jun-You Lin.


About this article

Liu, C.-H., Lin, J.-Y. Collaboration-based scientific productivity: evidence from Nobel laureates. Scientometrics 129, 3735–3768 (2024). https://doi.org/10.1007/s11192-024-05062-8

Received: 31 May 2023

Accepted: 16 May 2024

Published: 15 June 2024

Issue Date: July 2024

DOI: https://doi.org/10.1007/s11192-024-05062-8


  • Nobel Prize
  • Nobel laureates
  • Research area
  • Academic collaboration
  • Collaboration diversity
  • Scientific productivity

Research center Figgerits [ Answers ]

  • by Game Answer
  • 2022-05-16 (updated 2024-08-01)

Figgerits Answers

Icon of the game Figgerits © Hitapps.

Striving for the right answers? Lucky you! You are in the right place to meet your ambition: this topic untangles the answers to the Figgerits clue Research center. Accordingly, we provide all the hints, cheats and answers needed to complete the required crossword and find the final solution phrase. (The toy sketch below shows how the mechanic resolves.)
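As an illustration of how this kind of puzzle resolves (the clue word, slot numbers, and phrase below are invented for the example, not the actual Level 23 data): every solved definition pins letters to numbered slots, and the accumulated number-to-letter key then spells out the hidden phrase.

```python
# Toy model of the Figgerits mechanic, with made-up data: each solved clue
# word assigns letters to numbered slots; the shared number -> letter key
# then decodes the solution phrase.
solved = {
    "THESIS": [1, 2, 3, 4, 5, 4],  # T=1, H=2, E=3, S=4, I=5 (slot 4 repeats as S)
}

key: dict[int, str] = {}
for word, slots in solved.items():
    for slot, letter in zip(slots, word):
        assert key.get(slot, letter) == letter  # a slot must always be one letter
        key[slot] = letter

phrase_slots = [1, 2, 3]  # the encoded phrase, written as slot numbers
print("".join(key.get(s, "_") for s in phrase_slots))  # unsolved slots print "_"; here -> THE
```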

Figgerits Research center Answers:

PS: Check out the topic below if you are seeking to solve another level's answers:

We are pleased to help you find the word you searched for. So, don't you want to continue this great winning adventure? You can either go back to the Main Puzzle: Figgerits Level 23 or discover the word of the next clue here: (syn.) High-and-mighty, haughty.

If you have any feedback or comments on this, please post them below. Thank you. Michael



COMMENTS

  1. Scientific work based on research Figgerits [Answers]

    You can either go back to the Main Puzzle: Figgerits Level 159 or discover the word of the next clue here: (syn.) Boring, dull, insipid. If you have any feedback or comments on this, please post it below. Thank you. This is the answer to the clue: Scientific work based on research Figgerits.

  2. Scientific work based on research

    Figgerits is a fantastic logic puzzle game available for both iOS and Android devices. It is developed by Hitapps Inc and has over 300 levels for you to solve and enjoy. If you are stuck on Scientific work based on research, then no worries: on this page you will find all of the Figgerits answers and solutions. This definition is part ...

  3. Scientific work based on research Figgerits

    Scientific work based on research. This clue has appeared in a Figgerits puzzle. Increase your vocabulary and your knowledge while using words from different topics. There is a variety of topics to choose from, such as Proverbs, Historical facts, Space, Fauna, Sports and more.

  4. Figgerits Scientific work based on research Answer

    The Figgerits Scientific work based on research answer above is from level 159. Visit the Figgerits level 159 answers page if you need help with any other clues in this particular puzzle to help you figure out the cryptogram. As new levels are added to the game, ...

  5. Scientific work based on research Figgerits

    Scientific work based on research. Here you go for the Scientific work based on research Figgerits answer. This clue was last seen on Figgerits Level 160 Answers. The answer we have for Scientific work based on research in our database has a total of 6 letters. At Answers.org, we understand that sometimes even the most seasoned puzzle solvers can encounter challenges they can't overcome.

  6. Scientific work based on research: Figgerits Answer + Phrase

    Figgerits Scientific work based on research answers with the phrase and cheats are provided on this page. This game is developed by Figgerits - Word Puzzle Game Hitapps and is available on the Google Play Store & Apple App Store. Figgerits is a kind of cross-logic and word puzzle game for adults that will blow your mind and train brainpower.

  7. Scientific work based on research Figgerits

    Need someone to help, or just stuck on some level? Sooner or later you will need help to pass this challenging game, and our website is here to equip you with Figgerits Scientific work based on research answers and other useful information like tips, solutions and cheats. In addition to Figgerits, the developer Hitapps has created other amazing ...

  8. Scientific work based on research Figgerits

    Scientific work based on research. On this page you may find the Scientific work based on research answer. This clue was last seen on Figgerits Level 160 Answers. The answer we have for Scientific work based on research in our database has a total of 6 letters. If something is wrong or missing, kindly let us know and we will be more than happy to help you out!

  9. Scientific work based on research Figgerits Answer

    Answer of Figgerits Scientific work based on research: THESIS. Please remember that I'll always mention the master topic of the game: Figgerits Answers, the link to the previous level: Spot for vows Figgerits, and the link to the main level: Figgerits answers level 159. You may want to know the content of nearby topics, so these links will ...

  10. Scientific work based on research Figgerits

    Scientific work based on research Figgerits. Hi, and thank you for visiting our website; here you will be able to find all the answers for the Figgerits game. Figgerits is the new wonderful word game developed by Hitapps, known for its best puzzle word games on the Android and Apple stores.

  11. Figgerits: What's a scientific research space?

    Figgerits isn't only a logic puzzle and smart game; it's a kind of cross-logic and word puzzle game for adults that will blow your mind and train brainpower.

  12. scientific work based on research figgerits

    FiggeritsAnswers.com. Scientific work based on research. If you already solved this puzzle and are looking for other definitions from the same level then head over to Figgerits Le

  13. Research center Figgerits

    Research center. Here you go for the Research center Figgerits answer. This clue was last seen on Figgerits Level 23 Answers. The answer we have for Research center in our database has a total of 9 letters. At Answers.org, we understand that sometimes even the most seasoned puzzle solvers can encounter challenges they can't overcome. That's why we've compiled a complete guide to Figgerits, providing ...

  14. Evaluating the Evidence for Fidget Toys in the Classroom

    Fidget toys have been marketed as universal educational supports in the absence of a scientific evidence base. This article gives an overview of the existing literature on the effect of fidget toy use on student attention, behavior, and learning, and a review of two competing theoretical approaches to fidget toys: sensory processing theory and cognitive load theory.

  15. What is Scientific Research and How Can it be Done?

    Research conducted for the purpose of contributing towards science by the systematic collection, interpretation and evaluation of data, and that too in a planned manner, is called scientific research; a researcher is the one who conducts this research. The results obtained from a small group through scientific studies are socialised, and new ...

  16. What's a scientific research space? Figgerits

    In Figgerits there are puzzles for everyone; each day there is a new puzzle, and there are daily rewards to collect.

  17. Science and the scientific method: Definitions and examples

    Science is a systematic and logical approach to discovering how things in the universe work. Scientists use the scientific method to make observations, form hypotheses and gather evidence in an ...

  18. 15: Scientific Reasoning

    We begin with a description of science and a review of some of the methods of doing science that were introduced in previous chapters. 15.2.1: Testability, Accuracy, and Precision. 15.2.2: Reliability of Scientific Reporting. 15.2.3: Causal Explanations vs. Causal Arguments. 15.2.4: Good Evidence.

  19. The scientific nature of work-based learning and research: An

    The authors do so not to privilege the meritorious qualities of science or to legitimise WBL or its example in Professional Studies by conferring on them the status of science, but to highlight their systematised approach to learning and research. If the authors define Professional Studies to mean the systematic inquiry of work-based people ...

  20. Collaboration-based scientific productivity: evidence from Nobel

    Nobel laureates offer a range of expertise to researchers interested in generating scientific productivity by capitalizing on their ability to collaborate with other outstanding researchers. However, current knowledge on whether and how a scholar's research areas can be leveraged for scientific productivity has not been examined empirically. There has been scant conceptualization of the ...

  21. Research center Figgerits [Answers]

    You can either go back to the Main Puzzle: Figgerits Level 23 or discover the word of the next clue here: (syn.) High-and-mighty, haughty. If you have any feedback or comments on this, please post it below. Thank you. Michael. This is the answer to the clue: Research center Figgerits.

  22. Attosecond electron microscopy and diffraction

    In addition, this material is based upon work partially supported by the Air Force Office of Scientific Research under award numbers FA9550-19-1-0025 and FA9550-22-1-0494. We are also grateful to the W.M. Keck Foundation for supporting this project with a Science and Engineering award given to M. Hassan.