Honesty in Research: Experimental Results and Conclusions

Exploring Academic Integrity in Your Research: A Tutorial


What is academic honesty?

Academic honesty ensures acknowledgement of other people’s hard work and thought. The International Center for Academic Integrity defines it as “a commitment, even in the face of adversity, to six fundamental values: honesty, trust, fairness, respect, responsibility, and courage. From these values flow principles of behavior that enable academic communities to translate ideals to action.”

Different cultures and traditions often have distinct definitions of what behaviors constitute academic honesty. For example, in some cultures, it is considered a sign of respect to use the exact wording of a well-known thinker, and attribution is considered unnecessary. However, that is not an accepted practice for scholars in the United States.

Book cover "Standing in the Shadow of Giants: Plagiarism, Authors, Collaborators"

To learn more about cultural differences with regard to academic honesty, check out this book: Howard, Rebecca Moore. Standing in the Shadow of Giants: Plagiarists, Authors, Collaborators. Stamford, Conn.: Ablex Publishing, 1999.

Source: University of Oregon Libraries, “Exploring Academic Integrity in Your Research: A Tutorial,” https://researchguides.uoregon.edu/academic-integrity (last updated 17 July 2023).



Intellectual Honesty and Research Integrity

Santosh Kumar Yadav

Chapter in Research and Publication Ethics, pp. 59–79 (Springer, Cham; first online 30 August 2023)

According to Wikiversity, intellectual honesty is an applied method of problem solving, characterized by an unbiased, honest attitude that can be demonstrated in a number of different ways.




Review Questions

What is the basic difference between intellectual honesty and research integrity?

What are the parameters of good practices of research integrity?

Explain the main elements of professionalism in intellectual honesty.

Discuss the practical elements which are responsible for research conduct.

Explain briefly the environment and bases of research integrity.

The responsible conduct of research is not distinct from research itself. Explain this statement.

What is necessary for the performance of a researcher?

What should institutions evaluate in order to enhance the integrity of their research environments?

What are the key practices that pertain to the responsible conduct of research by individuals?

Why is government oversight of scientific research important for research integrity?

Write short notes on the following:

  • Professional Quality in Research
  • Bases of Research Integrity
  • Promoting Integrity in Research
  • Fostering Integrity in Research
  • Integrity of Individual Research
  • Fairness in Peer Review
  • Open-Systems Model
  • Benchmarking in Research


Yadav, S.K. (2023). Intellectual Honesty and Research Integrity. In: Research and Publication Ethics. Springer, Cham. https://doi.org/10.1007/978-3-031-26971-4_4


Perspective, published 28 October 2021

Aspiring to greater intellectual humility in science

Rink Hoekstra & Simine Vazire

Nature Human Behaviour 5, 1602–1607 (2021)


The replication crisis in the social, behavioural and life sciences has spurred a reform movement aimed at increasing the credibility of scientific studies. Many of these credibility-enhancing reforms focus, appropriately, on specific research and publication practices. A less often mentioned aspect of credibility is the need for intellectual humility, that is, being transparent about and owning the limitations of our work. Although intellectual humility is presented as a widely accepted scientific norm, we argue that current research practice does not incentivize intellectual humility. We provide a set of recommendations on how to increase intellectual humility in research articles and highlight the central role peer reviewers can play in incentivizing authors to foreground the flaws and uncertainty in their work, thus enabling full and transparent evaluation of the validity of research.




Acknowledgements

We thank A. Allard, A. Holcombe, H. Kiers, L. King, S. Lindsay and D. Trafimow for valuable input for this paper.

Author information

These authors contributed equally: Rink Hoekstra (University of Groningen, Groningen, The Netherlands) and Simine Vazire (University of California, Davis, USA, and University of Melbourne, Australia).

Competing interests: The authors declare no competing interests.


Hoekstra, R., Vazire, S. Aspiring to greater intellectual humility in science. Nat. Hum. Behav. 5, 1602–1607 (2021). https://doi.org/10.1038/s41562-021-01203-8



Scientific Integrity and the Ethics of 'Utter Honesty'


In his famous 1974 commencement address at the California Institute of Technology, physicist Richard Feynman gave an engaging talk that tried to express his understanding of the concept of integrity for science. He began by recounting several amusing stories of pseudoscientific beliefs, including the curious Cargo Cult religions of Melanesia that arose following World War II, wherein tribespeople imitated the American soldiers they had encountered, whose planes had brought valuable supplies, in the hope of once again receiving their bounty. For instance, they made headphones out of wood and bamboo to wear while sitting in imitation control towers by fire-lit runways they built, all with the goal of causing the return of the planes.


Feynman used this and a couple of similar cases of what he called “cargo cult science,” such as reflexology and telekinetic spoon-bending (goofy, unscientific ideas that he encountered among Californians), to illustrate how easy it is for human beings to fool themselves. He noted that the difficulty extends to pedagogy, where he suggested that teachers are often no better than “witch doctors” when it comes to validating their techniques. We may talk about educational science and the like, but we need to admit that these fields are not (or at least not yet) scientific, because we have not yet tested our pedagogical practices.

With this kind of difficulty in mind, he offered would-be science students some advice that gets to the essence of what it means to be a scientist. He admonished students to cultivate:

a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty — a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid — not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked — to make sure the other fellow can tell they have been eliminated … In summary, the idea is to try to give all the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.

Feynman first cited a core value — honesty — which is a central scientific character virtue, and then went on to show an example of what this means for behavior. In saying that this requires a kind of “leaning over backwards,” Feynman clearly recognized that this prescription goes well beyond what is normally done or expected. It is an ideal. It may not be impossible to achieve, but certainly it will be very difficult.

One might reasonably object that utter honesty and full reporting cannot always coexist in science, at least not in all the detailed ways that Feynman recommends. For instance, while it might indeed be a helpful contribution to the progress of science to be able to access all the data from unsuccessful studies, this may not be feasible or desirable in practice. Who would publish those data or make them available, and how, if not through publication? Every working scientist knows that for any successful bit of research that leads to discovery, and a published paper, there were many failed experiments and rejected hypotheses.


It would seem far too easy to publish scientific papers detailing such negative results. It is sometimes said that the secret to science is to make mistakes quickly, but is a scientist who has more failures more productive? Would it be economically viable for journals to publish what would be a massive literature of false starts and blind alleys? Moreover, would anyone really care to subscribe to, let alone publish in, the American Journal of Discarded Hypotheses , the Annals of Failed Experiments , or PLOS Dumpster ? This seems absurd on its face. Indeed, the idea is sufficiently amusing to scientists that there is a real Journal of Irreproducible Results that is devoted to scientific satire and humor.

That said, Feynman is surely right that in many cases such information would be useful for other scientists and important to include for the reasons he listed, namely, that it helps them judge the value of whatever positive result you are presenting. Indeed, most of the specific examples Feynman gave involve tests of alternative hypotheses or possible confounding factors that a good experimenter should check as part of the process of hypothesis testing. It would indeed be dishonest to neglect to report the results of such tests, but notice that Feynman was not just saying that it would be wrong to be dishonest. Rather, he was arguing for a positive standard of honesty that sets a higher bar and requires scientists to actively commit to that more demanding standard as a matter of scientific integrity.

Integrity is the right word here, for the kind of utter honesty that Feynman is talking about involves the integration of values and methods in just the way that is required for the exemplary practice of science. Given that the goal of science is to answer empirical questions to satisfy our curiosity about the world, it is only by striving for the highest level of rigor and care in our methods and practices that a scientist, and the scientific community, can be confident that a question has been satisfactorily answered, and we have indeed made a real discovery.

However, this is not equivalent to publishing every failed experiment. There are many hypotheses that seem reasonable, or even likely, given the current state of understanding, that scientists would be very excited to know did not stand up to empirical test — some failures are interesting failures and worth publishing. If someone were to found the International Journal of Interesting Negative Results , it would surely find a receptive scientific audience. But other failures are uninteresting and of no particular value for the scientific community. Not publishing such failed studies is no more dishonest than not publishing uninteresting successful studies.

Of course, there will be many borderline cases where the degree of interest is a judgment call. We might even agree that publishers should take a broader perspective and publish more than they have done in the past, especially now that electronic publishing has greatly mitigated the previous economic constraints of journal publication. There are some thought-provoking questions to consider and trade-offs to weigh regarding social and professional policy that arise here, but in general this sort of issue is not really a moral counterexample to the idea of utter honesty. However, other issues are more challenging.

Harder cases

Should utter honesty, which some think implies completely open communication of scientific results, be upheld in cases, for example, where there is a reasonable fear that significant social harm would result from dangerous scientific knowledge becoming known?

Many of the Manhattan Project physicists who solved a sweet theoretical and technical problem, including Feynman, regretted their roles in releasing the nuclear genie. Leaving aside the question of whether they should have pursued this research in the first place, surely everyone would agree that honesty does not compel full publication of the details that would allow others to replicate that work.

In 2001, two British journalists reported that among materials found in the rubble of a Taliban military building after the fall of Kabul were not only weapons but also instructions for how to build an atom bomb. The fact that the document was later shown to be a version of a satirical piece from none other than the aforementioned Journal of Irreproducible Results is amusing, but does not negate what is a legitimate and serious concern.


Biological research may also carry a high risk of potential significant threat, and the U.S. government issued policy guidelines requiring oversight and limits on what is termed Dual Use Research of Concern (DURC). It has at times even restricted funding of certain types of scientific investigations, such as so-called gain-of-function research in virology that investigated ways to increase the pathogenicity and/or transmissibility of viruses like influenza and SARS. A critic might point to such examples where science must be kept secret as a way of questioning whether honesty is always the best policy for scientists.

There are several things we should say in response to this criticism of the virtue of honesty in science. The first thing is to distinguish the question of whether dangerous research should be pursued at all from the question of whether, if the research is done, it should be published openly and honestly. The first question is important, but it is not a question about honesty. Rather, it is about the ethical limits of curiosity. Perhaps Feynman and the other Manhattan Project scientists were right that they should not have released the nuclear genie. For the current case, the presumption is that, for whatever reason, the research has been done, and the issue is whether it violates scientific honesty to not publish the findings or perhaps even to publish misleading results to throw others off the track.

A virtue-theoretic approach helps one begin to think through such cases. Honesty is a virtue in science because it is essential for the satisfaction of curiosity; it is a practiced disposition that is important for discovering truths about the natural world. In this sense, one must assume honesty as a core virtue even if we conclude that there are instances where the possibility of severe public harm requires secrecy; honesty, after all, was involved in discovering the danger in the first place. What is really going on in such cases is that scientific honesty is taken for granted but must be weighed against other, more general social interests that come into play and ought also to be taken into account.

The discovery of empirical truths is one important end for human beings, but it is not the only one. To say that veracity is a core value in science is not to say that scientific findings should never be kept hidden. Sometimes doing the right thing may mean not being completely honest in a larger social setting in order to prevent a great harm. The scientist who recognizes the possibility of such cases is not denigrating or undermining this core value but rather is properly accepting that professional scientific values may sometimes need to be overridden if they come into conflict with broader human values. While all these cases show the importance of developing scientific and ethical judgment, they also affirm the importance of veracity as a core scientific value. A more difficult immediate question is how to understand cases where veracity breaks down within science itself. It is to that issue that we must now turn.

When honesty breaks down

In the early 2000s, Jan Hendrik Schön was a rising star in physics. His research was at the intersection of condensed matter physics, nanotechnology, and materials science. While working at Bell Labs he published a series of papers — an incredible 16 articles as first author in Science and Nature , the top science journals in the world, over a two-year period — giving results of work using organic molecules as transistors. One of his papers was recognized by Science in 2001 as a “breakthrough of the year.” He received several prestigious prizes, including the Outstanding Young Investigator Award by the Materials Research Society. His research looked like it would revolutionize the semiconductor industry, allowing computer chips to continue to shrink in size beyond the limits of traditional materials. The only problem was that none of it was true .

Other researchers tried to build on Schön’s findings but were unable to replicate his results. One scientist then noticed that two of Schön’s papers included graphs with the same noise in the reported data, which was extraordinarily unlikely to happen by chance. This was suspicious, but Schön explained it away as an accidental inclusion of the same graph. But then other scientists noticed similar duplications in other papers. It began to appear that at least some of the data were fraudulent. Bell Labs launched a formal investigation and discovered multiple cases of misconduct, including outright fabrication of data. All of Schön’s Nature and Science publications had to be withdrawn, plus a dozen or so more in other journals. Schön was fired from his job, his prizes were rescinded, he was banned from receiving research grants, and in 2004 the University of Konstanz stripped him of his PhD because of his dishonorable conduct.


Schön’s fraud was an especially egregious example of scientific misconduct, but unfortunately it is not unique. Every few years there is some case of scientific misconduct that makes headline news. Are these exceptions? It is hard to get good data, but a 2009 analysis of published studies on the issue found that an average of 1 percent of scientists admitted to having fabricated or falsified research data at least once in their career, and about 12 percent reported knowing of a colleague who had. What is one to make of such cases where honesty breaks down in science?

Self-correction and trust

The Schön case is demoralizing as a breach of scientific ethics, but it also leads one to question how 16 fraudulent papers could have made it through the peer review process of the two premier science journals in the world, and 12 more in other top journals. Peer review is an imperfect process. It generally weeds out papers where the presented evidence is inadequate to support the offered conclusion, though, shockingly, there have even been a few papers with creationist elements that somehow made it through the peer-review filter when reviewers were not paying attention. However, even a careful reviewer is likely to miss individual cases where someone fabricated the presented data.

The naturalist methodology of science is predicated on the idea that findings are replicable — that is implied by the idea of a world governed by causal laws — but for obvious practical reasons it is impossible for a journal reviewer to actually repeat an experiment to check the accuracy of the data. Journals very rarely require that a lab demonstrate its data-production process; mostly they trust researchers to have collected and reported their data accurately. Trust is the operative term here.

There are various circumstances that can justify trust. One is history — trust can be earned through experience. Another is trust based on interests — knowing that someone shares your interests means that you can count on their actions. A third is character — knowing that someone has certain character traits means you can count on their intentions. In general, scientists assume these common values with other researchers. In the vast majority of cases, it is completely reasonable to do so. Of course, this kind of prima facie trust provides an opening for the unscrupulous, which is why someone like Schön could infiltrate science the way he did. In the short term, it is hard to prevent such intentional deceptions, but science has a built-in safeguard. Reality is the secret sauce.

Adhering to scientific methods can help one avoid or correct one’s own errors. But even if one fails to self-correct in an individual case, science has a way of self-correcting such deceptions, intentional or unintentional, in the aggregate. This is because true scientific discoveries make a difference. The first difference is the difference one sees in a controlled experiment, which is a basic test for the truth of some individual causal hypothesis. They also make a difference to the fabric of science as a whole, adding an interlinking strand to the network of confirmed hypotheses. Because of the interdependencies within science, any discovery will be connected to many others that came before and others yet to come.

That means that fraudulent claims will not remain unnoticed. Other scientists pursuing investigations in the same field will at some point try to use that finding as part of some other study. At first, they may be puzzled by unexpected anomalies or inconsistent results in their experiment and recheck their measurements, assuming that their own work was wrong. However, if the problem persists, they will start checking factors that they had initially taken for granted. The test of reality will reveal that the original “finding” cannot be trusted. Even if investigators do not go on to uncover the original fraud, they will note the discrepancy, and future researchers will be able to avoid the error. Indeed, the more significant the purported finding, the more likely it is that an error, fraudulent or accidental, will be discovered, for it affects the work and stimulates the interest of all the more researchers.


From this point of view, Schön’s case is heartening in that it shows how the scientific process does work to self-correct. His meteoric success was short-lived; within two years, scientists discovered the deception as his fraudulent findings failed the reality test. A second heartening point is that in the aftermath of such cases, the scientific community typically reassesses its practices and attempts to improve its methods. The Schön case led the scientific community to discuss ways that peer review could be improved to better detect fraud prior to publication. Although we typically think of scientific methodology in terms of experimental protocols, statistical methods, and so on, procedures of community assessment such as peer review are also an important aspect of its methodological practice. Science never guarantees absolute truth, but it aims to seek better ways to assess empirical claims and to attain higher degrees of certainty and trust in scientific conclusions.

The most important thing

The kind of scientific integrity that is fundamental in science involves more than behaviors that investigators should not do. It is a mistake to think of ethics as a list of thou-shalt-nots. Feynman’s description of what he meant by “utter honesty” in science, remember, was couched entirely in positive terms. Scientific integrity involves “leaning over backward” to provide a full and honest picture of the evidence that will allow others to judge the value of one’s scientific contribution. For the most part, this sort of behavior is part of the ordinary practice of science and occurs without special notice.

In my book , I explore the scientific mindset — the cultural ideals that define the practice. As a practice that aims to satisfy our curiosity about the natural world, veracity is a core value. Scientific integrity is the integration of all the values involved in truth seeking, with intellectual honesty standing at the center. Feynman’s notion of “utter honesty” is but one expression of this. To reconstruct science’s moral structure, one must identify and integrate other virtues that orbit the bright lodestar that guides scientific practice.

Scientists do not always live up to these ideals, but the scientific community recognizes them as aspirational values that define what it means to be a member of the practice. Those who flout them do not deserve to be called scientists. Those who exemplify them with excellence are properly honored as exemplars.

Feynman closed his commencement address with a wish for his listeners, the graduating science students: “So I have just one wish for you — the good luck to be somewhere where you are free to maintain the kind of integrity I have described, and where you do not feel forced by a need to maintain your position in the organization, or financial support, or so on, to lose your integrity. May you have that freedom.” Fulfilling this wish requires more than individual virtue; it requires unified, vigilant support from the scientific community. Integrity in science involves a community of practice, unified by its shared values.

Robert T. Pennock is University Distinguished Professor of History, Philosophy, and Sociology of Science at Michigan State University in the Lyman Briggs College and the Departments of Philosophy and Computer Science and Engineering. He is the author of “ Tower of Babel: The Evidence against the New Creationism ” and “ An Instinct for Truth: Curiosity and the Moral Character of Science ,” from which this article is adapted.

ORIGINAL RESEARCH article

Collective Honesty: Experimental Evidence on the Effectiveness of Honesty Nudging for Teams

Yuri Dunaiev

  • 1 Independent Researcher, Frankfurt, Germany
  • 2 Department of Spatial Economics, School of Business and Economics, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
  • 3 Tinbergen Institute, Amsterdam, Netherlands
  • 4 Kiel Institute for the World Economy, Kiel, Germany

A growing literature in economics studies ethical behavior and honesty, which are imperative for functioning societies in a world of incomplete information and contracts. A majority of studies have found more pronounced dishonesty among teams than among individuals. Scholars have identified certain nudges as effective and cost-neutral measures to curb individuals' dishonesty, yet little is known about the effectiveness of such nudges for teams. We replicate a seminal nudge treatment (signing at the top of a reporting form vs. no signature) with individuals and confirm the original treatment effect. We then ran the same experiment with teams of two that had to make a joint reporting decision. Our results show that the nudge is also effective for teams, which provides further confidence in its applicability.

Introduction

The subject of dishonesty and deception is undergoing intense study and arouses high concern in society, attracting much attention from policymakers and researchers in the fields of behavioral economics and psychology (e.g., Rosenbaum et al., 2014; Abeler et al., 2019; Gerlach et al., 2019; Köbis et al., 2019). Beyond ethical considerations, the economic harm caused by dishonesty is tremendous. The Association of Certified Fraud Examiners estimates that the typical firm loses about 5% of its revenues to occupational fraud each year, which translates into a loss of $3.6 billion at the global level (ACFE, 2020). Recent examples show that practices such as manipulation of financial and audit reports and fraudulent accounting methods are a major problem. Among convicted companies are big names such as Enron, Lehman Brothers, Madoff Investment Securities, and Parmalat. Other famous fraudulent practices are spying (Hewlett-Packard), violations of safety regulations (Southwest Airlines), and concealing emission levels (Volkswagen). In all of these fraud cases it was not a single individual who made the decision and kept the misconduct from coming to light, but teams of individuals who deceived in a conspiratorial manner.

Since Thaler and Sunstein (2009) introduced the concept of nudging to a larger audience, a number of experiments from psychology and economics have shown that certain nudges can work to reduce individual dishonesty (e.g., Mazar et al., 2008; Shu et al., 2012; Fellner et al., 2013)[1]. A related literature on individual vs. team (dis)honesty developed contemporaneously and suggests that teams are often more dishonest than individuals (e.g., Cohen et al., 2009; Sutter, 2009; Danilov et al., 2013; Mühlheußer et al., 2015; Weisel and Shalvi, 2015; Korbel, 2017; Wouda et al., 2017; Kocher et al., 2018; Dannenberg and Khachatryan, 2020)[2]. The mechanisms that cause teams to be more dishonest include greater sophistication regarding the consequences of lying (Cohen et al., 2009; Sutter, 2009) and diffusion of responsibility regarding the moral misconduct of lying (Kocher et al., 2018)[3].

As dishonesty levels and mechanisms differ between individuals and teams, we regard it as a natural question whether nudges that are able to curb individual dishonesty remain effective for teams. In this paper we answer this question by employing the well-established math puzzle task paradigm and honesty nudge of Shu et al. (2012)[4]. To this end, we test whether we are able to replicate one of the treatment effects of Shu et al. (2012): asking decision makers to sign that they will report honestly at the top of a reporting form, compared to a no-signature control treatment. We ran the experiment for individuals and for teams to test for the robustness of this nudge.

Our experiment indeed successfully replicates the treatment effect of Shu et al. (2012) for individuals, adding further evidence that signing at the top of the form can decrease dishonesty (compared to the no-signature condition). For teams we find the same treatment effect, which shows the further robustness of this nudge. The nudge appears able to work against team dishonesty drivers such as the diffusion of responsibility. We regard our finding as good news for policymakers who seek to employ such nudges as low-cost and effective anti-fraud and anti-corruption measures.

This paper proceeds as follows. The second section provides the details of the experimental design, hypotheses, and procedures. The third section presents the results, and the fourth section concludes.

Experimental Design

In this section we explain the details of the math puzzle (or matrix) task and the treatments we employed. We subsequently relate our treatments to hypotheses that originate from the current literature on lying of individuals and teams and finally provide information about the procedures of the experiment.

The math puzzle (matrix) task comprised sheets of paper with math puzzles (matrices) in which two numbers sum exactly to a target number defined beforehand. As in Shu et al. (2012), each puzzle in our experiment consisted of 12 three-digit numbers (with two decimal digits), of which exactly two sum to 10. The task was to identify these two numbers and circle them in order to “solve” the respective puzzle. Each correctly solved puzzle yielded a piece-rate income, in our experiment 0.50 EUR. In the treatments with individuals (teams) we provided one (two) sheets of paper, with 20 puzzles per sheet. Hence, a maximum of 10 EUR could be earned per participant in this task. Teammates could choose to work on each sheet separately or together. The time limit was strictly set to 5 min and enforced with a stopwatch; we calibrated it to ensure that the numbers of solved puzzles would be well distributed between 0 and 20. Participants were asked to sum their score at the bottom of the puzzle sheet. Figure 1 shows a complete sheet as used in our experiment.


Figure 1. A complete math puzzle sheet (original is in A4 format).
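To make the task format concrete, here is a minimal Python sketch that generates puzzles of this shape: 12 two-decimal numbers of which exactly one pair sums to 10.00. The number ranges and helper names are illustrative assumptions, not the authors' materials; the actual sheets are those shown in Figure 1.

```python
import itertools
import random

TARGET = 1000  # the 10.00 target, kept in integer cents to avoid float issues

def make_puzzle(n=12):
    """Generate one puzzle: n two-decimal numbers, exactly one pair of
    which sums to 10.00 (ranges are illustrative assumptions)."""
    while True:
        a = random.randint(101, 899)                 # one half of the target pair
        cells = [a, TARGET - a] + random.sample(range(100, 1000), n - 2)
        hits = [p for p in itertools.combinations(cells, 2)
                if sum(p) == TARGET]
        if len(hits) == 1:                           # reject accidental second pairs
            random.shuffle(cells)
            return [c / 100 for c in cells]

def solve(puzzle):
    """Find the unique pair summing to 10.00, as a participant would."""
    cents = [round(x * 100) for x in puzzle]
    for i, j in itertools.combinations(range(len(cents)), 2):
        if cents[i] + cents[j] == TARGET:
            return puzzle[i], puzzle[j]

sheet = [make_puzzle() for _ in range(20)]           # one sheet = 20 puzzles
print(solve(sheet[0]))
```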

If the number of correctly solved puzzles (or matrix exercises), i.e., the true score, is common knowledge, then it is straightforward for the researcher who conducts the experiment to multiply this score by 0.50 EUR and pay out the individual or team accordingly. If the true score is private knowledge of the individual or team, then it becomes interesting to investigate under which circumstances there is correct or elevated reporting of the true score.

In order to create a scenario in which participants would feel comfortable over-reporting their score, we closely followed the procedure of Shu et al. (2011), a study by three of the five authors of Shu et al. (2012), whose treatment effect we aim to replicate. We asked participants to dispose of the matrix sheet by inserting it into a paper shredder. The shredder was prepared in such a way that the sheet would be partly shredded at the sides but remain intact enough to retrace the scores. This incomplete shredding was not visible to participants, as the sheets moved through the shredder into a non-transparent bin. Note that for this replication approach we followed the procedures of Shu et al. (2011) closely, which falls into a gray area of omitted information as categorized by Charness et al. (2021). While the scenario is suggestive of sheets being destroyed, we neither commented on sheets being destroyed nor indicated that we would not look at the sheets after the sessions. This gave us the chance to learn the true score of all individuals and teams after the sessions and link them to the reported scores.

For score reporting we used the participation receipt (see Figure 2). The receipt included reporting the score, guessing the average score of others in the session (not incentivized), multiplying the score by 0.50 EUR, and adding a 5-EUR show-up fee per person. It is on this receipt that individuals or teams could misreport their scores. Receipt forms for the respective treatments were handed to the participants after they had completed the matrix task. All individuals and teams had envelopes at their desk with 15 EUR (individuals) or 30 EUR (teams) in cash, so that any payment divisible by 0.50 EUR was possible. Subsequently, they took their payments out of the envelopes, folded and inserted the receipts into the envelopes, kept their cash payment, and dropped the envelopes, with the receipts and unclaimed cash, into a return box.
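As a quick check of the amounts just described, a hypothetical helper (not the authors' code) that computes a claim from a reported score shows that the envelope amounts exactly cover the maximum possible claim:

```python
def payout_eur(reported_score, team_size=1, rate=0.50, show_up=5.00):
    """Claimed payment: 0.50 EUR per reported puzzle plus a 5-EUR
    show-up fee per person (illustrative helper, not the authors' code)."""
    return reported_score * rate + team_size * show_up

assert payout_eur(20, team_size=1) == 15.00  # individual: 20 puzzles max, 15-EUR envelope
assert payout_eur(40, team_size=2) == 30.00  # team: two 20-puzzle sheets, 30-EUR envelope
```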


Figure 2. The receipt forms in the team treatments. Appendixes 2 and 3 provide the receipt forms in larger resolution.

The receipt forms in all treatments included a line (lines) for the name of the individual (names of the teammates). The difference between the no-signature and signature treatments was the following additional statement at the top of the receipt form in the signature treatments: “We, [line(s) for name(s)], hereby declare that I (we) have completed this receipt to the best of my (our) knowledge and belief, completely and truthfully.” Participants in the signature treatments had to sign underneath the statement. Note that there were no instructions or information that suggested any form of detection or punishment related to the statement.

Shu et al. (2012) introduced an honesty nudge that is able to decrease dishonesty and fraud by individuals: signing at the top of a form, compared to no signature. They suggested that this nudge helps invoke an individual's morality and promote honesty right before the deception may take place, in our experiment right before potentially over-reporting the score.

Literature on the dishonesty of teams often points in the direction that teams are more prone to lying than individuals (Danilov et al., 2013; Mühlheußer et al., 2015; Weisel and Shalvi, 2015; Korbel, 2017; Wouda et al., 2017; Kocher et al., 2018; Dannenberg and Khachatryan, 2020). Teams tend to be more strategic about lying and deception (Cohen et al., 2009; Sutter, 2009), and diffusion of responsibility and moral disutility appear to be key drivers (Kocher et al., 2018). Given that these mechanisms appear to promote the dishonesty of teams, it is questionable whether the signature honesty nudge remains effective for teams. If it does, that would be good news for practitioners who employ pledges with signatures to curb dishonesty; if the nudge treatment effect is limited to individuals, that would greatly reduce the usefulness of this and potentially other similar nudges, as many fraudulent situations actually involve teams of decision makers. Table 1 provides an overview of our treatments.


Table 1. Treatment cells.

Based on the literature described above, we formulate our key hypothesis that over-reporting of scores is lower in the _sig treatments than in the _NOsig treatments, both when comparing individuals' reporting decisions and when comparing teams' reporting decisions. Hence, we hypothesize that the nudge is effective for teams despite possible counteracting effects from the diffusion of responsibility. In order to test our hypothesis, it was essential to replicate the finding of Shu et al. (2012) for individual decision makers in our environment and conditions. A total of 127 students of the University of Kiel were recruited through the hroot platform (Bock et al., 2014) and participated in the experiment between February and April 2018. There were 23 and 20 participants in the Ind_NOsig and Ind_sig treatments, respectively. In the Team_NOsig and Team_sig treatments there were 42 participants per treatment, yielding 21 independent team observations per treatment[5]. The teams were formed randomly by having the participants of a session draw numbered balls from a non-transparent bag.

Following the literature on team dishonesty (e.g., Sutter, 2009), communication between team members may be important to let them get to know each other, develop intra-team trust, and exchange thoughts on the task and on the motivation to (mis)report the effort. For this reason, we implemented our experiment in such a way that team members sat together in a large cubicle. Hence, face-to-face communication between team members was possible throughout the session.

To facilitate the team feeling even more, we implemented an additional stage using a creativity task before the actual matrix task and reporting[6]. This task was included to help teammates get to know each other a bit better and “break the ice.” Allowing communication when completing tasks together was meant to mimic situations in which teams work and make decisions together in real environments. In the creativity task, individuals (in the Ind_ treatments) and teams (in the Team_ treatments) were given 10 min to create a picture of their choice using a whiteboard and pins of different colors (see Appendix 4 for an example). The instructions explicitly informed participants that there were no incentives related to their creativity or performance and that they were free to do whatever they liked. Note that all individuals and teams created a picture, even though an empty whiteboard would have been just as acceptable. For consistency, participants in the Ind_ treatments also performed this task, but alone. After this creativity task, we ran the matrix task described above.

Results

Table 2 provides summary statistics of our treatments, and Figure 3 provides an overview of mean reported as well as actually solved matrices. For the following analysis we compare the reported number of solved matrices with the number of solved matrices as noted down on the matrix sheet (see the bottom of Figure 1) to detect willful dishonesty. We begin with an examination of the Ind_ treatments in order to see whether our results confirm the treatment effect of Shu et al. (2012). In Ind_sig fewer individuals over-reported (10%, 2 out of 20) than in Ind_NOsig (39%, 9 out of 23), a difference that is significant based on a two-sided Fisher's exact test (p = 0.039). Employing Wilcoxon signed-rank tests for differences between the score summaries and the scores claimed on the receipt for each individual, we find significant over-reporting in Ind_NOsig (8.74 reported matrices vs. 6.91 summarized matrices, p = 0.0039) and no detectable over-reporting in Ind_sig (7.05 vs. 6.80, p = 0.500). We therefore find strong support that including the signature nudge at the top of the receipt form reduces dishonesty significantly. Hence, we replicate Shu et al. (2012)'s result (signature on top vs. no signature) for individual decision makers.


Table 2. Descriptive statistics.


Figure 3. Mean true and reported scores in the four treatments. The bars depict ±1 standard error.

We proceed with a similar analysis for the Team_ treatments to detect whether the signature nudge remains effective in this scenario. Indeed, we find that 7 out of 21 teams (33.3%) over-report their scores on the receipts in Team_NOsig compared to only 1 out of 21 (4.8%) in Team_sig. These propensities are, again, significantly different from each other (two-sided Fisher's exact test, p = 0.045). Wilcoxon signed-rank tests confirm that there is detectable over-reporting in Team_NOsig (17.38 matrices claimed vs. 14.19 matrices summarized as solved, p = 0.0156), while there is no detectable difference in Team_sig (17.09 vs. 17.24, p = 0.9725)[7]. We therefore find clear evidence that the signature nudge curbs the dishonesty of teams effectively, just as in the scenario for individuals. The result does not support a claim that teams' dishonesty is qualitatively different in a way that makes teams immune to this nudge.
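The two Fisher's exact tests can be reproduced from the over-reporting counts given above; a minimal scipy sketch (the authors' own analysis code is not shown here):

```python
from scipy import stats

# Contingency tables: [over-reporters, honest reporters] per treatment,
# taken from the counts reported in the text above.
individuals = [[2, 18],   # Ind_sig:   2 of 20 over-reported
               [9, 14]]   # Ind_NOsig: 9 of 23 over-reported
teams = [[1, 20],         # Team_sig:   1 of 21 over-reported
         [7, 14]]         # Team_NOsig: 7 of 21 over-reported

for label, table in [("individuals", individuals), ("teams", teams)]:
    _, p = stats.fisher_exact(table, alternative="two-sided")
    print(f"{label}: p = {p:.3f}")  # the paper reports p = 0.039 and p = 0.045
```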

Conclusion

This paper asked whether moral nudges that work to curb the dishonesty of individuals also remain effective for teams: units that are ubiquitous in companies and have been shown to act with greater sophistication and to feel less responsible for their actions, as the outcome of a team's decision rests on the shoulders of several team members (Falk and Szech, 2013; Kocher et al., 2018; Falk et al., 2020). We employ the seminal finding of Shu et al. (2012), who showed that asking for a signature to confirm honesty at the top of a form fosters honesty compared to no signature. The main argument is that this can help invoke an individual's morality and promote honesty exactly before misreporting may take place.

After successfully replicating Shu et al. (2012)'s effect for individuals, we extended the finding by confirming that this nudge is equally effective in a team setting, resulting in an 86% decrease in the share of cheating teams. In our eyes, the presented research makes an important contribution to a better understanding of team behavior and to the development of instruments for preventing teams and individuals from deception and cheating.
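As a check, the 86% figure follows directly from the team over-reporting rates reported above: (7/21 - 1/21) / (7/21) = 6/7 ≈ 0.857, i.e., roughly an 86% reduction in the share of over-reporting teams.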

To the best of our knowledge, this is the first study to investigate the effectiveness of moral nudges for teams, and it should be considered a starting point for several avenues of future research. Future research may further investigate the familiarity of team members, which our creativity task aimed to foster. Likewise, our teams consisted of two members, and future research could vary this dimension by examining the behavior of larger teams. Field experimental methods could be used to reduce the scrutiny inherent in laboratory experiments, and similar studies with higher stakes could check the robustness of our findings and those of Shu et al. Such investigations seem promising for testing the ecological validity of our results. We regard it as highly policy-relevant to investigate team decision-making and to develop cost-effective instruments like nudges that organizations and policymakers can implement in practice to curb fraud and dishonesty in teams.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics Statement

Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.

Author Contributions

YD wrote his master's thesis under the supervision of MK, and this work is a concise product of that collaboration. YD and MK developed the research question and the experiment material together. YD and MK ran the experiment together, and YD analyzed the data. MK contributed the creativity task as a team-building exercise, wrote the paper, and financed the experiment. All authors contributed to the article and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2021.684755/full#supplementary-material

1. ^ Note that there is a replication discussion around Mazar et al. (2008) : see also Amir et al. (2018) and Verschuere et al. (2018) . Verschuere et al. (2018) report that one of the results of Mazar et al. (2008) does not replicate, based on a meta-analysis with more than 5,000 participants. Amir et al. (2018) reply to Verschuere et al. (2018) and discuss conceptual challenges with direct replication studies.

2. ^ There is also a broader literature that compares economic decisions of individuals and teams, e.g., Bornstein et al. (2004), Charness and Sutter (2012), and Kugler et al. (2012).

3. ^ Regarding the diffusion of responsibility and ethical behavior, see also Falk and Szech (2013) and Falk et al. (2020).

4. ^ There are several treatments in Shu et al. (2012). Note that Kristal et al. (2020) report that the top-vs.-bottom-signature treatment effect of Shu et al. (2012) does not replicate for individuals; this is not the treatment effect we aim to replicate in this paper—we concentrate on the top-signature versus no-signature comparison. In the matrix task, participants need to find two numbers in a 4 × 3 table that sum to a specific number; in Shu et al. (2012) , Mazar et al. (2008) , and in our experiment this number is 10 (a toy generator is sketched below).
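For concreteness, a toy version of this search task can be generated as follows. Only the 4 × 3 shape and the target sum of 10 are taken from the footnote; the value range, the two-decimal format, and the uniqueness check are illustrative assumptions rather than the authors' actual procedure:

```python
# Toy generator for a matrix search task: a 4 x 3 grid of two-decimal numbers
# containing exactly one pair that sums to the target (here 10, as in the paper).
# All design details beyond shape and target are illustrative assumptions.
import random

def make_matrix(target=10.0, rows=4, cols=3):
    n = rows * cols
    while True:
        values = [round(random.uniform(0.01, 9.99), 2) for _ in range(n)]
        # Find all index pairs whose values sum to the target (up to float noise).
        pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
                 if abs(values[i] + values[j] - target) < 1e-9]
        if len(pairs) == 1:  # keep only grids with a unique solution
            grid = [values[r * cols:(r + 1) * cols] for r in range(rows)]
            return grid, pairs[0]

matrix, (i, j) = make_matrix()
for row in matrix:
    print(row)
flat = [v for row in matrix for v in row]
print(f"solution: {flat[i]} + {flat[j]} = 10")
```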

5. ^ See Appendix 1 for instructions.

6. ^ See Kachelmeier et al. (2008); Erat and Gneezy (2016, 2017); Charness and Grieco (2019); Grözinger et al. (2020); and Kachelmeier and Williamson (2010) for economic experiments on creativity.

7. ^ In Team_sig there was even one team that reported a lower number than was summarized on its matrix sheet; this report was in fact the correct number of solved matrices, as verified by the research team's check of the matrix sheet.

Abeler, J., Nosenzo, D., and Raymond, C. (2019). Preferences for truth-telling. Econometrica 87, 1115–1153. doi: 10.3982/ECTA14673


ACFE (2020). Report to the Nations – 2020 Global Study on Occupational Fraud and Abuse. Association of Certified Fraud Examiners . Available online at: https://acfepublic.s3-us-west-2.amazonaws.com/2020-Report-to-the-Nations.pdf (accessed March 20, 2021).

Amir, O., Mazar, N., and Ariely, D. (2018). Replicating the effect of the accessibility of moral standards on dishonesty: authors' response to the replication attempt. Adv. Methods Pract. Psychol. Sci. 1, 318–320. doi: 10.1177/2515245918769062

Bock, O., Baetge, I., and Nicklisch, A. (2014). hroot: Hamburg registration and organization online tool. Eur. Econ. Rev. 71, 117–120. doi: 10.1016/j.euroecorev.2014.07.003

Bornstein, G., Kugler, T., and Ziegelmeyer, A. (2004). Individual and team decisions in the centipede game: are teams more “rational” players?. J. Exp. Soc. Psychol. 40, 599–605. doi: 10.1016/j.jesp.2003.11.003

Charness, G., and Grieco, D. (2019). Creativity and incentives. J. Eur. Econ. Assoc. 17, 454–496. doi: 10.1093/jeea/jvx055

Charness, G., Samek, A., and van de Ven, J. (2021). What is Considered Deception in Experimental Economics? Working paper.

Charness, G., and Sutter, M. (2012). Teams make better self-interested decisions. J. Econ. Perspect. 26, 157–176. doi: 10.1257/jep.26.3.157

Cohen, T. R., Gunia, B. C., Kim-Jun, S. Y., and Murnighan, J. K. (2009). Do teams lie more than individuals? Honesty and deception as a function of strategic self-interest. J. Exp. Soc. Psychol. 45, 1321–1324. doi: 10.1016/j.jesp.2009.08.007

Danilov, A., Biemann, T., Kring, T., and Sliwka, D. (2013). The dark side of team incentives: experimental evidence on advice quality from financial service professionals. J. Econ. Behav. Organ. 93, 266–272. doi: 10.1016/j.jebo.2013.03.012

Dannenberg, A., and Khachatryan, E. (2020). A comparison of individual and team behavior in a competition with cheating opportunities. J. Econ. Behav. Organ. 177, 533–547. doi: 10.1016/j.jebo.2020.06.028

Erat, S., and Gneezy, U. (2016). Incentives for creativity. Exp. Econ. 19, 269–280. doi: 10.1007/s10683-015-9440-5

Erat, S., and Gneezy, U. (2017). Erratum to: Incentives for creativity. Exp. Econ. 20, 274–275. doi: 10.1007/s10683-016-9495-y

Falk, A., Neuber, T., and Szech, N. (2020). Diffusion of being pivotal and immoral outcomes. Rev. Econ. Stud. 87, 2205–2229. doi: 10.1093/restud/rdz064

Falk, A., and Szech, N. (2013). Morals and markets. Science 340, 707–711. doi: 10.1126/science.1231566

Fellner, G., Sausgruber, R., and Traxler, C. (2013). Testing enforcement strategies in the field: threat, moral appeal and social information. J. Eur. Econ. Assoc. 11, 634–660. doi: 10.1111/jeea.12013

Gerlach, P., Teodorescu, K., and Hertwig, R. (2019). The truth about lies: a meta-analysis on dishonest behavior. Psychol. Bull. 145, 1. doi: 10.1037/bul0000174


Grözinger, N., Irlenbusch, B., Laske, K., and Schröder, M. (2020). Innovation and communication media in virtual teams-an experimental study. J. Econ. Behav. Organ. 180, 201–218. doi: 10.1016/j.jebo.2020.09.009

Kachelmeier, S. J., Reichert, B. E., and Williamson, M. G. (2008). Measuring and motivating quantity, creativity, or both. J. Account. Res. 46, 341–373. doi: 10.1111/j.1475-679X.2008.00277.x

Kachelmeier, S. J., and Williamson, M. G. (2010). Attracting creativity: the initial and aggregate effects of contract section on creativity-weighted productivity. Account. Rev. 85, 1669–1691. doi: 10.2308/accr.2010.85.5.1669

Köbis, N. C., Verschuere, B., Bereby-Meyer, Y., Rand, D., and Shalvi, S. (2019). Intuitive honesty versus dishonesty: meta-analytic evidence. Perspect. Psychol. Sci. 14, 778–796. doi: 10.1177/1745691619851778

Kocher, M. G., Schudy, S., and Spantig, L. (2018). I lie? We lie! why? Experimental evidence on a dishonesty shift in teams. Manage. Sci. 64, 3971–4470. doi: 10.1287/mnsc.2017.2800

Korbel, V. (2017). Do we lie in teams? An experimental evidence. Appl. Econ. Lett. 24, 1107–1111. doi: 10.1080/13504851.2016.1259734

Kristal, A. S., Whillans, A. V., Bazerman, M. H., Gino, F., Shu, L. L., Mazar, N., et al. (2020). Signing at the beginning versus at the end does not decrease dishonesty. Proc. Natl. Acad. Sci. U.S.A. 117, 7103–7107. doi: 10.1073/pnas.1911695117

Kugler, T., Kausel, E. E., and Kocher, M. G. (2012). Are teams more rational than individuals? A review of interactive decision making in teams. Wiley Interdisciplinary Reviews: Cogn. Sci. 3, 471–482. doi: 10.1002/wcs.1184

Mazar, N., Amir, O., and Ariely, D. (2008). The dishonesty of honest people: a theory of self-concept maintenance. J. Market. Res. 45, 633–644. doi: 10.1509/jmkr.45.6.633

Mühlheußer, G., Roider, A., and Wallmeier, N. (2015). Gender differences in honesty: teams versus individuals. Econ. Lett. 128, 25–29. doi: 10.1016/j.econlet.2014.12.019

Rosenbaum, S. M., Billinger, S., and Stieglitz, N. (2014). Let's be honest: a review of experimental evidence of honesty and truth-telling. J. Econ. Psychol. 45, 181–196. doi: 10.1016/j.joep.2014.10.002

Shu, L. L., Gino, F., and Bazerman, M. H. (2011). Dishonest deed, clear conscience: when cheating leads to moral disengagement and motivated forgetting. Pers. Soc. Psychol. Bulletin 37, 330–349. doi: 10.1177/0146167211398138

Shu, L. L., Mazar, N., Gino, F., Ariely, D., and Bazerman, M. H. (2012). Signing at the beginning makes ethics salient and decreases dishonest self-reports in comparison to signing at the end. Proc. Natl. Acad. Sci. U.S.A. 109, 15197–15200. doi: 10.1073/pnas.1209746109

Sutter, M. (2009). Deception through telling the truth?! Experimental evidence from individuals and teams. Econ. J. 119, 47–60. doi: 10.1111/j.1468-0297.2008.02205.x

Thaler, R. H., and Sunstein, C. R. (2009). Nudge . London: Penguin Books.


Verschuere, B., Meijer, E. H., Jim, A., Hoogesteyn, K., Orthey, R., McCarthy, R. J., et al. (2018). Registered replication report on Mazar, Amir, and Ariely (2008). Adv. Methods Pract. Psychol. Sci. 1, 299–317. doi: 10.1177/2515245918781032

Weisel, O., and Shalvi, S. (2015). The collaborative roots of corruption. Proc. Natl. Acad. Sci. U.S.A. 112, 10651–10656. doi: 10.1073/pnas.1423035112

Wouda, J., Bijlstra, G., Frankenhuis, W. E., Wigboldus, D. H., and Moore, D. (2017). The collaborative roots of corruption? A replication of Weisel and Shalvi (2015). Collab. Psychol. 3, 1–3. doi: 10.1525/collabra.97

Keywords: honesty, lying, nudge, team, experiment

Citation: Dunaiev Y and Khadjavi M (2021) Collective Honesty? Experimental Evidence on the Effectiveness of Honesty Nudging for Teams. Front. Psychol. 12:684755. doi: 10.3389/fpsyg.2021.684755

Received: 23 March 2021; Accepted: 15 June 2021; Published: 08 July 2021.


Copyright © 2021 Dunaiev and Khadjavi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Menusch Khadjavi, m.khadjavipour@vu.nl

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.

Responsible Science: Ensuring the Integrity of the Research Process, Volume I (National Academies Press, 1992)

Chapter 2: Scientific Principles and Research Practices

Until the past decade, scientists, research institutions, and government agencies relied solely on a system of self-regulation based on shared ethical principles and generally accepted research practices to ensure integrity in the research process. Among the very basic principles that guide scientists, as well as many other scholars, are those expressed as respect for the integrity of knowledge, collegiality, honesty, objectivity, and openness. These principles are at work in the fundamental elements of the scientific method, such as formulating a hypothesis, designing an experiment to test the hypothesis, and collecting and interpreting data. In addition, more particular principles characteristic of specific scientific disciplines influence the methods of observation; the acquisition, storage, management, and sharing of data; the communication of scientific knowledge and information; and the training of younger scientists. 1 How these principles are applied varies considerably among the several scientific disciplines, different research organizations, and individual investigators.

The basic and particular principles that guide scientific research practices exist primarily in an unwritten code of ethics. Although some have proposed that these principles should be written down and formalized, 2 the principles and traditions of science are, for the most part, conveyed to successive generations of scientists through example, discussion, and informal education. As was pointed out in an early Academy report on responsible conduct of research in the health sciences, “a variety of informal and formal practices and procedures currently exist in the academic research environment to assure and maintain the high quality of research conduct” (IOM, 1989a, p. 18).

Physicist Richard Feynman invoked the informal approach to communicating the basic principles of science in his 1974 commencement address at the California Institute of Technology (Feynman, 1985):

[There is an] idea that we all hope you have learned in studying science in school—we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it; other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can—if you know anything at all wrong, or possibly wrong—to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. In summary, the idea is to try to give all the information to help others to judge the value of your contribution, not just the information that leads to judgment in one particular direction or another. (pp. 311-312)

Many scholars have noted the implicit nature and informal character of the processes that often guide scientific research practices and inference. 3 Research in well-established fields of scientific knowledge, guided by commonly accepted theoretical paradigms and experimental methods, involves few disagreements about what is recognized as sound scientific evidence. Even in a revolutionary scientific field like molecular biology, students and trainees have learned the basic principles governing judgments made in such standardized procedures as cloning a new gene and determining its sequence.

In evaluating practices that guide research endeavors, it is important to consider the individual character of scientific fields. Research fields that yield highly replicable results, such as ordinary organic chemical structures, are quite different from fields such as cellular immunology, which are in a much earlier stage of development and accumulate much erroneous or uninterpretable material before the pieces fit together coherently. When a research field is too new or too fragmented to support consensual paradigms or established methods, different scientific practices can emerge.

A well-established discipline can also experience profound changes during periods of new conceptual insights. In these moments, when scientists must cope with shifting concepts, the matter of what counts as scientific evidence can be subject to dispute. Historian Jan Sapp has described the complex interplay between theory and observation that characterizes the operation of scientific judgment in the selection of research data during revolutionary periods of paradigmatic shift (Sapp, 1990, p. 113):

What “liberties” scientists are allowed in selecting positive data and omitting conflicting or “messy” data from their reports is not defined by any timeless method. It is a matter of negotiation. It is learned, acquired socially; scientists make judgments about what fellow scientists might expect in order to be convincing. What counts as good evidence may be more or less well-defined after a new discipline or specialty is formed; however, at revolutionary stages in science, when new theories and techniques are being put forward, when standards have yet to be negotiated, scientists are less certain as to what others may require of them to be deemed competent and convincing.

Explicit statements of the values and traditions that guide research practice have evolved through the disciplines and have been given in textbooks on scientific methodologies. 4 In the past few decades, many scientific and engineering societies representing individual disciplines have also adopted codes of ethics (see Volume II of this report for examples), 5 and more recently, a few research institutions have developed guidelines for the conduct of research (see Chapter 6 ).

But the responsibilities of the research community and research institutions in assuring individual compliance with scientific principles, traditions, and codes of ethics are not well defined. In recent years, the absence of formal statements by research institutions of the principles that should guide research conducted by their members has prompted criticism that scientists and their institutions lack a clearly identifiable means to ensure the integrity of the research process.

FACTORS AFFECTING THE DEVELOPMENT OF RESEARCH PRACTICES

In all of science, but with unequal emphasis in the several disciplines, inquiry proceeds based on observation and experimentation, the exercising of informed judgment, and the development of theory. Research practices are influenced by a variety of factors, including:

The general norms of science;

The nature of particular scientific disciplines and the traditions of organizing a specific body of scientific knowledge;

The example of individual scientists, particularly those who hold positions of authority or respect based on scientific achievements;

The policies and procedures of research institutions and funding agencies; and

Socially determined expectations.

The first three factors have been important in the evolution of modern science. The latter two have acquired more importance in recent times.

Norms of Science

As members of a professional group, scientists share a set of common values, aspirations, training, and work experiences. 6 Scientists are distinguished from other groups by their beliefs about the kinds of relationships that should exist among them, about the obligations incurred by members of their profession, and about their role in society. A set of general norms is embedded in the methods and the disciplines of science; these norms guide individual scientists in the organization and performance of their research efforts, and they also provide a basis for nonscientists to understand and evaluate the performance of scientists.

But there is uncertainty about the extent to which individual scientists adhere to such norms. Most social scientists conclude that all behavior is influenced to some degree by norms that reflect socially or morally supported patterns of preference when alternative courses of action are possible. However, perfect conformity with any relevant set of norms is always lacking for a variety of reasons: the existence of competing norms, constraints, and obstacles in organizational or group settings, and personality factors. The strength of these influences, and the circumstances that may affect them, are not well understood.

In a classic statement of the importance of scientific norms, Robert Merton specified four norms as essential for the effective functioning of science: communism (by which Merton meant the communal sharing of ideas and findings), universalism, disinterestedness, and organized skepticism (Merton, 1973). Neither Merton nor other sociologists of science have provided solid empirical evidence for the degree of influence of these norms in a representative sample of scientists. In opposition to Merton, a British sociologist of science, Michael Mulkay, has argued that these norms are “ideological” covers for self-interested behavior that reflects status and politics (Mulkay, 1975). And the British physicist and sociologist of science John Ziman, in an article synthesizing critiques of Merton's formulation, has specified a set of structural factors in the bureaucratic and corporate research environment that impede the realization of that particular set of norms: the proprietary nature of research, the local importance and funding of research, the authoritarian role of the research manager, commissioned research, and the required expertise in understanding how to use modern instruments (Ziman, 1990).

It is clear that the specific influence of norms on the development of scientific research practices is simply not known and that further study of key determinants is required, both theoretically and empirically. Commonsense views, ideologies, and anecdotes will not support a conclusive appraisal.

Individual Scientific Disciplines

Science comprises individual disciplines that reflect historical developments and the organization of natural and social phenomena for study. Social scientists may have methods for recording research data that differ from the methods of biologists, and scientists who depend on complex instrumentation may have authorship practices different from those of scientists who work in small groups or carry out field studies. Even within a discipline, experimentalists engage in research practices that differ from the procedures followed by theorists.

Disciplines are the “building blocks of science,” and they “designate the theories, problems, procedures, and solutions that are prescribed, proscribed, permitted, and preferred” (Zuckerman, 1988a, p. 520). The disciplines have traditionally provided the vital connections between scientific knowledge and its social organization. Scientific societies and scientific journals, some of which have tens of thousands of members and readers, and the peer review processes used by journals and research sponsors are visible forms of the social organization of the disciplines.

The power of the disciplines to shape research practices and standards is derived from their ability to provide a common frame of reference in evaluating the significance of new discoveries and theories in science. It is the members of a discipline, for example, who determine what is “good biology” or “good physics” by examining the implications of new research results. The disciplines' abilities to influence research standards are affected by the subjective quality of peer review and the extent to which factors other than disciplinary quality may affect judgments about scientific achievements. Disciplinary departments rely primarily on informal social and professional controls to promote responsible behavior and to penalize deviant behavior. These controls, such as social ostracism, the denial of letters of support for future employment, and the withholding of research resources, can deter and penalize unprofessional behavior within research institutions. 7

Many scientific societies representing individual disciplines have adopted explicit standards in the form of codes of ethics or guidelines governing, for example, the editorial practices of their journals and other publications. 8 Many societies have also established procedures for enforcing their standards. In the past decade, the societies' codes of ethics—which historically have been exhortations to uphold high standards of professional behavior —have incorporated specific guidelines relevant to authorship practices, data management, training and mentoring, conflict of interest, reporting research findings, treatment of confidential or proprietary information, and addressing error or misconduct.

The Role of Individual Scientists and Research Teams

The methods by which individual scientists and students are socialized in the principles and traditions of science are poorly understood. The principles of science and the practices of the disciplines are transmitted by scientists in classroom settings and, perhaps more importantly, in research groups and teams. The social setting of the research group is a strong and valuable characteristic of American science and education. The dynamics of research groups can foster —or inhibit—innovation, creativity, education, and collaboration.

One author of a historical study of research groups in the chemical and biochemical sciences has observed that the laboratory director or group leader is the primary determinant of a group's practices (Fruton, 1990). Individuals in positions of authority are visible and are also influential in determining funding and other support for the career paths of their associates and students. Research directors and department chairs, by virtue of personal example, thus can reinforce, or weaken, the power of disciplinary standards and scientific norms to affect research practices.

To the extent that the behavior of senior scientists conforms with general expectations for appropriate scientific and disciplinary practice, the research system is coherent and mutually reinforcing. When the behavior of research directors or department chairs diverges from expectations for good practice, however, the expected norms of science become ambiguous, and their effects are thus weakened. Thus personal example and the perceived behavior of role models and leaders in the research community can be powerful stimuli in shaping the research practices of colleagues, associates, and students.

The role of individuals in influencing research practices can vary by research field, institution, or time. The standards and expectations for behavior exemplified by scientists who are highly regarded for their technical competence or creative insight may have greater influence than the standards of others. Individual and group behaviors may also be more influential in times of uncertainty and change in science, especially when new scientific theories, paradigms, or institutional relationships are being established.

Institutional Policies

Universities, independent institutes, and government and industrial research organizations create the environment in which research is done. As the recipients of federal funds and the institutional sponsors of research activities, administrative officers must comply with regulatory and legal requirements that accompany public support. They are required, for example, “to foster a research environment that discourages misconduct in all research and that deals forthrightly with possible misconduct” (DHHS, 1989a, p. 32451).

Academic institutions traditionally have relied on their faculty to ensure that appropriate scientific and disciplinary standards are maintained. A few universities and other research institutions have also adopted policies or guidelines to clarify the principles that their members are expected to observe in the conduct of scientific research. 9 In addition, as a result of several highly publicized incidents of misconduct in science and the subsequent enactment of governmental regulations, most major research institutions have now adopted policies and procedures for handling allegations of misconduct in science.

Institutional policies governing research practices can have a powerful effect on research practices if they are commensurate with the norms that apply to a wide spectrum of research investigators. In particular, the process of adopting and implementing strong institutional policies can sensitize the members of those institutions to the potential for ethical problems in their work. Institutional policies can establish explicit standards that institutional officers then have the power to enforce with sanctions and penalties.

Institutional policies are limited, however, in their ability to specify the details of every problematic situation, and they can weaken or displace individual professional judgment in such situations. Currently, academic institutions have very few formal policies and programs in specific areas such as authorship, communication and publication, and training and supervision.

Government Regulations and Policies

Government agencies have developed specific rules and procedures that directly affect research practices in areas such as laboratory safety, the treatment of human and animal research subjects, and the use of toxic or potentially hazardous substances in research.

But policies and procedures adopted by some government research agencies to address misconduct in science (see Chapter 5 ) represent a significant new regulatory development in the relationships between research institutions and government sponsors. The standards and criteria used to monitor institutional compliance with an increasing number of government regulations and policies affecting research practices have been a source of significant disagreement and tension within the research community.

In recent years, some government research agencies have also adopted policies and procedures for the treatment of research data and materials in their extramural research programs. For example, the National Science Foundation (NSF) has implemented a data-sharing policy through program management actions, including proposal review and award negotiations and conditions. The NSF policy acknowledges that grantee institutions will “keep principal rights to intellectual property conceived under NSF sponsorship” to encourage appropriate commercialization of the results of research (NSF, 1989b, p. 1). However, the NSF policy emphasizes “that retention of such rights does not reduce the responsibility of researchers and institutions to make results and supporting materials openly accessible” (p. 1).

In seeking to foster data sharing under federal grant awards, the government relies extensively on the scientific traditions of openness and sharing. Research agency officials have observed candidly that if the vast majority of scientists were not so committed to openness and dissemination, government policy might require more aggressive action. But the principles that have traditionally characterized scientific inquiry can be difficult to maintain. For example, NSF staff have commented, “Unless we can arrange real returns or incentives for the original investigator, either in financial support or in professional recognition, another researcher's request for sharing is likely to present itself as ‘hassle'—an unwelcome nuisance and diversion. Therefore, we should hardly be surprised if researchers display some reluctance to share in practice, however much they may declare and genuinely feel devotion to the ideal of open scientific communication ” (NSF, 1989a, p. 4).

Social Attitudes and Expectations

Research scientists are part of a larger human society that has recently experienced profound changes in attitudes about ethics, morality, and accountability in business, the professions, and government. These attitudes have included greater skepticism of the authority of experts and broader expectations about the need for visible mechanisms to assure proper research practices, especially in areas that affect the public welfare. Social attitudes are also having a more direct influence on research practices as science achieves a more prominent and public role in society. In particular, concern about waste, fraud, and abuse involving government funds has emerged as a factor that now directly influences the practices of the research community.

Varying historical and conceptual perspectives also can affect expectations about standards of research practice. For example, some journalists have criticized several prominent scientists, such as Mendel, Newton, and Millikan, because they “cut corners in order to make their theories prevail” (Broad and Wade, 1982, p. 35). The criticism suggests that all scientists at all times, in all phases of their work, should be bound by identical standards.

Yet historical studies of the social context in which scientific knowledge has been attained suggest that modern criticism of early scientific work often imposes contemporary standards of objectivity and empiricism that have in fact been developed in an evolutionary manner. 10 Holton has argued, for example, that in selecting data for publication, Millikan exercised creative insight in excluding unreliable data resulting from experimental error. But such practices, by today's standards, would not be acceptable without reporting the justification for omission of recorded data.

In the early stages of pioneering studies, particularly when fundamental hypotheses are subject to change, scientists must be free to use creative judgment in deciding which data are truly significant. In such moments, the standards of proof may be quite different from those that apply at stages when confirmation and consensus are sought from peers. Scientists must consistently guard against self-deception, however, particularly when theoretical prejudices tend to overwhelm the skepticism and objectivity basic to experimental practices.

In discussing “the theory-ladenness of observations,” Sapp (1990) observed the fundamental paradox that can exist in determining the “appropriateness” of data selection in certain experiments done in the past: scientists often craft their experiments so that the scientific problems and research subjects conform closely with the theory that they expect to verify or refute. Thus, in some cases, their observations may come closer to theoretical expectations than what might be statistically proper.

This source of bias may be acceptable when it is influenced by scientific insight and judgment. But political, financial, or other sources of bias can corrupt the process of data selection. In situations where both kinds of influence exist, it is particularly important for scientists to be forthcoming about possible sources of bias in the interpretation of research results. The coupling of science to other social purposes in fostering economic growth and commercial technology requires renewed vigilance to maintain acceptable standards for disclosure and control of financial or competitive conflicts of interest and bias in the research environment. The failure to distinguish between appropriate and inappropriate sources of bias in research practices can lead to erosion of public trust in the autonomy of the research enterprise.

RESEARCH PRACTICES

In reviewing modern research practices for a range of disciplines, and analyzing factors that could affect the integrity of the research process, the panel focused on the following four areas:

Data handling—acquisition, management, and storage;

Communication and publication;

Correction of errors; and

Research training and mentorship.

Commonly understood practices operate in each area to promote responsible research conduct; nevertheless, some questionable research practices also occur. Some research institutions, scientific societies, and journals have established policies to discourage questionable practices, but there is not yet a consensus on how to treat violations of these policies. 11 Furthermore, there is concern that some questionable practices may be encouraged or stimulated by other institutional factors. For example, promotion or appointment policies that stress quantity rather than the quality of publications as a measure of productivity could contribute to questionable practices.

Data Handling

Acquisition and management.

Scientific experiments and measurements are transformed into research data. The term “research data” applies to many different forms of scientific information, including raw numbers and field notes, machine tapes and notebooks, edited and categorized observations, interpretations and analyses, derived reagents and vectors, and tables, charts, slides, and photographs.

Research data are the basis for reporting discoveries and experimental results. Scientists traditionally describe the methods used for an experiment, along with appropriate calibrations, instrument types, the number of repeated measurements, and particular conditions that may have led to the omission of some data in the reported version. Standard procedures, innovations for particular purposes, and judgments concerning the data are also reported. The general standard of practice is to provide information that is sufficiently complete so that another scientist can repeat or extend the experiment.

When a scientist communicates a set of results and a related piece of theory or interpretation in any form (at a meeting, in a journal article, or in a book), it is assumed that the research has been conducted as reported. It is a violation of the most fundamental aspect of the scientific research process to set forth measurements that have not, in fact, been performed (fabrication) or to ignore or change relevant data that contradict the reported findings (falsification).

On occasion what is actually proper research practice may be confused with misconduct in science. Thus, for example, applying scientific judgment to refine data and to remove spurious results places special responsibility on the researcher to avoid misrepresentation of findings. Responsible practice requires that scientists disclose the basis for omitting or modifying data in their analyses of research results, especially when such omissions or modifications could alter the interpretation or significance of their work.

In the last decade, the methods by which research scientists handle, store, and provide access to research data have received increased scrutiny, owing to conflicts over ownership, such as those described by Nelkin (1984); advances in the methods and technologies that are used to collect, retain, and share data; and the costs of data storage. More specific concerns have involved the profitability associated with the patenting of science-based results in some fields and the need to verify independently the accuracy of research results used in public or private decision making. In resolving competing claims, the interests of individual scientists and research institutions may not always coincide: researchers may be willing to exchange scientific data of possible economic significance without regard for financial or institutional implications, whereas their institutions may wish to establish intellectual property rights and obligations prior to any disclosure.

The general norms of science emphasize the principle of openness. Scientists are generally expected to exchange research data as well as unique research materials that are essential to the replication or extension of reported findings. The 1985 report Sharing Research Data concluded that the general principle of data sharing is widely accepted, especially in the behavioral and social sciences (NRC, 1985). The report catalogued the benefits of data sharing, including maintaining the integrity of the research process by providing independent opportunities for verification, refutation, or refinement of original results and data; promoting new research and the development and testing of new theories; and encouraging appropriate use of empirical data in policy formulation and evaluation. The same report examined obstacles to data sharing, which include the criticism or competition that might be stimulated by data sharing; technical barriers that may impede the exchange of computer-readable data; lack of documentation of data sets; and the considerable costs of documentation, duplication, and transfer of data.

The exchange of research data and reagents is ideally governed by principles of collegiality and reciprocity: scientists often distribute reagents with the hope that the recipient will reciprocate in the future, and some give materials out freely with no stipulations attached. 12 Scientists who repeatedly or flagrantly deviate from the tradition of sharing become known to their peers and may suffer subtle forms of professional isolation. Such cases may be well known to senior research investigators, but they are not well documented.

Some scientists may share materials as part of a collaborative agreement in exchange for co-authorship on resulting publications. Some donors stipulate that the shared materials are not to be used for applications already being pursued by the donor's laboratory. Other stipulations include that the material not be passed on to third parties without prior authorization, that the material not be used for proprietary research, or that the donor receive prepublication copies of research publications derived from the material. In some instances, so-called materials transfer agreements are executed to specify the responsibilities of donor and recipient. As more academic research is being supported under proprietary agreements, researchers and institutions are experiencing the effects of these arrangements on research practices.

Governmental support for research studies may raise fundamental questions of ownership and rights of control, particularly when data are subsequently used in proprietary efforts, public policy decisions, or litigation. Some federal research agencies have adopted policies for data sharing to mitigate conflicts over issues of ownership and access (NIH, 1987; NSF, 1989b).

Many research investigators store primary data in the laboratories in which the data were initially derived, generally as electronic records or data sheets in laboratory notebooks. For most academic laboratories, local customary practice governs the storage (or discarding) of research data. Formal rules or guidelines concerning their disposition are rare.

Many laboratories customarily store primary data for a set period (often 3 to 5 years) after they are initially collected. Data that support publications are usually retained for a longer period than are those tangential to reported results. Some research laboratories serve as the proprietor of data and data books that are under the stewardship of the principal investigator. Others maintain that it is the responsibility of the individuals who collected the data to retain proprietorship, even if they leave the laboratory.

Concerns about misconduct in science have raised questions about the roles of research investigators and of institutions in maintaining and providing access to primary data. In some cases of alleged misconduct, the inability or unwillingness of an investigator to provide primary data or witnesses to support published reports sometimes has constituted a presumption that the experiments were not conducted as reported. 13 Furthermore, there is disagreement about the responsibilities of investigators to provide access to raw data, particularly when the reported results have been challenged by others. Many scientists believe that access should be restricted to peers and colleagues, usually following publication of research results, to reduce external demands on the time of the investigator. Others have suggested that raw data supporting research reports should be accessible to any critic or competitor, at any time, especially if the research is conducted with public funds. This topic, in particular, could benefit from further research and systematic discussion to clarify the rights and responsibilities of research investigators, institutions, and sponsors.

Institutional policies have been developed to guide data storage practices in some fields, often stimulated by desires to support the patenting of scientific results and to provide documentation for resolving disputes over patent claims. Laboratories concerned with patents usually have very strict rules concerning data storage and note keeping, often requiring that notes be recorded in an indelible form and be countersigned by an authorized person each day. A few universities have also considered the creation of central storage repositories for all primary data collected by their research investigators. Some government research institutions and industrial research centers maintain such repositories to safeguard the record of research developments for scientific, historical, proprietary, and national security interests.

In the academic environment, however, centralized research records raise complex problems of ownership, control, and access. Centralized data storage is costly in terms of money and space, and it presents logistical problems of cataloguing and retrieving data. There have been suggestions that some types of scientific data should be incorporated into centralized computerized data banks, a portion of which could be subject to periodic auditing or certification. 14 But much investigator-initiated research is not suitable for random data audits because of the exploratory nature of basic or discovery research. 15

Some scientific journals now require that full data for research papers be deposited in a centralized data bank before final publication. Policies and practices differ, but in some fields support is growing for compulsory deposit to enhance researchers' access to supporting data.

Issues Related to Advances in Information Technology

Advances in electronic and other information technologies have raised new questions about the customs and practices that influence the storage, ownership, and exchange of electronic data and software. A number of special issues, not addressed by the panel, are associated with computer modeling, simulation, and other approaches that are becoming more prevalent in the research environment. Computer technology can enhance research collaboration; it can also create new impediments to data sharing resulting from increased costs, the need for specialized equipment, or liabilities or uncertainties about responsibilities for faulty data, software, or computer-generated models.

Advances in computer technology may assist in maintaining and preserving accurate records of research data. Such records could help resolve questions about the timing or accuracy of specific research findings, especially when a principal investigator is not available or is uncooperative in responding to such questions. In principle, properly managed information technologies, utilizing advances in nonerasable optical disk systems, might reinforce openness in scientific research and make primary data more transparent to collaborators and research managers. For example, the so-called WORM (write once, read many) systems provide a high-density digital storage medium that supplies an ineradicable audit trail and historical record for all entered information (Haas, 1991).

Advances in information technologies could thus provide an important benefit to research institutions that wish to emphasize greater access to and storage of primary research data. But the development of centralized information systems in the academic research environment raises difficult issues of ownership, control, and principle that reflect the decentralized character of university governance. Such systems are also a source of additional research expense, often borne by individual investigators. Moreover, if centralized systems are perceived by scientists as an inappropriate or ineffective form of management or oversight of individual research groups, they simply may not work in an academic environment.

Communication and Publication

Scientists communicate research results by a variety of formal and informal means. In earlier times, new findings and interpretations were communicated by letter, personal meeting, and publication. Today, computer networks and facsimile machines have supplemented letters and telephones in facilitating rapid exchange of results. Scientific meetings routinely include poster sessions and press conferences as well as formal presentations. Although research publications continue to document research findings, the appearance of electronic publications and other information technologies heralds change. In addition, incidents of plagiarism, the increasing number of authors per article in selected fields, and the methods by which publications are assessed in determining appointments and promotions have all increased concerns about the traditions and practices that have guided communication and publication.

Journal publication, traditionally an important means of sharing information and perspectives among scientists, is also a principal means of establishing a record of achievement in science. Evaluation of the accomplishments of individual scientists often involves not only the numbers of articles that have resulted from a selected research effort, but also the particular journals in which the articles have appeared. Journal submission dates are often important in establishing priority and intellectual property claims.

Authorship of original research reports is an important indicator of accomplishment, priority, and prestige within the scientific community. Questions of authorship in science are intimately connected with issues of credit and responsibility. Authorship practices are guided by disciplinary traditions, customary practices within research groups, and professional and journal standards and policies. 16 There is general acceptance of the principle that each named author has made a significant intellectual contribution to the paper, even though there remains substantial disagreement over the types of contributions that are judged to be significant.

A general rule is that an author must have participated sufficiently in the work to take responsibility for its content and vouch for its validity. Some journals have adopted more specific guidelines, suggesting that credit for authorship be contingent on substantial participation in one or more of the following categories: (1) conception and design of the experiment, (2) execution of the experiment and collection and storage of the supporting data, (3) analysis and interpretation of the primary data, and (4) preparation and revision of the manuscript. The extent of participation in these four activities required for authorship varies across journals, disciplines, and research groups. 17

“Honorary,” “gift,” or other forms of noncontributing authorship are problems with several dimensions. 18 Honorary authors reap an inflated list of publications incommensurate with their scientific contributions (Zen, 1988). Some scientists have requested or been given authorship as a form of recognition of their status or influence rather than their intellectual contribution. Some research leaders have a custom of including their own names in any paper issuing from their laboratory, although this practice is increasingly discouraged. Some students or junior staff encourage such “gift authorship” because they feel that the inclusion of prestigious names on their papers increases the chance of publication in well-known journals. In some cases, noncontributing authors have been listed without their consent, or even without their being told. In response to these practices, some journals now require all named authors to sign the letter that accompanies submission of the original article, to ensure that no author is named without consent.

“Specialized” authorship is another issue that has received increasing attention. In these cases, a co-author may claim responsibility for a specialized portion of the paper and may not even see or be able to defend the paper as a whole. 19 “Specialized” authorship may also result from demands that co-authorship be given as a condition of sharing a unique research reagent or selected data that do not constitute a major contribution—demands that many scientists believe are inappropriate. “Specialized” authorship may be appropriate in cross-disciplinary collaborations, in which each participant has made an important contribution that deserves recognition. However, the risks associated with the inabilities of co-authors to vouch for the integrity of an entire paper are great; scientists may unwittingly become associated with a discredited publication.

Another problem of lesser importance, except to the scientists involved, is the order of authors listed on a paper. The meaning of author order varies among and within disciplines. For example, in physics the ordering of authors is frequently alphabetical, whereas in the social sciences and other fields, the ordering reflects a descending order of contribution to the described research. Another practice, common in biology, is to list the senior author last.

Appropriate recognition for the contributions of junior investigators, postdoctoral fellows, and graduate students is sometimes a source of discontent and unease in the contemporary research environment. Junior researchers have raised concerns about treatment of their contributions when research papers are prepared and submitted, particularly if they are attempting to secure promotions or independent research funding or if they have left the original project. In some cases, well-meaning senior scientists may grant junior colleagues undeserved authorship or placement as a means of enhancing the junior colleague's reputation. In others, significant contributions may not receive appropriate recognition.

Authorship practices are further complicated by large-scale projects, especially those that involve specialized contributions. Mission teams for space probes, oceanographic expeditions, and projects in high-energy physics, for example, all involve large numbers of senior scientists who depend on the long-term functioning of complex equipment. Some questions about communication and publication that arise from large science projects such as the Superconducting Super Collider include: Who decides when an experiment is ready to be published? How is the spokesperson for the experiment determined? Who determines who can give talks on the experiment? How should credit for technical or hardware contributions be acknowledged?

Apart from plagiarism, problems of authorship and credit allocation usually do not involve misconduct in science. Although some forms of “gift authorship,” in which a designated author made no identifiable contribution to a paper, may be viewed as instances of falsification, authorship disputes more commonly involve unresolved differences of judgment and style. Many research groups have found that the best method of resolving authorship questions is to agree on a designation of authors at the outset of the project. The negotiation and decision process provides initial recognition of each member's effort, and it may prevent misunderstandings that can arise during the course of the project when individuals may be in transition to new efforts or may become preoccupied with other matters.

Plagiarism. Plagiarism is using the ideas or words of another person without giving appropriate credit. Plagiarism includes the unacknowledged use of text and ideas from published work, as well as the misuse of privileged information obtained through confidential review of research proposals and manuscripts.

As described in Honor in Science, plagiarism can take many forms: at one extreme is the exact replication of another's writing without appropriate attribution (Sigma Xi, 1986). At the other is the more subtle “borrowing” of ideas, terms, or paraphrases, as described by Martin et al., “so that the result is a mosaic of other people's ideas and words, the writer's sole contribution being the cement to hold the pieces together.” 20 The importance of recognition for one's intellectual abilities in science demands high standards of accuracy and diligence in ensuring appropriate recognition for the work of others.

The misuse of privileged information may be less clear-cut because it does not involve published work. But the general principle of giving credit for the accomplishments of others is the same. The use of ideas or information obtained from peer review is not acceptable because the reviewer is in a privileged position. Some organizations, such as the American Chemical Society, have adopted policies to address these concerns (ACS, 1986).

Additional Concerns. Other problems related to authorship include overspecialization, overemphasis on short-term projects, and the organization of research communication around the “least publishable unit.” In a research system that rewards quantity at the expense of quality and favors speed over attention to detail (the effects of “publish or perish”), scientists who wait until their research data are complete before releasing them for publication may be at a disadvantage. Some institutions, such as Harvard Medical School, have responded to these problems by limiting the number of publications reviewed for promotion. Others have placed greater emphasis on major contributions as the basis for evaluating research productivity.

As gatekeepers of scientific journals, editors are expected to use good judgment and fairness in selecting papers for publication. Although editors cannot be held responsible for the errors or inaccuracies of papers that may appear in their journals, editors have obligations to consider criticism and evidence that might contradict the claims of an author and to facilitate publication of critical letters, errata, or retractions. 21 Some institutions, including the National Library of Medicine and professional societies that represent editors of scientific journals, are exploring the development of standards relevant to these obligations (Bailar et al., 1990).

Should questions be raised about the integrity of a published work, the editor may request an author's institution to address the matter. Editors often request written assurances that research reported conforms to all appropriate guidelines involving human or animal subjects, materials of human origin, or recombinant DNA.

In theory, editors set standards of authorship for their journals. In practice, scientists in the specialty do. Editors may specify the terms of acknowledgment of contributors who fall short of authorship status, and make decisions regarding appropriate forms of disclosure of sources of bias or other potential conflicts of interest related to published articles. For example, the New England Journal of Medicine has established a category of prohibited contributions from authors engaged in for-profit ventures: the journal will not allow such persons to prepare review articles or editorial commentaries for publication. Editors can clarify and insist on the confidentiality of review and take appropriate actions against reviewers who violate it. Journals also may require or encourage their authors to deposit reagents and sequence and crystallographic data into appropriate databases or storage facilities. 22

Peer Review

Peer review is the process by which editors and journals seek to be advised by knowledgeable colleagues about the quality and suitability of a manuscript for publication in a journal. Peer review is also used by funding agencies to seek advice concerning the quality and promise of proposals for research support. The proliferation of research journals and the rewards associated with publication and with obtaining research grants have put substantial stress on the peer review system. Reviewers for journals or research agencies receive privileged information and must exert great care to avoid sharing such information with colleagues or allowing it to enter their own work prematurely.

Although the system of peer review is generally effective, it has been suggested that the quality of refereeing has declined, that self-interest has crept into the review process, and that some journal editors and reviewers exert inappropriate influence on the type of work they deem publishable. 23

Correction of Errors

At some level, all scientific reports, even those that mark profound advances, contain errors of fact or interpretation. In part, such errors reflect uncertainties intrinsic to the research process itself: a hypothesis is formulated, an experimental test is devised, and based on the interpretation of the results, the hypothesis is refined, revised, or discarded. Each step in this cycle is subject to error. For any given report, “correctness” is limited by the following:

The precision and accuracy of the measurements. These in turn depend on available technology, the use of proper statistical and analytical methods, and the skills of the investigator.

Generality of the experimental system and approach. Studies must often be carried out using “model systems.” In biology, for example, a given phenomenon is examined in only one or a few among millions of organismal species.

Experimental design—a product of the background and expertise of the investigator.

Interpretation and speculation regarding the significance of the findings—judgments that depend on expert knowledge, experience, and the insightfulness and boldness of the investigator.

Viewed in this context, errors are an integral aspect of progress in attaining scientific knowledge. They are consequences of the fact that scientists seek fundamental truths about natural processes of vast complexity. In the best experimental systems, it is common that relatively few variables have been identified and that even fewer can be controlled experimentally. Even when important variables are accounted for, the interpretation of the experimental results may be incorrect and may lead to an erroneous conclusion. Such conclusions are sometimes overturned by the original investigator or by others when new insights from another study prompt a reexamination of older reported data. In addition, however, erroneous information can also reach the scientific literature as a consequence of misconduct in science.

What becomes of these errors or incorrect interpretations? Much has been made of the concept that science is “self-correcting”—that errors, whether honest or products of misconduct, will be exposed in future experiments because scientific truth is founded on the principle that results must be verifiable and reproducible. This implies that errors will generally not long confound the direction of thinking or experimentation in actively pursued areas of research. Clearly, published experiments are not routinely replicated precisely by independent investigators. However, each experiment is based on conclusions from prior studies; repeated failure of the experiment eventually calls into question those conclusions and leads to reevaluation of the measurements, generality, design, and interpretation of the earlier work.

Thus publication of a scientific report provides an opportunity for the community at large to critique and build on the substance of the report, and serves as one stage at which errors and misinterpretations can be detected and corrected. Each new finding is considered by the community in light of what is already known about the system investigated, and disagreements with established measurements and interpretations must be justified. For example, a particular interpretation of an electrical measurement of a material may implicitly predict the results of an optical experiment. If the reported optical results are in disagreement with the electrical interpretation, then the latter is unlikely to be correct, even though the measurements themselves were carefully and correctly performed. It is also possible, however, that the contradictory results are themselves incorrect, and this possibility will also be evaluated by the scientists working in the field. It is by this process of examination and reexamination that science advances.

The research endeavor can therefore be viewed as a two-tiered process: first, hypotheses are formulated, tested, and modified; second, results and conclusions are reevaluated in the course of additional study. In fact, the two tiers are interrelated, and the goals and traditions of science mandate major responsibilities in both areas for individual investigators. Importantly, the principle of self-correction does not diminish the responsibilities of the investigator in either area. The investigator has a fundamental responsibility to ensure that the reported results can be replicated in his or her laboratory. The scientific community in general adheres strongly to this principle, but practical constraints exist as a result of the availability of specialized instrumentation, research materials, and expert personnel. Other forces, such as competition, commercial interest, funding trends and availability, or pressure to publish may also erode the role of replication as a mechanism for fostering integrity in the research process. The panel is unaware of any quantitative studies of this issue.

The process of reevaluating prior findings is closely related to the formulation and testing of hypotheses. 24 Indeed, within an individual laboratory, the formulation/testing phase and the reevaluation phase are ideally ongoing interactive processes. In that setting, the precise replication of a prior result commonly serves as a crucial control in attempts to extend the original findings. It is not unusual that experimental flaws or errors of interpretation are revealed as the scope of an investigation deepens and broadens.

If new findings or significant questions emerge in the course of a reevaluation that affect the claims of a published report, the investigator is obliged to make public a correction of the erroneous result or to indicate the nature of the questions. Occasionally, this takes the form of a formal published retraction, especially in situations in which a central claim is found to be fundamentally incorrect or irreproducible. More commonly, a somewhat different version of the original experiment, or a revised interpretation of the original result, is published as part of a subsequent report that extends the initial work in other ways. Some concerns have been raised that such “revisions” can sometimes be so subtle and obscure as to be unrecognizable. Such behavior is, at best, a questionable research practice. Clearly, each scientist has a responsibility to foster an environment that encourages and demands rigorous evaluation and reevaluation of every key finding.

Much greater complexity is encountered when an investigator in one research group is unable to confirm the published findings of another. In such situations, precise replication of the original result is commonly not attempted because of the lack of identical reagents, differences in experimental protocols, diverse experimental goals, or differences in personnel. Under these circumstances, attempts to obtain the published result may simply be dropped if the central claim of the original study is not the major focus of the new study. Alternatively, the inability to obtain the original finding may be documented in a paper by the second investigator as part of a challenge to the original claim. In any case, such questions about a published finding usually provoke the initial investigator to attempt to reconfirm the original result, or to pursue additional studies that support and extend the original findings.

In accordance with established principles of science, scientists have the responsibility to replicate and reconfirm their results as a normal part of the research process. The cycles of theoretical and methodological formulation, testing, and reevaluation, both within and between laboratories, produce an ongoing process of revision and refinement that corrects errors and strengthens the fabric of research.

Research Training and Mentorship

The panel defined a mentor as that person directly responsible for the professional development of a research trainee. 25 Professional development includes both technical training, such as instruction in the methods of scientific research (e.g., research design, instrument use, and selection of research questions and data), and socialization in basic research practices (e.g., authorship practices and sharing of research data).

Positive Aspects of Mentorship

The relationship of the mentor and research trainee is usually characterized by extraordinary mutual commitment and personal involvement. A mentor, as a research advisor, is generally expected to supervise the work of the trainee and ensure that the trainee's research is completed in a sound, honest, and timely manner. The ideal mentor challenges the trainee, spurs the trainee to higher scientific achievement, and helps socialize the trainee into the community of scientists by demonstrating and discussing methods and practices that are not well understood.

Research mentors thus have complex and diverse roles. Many individuals excel in providing guidance and instruction as well as personal support, and some mentors are resourceful in providing funds and securing professional opportunities for their trainees. The mentoring relationship may also combine elements of other relationships, such as parenting, coaching, and guildmastering. One mentor has written that his “research group is like an extended family or small tribe, dependent on one another, but led by the mentor, who acts as their consultant, critic, judge, advisor, and scientific father” (Cram, 1989, p. 1). Another mentor described trainees who had lost their mentors to death, job changes, or other circumstances as “orphaned graduate students” (Sindermann, 1987). Many students come to respect and admire their mentors, who act as role models for their younger colleagues.

Difficulties Associated with Mentorship

However, the mentoring relationship does not always function properly or even satisfactorily. Almost no literature exists that evaluates which problems are idiosyncratic and which are systemic. However, it is clear that traditional practices in the area of mentorship and training are under stress. In some research fields, for example, concerns are being raised about how the increasing size and diverse composition of research groups affect the quality of the relationship between trainee and mentor. As the size of research laboratories expands, the quality of the training environment is at risk (CGS, 1990a).

Large laboratories may provide valuable instrumentation and access to unique research skills and resources as well as an opportunity to work in pioneering fields of science. But as only one contribution to the efforts of a large research team, a graduate student's work may become highly specialized, leading to a narrowing of experience and greater dependency on senior personnel; in a period when the availability of funding may limit research opportunities, laboratory heads may find it necessary to balance research decisions for the good of the team against the individual educational interests of each trainee. Moreover, the demands of obtaining sufficient resources to maintain a laboratory in the contemporary research environment often separate faculty from their trainees. When laboratory heads fail to participate in the everyday workings of the laboratory—even for the most beneficent of reasons, such as finding funds to support young investigators—their inattention may harm their trainees' education.

Although the size of a research group can influence the quality of mentorship, the more important issues are the level of supervision received by trainees, the degree of independence that is appropriate for the trainees' experience and interests, and the allocation of credit for achievements that are accomplished by groups composed of individuals with different status. Certain studies involving large groups of 40 to 100 or more are commonly carried out by collaborative or hierarchical arrangements under a single investigator. These factors may affect the ability of research mentors to transmit the methods and ethical principles according to which research should be conducted.

Problems also arise when faculty members are not directly rewarded for their graduate teaching or training skills. Although faculty may receive indirect rewards from the contributions of well-trained graduate students to their own research as well as the satisfaction of seeing their students excelling elsewhere, these rewards may not be sufficiently significant in tenure or promotion decisions. When institutional policies fail to recognize and reward the value of good teaching and mentorship, the pressures to maintain stable funding for research teams in a competitive environment can overwhelm the time allocated to teaching and mentorship by a single investigator.

The increasing duration of the training period in many research fields is another source of concern, particularly when it prolongs the dependent status of the junior investigator. The formal period of graduate and postdoctoral training varies considerably among fields of study. In 1988, the median time to the doctorate from the baccalaureate degree was 6.5 years (NRC, 1989). The disciplinary median varied: 5.5 years in chemistry; 5.9 years in engineering; 7.1 years in health sciences and in earth, atmospheric, and marine sciences; and 9.0 years in anthropology and sociology. 26

Students, research associates, and faculty are currently raising various questions about the rights and obligations of trainees. Sexist behavior by some research directors and other senior scientists is a particular source of concern. Another significant concern is that research trainees may be subject to exploitation because of their subordinate status in the research laboratory, particularly when their income, access to research resources, and future recommendations are dependent on the goodwill of the mentor. Foreign students and postdoctoral fellows may be especially vulnerable, since their immigration status often depends on continuation of a research relationship with the selected mentor.

Inequalities between mentor and trainee can exacerbate ordinary conflicts such as the distribution of credit or blame for research error (NAS, 1989). When conflicts arise, the expectations and assumptions that govern authorship practices, ownership of intellectual property, and the giving of references and recommendations are exposed for professional—and even legal—scrutiny (Nelkin, 1984; Weil and Snapper, 1989).

Making Mentorship Better

Ideally, mentors and trainees should select each other with an eye toward scientific merit, intellectual and personal compatibility, and other relevant factors. But this situation operates only under conditions of freely available information and unconstrained choice—conditions that usually do not exist in academic research groups. The trainee may choose to work with a faculty member based solely on criteria of patronage, perceived influence, or ability to provide financial support.

Good mentors may be well known and highly regarded within their research communities and institutions. Unfortunately, individuals who exploit the mentorship relationship may be less visible. Poor mentorship practices may be self-correcting over time, if students can detect and avoid research groups characterized by disturbing practices. However, individual trainees who experience abusive relationships with a mentor may discover only too late that the practices that constitute the abuse were well known but were not disclosed to new initiates.

It is common practice for a graduate student to be supervised not only by an individual mentor but also by a committee that represents the graduate department or research field of the student. However, departmental oversight is rare for the postdoctoral research fellow. In order to foster good mentorship practices for all research trainees, many groups and institutions have taken steps to clarify the nature of individual and institutional responsibilities in the mentor–trainee relationship. 27

FINDINGS AND CONCLUSIONS

The self-regulatory system that characterizes the research process has evolved from a diverse set of principles, traditions, standards, and customs transmitted from senior scientists, research directors, and department chairs to younger scientists by example, discussion, and informal education. The principles of honesty, collegiality, respect for others, and commitment to dissemination, critical evaluation, and rigorous training are characteristic of all the sciences. Methods and techniques of experimentation, styles of communicating findings, the relationship between theory and experimentation, and laboratory groupings for research and for training vary with the particular scientific disciplines. Within those disciplines, practices combine the general with the specific. Ideally, research practices reflect the values of the wider research community and also embody the practical skills needed to conduct scientific research.

Practicing scientists are guided by the principles of science and the standard practices of their particular scientific discipline as well as their personal moral principles. But conflicts are inherent among these principles. For example, loyalty to one's group of colleagues can be in conflict with the need to correct or report an abuse of scientific practice on the part of a member of that group.

Because scientists and the achievements of science have earned the respect of society at large, the behavior of scientists must accord not only with the expectations of scientific colleagues, but also with those of a larger community. As science becomes more closely linked to economic and political objectives, the processes by which scientists formulate and adhere to responsible research practices will be subject to increasing public scrutiny. This is one reason for scientists and research institutions to clarify and strengthen the methods by which they foster responsible research practices.

Accordingly, the panel emphasizes the following conclusions:

The panel believes that the existing self-regulatory system in science is sound. But modifications are necessary to foster integrity in a changing research environment, to handle cases of misconduct in science, and to discourage questionable research practices.

Individual scientists have a fundamental responsibility to ensure that their results are reproducible, that their research is reported thoroughly enough for others to reproduce it, and that significant errors are corrected when they are recognized. Editors of scientific journals share these last two responsibilities.

Research mentors, laboratory directors, department heads, and senior faculty are responsible for defining, explaining, exemplifying, and requiring adherence to the value systems of their institutions. The neglect of sound training in a mentor's laboratory will over time compromise the integrity of the research process.

Administrative officials within the research institution also bear responsibility for ensuring that good scientific practices are observed in units of appropriate jurisdiction and that balanced reward systems appropriately recognize research quality, integrity, teaching, and mentorship. Adherence to scientific principles and disciplinary standards is at the root of a vital and productive research environment.

At present, scientific principles are passed on to trainees primarily by example and discussion, including training in customary practices. Most research institutions do not have explicit programs of instruction and discussion to foster responsible research practices, but the communication of values and traditions is critical to fostering responsible research practices and deterring misconduct in science.

Efforts to foster responsible research practices in areas such as data handling, communication and publication, and research training and mentorship deserve encouragement by the entire research community. Problems have also developed in these areas that require explicit attention and correction by scientists and their institutions. If not properly resolved, these problems may weaken the integrity of the research process.

1. See, for example, Kuyper (1991).

2. See, for example, the proposal by Pigman and Carmichael (1950).

3. See, for example, Holton (1988) and Ravetz (1971).

4. Several excellent books on experimental design and statistical methods are available. See, for example, Wilson (1952) and Beveridge (1957).

5. For a somewhat dated review of codes of ethics adopted by the scientific and engineering societies, see Chalk et al. (1981).

6. The discussion in this section is derived from Mark Frankel's background paper, “Professional Societies and Responsible Research Conduct,” included in Volume II of this report.

7. For a broader discussion on this point, see Zuckerman (1977).

8. For a full discussion of the roles of scientific societies in fostering responsible research practices, see the background paper prepared by Mark Frankel, “Professional Societies and Responsible Research Conduct,” in Volume II of this report.

9. Selected examples of academic research conduct policies and guidelines are included in Volume II of this report.

10. See, for example, Holton's response to the criticisms of Millikan in Chapter 12 of Thematic Origins of Scientific Thought (Holton, 1988). See also Holton (1978).

11. See, for example, responses to the Proceedings of the National Academy of Sciences action against Friedman: Hamilton (1990) and Abelson et al. (1990). See also the discussion in Bailar et al. (1990).

12. Much of the discussion in this section is derived from a background paper, “Reflections on the Current State of Data and Reagent Exchange Among Biomedical Researchers,” prepared by Robert Weinberg and included in Volume II of this report.

13. See, for example, Culliton (1990) and Bradshaw et al. (1990). For the impact of the inability to provide corroborating data or witnesses, also see Ross et al. (1989).

14. See, for example, Rennie (1989) and Cassidy and Shamoo (1989).

15. See, for example, the discussion on random data audits in Institute of Medicine (1989a), pp. 26-27.

16. For a full discussion of the practices and policies that govern authorship in the biological sciences, see Bailar et al. (1990).

17. Note that these general guidelines exclude the provision of reagents or facilities and the supervision of research as criteria for authorship.

18. A full discussion of problematic practices in authorship is included in Bailar et al. (1990). A controversial review of the responsibilities of co-authors is presented by Stewart and Feder (1987).

19. In the past, scientific papers often included a special note by a named researcher, not a co-author of the paper, who described, for example, a particular substance or procedure in a footnote or appendix. This practice seems to have been abandoned for reasons that are not well understood.

20. Martin et al. (1969), as cited in Sigma Xi (1986), p. 41.

21. Huth (1988) suggests a “notice of fraud or notice of suspected fraud” issued by the journal editor to call attention to the controversy (p. 38). Angell (1983) advocates closer coordination between institutions and editors when institutions have ascertained misconduct.

22. Such facilities include Cambridge Crystallographic Data Base, GenBank at Los Alamos National Laboratory, the American Type Culture Collection, and the Protein Data Bank at Brookhaven National Laboratory. Deposition is important for data that cannot be directly printed because of large volume.

23. For more complete discussions of peer review in the wider context, see, for example, Cole et al. (1977) and Chubin and Hackett (1990).

24. The strength of theories as sources of the formulation of scientific laws and predictive power varies among different fields of science. For example, theories derived from observations in the field of evolutionary biology lack a great deal of predictive power. The role of chance in mutation and natural selection is great, and the future directions that evolution may take are essentially impossible to predict. Theory has enormous power for clarifying understanding of how evolution has occurred and for making sense of detailed data, but its predictive power in this field is very limited. See, for example, Mayr (1982, 1988).

25. Much of the discussion on mentorship is derived from a background paper prepared for the panel by David Guston. A copy of the full paper, “Mentorship and the Research Training Experience,” is included in Volume II of this report.

26. Although the time to the doctorate is increasing, there is some evidence that the magnitude of the increase may be affected by the organization of the cohort chosen for study. In the humanities, the increased time to the doctorate is not as large if one chooses as an organizational base the year in which the baccalaureate was received by Ph.D. recipients, rather than the year in which the Ph.D. was completed; see Bowen et al. (1991).

27. Some universities have written guidelines for the supervision or mentorship of trainees as part of their institutional research policy guidelines (see, for example, the guidelines adopted by Harvard University and the University of Michigan that are included in Volume II of this report). Other groups or institutions have written “guidelines” (IOM, 1989a; NIH, 1990), “checklists” (CGS, 1990a), and statements of “areas of concern” and suggested “devices” (CGS, 1990c).

The guidelines often affirm the need for regular, personal interaction between the mentor and the trainee. They indicate that mentors may need to limit the size of their laboratories so that they are able to interact directly and frequently with all of their trainees. Although there are many ways to ensure responsible mentorship, methods that provide continuous feedback, whether through formal or informal mechanisms, are apt to be the most successful (CGS, 1990a). Departmental mentorship awards (comparable to teaching or research prizes) can recognize, encourage, and enhance the mentoring relationship. For other discussions on mentorship, see the paper by David Guston in Volume II of this report.

One group convened by the Institute of Medicine has suggested “that the university has a responsibility to ensure that the size of a research unit does not outstrip the mentor's ability to maintain adequate supervision” (IOM, 1989a, p. 85). Others have noted that although it may be desirable to limit the number of trainees assigned to a senior investigator, there is insufficient information at this time to suggest that numbers alone significantly affect the quality of research supervision (IOM, 1989a, p. 33).

Responsible Science is a comprehensive review of factors that influence the integrity of the research process. Volume I examines reports on the incidence of misconduct in science and reviews institutional and governmental efforts to handle cases of misconduct.

The result of a two-year study by a panel of experts convened by the National Academy of Sciences, this book critically analyzes the impact of today's research environment on the traditional checks and balances that foster integrity in science.

Responsible Science is a provocative examination of the role of educational efforts; research guidelines; and the contributions of individual scientists, mentors, and institutional officials in encouraging responsible research practices.


"I'm just being honest." When and why honesty enables help versus harm

Affiliation: Booth School of Business. PMID: 32463271; DOI: 10.1037/pspi0000242

Although honesty is typically conceptualized as a virtue, it often conflicts with other equally important moral values, such as avoiding interpersonal harm. In the present research, we explore when and why honesty enables helpful versus harmful behavior. Across 5 incentive-compatible experiments in the context of advice-giving and economic games, we document four central results. First, honesty enables selfish harm: people are more likely to engage in and justify selfish behavior when selfishness is associated with honesty than when it is not. Second, people are selectively honest: people are more likely to be honest when honesty is associated with selfishness than when honesty is associated with altruism. Third, these effects are more consistent with genuine, rather than motivated, preferences for honesty. Fourth, even when individuals have no selfish incentive to be honest, honesty can lead to interpersonal harm because people avoid information about how their honest behavior affects others. This research unearths new insights on the mechanisms underlying moral choice, and consequently, the contexts in which moral principles are a force of good versus a force of evil.



National Academies of Sciences, Engineering, and Medicine; Policy and Global Affairs; Committee on Science, Engineering, Medicine, and Public Policy; Committee on Responsible Science. Fostering Integrity in Research. Washington (DC): National Academies Press (US); 2017 Apr 11.


9 Identifying and Promoting Best Practices for Research Integrity

An article about computational science in a scientific publication is not the scholarship itself, it is merely advertising of the scholarship. The actual scholarship is the complete software development environment and the complete set of instructions which generated the figures. — Jonathan Buckheit and David Donoho (1995), paraphrasing Jon Claerbout

The promotion of responsible research practices is one of the primary responses to concerns about research integrity. Other responses include the development of policies and procedures to respond to allegations of misconduct (covered in Chapter 7 ) and education in the responsible conduct of research (covered in Chapter 10 ). Exploring best practices in research helps to clarify that promoting these practices is not only a moral imperative but is also essential to good science.

Over the past three decades, government agencies, advisory bodies, scientific societies, and others have issued reports, educational guides, and other materials that address the topic of research practices. For example, the 1992 report Responsible Science points to a number of factors that affect research practices, including general scientific norms, the nature and traditions of disciplines, the example of individuals who either hold positions of authority or command respect, institutional and funding agency policies, and the expectations of peers and the larger society ( NAS-NAE-IOM, 1992 ). That committee's review of research practices focused on four areas: data handling (including acquisition, management, and storage); communication and publication; correction of errors; and research training and mentorship. The report explained how commonly understood practices in each of these areas promote research integrity.

A number of other documents and codes of conduct from around the world have specified good or appropriate research practices (CCA, 2010; DCSD, 2009; ESF-ALLEA, 2011; ICB, 2010; IOM-NRC, 2002; MPG, 2009; NHMRC-ARC-UA, 2007; Singapore Statement, 2010; TENK, 2002; UKRIO, 2009). In addition, responsible research practices have constituted the primary subject matter for responsible conduct of research education activities, as illustrated by various educational guides (Gustafsson et al., 2006; Steneck, 2007; NAS-NAE-IOM, 2009b; IAP, 2016). These materials address the topics covered in Responsible Science—data handling, publication, correcting errors, and mentoring. Some add other topics, including research collaboration, peer review, conflicts of interest, and communicating with the public. Formulations of responsible research practices specific to certain fields address additional requirements, such as protection of human research subjects, care of laboratory animals, and prevention of the misuse of research and technology. For example, the National Institutes of Health (NIH, 2009) has specified nine core areas of responsible conduct of research instruction.

Given the extensive effort to formulate responsible research practices, what does this report hope to add to the discussion? One goal is to reexamine the primary elements of responsible research practices in light of current conditions for doing scientific and scholarly work. A key conclusion of this study is that significant threats to research integrity exist in the United States and elsewhere, arising from a combination of factors present in the modern research environment. As discussed elsewhere, determining the incidence and trends of research misconduct and detrimental research practices is difficult or impossible with the existing data. However, failure to respond effectively, or in some cases an apparent tolerance for detrimental research practices by researchers, research institutions, journals, and funding agencies, has clearly contributed to delays in uncovering misconduct in several well-publicized cases. In some instances, this misconduct occurred over many years, and fabricated results were reported in many papers. And while survey data have limitations, a growing number of studies indicate that the prevalence of detrimental and questionable practices is too high and that the adherence to responsible practices is too low, both in general and in particular fields that are facing problems with irreproducibility of reported results ( John et al., 2012 ).

One reason that holding to best practices is such a challenge and is ultimately so important is that researchers, research institutions, journals, and sponsors may face incentive structures that are not completely aligned with the responsible practice of research. While individual researchers have long been recognized and discussed as potentially conflicted, it is reasonable to apply this perspective as well to other actors. For example, externally funded research is a revenue stream for research institutions and plays a business function in those settings, in addition to providing the necessary funding for scientists to conduct research. The need for institutions to maximize such funding streams may sometimes detract from their ability to uphold best practices. Institutions may not exercise the necessary degree of skepticism and oversight toward researchers who are very successful and valuable to the institution in terms of securing resources or enhancing its reputation.

Likewise, journal publishers and the editors who work for them may have incentives to take actions that are not consistent with best practices for fostering research integrity. In particular, the rise of bibliometric indicators such as the journal impact factor may pose difficulties as journal editors seek to publish the best research but also have an incentive to see the impact factor of their journals rise as far as possible. The inappropriate practice known as coercive citation, in which authors are pressured by journals to cite other papers from the journal, is an example ( Wilhite and Fong, 2012 ).

Finally, sponsors of research and users of research may be subject to pressures or incentives of their own that are not completely aligned with maintaining the integrity of science.

One element of this committee's task was to address the question of whether the research enterprise itself is capable of defining and strengthening basic standards for scientists and their institutions. A critical aspect of this question is that the integrity of the research enterprise is achieved not solely through the integrity of individual researchers and their research practices but through the integrity of the system of which they are a part—the combination of participants and processes that constitute the system as illustrated in Figure 1-1 . The best practices outlined here aim to reflect best practices in the context of the entire system of research and the interdependence of researchers, research institutions, funding agencies, journals, societies, and other participants. Developing this updated framework of responsible research practices will help the research enterprise identify particular practices that should be better understood and adhered to and how such understanding and adherence might be promoted and fostered.

FRAMING BEST PRACTICES FOR RESEARCH INTEGRITY

As described in Chapter 2 , the values of objectivity, honesty, openness, accountability, fairness, and stewardship underlie the effective functioning of research. These values are realized through the norms that apply to research practices. For example, honesty requires that researchers do not alter the data an experiment has produced, and openness means that researchers share the methods they used.

Norms permeate research. Some are formal and explicit, such as the regulatory requirements for treatment of animal and human subjects. Others are informal and sometimes implicit. For example, although there may be no policy that explicitly prohibits practices such as taking undeserved credit for the work of graduate students or postdocs that one is supervising or not extending deserved credit to them, researchers who exploit those whom they supervise for personal ends are working against the norms of science.

Norms can be descriptive as well as aspirational. Descriptive norms are those that are generally adhered to and are expected of members of the enterprise. Sanctions may be attached to serious violations of descriptive norms; for example, all those involved expect that researchers will accurately report the results of their research. Aspirational norms are ideals that members of the research enterprise hold and attempt to achieve; for example, researchers seek excellence in the design and execution of their research and seek results that will make significant contributions to the body of knowledge in a field ( Anderson et al., 2010 ).

The best practices described here are aimed at individuals and entities serving different roles within the research system, including researchers, reviewers, institutions, journals, and funders. The committee uses the term best practices here to refer to prescriptive and aspirational norms. The committee has drawn these best practices from the relevant literature, from the experts that it has consulted, and from the accumulated knowledge and experiences of its members. The practices identified encompass principles, strategies, modes of behavior, and activities that preserve the integrity of research and avoid the pitfalls that impede scientific progress. Except where noted, these practices do not require significant additional resources to implement and are indeed practiced in a variety of locations and settings. For most of these practices, the necessary conditions for implementation are recognition on the part of the identified stakeholders that the integrity of research is central to the practice and progress of research, and willingness to act on that recognition. One of the major impediments to such recognition and willingness, of course, is that these practices may not be completely aligned with the perceived self-interests of some stakeholders.

These best practices do not cover every possible ethical situation encountered in research. Nor do they include matters of science and technology policy that are largely administrative, procedural, or discipline specific, such as data retention policies in particular fields or the distribution of research funds. However, the ethical and the administrative overlap in many areas, especially in areas involving obligations of stewardship to the research system as a whole (e.g., in workforce policies), and these overlapping areas are addressed in what follows.

These best practices apply across all areas and forms of research. In contrast, specific codes of conduct are more prescriptive than best practices and can vary from discipline to discipline, such as the number and order of authors on a paper. The application of best practices may also vary in some particulars depending on whether research is undertaken in academia, industry, or government laboratories. The following compilation will strike many readers who are experienced in research as self-evident. These responsibilities are delineated here in part to demonstrate the dense web of relationships and obligations that characterize the research enterprise.

The committee has aimed to describe best practices that are specific enough to be implemented but that may also encompass a number of detailed components. Responsible research practice checklists are provided to enumerate these components.

Researchers

Principal investigators and other scientists (including technicians, undergraduate and graduate students, and postdocs) are the foundation of the research enterprise. The research record begins with their work, and researchers are the primary evaluators and verifiers of work done by others in their respective fields. Every scientific finding a researcher reports contributes to progress in the discipline, and failures in the conduct or reporting of research can immensely harm the progress of the field. Every researcher has the responsibility to ensure that these tasks are carried out to the best of his or her ability.

Researchers may play a number of roles during their careers, often simultaneously, including student, trainee, young investigator, principal investigator, department head, reviewer, editor, and administrator. The research process itself includes planning research, performing research, and disseminating results, and researchers have responsibilities at all points during the process. In planning research, they need to consider the effects of research, both positive and negative, on the broader society. It is especially important that they be vigilant about the possibility of unanticipated and potentially dangerous consequences of research, whether on a local or global scale. In interdisciplinary or international research collaborations, investigators may need to engage in continuing discussions about the standards that apply to such efforts.

As they perform research, scientists are expected to maintain high standards of proof and scientific credibility through validation of methods and rigorous confirmation of findings. They should keep clear and accurate records. They should follow the rules and procedures of their institution and laboratory regarding the physical and electronic security of data and the devices on which they are stored. They need to adhere to policies and regulations on the conduct of research related to personal safety. They should be open with supervisors and funders regarding progress, including positive and negative results.

Disseminating research entails responsibilities as well. Researchers should give credit to colleagues for help in completing work, whether in a presentation or a manuscript. They should reveal all methods and corresponding experimental findings that support conclusions as well as any unexplained outlying data that do not fit with the conclusions, allowing others to decide whether the conclusions are still valid despite the outliers.

Best Practice R-1: Research Integrity. Uphold research integrity with vigilance, professionalism, and collegiality

According to one formulation, integrity for the researcher “embodies above all the individual's commitment to intellectual honesty and personal responsibility” ( IOM-NRC, 2002 ). The duty of researchers to uphold research integrity is multifaceted. Fulfilling this duty starts with a broad understanding of scientific methods and the research enterprise as a human institution. Research requires the constant exercise of judgment and is subject to bias, whether conscious or unconscious. Researchers need to be aware of their own personal potential sources of bias in designing, carrying out, evaluating, and reporting their own work. They need to understand that knowledge advances over time, although errors and mistaken interpretations can occur along the way. Researchers who acknowledge and correct their own errors or misinterpretations with equanimity contribute to the progress of science. Likewise, researchers should be fair and generous when critiquing the work of others. Criticisms should focus on errors in the work and disagreements about interpretation, but not on the person.

In addition to meeting their field's standards of integrity and quality in their own work, as specified in the best practices on data handling and authorship, researchers need to promote high standards among colleagues. They should take careful and timely action when a concern about research integrity arises. As a prerequisite, they should understand the definitions of, and policies to address, research misconduct adopted by their institutions and funding agencies. They should be familiar with the appropriate formal procedures for expressing concerns and making allegations, as well as informal rules and steps to help ensure that such concerns and allegations are made responsibly ( Gunsalus, 1998a ). These informal rules include accounting for one's own biases, appreciating that one's knowledge of a situation may be incomplete or incorrect, and getting confidential perspectives on possible misconduct from a trusted advisor before making a formal allegation.

Researchers should maintain an active commitment to openness in research as the essential foundation of academic freedom, not just the integrity and credibility of science. A commitment to openness means both acting and advocating for openness.

Best Practice R-2: Data Handling. Manage research data effectively, responsibly, and transparently throughout the research process. This includes providing free and open access to research data, models, and code underlying reported results to the extent possible, consistent with disciplinary standards, funder requirements, employer policies, and relevant laws and regulations (such as those governing intellectual property)

Effective record keeping and data management while undertaking research, and complete sharing of data, models, and code when publicly reporting results, are fundamental to research integrity. The importance of updating knowledge and practices related to data is increasingly recognized around the world ( NAS-NAE-IOM, 2009a ; KNAW, 2013 ). The pitfalls that can occur when dishonest, closed, or ineffective data management practices are employed are illustrated by the translational omics case and other examples discussed in Chapter 7 and Appendix D .

Researchers need to understand and follow the data collection and analysis standards of their own fields. For example, research data will often contain potential outlying results. While refining data to remove outliers is appropriate, any data refinements should be applied to the entire dataset, and they should improve subsets of the data just as they improve the set as a whole. The refinement should also be well documented wherever the dataset appears. Some data refinements made after an experiment may be acceptable, since the types of noise that will show up in a dataset may be unclear until after the data are collected, but they should be based on an analytic principle that provides an explicit rationale for exclusion. Researchers should guard against the temptation to use a post hoc rationale to make undocumented refinements that strengthen support for a favored hypothesis. Such behavior is a detrimental practice or could even cross the line and become falsification.
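To make the distinction between a pre-specified analytic principle and post hoc tinkering concrete, the following minimal sketch in Python (an illustration added here, not a procedure prescribed by any of the reports cited) shows an outlier rule that is fixed in advance, applied uniformly to the full dataset, and logged alongside the retained data. The robust z-score rule, the threshold, and the function name are all assumptions chosen for the example.

    import numpy as np

    def exclude_outliers(data, z_threshold=3.0):
        """Drop points more than z_threshold robust z-scores from the median.

        The threshold is the pre-specified analytic principle; it is recorded
        in an audit log that travels with the dataset, rather than being tuned
        after looking at the results.
        """
        data = np.asarray(data, dtype=float)
        median = np.median(data)
        mad = np.median(np.abs(data - median)) or 1e-12  # robust scale estimate
        robust_z = 0.6745 * (data - median) / mad
        keep = np.abs(robust_z) <= z_threshold
        audit = {
            "rule": f"robust |z| <= {z_threshold} (median/MAD)",
            "n_total": int(data.size),
            "n_excluded": int((~keep).sum()),
            "excluded_values": data[~keep].tolist(),
        }
        return data[keep], audit

    measurements = [9.8, 10.1, 9.9, 10.2, 10.0, 42.0]
    clean, audit = exclude_outliers(measurements)
    print(audit)  # the same rule and log apply to every subset of the data

Because the rule is declared before the data are inspected and every exclusion is logged, a reader can apply the identical refinement to any subset and confirm that it behaves consistently.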

In some settings and some cases, data, models, and code may not be made available, or sharing may be delayed due to legal or regulatory restrictions, including those related to privacy, intellectual property protection, and national security classification. For research that does not result in publicly reported results, such as some work performed by industrial or government labs, sharing of data and code is not a requirement but should be undertaken where possible.

In the 21st century, many novel findings and published works are based on nonobvious analysis of large datasets. How to effectively manage these datasets and properly provide them or refer to them during review and publication are challenging issues that are being considered across many fields and disciplines. Internal curation of large datasets may be expensive for research groups, and many journals do not have resources to host the datasets. However, examples of falsification, fabrication, or error discussed in Chapter 7 illustrate that posting of data and code can enable researchers to identify problematic conclusions and correct the research record.
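As one small, hypothetical illustration of transparent data provision (our sketch, not a committee recommendation), a checksum manifest deposited alongside a dataset lets any later reader verify that the files they obtain are byte-for-byte the files the authors analyzed. The directory and file names below are assumptions for the example.

    import datetime
    import hashlib
    import json
    import pathlib

    def build_manifest(data_dir):
        """Record a SHA-256 digest for every file under data_dir."""
        files = {}
        for path in sorted(pathlib.Path(data_dir).rglob("*")):
            if path.is_file():
                files[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
        return {
            "created": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "files": files,
        }

    # Hypothetical usage: write the manifest next to a "data" directory before
    # depositing both in a repository, so the deposit can be verified later.
    manifest = build_manifest("data")
    pathlib.Path("manifest.sha256.json").write_text(json.dumps(manifest, indent=2))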

Researchers need to ensure that appropriate statistical and analytical expertise is utilized in the project. The use and misuse of statistical tests such as p -values are current topics of discussion in a number of fields; the American Statistical Association recently released a statement listing six principles on the misconceptions and misuse of the p -value ( Wasserstein and Lazar, 2016 ). Researchers should avoid detrimental practices such as p -hacking, in which statistical and analytical parameters are adjusted until a desired result is achieved ( Nuzzo, 2014 ). Supervisors should stay close to the primary data even if they lack the technical skills to generate those data themselves.
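The inflation that p-hacking produces can be demonstrated directly by simulation. The following sketch, added here for illustration and not drawn from the report, compares the false-positive rate of a single pre-specified test against a "hacked" analysis that tries several post hoc variants on pure-noise data and keeps the smallest p-value; the particular variants are arbitrary examples of the practice.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, n_per_group = 2000, 30
    false_pos_honest, false_pos_hacked = 0, 0

    for _ in range(n_sims):
        a = rng.normal(size=n_per_group)
        b = rng.normal(size=n_per_group)  # same distribution: no true effect
        # Honest analysis: one test specified before seeing the data.
        if stats.ttest_ind(a, b).pvalue < 0.05:
            false_pos_honest += 1
        # "Hacked" analysis: try several post hoc variants, keep the best.
        variants = [
            stats.ttest_ind(a, b).pvalue,
            stats.ttest_ind(a[:20], b[:20]).pvalue,      # early stopping
            stats.ttest_ind(a[a < 2], b[b < 2]).pvalue,  # ad hoc trimming
            stats.mannwhitneyu(a, b).pvalue,             # swap the test
        ]
        if min(variants) < 0.05:
            false_pos_hacked += 1

    print(f"honest false-positive rate: {false_pos_honest / n_sims:.3f}")
    print(f"hacked false-positive rate: {false_pos_hacked / n_sims:.3f}")

Running this yields an honest rate near the nominal 5 percent, while the hacked rate is substantially higher, which is precisely why analytic choices should be specified before the data are examined.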

Best Practice R-3: Authorship and Communication. Follow general and disciplinary authorship standards when communicating through formal publications. Describe the roles and contributions of all authors. Be transparent when communicating with researchers from other disciplines, policy makers, and the broader public

Decisions about authorship of research publications are an important aspect of the responsible conduct of research. Although many individuals other than those who conceive of and implement a research project typically contribute to the production of successful research, authors are considered to be the person or persons who made a significant and substantial intellectual contribution to the production and presentation of the new knowledge being published. 1

As discussed in Chapter 3 and Chapter 7 , authorship is also the “coin of the realm” in science—the mechanism through which scientists receive credit for intellectual work. Authorship, particularly lead authorship, carries with it credit that affects careers and promotions. Because of this, authorship often becomes a fraught topic and can invite misconduct and detrimental research practices.

In addition, authorship carries responsibilities. For example, authors are responsible for the veracity and reliability of the reported results, for ensuring that the research was performed according to relevant laws and regulations, for interacting with journal editors and staff during the publication process, and for defending the work following publication (Smith and Williams-Jones, 2012). The article or paper presented by researchers “should be complete, and, where applicable, include negative findings and results contrary to their hypotheses” (NHMRC-ARC-UA, 2007). Publication bias, selective reporting, and poor reporting are serious problems that damage the research record. Authors also need to follow discipline-specific reporting guidelines, such as those covering the registration and reporting of clinical trial results. They are responsible for ensuring that previous work is appropriately and accurately cited. In all fields, responsible authorship involves avoiding detrimental practices such as honorary authorship and duplicate publication, as well as the affirmative responsibility to ensure that all who deserve credit on a paper receive it.

As discussed in Chapter 3 , authorship practices vary among disciplines and within research groups and may change over time; professional and journal standards and policies on authorship also vary (journal best practices are discussed below). Technological changes in how research is done and the prevalence of multidisciplinary and even global research teams have raised challenges for authors, such as an increase in the number of authors per paper and more limited knowledge by all authors of the methods used by other contributors.

Authors should clearly identify which portion of a research project each coauthor performed (see the section on best practices for journals below). Even in cases where this is not required, this information can help readers interpret the work and may also avoid blanket condemnations if the work is later shown to be flawed. If responsibility for an article or other communication is not specified as clearly as possible, all authors can be held accountable for its contents.

Researchers may also need to communicate with specialists from other fields in interdisciplinary studies or may have opportunities to explain their work to policy makers and the broader public. Similar standards of accuracy and transparency should apply. For example, “any attempt to exaggerate the importance and practical applicability of the findings should be resisted” ( ESF-ALLEA, 2011 ). The authors of a research article or other communication have a responsibility to ensure that press releases and other institutional documents describing that work are accurate and unexaggerated. Researchers should work with their institutional media affairs office to avoid unfounded claims and reveal both the positive and the negative aspects of research results. Researchers should also become more sophisticated in distinguishing between reporting research results and advocating policy positions related to their research. Issues of advocacy can be complex, and no hard-and-fast rules cover all situations.

Best Practice R-4: Mentoring and Supervision. Know your responsibilities as a mentor and supervisor. Be a helpful, effective mentor and supervisor to early-career researchers

The 1992 report Responsible Science defines a mentor as “that person directly responsible for the professional development of a research trainee” ( NAS-NAE-IOM, 1992 ). In this report, the term supervisor is used to describe the person directly responsible for the professional development of a trainee. Here, the term mentor refers to a broader group that includes supervisors as well as other more senior researchers who are in a position to contribute to the professional development of trainees and junior researchers. Professional development encompasses the development of technical expertise, socialization in research practices, and adherence to the highest standards of research integrity. The 2002 report Integrity in Scientific Research: Creating an Environment That Promotes Responsible Conduct outlines the responsibilities of supervisors as including “a commitment to continuous education and guidance of trainees, appropriate delegation of responsibility, regular review and constructive appraisal of trainees, fair attribution of accomplishment and authorship, and career guidance, as well as help in creating opportunities for employment and funding” ( IOM-NRC, 2002 ).

Since supervisor-trainee relationships are often complex, it is important that supervisors and trainees clarify their mutual expectations for the relationship (NAS-NAE-IOM, 2009b). Conflicts can sometimes occur over the time and opportunities allocated to trainees, credit for and ownership of results, and other issues related to research practices. Supervisors should make sure that trainees are aware of the risks of misrepresenting data, should be aware that subordinates may feel overzealous pressure to meet expectations, and should recognize that periods of heightened stress may impair trainees' judgment.

In the context of this report, ensuring that trainees understand and follow best practices in research is an important element of mentorship. This includes checking the work of trainees, particularly work that is being submitted for publication. In several of the individual cases that the committee examined during the study, failures and deficiencies in mentorship and supervision were factors contributing to significant delays in addressing serious problems with data underlying reported results.

Supervisors and other mentors should ensure that trainees receive high-quality instruction in, and appropriate socialization into, the responsible conduct of research. This may involve incorporating activities within the lab as well as institutional and other instruction. A potentially useful practice is to set aside portions of group meetings to discuss issues of research integrity, including group analysis of current examples of detrimental practices. Supervisors should be certain that all persons working under them understand their commitment to responsible research and their expectation for responsible conduct. Students, researchers, and staff should be encouraged to be open about results. Constructive skepticism serves a valuable function in research. “Show me the data” is always a legitimate request. Supervisors should cultivate the expectation that others in the group may be asked to confirm complex experiments or unexpected findings, not as a check on the individual competence or integrity of research group members, but as needed to ensure validity.

In addition to the formal supervisory relationships discussed above, mentoring occurs informally in many cases. Individuals may have multiple mentors, both formal and informal, and all have some responsibility for the appropriate socialization of those they mentor. Mentors should be sensitive to the challenges that mentees belonging to underrepresented groups may be facing. Mentors need to avoid the reality and even the appearance of exploitative practices, such as asking graduate students to babysit or house sit. Although the responsibility of avoiding hypercompetitive research environments characterized by intense resource competition lies mainly with institutions and sponsors, as described below, individual supervisors should do what they can to prevent competitiveness in the lab from reaching the point where it becomes harmful.

Best Practice R-5: Peer Review. Strive to be a fair and effective peer reviewer who provides careful reviews, maintains confidentiality, and recognizes and discloses conflicts of interest

Peer reviewers of grants and journal submissions provide the guiding and corrective machinery that enables the research enterprise to progress. As in other contexts of their work, researchers who serve as reviewers are expected to be honest, objective, and accountable and to preserve confidentiality and protect the ideas of others during the review process. In the context of grant review, peer reviewers are responsible for determining whether a research direction is worthy of funding based on novelty, importance, available data, and whether the proposed methods are suitable for the investigation. For journal submissions, the reviewer's responsibility is to carefully evaluate the experimental design, presented data, and analysis techniques to determine whether they cumulatively support the presented interpretation and conclusions from the data.

Potential reviewers should completely disclose conflicts of interest to the program office for a grant proposal or to the editor for a journal submission. Upholding fairness as a research value, as discussed in Chapter 2 , requires that reviewers be aware of their own biases so as to avoid critiques that are motivated by a desire to defend their own work. The program officer or editor has the responsibility to decide whether a bias or conflict of interest affects a potential reviewer's eligibility.

Reviewers also need to uphold the confidentiality of the review process by not sharing materials or ideas from grants or manuscripts under review. Appropriating ideas from grants or manuscripts under review is a form of plagiarism.

Best Practice R-6: Research Compliance. Understand and comply with relevant institutional and governmental regulations governing research, including those specific to a given discipline or field

Research often involves risks to human subjects and animals, to those in the lab, or to those in the buildings where the research takes place. Because research has a potential for harm, it is regulated by local, state, or federal laws, and human and animal studies are governed by Institutional Review Board and Institutional Animal Care and Use Committee rules, respectively, and regulations imposed by the federal government. Failure to comply with governing rules and regulations can lead to civil—or in some cases criminal—penalties for researchers. Moreover, compliance failures undermine public confidence in the researcher, the institution, the field, and the broader research enterprise.

Researchers have the responsibility to determine what the governing rules are for a designed experiment before the work is conducted. Most institutions have offices that specialize in safety, human experiments, and animal use. These offices should be consulted fully to ensure safety—of the researchers and participants in the experiment or the larger community—and that all governing rules and regulations are satisfied. In some fields, researchers also need to be aware of the risks inherent in doing science, understand the possibilities of harmful consequences that could arise accidentally or through misuse, and take steps to reduce those risks as much as possible.

Finally, researchers need to disclose, at the appropriate time and for review by institutional officials, personal financial interests that might reasonably appear to be related to the research. In many cases, the conflict can be managed through the actions of the researchers involved and through oversight. In some cases, the conflict may not be manageable and must be eliminated, or the project may have to be abandoned. Disclosing personal financial interests may lead readers to discount the credibility of the results, but honesty and objectivity require that such interests be listed so that others can judge their possible effects.

A best practices checklist for researchers is provided in Box 9-1 .

Box 9-1. Best Practices Checklist for Researchers.

Research Institutions

As the employers of researchers and the institutional stewards of financial and other resources that support research, universities and other research institutions in the United States have a number of responsibilities (both formal and informal) for ensuring integrity. According to the Institute of Medicine and the National Research Council, “Each research institution should develop and implement a comprehensive program designed to promote integrity in research, using multiple approaches adapted to the specific environments within each institution.” ( IOM-NRC, 2002 ) Specific responsibilities include the maintenance of policies and procedures to investigate and address research misconduct—including the responsibility to notify the appropriate federal agency of misconduct investigations involving that agency's funds—and the provision of educational and training programs for students and faculty to raise awareness of research integrity ( IOM-NRC, 2002 ; NAS-NAE-IOM, 1992 ; NSF-OIG, 2013 ; OSTP, 2000 ).

In addition, research institutions carry a range of research-related legal and regulatory compliance responsibilities, such as administering regulations governing research on human subjects and laboratory animals; acting as stewards, as required, of data from federally funded research (see NAS-NAE-IOM, 2009a); enforcing environmental and hazardous substance regulations; ensuring proper financial accounting of research funds; and implementing general workplace laws and regulations in areas such as discrimination and harassment. These myriad, often overlapping regulations present many challenges. Institutional leaders must take a role in building a responsible compliance environment, one designed to facilitate and support a quality working and learning environment for all.

Some specific policies and practices of research institutions may differ according to whether they are controlled and operated by public or private universities, other nonprofit entities, for-profit companies, or government bodies. Presentations to the committee by corporate representatives indicated that some multinational companies take a very thorough and systematic approach to training and mentoring young researchers ( Williams, 2012 ).

As experience has accumulated over the past several decades, new perspectives have appeared regarding how research institutions can best foster research integrity. For example, the practice of assessing the climate for research integrity in an institution has emerged and is becoming more widely adopted, and its benefits are becoming more clearly understood ( CGS, 2012 ; IOM-NRC, 2002 ). Around the world, more attention is being paid to the role of universities and research institutions in ensuring integrity ( ESF-ALLEA, 2011 ; UUK, 2012 ). The responsibilities of universities and research institutions may change over time due to the challenges raised by new technologies and collaborations ( IOM, 2009 , 2012 ).

Best Practice I-1: Management. Integrate research integrity considerations into overall approaches to research, education, and institutional management

Changes in the funding, structure, and organization of research in the United States and the possible effects of these changes on the incentives of researchers to uphold best practices are discussed in several places in this report. In fulfilling their responsibilities to create an environment where the fundamental values of research are valued and reinforced, institutions need to consider organizational and management issues that have not traditionally been associated with research integrity or seen as organizational responsibilities. In this regard, institutional leaders and others with research administration responsibilities need to demonstrate, through their approach to oversight and implementation of policies, that fostering research integrity is a central priority that supports the quality of research. It would be a mistake for institutional and faculty leaders to observe that the institution has basic policies and administrative procedures in place and assume that research integrity issues do not require their attention.

While this is a broad exhortation compared with other best practices presented here, the committee identified several areas for particular focus during the course of the study. To begin, institutions should explicitly evaluate mentoring as part of their evaluation of faculty. Mentoring and supervision of young researchers at U.S. institutions needs systematic attention and improvement. A review of closed Office of Research Integrity (ORI) cases found that almost three-quarters of supervisors had not reviewed source data with trainees who committed misconduct and two-thirds had not set standards for responsible conduct ( Wright et al., 2008 ). Another recent survey of research faculty found that less than a quarter have had opportunities to participate in faculty training to be a better mentor, advisor, or research teacher, and about one-third of faculty did not or could not remember whether they had guidelines related to their responsibilities to PhD students ( Titus and Ballou, 2014 ). Recent work by the InterAcademy Partnership indicates that the need for improved mentoring of young researchers is a global issue ( IAP, 2016 ).

Another imperative is to regularly communicate relevant institutional policies—such as the definition of research misconduct—as well as the rights and responsibilities of researchers directly to young researchers. Compacts between institutions and postdocs, students, and faculty are one mechanism for such communication. The American Association of Medical Colleges has developed several sample compacts, including one between graduate students and their research advisors and one between postdocs and their mentors ( AAMC, 2006 , 2008 ). These are documents of several pages that include bullet points outlining the responsibilities of both parties, such as the responsibility of graduate students to seek regular feedback and the responsibility of graduate advisors not to require students to perform duties unrelated to training and professional development. A particularly important and sometimes vulnerable group is postdocs ( Phillips, 2012 ). Postdocs are formally trainees but are often called upon to be mentors of students or younger postdocs. A 2005 survey of postdocs found that less than half of respondents were aware of institutional policies toward determining authorship, defining misconduct, resolving grievances, or determining the ownership of intellectual property ( Davis, 2005 ).

A related responsibility is for institutions to collect data on career outcomes for recent science and engineering graduate cohorts and postdocs and to provide these data to incoming students and trainees at the front end of their training programs so they are better informed. Providing this information is one indication that the institutions have the students' best interests at heart. To the extent that students have a realistic perspective of their career prospects and the likelihood of being able to pursue research as a career, they will be better equipped to make decisions about how to proceed with their graduate training.

Further, institutions might benefit from keeping track of such organizational and funding issues as the number and proportion of soft-money positions in various departments, as well as trends. As explored elsewhere in the report, the combination of increasing emphasis on soft-money positions and declining success rates for grant applications at agencies such as the National Institutes of Health may have a negative impact on researcher incentives to uphold high standards.

Finally, the committee has noted a trend toward institutions and researchers undertaking more aggressive public relations efforts on behalf of their research activities. Institutions and researchers should impose careful quality control on such efforts. One recent study indicates that the quality of media reporting on discoveries is directly related to the quality of press releases (Schwartz et al., 2011). Well-known cases over the years of aggressively promoted results that turned out to be based on fabricated data, such as the Hwang stem cell case, or were otherwise irreproducible, such as the Fleischmann-Pons “cold fusion” discovery, provide cautionary tales (Appendix D; Goodstein, 2010). Overhyping may ultimately be both a cause and a consequence of a “winner-take-all” culture in research in which disincentives to cutting corners, or even worse behaviors, are weakened over time (Freeman and Gelber, 2006; Freeman et al., 2001a, b). It may also damage public trust in researchers and in the research enterprise.

Best Practice I-2: Assessment. Perform regular assessments of the climate for research integrity at the institutional and department levels and address weaknesses that are identified

A baseline expectation is that institutions should create a climate for research integrity and institute supportive policies and practices. The 2002 report Integrity in Scientific Research explains that research organizations “engage in activities that help establish an internal climate and organizational culture that are either supportive of or ambivalent toward the responsible conduct of research” ( IOM-NRC, 2002 ). That report recommended that institutions utilize ongoing self-assessment and peer review in order to evaluate their climate for research integrity and guide continuous improvement. At that time, instruments for that purpose had not been developed.

In recent years, an instrument to assess the organizational climate for research integrity has been developed and validated ( Crain et al., 2013 ; Martinson et al., 2013 ). A recent Council of Graduate Schools (CGS) project worked with a group of universities to integrate “research ethics and the responsible conduct of research (RCR) into graduate education” ( CGS, 2012 ). The participating universities utilized climate assessment as an important tool to identify areas for improvement and to track progress. One participating institution reports that the data produced by the assessment tool helped efforts to improve research integrity approaches gain traction among the faculty ( May, 2013 ).

Institutions can also assess the effectiveness of their own efforts to promote research integrity. Are allegations or concerns addressed in an appropriate and timely way? Are policies related to transparency and data sharing well understood and followed?

Strengthening education and training in the responsible conduct of research, discussed below, is an important approach to addressing issues uncovered in assessment exercises and improving local research climates. As illustrated by several of the cases discussed in Appendix D and in other parts of the report, tolerance of detrimental research practices at the laboratory or department level can lead to a vicious circle in which young researchers perpetuate these practices in the belief that they are behaving appropriately. In response, institutions might look for other proactive approaches, such as placing succinct posters on bulletin boards to encourage best practices. ORI has produced an infographic on how research supervisors can foster integrity that provides an example of the sort of information that might be communicated (ORI, 2016). The Singapore Statement on Research Integrity (2010), produced by the Second World Conference on Research Integrity, is also available as a single-page PDF. Such posters would perhaps be more effective if they were locally produced by labs or departments.

Best Practice I-3: Performing Research Misconduct Investigations. Perform regular inventories of institutional policies, procedures, and capabilities for investigating and addressing research misconduct and address weaknesses that are identified

Universities and other research institutions are responsible for undertaking fair, thorough, and timely investigations into allegations of research misconduct. A comprehensive assessment of how U.S. research institutions are performing in the area of addressing research misconduct is not possible, because most investigation results and reports are never made public due to confidentiality rules. Over the course of the study, experts who briefed the committee pointed to considerable unevenness in the capabilities of universities to investigate and address allegations of research misconduct ( Garfinkel, 2012 ). In addition, the examples described in other parts of the report, particularly Chapter 7 and Appendix D , illustrate that even the most highly regarded institutions can fail in the performance of basic tasks, such as following appropriate investigation procedures, ensuring that internal committees have the right knowledge and expertise, and ensuring that investigation processes avoid the pitfalls that can result from institutional conflicts of interest.

Regular inventories of institutional policies, procedures, and capabilities can help to ensure that the minimum requirements needed to comply with existing regulations are met, but universities should aim for more than compliance. The requirements of ORI and the National Science Foundation (NSF) should be a floor, not a ceiling.

Ensuring that institutions have the appropriate policies and resources in place to address research misconduct allegations starts with the support and involvement of institutional leaders. Often, concerns can be addressed and questions can be answered at an early stage, obviating the need for formal investigations ( Gunsalus, 1998b ).

Elements that should be part of institutional capabilities include a trained Research Integrity Officer or other professional who can act on allegations, involvement of the institution's general counsel's office, clear policies and procedures that are understood and followed, and support from institutional leadership. In research universities, faculty leaders play a critical role in the effective communication and implementation of these policies and procedures. Institutions should also protect good-faith whistleblowers and prevent negative career consequences for young researchers who become whistleblowers. This demonstrates the institution's moral commitment to its students and employees. As illustrated by the Goodwin case, young researchers who do the right thing by raising concerns or making allegations against superiors may find that their research careers are effectively over, even when they uncover misconduct.

Maintaining confidentiality during an investigation, protecting the accused, and minimizing the negative consequences of investigations for those who are cleared are also essential. Institutions need to communicate with federal agencies such as ORI and the NSF Office of Inspector General, sponsors, and journals, as appropriate, to ensure that these entities can fulfill their responsibilities related to the stewardship of funds and correcting the research record.

Institutions also need to have policies and mechanisms in place that allow them to call in external sources of expertise, particularly when their financial, reputational, or other interests may be affected by an allegation. Incorporating external members on the institutional committees that undertake research misconduct investigations is one mechanism for accomplishing this. In some particularly serious or problematic cases, an institution may decide that all members of such a committee should come from outside the institution, although considerations of logistics and cost would make it difficult to institute this as a normal practice. The University of Illinois requires that all investigation committees include at least one external member (University of Illinois, 2009). In addition, institutions may ask external experts to review the mission statements of investigation committees at the start of the process and the draft reports of committees to help ensure that the appropriate questions and issues are addressed. It is not clear how common external review currently is.

Regular evaluations of capabilities, incorporating perspectives external to the institution, can also help institutions improve their systems and processes over time. For example, in addition to designated institutional points of contact for allegations of misconduct, such as Research Integrity Officers, some institutions have found additional resources, such as ombudsmen and hotlines, to be helpful. In managing a system with multiple entry points, it is necessary to clearly define roles and coordinate responses so that those who are bringing their concerns to the institution do not receive incorrect or conflicting advice. Mediation mechanisms can be put in place for disputes that arise between colleagues or between subordinates and superiors. Ideally, enhanced communication and related interventions will allow many issues and concerns to be addressed before research misconduct occurs. Ensuring that this information is widely disseminated through posting on bulletin boards in labs and through other mechanisms is also important.

Best Practice I-4: Training and Education. Strive for continuous improvement in RCR training and education

The development of RCR training and education programs and related issues—including funder mandates, content, delivery mechanisms, and assessment—are covered in detail in Chapter 10 . The 1992 report Responsible Science noted that institutional RCR education programs were not very common at that time and that the research enterprise was ambivalent about such programs ( NAS-NAE-IOM, 1992 ). Although there is still much to be learned about the effectiveness of particular educational approaches, recognition that institutions have clear responsibilities has grown over time, both in the United States and around the world. The report Integrity in Scientific Research recommended that “institutions should implement effective educational programs that enhance the responsible conduct of research” ( IOM-NRC, 2002 ). The Australian Code for the Responsible Conduct of Research states that

Each institution must provide induction and training for all research trainees. This training should cover research ethics, occupational health and safety, and environmental protection, as well as technical matters appropriate to the discipline. ( NHMRC-ARC-UA, 2007 )

As is the case with institutional policies and resources to address allegations of research misconduct, the formal requirements of funders should constitute the floor, not the ceiling, for institutional efforts. NIH mandates participation in RCR education for all persons receiving NIH support. This requirement includes instruction in nine core areas: (1) data acquisition, management, sharing, and ownership; (2) mentor/trainee responsibilities; (3) publication practices and responsible authorship; (4) peer review; (5) collaborative science; (6) human subjects; (7) research involving animals; (8) research misconduct; and (9) conflict of interest and commitment (Steneck, 2004). A 2009 update on the Requirement for Instruction in the Responsible Conduct of Research requires RCR training to be provided in person, noting that online instruction is a helpful supplement but is insufficient on its own (NIH, 2009). The guidance suggests at least a semester-long series of RCR instruction, taught by faculty on a rotating basis to ensure full faculty participation, and recommends that instruction recur through the different levels of a scientist's career (NIH, 2009). The CGS project discussed above produced a number of possible approaches for institutions aiming to improve RCR education, such as engaging faculty in developing discipline-specific content, holding lunchtime workshops for graduate students, integrating RCR content into courses, and developing courses that escalate in complexity (CGS, 2012). The Integrity in Scientific Research report also recommends that RCR instruction be provided by “faculty who are actively engaged in research related to that of the trainees” (IOM-NRC, 2002). The CGS project recommended that institutional leaders demonstrate engagement in RCR education through public endorsement from the university president and by assembling a steering committee of institutional leaders and a project director to oversee a plan to integrate RCR education into the curriculum (CGS, 2012).

Institutions can participate in and take advantage of other RCR education development efforts. Recently, RCR training has shifted emphasis from the traditional focus on imparting knowledge, specifically of regulations and compliance requirements, toward the potential value of imparting skills in ethical decision making (see Appendix C ). The effectiveness of techniques such as team-based learning is also being explored ( McCormack and Garvan, 2014 ). An organization involved in RCR is the National Postdoctoral Association, which oversaw a project aimed at developing RCR educational approaches specifically for postdocs ( NPA, 2013 ).

Box 9-2 provides a best practices checklist for research institutions.

Box 9-2. Best Practices Checklist for Research Institutions.

Journals and Other Scholarly Communicators

This section and the associated practices are addressed to journals—editors, governing bodies, and publishers—and other individuals and groups involved with scientific publishing and other forms of scholarly communication, including university librarians, digital archivists, and academic presses.

The basics of responsible publishing include ensuring that a journal's existing rules and guidelines have been followed, such as those related to data sharing and research involving human subjects ( Gustafsson et al., 2006 ). Editors are also responsible for the scientific quality of the journal. Journals should clearly articulate their publication criteria and evaluate submissions based on those criteria. They should provide the authors of proposed publications with a fair and full account of reviewers' comments and ensure transparent communication in the event of disputes, questions, or difficulties in the publication process. Journals should make their principles and processes visible to authors, readers, librarians, and peer reviewers. As an example, publishers should disclose sources of funding or other issues that may affect the choice of work to disseminate.

The 1992 report Responsible Science mentions scientific journals and editors and contains a general recommendation that journals and societies support research integrity. Journal concerns and responsibilities related to research integrity have grown and shifted in recent years, as article retractions have increased, a series of high-profile cases of fabricated research published in prominent journals has come to light, and relatively new challenges such as image manipulation have prompted journals to develop new policies and approaches. Because detecting fabrication often requires specialized technical and analytical tools, it is unlikely to be uncovered in the normal peer review process (i.e., before publication).

Although it is sometimes assumed that journal peer review processes are or should be effective mechanisms for uncovering fabricated data and other research misconduct, history and recent experience indicate that this is not the case ( Ioannidis, 2012 ; Stroebe et al., 2012 ). Most misconduct is uncovered through revelations by whistleblowers or by other scientists who have tried and failed to replicate fabricated research.

Over the years, a number of individual journals and publishing groups, journal associations, and other groups have developed ethical codes and good practice guidelines for scientific publishing ( COPE, 2011 ; CSE, 2012b ; ICMJE, 2013 ; SfN, 2010 ). Some publication executives and boards regard the Committee on Publication Ethics (COPE) principles and recommendations as directive and more or less adhere to them. Others regard them as informative and suggestive while holding independent views on responsible publishing that occasionally vary from COPE's advice. COPE promulgates a mandatory code of conduct for journal editors and a more aspirational set of best practices. COPE has also published a number of guidelines and monographs intended to assist editors and publishers in the course of their work.

Digital innovation has been a major source of disruption in science, engineering, technology, and medical research and publishing, and this has implications for responsible research. Predicting the directions and extent of progress in information technologies is difficult, yet principles and best practices in publishing should be flexible enough to be applied as innovations in research practice arise. The Society for Neuroscience's recently revised ethics policy and guidelines for responsible conduct in scientific publishing are useful examples ( SfN, 2010 ). The set of guidelines put forward for authors is notable for the detailed specifications given for describing the intellectual contribution of authors.

Some journals have introduced technical checks to detect plagiarism and image manipulation. These tools have been useful in detecting misconduct and detrimental practices in proposed papers. In addition, a recent trend among biomedical journals has been to hire ethics officers. It should be noted that these sorts of steps contribute to rising costs that are passed on to university libraries, other subscribers, and, in the “open access” arena, the authors of research. Still, these costs need to be balanced against the costs incurred in editorial time when a journal has to retract a paper.

Best Practice J-1: Practicing Transparency. Practice transparency in journal policies and practices related to research integrity, including publication of retractions and corrections and the reasons for them

Openness is fundamental to the success of the entire chain of processes and relationships involved in scholarly communication. This principle translates directly into best practices in publishing, with just a few exceptions. The obvious exception is peer review, in which the identity of reviewers has traditionally been hidden to minimize undue influence on them, before or after publication, and thus to create an environment that enables direct and frank critical commentary for authors and editors. As discussed in Chapter 3, improving peer review policies and practices and considering other models—such as unblinded review—are issues currently facing journals and disciplines.

Following this best practice begins with maintaining an up-to-date set of author instructions, as well as ethical policies for authors, reviewers, and editors. The policies should include procedures to be followed when allegations of misconduct arise. Journals should communicate retractions (including the reasons for retraction, or why a reason cannot be provided), corrections, clarifications, and apologies promptly and openly to ensure that the published record of research is as free of bias, error, and falsehood as possible. New means of electronic communication provide new and potentially powerful ways of correcting the research literature. There is great value in putting retractions in the place of the target article and in tables of contents. Metadata (structured, machine-readable information describing each notice) associating each retraction or correction with the target article should be included to support ongoing observation and analysis.
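As an illustration of the kind of machine-readable linkage this implies, the sketch below shows a minimal retraction-metadata record in Python. The field names are hypothetical (loosely inspired by the update metadata that registries such as Crossref support), not a prescribed schema, and the DOIs are placeholders.

```python
import json

# Hypothetical, minimal metadata record for a retraction notice.
# Field names and DOIs are illustrative placeholders, not a
# prescribed schema.
retraction_record = {
    "type": "retraction",
    "notice_doi": "10.1234/example.notice",    # DOI of the retraction notice
    "target_doi": "10.1234/example.article",   # DOI of the retracted article
    "date": "2016-09-01",
    "reason": "image duplication affecting Figures 2 and 3",
    "initiated_by": ["authors", "editor"],
}

# Embedding such a record with both the notice and the target article
# lets readers and analysts trace corrections to the research record.
print(json.dumps(retraction_record, indent=2))
```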

In addition, data and code that support an article should be published with the article (or chapter or book) or made otherwise available (e.g., through linking), both in the article's original position in an issue (or edition) and in a separate issue- or title-level section with its own explicit entry in the table of contents. Publishers and editors should provide for postpublication review and commentary attached to scientific, technical, and medical articles. Such commentary can be helpful in uncovering problems with published work and in exploring promising areas for research that would confirm or extend the reported results.

Journals should have policies in place to prevent conflicts of interest on the part of editorial staff from affecting editorial decisions. One way of handling this would be for editorial staff to provide conflicts of interest in narrative form in articles and as metadata for systematic observation and analysis. Alternatively, the journal might define what constitutes a conflict of interest for any editor, and then state that if an editor has a conflict of interest with any of the authors of a paper, he or she is excluded from handling the paper. Journals would have on hand declarations from their editors that are updated annually or more often as circumstances change. Addressing conflicts of interest of other participants in the publication process is covered below.

Throughout the publishing process, journals should negotiate fairly and as transparently as possible in author, author-reviewer, and author-reader disputes.

While not as directly supportive of research integrity as the other steps outlined above, journals contribute to the effective functioning of the research enterprise by providing open access to publications, perhaps after an embargo period so as not to interfere with a publisher's business viability.

Best Practice J-2: Requiring Openness. Require openness from authors regarding public access to data, code, and other information necessary to verify or reproduce reported results. Require openness from authors and peer reviewers regarding funding sources and conflicts of interest

As described in other parts of this report, including Chapter 7, requiring authors to share data and code for purposes of verification, replication, and reuse is an important step that the research enterprise can take to help ensure research integrity. Journals are in a powerful position to implement this step, and some are developing new policies and procedures aimed at ensuring access to data and code (Nature, 2013). Although making data available with the article is the traditional approach in many disciplines, linking to a specialized database or repository will likely be the preferred way to provide access to data in most cases. One example of efforts to expand the availability of data is a 2016 proposal by the International Committee of Medical Journal Editors that, as a condition of consideration for publication, authors be required to commit to publishing the “deidentified individual-patient data underlying the results” of clinical trial research within 6 months of publication of the corresponding article, for reproducibility purposes (Taichman et al., 2016).

The data to be made available should include outlier data and negative results if appropriate. Alterations to images should be specified. In cases where regulatory, legal, or technological constraints prevent authors from providing full access to data, an explanation should be published along with the paper.

Journals should work with sponsors, authors, and research institutions to ensure long-term access to data, code, and other information supplementary to the article. Archiving of articles and supplementary information by third parties is the ultimate goal, although securing the necessary resources and developing the appropriate mechanisms remain challenging tasks in some fields and disciplines.

It is also important for full method descriptions to be included in every publication. Currently, references to method sections in previously published work are common in some fields, but this may cause ambiguity as to what was actually done. With the availability of electronic supplements, there is no reason why full methods cannot be included, even if this means reprinting what the same author published previously. Good practice should not be discouraged by concerns about self-duplication if this increases transparency and reduces ambiguity.

Financial conflicts of interests, other relevant financial relationships, and relevant nonfinancial interests should be identified by all authors and included in print and as metadata (PLOS Medicine Editors, 2008). For example, “publishing relevant competing interests for all contributors and publishing corrections if competing interests are revealed after publication” is a best practice listed in COPE's guidelines ( COPE, 2011 ). This disclosure should include an explicit citation of support from funders, whether corporate or not for profit.

Journals should also take steps to safeguard the integrity of the peer review process. COPE's guidelines for peer reviewers include submitting a declaration of potential competing interests, respecting the confidentiality of the process, and not intentionally delaying the process ( Hames, 2013 ). Journals might ask reviewers to explicitly commit to these guidelines by signing a statement.

Best Practice J-3: Authorship Contributions. Require that the contributions and roles of all authors be described

Authors are the researchers who have contributed significantly to a work and are listed in its byline. Authorship determines who receives credit for the work and fixes responsibility if mistakes or misconduct are uncovered. While guidance on authorship is provided by journals, institutions, societies, and other groups, specific practices vary by discipline. Although detrimental authorship practices other than plagiarism have not been included in the U.S. government's definition of research misconduct, practices such as honorary authorship and unacknowledged ghost authorship, as well as authorship disputes, pose challenges to research integrity. The Council of Science Editors points out that “problems with authorship are not uncommon and can threaten the integrity of scientific research” (CSE, 2012b). A recent review of research on authorship across all fields found that 29 percent of researchers in several separate studies reported that they or others they know had experiences involving the misuse of authorship (a figure that could be inflated by multiple reports of the same behavior in some of the reviewed studies) (Marušić et al., 2011).

In an environment of increasing collaboration across institutions and borders, it may be more difficult to determine who is responsible for mistakes or fabricated work. In some cases of fabricated or falsified research, senior researchers have claimed that they were merely honorary authors and therefore were not responsible for the integrity of the reported work.

These issues pose challenges to journals, which have responded by paying increasing attention to authorship. One journal practice that has become fairly widespread is to require authors to describe their individual contributions, which are published in a designated place in the article. Journals such as the Lancet began adopting this practice in the 1990s ( Yank and Rennie, 1999 ). The Nature Publishing Group journals, which had requested that authors provide contribution disclosures beginning in 1999, made them mandatory in 2009 ( Nature , 2009 ). At the same time, Nature had considered requiring corresponding authors to sign a statement that they had taken some integrity assurance steps, but there was significant skepticism about this proposal.

Most current contribution disclosures tend to be fairly broad. For example, the Proceedings of the National Academy of Sciences provides an example list of contributions that includes research design, research performance, contribution of new reagents or analytic tools, data analysis, and writing ( PNAS, 2013 ). Advances in technology hold out the possibility that such contribution disclosures can become more detailed and useful in the future, providing the underlying tools for researchers to maintain up-to-date, verified accounts of their work ( Frische, 2012 ).
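One direction such tools could take is sketched below: a machine-readable contribution record using role names drawn from the CRediT (Contributor Roles Taxonomy) vocabulary, which a number of publishers have adopted. The record structure, author names, and ORCID values are hypothetical illustrations, not a journal-mandated schema.

```python
# Hypothetical machine-readable author-contribution record. The role
# names come from the CRediT taxonomy; everything else (structure,
# names, ORCID iDs) is an illustrative placeholder.
contributions = [
    {"author": "A. Researcher",
     "orcid": "0000-0000-0000-0001",
     "roles": ["Conceptualization", "Methodology", "Writing - original draft"]},
    {"author": "B. Analyst",
     "orcid": "0000-0000-0000-0002",
     "roles": ["Formal analysis", "Software", "Writing - review & editing"]},
]

# A journal could publish this record alongside the article and expose
# it as metadata for indexing and analysis.
for entry in contributions:
    print(f"{entry['author']}: {', '.join(entry['roles'])}")
```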

For now, journals should require contribution disclosures at as detailed a level as practical and be open to adjusting these requirements as technologies and tools evolve. For peer-reviewed papers, all authors should be identified along with the sources of funding for their work. To avoid questions of duplication, previously published materials should be identified and cited.

Best Practice J-4: Training and Education. Facilitate regular training and education in responsible publishing policies and best practices for editors, reviewers, and authors

Best practices for research institutions and mentors in RCR training and education are described above. Journals can play an important role in focused areas of RCR education as well. It is particularly important for editors to be knowledgeable about responsible publishing practices, requirements that need to be communicated to authors and reviewers, and what to do if problems arise. Some aspects of responsible writing, reviewing, and editing may not be covered in RCR training provided to graduate students. A recent review indicates that many writers, reviewers, and editors lack the necessary training to play their roles effectively, but little is known about the availability and effectiveness of such training (Galipeau et al., 2013). The Council of Science Editors, which has provided training for editors for some time, recently launched a certificate program in scholarly publication management (CSE, 2012a). A 2006 paper recommended that an international online training and accreditation program for peer reviewers be established (Benos et al., 2007).

Journals have varied capabilities and resources to encourage training or to undertake their own educational programs. They should take what steps are appropriate to their own circumstances to help ensure that authors, reviewers, and editors are well prepared to perform their tasks.

Best Practice J-5: Collaboration. Work with other journals to develop common approaches and tools to foster research integrity

As described elsewhere in this section, the work of groups such as the Committee on Publication Ethics, International Committee of Medical Journal Editors, and Council of Science Editors has been of great value to the research enterprise in developing policies, tools, and approaches to ensure research integrity. While individual journals and other scholarly communicators need to maintain the independence to adopt policies and practices that are appropriate to their circumstances, continued collective efforts by journals can contribute to improvements in standards and practices across the enterprise. Uniform policies reinforce the norms of research integrity.

Box 9-3 provides a best practices checklist for journals and other scholarly communicators.

Box 9-3. Best Practices Checklist for Journals.

Research Sponsors and Users of Research Results

Sponsors and users of research occupy particularly important positions in the research enterprise. In general, researchers and research institutions rely on funding from government and private-sector sponsors such as industry and foundations to perform their work. The incentive structures created by sponsors can have a significant influence on the motivations and behaviors of researchers and institutions. The changing environment for research funding and the resulting pressures on researchers are described in Chapter 3 and Chapter 6 . While specific recommendations to sponsors are developed in Chapter 11 , this section identifies several specific best practices that research sponsors and users of research results can adopt to ensure research integrity.

The 1992 report Responsible Science recommended several roles for government research sponsors related to integrity, including adopting a common framework of definitions of research misconduct and common policies, adopting policies and procedures that ensure appropriate and prompt responses to allegations of misconduct, and providing support for institutional efforts to discourage questionable research practices ( NAS-NAE-IOM, 1992 ). The 2002 report Integrity in Scientific Research recommended that research sponsors support work to increase understanding of the factors that influence research integrity, including monitoring and assessing those factors ( IOM-NRC, 2002 ). As discussed in Chapter 6 , the Office of Research Integrity and the National Science Foundation maintain programs to support such research.

U.S. government research sponsors such as the National Institutes of Health and the National Science Foundation have imposed several mandates and other regulatory requirements on research institutions and researchers over the past several decades covering RCR education and training. The Office of Research Integrity also requires institutions to file an assurance that they have developed and will comply with policies for addressing allegations of misconduct in Public Health Service–sponsored research that meet Public Health Service policies.

The need for research sponsors to take an active role in fostering research integrity is becoming more recognized around the world. The Irish Council for Bioethics report Recommendations for Promoting Research Integrity ( ICB, 2010 ) provides a useful overview of various approaches. The Global Research Council's Statement of Principles on Research Integrity is a succinct list of funding agency responsibilities that includes promotion of education, leading by example, and conditioning support on upholding research integrity ( GRC, 2013 ). The InterAcademy Council and InterAcademy Panel ( IAC-IAP, 2012 ) have also described the responsibilities of funding agencies in Responsible Conduct in the Global Research Enterprise: A Policy Report .

Best Practice RS-1: Research Integrity and Quality. Align funding and regulatory policies with the promotion of research integrity and research quality

Aligning funding and regulatory policies with the promotion of research integrity and research quality has several distinct aspects. For example, as described in Chapter 4, some funding agencies and regulatory bodies maintain policies on research misconduct and exercise oversight over how institutions address allegations of misconduct. Private foundations such as the Howard Hughes Medical Institute also have research misconduct policies (HHMI, 2007). As discussed earlier in this chapter, agencies require grantee institutions to provide RCR education. Funders that play these roles should ensure that their policies are clear and implemented consistently. Additional commentary on the policies and practices of U.S. government agencies is provided in Chapter 7 in support of the committee's recommendations in this area.

A second aspect of aligning policies and practices with the promotion of research integrity is to increase awareness of how funding policies affect research integrity and to make adjustments when possible and necessary. This may involve support for research that illuminates issues related to research integrity. For example, in recent years the Office of Research Integrity has responded to evidence that the institutional environment has a major impact on research integrity by supporting efforts to study, assess, and strengthen those environments. Some policy initiatives might be based on direct understanding of a situation rather than the results of sponsored research—ORI has also sought to address unevenness in institutional capacity to respond to allegations of misconduct by supporting professional training for research integrity officers.

A recent international report has pointed out that funders have a responsibility to ensure that funding policies do not cause researchers and research institutions to emphasize quantity over quality (IAC-IAP, 2012). Chapter 6 explores whether changes in the level and structure of research funding might be associated with detrimental research practices or misconduct. As explained there, this is a complex issue. Evaluating the extent of possible problems and recommending solutions are beyond the scope of this committee's task. Nevertheless, agencies may already be collecting relevant data on how changes in funding and organization are affecting research environments (NIH, 2012a). Sponsors should look for opportunities to develop evidence on the possible impacts of funding policies on the researchers and institutions they support, including impacts on integrity, and take appropriate actions. One example is the NIH policy limiting the number of publications that can be listed in the biosketch submitted with grant and cooperative agreement applications, which may help reduce incentives for researchers to maximize the number of publications (NIH, 2014).

Finally, research funders can take steps to coordinate and harmonize their activities within their own domestic contexts as well as internationally. Examples of international cooperation include NSF's participation in the Global Research Council and Organisation for Economic Co-operation and Development Working Group activities to develop common approaches to dealing with research integrity issues across member countries ( GRC, 2013 ; OECD, 2009 , 2007 ). The Fogarty International Center, part of NIH, supports capacity building in bioethics and research integrity in the developing world.

Best Practice RS-2. Data and Code. Promote access to data and code underlying publicly reported results

The importance of ensuring access to data and code for research integrity and quality is covered above with reference to journal practices and policies. Funders have important roles to play as well. The America COMPETES Reauthorization Act of 2010 called on federal agencies to ensure access to publications and data resulting from work that they support, and the Office of Science and Technology Policy began working with agencies on implementing the legislation in early 2013 (Holdren, 2013). Federal sponsors can also help cover the costs borne by researchers and institutions in making data and code available. Funders will play a critical role in supporting the development of necessary infrastructure, such as data and sample repositories, metadata standards, and applications that allow data to be deposited directly to repositories along with complete metadata. Without those efforts and tools, compliance with data deposition requirements will be low, and the ability of others to use the data for reproducibility will be hampered, as the sketch below illustrates.
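Direct-deposit tooling of the kind described above pairs each data file with machine-readable metadata and an integrity check. The following Python sketch is an illustration only: the field names are assumptions loosely modeled on common repository conventions (e.g., DataCite-style descriptive fields), not any particular repository's schema or API.

```python
# Hypothetical sketch: package a data file for repository deposit with a
# machine-readable metadata sidecar. Field names are illustrative
# assumptions, not a specific repository's schema.
import hashlib
import json
from pathlib import Path

def describe_for_deposit(data_file: str, title: str, creators: list[str],
                         license_id: str = "CC-BY-4.0") -> dict:
    """Build a metadata record for a data file, including a checksum
    so the repository and later reusers can verify file integrity."""
    path = Path(data_file)
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return {
        "title": title,
        "creators": creators,
        "license": license_id,
        "file_name": path.name,
        "size_bytes": path.stat().st_size,
        "sha256": digest,  # lets others confirm the deposited file is unchanged
    }

if __name__ == "__main__":
    Path("results.csv").write_text("trial,outcome\n1,0.42\n")  # toy data
    record = describe_for_deposit("results.csv", "Example trial outcomes",
                                  ["Doe, J."])
    Path("results.metadata.json").write_text(json.dumps(record, indent=2))
    print(json.dumps(record, indent=2))
```

A real deposit pipeline would validate the record against the target repository's schema and transmit it alongside the file; the checksum is what allows downstream users to confirm that the deposited data are the data the analysis actually used.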

Industry research sponsors also have important contributions to make in this area. Clinical trial data constitute a prominent example. Over the years, the share of clinical trials funded by industry has grown (Buchkowsky and Jewesson, 2004). At the same time, pressure has grown to make the clinical trial process more transparent through mechanisms such as public registration of all trials and encouraging the release of all results, including negative results. A recent report states that there are "compelling justifications for sharing clinical trial data to benefit society and future patients" (IOM, 2015). There is a need to ensure that data sharing is done responsibly and protects privacy. Lack of timely reporting is not solely or even primarily an issue in industry-performed or industry-sponsored work; investigators running trials at academic medical centers and trials sponsored by federal agencies and other nonindustry sources also need to improve their reporting practices (Chen et al., 2016). Still, since clinical trials are an important component of industry-sponsored research published in peer-reviewed journals, industry sponsors can make an important contribution by registering all of their trials, reporting all results in a timely way, and sharing data responsibly.

In September 2016, NIH issued a final policy to promote broad and responsible dissemination of information from NIH-funded clinical trials through ClinicalTrials.gov. Under this policy, every clinical trial funded in whole or in part by NIH is expected to be registered on ClinicalTrials.gov and to have summary results information submitted and posted in a timely manner, whether or not it is subject to section 402(j) of the Public Health Service Act (NIH, 2016).
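Compliance with registration and results-reporting expectations can be checked programmatically against the registry. The sketch below is a minimal, hedged example: it assumes the publicly documented ClinicalTrials.gov v2 REST endpoint and its hasResults field, which should be confirmed against the current API documentation before relying on it.

```python
# Minimal sketch: query ClinicalTrials.gov for a study record and report
# whether summary results have been posted. Assumes the v2 REST endpoint
# (https://clinicaltrials.gov/api/v2/studies/{nct_id}) and its top-level
# "hasResults" field; verify against the current API docs before use.
import json
import urllib.request

def trial_status(nct_id: str) -> dict:
    url = f"https://clinicaltrials.gov/api/v2/studies/{nct_id}"
    with urllib.request.urlopen(url, timeout=30) as resp:
        study = json.load(resp)
    ident = study["protocolSection"]["identificationModule"]
    return {
        "nct_id": ident["nctId"],
        "title": ident.get("briefTitle", ""),
        "has_posted_results": study.get("hasResults", False),
    }

if __name__ == "__main__":
    # Any registered NCT number works here; this one is illustrative.
    print(trial_status("NCT00000102"))
```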

Best Practice RS-3. Utilizing Research. Practice impartiality and transparency in utilizing research for the development of policy and regulations

As discussed in Chapter 3, scientific evidence and inputs are increasingly important to numerous areas of policy making: public health, environmental protection, economic development, criminology, food safety, education, and many others. The interpretation of research results is a central part of many contentious policy debates, which often feature accusations that science is being manipulated or distorted by powerful interests.

One recent report identifies five "tasks" that science has in relation to policy: "(1) identify problems, such as endangered species, obesity, unemployment, and vulnerability to natural disasters or terrorist acts; (2) measure their magnitude and seriousness; (3) review alternative policy interventions; (4) systematically assess the likely consequences of particular policy actions—intended and unintended, desired and unwanted; and (5) evaluate what, in fact, results from policy" (NRC, 2012b). The report also develops a framework for understanding how science is used in policy and points to areas where better knowledge could improve the utilization of science in policy making.

The utilization of science as an input to policy is a broad, complex field that this report cannot cover in detail. It raises questions of global concern that scientists, policy makers, and citizens around the world will be wrestling with for years to come (Gluckman, 2014). At the same time, the responsible communication of results to policy makers and the public by researchers, and the adoption of best practices by governments in utilizing that input, are important components of scientific integrity closely related to other issues discussed in this report.

Recent efforts to define and implement best practices in utilizing science for policy making have focused on developing clear policies and procedures and on using transparent processes. For example, a 2009 report of the Bipartisan Policy Center explored the need for clearer policies governing the disclosure of relevant relationships, including expert testimony and consulting relationships, by potential members of federal advisory committees, in order to prevent conflicts of interest in these activities (BPC, 2009).

As discussed in Chapter 3, the Obama administration launched an initiative in 2010 to require all federal agencies to develop and adopt scientific integrity policies (Holdren, 2010). Although an analysis by the Union of Concerned Scientists concluded that the efforts of a number of agencies fell short of what is needed to "promote and support a culture of scientific integrity," the universal adoption of such policies is certainly an important step (Grifo, 2013).

Box 9-4 provides a best practices checklist for research sponsors and users of research.

Box 9-4. Best Practices Checklist for Research Sponsors and Users of Research.

Scientific Societies and Professional Organizations

According to one perspective on the role of scientific societies in fostering research integrity, "As visible, stable, and enduring institutions, scientific societies serve as the custodian for a discipline's norms and traditions, transmitting them to their members and helping to translate them into accepted research practices" (Frankel and Bird, 2003). The focus here is on disciplinary societies, although it should be noted that the largest general professional association of scientists, the American Association for the Advancement of Science (AAAS), has been active over the years in a number of areas related to research integrity. Several members of the committee met with a large number of scientific society representatives as part of this study, discussing the concerns and issues facing societies and learning about what they are doing to foster integrity. Many societies publish journals as one of their core activities, and best practices associated with publishing are covered above.

Honorific academies can also play a constructive role in fostering research integrity in their national contexts, and interacademy networks can contribute at the international level by developing and disseminating guidelines and educational materials (ESF-ALLEA, 2011; IAP, 2016; NAS-NAE-IOM, 2009b).

Best Practice S-1. Standards and Education. Serve as a focal point within their disciplines for the development and updating of standards, the dissemination of best practices, and the fostering of RCR education appropriate to the discipline

The specific areas where many societies are active, apart from those related to publication, are the formulation of codes of conduct and educational efforts (Macrina, 2007). Responsible Science asserted that societies should play a key role in developing guidelines for research conduct appropriate to their specific fields (NAS-NAE-IOM, 1992). Many societies developed codes of conduct when research misconduct became a prominent issue in the late 1980s and 1990s, covering issues such as data handling, authorship, mentoring, and research misconduct. An AAAS survey undertaken in 2000 reported on the content and subject matter coverage of society ethics codes (Iverson et al., 2003). The American Society for Microbiology, for example, developed its first code of conduct in 1988 and has revised it several times since (Macrina, 2007). This history points to the importance of regularly updating codes of conduct to keep pace with changing research practices within disciplines and with new ethical issues.

Societies have been active in fostering RCR education. One mechanism is workshops or symposia held during a society's annual meeting (Iverson et al., 2003). ORI has provided support for these efforts (Macrina, 2007). Societies can also develop case studies and other educational materials that illustrate ethical issues arising in their disciplines. One example is the American Physical Society, which developed an extensive set of case studies in the mid-2000s following several high-profile cases of research misconduct in physics (APS, 2004).

Box 9-5 provides a best practices checklist for scientific societies and professional organizations.

Box 9-5. Best Practices Checklist for Scientific Societies and Professional Organizations.

In Recommendation Five, this report calls for the development and adoption of authorship standards and suggests a framework that, if adopted, would formally codify several of the best practices discussed here, including the requirement that the roles of all authors be disclosed across fields and disciplines. See Chapter 8 for the rationale underlying the recommendation and Chapter 11 for the recommendation text.
