
Fostering Integrity in Research (2017)

Chapter 3: Important Trends and Challenges in the Research Environment

By working collaboratively, researchers can hope to answer questions never addressed before, including those with substantial influence on society. At the same time, today’s international, interdisciplinary, team-oriented, and technology-intensive research has created an environment more fraught with the potential for error and distortion.

— Indira Nath and Ernst-Ludwig Winnacker (2012)

Synopsis: A number of the elements in the research environment that were identified in the early 1990s as perhaps problematic for ensuring research integrity and maintaining good scientific practices have generally continued along their long-term trend lines, including the size and scope of the research enterprise, the complexity of collaboration, the growth of regulatory requirements, and the importance of industry sponsorship and entrepreneurial research. Several important new trends that were not examined in the 1992 Responsible Science report have also emerged, including the pervasive and growing importance of information technology in research, the globalization of research, and the increasing relevance of knowledge generated in certain fields to policy issues and political debates. These changes—the growing importance of information technology in particular—have led to important shifts in the institutions that support and underlie the research enterprise, such as scholarly publishing. They also have important implications for the ability of researchers, research institutions, journals, and sponsors to foster integrity and prevent research misconduct and detrimental research practices.

The 1992 report Responsible Science: Ensuring the Integrity of the Research Process devoted a chapter to describing the contemporary research environment and outlining the most important changes that had occurred over the previous decades ( NAS-NAE-IOM, 1992 ). Responsible Science also described several additional features of the U.S. research scene of the early 1990s that had become the subject of discussion and concern due to possible negative impacts on the research environment, including research integrity ( NAS-NAE-IOM, 1992 ). This chapter will first explore the research environment issues identified in 1992—except for the reward system in science, which is covered in Chapter 6—and describe trends over the past two decades. The second part of the chapter will explore several important shifts in the research environment that have appeared since 1992 and were not considered in Responsible Science. These shifts carry several important implications for research integrity.

HOW RESEARCH ENVIRONMENT ISSUES IDENTIFIED IN RESPONSIBLE SCIENCE HAVE EVOLVED SINCE THE EARLY 1990s

Size and Scope of the Research Enterprise

The 1992 report’s overview described growth in the size and scope of the research enterprise. The report observed that research in the pre–World War II United States—academic research in particular—was a mostly small-scale avocation of individual scientists, supported by limited funding from industry, government, and foundations. Following the significant wartime contributions of research efforts such as MIT’s Radiation Laboratory, federal support for science and engineering research increased rapidly. By 1991, research and development (R&D) was a $160 billion (current dollars) enterprise in the United States, employing about 744,000 people in industrial, academic, and governmental laboratories and producing more than 140,000 research articles annually ( NSB, 1996 , 2014b ; OECD, 2015 ).

Over the following two decades, the enterprise has continued to grow, with U.S. R&D totaling $456 billion in 2013, R&D employment rising to about 1,252,000, and the number of published research articles reaching more than 412,000 ( NSB, 2014b , 2016 ; OECD, 2015 ). The 1992 report paid particular attention to the growth in academic research and federal support, and this growth has continued. Between 1991 and 2014, academic R&D grew from around $17.5 billion to $67.1 billion, with federal support constituting 60–75 percent of the total ( NSB, 2016 ). 1 The number of science, engineering, and health doctorate holders employed in academia rose from 211,000 in 1991 to almost 309,000 in 2013 ( NSB, 2016 ). The number of PhDs awarded in science and engineering more than doubled, from approximately 19,000 in 1988 to almost 37,000 in 2013, with an increasing percentage of these doctorate recipients going to work outside academia ( NSB, 2016 ).
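The growth figures above imply steady compound rates. As a rough check (the `cagr` helper and variable names here are ours, not from the report), the implied compound annual growth can be computed from the endpoint values:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by two endpoint values."""
    return (end_value / start_value) ** (1 / years) - 1

# Academic R&D: ~$17.5 billion in 1991 to $67.1 billion in 2014 (NSB, 2016)
rd_growth = cagr(17.5, 67.1, 2014 - 1991)

# S&E PhDs awarded: ~19,000 in 1988 to ~37,000 in 2013 (NSB, 2016)
phd_growth = cagr(19_000, 37_000, 2013 - 1988)

print(f"Implied academic R&D growth: {rd_growth:.1%} per year")   # ~6% per year
print(f"Implied PhD-award growth: {phd_growth:.1%} per year")     # ~2.7% per year
```

The exercise shows that even the "more than doubled" PhD figure corresponds to a modest annual rate once spread over 25 years, while academic R&D funding compounded roughly twice as fast.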

The 1992 Responsible Science report raised the concern that the increased size of the research enterprise might put stresses on key capabilities, such as the “overall workload associated with critical evaluation” ( NAS-NAE-IOM, 1992 ). The number and capacity of effective peer reviewers might not be keeping pace with the relentless growth in manuscripts and proposals. Concerns also have been raised about the increasing use of bibliometric-based metrics in evaluating research as a substitute for expert judgment ( P. B. Lowry et al., 2012 ).

___________________

1 From 2010, the total includes academic R&D outside of science and engineering, which adds several billion dollars.

Complexity of Collaboration

Responsible Science described the growth of collaborative research after World War II, which has continued since the early 1990s. In contrast to earlier times, when articles with more than four co-authors and work involving more than one laboratory or research institution were rare, collaborative research of various types is now very common. The number of authors listed on articles is only one measure of collaboration, but it clearly reveals the overall trend. An analysis of approximately 20 million research articles published since 1955 and 2 million patents registered since 1975 found that the average number of authors on scientific papers grew from 1.9 in 1955 to 3.5 in 2000 ( Wuchty et al., 2007 ). At the same time, single-author articles are becoming less common, constituting only about 11 percent of the total in 2012 ( King, 2013 ).

Several factors are driving the trend toward larger-scale research in general and in specific fields ( Stephan, 2012a ). These include the need for more elaborate and expensive equipment and the often related requirement for a variety of specialized skills and knowledge. These characteristics of “big science” have long been a given in fields such as high-energy physics and astronomy, in the form of particle accelerators such as the Large Hadron Collider and modern telescopes. They have become more prominent recently in many areas of the life sciences as well. In describing the results of large life sciences research projects such as the Human Genome Project and ENCODE (Encyclopedia of DNA Elements), former Science editor-in-chief Bruce Alberts (2012) noted that “the increased efficiency of data production by such projects is impressive.” In addition, as will be discussed in more detail below, the information technology revolution has radically lowered the costs of communication and collaboration of all types, including research collaboration.

Another factor contributing to the growth of team research has been an increase in the amount of interdisciplinary research. Interdisciplinary research efforts have continued to grow in importance and are extremely diverse ( Derrick et al., 2012 ). Interdisciplinary teams can range from local and informal to transnational and highly structured. They can be composed largely or entirely of researchers accustomed to working within a disciplinary framework, or they can consist partly or wholly of researchers who have been educated and have worked in interdisciplinary fields. Integration of knowledge from multiple disciplines can occur within the mind of a single person or through the collaborative efforts of a large team. For example, with the advent of “big data” and computational science, statisticians are increasingly included on projects where researchers have collected domain-specific data that they do not have the expertise to analyze. Interdisciplinary research is often focused on problems that have important societal implications. One current example of a growing interdisciplinary field is synthetic biology, which seeks a fundamental understanding of the workings of living systems along with the capability of re-creating living systems for a variety of applications in areas such as medicine and the environment. Synthetic biology research involves “biologists of many specialties, engineers, physicists, computer scientists, and others” ( NRC, 2010 ).

According to one analysis of trends in interdisciplinary research in six research fields, the growth of interdisciplinarity has been modest—about 5 percent—even as the number of authors per article has grown by 75 percent ( Porter and Rafols, 2009 ). This study found that the number of disciplines cited by papers in these six fields—mathematics, neurosciences, electrical and electronic engineering, biotechnology and applied microbiology, research and experimental medicine, and atomic, molecular, and chemical physics—has increased, but the distribution of citations is within neighboring research areas and has only slightly broadened. According to the authors, “These findings suggest that science is indeed becoming more interdisciplinary, but in small steps—drawing mainly from neighboring fields and only modestly increasing the connections to distant cognitive areas.”

Collaborative science requires that researchers focus at least some attention on coordination and interaction, which in theory might detract from the time and effort devoted to research. Yet Wuchty et al. (2007) found that multiauthor teams produced more highly cited work in each broad area of research and at each point in time. In addition, though solo authors in 1955 were more likely to produce papers that were highly cited, suggesting that these papers reported on the most influential concepts, results, or technologies, teams are more likely to produce highly cited papers today. As the authors wrote, “solo authors did produce the papers of singular distinction in science and engineering and social science in the 1950s, but the mantle of extraordinarily cited work has passed to teams by 2000.”

As more researchers work collaboratively and as the size of teams grows, the relationships among team members can become more complex. Team members can be at different research institutions and have different disciplinary backgrounds. Teams can contain researchers at all stages of their careers, from undergraduate and graduate students involved in research to senior researchers. The diversity and geographic spread of people involved in teams can create opportunities for miscommunication, misunderstandings, unrealistic expectations, and unresolved disputes. Whether these opportunities account for part of the increase in reports of undesirable research practices is unclear, but they can make the research environment more complicated and difficult than when teams were smaller, colocated more regularly, and more homogeneous in terms of discipline or nationality.

As research projects are undertaken by larger groups that bring together a greater diversity of expertise, encompass a broader range of disciplines, and strive for a greater degree of synthesis, the potential for misunderstandings can grow. Coordination of research inevitably becomes more complex, and the members of a team may have less familiarity with the discipline-specific practices of other team members, making it more difficult for each collaborator to check and verify the work done by others. As the number of collaborators increases, there is more scope for disagreements over the allocation of credit. It becomes much more challenging to reward and recognize individual contributions, which has a particularly significant impact on junior researchers. In addition, the mentoring of students in responsible research practices can become more impersonal and generic. The mental model of graduate education and training in which mentors work closely with graduate students and are able to take the time and effort to ensure that mentees understand the rules and can follow them may describe a smaller and smaller part of the research enterprise. Interdisciplinary work increases the possibility that the standards and expectations of different fields may come into conflict.

Regulation and Accountability

The 1992 report also noted that research activities were “increasingly subject to government regulations and guidelines that impose financial and administrative requirements” in areas such as laboratory safety, human subjects protection, drug-free workplace assurance, laboratory animal care, and the research use of recombinant DNA and toxic and radioactive materials. Along with the relatively new requirements and regulations related to research misconduct, the development of which is covered in Chapter 4 of this report, ensuring compliance with these expanding regulatory requirements had resulted in an expansion of administrative and oversight functions and staff at universities and required increasing time and attention from investigators. As an increasing percentage of faculty time goes toward fulfilling the requirements of various regulations and reporting requirements, research-related tasks such as mentoring and checking the work of subordinates may be shortchanged.

The administrative and regulatory compliance burden on research institutions and researchers remains significant. For example, respondents to a 2012 survey of 13,453 principal investigators undertaken by the Federal Demonstration Partnership estimated that, on average, 42 percent of the time they spent working on federally funded research projects was devoted to meeting regulatory and administrative requirements ( Schneider et al., 2012 ). According to the survey results, areas of regulation where compliance is particularly time consuming include those related to finances, personnel, and effort reporting. In 2014 the National Science Board issued a report that analyzes the regulatory compliance burden on faculty and makes recommendations for how it might be reduced ( NSB, 2014c ). A 2016 National Academies report evaluated current approaches to regulating academic research and made recommendations for achieving the goals of regulation while reducing financial and time burdens on institutions and faculty ( National Academies of Sciences, Engineering, and Medicine, 2016 ).

Industry-Sponsored Research and Other Research Aimed at Commercialization

Increasingly, the scientific enterprise has been recognized not only as a place to expand knowledge but also as an engine for the creation of new products, novel therapies for disease, improved technologies, and new industries and jobs. To quote President Obama (2009b) , “scientific innovation offers us a chance to achieve prosperity.” The economic potential of science, however, also poses distinctive challenges to the responsible conduct of research, which were described in Responsible Science. These challenges arise in scientific research conducted in industrial settings, in research conducted at universities and research institutions in collaboration with industry, and in university research that leads to entrepreneurial efforts by the researchers, who must then reconcile, both personally and in their professional behavior, often divergent cultural understandings about the nature, purposes, and outcomes of research. The challenges include the potential of economic incentives to introduce scientific bias, the perception of conflict of interest due to economic incentives, and the potential effect of intellectual property protection on the timely dissemination of knowledge.

Industry funds and conducts a substantial amount of research in the United States. For both basic and applied research, as defined by the National Science Foundation, industry conducts 40 percent of the U.S. total ( NSB, 2016 ). Even considering just basic research, industry conducts approximately 24 percent, almost 90 percent of which it funds itself. Unlike academic research, corporate research is often driven by a company's need to remain financially solvent and accountable to shareholders. Corporate researchers often work within hierarchical chains of supervision in which management maintains greater control over the research process.

Only a fraction of the results of industry-funded research is published in the scientific and engineering literature and thereby submitted to formal peer review. Of the articles published in 2013, authors from industry accounted for only 6 percent of the total, and that percentage has been declining over the past two decades ( NSB, 2014 ). This is often a consequence of the need to protect intellectual property, whether by maintaining trade secrets or by obtaining patents. One consequence is that the knowledge gained in such research may not be widely disseminated or evaluated through the peer review process. This is not to say that such industry research is not of high quality or is not carefully reviewed. Companies can have strict protocols regarding the collection, documentation, and storage of data, particularly when there are strong regulatory or economic reasons to do so. Checking mechanisms may be built into industrial research to verify especially critical results ( Williams, 2012 ). And, as with all research, the use of research results in subsequent activities—including the production of commercial products—provides further checks on the validity of results.

However, both industrial research and industry-sponsored research in academic settings have been found to occasionally show signs of both unintentional and intentional bias. 2 For example, one might observe bias in the lack of publication of results with negative consequences for the profitability of a product or in the restriction of published findings to those that reflect positively on a product. An extreme case is the tobacco industry, which undertook a systematic effort over the course of decades to obscure the harmful effects of smoking ( Proctor, 2011 ). Other examples include episodes of alleged ghostwriting in some medical research, including the Paxil case described in Appendix D and also discussed in Chapter 7 . Such research tarnishes all other research by demonstrating that research agendas and techniques can be manipulated so severely as to subvert truth to other interests. Many journals have moved to reporting the financial interests of authors, whether the work has an industry sponsor or not, so that readers are made aware of the potential for bias.

In addition to collaborations with established industries, academic institutions have increasingly encouraged entrepreneurship and innovation for commercialization, particularly since the passage of the Bayh-Dole Act in 1980, which allowed institutions to hold patents on innovations produced with federal funding. Having seen the success of academic research products such as Gatorade and the Google search algorithm patent in generating revenue, institutions may hope that their researchers can achieve similar results. For fiscal 2011 the Association of University Technology Managers reported that the 186 institutions responding to its annual survey earned a total of $1.5 billion in running royalty income, executed 4,899 licenses, created 591 commercial products, and formed 671 start-up companies from their research (AUTM, 2012).

One result of the commercialization of university-generated technology is that the need to manage possible conflicts of interest has become an important issue in academic settings. A 2009 Institute of Medicine report explores the issue of institutional conflict of interest in more detail ( IOM, 2009 ). Individual conflicts of interest exist if the investigator is also the founder of a company conducting research or has a significant monetary stake in the research. This can also apply to an institution if it owns part of a company or has a financial stake in a faculty member’s research findings. Under U.S. Financial Conflict of Interest (FCOI) policy, institutions receiving research funding from the Public Health Service are required to maintain and enforce an FCOI policy; manage, reduce, or eliminate identified conflicts; report identified conflicts, the value of the conflicts, and a management plan to the Public Health Service Awarding Component; and publish significant financial interests of any personnel involved in the research on a publicly accessible website ( HHS, 2011b ). Currently, the Department of Health and Human Services does not regulate institutional conflicts of interest in the same manner as investigator FCOIs, for which disclosure is required. Strengthened institutional FCOI regulations have been considered, but the issue requires further and separate consideration. The National Science Foundation policy is consistent with that of the Department of Health and Human Services. Regulations of individual financial conflicts of interest are further discussed in Chapter 7 and are also addressed in the context of best practices in Chapter 9 .

___________________

2 This is not meant to imply that research that is not sponsored by industry is necessarily unbiased.

Additional individual conflicts of interest, or secondary interests, can also affect a research study, including political biases, white hat bias, commitment conflicts, career considerations, and favors to others ( IOM, 2009 ; Lesser et al., 2007 ). A political opinion, bias, or long-standing scientific viewpoint toward one position or another may influence the interpretation of findings, despite contradictory evidence ( Lesser et al., 2007 ). Similarly, white hat bias, or “bias leading to distortion of information in the service of what may be perceived to be righteous ends,” also has the potential to influence conclusions ( Cope and Allison, 2010 ). An example of a conflict of commitment would be a principal investigator who does not have the time to perform all the duties for which he or she is responsible, such as securing funding, setting the overall direction for research in a lab, handling administrative responsibilities, and adequately supervising graduate students and postdocs. Secondary interests are rarely regulated, as they are considered a lesser incentive than financial interests.

Closer linkages between research and commercialization have introduced the possibility of financial gain from research more widely across the enterprise. This can pose challenges in terms of defining appropriate behavior and establishing guidelines for dealing with conflicts of interest, and it can complicate collaborations among individual researchers and among organizations.

MAJOR CHANGES IN THE RESEARCH ENVIRONMENT SINCE 1992

Information Technologies in Research

The continued exponential rise in the power of information and computing technologies has had a dramatic impact on research across many disciplines. These technologies have not only increased the speed and scope of research but have made it possible to conduct investigations that were not possible before. Information technology advances have enabled new forms of inquiry such as those based on numerical simulation of physical and biological systems and the analysis of massive datasets to detect and assess the nature of relationships that otherwise would go unseen.

The contrast in computing capabilities since the publication of Responsible Science is especially stark. In 1992, use of e-mail was less than a decade old, and the World Wide Web had just been invented and was not widely known. Three-and-a-half-inch floppy disks for data storage had replaced 5-1/4-inch disks just a few years before. People made telephone calls on landlines, used letters to communicate in writing, and circulated preprints via the postal system. For young researchers, the circumstances in which research was conducted in 1992 are almost entirely foreign.

One effect of information technologies in many areas of research has been to introduce intermediate analyses of considerable complexity between the raw data, whether gathered through sensors and observations or produced by data-generating devices such as DNA sequencers, and the results of research. Re-creating the steps from data to results can be impossible without a detailed knowledge of the data-production and analysis software, which sometimes depends on the particular computer on which the software runs. This intermediate analysis complicates the replication of scientific results and can create opportunities to manipulate analyses so as to achieve desired results, as well as undermine the ability of others to validate findings.

Digital technologies can pose other temptations for researchers to violate the standards of scientific practice. For example, the manipulation of images using image-processing software has caused many journals to implement spot checks and other procedures to guard against falsification. The inappropriate application of statistical packages can lead to greater confidence in the results than is warranted. Data-mining techniques can generate false positives and spurious correlations. In many fields, the development of standards governing the application of technology in the derivation of research results remains incomplete even as continuing technological advances raise new issues. In a recent paper, two prominent biologists wrote, “Although scientists have always comforted themselves with the thought that science is self-correcting, the immediacy and rapidity with which knowledge disseminates today means that incorrect information can have a profound impact before any corrective process can take place” ( Casadevall and Fang, 2012 ).

The widespread utilization of information technologies in research may also introduce new sources of unintentional error and irreproducibility of results. A survey of researchers who utilize species distribution modeling software found that only 8 percent had validated the software they had chosen against other methods, with higher percentages relying on recommendations from colleagues or the reputation of the developer ( Joppa et al., 2013 ). The latter approaches pose risks of incorrect implementation and error for the research being pursued, particularly if software is not shared or subjected to critical review. Issues surrounding irreproducibility and information technologies are discussed further in Chapter 5 .

Besides affecting the conduct of research, information and communication technologies have transformed the communication of scientific results and interactions among researchers. In theory, if not always in practice, all the data contributing to a research result can now be stored electronically and communicated to interested researchers. This capability has contributed to a growing movement for much more open forms of research in which researchers work collectively on problems, often through electronic media ( Nielsen, 2012 ). However, this trend toward greater transparency has created tasks and responsibilities for researchers and the research enterprise that did not previously exist, such as creating, documenting, storing, and sharing scientific software and immense databases and providing guidance in the use of these new digital objects. For example, the development of software by scientists in the course of analyzing data is often a collaborative online process. This digitization makes it easier than ever to perform very complex analyses that not only lead to new discoveries but also create new problems of opacity for the peer review process. And while technology is making many aspects of research more efficient, it may also create new tasks and responsibilities that are burdensome for researchers and that they may find difficult or impossible to fulfill.

The movement toward open science has encouraged the efforts of citizen scientists who are eager to monitor, contribute to, and in some cases criticize scientific advances ( Stodden, 2010 ). Review of scientific results from outside a research discipline can provide another check on the accuracy of results, but it also can introduce questions about the validity of findings that are not adequately grounded in knowledge of the research. Moreover, it can alter the relationship between researchers and the public in ways that require new levels of effort and sophistication among researchers involved in public outreach.

Advances in information technology are transforming the research enterprise, discipline by discipline, by changing the sorts of questions that can be addressed and the methods used to address them. There may be more opportunities to fabricate, falsify, or plagiarize, but there are also more tools to uncover such behavior. Issues related to research reproducibility and related practices are covered in Chapter 5 .

The Globalization of Research

Because knowledge passes freely across national borders, scientific research has always been an international endeavor. But this internationalization has intensified over the past two decades. Nations have realized that they cannot expect to benefit from the global research enterprise without national research systems that can absorb and build on that knowledge. As a result, they have incorporated science and technology into national plans and have established goals for increased R&D investments. They also have encouraged their own students and researchers to travel to other countries to study and work and have welcomed researchers from other countries. At the same time, private-sector companies have increased their R&D investments in other countries to take advantage of local talent, gain access to local markets, and in some cases lower their costs for labor and facilities. These and other trends, including cheaper transportation, better communications, and the spread of English as the worldwide language of science, are producing a new golden age of global science.

Once again, the trend is apparent in the author lists of scientific and engineering articles. Between 1988 and 2013, the percentage of science and engineering articles published worldwide with coauthors from more than one country increased from 8 percent to 19 percent ( NSB, 2016 ). Also, some countries have dramatically increased their representation in the science and engineering literature. Between 1999 and 2013, the average number of science and engineering articles published by Chinese authors rose 18.9 percent annually, so that by 2013 China, with 18 percent of the total, was the world’s second-largest national producer of science and engineering articles. Authors from China also increased their share of internationally coauthored articles from 5 percent to 13 percent between 2000 and 2010. Other countries that dramatically expanded their number of articles published included South Korea, India, Taiwan, Brazil, Turkey, Iran, Greece, Singapore, Portugal, Ireland, Thailand, Malaysia, Pakistan, and Tunisia, though some of these countries started from very low bases.
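An annual growth rate of 18.9 percent compounds dramatically over 14 years. A quick sketch of the compounding (variable names are ours; this is a rough check, not a figure from the report) shows roughly an 11-fold increase:

```python
# Compound effect of 18.9% annual growth in Chinese S&E article output,
# sustained over the 1999-2013 period described above (NSB, 2016)
annual_rate = 0.189
years = 2013 - 1999  # 14 years
cumulative_factor = (1 + annual_rate) ** years

print(f"Output multiplied roughly {cumulative_factor:.0f}x over {years} years")
```

Small differences in sustained annual rates translate into large differences in cumulative output, which helps explain how quickly the global distribution of published research has shifted.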

Another measure of the increasing internationalization of research is the number of foreign-born researchers studying and working in the United States. More than 193,000 foreign students were enrolled in U.S. graduate programs in science and engineering in 2013, and foreign-born U.S. science and engineering doctorate holders held 48 percent of postdoctoral positions in 2013 ( NSB, 2016 ). Science and engineering doctorate holders employed in U.S. colleges and universities who were born outside the United States increased from 12 percent in 1973 to nearly 27 percent in 2013. The United States remains the destination for the largest number of foreign students at the graduate and undergraduate levels, though its share of foreign students worldwide declined from 25 percent in 2000 to 19 percent in 2013.

Internationalization offers many benefits to the research enterprise. It can speed the advance of knowledge and permit projects that could not be done by any one country working alone. It increases cooperation across borders and can contribute to a reduction in tensions between nations. It enhances the use of resources by reducing duplication of effort and by combining disparate skills and viewpoints. The experiences students and researchers gain by working in other countries are irreplaceable.

But globalization also can complicate efforts to ensure that researchers adhere to responsible research practices ( Heitman and Petty, 2010 ). Education in the responsible conduct of research, while far from universal among U.S. science and engineering students, is nevertheless more extensive in the United States than in many other countries ( Heitman et al., 2007 ). Codes of responsible conduct differ from country to country, despite efforts to forge greater international consensus on basic principles ( ESF-ALLEA, 2011 ; IAC-IAP, 2012 ). In some countries with rapidly developing research systems, research misconduct and detrimental research practices appear to be more common than in countries with more established research systems ( Altman and Broad, 2005 ). Students from different countries may have quite different expectations regarding such issues as conflicts of interest, the deference to be accorded instructors and mentors, the treatment of research subjects, the handling of data, and the standards for authorship. For example, one issue often observed among foreign students in the United States is the different standards they apply to the use of others’ ideas and phrases, which can lead to problems with plagiarism ( Heitman and Litewka, 2011 ).

As the sizes of individual national research enterprises grow and become more competitive, institutions and sponsors can experience more problems with research misconduct. Differences in national policy frameworks may constitute barriers to cross-border collaboration, but efforts are being made to harmonize or at least make these frameworks interoperable. Collaboration among researchers from different countries and cultures may expose differences in training, expectations, and values that affect behavior.

Relevance of Research Results to Policy and Political Debates

The rapid expansion of government support for scientific research in the decades after World War II was spurred by recognition of the importance of new knowledge in meeting human needs and solving problems. Over the past few decades, the link between scientific knowledge and issues in the broader society has become ever more apparent. Science is a critical factor in public discussions of and policy decisions concerning stem cells, food safety, climate change, nuclear proliferation, education, energy production, environmental influences on health, national competitiveness, and many other issues. Although all these topics cannot be covered here, this section will describe several of the key issues affecting science, policy, and the public and how they affect (and are affected by) research integrity.

To begin with, the federal government itself performs a significant amount of research through government laboratories, some of which is published. Federal agencies that perform research generally have policies and procedures in place to investigate allegations of research misconduct in their intramural programs (see NIH, 2012a , for an example of such policies and procedures, and see Chapter 7 for a more detailed discussion).

In addition, the Obama administration led an initiative on scientific integrity in the federal government starting in 2010 ( Holdren, 2010 ). Executive departments and agencies were instructed by the Office of Science and Technology Policy (OSTP) to develop policies that address a range of issues, including promoting a culture of scientific integrity, ensuring the credibility of government research, fostering open communication, and preventing bias from affecting how science is used in decision making or in communications with the public. The exercise is largely complete, as agencies have developed and implemented policies in response to the Office of Science and Technology Policy guidance ( Grifo, 2013 ; OSTP, 2013 ).

Research also comes into play in debates and decisions over numerous contentious policy issues. Science is not the only factor in these discussions. Many considerations outside of science influence policy choices, such as personal and political beliefs, lessons from experience, trial-and-error learning, and reasoning by analogy ( NRC, 2012b ). To contribute to public policy decisions, researchers must be able to separate their expertise as scientists from their views as advocates for particular public policy positions. Furthermore, they often contribute to these discussions outside the peer-reviewed literature, whether in public forums, blogs, or opinion articles in newspapers. According to the document Responsible Conduct in the Global Research Enterprise: A Policy Report ( IAC-IAP, 2012 ), “Researchers should resist speaking or writing with the authority of science or scholarship on complex, unresolved topics outside their areas of expertise. Researchers can risk their credibility by becoming advocates for public policy issues that can be resolved only with inputs from outside the research community.”

One example of an area where science, public debate, and policy making have been closely tied and contentious in recent years is climate science. This has raised challenges for researchers and the institutions through which scientists provide policy advice. According to a recent National Research Council report, “Climate change is occurring, is very likely caused by human activities, and poses significant risks for a broad range of human and natural systems. The environmental, economic, and humanitarian risks of climate change indicate a pressing need for substantial action to limit the magnitude of climate change and to prepare to adapt to its impacts” ( NRC, 2011 ). The global climate is a highly complex system, and there is considerable uncertainty about the timing and magnitude of climate change, the effect of measures to reduce greenhouse gas emissions from human activities, regional impacts, and many other issues. Effectively limiting greenhouse gas emissions presents economic and technological challenges and affects countries and industries differently, making policy changes by individual countries difficult. The development of the United Nations Framework Convention on Climate Change and its evolution over time illustrate the barriers to collective action on a global level. 3

In this environment of significant uncertainty on key scientific questions, difficult policy choices, the possibility of large impacts on powerful economic interests, and highly mobilized advocacy operations on all sides of the climate change issue, the climate science community has faced challenges in maintaining its credibility and public trust as it contributes its expertise. This experience might provide lessons on what researchers and scientific institutions need to do and what they need to avoid as highly charged issues arise with important scientific components. For example, the Intergovernmental Panel on Climate Change (IPCC), which was awarded the Nobel Peace Prize in 2007, is an international body that undertakes periodic scientific assessments of climate science and constitutes the primary mechanism for scientists to inform policy makers at the global level. In November 2009 the unauthorized leak of e-mail conversations among climate researchers, a number of whom were heavily involved with the IPCC process, appeared to reveal a number of questionable actions, including efforts to limit or deny access to data, failure to preserve raw data, and efforts to influence the peer review practices of journals. While subsequent investigations cleared the researchers of misconduct, the “Climategate” scandal and subsequent discovery of errors in IPCC’s most recent assessment raised questions about the quality and impartiality of the organization’s work. A 2010 study by the InterAcademy Council recommended a number of reforms in IPCC governance and management, review processes, methods for communicating uncertainty, and transparency ( IAC, 2010 ). One possible lesson from the recent climate change experience is that researchers, institutions, and fields whose work becomes relevant to controversial policy debates will need to consciously examine and upgrade their practices in areas such as data access and transparency ( NAS-NAE-IOM, 2009a ).

3 See http://unfccc.int/meetings/warsaw_nov2013/meeting/7649.php .

Recent high-profile international cases in which scientists have been criticized and even prosecuted for their advisory activities include the statements of scientists in the aftermath of the Fukushima earthquake and tsunami in 2011, and the manslaughter convictions of six seismologists whose statements were misconstrued by a government official, Bernardo De Bernardinis, as meaning that there was no risk of danger immediately before an earthquake in L’Aquila, Italy, that killed more than 300 people ( Cartlidge, 2012 ; Jordan, 2013 ; Normile, 2012 ). An appeals court overturned the seismologists’ convictions 2 years later, but not that of De Bernardinis ( Cartlidge, 2014 ).

Other issues involving science and policy that raise questions about integrity seemingly appear in the media on a weekly basis. During 2012, controversy erupted over a University of Texas sociologist’s research findings that adult children of parents who had same-sex relationships fared worse than those raised by parents who had not had same-sex relationships; his research methodologies have been severely criticized, but an institutional inquiry cleared him of research misconduct ( Peterson, 2012 ). A federal appeals court upheld a South Dakota statute requiring doctors to tell women seeking abortions that they face an increased risk of suicide; despite extremely weak research evidence to support the statute, the court decided not to strike it down as an undue burden on abortion rights or on First Amendment grounds ( Planned Parenthood Minnesota, N.D., S.D. v. Rounds, 2012 ). A French paper found that rats consuming genetically modified corn developed more tumors and died earlier than a control group, although food safety agencies have stated that the sample sizes were too small to reach a conclusion ( Butler, 2012 ). And a criminal investigation of a Texas state agency established to fund research on cancer prevention and treatment revealed that some awards were made without scientific review, which led to a wave of resignations among staff and oversight board members ( Berger and Ackerman, 2012 ). Needless to say, these cases underscore the salient role of scientific research in policy discussions.

For researchers, exercising responsibility in relations with society encompasses an increasing array of issues. For example, health and social science research in some communities, such as Native American tribes, requires adherence to community rules for gaining approval. Research on people’s behavior on social networking websites raises questions about how human subject protections apply. Some emerging areas of research, such as crisis mapping and monitoring, raise human rights issues ( AAAS, 2012 ). Finally, researchers in the life sciences are being asked to exercise responsibility in the area of preventing the misuse of research and technology ( IAP, 2005 ).

Research findings are increasingly relevant to a broader range of policy-relevant questions, raising the magnitude of possible negative consequences of research misconduct and detrimental research practices. Researchers in a variety of fields are faced with more complicated choices with ethical dimensions. In this environment, maintaining rigorous peer review processes in scientific journals is a critical task. Decisions based on science suffer when non-peer-reviewed science, or science that was not well reviewed, is used.

TRENDS IN RESEARCH AND IMPLICATIONS FOR AUTHORSHIP

Decisions about the authorship of research publications are an important aspect of the responsible conduct of research. Although many individuals other than those who conceive of and implement a research project typically contribute to the production of successful research, authors are considered to be the person or persons who made a significant and substantial contribution to the production and presentation of the new knowledge being published. A number of the conventions and practices that constitute scientific authorship have been influenced by the trends discussed previously in this chapter. Tracing how trends in research such as globalization and technology are affecting authorship provides a useful window into how research is changing more broadly.

Authorship practices have evolved to support the development and distribution of new knowledge, engaging the powerful human motivation to discover and receive credit for discovery. Researchers are often evaluated, rightly or wrongly, by the quantity and quality of their work, as measured by the number of their publications, the prestige of the journals in which their publications appear, and how widely cited their publications are. Authorship also serves to establish accountability for published work. For example, authors are responsible for the veracity and reliability of the reported results, for ensuring that the research was performed according to relevant laws and regulations, for interacting with journal editors and staff during publication, and for defending the work following publication ( Smith and Williams-Jones, 2012 ).

Authorship practices vary between disciplines. Professional and journal standards and policies on authorship also vary. For example, in some disciplines the names of authors are listed alphabetically, while in other disciplines names are listed in descending order of contribution. In some disciplines, senior researchers are listed last and in others they are listed first.

At least three significant factors have changed authorship practices in recent decades. First, the degree to which researchers make use of technology and the ways in which they use technology have changed dramatically. Researchers now frequently rely on computer software and hardware for many of the processes and analyses they undertake. They rely more on sophisticated software and computer models both in the analysis and in the presentation of results. The extent to which researchers understand how these tools affect data and results is a topic of concern in 21st-century research. Second, as a result of new information and communication technologies, especially the Internet, researchers engage in much more collaboration at a distance. This facilitates national and global collaboration and can lead to larger, more broadly scoped projects. Data gathering and analysis can be parsed out to different locations, with information potentially easily accessed and shared regardless of location. Researchers are able to electronically maintain frequent contact, have group meetings, and coauthor documents. Third, as a result of software and hardware developments, huge databases of information can be gathered and used, and researchers have access to and must deal with much more information than ever before. Consequently, researchers have to manage data in new ways and may be held to higher standards of knowing and understanding other research that has been done in their area.

These changes raise a variety of challenges to researchers and the research enterprise. For example, in part because of the increased scale of research, the number of authors listed on papers in some disciplines has grown considerably. Extreme examples include the 1993 Global Utilization of Streptokinase and Tissue Plasminogen, or GUSTO, paper in the New England Journal of Medicine , which involved 976 authors ( GUSTO Investigators, 1993 ), and a 1997 Nature article on genome sequencing that had 151 authors ( Kunst et al., 1997 , cited in Smith and Williams-Jones, 2012 ). The recent joint paper from the two teams collaborating on the mass estimate of the Higgs boson particle lists more than 5,000 authors ( Castelvecchi, 2015 ). The original papers reporting the discovery of the Higgs boson had approximately 3,000 authors each ( Hornyak, 2012 ). How can the primary author or authors be responsible for the work of hundreds or even thousands of individual researchers who are geographically dispersed and come from a wide range of disciplines? When an error is found or an accusation of wrongdoing is made, the problem has to be traced back to the component of the research that is called into question. In the process of tracing back the possible wrongdoing, the primary author or authors, while accountable, may not understand the area or have had much control over the researchers involved. The primary author or authors may be accountable but not blameworthy. These challenges are complicated by disciplinary differences in authorship conventions.

Chapter 7 explores the challenges to research integrity arising in the area of authorship, and Chapter 8 considers alternatives for addressing them.

The integrity of knowledge that emerges from research is based on individual and collective adherence to core values of objectivity, honesty, openness, fairness, accountability, and stewardship. Integrity in science means that the organizations in which research is conducted encourage those involved to exemplify these values in every step of the research process. Understanding the dynamics that support – or distort – practices that uphold the integrity of research by all participants ensures that the research enterprise advances knowledge.

The 1992 report Responsible Science: Ensuring the Integrity of the Research Process evaluated issues related to scientific responsibility and the conduct of research. It provided a valuable service in describing and analyzing a very complicated set of issues, and has served as a crucial basis for thinking about research integrity for more than two decades. However, as experience has accumulated with various forms of research misconduct, detrimental research practices, and other forms of misconduct, as subsequent empirical research has revealed more about the nature of scientific misconduct, and because technological and social changes have altered the environment in which science is conducted, it is clear that the framework established more than two decades ago needs to be updated.

Responsible Science served as a valuable benchmark to set the context for this most recent analysis and to help guide the committee's thought process. Fostering Integrity in Research identifies best practices in research and recommends practical options for discouraging and addressing research misconduct and detrimental research practices.



Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes. Revised on 20 March 2023.

A research design is a strategy for answering your research question  using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions


Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.


Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.


Step 2: Choose a type of research design

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

  • Experimental – manipulates an independent variable, with participants randomly assigned to conditions, to test cause and effect
  • Quasi-experimental – tests cause and effect by manipulating a variable, but without full random assignment
  • Correlational – measures the statistical relationship between variables without manipulating them
  • Descriptive – describes the characteristics of variables as they occur, without testing relationships

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

Below are some common types of qualitative design. They often take similar approaches to data collection but focus on different aspects when analysing the data.

  • Grounded theory – develops a theory inductively from data that are collected and analysed in iterative cycles
  • Phenomenology – examines how participants experience and make sense of a particular phenomenon

Step 3: Identify your population and sampling method

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalise your results to the population as a whole.

  • Probability sampling – every member of the population has a known chance of being randomly selected, so results can be statistically generalised
  • Non-probability sampling – individuals are selected on non-random criteria such as convenience or availability, so generalisation is weaker

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
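The difference between the two approaches can be sketched in a few lines of Python (the population list, sample size, and seed below are invented for illustration):

```python
import random

population = [f"participant_{i}" for i in range(1000)]  # hypothetical sampling frame

# Probability sampling: a simple random sample -- every member has an
# equal, known chance of selection, so results generalise statistically.
random.seed(42)  # fixed seed so the draw is reproducible
probability_sample = random.sample(population, k=50)

# Non-probability sampling: a convenience sample -- e.g. whoever is
# easiest to reach (here, simply the first 50 on the list). Quick, but
# prone to systematic bias.
convenience_sample = population[:50]

print(len(probability_sample), len(convenience_sample))
```

In practice the hard part is not the draw itself but obtaining a complete sampling frame; without one, even a "random" sample is effectively non-probability.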

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Step 4: Choose your data collection methods

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

  • Questionnaires – respondents answer a fixed set of written questions, on paper or online
  • Interviews – a researcher asks questions orally, in person or remotely, and can follow up for detail

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.


Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

  • Media & communication – collecting a sample of texts (e.g., speeches, articles, or social media posts) for data on cultural norms and narratives
  • Psychology – using technologies like neuroimaging, eye-tracking, or computer-based tasks to collect data on things like attention, emotional response, or reaction time
  • Education – using tests or assignments to collect data on knowledge and skills
  • Physical sciences – using scientific instruments to collect data on things like weight, blood pressure, or chemical composition

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

Step 5: Plan your data collection procedures

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity has already been established.
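Operationalisation can be made concrete with a short Python sketch: here a hypothetical "satisfaction" construct is scored as the mean of three invented 1–5 Likert items (the item names and scale are assumptions, not an established instrument):

```python
# Operationalising an abstract concept ("satisfaction") as the mean of
# three 1-5 Likert-scale questionnaire items. Item names are invented.
def satisfaction_score(responses: dict[str, int]) -> float:
    items = ["recommend_to_friend", "met_expectations", "would_return"]
    for item in items:
        if not 1 <= responses[item] <= 5:
            raise ValueError(f"{item} must be on the 1-5 scale")
    return sum(responses[item] for item in items) / len(items)

print(satisfaction_score(
    {"recommend_to_friend": 4, "met_expectations": 5, "would_return": 3}
))  # 4.0
```

Writing the indicator down this explicitly, before collecting any data, is the point: every respondent is scored by exactly the same rule.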

Reliability and validity

Reliability means your results can be consistently reproduced , while validity means that you’re actually measuring the concept you’re interested in.


For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.
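One common reliability check at the pilot stage is Cronbach's alpha, which measures the internal consistency of a multi-item scale. A minimal Python sketch using invented pilot data (by convention, values of roughly 0.7 or higher are usually read as acceptable):

```python
from statistics import pvariance

def cronbach_alpha(items: list[list[int]]) -> float:
    """Internal-consistency reliability of a multi-item scale.
    `items` is one list of scores per item, aligned by respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Invented pilot data: 3 items answered by 5 respondents
pilot = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
print(round(cronbach_alpha(pilot), 2))  # 0.89
```

A low alpha on a pilot sample is a signal to reword or drop items before the main data collection, when changes are still cheap.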

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.
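
As a minimal sketch of one part of such a plan, direct identifiers can be replaced with stable, non-reversible pseudonyms before analysis. The participant names and the salt below are invented for illustration:

```python
import hashlib

def pseudonymise(participant_id, salt="replace-with-a-project-secret"):
    """Replace a direct identifier with a stable, non-reversible code."""
    digest = hashlib.sha256((salt + participant_id).encode("utf-8"))
    return digest.hexdigest()[:12]

# Hypothetical raw records containing identifying email addresses
records = [
    {"id": "alice@example.com", "score": 7},
    {"id": "bob@example.com", "score": 4},
]

# Store only the pseudonymised records; keep the salt separate and secure.
anonymised = [{"id": pseudonymise(r["id"]), "score": r["score"]} for r in records]
for row in anonymised:
    print(row)
```

Because the same input always yields the same pseudonym, records can still be linked across sessions without storing names in the analysis files.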

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)
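
These three summaries can be computed with Python’s standard library alone; the test scores below are invented for illustration:

```python
from collections import Counter
from statistics import mean, stdev

scores = [68, 74, 74, 80, 85, 85, 85, 91]  # hypothetical test scores

distribution = Counter(scores)  # frequency of each score
centre = mean(scores)           # central tendency: the average score
spread = stdev(scores)          # variability: sample standard deviation

print(distribution.most_common(1))  # the most frequent score and its count
print(round(centre, 2), round(spread, 2))
```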

The specific calculations you can do depend on the level of measurement of your variables.

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
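
To illustrate the comparison-test idea, the pooled two-sample t statistic can be computed from first principles: the difference in group means divided by the standard error of that difference. The group data are invented; in practice a statistics package would also report degrees of freedom and a p value:

```python
from math import sqrt
from statistics import mean, variance

def two_sample_t(a, b):
    """Pooled two-sample t statistic for comparing two group means."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    se = sqrt(pooled_var * (1 / na + 1 / nb))  # standard error of the difference
    return (mean(a) - mean(b)) / se

# Hypothetical outcomes for a treatment and a control group
treatment = [12.1, 11.4, 13.0, 12.7, 11.9]
control = [10.2, 10.9, 11.1, 10.5, 10.8]
t = two_sample_t(treatment, control)
print(round(t, 2))  # ≈ 4.66; a large |t| suggests a real group difference
```

This pooled version assumes the two groups have similar variances; otherwise a Welch-style correction would be preferred.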

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis .

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.
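
A simple random sample, in which every member of the sampling frame has an equal chance of selection, can be drawn like this. The population list is a stand-in for a real sampling frame:

```python
import random

# Hypothetical sampling frame of 8,000 enrolled students
population = [f"student_{i:04d}" for i in range(8000)]

rng = random.Random(42)                 # seeding makes the draw reproducible
sample = rng.sample(population, k=100)  # simple random sample, no replacement

print(len(sample), len(set(sample)))    # 100 distinct students
```

Seeding the generator is useful for documenting exactly how the sample was drawn, which supports reproducibility.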

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.
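
A sketch of what such an operational definition might look like in analysis code; the indicator scales and equal weighting are invented for illustration:

```python
def social_anxiety_score(self_rating, avoidance_events, symptom_count):
    """Combine three measurable indicators of the abstract concept
    'social anxiety' into one composite score (equal weights assumed)."""
    indicators = [
        self_rating / 10,              # self-rating on a 0-10 scale
        min(avoidance_events, 5) / 5,  # crowded places avoided this week, capped
        symptom_count / 8,             # symptoms from a hypothetical 8-item checklist
    ]
    # Rescaling each indicator to 0-1 before averaging keeps them comparable
    return sum(indicators) / len(indicators)

print(round(social_anxiety_score(self_rating=6, avoidance_events=3, symptom_count=2), 2))
```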

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Cite this Scribbr article


McCombes, S. (2023, March 20). Research Design | Step-by-Step Guide with Examples. Scribbr. Retrieved 9 September 2024, from https://www.scribbr.co.uk/research-methods/research-design/


10 Research Question Examples to Guide your Research Project

Published on October 30, 2022 by Shona McCombes . Revised on October 19, 2023.

The research question is one of the most important parts of your research paper , thesis or dissertation . It’s important to spend some time assessing and refining your question before you get started.

The exact form of your question will depend on a few things, such as the length of your project, the type of research you’re conducting, the topic , and the research problem . However, all research questions should be focused, specific, and relevant to a timely social or scholarly issue.

Once you’ve read our guide on how to write a research question , you can use these examples to craft your own.

[The paired example questions in the original table did not survive; the recoverable explanations are summarised below.]

  • Starting with “why” often means that your question is not focused enough: there are too many possible answers. Targeting just one aspect of the problem offers a clear path for research.
  • A question that is too broad and subjective offers no clear criteria for what counts as “better.” A stronger question uses clearly defined terms and narrows its focus to a specific population.
  • It is generally not feasible for academic research to answer broad normative questions. A more specific question aims to gain an understanding of possible solutions in order to make informed recommendations.
  • A question that can be answered with a simple yes or no is too simple. A stronger question requires in-depth investigation and the development of an original argument.
  • A question that is too broad can be improved by identifying an underexplored aspect of the topic that requires investigation to answer.
  • A question should not try to address two different problems at once (for example, the quality of sexual health services and LGBT support services). Even when the issues are related, integrate them into one focused, specific question.
  • A question asking for a straightforward fact that can be easily found online is too simple. A stronger question requires analysis and detailed discussion to answer.
  • A question about a well-studied topic is unlikely to be original: it would be very difficult to contribute anything new. Taking a specific angle, such as how a work dealt with the theme of racism through casting, staging, and allusion to contemporary events, makes an original argument possible and has more relevance to current social concerns and debates.
  • A question asking for a ready-made solution is not researchable. A comparative question is clearer, but it may not be practically feasible; for a smaller research project or thesis, it could be narrowed down further to focus on the effectiveness of drunk driving laws in just one or two countries.

Note that the design of your research question can depend on the method you are pursuing: qualitative, quantitative, and statistical research each tend to call for a different form of question.


Cite this Scribbr article


McCombes, S. (2023, October 19). 10 Research Question Examples to Guide your Research Project. Scribbr. Retrieved September 11, 2024, from https://www.scribbr.com/research-process/research-question-examples/


Managing for the Ideal Research Environment

Journal of Higher Education Policy and Management 31(3)

Andrew D. Madden, The University of Sheffield


University of Bristol, Regulations and code of practice for research degree programmes, Area D

PGR skills development and the research environment

The regulations in this section set out the requirements for supporting PGR students in developing their skills and having access to an appropriate research environment.

On this page

  • Support for PGR student development
  • Minimum requirements for skills development
  • Expectations on access to the research environment

The policy on PGR personal and professional development also relates to this section.

8.1. The University recognises the importance of training and development opportunities for PGR students within a high-quality research environment. These opportunities can enhance a PGR student’s effectiveness as a researcher and can underpin their subsequent career.

8.2. A PGR student’s training and development opportunities must be tailored to their needs, and will include activities provided by schools, faculties, and the personal and professional development programme . Some training and development opportunities might be provided by external sources.

8.3. Supervisors must provide guidance and support for PGR students on training and development opportunities with the expectation that the student will progressively take ownership of their own personal and professional development.

8.4. A PGR student must have access to relevant training and development opportunities in research skills and techniques, as well as in wider personal and professional development.

8.5. Supervisors must consider their PGR student’s training and development needs and assist them in identifying relevant activities at the beginning of the student’s period of study. Supervisors and the student must regularly review the student’s training and development needs.

8.6. Funded PGR students must complete any specific training required by their funder. The supervisors and student must ensure that any funder requirements for training are met within an appropriate timeframe.

8.7. The University provides a high-quality research environment in which PGR students develop their skills and conduct work on their research projects.

8.8. Schools and faculties must ensure that PGR students have access to an appropriate research environment, including the following:

8.8.1. Opportunities to interact with research-active staff in the student’s research area within the University and more widely.

8.8.2. Opportunities to experience and contribute to research activities within the school and faculty, such as presenting research at school seminars.

8.8.3. Access to any necessary facilities or resources to support the student’s work. PGR students who are working remotely must retain access to any required facilities or resources.

8.8.4. Access to any external facilities, resources, or expertise that is required for the student’s work and that cannot be provided from within the University.


Chapter 3 Research Environment

by Jonand Rex Magallanes



Environmental Issues Research Paper

This sample environmental issues research paper features approximately 6,700 words (22 pages), an outline, and a bibliography with 39 sources.

Outline

  • Introduction
  • Cultural Beliefs and the Environment
  • Social Construction and the Environment
  • Social Construction and Social Movements
  • Political Economy and the Environment
  • Environmental Issues: Method and Application
  • Risk Perception and Environmental Health
  • Mobilization around Toxic Waste Sites: Love Canal
  • Bibliography


Environmental issues can be discussed within a number of different contexts. For anthropology and sociology, culture and society become important factors in understanding environmental issues. By incorporating a perspective that includes environmental history, aspects of environmental change, dialogue and culture, and future concerns, a more complete understanding of the relationship between sociocultural actions and the natural environment can be developed. In an effort to understand the nature of environmental problems, one must develop an understanding of the cultural paradigms that guide human behavior and interaction with the natural environment. Many perspectives seek to explain this relationship. Social scientists look toward dialogue and cultural perspectives to trace the history of environmental concern.


Historically, humans have understood their role to be one of dominion over nature. This is explained in numerous classic works and referenced in many religious and spiritual texts as well (Bell, 2008; Dunlap & Mertig, 1992). Cultural paradigms exist that serve to guide our interactions with the environment. Most stem from the anthropocentric belief that the world is centered around people and that human society has the right to maintain dominion over nature. Structural beliefs provide the foundation of these understandings.

The belief that a free market system provides the greatest good for the greatest number of people leads us to place economic decision-making processes in private hands. Frequently, private decisions have public consequences, but these consequences are not accounted for in production costs or covered by market prices. Instead, the costs are passed on to consumers in the form of taxes and higher base prices for goods and services. Environmentalists Al Gore Jr. and Robert Kennedy Jr. have argued that if manufacturers assumed the external costs of production, the result would be a system that accounted for the waste created in the production process. Their work on global warming illustrates the point. Coal-fired power plants are promoted as one of the cheapest ways to generate energy, but this claim is misleading, because the health effects of coal pollution are not included in the costs of production. Critics counter that those costs would simply be passed on to the consumer; yet they are passed on now, in the form of pollution and medical expenses for illnesses associated with environmental contaminants. Coal is one of the biggest contributors to greenhouse gases and thus to the overall societal costs of global warming.

Another cultural belief is that the natural world is inexhaustible. Extraction of natural resources happens at an incredible rate without a consideration to limits. Society’s constant dependence on nonrenewable energy forces mining and the refining of coal and oil to keep up with these demands. Consumer goods are deliberately planned to become obsolete within a relatively short time, and consumers are pressured to buy replacements. This process has been conceptualized in research focused on the treadmill of production. Production and utility processes, using natural resources, dominate the modes of production. The reliance on the treadmill model provides perpetual extraction and production, increasing the fragility of the natural environment.

Another cultural value resides in a lasting faith in technology. Culturally, we believe that technology can meet any challenge. Humans are seen as ingenious creatures able to devise solutions for any problem. However, technology itself is not sufficiently controlled and can create more problems that contribute to environmental degradation. This can lead to a situation known as culture lag, used here to describe a situation in which technology has outpaced the cultural ability to respond to the consequences of using a given technology.

The philosophy of the growth ethic argues that growth equals progress. Successful cultures are often defined by their levels of progress. Urban sprawl exemplifies the connection between progress and environmental destruction. Urban ecologists argue that urban sprawl follows the concentric circle urban planning model of the early 20th century. Residents were encouraged to develop space for residential purposes farther away from city centers. This land was culturally promoted as prime real estate, and individuals continued to purchase it as a show of class standing. Urban sprawl results in the loss of green and open space, increased use of natural resources, and more vehicle miles traveled as commuting distances continue to increase.

Materialism is a cultural value that also contributes to how environmental problems emerge. Americans tend to measure success in terms of the consumption of material things. Globally, the most valued nation is one that can command and use the largest fraction of the world’s resources. Currently, the United States supports 5% of the world’s population and uses 25% of the world’s natural resources. This is evidence that the cultural emphasis on the consumption of material goods is in direct correlation with natural resource use.

Two final cultural values that impact environmental practices are individualism and an anthropocentric worldview. Cultures that emphasize individual rights and personal achievements tend to have a greater environmental impact, because benefits to the self are placed over what is best for the collective. The anthropocentric worldview is centered around human beings, implying that human beings are superior to other beings and have a natural right to use the environment to ensure the progress of human beings as a species.

Subsequently, these cultural beliefs form the principles that overwhelmingly guide cultural interactions with nature. Theoretically, they serve as paradigms that explain the emergence of environmental issues. The following section provides specific theoretical underpinnings of environmental issues.

Theory and the Environment

Theory addressing environmental issues has been situated in the social constructionist and political economy approaches. Within these approaches, attention has been paid to developments of subfields in social science research, such as social movements and the environment, environmental health, and environmental justice.

Social constructionists focus on the construction of social problems and how this allows individuals to assign meaning and give importance to the social world. Sarbin and Kitsuse argued that “things are not given in the world, but constructed and negotiated by humans to make sense of the world” (1994, p. 3). When interests are at stake, claims are made around an activity in order to define the interests as problems. The process of claims making is more important than the task of assessing whether the claims are true (Hannigan, 1995).

Hannigan provides a three-step process for the construction of environmental problems: assembling, presenting, and contesting. He argues that each step develops the claims-making activities of environmental activists and antagonists. Environmental problems differ from other social problems because claims are often based on physical, chemical, or biological scientific evidence (Hannigan, 1995). Yet in nearly all cases, even though such problems are grounded in scientific evidence, the burden of proof falls on the claims-makers, the environmental actors.

When a claim about an environmental problem is presented, state and corporate actors most often emerge to challenge its validity. Although these actors may be willing to construct the issue as a “problem,” support to alleviate it is often lacking. If a state or corporation supports alleviating the problem, most probably by funding remedial efforts or research, it is seen as taking responsibility for the problem. If the state is seen as responsible, its perceived legitimacy decreases, which may lead to decreased trust. On the other hand, if a problem is not acknowledged, trust in government may also decrease, because the perception arises that the state is not acting in the people’s best interests.

The power of individuals in roles and positions to define these claims is ultimately what allows problems to be defined as problems. Claims may be made by others not in a position of power, but they are often not seen as valid because of the lack of power associated with the role. Different claims of environmental problems then lead to different definitions of the problems.

Definitions of problems are framed to illustrate specific viewpoints of what the problem is. Goffman used the term frame in order to explain interpretations of occurrences. Frames can serve as explanations or guideposts to individual or collective action (Snow & Benford, 1988). Snow and Benford describe framing as an activity performed by social movements to express their viewpoints and “to assign meaning to and interpret relevant events and conditions in ways that are intended to mobilize potential adherents and constituents—to garner bystander support and demobilize antagonists” (p. 198).

By framing events in certain ways that assign meaning to them, actors can attempt to mobilize support and delegitimize opposing viewpoints. Because different frames may emerge surrounding the same problem, individuals may choose to adopt one or the other on the basis of the reliability of the frames. One factor in determining reliability is trust in the actors who present the frame. Constituents may mobilize around one frame because trust in that explanation and the organization that presents it is high (Robinson, 2009). This impacts how individuals interpret the seriousness of environmental problems and subsequently whether issues will be acted on and in what manner.

The framing process can serve to mobilize constituents for or against a particular cause. Mobilization against frames that are presented by actors emerges when the audience of the frame has low trust in the source of the frame. Social movement literature has acknowledged the emergence of mobilization over environmental issues where lack of trust is present. Examples include institutional recreancy, lack of trust in government agencies and officials, and the combination of the two (Brown & Mikkelsen, 1990; Cable & Cable, 1997; Freudenburg, 1993; Gaventa, 1980; Gibbs, 1982).

Charles Tilly provides a model for mobilization that bridges some of the ideological views of frame analysis with collective action and resource mobilization theory. Tilly’s (1978) definition of mobilization is “a process by which a group goes from a passive collection of individuals to an active participant in public life” (p. 69). A further extreme of this model is resource mobilization theory, which gives even less importance to ideological factors and, instead, emphasizes the need for available resources. The combination of ideologies, resources, and the power of frame presentation contribute to mobilization. Using this analytical framework, the emergence of environmental problems and mobilization around these problems can be better understood.

Environmental problems in communities provide a setting to further explore this connection. Community organizing around local problems has a long history in the United States and has taken many forms: writing and literacy circle newsletters in the late 19th and early 20th centuries, Saul Alinsky's (1971) model of radical politics to create mass organizations that seize power and give it to the people, and neighborhood block clubs. Their goals, whether to spread awareness, ensure social justice, or demonstrate that city hall can be fought, vary in scope and magnitude, but these have often proved to be effective models for organizing.

Citizen action in response to toxic waste at Love Canal has emerged as the premier example of community organizing over environmental issues. The story of neighborhood organizing and the quest for a clean, healthy environment is acknowledged in most major studies on environmental issues. The specifics of this case follow in a later section where the application of environmental issues is discussed.

Theories of political economy of environmental issues focus on the development of political and economic practices and policies that contribute to environmental problems. Primarily, the focus has been on the creation of the capitalist mode of production that leads to overwhelming environmental destruction. Furthermore, the development of capitalism promotes a political environment that is friendly to more profitable, but less environmentally friendly, practices.

In addition to physical environmental realities that production processes cause, issues of health and economic injustice exist. Bryant and Mohai (1992) asked whether a safe environment is a civil right. They argue that people of color see environmental degradation as interrelated with economic and political justice. This is the fundamental idea behind environmental justice in both action and theory. Another issue in environmental justice arises because people of color and lower-income people are less likely to have access to health insurance; thus, they become more ill if exposed to environmental hazards without means of treatment. Therefore, these populations bear more of the negative environmental burden and have fewer resources to resolve the given problems.

The connection between health and economic justice is not a new relationship. Since World War II, there has been an increase in the development of the petrochemical industry. Coinciding with an increased demand for synthetic chemicals was an increased demand for disposal sites for waste byproducts of these chemicals. Many disposal sites were created in vacant plots of land, without the regulated disposal standards in place today. Inexpensive land used for the disposal sites of the 1940s and 1950s became the residential suburban developments of the 1960s, 1970s, and 1980s. With the post–World War II increase in population, many families were moving into suburban neighborhoods. Families felt safe from the problems of the cities, but they were not aware that many residential properties were built near the abandoned chemical waste sites of prior decades.

The problems of environmental contamination were first addressed publicly in Rachel Carson’s Silent Spring (1962). Her warning of chemical contaminants silencing biological life was not heeded at the time her book was published. These issues were not addressed until the 1970s with the first Earth Day in 1970, followed by the passing of numerous pieces of environmental protection legislation and the creation of the Environmental Protection Agency (EPA). Through this period of uncertainty, unclear scientific findings overwhelmed policymakers and the public, leading to confusion about how to develop environmental policies and actions.

Environmental problems have manifested most directly in the form of pollution. Evidence of environmental destruction is seen in the form of air, water, and land pollution that has a direct impact on the health of the human population. One of the most direct links between pollution and negative health effects has been identified since the creation of the petrochemical industry in the 1940s. Since this time, we have seen more cases of cancer and respiratory illness in the human population. The rate remains high even when controlling for mitigating factors, such as the effects of advanced medical technology in treating these illnesses, and lifestyle factors, such as diet and smoking. This case was made with the infamous discovery of toxic waste at Love Canal, New York, in 1978.

Literature in this area addresses the possible effects of exposure to toxins on one’s health. However, few studies have provided irrefutable evidence supporting the research hypothesis (association exists) or the null hypothesis (no association exists). Scientists know that chemicals can have adverse effects on the human condition when ingested, but they argue that some indirect exposures through air, soil, water, or residential habitation in proximity to such toxins have not produced similar consequences. The basic disagreement emerges in how one views risk, either through the precautionary principle or through risk assessment and evaluation. Proponents of the precautionary principle argue that if the chance of danger is present, then precaution should be used to avoid exposure. Proponents of risk assessment argue the opposite: that the risk must be known before action is taken to avoid exposure. The difficulty is that science has not provided irrefutable evidence on the dangers of many chemical substances; therefore, action for their removal from products and the environment has been slow. Recently, Devra Davis took on this phenomenon in The Secret History of the War on Cancer (2008). She outlined the lack of scientific responsibility in reporting findings connecting cancer and chemical exposure.

Most reports have not described exposures accurately, or they have failed to completely identify a causal factor (National Research Council, 1991). The Committee on Environmental Epidemiology was formed to assess the progress on hazardous waste assessment since the creation of Superfund and the Agency for Toxic Substances and Disease Registry. The committee concluded that no findings were conclusive enough to base policy on, because no measures are in place to accurately depict exposure assessments. It further found that there exists no comprehensive inventory of waste sites, no site discovery program, no minimum data set on human exposures, and no policy for immediate action if exposure exists (National Research Council, 1991). The report indicates that “the nation is not adequately identifying, assessing, or ranking hazardous-waste site exposures and their potential effects on human health” (p. 21).

Environmental toxins have long been thought to be causally related to the incidence of disease. Air pollution, specifically with carbon dioxide and sulfur dioxide, has been studied in association with asthma and pulmonary disorders (Carnow, Lepper, Shekelle, & Stamler, 1969). Water pollution, particularly with trichloroethylene and tetrachloroethylene, sparked a concern about childhood and adult leukemia in Woburn, Massachusetts (Brown & Mikkelsen, 1990). Similarly, numerous studies have been conducted that investigate the exposure-ailment connection (Landrigan, 1990; Neutra, Lipscomb, Satin, & Shusterman, 1991; Paigen, Goldman, Mougnant, Highland, & Steegman, 1987). These studies use descriptive and case-control methods and field investigations consisting of surveys and physical examinations, resulting in quantitative analyses in order to test hypotheses.

Descriptive studies portray disease patterns in populations according to person, place, and time, and they include time-series analyses (National Research Council, 1991). For example, a study performed by the National Cancer Institute used maps of cancer incidences and toxic waste sites, concluding that the high incidence of bladder cancer in northwestern Illinois counties was significant, a finding that led to an incidence study using survey methods (National Research Council, 1991).

A cohort study was employed with North Carolina residents who consumed raw polluted river water contaminated by an industrial site from 1947 to 1976. Residents’ rates of all forms of cancer were more than twice those expected in the general population (National Research Council, 1991). Once exposure ceased, rates returned to the expected level, adjusting for latency.
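Comparisons of this kind ("more than twice those expected") are conventionally expressed as a standardized incidence ratio (SIR): observed cases divided by the cases expected from general-population rates. A minimal sketch in Python; the counts below are hypothetical illustrations, not figures from the North Carolina report:

```python
import math

def sir(observed, expected):
    """Standardized incidence ratio (observed / expected cases) with an
    approximate 95% confidence interval via the log-normal approximation."""
    ratio = observed / expected
    half_width = 1.96 / math.sqrt(observed)  # z * SE of log(SIR), roughly 1/sqrt(O)
    return ratio, ratio * math.exp(-half_width), ratio * math.exp(half_width)

# Hypothetical counts: 30 cancer cases observed where 14 would be
# expected from general-population rates.
ratio, lower, upper = sir(30, 14)
print(f"SIR = {ratio:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
```

A confidence interval that excludes 1.0, as in this invented example, is what lets a cohort report claim rates "more than twice those expected" rather than chance variation.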

The epidemiologic case-control study carried out in Woburn, Massachusetts, yielded an association between leukemia and drinking from contaminated wells. The EPA could not pinpoint the source of contamination; therefore, it could not infer conclusively that the cases of leukemia were due to the proximity of a hazardous waste site (Lagakos, Wessen, & Lelen, 1986).
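Case-control designs of this kind quantify an association with an odds ratio computed from a 2x2 table of cases and controls, exposed and unexposed. A minimal sketch with a Wald confidence interval; the counts are invented for illustration and are not data from the Woburn study:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 case-control table with a Wald 95% CI.
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)

# Invented counts: 12 of 16 cases drank from the contaminated wells,
# versus 60 of 160 controls.
estimate, ci_low, ci_high = odds_ratio_ci(12, 60, 4, 100)
print(f"OR = {estimate:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```

Note how wide the interval is even for a large point estimate: with only a handful of cases, an association can be statistically detectable yet still leave the magnitude of risk highly uncertain, which is part of why such findings remain contested.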

Griffith, Duncan, Riggan, and Pellom (1989) analyzed EPA and cancer mortality data from 13 U.S. sites where there were major incidences of cancer between 1970 and 1979. They found evidence that contaminated ground water was used for human consumption at 593 waste sites in 339 U.S. counties in 49 states. Significant associations were found between several cancers and exposure to contaminated water in white males; these included cancers of the lung, bladder, esophagus, stomach, large intestine, and rectum (Griffith et al., 1989). Higher incidences of cancers of the lung, bladder, breast, stomach, large intestine, and rectum were found in white females in these counties when compared with females in counties that did not have hazardous waste sites (Griffith et al., 1989). However, the study has been criticized for its use of population-based incidences of cancer rather than individual-level estimates: researchers inferred that proximity to hazardous waste sites caused cancer without measuring individual exposure.

Wong, Morgan, Whorton, Gordon, and Kheifets (1989) performed an ecologic and case-control analysis to evaluate whether there was an association between groundwater contamination with dibromochloropropane (DBCP) and mortality from gastric cancer and leukemia. The only positive association that was found was in farm workers. No relationship was found for gastric cancer or leukemia with DBCP contamination of drinking water.

Neutra et al. (1991) found that individuals living near toxic waste sites had one or more bothersome symptoms that those living in control areas did not have. However, rates of cancer and birth defects were not found to be statistically significantly different for these individuals than for those in the control neighborhoods. Symptoms such as worrying, depression, and nervousness were more likely to be the result of knowledge of the site and its contaminants than the result of chemical exposure. Although some practitioners argue that residents near these sites do show higher incidences of asthma and psychological disturbances than individuals in control groups, the findings remain highly controversial (Neutra et al., 1991).

For the most part, these studies consist of survey and field investigation methodologies, relying on self-report methods. One problem with explaining associations that rely on self-report methods is that if residents want to be relocated or have other agendas, then the degree to which symptoms are reported may increase. Many residents felt that this was what some homeowners were hoping for at Love Canal. This remains one of the most critical problems with state and federal agency studies that seek to provide evidence of community risk.

With the increase in studies in this area, the public has been partially reassured by the knowledge that its concerns are at least being recognized. Specifically, cancer rates are still high, but the fear of human-made chemicals has largely been dispelled. Most recently, the organic food movement has been gaining legitimacy. Yet, many still doubt the health benefits behind this movement. Studies concerning environmental racism have been more prevalent, focusing on the incidence of lower-income, nonwhite families living near toxic waste sites. This focus has taken attention away from specific health problems; instead, the focus has been on issues of political economy and equity. This is not a criticism of environmental justice but rather a call for the convergence of natural science and sociology in order to address both issues. Other variables to be considered in these studies may include racial composition of counties, social class of counties, concentration of low-income occupations in counties, new housing starts in counties, and the percentage of welfare recipients per county.

The uncertainty of science has created cross-discipline dialogue. Social scientists have addressed environmental issues in studies of risk assessment, disaster relief (both natural and technological), toxic exposure, and other data-driven areas. Because of the risk of chemical exposure due to toxic waste, landfills emerged as one of the most imminent public health threats with the discovery of Love Canal. However, even in cases where studies to show an association between illness and exposure to toxic chemicals have been inconclusive, the message has been that these chemicals cause cancer and need to be eradicated.

An important role of science is to inform the public of findings, usually through the media. Epidemiologic studies deal with human populations and are often questioned based on the legitimacy of the data and the willingness of the agency or corporation funding the research to share findings with the public. These studies are also usually based on relatively small populations and a small number of events; this results in a lack of significant findings, because sample sizes are too small to generate statistically reliable conclusions. Researchers are asked to report conclusions to various interest groups that may have a stake in the research problem. The pressure of the public arena and media, with emerging concerns and consequences for public health and the environment, has decreased researchers' willingness to share data, lest they be criticized when the data do not fit the public agenda. Politics and public perception surpass what science is able to provide. Science’s inability to prove negatives has led to public policy that tries to control what cannot be established. This uncertainty shapes policy to err on the side of protection; yet in many communities the risks are endured regardless.
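The small-sample problem described above can be made concrete with a standard power calculation: detecting even a doubling of a rare illness rate with a two-proportion z-test requires thousands of residents per group. A rough sketch, assuming a hypothetical 1% background rate doubled to 2% near a site:

```python
import math

def n_per_group(p1, p2, z_alpha=1.959964, z_beta=0.841621):
    """Approximate per-group sample size for a two-sided two-proportion
    z-test at alpha = 0.05 with 80% power (Fleiss-style formula,
    no continuity correction)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical rates: a 1% background illness rate doubled to 2% near a site.
needed = n_per_group(0.01, 0.02)
print(needed)  # on the order of a few thousand residents per group
```

Since exposed neighborhoods often contain only a few hundred households, a real doubling of risk can easily go statistically undetected, which is consistent with the inconclusive findings the passage describes.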

Findings often snowball into hard-line conclusions and the perception of a problem when one may not exist, or vice versa. Risk perception and the realization of risk are two different things. Risk perception may encompass what one believes might occur or an understanding based on secondary information. Risk realization occurs when one is physically affected by the agent or situation and a decision to act is based on that encounter. The problem arises in this discrepancy. With different information from different scientific experts, the public is left to decide on its own who or what is right, based on the health and well-being of individuals and their families.

Freudenburg (1993) discussed the concepts of risk and recreancy in public decision making. He argues that institutions make risk decisions with increasing frequency, and that this growth in institutional responsibility for risk management has created a system in which responsibilities are often overlooked. Freudenburg coined the term recreancy to identify the institutional failure to follow through on a duty, responsibility, or broadly expected obligation to the collective. Individuals now not only decipher scientific studies for themselves but also question the role of institutional actors. Without corroborating data from an alternative institutional source that they trust, citizens do not know where to turn for clear answers about environmental toxins.

Community-based studies by community organizers have emerged in an attempt to address the failure of institutions to provide real, understandable answers regarding human health and exposure rates. Specifically, recent literature calls for more involvement of the scientific community in the decision-making process. A resurgence of popular epidemiology, since Lois Gibbs's attempt in 1978–1979, has found individuals using lay methods to determine association. Even if they do not result in strong scientific evidence, community-based studies at least provide the groundwork and show a need for more in-depth studies. Brown and Mikkelsen's 1990 study is a strong example of this method. The question of whether there was a connection between childhood leukemia and known contaminated well water divided the community, but it forced epidemiologic studies.

Coinciding with these revelations, other studies were being conducted that attempted to link other contaminated sites with adverse health effects. As Gots (1993) stated, most were laboratory studies in simulated environments. Examples of human studies existed only in the sociological and epidemiological literature (Brown & Mikkelsen, 1990; Gibbs, 1982; Landrigan, 1990; Neutra et al., 1991). Incidences of chemical scares were also prevalent. Headlines concerning the dioxin scare at Times Beach, Missouri; contamination of apple crops with the synthetic growth regulator Alar; and use of Agent Orange created the fear that human-made chemicals cause disease. Evidence existed that these specific chemicals may cause health problems in humans, but data on the incidence of illness relative to exposure and on synergistic effects of these chemicals were missing. Furthermore, there was even less information available about other potential threats to health, such as airborne and waterborne contaminants, environmental sensitivity disorders, and living in proximity to hazardous waste sites. To establish a causal relationship between chemical exposure and disease, obtaining valid measures and estimates of exposure is essential.

Environmental Movements

Contaminated Communities; The Challenge of Social Control; Environmental Problems as Conflicts of Interests; Disasters, Collective Behavior, and Social Organization; Love Canal: Science, Politics, People, and Power; and Powerlessness are just a few of the book titles that describe the scope and emergence of the mobilization surrounding environmental problems. Since the publication of Silent Spring, the struggle to define, understand, and resolve environmental problems has inundated environmental literature as well as the agendas of environmental organizations at both the national and local levels.

The environmental movement in the United States can be traced back to the early conservationists at the turn of the 20th century, whose focus was on control of natural resources for technological and societal use. Accompanying this was a movement toward the preservation of the natural environment simply for nature’s sake and separate from any use and/or value that human society had placed upon it.

The contemporary environmental movement embraced both of these traditions while focusing on building a political alliance to ensure the passage of legislation that would protect both nature and human health. As evidenced by the multitude of legislative victories the environmental movement claimed during the 1970s, the environmental movement was gaining prominence as one of the most successful efforts of social movement organizers.

Politically, momentum began to shift back toward the wise-use movement throughout the 1980s. Environmental problems were framed in opposition to capitalist goals. Politicians took an either/or stance: jobs or the environment. With one’s economic livelihood seemingly at stake, it is no wonder that concern for the environment was diminished in the public agenda. The environmental health movement is arguably one area that continued to keep environmental issues in the public’s consciousness. One of the classic and influential cases in environmental organizing, Love Canal, illustrates the interconnectedness of politics, science, and the environment.

To understand the factors contributing to the emergence, awareness, and mobilization around environmental problems, the scope and focus of the problem must be considered. This analysis focuses on the emergence of and mobilization around toxic waste sites found in residential communities. Literature addressing toxic waste sites in communities places Love Canal, New York, as the first community to encounter such a problem that received national media attention. Although community protests were occurring around the toxics issue as early as 1970, no other site received the same degree of national media attention (Szasz, 1994).

In 1978, Love Canal was declared a federal disaster area, but the final homeowner evacuation was voluntary, not mandatory, even though the state had said a health emergency might exist. Given the possibility of ill-health effects, residents were given the choice about whether to stay or move. Because of the lack of strong correlational evidence, public health officials were not able to substantiate a link between exposure to chemicals and disease (Robinson, 2002).

The questionable contaminated area was evacuated and became known as the Emergency Declaration Area (EDA). It was divided into seven sampling areas. Two studies were performed to assess the habitability and safety of the area. The first study was completed in 1982 by the New York State Department of Health (DOH), the EPA, and the U.S. Department of Health and Human Services. Problems arose about the study’s conclusion, which was that the EDA was as habitable as comparable control areas. The Congressional Office of Technology Assessment found that the study lacked information to determine whether unsafe levels of contamination existed and that it did not make clear what next steps should be taken. Thereafter, DOH and EPA conducted a second study on habitability; it was released in 1988. Habitability and safety have been studied in regard to numerous hazardous waste sites, but actual rates of illness have not been linked to exposure to toxic substances from nearby chemical waste sites.

The Superfund Act, passed in 1980, was written specifically in response to the known hazardous waste site at Love Canal. Policymakers recognized that industry used land-based disposal methods, that industrial sites were contaminated, and that an increase in clean air and water standards led to a decrease in land-based regulated disposal (Barnett, 1994). The problem was that there was neither an informed way of counting or tracking these sites nor evidence of adverse ecosystem and human health effects (Barnett, 1994).

Since Love Canal, no other neighborhood has received the same degree of attention, although many have encountered toxic waste contaminants in their communities (Brown & Mikkelsen, 1990; Bryant & Mohai, 1992; Cable, Walsh, & Warland, 1988). No conclusive, significant correlation between chemicals and cancer has been found at Love Canal or at the other identified exposure sites. Nor has any truly verifiable evidence been found that exposure to, and living near, any other toxic waste site causes disease, though disorders have been loosely associated with chemical exposure, such as asthma, respiratory disease, nerve damage, miscarriages, and cancer.

People living near these sites must often decide how much they want to expose themselves to risk. Once the presence of a waste site is known, they must decide, without data to guide their decisions, whether to stay in their homes or leave. This has historically interfered with the availability and collection of valid data. When a study is conducted, residents request to be informed of the results and progress of the study. Because most epidemiological studies require longitudinal or cohort analysis in order to be reliable and valid, it is advantageous to have a stable, nonmobile population. This raises ethical questions about researchers' obligation to disclose data relating to exposure before the study is completed. Researchers cannot both disclose exposure findings and expect residents to remain so that they can carry out the remainder of the study. Thus, individuals, families, and communities are asked to base their decisions on claims that cannot be substantiated one way or the other.

Toxic waste sites continue to be discovered in communities. In many cases, the resulting community struggles are extended battles. The operative phrase in many cases is “once a site is discovered.” The chemicals in Love Canal were buried 30 years before it was known to the community that their houses, school, and playground were built on top of and surrounding a chemical site containing 22,000 tons of waste. This is not to say that the problem didn’t exist before its discovery by residents; it just wasn’t defined as a problem. From the time the chemicals were buried to the discovery of the site by residents 30 years later, residents noticed dogs with burned noses, children with skin rashes, and increased rates of miscarriages, leukemia, and nerve and respiratory disorders. But they were not aware that these rates were out of the ordinary. The effects of the problem did not change, but the way the problem was represented did. The shift was in an awareness of the existence of the problem.

In addition to the chemical disaster at Love Canal, other environmental issues have been the subject of various social movement activities, as well as political legislation. In each instance, public perception influences how and whether the problem is acted on by those with the power to make a difference.

Culturally and socially, environmental problems represent problems of social organization, communication, and socialization. Social scientists can look toward the phenomenon, visible in the reaction to environmental problems, to begin making sense of culture and society at large. Our understanding of environmental issues as primarily social constructions offers insight into how these issues are created, maintained, and resolved.

For example, in many cases where chemical contamination is the focal issue of community groups, the level of risk is perceived by affected individuals rather than established by science. It is the social processes in a community that lead to risk determination, not the natural science interpretations of an issue. Individuals have been socialized to trust science for valid information. When the determination of risk is uncertain, individuals are left to determine the level of risk for themselves by other means. In most cases, this determination is made through contact with state or federal government officials, through collaboration with other community members, or through other sources of information, such as the media. This framework helps to explain disagreements over the seriousness of most environmental issues, from global climate change to mountain-top coal removal.

The subjective reality of environmental problems becomes visible in terms of how the issue is circulated in cultural discourse. Each stakeholder constructs different means of projecting information for public consumption. When presented in the media, the perception is that information is true and accurate. Most often the determination of risk takes place in the form of a public meeting. In this situation, public officials are in control of the meeting, drawing on public anticipation surrounding the specific issue and information to be released. At Love Canal, for example, officials kept the information to be discussed at the meeting private until the meeting in order to build anticipation and increase their power over the dissemination of information.

At both the cultural and social level, power is maintained through these exercises. Often, the state controls the dissemination of information that individuals perceive to be true and accurate. However, different modes of collaboration among community members can create a different means of risk determination. The sharing of common experiences among community residents can lead to a broader sense of mobilization. Once commonalities are recognized, residents begin to determine their own level of risk. Risk perception is based on the potential danger of a problem. The sources on which individuals base their information and understanding are numerous. Each source has developed a frame of events and information on which they base their version of reality. Whether from the media, science, the state, or local knowledge, such frames serve as a means to display a problem in terms of a specific group. Social movement development, in relation to the environment, offers a powerful tool for individuals looking to construct the frame of a given environmental reality.

The ways in which environmental realities have been constructed influence how they will be acted on socially, culturally, and politically. Cultural discourse then circulates in the public sphere and becomes normative. Environmental issues become part of the public dialogue. This dialogue serves to help develop an understanding about the factors that coalesce to create, maintain, and resolve social processes that influence environmental problems.

Community-level interaction is an interesting social space from which to witness environmental understanding. Community-based, environmental problems affect individuals in many ways. Some communities mobilize and form environmental organizations to address a specific problem. Others, with existing community organizations, add environmental problems to their agenda. Environmental problems can vary in scope, size, and duration.

Mobilization in these communities may occur due to individuals’ fear that nothing is being done to ensure the safety of their children and families. It may also occur on the basis of frustration and an inability to understand what and why this is happening in their community. In addition, community groups often mobilize as a result of a lack of trust in government. The mobilization of individuals to resist the state’s discourse challenges the power of the state. The level of trust in government is a key factor in determining the level of power the state can maintain during the presentation of its frame. For example, if trust in government is low, then a stronger frame needs to be developed to legitimize the government’s position. Government often emerges as the key stakeholder, as the actor that will have the power to create change.

Previous research addresses the state’s desire to maintain legitimacy at the same time that community groups seek to resist state discourse. Admitting that there is a problem shows that the state is capable of mistakes, and thus, the state’s legitimacy can be questioned and it is vulnerable. The goal in the rhetoric of the state is not to raise questions, thereby maintaining legitimacy.

Most environmental problems are categorized by place: global, local, or national. These categories are not mutually exclusive. For example, depletion of the stratospheric ozone layer is a global problem because of the protection that layer provides the biosphere from ultraviolet rays. Yet ozone can also be a local problem in an area where heavy smog produces high surface-level ozone concentrations, as in a highly urban area like Los Angeles.

Similarly, the discovery of toxic waste sites across the United States can be seen as a national problem. But in the specific communities where these sites are discovered, it is a local problem affecting individuals directly. The problem is no longer distant; it is now part of their community. This developing framework of environmental issues has helped individuals become aware of the multitude of impacts these problems have. Social scientists have been able to develop an understanding of the environment that moves away from depicting the earth as something separate from human society and toward treating it as a system with interrelated consequences and realities. One of the most vivid paradigm shifts has been the movement away from an anthropocentric worldview and toward an environmental worldview, represented in the shift from the human exemptionalism paradigm (HEP) to the new environmental paradigm (NEP).

Social scientists focus on this shift as a way to explain a cultural movement that has embraced an understanding of the impact society has on the environment. Arguably, once the NEP is part of the natural discourse on environmental issues, such issues become more easily recognized as problems arising from a system out of balance. This approach focuses on sustainable development and other modes of development that provide environmentally sensitive growth models. These efforts move toward a culture that accepts responsibility for ensuring less devastating environmental impacts in the future. As environmental sociologists and other environmental researchers seek answers for a sustainable society, we must consider the devastating impacts of our current modes of production. New modes of production that incorporate innovative green energy solutions will provide a stronger, more sustainable economy and environment for culture and society.


NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Dusetzina SB, Tyree S, Meyer AM, et al. Linking Data for Health Services Research: A Framework and Instructional Guide [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2014 Sep.


2 Research Environment

A foundational element of any research project is the research program environment. In the context of comparative effectiveness research (CER) using linked data, a secure and well-performing environment is important for several reasons. First, it helps build and assure trust between researchers and the providers of sensitive data, whether patients, registry administrators, insurance claims administrators, or others. If data providers are confident that a research partner has strong administrative and technical security systems and takes data security seriously at a programmatic level, they will be more willing to provide sensitive data to the researchers, including data with unique identifiers. As we describe in Chapters 4 and 5 of this report, linkage quality is typically much stronger when unique identifiers are available. A secure research environment and capable information technology support can therefore directly influence the quality of the research data obtained and, by extension, the research results. With faith in the integrity and security of the research environment, data providers may also be more likely to supply other unique data that can drive truly innovative research.

A secure and well-performing environment is also important in that system performance and security controls can directly influence the scope of the research project, including the size and complexity of the data that can be managed and linked to support the project. Often, as the scope and complexity of research projects increase and the data volume grows, computing environments are challenged to scale up to ensure seamless operations.

This chapter describes key considerations concerning the research environment, including the technical platform and security considerations, to guide researchers as they seek to develop or optimize their systems for CER projects using large volumes of data such as linked registry and administrative claims data.

Computing Systems and the Balance Between Security and Usability

As shown in Figure 2.1 , security and usability often stand on opposite ends of a spectrum. The tradeoff for having a highly secure system is decreased accessibility and practical usability, whereas systems that are highly accessible often face greater challenges in assuring data security. Understanding the scope of the research project and the needs of the researcher or research team is important to specifying a system configuration that meets the needs of the project and balances security with usability. For example, a single researcher with a small research project of limited scope will likely have different needs for a computing environment compared with a large decentralized research team undertaking a multiyear research study using national data.

Figure 2.1. Security versus usability.

We present three different computing system scenarios to help researchers identify where on the spectrum their program may fall. While this discussion does not take into account the number of users, it is important to note that the cost of large systems varies greatly depending on existing infrastructures and purchasing prices of solutions offered by various vendors.

Desktop Computer: High Security, Low Usability, Low Cost

In this environment, the user accesses all data on a dedicated desktop computer located in a dedicated and constantly locked office with limited network access.

  • Security : The risks of theft and network attacks are reduced to a minimum.
  • Usability : Multiple users will never be able to access the data concurrently.

Central Server: Medium Security, Medium Usability, Moderate Cost

This environment allows multiple users to connect remotely through a secure command line (e.g., through a secure protocol such as SSH) to a central computing server housing all the data and tools.

  • Security : Risk increases when information is accessed over a network. Controlling individual users’ access to information creates new administrative challenges.
  • Usability : Multiple users can collaborate on a central system. Computing jobs can be submitted in the background and the progress can be checked from remote locations.

Virtual Remote Desktops: High Security, High Usability, High Cost

This environment allows multiple users to connect remotely to virtualized desktops over the Internet from laptops or desktop computers to access shared data and tools.

  • Security : Since the accessing computers act only as a display, keyboard, and mouse, the data never leave the server environment. Printing can even be restricted to dedicated, secure printers to control paper output.
  • Usability : Each user connects to a virtual computer in the central environment. All tools are housed on the server but accessed through existing desktops.

Building the Technical Platform

Regardless of the selected technologies, the number of users, or the security requirements, the technical platform can be broken down into components ( Figure 2.2 ). Defining the requirements for each component produces a meaningful information technology plan.

Figure 2.2. Component architecture.

Securing the Platform

Most regulatory security frameworks, such as those established under the Health Insurance Portability and Accountability Act (HIPAA) and the Federal Information Security Management Act (FISMA), focus on controlling the confidentiality, integrity, and availability of information. The effort needed to implement administrative, physical, and technical safeguards tends to scale up as system complexity increases. Regulatory requirements and risk assessments will strongly affect the technical implementation.

Users Accessing Platform

To support a wide range of innovative research on complex linked data resources, team members’ expertise tends to become narrower and deeper. Work is divided among individuals to cover areas such as data management, data linking, cohort discovery, advanced modeling, and more. Complex research projects depend on the seamless integration and collaboration of these users and the use of their preferred tools. Defining user profiles and job responsibilities establishes the main usability properties of the expected environment. Examples of typical roles within complex project teams include the following.

Role: Data Manager . The data manager is responsible for the care of the data. This might include importing new data, converting file formats, preparing and receiving data carriers (storage devices), archiving obsolete data, and granting access to data.

Role: Data Linking . The linking expert is responsible for linking data sources. This might include cleaning linking variables, building linking methods, and cohort discovery (e.g., selection of patients meeting specific study-inclusion criteria), resulting in datasets for various research projects.

Role: Analyst/Statistician . The statistician is responsible for all modeling aspects of a research study. These might include creating analytic cohorts for study questions (using previously linked deidentified data), preparing data for modeling, and analyzing data to meet project objectives.
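The division of labor described above can be made concrete as an access-control matrix. The role names, permission names, and helper function below are hypothetical illustrations of the idea, not part of the AHRQ framework:

```python
# Hypothetical role-to-permission matrix for the three roles described above.
# Permission names are illustrative; a real system would enforce these at the
# database and file-system level, not in application code alone.
ROLE_PERMISSIONS = {
    "data_manager": {"import_data", "convert_formats", "archive_data", "grant_access"},
    "linking_expert": {"read_identified_data", "run_linkage", "build_cohorts"},
    "analyst": {"read_deidentified_data", "run_models"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True if the given role holds the given permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Note that in this sketch the analyst, who works only on previously linked deidentified data, deliberately lacks any permission on identified data; that separation mirrors the role descriptions above.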

Gaining Access to Platform

Access management controls how users gain access to the system and data. An existing organization might have a central user-management process that establishes authentication with a simple username/password combination. More advanced two-factor authentication methods ensure that a compromised password alone does not grant access to the system. Biometric authentication (e.g., a fingerprint reader) verifies the identity of the intended user. Examples of commonly used authentication methods and their pros and cons are provided in Appendix 2.1 .
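As a sketch of how the second factor in two-factor authentication commonly works, the function below implements the time-based one-time password (TOTP) algorithm of RFC 6238 using only the Python standard library. It illustrates the mechanism; production systems should use a vetted authentication library rather than hand-rolled code:

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    if for_time is None:
        for_time = int(time.time())
    counter = struct.pack(">Q", for_time // step)       # 30-second time step
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890" at Unix
# time 59 yields the 8-digit code "94287082".
```

Because the code changes every 30 seconds and is derived from a shared secret, a stolen password alone is not enough to log in, which is exactly the property the paragraph above describes.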

Processing Power of Platform

The processing power of the computing system directly affects the time it takes to manipulate the data. As linking processes touch the same information repeatedly, tuning parameters to optimize the performance of hardware and software will reduce the run times. For considerations for hardware performance, see Appendix 2.1 .

It is important to understand that the researcher’s network has an impact on data flow. The network can quickly become the bottleneck for moving data, resulting in an exceptionally slow response. In an optimized setup, the connection between data and processing components is a dedicated Gigabit (1,000Mbps) network or even fiber optics. Any components between the processing and storage, such as firewalls, network switches, or routers, will reduce information flow. In a setup in which the data are stored on hard drives directly attached to the processing system, network performance will have a limited impact on data flow.
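A quick back-of-the-envelope calculation shows why the network matters. The sketch below estimates best-case transfer time (the dataset sizes and link speeds are example values, and protocol overhead is ignored):

```python
def transfer_seconds(size_gb: float, link_mbps: float) -> float:
    """Best-case time to move size_gb gigabytes over a link_mbps link.

    1 GB = 8,000 megabits; real transfers are slower because of protocol
    overhead and intermediate devices such as firewalls and routers.
    """
    return size_gb * 8_000 / link_mbps

# Example: a 500 GB extract over a dedicated Gigabit (1,000 Mbps) link takes
# 500 * 8000 / 1000 = 4,000 seconds (a little over an hour), and roughly
# ten times longer over a shared 100 Mbps office network.
```

Numbers like these make it easy to see when the network, rather than processing or storage, will be the bottleneck for a given project.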

Data Storage Platform

The storage performance directly affects the time it takes to perform data tasks. For example, tasks such as cleaning and standardizing data, combining data sources, or performing exploratory analyses of linked data sources are storage-intensive activities. The main technical characteristics of the storage platform are size and speed. A storage device is attached using a specific technology, such as Serial Advanced Technology Attachment (SATA), Serial Attached SCSI (SAS), Universal Serial Bus (USB), or a Storage Area Network (SAN). Various vendors sell enterprise storage solutions encapsulating multiple storage devices in a single appliance.

Storage Size . When purchasing data carriers, it is important to understand that raw physical capacity and actually available capacity can differ greatly depending on the installation. Methods used to prevent data loss, such as a Redundant Array of Independent Disks (RAID), might require as much as twice the physical capacity of the data being stored (e.g., RAID 1 mirroring duplicates every block). The file system used to store data also affects available capacity. A data carrier is divided into blocks, like a blank book with many pages, and the block size is fixed for the entire file system. As an example, if the block size is 1,024 characters (bytes) and a file of 1,500 characters is saved, it will consume 2,048 physical bytes on the data carrier. Since partially used blocks cannot be used for other files, these bytes are “lost.”
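The block-size arithmetic in the example above can be computed directly. A minimal sketch:

```python
import math

def allocated_bytes(file_bytes: int, block_size: int = 1024) -> int:
    """Physical bytes a file consumes on disk: whole blocks only."""
    return math.ceil(file_bytes / block_size) * block_size

# The 1,500-byte file from the text occupies two 1,024-byte blocks:
# allocated_bytes(1500) == 2048, so 548 bytes are "lost" to the partial block.
```

Summing this function over a planned file inventory gives a more realistic capacity estimate than summing raw file sizes.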

Storage Speed . Storage devices have two speed-related properties: (1) the time it takes to locate data on the carrier (access time, measured in milliseconds and dominated by head seek time and rotational latency) and (2) the continuous read/write performance. Rotational latency depends mainly on how fast the disk is spinning; common rotation speeds are 5,400, 7,200, 10,000, and 15,000 revolutions per minute (RPM). In the case of a solid state disk (SSD), access time is extremely low, as there are no moving parts. Continuous read/write performance depends not only on how fast the disk spins, but also on how the disk is attached to the processing system.

A well-performing storage system can read/write information at rates of 100MB/s or more. A disk spinning at 5,400 or 7,200 RPM, as delivered in standard laptops or desktops, generally cannot achieve this. In comparison, an SSD attached over SATA can easily reach read/write rates of 500MB/s or more. To optimize cost, a computing system can be outfitted with slower/cheaper storage for archiving in combination with fast analytic storage to support powerful processing.
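The rotation speeds and throughput figures above translate into concrete numbers. The sketch below computes average rotational latency (half a revolution) and best-case continuous read time; the device figures are the illustrative values from the text:

```python
def avg_rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency: half a revolution, in milliseconds."""
    return 30_000 / rpm  # 60,000 ms per minute / rpm / 2

def read_seconds(size_mb: float, rate_mb_s: float) -> float:
    """Best-case continuous read time for size_mb megabytes at rate_mb_s."""
    return size_mb / rate_mb_s

# A 7,200 RPM laptop disk averages about 4.2 ms of rotational wait per access,
# versus 2.0 ms for a 15,000 RPM enterprise disk. Reading a 100,000 MB
# (100 GB) file takes 1,000 s at 100 MB/s (fast HDD) but 200 s at
# 500 MB/s (SATA SSD).
```

Such estimates help decide where to spend the budget: fast analytic storage for active data, slower and cheaper storage for archives.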

Securing the Research Environment

Federal and State laws have mandated several sets of regulations, each intended to address one or more of the following objectives:

  • Confidentiality : Preserving authorized restrictions on information access and disclosure, including means for protecting personal privacy and proprietary information
  • Integrity : Guarding against improper information modification or destruction; includes ensuring information nonrepudiation and authenticity
  • Availability : Ensuring reliable and timely access to information

Regulatory Requirements

The research environment must comply with applicable laws to protect the hosted information. Because HIPAA governs health information collected by covered entities mainly during health encounters, alternative research datasets might require compliance with other contractual requirements or State regulations. Researchers should consult with a regulatory expert as early as possible to ensure that they understand the scope of all applicable laws.

Regulatory requirements generally describe what must be controlled and leave it up to the research team to define how to reach the required controls by implementing adequate policies and procedures. Some State-level privacy laws may govern even self-collected information. In the following sections, we describe a sample of regulatory requirements that might be applicable to researchers working with sensitive health information.

Federal Information Security Management Act of 2002 . FISMA defines a mandatory framework for managing information security for all information systems used or operated by a U.S. Federal Government agency or by a contractor or other organization on behalf of a Federal agency. It requires the development, documentation, and implementation of an information security program. The National Institute of Standards and Technology (NIST) standards and guidelines (Special Publications 800 series) and Federal Information Processing Standards (FIPS) publications further define the framework of the program.

Health Insurance Portability and Accountability Act . The U.S. Congress enacted HIPAA in 1996. Security standards establishing requirements to safeguard Protected Health Information (PHI), both paper and electronic (ePHI), were issued as part of HIPAA in April 2003. The security requirements specifically address administrative, physical, and technical safeguards meant to ensure that patient health records and Personally Identifiable Information (PII) remain as secure as possible.

State Security-Breach Laws . Forty-six States, the District of Columbia, and multiple U.S. territories (Guam, Puerto Rico, and the Virgin Islands) have enacted privacy-breach notification laws. While these laws can vary from State to State, they generally follow a similar framework: defining “sensitive data,” setting out the requirements that trigger the breach-notification process, identifying actors and roles in the notification process, defining to whom the law applies, and describing the cases in which certain parties and/or information may be exempt from notification requirements ( www.fas.org/sgp/crs/misc/R42475.pdf ). Researchers must understand their obligations under the relevant State breach-notification legislation and should consult legislative resources such as the National Conference of State Legislatures for regulatory text ( www.ncsl.org ).

Identifying Sensitive Data

Sensitive data are the information protected by regulatory requirements. The definition of sensitive data varies widely between laws. In some cases, the scope of a Data Use Agreement (DUA) could even require aggregation or define minimum cell sizes. In the following section, we provide a summary of regulatory definitions per FISMA, HIPAA, and State security-breach laws.

Personally Identifiable Information, FISMA . As used in information security, PII is any information maintained by an agency that can be linked to an individual. This includes (1) any information (e.g., name, Social Security Number, date and place of birth, mother’s maiden name, or biometric records) that can be used to distinguish or trace an individual’s identity and (2) any other information (e.g., medical, educational, financial, and employment information) that is linked or linkable to an individual. Examples of PII include, but are not limited to—

  • Name, such as full name, maiden name, mother’s maiden name, or alias
  • Personal identification number, such as Social Security Number, passport number, driver’s license number, taxpayer identification number, or financial account or credit card number
  • Address information, such as street address or email address
  • Personal characteristics, including photographic image (especially of face or other identifying characteristic), fingerprints, handwriting, or other biometric data (e.g., retina scan, voice signature, facial geometry)

Protected Health Information, HIPAA . The HIPAA Privacy Rule protects all “individually identifiable health information” held or transmitted by a covered entity or its business associate in any form or media, whether electronic, paper, or oral. The Privacy Rule calls this information Protected Health Information , or PHI.

Under HIPAA, individually identifiable health information is information, including demographic data, that relates to any of the following:

  • The individual’s past, present, or future physical or mental health condition
  • The provision of health care to the individual
  • The past, present, or future payment for the provision of health care to the individual
  • Information that identifies the individual or for which there is a reasonable basis to believe it can be used to identify the individual

Individually identifiable health information includes many common identifiers (e.g., name, address, birth date, Social Security Number). The Privacy Rule excludes from PHI employment records that a covered entity maintains in its capacity as an employer, and educational and certain other records subject to or defined in the Family Educational Rights and Privacy Act, 20 U.S.C. §1232g.

Electronic Protected Health Information, HIPAA . The HIPAA Security Rule protects a subset of information covered by the Privacy Rule, which is all individually identifiable health information a covered entity creates, receives, maintains, or transmits in electronic form. The Security Rule calls this information electronic Protected Health Information , or ePHI. The Security Rule does not apply to PHI transmitted orally or in writing.

Limited Datasets, HIPAA . HIPAA also has a provision for Limited Datasets (LDSs) from which most but not all potentially identifying information has been removed. Elements in an LDS are often necessary for research; however, Direct Identifiers , a subset of PHI defined by HIPAA §164.514(e)(2), must be removed. The Direct Identifiers include—

  • Postal address information other than town or city, State, and ZIP Code
  • Telephone numbers
  • Fax numbers
  • Electronic mail addresses
  • Social Security Numbers
  • Medical record numbers
  • Health plan beneficiary numbers
  • Account numbers
  • Certificate/license numbers
  • Vehicle identifiers and serial numbers, including license plate numbers
  • Device identifiers and serial numbers
  • Web Universal Resource Locators (URLs)
  • Internet Protocol (IP) addresses
  • Biometric identifiers, including finger and voice prints
  • Full-face photographic images and any comparable images
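As a simple illustration of screening a free-text field for a few of these Direct Identifiers, the sketch below uses regular expressions. The patterns are deliberately naive examples covering only three identifier classes; a real de-identification pipeline needs validated rules for every class listed above:

```python
import re

# Illustrative patterns only -- not a compliance tool. Real pipelines need
# validated, far more permissive rules for each HIPAA Direct Identifier.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def flag_direct_identifiers(text: str) -> dict:
    """Return the identifier classes found in text, with their matches."""
    hits = {name: pat.findall(text) for name, pat in PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}
```

A scan like this is useful as a final sanity check that a dataset intended to be a Limited Dataset does not still contain obvious direct identifiers.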

LDSs can include the following PHI:

  • Date of birth
  • Date of death
  • Dates of service
  • Town or city

Personal Information, State Security-Breach Laws . Researchers should review applicable State legislation for definitions of Personal Information. Generally, these definitions do not vary substantially from State to State and are very similar to Federal definitions. For example, the North Carolina State Security-Breach Laws (North Carolina General Statute §75-65) define Personal Information as a person’s first name or first initial and last name in combination with any of the following identifying information:

  • Social Security Number or employer taxpayer identification numbers
  • Driver’s license, State identification card, or passport numbers
  • Checking account numbers
  • Savings account numbers
  • Credit card numbers
  • Debit card numbers
  • Personal Identification Number (PIN)
  • Electronic identification numbers, electronic mail names or addresses, Internet account numbers, or Internet identification names
  • Digital signatures
  • Any other numbers or information that can be used to access a person’s financial resources
  • Biometric data
  • Fingerprints
  • Parent’s legal surname before marriage

Summary of Protected Information

The research team may find it useful to summarize in matrix form the protected information types identified by applicable regulatory requirements. This matrix will help the research team identify information in datasets and assess the policies and procedures that might apply to a specific work task. Table 2.1 shows an example of one such matrix.

Table 2.1. Example of matrix summarizing protected information types.

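One lightweight way to keep such a matrix alongside project code is as a simple mapping from data element to the regimes that protect it. The entries below are hypothetical examples of the kind of matrix Table 2.1 describes, not a reproduction of it:

```python
# Hypothetical protected-information matrix: which regulatory regime treats
# which data element as protected. Entries are illustrative only.
PROTECTED_MATRIX = {
    "name":                   {"FISMA_PII", "HIPAA_PHI", "STATE_PI"},
    "social_security_number": {"FISMA_PII", "HIPAA_PHI", "STATE_PI"},
    "diagnosis_code":         {"HIPAA_PHI"},
    "city":                   set(),  # permitted in a HIPAA Limited Dataset
}

def regimes_for(element: str) -> set:
    """Regulatory regimes under which a data element is protected."""
    return PROTECTED_MATRIX.get(element, set())
```

Encoding the matrix this way lets the team check, for a given work task, which policies and procedures a dataset's elements trigger.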

Building and Implementing a Security Plan

Meeting applicable regulatory requirements requires thoughtful planning and management. While it is tempting to think of information security in terms of technological controls, successful security management requires people, processes, and technology in equal proportion. An overarching security management plan addresses how people, processes, and technology will be leveraged to maintain the confidentiality, integrity, and availability of sensitive data within the bounds set by applicable regulatory requirements.

While development of the security management plan is an iterative process, with sections added or refined as planning activities proceed, the document will ultimately address the following:

  • Security laws and regulations describe those regulatory requirements applicable to the research team, as discussed previously.
  • Major functions list those functions the security plan is intended to accomplish.
  • Scope lists those sensitive data types the security program is intended to address.
  • Roles and responsibilities describe roles that will be held by members of the organization and their responsibilities vis-à-vis information security.
  • Management commitment represents an official statement on the part of the applicable management body in support of the processes and procedures documented within the security plan.
  • FISMA security categorization and impact level define the FISMA category assigned to the data and information systems covered by the security plan. This section is applicable only to those systems subject to FISMA.
  • Compliance and entity coordination describe which roles are responsible for ensuring organizational compliance with the security plan and which roles are responsible for coordinating security activities among relevant entities external to the research team (e.g., data centers, overarching security offices).

  • Security documentation control
  • Risk management (described in further detail below)
  • Workforce security
  • Access management
  • Security training
  • Incident reporting
  • Contingency planning
  • Security assessment
  • Facility access
  • Workstation access
  • Devices and removable media
  • Data integrity
  • Authentication
  • Network security
  • System activity review/audit

At the outset of security planning, the research team should be able to define the security laws and regulations, major functions, and scope sections. Roles and responsibilities, management commitment, entity coordination, and FISMA categorization (if applicable) can be defined further through stakeholder meetings. The processes and procedures documented in the subplans will be developed as part of the risk-management process described below.

Workforce Training

A training plan defines working procedures, emergency and incident management, sanction policies, policies and procedures for informing members of the workforce about their roles and responsibilities, and other relevant procedures. Many large research environments may be able to leverage existing training modules, such as training on HIPAA, research ethics, basic computer and network use, and basic human resources policies. Retraining on an annual basis is advisable.

Managing Risks

Research teams must first understand regulatory requirements, then select and implement adequate security controls to meet these requirements and to mitigate risks posed to the security of the organization’s information systems and data. Often, discussions of information security mistakenly emphasize specific technical safeguards. An emphasis on risk management, however, properly defines technical solutions as the means by which organizational risks are controlled. Risk management, therefore, drives information security planning. A comprehensive risk-management program not only allows data custodians to identify risks posed to their data, but also provides a framework for selection of functional and technical security controls.

Data custodians subject to FISMA requirements should consult NIST guidance for implementing a FISMA-compliant life-cycle program, which includes detailed volumes of guidance and controls. We illustrate a more general risk-management framework in Figure 2.3. This framework envisions risk management as a continuous cycle of assessing, addressing, and monitoring organizational risk to ensure the confidentiality, integrity, and availability of information systems.

Figure 2.3. Risk-management cycle.

Identifying and Assessing Risk

Risk identification is, simply put, the process of identifying and documenting potential threats to the research team’s information and information systems. It is conducted on every technology, process, and procedure within the scope of the environment. Risk identification can be carried out in a variety of ways, including brainstorming sessions, documentation reviews, assumptions analysis, cause-and-effect diagramming, strengths/weaknesses/opportunities/threats (SWOT) analysis, and expert consultation. Including an independent third party, be it an outside consultant or representatives from a separate group within the research team, provides an external point of view that is invaluable in fully defining the spectrum of potential adverse events. Regardless of the method used, this process must clearly identify and document the source of each risk and its impact should it be realized.

Identified risks are then assessed along two primary dimensions: probability of occurrence and criticality of impact. Actions and mitigations planned in the next phase of the risk-management cycle will be based largely on each risk’s score as assessed during this phase.

Risk scores evaluate the combination of the probability and impact of a security breach/incident. Higher scores represent higher security risks. Lower scores represent reduced security risks. Table 2.2 provides an example of how to plot and assess the severity of a security breach/incident (criticality of impact) and the likelihood of occurrence of an event.

Table 2.2. Example of risk scores.

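A common way to realize a scoring scheme like Table 2.2 is to multiply a probability rating by an impact rating on a small ordinal scale. The 1–5 scales and the severity thresholds below are illustrative assumptions, not values prescribed by the text:

```python
# Illustrative risk scoring: score = probability rating x impact rating,
# each on a 1 (low) to 5 (high) ordinal scale. Thresholds are assumptions.
def risk_score(probability: int, impact: int) -> int:
    """Combine probability and impact into a single risk score."""
    if not (1 <= probability <= 5 and 1 <= impact <= 5):
        raise ValueError("ratings must be between 1 and 5")
    return probability * impact

def severity(score: int) -> str:
    """Bucket a score into a severity band (example thresholds)."""
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"
```

Higher scores represent higher security risks, so a likely event with severe impact (e.g., probability 4, impact 5, score 20) lands in the highest band.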

Tracking Risks

The risk register is the collection of all identified risks, their assessed impact and probability, and possible actions/mitigations. Both HIPAA and FISMA mandate the analysis of risk and a record thereof. Table 2.3 is an example of a generic risk register.

Table 2.3. Example of generic risk register.

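A generic risk register like Table 2.3 can be sketched as a simple collection of records. The fields below mirror the elements named in the text (the identified risk, its assessed probability and impact, and the planned response); everything else, including the 1–5 scales, is an illustrative assumption:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One row of the risk register (field names are illustrative)."""
    risk_id: str
    description: str
    probability: int    # 1 (rare) to 5 (frequent)
    impact: int         # 1 (negligible) to 5 (severe)
    response: str = ""  # avoidance / acceptance / transfer / mitigation

    @property
    def score(self) -> int:
        return self.probability * self.impact

@dataclass
class RiskRegister:
    """Collection of all identified risks and their assessments."""
    entries: list = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def prioritized(self) -> list:
        """Entries sorted from highest to lowest risk score."""
        return sorted(self.entries, key=lambda e: e.score, reverse=True)
```

Because HIPAA and FISMA both mandate a record of the risk analysis, keeping the register as a dated, versioned artifact (however it is stored) is part of the compliance trail.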

Planning Risk Responses

Once risks have been identified, assessed, and documented in the risk register, data custodians and other stakeholders can plan appropriate methods of dealing with each risk. Risk responses can be divided into four categories: avoidance, acceptance, transfer, and mitigation.

  • Risk avoidance occurs when a research team takes the necessary actions to reduce the likelihood of risk realization to close to (if not exactly) zero. Generally, risk avoidance is the most desirable method of dealing with risks; however, it is often cost prohibitive or simply infeasible to avoid all risks.
  • Risk acceptance occurs when a research team chooses to accept the consequences of a risk should it be realized. Risk acceptance is generally recommended when the impact of a risk is small or when the probability of occurrence is significantly lower than the cost to avoid, transfer, or mitigate. Each research team must define its own criteria for what constitutes an “acceptable” risk.
  • Risk transfer occurs when a research team passes the impact of risk realization on to another party. The most common form of risk transfer is insurance. Risk transfer is feasible only when the impact can be clearly measured and addressed by the third party.
  • Risk mitigation occurs when a research team takes steps to reduce the probability or impact of risk realization. Risks that cannot be avoided, accepted, or transferred must be mitigated.

After a research team decides whether to avoid, accept, transfer, or mitigate a risk, it must determine the necessary steps to do so. At this juncture, the research team will identify the necessary and appropriate technical controls to either avoid or mitigate certain risks. Actions identified in this phase must also be documented in the risk register.
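One hedged way to connect assessed scores to the four response categories is a simple decision rule. The thresholds and the feasibility flags below are illustrative only, since, as noted above, each research team must define its own criteria for what constitutes an acceptable risk:

```python
# Illustrative decision rule for choosing a risk response.
# The acceptance threshold and feasibility flags are assumptions, not policy.
def plan_response(score: int, avoidance_feasible: bool,
                  impact_measurable: bool, acceptance_threshold: int = 4) -> str:
    """Return one of the four response categories for an assessed risk."""
    if score <= acceptance_threshold:
        return "acceptance"   # small impact or low probability of occurrence
    if avoidance_feasible:
        return "avoidance"    # most desirable when not cost prohibitive
    if impact_measurable:
        return "transfer"     # e.g., insurance, when impact can be measured
    return "mitigation"       # reduce the probability or impact instead
```

The ordering encodes the text's guidance: avoidance is preferred when feasible, transfer requires a clearly measurable impact, and whatever cannot be avoided, accepted, or transferred must be mitigated.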

Implementing Risk Responses

During the implementation phase, the research team develops and deploys the technical controls identified in the planning phase. Just as importantly, the research team must also document the controls selected, develop all necessary records, and train stakeholders accordingly. Users must understand not only how to use any security controls implemented but also the “rules of behavior” for maintaining a secure environment. Technical controls alone are not sufficient to create a fully secure environment; users and other stakeholders must foster and maintain a culture of security.

Monitoring Risks

Risk management is an ongoing cyclical process. The research team must periodically reassess the environment for new or changing risks, which in turn must be identified, assessed, and addressed through planning and action. Thoughtful and frequent monitoring of risk allows a research team to adapt more easily to changes, both expected and unexpected, without compromising information security.

Conclusions

A research environment incorporating a secure and well-performing computing platform represents the operational backbone for conducting innovative research using complex linked data sources. Securing and safeguarding the information not only meets legal and regulatory requirements but also builds needed trust among stakeholders. Data providers will be more open to providing access to their information, researchers will be confident in accessing sensitive data, and programmers and analysts will operate effectively in a standardized environment with consistent application of technologies and tools. The technical implementation, combining performance and storage, will enable complex data management, as described in Chapter 3. Supported by leadership, successfully implemented security policies and procedures fit seamlessly into daily workflows, reducing and mitigating potential risks. The environment is now ready to support research and to receive even the most sensitive data.

Appendix 2.1. Procedures and Processes To Enhance Data Security

Moving Sensitive Data Using CDs or DVDs

Perform the following steps to create and transport sensitive data using CDs or DVDs. This procedure can be adapted for electronic transfer using a protocol such as the secure file transfer protocol (SFTP).

  • Create a new media number and add it to a data carrier list tracking all movable media containing sensitive data.
  • Create a local folder with the media number as the name.
  • Assemble all the data in the created folder.
  • Generate an encryption key using a GUID (globally unique identifier) tool such as that found at www.guidgenerator.com/ and print it on a document along with the media number.
  • Create an archive using a PGP ( http://en.wikipedia.org/wiki/Pretty_Good_Privacy ) encryption tool with all the contents of the folder, using the GUID as the encryption key.
  • After testing the self-extracting archive, use a file-shredding tool ( www.fileshredder.org/ ) to remove the folder with the data.
  • Burn the archive onto a data CD or DVD labeled with the media number.
  • You can now mail the data carrier, and fax or email the encryption key separately to the receiving party.
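The bookkeeping portion of the steps above can be sketched as follows: generating a media number, generating a GUID to serve as the encryption key, and recording the media on the data-carrier list. The actual encryption and file shredding are performed with external tools (a PGP utility and a file shredder, as described above) and are not shown; the function names and media-number format are hypothetical.

```python
import uuid

# Sketch of the bookkeeping steps: media number, GUID encryption key,
# and data-carrier list entry. Names and formats are illustrative.

def new_media_number(prefix: str = "MEDIA") -> str:
    """Create a new media number for labeling the CD/DVD."""
    return f"{prefix}-{uuid.uuid4().hex[:8].upper()}"

def new_encryption_key() -> str:
    """Generate a GUID to use as the archive's encryption key."""
    return str(uuid.uuid4())

def record_on_carrier_list(carrier_list: list, media_number: str) -> list:
    """Track all movable media containing sensitive data."""
    carrier_list.append(media_number)
    return carrier_list
```

As the procedure emphasizes, the key and the media must travel separately: the labeled disc is mailed, while the printed key goes by fax or email to the receiving party.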

Guidelines for Storage and Destruction of Movable Media

  • Store movable media in a safe, separated from the encryption keys.
  • Destroy damaged and/or retired media, including hard disks, by shredding.
  • Update movable media records for every media item that is disposed of or destroyed.
  • Shred any printed material at location or use a secure document disposal service.

Decoupling and Mapping Data

Figure 2.4 describes a process for adding data sources to the research data environment, removing direct identifiers, and creating a merged dataset with elements regarding the individuals’ health status or health services utilization. In the context described below, we retain Protected Health Information (PHI) within the original data files but limit access (a process known as decoupling). When PHI is destroyed following the data linkage, this is known as deidentification. We use the terms deidentified and decoupled interchangeably in this report as we discuss data security and staging, but readers should understand the differences represented by the terminology.

Figure 2.4. Process for creating a merged dataset.

Data file: This file contains raw data such as claims, diagnoses, and treatments. Individuals cannot be directly identified in these data.

Finder file: The finder files contain direct identifiers of individuals.

Mapping process: Using a mapping method (e.g., deterministic or probabilistic matching), records belonging to the same individual are matched across the finder files.

Crosswalk: The crosswalk identifies matching records for each individual and flags possible duplicates. A new unique identifier (ID) is assigned to each distinct research subject.

Removal of direct identifiers: Using the crosswalk and the data file, a new file is generated by replacing the source identifiers with the new IDs.

Merged dataset: Information about an individual can now be accessed across data sources and deidentified datasets using the new IDs.
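The crosswalk and identifier-replacement steps above can be sketched as follows. The field names are hypothetical, and the mapping step is reduced to exact agreement on a single direct identifier purely for illustration; real linkage would use the deterministic or probabilistic methods described elsewhere in this guide.

```python
import itertools

# Sketch of building a crosswalk from finder files and replacing source
# identifiers with new research IDs. Field names ("ssn", "research_id")
# are hypothetical; matching is simplified to one exact identifier.

def build_crosswalk(finder_files: list) -> dict:
    """Map each matched individual (by direct identifier) to a new ID."""
    crosswalk = {}
    counter = itertools.count(1)
    for finder in finder_files:
        for record in finder:
            direct_id = record["ssn"]
            if direct_id not in crosswalk:   # duplicates collapse to one ID
                crosswalk[direct_id] = f"RID{next(counter):06d}"
    return crosswalk

def remove_direct_identifiers(data_file: list, crosswalk: dict) -> list:
    """Replace source identifiers with new IDs to create the merged dataset."""
    return [
        {"research_id": crosswalk[row["ssn"]],
         **{k: v for k, v in row.items() if k != "ssn"}}
        for row in data_file
    ]
```

The output retains the health-status elements while the direct identifier survives only inside the restricted crosswalk, which is the essence of decoupling.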

Physical Separation of Data Using Storage Architecture

In the computing environment, data are separated physically into three storage areas, as shown in Figure 2.5.

Figure 2.5. Physical storage areas.

Direct identifier storage: Raw data including direct identifiers are secured in a highly restricted part of the system. The direct identifier space is accessible only through very restrictive access-management controls. All work on data linking and removal of direct identifiers is performed in this environment. Only a limited set of specifically trained, authorized individuals work in this environment.

Deidentified storage: The deidentified space contains deidentified master datasets from each source. This storage might be accessible to authorized users in read-only mode. Research datasets are extracted, or “cut,” from these master files.

Project storage: Individual programmers access this space for projects. Institutional Review Boards and Data Use Agreements control the access to master datasets and datasets for projects.

Examples of Common Authentication Methods

Simple User Account. A user authenticates with username and password.

  • Pros: This is a quick and simple way to access a system.
  • Cons: Once the username and password are known, any individual can gain access.

Two-Factor Authentication. A user authenticates with username, password, and a Personal Identification Number (PIN) (e.g., RSA SecurID).

  • Pros: A changing PIN provides a second factor in addition to the username and password. Authentication is not possible without the device.
  • Cons: PIN verification is costly and, depending on the implementation, requires that the computing environment have access to the system verifying the PIN.

Biometric Authentication. A user authenticates with username, password, and (for example) a fingerprint.

  • Pros: The additional biometric verification requires the account holder to be present at the time of authentication.
  • Cons: Managing biometric information requires custom software installations on the client system.

Performance Considerations for Computer Hardware

Three hardware characteristics determine actual processing power: memory, the central processing unit (CPU), and bus speed. Random access memory (RAM) can be added after purchase, but changes to the CPU or bus speed are complicated and impractical. The bus speed determines how fast information moves between the CPU and memory, and to and from the attached storage. When purchasing a system, buy the highest affordable CPU/bus speed combination while leaving room to add memory later.

Central Processing Unit. Performance of a CPU is determined by (a) sockets (i.e., the actual number of CPUs); (b) cores and threads (i.e., how many instructions can be processed in parallel); and (c) clock (i.e., a number in GHz representing how many instructions are processed per second).

Random Access Memory. Selecting memory depends on the architecture supported by the motherboard. Manufacturers generally advise on what type of memory is supported and best for optimized performance. The product description of memory often includes physical and performance parameters. For example, “240-Pin DDR3 1600” denotes memory with 240 pins connecting it to the motherboard and a 1600 MHz clock.

Bus Speed. Depending on the architecture, multiple bus speeds affect data throughput among storage, memory, and CPU. For example, a hard disk might be attached using SATA3 (“SATA 6Gb/s”), representing a bus speed of 6 Gb/s. Bus speeds on the CPU/motherboard (chipset layout) are more complicated and are fine-tuned by the vendor. Components such as the North Bridge, South Bridge, and Front Side Bus (FSB) (http://en.wikipedia.org/wiki/Front-side_bus) are part of this architecture. Vendors typically offer systems with high-performance options in which these speeds are optimized compared with consumer products.

Cite this page: Dusetzina SB, Tyree S, Meyer AM, et al. Linking Data for Health Services Research: A Framework and Instructional Guide [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2014 Sep. 2, Research Environment.

Creating a Fully Open Environment for Research Code and Data

By Erika Pastrana, September 11th, 2024


Quantitative research in the social and natural sciences is increasingly dependent on new datasets and forms of code. Making these resources open and accessible is a key aspect of open research and underpins efforts to maintain research integrity. Erika Pastrana explains how Springer Nature developed Nature Computational Science to be fully compliant with open research and data principles.

Last year, we asked peers in the publishing industry whether we are providing what researchers need in the transition to Open Science. New findings from our State of Open Data report and recent regional research integrity surveys continue to show that there are gaps, particularly in supporting researchers with code and data sharing – a key aspect of achieving full and sustainable open science.

In our experience, success in trying to address these gaps involves a combination of implementing publication policies, leveraging technology and applying editorial expertise. None of these approaches in themselves will deliver sufficient results, but a tripartite approach has proven effective in moving the needle towards openness. Importantly, when this recipe meets a scientific community that is supportive of sharing, such as the computational science community, full (aka 100%) compliance with open science practices is possible.

Ingredient 1: Policies and best practice

Policies that strongly encourage or require sharing of research objects, during peer review or at publication, are the groundwork that enables publishers to collaborate with researchers in following open science behaviours.

Springer Nature adopts an integrated policy framework that encourages authors to share, as much as possible publicly, the key research objects that underlie the work. We have long been committed to advancing reproducibility and open research practices across our journals, such as the steps taken to improve the reproducibility of published research by the Nature portfolio journals, the introduction of Springer Nature Data policies, as well as our support for protocol sharing. Recently, we announced a policy encouraging public code sharing in all of Springer Nature’s Journals and Books. This policy aims to provide transparency and to facilitate editors and authors working together to increase the sharing of new code that is key to scientific advancement. Data and code availability statements, along with the deposition and encouragement of data sharing, reinforce trust through transparency. These policies enable authors to submit their code for peer review and share verified code upon publication, strengthening their work.

In addition to policies relating to the sharing of objects, publishers can also influence researchers through education about best practices. In the case of code, documentation that enables others to check and re-use it, including information on dependencies, operating systems, and technical requirements, as well as licences and terms of use, is vital. Our editors work with authors and reviewers to meet these standards of sharing. Upon publication, deposition of the code in a repository that assigns a ‘permanent identifier’ or PID (such as a DOI) is strongly encouraged and, in some journals, required.

These best practices ensure the code that was used as part of the work associated with the publication is permanently accessible via a unique identifier, cited in the paper and recognised as a valuable output in its own right.

Fig. 1: Example of a Code Availability statement in a Nature journal publication describing the accessibility of the code: “The open-source implementation of SCMER is available at https://github.com/KChen-lab/SCMER under an MIT License. Scripts for reproducing all the results are deposited in Code Ocean.”

Ingredient 2: Evolving publishers’ technical capabilities

Any framework of policies must be supported by technical solutions that make it easy for busy researchers to share and link the outputs of research to their publications. Publishers, in collaboration with technological platforms, can create technical solutions to meet these needs as part of the submission process. In turn, these solutions can make reviewers’ lives easier by facilitating visibility, seamless access and technical support for verifying the research objects associated with the article they are reviewing.

In 2022, the submission system for the Nature journals was updated to collect information from authors about their plans to share code and data as part of the submission. This was enhanced by providing a service to facilitate sharing via a platform integration with Code Ocean and Figshare .

Looking at journal data submissions from those journals that offer code sharing, analysis shows that from November 2023 to March 2024, of those manuscripts in which authors reported having developed new code, between 25% and 41% of authors took up code-sharing services, and at each journal at least 25% of reviewers engaged with code review (for some journals reviewer engagement surpassed 50%). This initial data shows that making code sharing a central and integrated part of the submission process is facilitating authors to share their code and data and making it easier for reviewers to verify the objects.

Fig. 2: Questions posed to researchers submitting to Nature journals about code sharing as part of their article submission (e.g., “Have you developed new code as part of this work?”).

Ingredient 3: Editorial expertise

Policies and technological solutions can go a long way toward promoting open science behaviours, but the human element is essential to ensure authors are supported with their specific needs. Utilising their editorial expertise, publishers can make it easier for researchers to follow best practice guidance when it comes to metadata standards, protocols for data and code deposition, or object citation. Our role as a publisher is one where we can and should be an active voice in the community, collaborating with partners to help develop standards, tools, and services to better support sustainable open research and, therefore, open science practice. Examples of this include our involvement with the MDAR framework and our ongoing drive of the FAIR data principles. We also prioritise working closely with our editorial and author community to develop best practice tools and accreditations at an author and institutional level and provide specialist data support, building data expertise among our editors and helping researchers to ensure compliance.

Moving the needle – Code sharing at Nature Computational Science

These three ingredients were put into practice in Nature Computational Science. Launched in 2021, the journal enforces code sharing during peer review and strongly encourages public sharing at publication. Code Ocean was offered to authors to support code sharing from the first submission, and editors are proactive and passionate about working with authors to navigate their open science needs. As a result, of the 205 primary research articles published in the journal since launch, all provide a Code Availability section and all share their code publicly, cite it, and provide it via a PID. For those interested, the dataset is here, and you can see that 23% of authors chose to share their code via the integrated Code Ocean solution, whereas the rest primarily used Zenodo and GitHub.

From this early data, we can see that linking key research objects to the submission of manuscripts and setting up supportive policies and editorial practices can enable fully open science. Of course, different scientific communities have specific needs, and computational science may be the low-hanging fruit of what is to come in promoting data and code sharing. But knowing that a world of fully open science is possible should encourage and excite us for the necessary work ahead.


About the author


Erika Pastrana has a degree and doctorate from Universidad Autónoma de Madrid and conducted her postdoctoral studies at Columbia University. She began her publishing career in 2010 as editor at Nature Methods, where she was in charge of neurosciences, and in March 2014 she moved to the journal Nature Communications. In April 2017 she took on her current role as Editorial Director of the Nature Research Journals. As part of the senior leadership team at Nature Portfolio, Erika works in collaboration with key partners externally and internally to develop new policies and practices that aim to improve editorial service and scientific communication in support of open science and reproducibility.


Studying air pollution from the ground up

By Kelly McNees

Photos by Lukas Keapproth

September 10, 2024

Most of us think about air pollution as something that occurs at the ground level, but in a big city like Chicago, many people live and work in high-rise buildings. How pollution levels vary across different altitudes was the research question that prompted undergraduate student researcher Megan Wenner, a senior majoring in environmental science, to install outdoor air sensors on floors one, four, six, and nine of BVM Hall on the Lake Shore Campus, and on floor 14 of the Malibu Condominium building a few blocks away.   

Wenner is a member of the most recent cohort of Loyola’s Community Air Research Experience (CARE), a project that engages students from underrepresented backgrounds in hands-on research at the intersection of atmospheric science and environmental justice.   

In Wenner’s case, she and her research colleagues monitored data from the higher-altitude air sensors for one month and made live air-quality data available via a digital map. Wenner and her team hope the results — which showed that concentrations of a harmful type of particulate matter increase up to an altitude of about 14 meters and then decrease on higher floors — may help Chicagoans make decisions about how to reduce their exposure to polluted air.

Funded through a three-year National Science Foundation grant, and in collaboration with Colorado State University, CARE students receive a crash course in applied research. Two cohorts of eight students each learned about research design and methods, and career opportunities in geoscience, through seminars, field trips, and a four-week summer research intensive in which they installed air monitoring instruments and collected and analyzed data on particulate matter 2.5 (PM2.5) pollution.  

PM2.5 pollution consists of fine inhalable particles generated by combustion. Many people in the Midwest and Northeast first learned about PM2.5 in the summer of 2023, when Canada’s seemingly endless forest fires led to air quality warnings, but vehicles and factories emit most of the PM2.5 in the air. High levels are linked to an increased risk of heart disease, lung cancer, strokes, asthma, and other significant health problems.


Empowering students to research questions relevant to the community has always been central to teaching science for Ping Jing, an associate professor in Loyola’s School of Environmental Sustainability. Jing designed CARE in that spirit, and with a focus on recruiting students from backgrounds that are underrepresented in environmental sciences.   

That goal resonated with SES Assistant Professor Tania Schusler’s environmental justice research, which focuses on communities that bear the highest pollution burden, and she joined CARE as co-lead investigator. Throughout the program’s history, CARE has remained committed to recruiting students of color, with the goal of bringing greater diversity, equity and inclusion to geoscience careers.   

“Many students of color have a strong interest in the environmental field, but don’t feel a sense of belonging working in it,” Schusler says. In assembling the 2023 cohort, the faculty noticed the program was also attracting students who identified as LGBTQ+. “That broadened our perspective on who is underrepresented in the field and became reflected in the 2023 cohort.”  

Following a spring orientation and a summer of data collection and analysis, the second CARE cohort spent the fall finalizing their results and presented them at the Louis Stokes Midwest Regional Center of Excellence Conference “Inclusion by Design: Nurturing Diversity in STEM” in November 2023. Students Megan Wenner and junior Anna Ries-Roncalli, also majoring in environmental science, recently co-authored a paper with Jing and mathematics and statistics Assistant Professor Mena Whalen about the altitude research, which was accepted for publication in the scientific journal Sensors. Wenner welcomed the opportunity.

“The opportunity to be first author on a publication was not something I would have imagined was an option as an undergrad,” says Wenner. “It was very cool to see that come to fruition.”  

CARE students also collaborated with community partners Edgewater Environmental Coalition and Southeast Environmental Task Force to share their research results and raise awareness about the risks of PM2.5. They soon learned that effective science communication — translating technical results into approachable language and a format that is easy to distribute — requires a different set of skills than field work. Senior Thomas Crabtree, who is majoring in environmental policy, embraced the challenge and worked with the cohort to develop social media graphics, flyers and other materials to explain how individuals can protect themselves from PM2.5.  

“It was really important to have relationships with community partners so it was not an extractive process, [where we were only] taking data, but rather that we were building mutual relationships where you both benefit. It wouldn’t have been respectful to just dump our results on them. We thought a lot about alternative ways to present our findings so that community members and parents could access the information,” Crabtree says.


Now that the program is in its third year, CARE faculty are reflecting on its successes and challenges, and hoping to spread the word about how to facilitate undergraduate research opportunities that benefit students and communities.   

“The CARE model is one that could apply to many environmental issues, like water quality and food access, so I hope that through the publications we’re working on and presentations by Dr. Jing at other research meetings, we can inspire faculty elsewhere to help students engage with this type of experience in other contexts,” Schusler says.

Schusler also says that while she expected CARE students to learn practical research and science communication skills from the technical training, she had not anticipated the role social activities like bike rides on the lakefront and pizza parties would play in making the student researchers into true colleagues.   

“Dr. Jing understood how important it was to be intentional about giving the students the chance to build relationships with each other. Now that I observe their interactions and friendships, and see them exchange information and resources, it’s evident that sense of connection and community is strong and will continue.”  

For now, the work goes on. The air-quality questions CARE students explored in their research have already spurred a 2024 service-learning project in which students used portable, palm-sized monitors to collect nearly 400 hours of data on indoor and outdoor air quality across their communities and reported their results in brochures and videos. And students from the 2022 and 2023 cohorts have moved on to other undergraduate research experiences. Wenner was selected for a research program focused on wetland ecology and restoration in Michigan. Crabtree is now an intern with the U.S. Environmental Protection Agency and says that participating in CARE opened his eyes to a broader range of career opportunities available to scientists, including in public policy.

“CARE was a really empowering experience,” Crabtree says. “I didn’t realize how important the mentorship was until I found myself with this amazing team of faculty, and the professional development and career advice, really pushing us to find what it is that we enjoy in the research and what paths we might take to continue that work.”



A methodology providing new insights into the flow patterns of karst aquifers: an example from SW Türkiye

  • Original Paper
  • Published: 12 September 2024
  • Volume 83 , article number  396 , ( 2024 )


  • Athanasios Maramathas,
  • Konstantina Katsanou (ORCID: orcid.org/0000-0003-2880-6244),
  • Çağdaş Sağır,
  • Alper Baba &
  • Nikolaos Lambrakis

This paper presents a novel methodology for investigating karst systems using spring discharge. The behaviour of a spring in phase space is examined by plotting measurements of spring discharge against measurements of the water level at the spring’s outlet. Such a diagram reveals features of the karst system’s function and the spring’s discharge pattern that are not captured by common research methods. Applying this method to the Azmak Spring in southwestern Türkiye revealed five distinct discharge subsystems that operate alternately and never simultaneously. The subsystems are connected in a specific way, and the transition from one to another is not random but follows a pattern. An attempt was made to interpret these features using concepts from percolation theory.
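The phase-space construction the abstract describes can be sketched with synthetic data. In this minimal illustration, each "subsystem" is assumed to impose its own rating curve Q = a·h^b relating discharge Q to outlet water level h, so points in the (h, Q) plane fall on distinct branches that a per-regime fit can separate. The power-law form, the parameter values, and the regime-switching rule are all illustrative assumptions, not the Azmak data or the authors' actual model.

```python
import numpy as np

# Hypothetical sketch of the phase-space idea: plot spring discharge Q
# against water level h at the outlet.  Points falling on distinct Q(h)
# branches would correspond to distinct discharge subsystems.
# All numbers below are synthetic (assumed), not Azmak measurements.

rng = np.random.default_rng(0)

t = np.linspace(0, 365, 1000)                 # time, days
h = 1.0 + 0.5 * np.sin(2 * np.pi * t / 365)   # synthetic water level (m)

# Two alternating "subsystems", each with its own rating curve Q = a * h**b.
regime = (np.sin(2 * np.pi * t / 90) > 0).astype(int)
params = [(2.0, 1.5), (3.5, 1.2)]             # assumed (a, b) per subsystem
Q = np.where(regime == 0,
             params[0][0] * h ** params[0][1],
             params[1][0] * h ** params[1][1])
Q += rng.normal(0, 0.01, t.size)              # small measurement noise

# In (h, Q) phase space, fitting log Q = log a + b log h separately per
# regime recovers the two branches.
for r, (a_true, b_true) in enumerate(params):
    m = regime == r
    b_fit, loga_fit = np.polyfit(np.log(h[m]), np.log(Q[m]), 1)
    print(f"subsystem {r}: a≈{np.exp(loga_fit):.2f}, b≈{b_fit:.2f}")
```

In practice one would plot the measured (h, Q) pairs and look for visually distinct branches before fitting; the sketch only shows why alternating subsystems become separable in phase space when they are entangled in the raw time series.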


Data availability

Available upon request.


Acknowledgements

The authors would like to thank Dr. Bedri Kurtuluş, Associate Professor at Muğla Sıtkı Koçman University for his contribution to this work. The authors would also like to express their gratitude to the editor and the two anonymous reviewers, whose constructive comments helped substantially to improve the current manuscript.

This work did not receive support from any funding source.

Author information

Authors and Affiliations

School of Chemical Engineering, National Technical University of Athens, Athens, Greece

Athanasios Maramathas

Department of Water Resources and Ecosystems, IHE Delft Institute for Water Education, Delft, Netherlands

Konstantina Katsanou

Geological Engineering Department, Middle East Technical University, Ankara, Turkey

Çağdaş Sağır

Department of International Water Resources, İzmir Institute of Technology, İzmir, Turkey

Alper Baba

Department of Geology, University of Patras, Rio, Greece

Nikolaos Lambrakis


Corresponding author

Correspondence to Konstantina Katsanou .

Ethics declarations

Conflict of interest.

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.



About this article

Maramathas, A., Katsanou, K., Sağır, Ç. et al. A methodology providing new insights into the flow patterns of karst aquifers: an example from SW Türkiye. Bull Eng Geol Environ 83 , 396 (2024). https://doi.org/10.1007/s10064-024-03894-5


Received : 12 December 2023

Accepted : 31 August 2024

Published : 12 September 2024

DOI : https://doi.org/10.1007/s10064-024-03894-5


Keywords

  • Phase space
  • Percolation theory
  • Karst matrix
  • Percolation backbone

COMMENTS

  1. What really matters for successful research environments? A realist synthesis

    Introduction. Research environments matter. Environmental considerations such as robust cultures of research quality and support for researchers are thought to be the most influential predictors of research productivity [1, 2]. Over 25 years ago, Bland and Ruffin [1] identified 12 characteristics of research-favourable environments in the international academic medicine literature spanning the ...

  2. Research Environment

    A successful research environment requires joint effort by individual researchers, research groups, and the organization. This chapter describes the basic principles and good research practices in the context of the research environment and serves as a guide to good, responsible research for research newcomers, that is, researchers at the beginning of their scientific careers.

  3. Writing the Research Environment, Research Respondents ...

    This video contains information about how to write the research environment and respondents of the study sections in a research proposal. Different probabili...

  4. (PDF) Research Environment

    Research Environment. Lana Barać. Abstract. A successful research environment requires joint effort by individual researchers, research groups and the organization. This chapter describes ...

  5. THE RESEARCH ENVIRONMENT

    The U.S. academic research enterprise is entering a new era characterized by remarkable opportunities and increased strain. This two-part volume integrates the experiential knowledge of group members with quantitative data analyses in order to examine the status of scientific and technological research in academic settings.

  6. We must discuss research environments

    For example, in the 2021 UK Research Excellence Framework exercise, research environment is assessed independently of outputs and impact. Although 15% of total ranking scores are given to the research environment, this is defined non-specifically in terms of 'vitality and sustainability'. The potential scientific, health and wealth ...

  7. 3 Important Trends and Challenges in the Research Environment

    research as a substitute or replacement for expert judgment (P. B. Lowry et al., 2012). Complexity of Collaboration. Responsible Science described the growth of collaborative research after World War II, which has continued since the early 1990s. In contrast to earlier times, when articles with more than four co-authors and work involving more than one laboratory or research institution were ...

  8. What Is a Research Design

    A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about: Your overall research objectives and approach. Whether you'll rely on primary research or secondary research. Your sampling methods or criteria for selecting subjects. Your data collection methods.

  9. Research Design

    Table of contents. Step 1: Consider your aims and approach. Step 2: Choose a type of research design. Step 3: Identify your population and sampling method. Step 4: Choose your data collection methods. Step 5: Plan your data collection procedures. Step 6: Decide on your data analysis strategies.

  10. (PDF) Understanding the Research Environment: Identifying Areas for

    This also brings us to the issue of analysis of the research environment per se and the effect of changes that are happening in undertaking a research activity could be shaping up the research environment. 4.2 Internal Factors affecting Research Environment: (a) Clear Goals with Individual Autonomy: The research environment is affected by the ...

  11. The Research Environment and Its Impact on Integrity in Research

    As reflected in Figure 3-2, the organizations and conditions of the external-task environment (unshaded circles) are embedded within this larger environment (shaded area). An example of how the broader environment can affect the conduct of research is the recent national debate over embryonic stem cell research; this debate reflects a clash of ...

  12. Important Trends and Challenges in the Research Environment

    Synopsis:A number of the elements in the research environment that were identified in the early 1990s as perhaps problematic for ensuring research integrity and maintaining good scientific practices have generally continued along their long-term trend lines, including the size and scope of the research enterprise, the complexity of collaboration, the growth of regulatory requirements, and the ...

  13. 10 Research Question Examples to Guide your Research Project

    The first question asks for a ready-made solution, and is not focused or researchable. The second question is a clearer comparative question, but note that it may not be practically feasible. For a smaller research project or thesis, it could be narrowed down further to focus on the effectiveness of drunk driving laws in just one or two countries.

  14. (PDF) Managing for the Ideal Research Environment

    The answers arrived at are summarised below. A.D. Madden. The best environment for research will, unsurprisingly, vary with discipline. However, key components are summarised below: ...

  15. PGR skills development and the research environment

    Expectations on access to the research environment. 8.7. The University provides a high-quality research environment in which PGR students develop their skills and conduct work on their research projects. 8.8. Schools and faculties must ensure that PGR students have access to an appropriate research environment, including the following: 8.8.1.

  16. How to do Research on Environment

    To find research on more specific aspects of your topic, alternate one new keyword at a time with the primary keyword of your topic with "and" in between them (for example, "global warming and causes," "global warming and health," "global warming and pollution," "global warming and solutions," etc.). For additional help with ...

  17. Research Environment

    This paper provides an extensive survey of the different methods of addressing security issues in the grid computing environment, and specifically contributes to the research environment by developing a comprehensive framework for classification of these research endeavors. The framework presented classifies security literature into System ...

  18. (DOC) Chapter 3 Research Environment

    Chapter 3 Research Environment In every research of course there is a place to conduct it. The environment where we conducted this research is prone to sunlight and has also many plants surrounding it. The researchers can make sure that this area/environment is very fit and suitable for our research to be conducted.

  19. Environmental Issues Research Paper Topics

    This comprehensive list of environmental issues research paper topics provides a wide range of areas to choose from for your research. The topics cover major environmental issues, from climate change and air pollution to biodiversity loss and overpopulation. Each of these topics can be explored from various angles, providing a rich source of ...

  20. Environmental Issues Research Paper

    This sample environmental issues research paper features: 6700 words (approx. 22 pages), an outline, and a bibliography with 39 sources. Browse other research paper examples for more inspiration. If you need a thorough research paper written according to all the academic standards, you can always turn to our experienced writers for help.

  21. Research Environment

    A foundational element of any research project is the research program environment. In the context of comparative effectiveness research (CER) using linked data, a secure and well-performing environment is important for several reasons, including that it helps build and assure trust between researchers and the providers of sensitive data, be they patients, registry administrators, insurance ...

  22. Creating a fully open environment for research code and data

    Fig. 1: Example of a Code Availability statement part of a Nature journal publication describing the accessibility of the code. Ingredient 2: Evolving publishers' technical capabilities Any framework of policies must be supported by technical solutions that make it easy for busy researchers to share and link the outputs of research to their ...

  23. Studying air pollution from the ground up

    Students Megan Wenner and junior Anna Ries-Roncalli, also majoring in environmental science, recently co-authored a paper with Jing and mathematics and statistics Assistant Professor Mena Whalen about the altitude research, which was accepted for publication in the scientific journal Sensors. Wenner welcomed the opportunity.

  24. Climate change exacerbates the environmental impacts of agriculture

    Agricultural reliability and sustainability are of key long-term importance for human and planetary health. The challenges raised by climate change require accelerated adoption of practices and technologies that improve agriculture's environmental sustainability and climate resilience, especially those that can simultaneously deliver multiple benefits, such as diversification and integrated ...

  25. What really matters for successful research environments? A realist

    Introduction. Research environments matter. Environmental considerations such as robust cultures of research quality and support for researchers are thought to be the most influential predictors of research productivity [1, 2]. Over 25 years ago, Bland and Ruffin [1] identified 12 characteristics of research-favourable environments in the international academic medicine literature spanning the ...

  26. A methodology providing new insights into the flow patterns of karst

    This paper presents a new and innovative methodology for the investigation of karst systems using spring discharge. The behaviour of springs in phase space is investigated by plotting the measurements of spring discharge versus the measurements of the water level at the spring's outlet. Such a diagram reveals new features of the function of the karst system and the discharge pattern of the ...