
CAREER FEATURE · 04 December 2020 · Correction 09 December 2020

How to write a superb literature review

Andy Tay is a freelance writer based in Singapore.


Literature reviews are important resources for scientists. They provide historical context for a field while offering opinions on its future trajectory. Creating them can provide inspiration for one’s own research, as well as some practice in writing. But few scientists are trained in how to write a review — or in what constitutes an excellent one. Even picking the appropriate software to use can be an involved decision (see ‘Tools and techniques’). So Nature asked editors and working scientists with well-cited reviews for their tips.


doi: https://doi.org/10.1038/d41586-020-03422-x

Interviews have been edited for length and clarity.

Updates & Corrections

Correction 09 December 2020: An earlier version of the tables in this article included some incorrect details about the programs Zotero, Endnote and Manubot. These have now been corrected.



Review articles: purpose, process, and structure

  • Published: 02 October 2017
  • Volume 46 , pages 1–5, ( 2018 )


  • Robert W. Palmatier,
  • Mark B. Houston &
  • John Hulland


Many research disciplines feature high-impact journals that are dedicated outlets for review papers (or review–conceptual combinations) (e.g., Academy of Management Review, Psychological Bulletin, Medicinal Research Reviews). The rationale for such outlets is the premise that research integration and synthesis provides an important, and possibly even a required, step in the scientific process. Review papers tend to include both quantitative (i.e., meta-analytic, systematic reviews) and narrative or more qualitative components; together, they provide platforms for new conceptual frameworks, reveal inconsistencies in the extant body of research, synthesize diverse results, and generally give other scholars a “state-of-the-art” snapshot of a domain, often written by topic experts (Bem 1995). Many premier marketing journals publish meta-analytic review papers too, though authors often must overcome reviewers’ concerns that their contributions are limited due to the absence of “new data.” Furthermore, relatively few non-meta-analysis review papers appear in marketing journals, probably due to researchers’ perceptions that such papers have limited publication opportunities or their beliefs that the field lacks a research tradition or “respect” for such papers. In many cases, an editor must provide strong support to help such review papers navigate the review process. Yet, once published, such papers tend to be widely cited, suggesting that members of the field find them useful (see Bettencourt and Houston 2001).

In this editorial, we seek to address three topics relevant to review papers. First, we outline a case for their importance to the scientific process, by describing the purpose of review papers. Second, we detail the review paper editorial initiative conducted over the past two years by the Journal of the Academy of Marketing Science (JAMS), focused on increasing the prevalence of review papers. Third, we describe a process and structure for systematic (i.e., non-meta-analytic) review papers, referring readers to Grewal et al.'s (2018) insights into parallel meta-analytic (effects estimation) review papers. (For some strong recent examples of marketing-related meta-analyses, see Knoll and Matthes 2017; Verma et al. 2016.)

Purpose of review papers

In their most general form, review papers “are critical evaluations of material that has already been published,” some that include quantitative effects estimation (i.e., meta-analyses) and some that do not (i.e., systematic reviews) (Bem 1995, p. 172). They carefully identify and synthesize relevant literature to evaluate a specific research question, substantive domain, theoretical approach, or methodology and thereby provide readers with a state-of-the-art understanding of the research topic. Many of these benefits are highlighted in Hanssens’ (2018) paper titled “The Value of Empirical Generalizations in Marketing,” published in this same issue of JAMS.

The purpose of and contributions associated with review papers can vary depending on their specific type and research question, but in general, they aim to:

  • Resolve definitional ambiguities and outline the scope of the topic.
  • Provide an integrated, synthesized overview of the current state of knowledge.
  • Identify inconsistencies in prior results and potential explanations (e.g., moderators, mediators, measures, approaches).
  • Evaluate existing methodological approaches and unique insights.
  • Develop conceptual frameworks to reconcile and extend past research.
  • Describe research insights, existing gaps, and future research directions.

Not every review paper can offer all of these benefits, but this list represents their key contributions. To provide a sufficient contribution, a review paper needs to achieve three key standards. First, the research domain needs to be well suited for a review paper, such that a sufficient body of past research exists to make the integration and synthesis valuable—especially if extant research reveals theoretical inconsistencies or heterogeneity in its effects. Second, the review paper must be well executed, with an appropriate literature collection and analysis techniques, sufficient breadth and depth of literature coverage, and a compelling writing style. Third, the manuscript must offer significant new insights based on its systematic comparison of multiple studies, rather than simply a “book report” that describes past research. This third, most critical standard is often the most difficult, especially for authors who have not “lived” with the research domain for many years, because achieving it requires drawing some non-obvious connections and insights from multiple studies and their many different aspects (e.g., context, method, measures). Typically, after the “review” portion of the paper has been completed, the authors must spend many more months identifying the connections to uncover incremental insights, each of which takes time to detail and explicate.

The increasing methodological rigor and technical sophistication of many marketing studies also means that they often focus on smaller problems with fewer constructs. By synthesizing these piecemeal findings, reconciling conflicting evidence, and drawing a “big picture,” meta-analyses and systematic review papers become indispensable to our comprehensive understanding of a phenomenon, among both academic and practitioner communities. Thus, good review papers provide a solid platform for future research, in the reviewed domain but also in other areas, in that researchers can use a good review paper to learn about and extend key insights to new areas.

This domain extension, outside of the core area being reviewed, is one of the key benefits of review papers that often gets overlooked. Yet it also is becoming ever more important with the expanding breadth of marketing (e.g., econometric modeling, finance, strategic management, applied psychology, sociology) and the increasing velocity in the accumulation of marketing knowledge (e.g., digital marketing, social media, big data). Against this backdrop, systematic review papers and meta-analyses help academics and interested managers keep track of research findings that fall outside their main area of specialization.

JAMS’ review paper editorial initiative

With a strong belief in the importance of review papers, the editorial team of JAMS has purposely sought out leading scholars to provide substantive review papers, both meta-analysis and systematic, for publication in JAMS. Many of the scholars approached have voiced concerns about the risk of such endeavors, due to the lack of alternative outlets for these types of papers. Therefore, we have instituted a unique process, in which the authors develop a detailed outline of their paper, key tables and figures, and a description of their literature review process. On the basis of this outline, we grant assurances that the contribution hurdle will not be an issue for publication in JAMS, as long as the authors execute the proposed outline as written. Each paper still goes through the normal review process and must meet all publication quality standards, of course. In many cases, an Area Editor takes an active role to help ensure that each paper provides sufficient insights, as required for a high-quality review paper. This process gives the author team confidence to invest effort in the process. An analysis of the marketing journals in the Financial Times (FT 50) journal list for the past five years (2012–2016) shows that JAMS has become the most common outlet for these papers, publishing 31% of all review papers that appeared in the top six marketing journals.

As a next step in positioning JAMS as a receptive marketing outlet for review papers, we are conducting a Thought Leaders Conference on Generalizations in Marketing: Systematic Reviews and Meta-Analyses, with a corresponding special issue (see www.springer.com/jams). We will continue our process of seeking out review papers as an editorial strategy in areas that could be advanced by the integration and synthesis of extant research. We expect that, ultimately, such efforts will become unnecessary, as authors initiate review papers on topics of their own choosing to submit them to JAMS. In the past two years, JAMS already has increased the number of papers it publishes annually, from just over 40 to around 60 papers per year; this growth has provided “space” for 8–10 review papers per year, reflecting our editorial target.

Consistent with JAMS ’ overall focus on managerially relevant and strategy-focused topics, all review papers should reflect this emphasis. For example, the domains, theories, and methods reviewed need to have some application to past or emerging managerial research. A good rule of thumb is that the substantive domain, theory, or method should attract the attention of readers of JAMS .

The efforts of multiple editors and Area Editors in turn have generated a body of review papers that can serve as useful examples of the different types and approaches that JAMS has published.

Domain-based review papers

Domain-based review papers review, synthesize, and extend a body of literature in the same substantive domain. For example, in “The Role of Data Privacy in Marketing” (Martin and Murphy 2017), the authors identify and define various privacy-related constructs that have appeared in recent literature. Then they examine the different theoretical perspectives brought to bear on privacy topics related to consumers and organizations, including ethical and legal perspectives. These foundations lead into their systematic review of privacy-related articles over a clearly defined date range, from which they extract key insights from each study. This exercise of synthesizing diverse perspectives allows these authors to describe state-of-the-art knowledge regarding privacy in marketing and identify useful paths for research. Similarly, a new paper by Cleeren et al. (2017), “Marketing Research on Product-Harm Crises: A Review, Managerial Implications, and an Agenda for Future Research,” provides a rich systematic review, synthesizes extant research, and points the way forward for scholars who are interested in issues related to defective or dangerous market offerings.

Theory-based review papers

Theory-based review papers review, synthesize, and extend a body of literature that uses the same underlying theory. For example, Rindfleisch and Heide’s (1997) classic review of research in marketing using transaction cost economics has been cited more than 2200 times, with a significant impact on applications of the theory to the discipline in the past 20 years. A recent paper in JAMS with similar intent, which could serve as a helpful model, focuses on “Resource-Based Theory in Marketing” (Kozlenkova et al. 2014). The article dives deeply into a description of the theory and its underlying assumptions, then organizes a systematic review of relevant literature according to various perspectives through which the theory has been applied in marketing. The authors conclude by identifying topical domains in marketing that might benefit from additional applications of the theory (e.g., marketing exchange), as well as related theories that could be integrated meaningfully with insights from the resource-based theory.

Method-based review papers

Method-based review papers review, synthesize, and extend a body of literature that uses the same underlying method. For example, in “Event Study Methodology in the Marketing Literature: An Overview” (Sorescu et al. 2017), the authors identify published studies in marketing that use an event study methodology. After a brief review of the theoretical foundations of event studies, they describe in detail the key design considerations associated with this method. The article then provides a roadmap for conducting event studies and compares this approach with a stock market returns analysis. The authors finish with a summary of the strengths and weaknesses of the event study method, which in turn suggests three main areas for further research. Similarly, “Discriminant Validity Testing in Marketing: An Analysis, Causes for Concern, and Proposed Remedies” (Voorhees et al. 2016) systematically reviews existing approaches for assessing discriminant validity in marketing contexts, then uses Monte Carlo simulation to determine which tests are most effective.

Our long-term editorial strategy is to make sure JAMS becomes and remains a well-recognized outlet for both meta-analysis and systematic managerial review papers in marketing. Ideally, review papers would come to represent 10%–20% of the papers published by the journal.

Process and structure for review papers

In this section, we review the process and typical structure of a systematic review paper, which lacks any long or established tradition in marketing research. The article by Grewal et al. (2018) provides a summary of effects-focused review papers (i.e., meta-analyses), so we do not discuss them in detail here.

Systematic literature review process

Some review papers submitted to journals take a “narrative” approach. They discuss current knowledge about a research domain, yet they often are flawed, in that they lack criteria for article inclusion (or, more accurately, article exclusion), fail to discuss the methodology used to evaluate included articles, and avoid critical assessment of the field (Barczak 2017). Such reviews tend to be purely descriptive, with little lasting impact.

In contrast, a systematic literature review aims to “comprehensively locate and synthesize research that bears on a particular question, using organized, transparent, and replicable procedures at each step in the process” (Littell et al. 2008, p. 1). Littell et al. describe six key steps in the systematic review process. The extent to which each step is emphasized varies by paper, but all are important components of the review.

Topic formulation. The author sets out clear objectives for the review and articulates the specific research questions or hypotheses that will be investigated.

Study design. The author specifies relevant problems, populations, constructs, and settings of interest. The aim is to define explicit criteria that can be used to assess whether any particular study should be included in or excluded from the review. Furthermore, it is important to develop a protocol in advance that describes the procedures and methods to be used to evaluate published work.

Sampling. The aim in this third step is to identify all potentially relevant studies, including both published and unpublished research. To this end, the author must first define the sampling unit to be used in the review (e.g., individual, strategic business unit) and then develop an appropriate sampling plan.

Data collection. By retrieving the potentially relevant studies identified in the third step, the author can determine whether each study meets the eligibility requirements set out in the second step. For studies deemed acceptable, the data are extracted from each study and entered into standardized templates. These templates should be based on the protocols established in step 2.

Data analysis. The degree and nature of the analyses used to describe and examine the collected data vary widely by review. Purely descriptive analysis is useful as a starting point but rarely is sufficient on its own. The examination of trends, clusters of ideas, and multivariate relationships among constructs helps flesh out a deeper understanding of the domain. For example, both Hult (2015) and Huber et al. (2014) use bibliometric approaches (e.g., examining citation data using multidimensional scaling and cluster analysis techniques) to identify emerging versus declining themes in the broad field of marketing.

Reporting. Three key aspects of this final step are common across systematic reviews. First, the results from the fifth step need to be presented, clearly and compellingly, using narratives, tables, and figures. Second, core results that emerge from the review must be interpreted and discussed by the author. These revelatory insights should reflect a deeper understanding of the topic being investigated, not simply a regurgitation of well-established knowledge. Third, the author needs to describe the implications of these unique insights for both future research and managerial practice.
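The screening and extraction activities in steps 2–4 can be sketched as a small, auditable script: eligibility criteria are written down as explicit data, each candidate study is checked against them, and accepted studies are entered into a standardized template. This is only an illustrative sketch; the criteria, field names, and sample records below are hypothetical, not drawn from Littell et al. (2008).

```python
from dataclasses import dataclass, asdict

# Hypothetical eligibility criteria (step 2), stated as data so screening is auditable.
CRITERIA = {
    "min_year": 2000,                     # published in 2000 or later
    "designs": {"survey", "experiment"},  # acceptable study designs
    "peer_reviewed": True,
}

@dataclass
class ExtractionTemplate:
    """Standardized template (step 4): the same fields are recorded for every study."""
    study_id: str
    year: int
    design: str
    sample_size: int
    key_finding: str

def is_eligible(study: dict) -> bool:
    """Apply the pre-registered criteria to one candidate study."""
    return (study["year"] >= CRITERIA["min_year"]
            and study["design"] in CRITERIA["designs"]
            and study["peer_reviewed"] == CRITERIA["peer_reviewed"])

# Hypothetical candidate studies retrieved during sampling (step 3).
candidates = [
    {"study_id": "S1", "year": 2012, "design": "survey", "peer_reviewed": True,
     "sample_size": 310, "key_finding": "Trust mediates the privacy-disclosure link."},
    {"study_id": "S2", "year": 1997, "design": "survey", "peer_reviewed": True,
     "sample_size": 150, "key_finding": "Early privacy-concern scale."},
    {"study_id": "S3", "year": 2015, "design": "case study", "peer_reviewed": True,
     "sample_size": 4, "key_finding": "Rich single-firm account."},
]

# Screen candidates, then extract eligible studies into the standardized template.
extracted = [
    ExtractionTemplate(s["study_id"], s["year"], s["design"],
                       s["sample_size"], s["key_finding"])
    for s in candidates if is_eligible(s)
]

for row in extracted:
    print(asdict(row))  # only S1 survives: S2 predates 2000, S3 is a case study
```

Writing the criteria and template as data, rather than leaving them implicit in prose, is what makes the eventual methods section straightforward to report and for others to audit.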

A new paper by Watson et al. (2017), “Harnessing Difference: A Capability-Based Framework for Stakeholder Engagement in Environmental Innovation,” provides a good example of a systematic review, starting with a cohesive conceptual framework that helps establish the boundaries of the review while also identifying core constructs and their relationships. The article then explicitly describes the procedures used to search for potentially relevant papers and clearly sets out criteria for study inclusion or exclusion. Next, a detailed discussion of core elements in the framework weaves published research findings into the exposition. The paper ends with a presentation of key implications and suggestions for the next steps. Similarly, “Marketing Survey Research Best Practices: Evidence and Recommendations from a Review of JAMS Articles” (Hulland et al. 2017) systematically reviews published marketing studies that use survey techniques, describes recent trends, and suggests best practices. In their review, Hulland et al. examine the entire population of survey papers published in JAMS over a ten-year span, relying on an extensive standardized data template to facilitate their subsequent data analysis.

Structure of systematic review papers

There is no cookie-cutter recipe for the exact structure of a useful systematic review paper; the final structure depends on the authors’ insights and intended points of emphasis. However, several key components are likely integral to a paper’s ability to contribute.

Depth and rigor

Systematic review papers must avoid falling into two potential “ditches.” The first ditch threatens when the paper fails to demonstrate that a systematic approach was used for selecting articles for inclusion and capturing their insights. If a reader gets the impression that the author has cherry-picked only articles that fit some preset notion or failed to be thorough enough, without including articles that make significant contributions to the field, the paper will be consigned to the proverbial side of the road when it comes to the discipline’s attention.

Authors who fall into the other ditch present a thorough, complete overview that offers only a mind-numbing recitation, without evident organization, synthesis, or critical evaluation. Although comprehensive, such a paper is more of an index than a useful review. The reviewed articles must be grouped in a meaningful way to guide the reader toward a better understanding of the focal phenomenon and provide a foundation for insights about future research directions. Some scholars organize research by scholarly perspectives (e.g., the psychology of privacy, the economics of privacy; Martin and Murphy 2017); others classify the chosen articles by objective research aspects (e.g., empirical setting, research design, conceptual frameworks; Cleeren et al. 2017). The method of organization chosen must allow the author to capture the complexity of the underlying phenomenon (e.g., including temporal or evolutionary aspects, if relevant).

Replicability

Processes for the identification and inclusion of research articles should be described in sufficient detail, such that an interested reader could replicate the procedure. The procedures used to analyze chosen articles and extract their empirical findings and/or key takeaways should be described with similar specificity and detail.
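One way to make such a description fully replicable is to record the search protocol and each exclusion rule as data, and to report how many records remain after every rule, in the spirit of a PRISMA-style flow account. The databases, query string, and rules below are hypothetical illustrations, not a prescribed standard.

```python
# Hypothetical, fully documented identification protocol: the search parameters and
# the ordered exclusion rules are data, so another researcher can re-run the same
# procedure and audit the count removed at each stage.
SEARCH_PROTOCOL = {
    "databases": ["Web of Science", "Scopus"],          # assumed sources
    "query": '"product-harm crisis" AND (marketing OR brand)',
    "date_range": (2000, 2017),
}

# Exclusion rules applied in a fixed, documented order: (label, predicate).
EXCLUSION_RULES = [
    ("not in English", lambda s: s["language"] != "en"),
    ("outside date range", lambda s: not (2000 <= s["year"] <= 2017)),
    ("no empirical data", lambda s: not s["empirical"]),
]

def screen(studies):
    """Apply each exclusion rule in order, logging the count remaining after each."""
    remaining = list(studies)
    log = [("identified", len(remaining))]
    for label, excludes in EXCLUSION_RULES:
        remaining = [s for s in remaining if not excludes(s)]
        log.append((f"after excluding: {label}", len(remaining)))
    return remaining, log

# Hypothetical records returned by the search.
records = [
    {"id": "R1", "language": "en", "year": 2010, "empirical": True},
    {"id": "R2", "language": "de", "year": 2011, "empirical": True},
    {"id": "R3", "language": "en", "year": 1995, "empirical": True},
    {"id": "R4", "language": "en", "year": 2016, "empirical": False},
]

kept, log = screen(records)
for stage, n in log:
    print(f"{stage}: {n}")  # a stage-by-stage flow: 4 identified, 1 retained
```

Because each rule is named and ordered, the resulting log doubles as the methods-section account of how the final sample was reached.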

We already have noted the potential usefulness of well-done review papers. Some scholars always are new to the field or domain in question, so review papers also need to help them gain foundational knowledge. Key constructs, definitions, assumptions, and theories should be laid out clearly (for which purpose summary tables are extremely helpful). An integrated conceptual model can be useful to organize cited works. Most scholars integrate the knowledge they gain from reading the review paper into their plans for future research, so it is also critical that review papers clearly lay out implications (and specific directions) for research. Ideally, readers will come away from a review article filled with enthusiasm about ways they might contribute to the ongoing development of the field.

Helpful format

Because such a large body of research is being synthesized in most review papers, simply reading through the list of included studies can be exhausting for readers. We cannot overstate the importance of tables and figures in review papers, used in conjunction with meaningful headings and subheadings. Vast literature review tables often are essential, but they must be organized in a way that makes their insights digestible to the reader; in some cases, a sequence of more focused tables may be better than a single, comprehensive table.

In summary, articles that review extant research in a domain (topic, theory, or method) can be incredibly useful to the scientific progress of our field. Whether integrating the insights from extant research through a meta-analysis or synthesizing them through a systematic assessment, the promised benefits are similar. Both formats provide readers with a useful overview of knowledge about the focal phenomenon, as well as insights on key dilemmas and conflicting findings that suggest future research directions. Thus, the editorial team at JAMS encourages scholars to continue to invest the time and effort to construct thoughtful review papers.

Barczak, G. (2017). From the editor: writing a review article. Journal of Product Innovation Management, 34 (2), 120–121.


Bem, D. J. (1995). Writing a review article for Psychological Bulletin. Psychological Bulletin, 118 (2), 172–177.

Bettencourt, L. A., & Houston, M. B. (2001). Assessing the impact of article method type and subject area on citation frequency and reference diversity. Marketing Letters, 12 (4), 327–340.

Cleeren, K., Dekimpe, M. G., & van Heerde, H. J. (2017). Marketing research on product-harm crises: a review, managerial implications, and an agenda for future research. Journal of the Academy of Marketing Science, 45 (5), 593–615.

Grewal, D., Puccinelli, N. M., & Monroe, K. B. (2018). Meta-analysis: error cancels and truth accrues. Journal of the Academy of Marketing Science, 46 (1).

Hanssens, D. M. (2018). The value of empirical generalizations in marketing. Journal of the Academy of Marketing Science, 46 (1).

Huber, J., Kamakura, W., & Mela, C. F. (2014). A topical history of JMR . Journal of Marketing Research, 51 (1), 84–91.

Hulland, J., Baumgartner, H., & Smith, K. M. (2017). Marketing survey research best practices: evidence and recommendations from a review of JAMS articles. Journal of the Academy of Marketing Science. https://doi.org/10.1007/s11747-017-0532-y .

Hult, G. T. M. (2015). JAMS 2010—2015: literature themes and intellectual structure. Journal of the Academy of Marketing Science, 43 (6), 663–669.

Knoll, J., & Matthes, J. (2017). The effectiveness of celebrity endorsements: a meta-analysis. Journal of the Academy of Marketing Science, 45 (1), 55–75.

Kozlenkova, I. V., Samaha, S. A., & Palmatier, R. W. (2014). Resource-based theory in marketing. Journal of the Academy of Marketing Science, 42 (1), 1–21.

Littell, J. H., Corcoran, J., & Pillai, V. (2008). Systematic reviews and meta-analysis . New York: Oxford University Press.


Martin, K. D., & Murphy, P. E. (2017). The role of data privacy in marketing. Journal of the Academy of Marketing Science, 45 (2), 135–155.

Rindfleisch, A., & Heide, J. B. (1997). Transaction cost analysis: past, present, and future applications. Journal of Marketing, 61 (4), 30–54.

Sorescu, A., Warren, N. L., & Ertekin, L. (2017). Event study methodology in the marketing literature: an overview. Journal of the Academy of Marketing Science, 45 (2), 186–207.

Verma, V., Sharma, D., & Sheth, J. (2016). Does relationship marketing matter in online retailing? A meta-analytic approach. Journal of the Academy of Marketing Science, 44 (2), 206–217.

Voorhees, C. M., Brady, M. K., Calantone, R., & Ramirez, E. (2016). Discriminant validity testing in marketing: an analysis, causes for concern, and proposed remedies. Journal of the Academy of Marketing Science, 44 (1), 119–134.

Watson, R., Wilson, H. N., Smart, P., & Macdonald, E. K. (2017). Harnessing difference: a capability-based framework for stakeholder engagement in environmental innovation. Journal of Product Innovation Management. https://doi.org/10.1111/jpim.12394 .


Author information

Authors and affiliations

Foster School of Business, University of Washington, Box: 353226, Seattle, WA, 98195-3226, USA

Robert W. Palmatier

Neeley School of Business, Texas Christian University, Fort Worth, TX, USA

Mark B. Houston

Terry College of Business, University of Georgia, Athens, GA, USA

John Hulland


Corresponding author

Correspondence to Robert W. Palmatier .


About this article

Palmatier, R.W., Houston, M.B. & Hulland, J. Review articles: purpose, process, and structure. J. of the Acad. Mark. Sci. 46 , 1–5 (2018). https://doi.org/10.1007/s11747-017-0563-4


Published: 02 October 2017

Issue Date: January 2018

DOI: https://doi.org/10.1007/s11747-017-0563-4



Academia Insider

Review Paper Format: How To Write A Review Article Fast

This guide aims to demystify the review paper format, presenting practical tips to help you accelerate the writing process. 

From understanding the structure to synthesising literature effectively, we’ll explore how to create a compelling review article swiftly, ensuring your work is both impactful and timely.

Whether you’re a seasoned researcher or a budding scholar, this guidance on review paper format and style will streamline your writing journey.

Review Paper Format

Parts and notes:

  • Title & Abstract: Sets the stage with a concise title and a descriptive abstract summarising the review’s scope and findings.
  • Introduction: Lays the groundwork by presenting the research question, justifying the review’s importance, and highlighting knowledge gaps.
  • Methodology: Details the research methods used to select, assess, and synthesise studies, showcasing the review’s rigor and integrity.
  • Body: The core section where literature is summarised, analysed, and critiqued, synthesising evidence and presenting arguments with well-structured paragraphs.
  • Discussion & Conclusion: Weaves together main points, reflects on the findings’ implications for the field, and suggests future research directions.
  • Citation: Acknowledges the scholarly community’s contributions, linking to cited research and enriching the review’s academic discourse.

What Is A Review Paper?

Diving into the realm of scholarly communication, you might have stumbled upon a research review article.

This unique genre serves to synthesise existing data, offering a panoramic view of the current state of knowledge on a particular topic. 


Unlike a standard research article that presents original experiments, a review paper delves into published literature, aiming to:

  • summarise,
  • clarify, and
  • evaluate previous findings.

Imagine you’re tasked to write a review article. The starting point is often a burning research question. Your mission? To scour various journals, piecing together a well-structured narrative that not only summarises key findings but also identifies gaps in existing literature.

This is where the magic of review writing shines – it’s about creating a roadmap for future research, highlighting areas ripe for exploration.

Review articles come in different flavours, with systematic reviews and meta-analyses being the gold standards. The methodology here is meticulous, with a clear protocol for selecting and evaluating studies.

This rigorous approach ensures that your review is more than just an overview; it’s a critical analysis that adds depth to the understanding of the subject.

Crafting a good review requires mastering the art of citation. Every claim or observation you make needs to be backed by relevant literature. This not only lends credibility to your work but also provides a treasure trove of information for readers eager to delve deeper.

Types Of Review Paper

Not all review articles are created equal. Each type has its own methodology, purpose, and format, catering to different research needs and questions. Here are a few common types of review paper:

Systematic Review Paper

First up is the systematic review, the crème de la crème of review types. It’s known for its rigorous methodology, involving a detailed plan for:

  • identifying,
  • selecting, and
  • critically appraising relevant research. 

The aim? To answer a specific research question. Systematic reviews often include meta-analyses , where data from multiple studies are statistically combined to provide more robust conclusions.

This review type is a cornerstone in evidence-based fields like healthcare.

Literature Review Paper

Then there’s the literature review, a broader type you might encounter.

Here, the goal is to give an overview of the main points and debates on a topic, without the stringent methodological framework of a systematic review.

Literature reviews are great for getting a grasp of the field and identifying where future research might head. Often reading literature review papers can help you to learn about a topic rather quickly.


Narrative Reviews

Narrative reviews allow for a more flexible approach. Authors of narrative reviews draw on existing literature to provide insights or critique a certain area of research.

This is generally done with a less formal structure than systematic reviews. This type is particularly useful for areas where it’s difficult to quantify findings across studies.

Scoping Reviews

Scoping reviews are gaining traction for their ability to map out the existing literature on a broad topic, identifying:

  • key concepts,
  • theories, and
  • gaps in the existing research.

Unlike systematic reviews, scoping reviews have a more exploratory approach, which can be particularly useful in emerging fields or for topics that haven’t been comprehensively reviewed before.

Each type of review serves a unique purpose and requires a specific skill set. Whether you’re looking to summarise existing findings, synthesise data for evidence-based practice, or explore new research territories, there’s a review type that fits the bill. 

Knowing how to write, read, and interpret these reviews can significantly enhance your understanding of any research area.

What Are The Parts In A Review Paper

A review paper format has a pretty set structure, with minor changes here and there to suit the topic covered. The review paper format not only organises your thoughts but also guides your readers through the complexities of your topic.

Title & Abstract

Starting with the title and abstract, you set the stage. The title should be a concise indicator of the content, making it easy for readers to quickly tell what your article is about.

As for the abstract, it should act as a descriptive summary, offering a snapshot of your review’s scope and findings. 

Introduction

The introduction lays the groundwork, presenting the research question that drives your review. It’s here you:

  • justify the importance of your review,
  • delineate the current state of knowledge, and
  • highlight gaps.

This section aims to articulate the significance of the topic and your objective in exploring it.

Methodology

The methodology section is the backbone of systematic reviews and meta-analyses, detailing the research methods employed to select, assess, and synthesise studies. 


This transparency allows readers to gauge the rigour and reproducibility of your review. It’s a testament to the integrity of your work, showing how you’ve minimised bias.

Body

The heart of your review lies in the body, where you:

  • summarise,
  • analyse, and
  • critique existing literature.

This is where you synthesise evidence, draw connections, and present both sides of any argument. Well-structured paragraphs and clear subheadings guide readers through your analysis, offering insights and fostering a deeper understanding of the subject.

Discussion & Conclusion

The discussion or conclusion section is where you weave together the main points, reflecting on what your findings mean for the field.

It’s about connecting the dots, offering a synthesis of evidence that answers your initial research question. This part often hints at future research directions, suggesting areas that need further exploration due to gaps in existing knowledge.

Citation

The review paper format usually includes a citation list: it is your nod to the scholarly community, acknowledging the contributions of others.

Each citation is a thread in the larger tapestry of academic discourse, enabling readers to delve deeper into the research that has shaped your review.

Tips To Write A Review Article Fast

Writing a review article quickly without sacrificing quality might seem like a tall order, but with the right approach, it’s entirely achievable. 

Clearly Define Your Research Question

Start by clearly defining your research question. A focused question not only narrows down the scope of your literature search but also keeps your review concise and on track.

By honing in on a specific aspect of a broader topic, you can avoid the common pitfall of becoming overwhelmed by the vast expanse of available literature. This specificity allows you to zero in on the most relevant studies, making your review more impactful.

Efficient Literature Searching

Utilise databases specific to your field and employ advanced search techniques like Boolean operators. This can drastically reduce the time you spend sifting through irrelevant articles.

Additionally, leveraging citation chains—looking at who has cited a pivotal paper in your area and who it cites—can uncover valuable sources you might otherwise miss.
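To make the idea concrete, here is a small sketch of how a Boolean search string might be assembled, with synonyms joined by OR and distinct concepts joined by AND. The terms and quoting style are illustrative assumptions, not any specific database’s syntax:

```python
def build_query(concepts):
    """Build a Boolean search string.

    concepts: a list of synonym lists, one per concept,
    e.g. [["eczema", "atopic dermatitis"], ["probiotic"]].
    Synonyms are ORed together; concept groups are ANDed.
    """
    groups = [
        "(" + " OR ".join(f'"{term}"' for term in terms) + ")"
        for terms in concepts
    ]
    return " AND ".join(groups)

query = build_query([
    ["eczema", "atopic dermatitis"],   # population
    ["probiotic", "Lactobacillus"],    # intervention
])
print(query)
# ("eczema" OR "atopic dermatitis") AND ("probiotic" OR "Lactobacillus")
```

Most databases accept strings of roughly this shape, but field tags, truncation, and quoting rules vary, so check each database’s own documentation before running the search.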

Organise Your Findings Systematically

Developing a robust organisation strategy is key. As you gather sources, categorise them based on themes or methodologies.

This not only aids in structuring your review but also in identifying areas where research is lacking or abundant. Organise your findings based on the review paper format.

Tools like citation management software can be invaluable here, helping you keep track of your sources and their key points. We list out some of the best AI tools for academic research here. 


Build An Outline Before Writing

Don’t underestimate the power of a well-structured outline. A clear blueprint of your article can guide your writing process, ensuring that each section flows logically into the next.

This roadmap not only speeds up the writing process by providing a clear direction but also helps maintain coherence, ensuring your review article delivers a compelling narrative that advances understanding in your field.

Start Writing With The Easiest Sections

When it’s time to write, start with sections you find easiest. This might be the methodology or a particular thematic section where you feel most confident.

Getting words on the page can build momentum, making it easier to tackle more challenging sections later.

Remember, your first draft doesn’t have to be perfect; the goal is to start articulating your synthesis of the literature.

Learn How To Write An Article Review

Mastering the review paper format is a crucial step towards efficient academic writing. By adhering to the structured components outlined, you can streamline the creation of a compelling review article.

Embracing these guidelines not only speeds up the writing process but also enhances the clarity and impact of your work, ensuring your contributions to scholarly discourse are both valuable and timely.

A review paper serves to synthesise existing data, offering a panoramic view of the current state of knowledge on a particular topic.

A Review Paper Format Usually Contains What Sections?

You will usually see sections like the introduction, literature review, methodology, analysis and findings, discussion, citations, and conclusion.

How To Write A Review Paper Fast?

The key is to organise and plan things out before you start writing.


Dr Andrew Stapleton has a Masters and PhD in Chemistry from the UK and Australia. He has many years of research experience and has worked as a Postdoctoral Fellow and Associate at a number of Universities. Although having secured funding for his own research, he left academia to help others with his YouTube channel all about the inner workings of academia and how to make it work for you.



Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney . Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

For example, Boyle and colleagues conducted a systematic review answering the question “What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?”

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Other interesting articles
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias . The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative ( qualitative ), quantitative , or both.


Systematic reviews often quantitatively synthesize the evidence using a meta-analysis . A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesize results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size .
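As a toy illustration of the statistical idea, a fixed-effect meta-analysis weights each study’s effect estimate by the inverse of its variance, so that more precise studies count for more. The numbers below are invented, not taken from any real review:

```python
def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance) meta-analysis.

    Returns the pooled effect size and its variance, where each
    study is weighted by 1 / variance.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Three hypothetical studies: effect sizes with their variances.
effect, var = pooled_effect([0.30, 0.10, 0.20], [0.04, 0.01, 0.02])
print(round(effect, 3), round(var, 3))  # 0.157 0.006
```

Real meta-analyses typically also report confidence intervals and heterogeneity statistics, and often use random-effects models rather than the fixed-effect model sketched here.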

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.


A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention , such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question , usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis ), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software . For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

A systematic review has many pros .

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent , so they can be scrutinized by others.
  • They’re thorough : they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons .

  • They’re time-consuming .
  • They’re narrow in scope : they only answer the precise research question.

The 7 steps for conducting a systematic review are explained with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO :

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P ?

Sometimes, you may want to include a fifth component, the type of study design. In this case, the acronym is PICOT:

  • Type of study design(s)

For example, Boyle and colleagues’ systematic review of probiotics for eczema specified:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo , or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized control trials, a type of study design

Their research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?
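The PICO template above can be sketched as a trivial formatter; this is a toy illustration, not part of any guideline:

```python
def pico_question(population, intervention, comparison, outcome):
    """Fill the template 'What is the effectiveness of I versus C for O in P?'."""
    return (f"What is the effectiveness of {intervention} versus "
            f"{comparison} for {outcome} in {population}?")

print(pico_question(
    population="patients with eczema",
    intervention="probiotics",
    comparison="no treatment, a placebo, or a non-probiotic treatment",
    outcome="reducing eczema symptoms and improving quality of life",
))
```

The point of the template is consistency: once each PICO component is pinned down, the research question, the selection criteria, and the search strategy can all be derived from the same four pieces.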

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information : Provide the context of the research question, including why it’s important.
  • Research objective (s) : Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee . This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov .

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus . Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant.
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD) . In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator .

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol . The third person’s job is to break any ties.

To increase inter-rater reliability , ensure that everyone thoroughly understands the selection criteria before you begin.

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts : Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram .

Next, Boyle and colleagues found the full texts for each of the remaining studies. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.
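The tie-breaking rule described above can be sketched as follows (the function and argument names are illustrative):

```python
def screening_decision(reviewer_a, reviewer_b, tiebreaker):
    """Two reviewers screen independently; a third breaks any tie.

    Each argument is True (include the study) or False (exclude it).
    """
    if reviewer_a == reviewer_b:
        return reviewer_a          # agreement: no third opinion needed
    return tiebreaker              # disagreement: third reviewer decides

# Agreement: both include, so the study is included.
print(screening_decision(True, True, tiebreaker=False))   # True
# Disagreement: the tie-breaker's vote decides.
print(screening_decision(True, False, tiebreaker=False))  # False
```

In practice, as in the Boyle example, the “tie-break” is usually a discussion until consensus rather than a mechanical vote, but the record-keeping requirement is the same: every include/exclude decision and its reason should be logged for the PRISMA flow diagram.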

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results . The exact information will depend on your research question, but it might include the year, study design , sample size, context, research findings , and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias .

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group .

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

They also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.
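One way to keep extraction consistent across reviewers is to define a fixed record per study; the fields below are illustrative assumptions, not a standard form:

```python
from dataclasses import dataclass

@dataclass
class ExtractionRecord:
    """One row of a hypothetical data-extraction form."""
    study_id: str
    year: int
    design: str            # e.g. "randomized controlled trial"
    sample_size: int
    effect_estimate: float
    risk_of_bias: str      # e.g. "low", "some concerns", "high"

record = ExtractionRecord(
    study_id="Study01",
    year=2008,
    design="randomized controlled trial",
    sample_size=120,
    effect_estimate=0.15,
    risk_of_bias="low",
)
print(record.study_id, record.risk_of_bias)
```

Using a fixed schema like this makes it obvious when a study is missing a field, which is exactly the situation where the guide advises contacting the original authors.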

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative ( qualitative ): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative : Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis , which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract : A summary of the review
  • Introduction : Including the rationale and objectives
  • Methods : Including the selection criteria, search method, data extraction method, and synthesis method
  • Results : Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion : Including interpretation of the results and limitations of the review
  • Conclusion : The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist .

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews , and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.


A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .

It is often written as part of a thesis, dissertation , or research paper , in order to situate your work in relation to existing knowledge.

A literature review is a survey of credible sources on a topic, often used in dissertations , theses, and research papers . Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other  academic texts , with an introduction , a main body, and a conclusion .

An  annotated bibliography is a list of  source references that has a short description (called an annotation ) for each of the sources. It is often assigned as part of the research process for a  paper .  

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Turney, S. (2023, November 20). Systematic Review | Definition, Example & Guide. Scribbr. Retrieved September 3, 2024, from https://www.scribbr.com/methodology/systematic-review/



Methodology of a systematic review

Affiliations.

  • 1 Hospital Universitario La Paz, Madrid, España. Electronic address: [email protected].
  • 2 Hospital Universitario Fundación Alcorcón, Madrid, España.
  • 3 Instituto Valenciano de Oncología, Valencia, España.
  • 4 Hospital Universitario de Cabueñes, Gijón, Asturias, España.
  • 5 Hospital Universitario Ramón y Cajal, Madrid, España.
  • 6 Hospital Universitario Gregorio Marañón, Madrid, España.
  • 7 Hospital Universitario de Canarias, Tenerife, España.
  • 8 Hospital Clínic, Barcelona, España; EAU Guidelines Office Board Member.
  • PMID: 29731270
  • DOI: 10.1016/j.acuro.2018.01.010

Context: The objective of evidence-based medicine is to employ the best scientific information available to apply to clinical practice. Understanding and interpreting the scientific evidence involves understanding the available levels of evidence, where systematic reviews and meta-analyses of clinical trials are at the top of the levels-of-evidence pyramid.

Acquisition of evidence: The review process should be well developed and planned to reduce biases and eliminate irrelevant and low-quality studies. The steps for implementing a systematic review include (i) correctly formulating the clinical question to answer (PICO), (ii) developing a protocol (inclusion and exclusion criteria), (iii) performing a detailed and broad literature search and (iv) screening the abstracts of the studies identified in the search and subsequently of the selected complete texts (PRISMA).

Synthesis of the evidence: Once the studies have been selected, we need to (v) extract the necessary data into a form designed in the protocol to summarise the included studies, (vi) assess the biases of each study, identifying the quality of the available evidence, and (vii) develop tables and text that synthesise the evidence.

Conclusions: A systematic review involves a critical and reproducible summary of the results of the available publications on a particular topic or clinical question. To improve scientific writing, the methodology is shown in a structured manner to implement a systematic review.

Keywords: Meta-analysis; Metaanálisis; Methodology; Metodología; Revisión sistemática; Systematic review.

Copyright © 2018 AEU. Publicado por Elsevier España, S.L.U. All rights reserved.


  • Effects of different nutrition interventions on sarcopenia criteria in older people: A study protocol for a systematic review of systematic reviews with meta-analysis. Ferreira LF, Roda Cardoso J, Telles da Rosa LH. Ferreira LF, et al. PLoS One. 2024 May 10;19(5):e0302843. doi: 10.1371/journal.pone.0302843. eCollection 2024. PLoS One. 2024. PMID: 38728270 Free PMC article.
  • Editorial: Reviews in psychiatry 2022: psychopharmacology. Taube M. Taube M. Front Psychiatry. 2024 Feb 28;15:1382027. doi: 10.3389/fpsyt.2024.1382027. eCollection 2024. Front Psychiatry. 2024. PMID: 38482070 Free PMC article. No abstract available.
  • Writing a Scientific Review Article: Comprehensive Insights for Beginners. Amobonye A, Lalung J, Mheta G, Pillai S. Amobonye A, et al. ScientificWorldJournal. 2024 Jan 17;2024:7822269. doi: 10.1155/2024/7822269. eCollection 2024. ScientificWorldJournal. 2024. PMID: 38268745 Free PMC article. Review.
  • Appraising systematic reviews: a comprehensive guide to ensuring validity and reliability. Shaheen N, Shaheen A, Ramadan A, Hefnawy MT, Ramadan A, Ibrahim IA, Hassanein ME, Ashour ME, Flouty O. Shaheen N, et al. Front Res Metr Anal. 2023 Dec 21;8:1268045. doi: 10.3389/frma.2023.1268045. eCollection 2023. Front Res Metr Anal. 2023. PMID: 38179256 Free PMC article. Review.
  • Search in MeSH

LinkOut - more resources

Full text sources.

  • Elsevier Science

Other Literature Sources

  • scite Smart Citations

Research Materials

  • NCI CPTC Antibody Characterization Program
  • Citation Manager

NCBI Literature Resources

MeSH PMC Bookshelf Disclaimer

The PubMed wordmark and PubMed logo are registered trademarks of the U.S. Department of Health and Human Services (HHS). Unauthorized use of these marks is strictly prohibited.

Methodology

Methodologies should present a new experimental or computational method, test or procedure. The method described may either be completely new, or may offer a better version of an existing method. The article must describe a demonstrable advance on what is currently available. The method needs to have been well tested and ideally, but not necessarily, used in a way that proves its value.

Systematic Reviews strongly encourages that all datasets on which the conclusions of the paper rely be made available to readers. We encourage authors to ensure that their datasets are either deposited in publicly available repositories (where available and appropriate) or presented in the main manuscript or additional supporting files whenever possible. Please see Springer Nature’s information on recommended repositories.

Preparing your manuscript

The information below details the section headings that you should include in your manuscript and what information should be within each section.

Please note that your manuscript must include a 'Declarations' section including all of the subheadings (please see below for more information).

The title page should:

  • "A versus B in the treatment of C: a randomized controlled trial", "X is a risk factor for Y: a case control study", "What is the impact of factor X on subject Y: A systematic review"
  • or for non-clinical or non-research studies a description of what the article reports
  • if a collaboration group should be listed as an author, please list the Group name as an author. If you would like the names of the individual members of the Group to be searchable through their individual PubMed records, please include this information in the “Acknowledgements” section in accordance with the instructions below
  • Large Language Models (LLMs), such as ChatGPT , do not currently satisfy our authorship criteria . Notably an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. Use of an LLM should be properly documented in the Methods section (and if a Methods section is not available, in a suitable alternative part) of the manuscript.
  • indicate the corresponding author

The Abstract should not exceed 350 words. Please minimize the use of abbreviations and do not cite references in the abstract. Reports of randomized controlled trials should follow the CONSORT extension for abstracts. The abstract must include the following separate sections:

  • Background: the context and purpose of the study
  • Methods: how the study was performed and statistical tests used
  • Results: the main findings
  • Conclusions: brief summary and potential implications
  • Trial registration: If your article reports the results of a health care intervention on human participants, it must be registered in an appropriate registry and the registration number and date of registration should be stated in this section. If it was not registered prospectively (before enrollment of the first participant), you should include the words 'retrospectively registered'. See our editorial policies for more information on trial registration

Three to ten keywords representing the main content of the article should be provided.

The Background section should explain the background to the study, its aims, a summary of the existing literature and why this study was necessary or its contribution to the field.

The methods section should include:

  • the aim, design and setting of the study
  • the characteristics of participants or description of materials
  • a clear description of all processes, interventions and comparisons. Generic drug names should generally be used. When proprietary brands are used in research, include the brand names in parentheses
  • the type of statistical analysis used, including a power calculation if appropriate

The Results section should include the findings of the study including, if appropriate, the results of statistical analysis, which must be included either in the text or as tables and figures.

The Discussion section should discuss the implications of the findings in the context of existing research and highlight the limitations of the study.

Conclusions

This should state clearly the main conclusions and provide an explanation of the importance and relevance of the study reported.

List of abbreviations

If abbreviations are used in the text they should be defined in the text at first use, and a list of abbreviations should be provided.

Declarations

All manuscripts must contain the following sections under the heading 'Declarations':

  • Ethics approval and consent to participate
  • Consent for publication
  • Availability of data and materials
  • Competing interests
  • Authors' contributions
  • Acknowledgements
  • Authors' information (optional)

Please see below for details on the information to be included in these sections.

If any of the sections are not relevant to your manuscript, please include the heading and write 'Not applicable' for that section. 

Manuscripts reporting studies involving human participants, human data or human tissue must:

  • include a statement on ethics approval and consent (even where the need for approval was waived)
  • include the name of the ethics committee that approved the study and the committee’s reference number if appropriate

Studies involving animals must include a statement on ethics approval and for experimental studies involving client-owned animals, authors must also include a statement on informed consent from the client or owner.

See our editorial policies for more information.

If your manuscript does not report on or involve the use of any animal or human data or tissue, please state “Not applicable” in this section.

If your manuscript contains any individual person’s data in any form (including any individual details, images or videos), consent for publication must be obtained from that person, or in the case of children, their parent or legal guardian. All presentations of case reports must have consent for publication.

You can use your institutional consent form or our consent form if you prefer. You should not send the form to us on submission, but we may request to see a copy at any stage (including after publication).

See our editorial policies for more information on consent for publication.

If your manuscript does not contain data from any individual person, please state “Not applicable” in this section.

All manuscripts must include an ‘Availability of data and materials’ statement. Data availability statements should include information on where data supporting the results reported in the article can be found including, where applicable, hyperlinks to publicly archived datasets analysed or generated during the study. By data we mean the minimal dataset that would be necessary to interpret, replicate and build upon the findings reported in the article. We recognise it is not always possible to share research data publicly, for instance when individual privacy could be compromised, and in such instances data availability should still be stated in the manuscript along with any conditions for access.

Authors are also encouraged to preserve search strings on searchRxiv ( https://searchrxiv.org/ ), an archive that supports researchers in reporting, storing and sharing their searches consistently and enables them to review and re-use existing searches. searchRxiv enables researchers to obtain a digital object identifier (DOI) for their search, allowing it to be cited.

Data availability statements can take one of the following forms (or a combination of more than one if required for multiple datasets):

  • The datasets generated and/or analysed during the current study are available in the [NAME] repository, [PERSISTENT WEB LINK TO DATASETS]
  • The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
  • All data generated or analysed during this study are included in this published article [and its supplementary information files].
  • The datasets generated and/or analysed during the current study are not publicly available due [REASON WHY DATA ARE NOT PUBLIC] but are available from the corresponding author on reasonable request.
  • Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
  • The data that support the findings of this study are available from [third party name] but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of [third party name].
  • Not applicable. If your manuscript does not contain any data, please state 'Not applicable' in this section.

More examples of template data availability statements, which include examples of openly available and restricted access datasets, are available here .

BioMed Central strongly encourages the citation of any publicly available data on which the conclusions of the paper rely in the manuscript. Data citations should include a persistent identifier (such as a DOI) and should ideally be included in the reference list. Citations of datasets, when they appear in the reference list, should include the minimum information recommended by DataCite and follow journal style. Dataset identifiers including DOIs should be expressed as full URLs. For example:

Hao Z, AghaKouchak A, Nakhjiri N, Farahmand A. Global integrated drought monitoring and prediction system (GIDMaPS) data sets. figshare. 2014. http://dx.doi.org/10.6084/m9.figshare.853801

With the corresponding text in the Availability of data and materials statement:

The datasets generated and/or analysed during the current study are available in the [NAME] repository, [PERSISTENT WEB LINK TO DATASETS]. [Reference number]

If you wish to co-submit a data note describing your data to be published in BMC Research Notes , you can do so by visiting our submission portal . Data notes support open data and help authors to comply with funder policies on data sharing. Co-published data notes will be linked to the research article the data support ( example ).

All financial and non-financial competing interests must be declared in this section.

See our editorial policies for a full explanation of competing interests. If you are unsure whether you or any of your co-authors have a competing interest please contact the editorial office.

Please use the authors' initials to refer to each author's competing interests in this section.

If you do not have any competing interests, please state "The authors declare that they have no competing interests" in this section.

All sources of funding for the research reported should be declared. If the funder has a specific role in the conceptualization, design, data collection, analysis, decision to publish, or preparation of the manuscript, this should be declared.

The individual contributions of authors to the manuscript should be specified in this section. Guidance and criteria for authorship can be found in our editorial policies .

Please use initials to refer to each author's contribution in this section, for example: "FC analyzed and interpreted the patient data regarding the hematological disease and the transplant. RH performed the histological examination of the kidney, and was a major contributor in writing the manuscript. All authors read and approved the final manuscript."

Please acknowledge anyone who contributed towards the article who does not meet the criteria for authorship including anyone who provided professional writing services or materials.

Authors should obtain permission to acknowledge from all those mentioned in the Acknowledgements section.

See our editorial policies for a full explanation of acknowledgements and authorship criteria.

If you do not have anyone to acknowledge, please write "Not applicable" in this section.

Group authorship (for manuscripts involving a collaboration group): if you would like the names of the individual members of a collaboration Group to be searchable through their individual PubMed records, please ensure that the title of the collaboration Group is included on the title page and in the submission system and also include collaborating author names as the last paragraph of the “Acknowledgements” section. Please add authors in the format First Name, Middle initial(s) (optional), Last Name. You can add institution or country information for each author if you wish, but this should be consistent across all authors.

Please note that individual names may not be present in the PubMed record at the time a published article is initially included in PubMed as it takes PubMed additional time to code this information.

Authors' information

This section is optional.

You may choose to use this section to include any relevant information about the author(s) that may aid the reader's interpretation of the article, and understand the standpoint of the author(s). This may include details about the authors' qualifications, current positions they hold at institutions or societies, or any other relevant background information. Please refer to authors using their initials. Note this section should not be used to describe any competing interests.

Footnotes can be used to give additional information, which may include the citation of a reference included in the reference list. They should not consist solely of a reference citation, and they should never include the bibliographic details of a reference. They should also not contain any figures or tables.

Footnotes to the text are numbered consecutively; those to tables should be indicated by superscript lower-case letters (or asterisks for significance values and other statistical data). Footnotes to the title or the authors of the article are not given reference symbols.

Always use footnotes instead of endnotes.

Examples of the Vancouver reference style are shown below.

See our editorial policies for author guidance on good citation practice

Web links and URLs: All web links and URLs, including links to the authors' own websites, should be given a reference number and included in the reference list rather than within the text of the manuscript. They should be provided in full, including both the title of the site and the URL, as well as the date the site was accessed, in the following format: The Mouse Tumor Biology Database. http://tumor.informatics.jax.org/mtbwi/index.do . Accessed 20 May 2013. If an author or group of authors can clearly be associated with a web link, such as for weblogs, then they should be included in the reference.

Example reference style:

Article within a journal

Smith JJ. The world of science. Am J Sci. 1999;36:234-5.

Article within a journal (no page numbers)

Rohrmann S, Overvad K, Bueno-de-Mesquita HB, Jakobsen MU, Egeberg R, Tjønneland A, et al. Meat consumption and mortality - results from the European Prospective Investigation into Cancer and Nutrition. BMC Medicine. 2013;11:63.

Article within a journal by DOI

Slifka MK, Whitton JL. Clinical implications of dysregulated cytokine production. Dig J Mol Med. 2000; doi:10.1007/s801090000086.

Article within a journal supplement

Frumin AM, Nussbaum J, Esposito M. Functional asplenia: demonstration of splenic activity by bone marrow scan. Blood 1979;59 Suppl 1:26-32.

Book chapter, or an article within a book

Wyllie AH, Kerr JFR, Currie AR. Cell death: the significance of apoptosis. In: Bourne GH, Danielli JF, Jeon KW, editors. International review of cytology. London: Academic; 1980. p. 251-306.

OnlineFirst chapter in a series (without a volume designation but with a DOI)

Saito Y, Hyuga H. Rate equation approaches to amplification of enantiomeric excess and chiral symmetry breaking. Top Curr Chem. 2007. doi:10.1007/128_2006_108.

Complete book, authored

Blenkinsopp A, Paxton P. Symptoms in the pharmacy: a guide to the management of common illness. 3rd ed. Oxford: Blackwell Science; 1998.

Online document

Doe J. Title of subordinate document. In: The dictionary of substances and their effects. Royal Society of Chemistry. 1999. http://www.rsc.org/dose/title of subordinate document. Accessed 15 Jan 1999.

Online database

Healthwise Knowledgebase. US Pharmacopeia, Rockville. 1998. http://www.healthwise.org. Accessed 21 Sept 1998.

Supplementary material/private homepage

Doe J. Title of supplementary material. 2000. http://www.privatehomepage.com. Accessed 22 Feb 2000.

University site

Doe, J: Title of preprint. http://www.uni-heidelberg.de/mydata.html (1999). Accessed 25 Dec 1999.

Doe, J: Trivial HTTP, RFC2169. ftp://ftp.isi.edu/in-notes/rfc2169.txt (1999). Accessed 12 Nov 1999.

Organization site

ISSN International Centre: The ISSN register. http://www.issn.org (2006). Accessed 20 Feb 2007.

Dataset with persistent identifier

Zheng L-Y, Guo X-S, He B, Sun L-J, Peng Y, Dong S-S, et al. Genome data from sweet and grain sorghum (Sorghum bicolor). GigaScience Database. 2011. http://dx.doi.org/10.5524/100012 .
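For illustration, the journal-article pattern above can be expressed as a small formatting helper. This is a hedged sketch, not an official template: the function name is hypothetical, and the six-author "et al." cutoff is an assumption based on common NLM/Vancouver practice (it matches the Rohrmann example above, which lists six authors before "et al.").

```python
# Illustrative sketch: format a journal article reference in the Vancouver
# style shown above. Helper name and six-author cutoff are assumptions.

def vancouver_journal(authors, title, journal, year, volume, pages):
    """Return 'Authors. Title. Journal. Year;Volume:Pages.'"""
    if len(authors) > 6:
        # List the first six authors, then "et al."
        author_str = ", ".join(authors[:6]) + ", et al"
    else:
        author_str = ", ".join(authors)
    return f"{author_str}. {title}. {journal}. {year};{volume}:{pages}."

# Reproduces the "Article within a journal" example above:
ref = vancouver_journal(["Smith JJ"], "The world of science",
                        "Am J Sci", 1999, 36, "234-5")
print(ref)  # Smith JJ. The world of science. Am J Sci. 1999;36:234-5.
```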

Figures, tables and additional files

See  General formatting guidelines  for information on how to format figures, tables and additional files.


Systematic Reviews

ISSN: 2046-4053


Scholarly Articles: How can I tell?


Methodology


The methodology section (or methods section) tells you how the author(s) went about doing their research. It should let you know (a) what method they used to gather data (surveys, interviews, experiments, etc.), (b) why they chose this method, and (c) what the limitations of this method are.

The methodology section should be detailed enough that another researcher could replicate the study described. When you read the methodology or methods section:

  • What kind of research method did the authors use? Is it an appropriate method for the type of study they are conducting?
  • How did the authors get their test subjects? What criteria did they use?
  • What are the contexts of the study that may have affected the results (e.g. environmental conditions, lab conditions, timing of questions, etc.)?
  • Is the sample size representative of the larger population (i.e., was it big enough)?
  • Are the data collection instruments and procedures likely to have measured all the important characteristics with reasonable accuracy?
  • Does the data analysis appear to have been done with care, and were appropriate analytical techniques used?

A good researcher will always let you know about the limitations of his or her research.


Literature Review on Collaborative Project Delivery for Sustainable Construction: Bibliometric Analysis


1. Introduction

2. Literature Review
2.1. Collaborative Project Delivery
2.2. Design Build (DB)
2.3. Construction Manager at Risk (CMAR)
2.4. Integrated Project Delivery Method (IPD)
2.5. Sustainability
2.6. Sustainable Construction
2.7. Benefits of ECI Comparing Case Studies
2.8. Collaborative Delivery Models
3. Methodology
3.1. Research Methods
3.2. Database Research
4.1. IPD, Design-Build, and CMAR Overview
4.1.1. Yearly Publication Distribution of DB, CMAR and IPD
4.1.2. Major Country Analysis
4.1.3. Most Relevant and Influential Journals
4.1.4. Corresponding Author Countries
4.2. Keyword Analysis
4.2.1. High-Frequency Keyword Analysis
4.2.2. Co-Occurrence Network Analysis
4.2.3. Analysis of Keywords' Frequency over Time
5. Discussion
5.1. Findings of Advantages and Disadvantages of IPD, DB, and CMAR for Sustainable Construction
5.1.1. Advantages of IPD
5.1.2. Advantages of Design-Build
5.1.3. Advantages of Construction Manager at Risk
5.1.4. Disadvantages of IPD
5.1.5. Disadvantages of Design-Build
5.1.6. Disadvantages of Construction Manager at Risk
5.2. Most Suitable CPD Technique for Sustainable Construction Based on Literature Review
5.2.1. Limitations
5.2.2. Recommendations for Future Research
6. Future Trend
6.1. Enhancing Innovation through Collaborative Project Delivery
6.2. Open Communication and Blockchain Technology
6.3. Multi-Party Agreement
6.4. Utilizing Artificial Intelligence in Decision Support Systems
7. Conclusions
Author Contributions
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest

  • Park, H.-S.; Lee, D.; Kim, S.; Kim, J.-L. Comparing Project Performance of Design-Build and Design-Bid-Build Methods for Large-sized Public Apartment Housing Projects in Korea. J. Asian Archit. Build. Eng. 2015 , 14 , 323–330. [ Google Scholar ] [ CrossRef ]
  • Shrestha, P.P.; Batista, J.; Maharajan, R. Risks involved in using alternative project delivery (APD) methods in water and wastewater projects. Procedia Eng. 2016 , 145 , 219–223. [ Google Scholar ] [ CrossRef ]
  • Hettiaarachchige, N.; Rathnasinghe, A.; Ranadewa, K.; Thurairajah, N. Thurairajah, Lean Integrated Project Delivery for Construction Procurement: The Case of Sri Lanka. Buildings 2022 , 12 , 524. [ Google Scholar ] [ CrossRef ]
  • Kent, D.C.; Becerik-Gerber, B. Understanding Construction Industry Experience and Attitudes toward Integrated Project Delivery. J. Constr. Eng. Manag. 2010 , 136 , 815–825. [ Google Scholar ] [ CrossRef ]
  • Franz, B.; Leicht, R.; Molenaar, K.; Messner, J. Impact of Team Integration and Group Cohesion on Project Delivery Performance. J. Constr. Eng. Manag. 2017 , 143 , 04016088. [ Google Scholar ] [ CrossRef ]
  • Engebø, A.; Klakegg, O.J.; Lohne, J.; Lædre, O. A collaborative project delivery method for design of a high-performance building. Int. J. Manag. Proj. Bus. 2020 , 13 , 1141–1165. [ Google Scholar ] [ CrossRef ]
  • Ahmed, S.; El-Sayegh, S. Critical Review of the Evolution of Project Delivery Methods in the Construction Industry. Buildings 2020 , 11 , 11. [ Google Scholar ] [ CrossRef ]
  • Bond-Barnard, T.J.; Fletcher, L.; Steyn, H. Linking trust and collaboration in project teams to project management success. Int. J. Manag. Proj. Bus. 2018 , 11 , 432–457. [ Google Scholar ] [ CrossRef ]
  • Rodrigues, M.R.; Lindhard, S.M. Benefits and challenges to applying IPD: Experiences from a Norwegian mega-project. Constr. Innov. 2021 , 23 , 287–305. [ Google Scholar ] [ CrossRef ]
  • Kaminsky, J. The fourth pillar of infrastructure sustainability: Tailoring civil infrastructure to social context. Constr. Manag. Econ. 2015 , 33 , 299–309. [ Google Scholar ] [ CrossRef ]
  • Al Khalil, M.I. Selecting the appropriate project delivery method using AHP. Int. J. Proj. Manag. 2002 , 20 , 469–474. [ Google Scholar ] [ CrossRef ]
  • Ibbs, C.W.; Kwak, Y.H.; Ng, T.; Odabasi, A.M. Project Delivery Systems and Project Change: Quantitative Analysis. J. Constr. Eng. Manag. 2003 , 129 , 382–387. [ Google Scholar ] [ CrossRef ]
  • Jansen, J.; Beck, A. Overcoming the Challenges of Large Diameter Water Project in North Texas via CMAR Delivery Method. In Proceedings of the Pipelines 2020, San Antonio, TX, USA, 9–12 August 2020 (conference held virtually); pp. 264–271. [ Google Scholar ] [ CrossRef ]
  • Bingham, E.; Gibson, G.E.; Asmar, M.E. Measuring User Perceptions of Popular Transportation Project Delivery Methods Using Least Significant Difference Intervals and Multiple Range Tests. J. Constr. Eng. Manag. 2018 , 144 , 04018033. [ Google Scholar ] [ CrossRef ]
  • Cho, Y.J. A review of construction delivery systems: Focus on the construction management at risk system in the Korean public construction market. KSCE J. Civ. Eng. 2016 , 20 , 530–537. [ Google Scholar ] [ CrossRef ]
  • Rosayuru, H.D.R.R.; Waidyasekara, K.G.A.S.; Wijewickrama, M.K.C.S. Sustainable BIM based integrated project delivery system for construction industry in Sri Lanka. Int. J. Constr. Manag. 2022 , 22 , 769–783. [ Google Scholar ] [ CrossRef ]
  • Pishdad-Bozorgi, P.; Beliveau, Y.J. Symbiotic Relationships between Integrated Project Delivery (IPD) and Trust. Int. J. Constr. Educ. Res. 2016 , 12 , 179–192. [ Google Scholar ] [ CrossRef ]
  • Sherif, M.; Abotaleb, I.; Alqahtani, F.K. Application of Integrated Project Delivery (IPD) in the Middle East: Implementation and Challenges. Buildings 2022 , 12 , 467. [ Google Scholar ] [ CrossRef ]
  • Manata, B.; Garcia, A.J.; Mollaoglu, S.; Miller, V.D. The effect of commitment differentiation on integrated project delivery team dynamics: The critical roles of goal alignment, communication behaviors, and decision quality. Int. J. Proj. Manag. 2021 , 39 , 259–269. [ Google Scholar ] [ CrossRef ]
  • Kraatz, J.A.; Sanchez, A.X.; Hampson, K.D. Digital Modeling, Integrated Project Delivery and Industry Transformation: An Australian Case Study. Buildings 2014 , 4 , 453–466. [ Google Scholar ] [ CrossRef ]
  • Zhang, L.; He, J.; Zhou, S. Sharing Tacit Knowledge for Integrated Project Team Flexibility: Case Study of Integrated Project Delivery. J. Constr. Eng. Manag. 2013 , 139 , 795–804. [ Google Scholar ] [ CrossRef ]
  • El Asmar, M.; Hanna, A.S.; Loh, W.-Y. Quantifying Performance for the Integrated Project Delivery System as Compared to Established Delivery Systems. J. Constr. Eng. Manag. 2013 , 139 , 04013012. [ Google Scholar ] [ CrossRef ]
  • Ghassemi, R.; Becerik-Gerber, B. Transitioning to integrated project delivery: Potential barriers and lessons learned. Lean Constr. J. 2011 , 32–52. Available online: https://leanconstruction.org/resources/lean-construction-journal/lcj-back-issues/2011-issue/ (accessed on 11 August 2024).
  • Mei, T.; Guo, Z.; Li, P.; Fang, K.; Zhong, S. Influence of Integrated Project Delivery Principles on Project Performance in China: An SEM-Based Approach. Sustainability 2022 , 14 , 4381. [ Google Scholar ] [ CrossRef ]
  • Ilozor, B.D.; Kelly, D.J. Building information modeling and integrated project delivery in the commercial construction industry: A conceptual study. J. Eng. Proj. Prod. Manag. 2012 , 2 , 23–36. [ Google Scholar ] [ CrossRef ]
  • Zabihi, H.; Habib, F.; Mirsaeedie, L. Sustainability in Building and Construction: Revising Definitions and Concepts. Int. J. Emerg. Sci. 2012 , 2 , 570–578. [ Google Scholar ]
  • Young, J.W.S. A Framework for the Ultimate Environmental Index—Putting Atmospheric Change Into Context With Sustainability. Environ. Monit. Assess. 1997 , 46 , 135–149. [ Google Scholar ] [ CrossRef ]
  • Ding, G.K.C. Sustainable construction—The role of environmental assessment tools. J. Environ. Manag. 2008 , 86 , 451–464. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Conte, E. The Era of Sustainability: Promises, Pitfalls and Prospects for Sustainable Buildings and the Built Environment. Sustainability 2018 , 10 , 2092. [ Google Scholar ] [ CrossRef ]
  • Standardized Method of Life Cycle Costing for Construction Procurement. A Supplement to BS ISO 15686-5. Buildings and Constructed Assets. Service Life Planning. Life Cycle Costing ; BSI British Standards: London, UK, 2008. [ CrossRef ]
  • A Hybrid Multi-Criteria Decision Support System for Selecting the Most Sustainable Structural Material for a Multistory Building Construction. Sustainability . Available online: https://www.mdpi.com/2071-1050/15/4/3128 (accessed on 2 April 2024).
  • Korkmaz, S.; Riley, D.; Horman, M. Piloting Evaluation Metrics for Sustainable High-Performance Building Project Delivery. J. Constr. Eng. Manag. 2010 , 136 , 877–885. [ Google Scholar ] [ CrossRef ]
  • Ng, M.S.; Graser, K.; Hall, D.M. Digital fabrication, BIM and early contractor involvement in design in construction projects: A comparative case study. Archit. Eng. Des. Manag. 2021 , 19 , 39–55. [ Google Scholar ] [ CrossRef ]
  • Moradi, S.; Kähkönen, K.; Sormunen, P. Analytical and Conceptual Perspectives toward Behavioral Elements of Collaborative Delivery Models in Construction Projects. Buildings 2022 , 12 , 316. [ Google Scholar ] [ CrossRef ]
  • Zupic, I.; Čater, T. Bibliometric Methods in Management and Organization. Organ. Res. Methods 2015 , 18 , 429–472. Available online: https://journals.sagepub.com/doi/abs/10.1177/1094428114562629 (accessed on 3 April 2024).
  • Rozas, L.W.; Klein, W.C. The Value and Purpose of the Traditional Qualitative Literature Review. J. Evid.-Based Soc. Work 2010 , 7 , 387–399. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Cobo, M.J.; López-Herrera, A.G.; Herrera-Viedma, E.; Herrera, F. Science mapping software tools: Review, analysis, and cooperative study among tools. J. Am. Soc. Inf. Sci. Technol. 2011 , 62 , 1382–1402. [ Google Scholar ] [ CrossRef ]
  • Cancino, C.A.; Merigó, J.M.; Coronado, F.C. A bibliometric analysis of leading universities in innovation research. J. Innov. Knowl. 2017 , 2 , 106–124. [ Google Scholar ] [ CrossRef ]
  • Pedro, L.F.M.G.; Barbosa, C.M.M.d.O.; Santos, C.M.d.N. A critical review of mobile learning integration in formal educational contexts. Int. J. Educ. Technol. High. Educ. 2018 , 15 , 10. [ Google Scholar ] [ CrossRef ]
  • Wen, S.; Tang, H.; Ying, F.; Wu, G. Exploring the Global Research Trends of Supply Chain Management of Construction Projects Based on a Bibliometric Analysis: Current Status and Future Prospects. Buildings 2023 , 13 , 373. [ Google Scholar ] [ CrossRef ]
  • Hosseini, M.R.; Martek, I.; Zavadskas, E.K.; Aibinu, A.A.; Arashpour, M.; Chileshe, N. Critical evaluation of off-site construction research: A Scientometric analysis. Autom. Constr. 2018 , 87 , 235–247. [ Google Scholar ] [ CrossRef ]
  • Toyin, J.O.; Mewomo, M.C. Overview of BIM contributions in the construction phase: Review and bibliometric analysis. J. Inf. Technol. Constr. 2023 , 28 , 500–514. [ Google Scholar ] [ CrossRef ]
  • Kahvandi, Z.; Saghatforoush, E.; Alinezhad, M.; Noghli, F. Integrated Project Delivery (IPD) Research Trends. J. Eng. 2017 , 7 , 99–114. [ Google Scholar ] [ CrossRef ]
  • Hale, D.R.; Shrestha, P.P.; Gibson, G.E.; Migliaccio, G.C. Empirical Comparison of Design/Build and Design/Bid/Build Project Delivery Methods. J. Constr. Eng. Manag. 2009 , 135 , 579–587. [ Google Scholar ] [ CrossRef ]
  • Mollaoglu-Korkmaz, S.; Swarup, L.; Riley, D. Delivering Sustainable, High-Performance Buildings: Influence of Project Delivery Methods on Integration and Project Outcomes. J. Manag. Eng. 2013 , 29 , 71–78. [ Google Scholar ] [ CrossRef ]
  • Ugwu, O.O.; Haupt, T.C. Key performance indicators and assessment methods for infrastructure sustainability—a South African construction industry perspective. Build. Environ. 2007 , 42 , 665–680. [ Google Scholar ] [ CrossRef ]
  • Kines, P.; Andersen, L.P.S.; Spangenberg, S.; Mikkelsen, K.L.; Dyreborg, J.; Zohar, D. Improving construction site safety through leader-based verbal safety communication. J. Safety Res. 2010 , 41 , 399–406. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Ballard, G. The Lean Project Delivery System: An Update. Lean Constr. J. 2008 . [ Google Scholar ]
  • Bynum, P.; Issa, R.R.A.; Olbina, S. Building information modeling in support of sustainable design and construction. J. Constr. Eng. Manag. 2013 , 139 , 24–34. [ Google Scholar ] [ CrossRef ]
  • Choudhry, R.M.; Fang, D.; Lingard, H. Measuring Safety Climate of a Construction Company. J. Constr. Eng. Manag. 2009 , 135 , 890–899. [ Google Scholar ] [ CrossRef ]
  • Wardani, M.A.E.; Messner, J.I.; Horman, M.J. Comparing procurement methods for Design-Build projects. J. Constr. Eng. Manag. 2006 , 132 , 230–238. [ Google Scholar ] [ CrossRef ]
  • Liu, J.; Zhao, X.; Yan, P. Risk Paths in International Construction Projects: Case Study from Chinese Contractors. J. Constr. Eng. Manag. 2016 , 142 . [ Google Scholar ] [ CrossRef ]
  • El-Sayegh, S. Evaluating the effectiveness of project delivery methods. J. Constr. Manag. Econ. 2008 , 23 , 457–465. [ Google Scholar ]
  • Fang, C.; Marle, F.; Zio, E.; Bocquet, J.-C. Network theory-based analysis of risk interactions in large engineering projects. Reliability Eng. Syst. Safety 2012 , 106 , 1–10. [ Google Scholar ] [ CrossRef ]
  • Franz, B.; Leicht, R.M. Initiating IPD Concepts on Campus Facilities with a ‘Collaboration Addendum’. In Proceedings of the Construction Research Congress 2012, West Lafayette, IN, USA, 21–23 May 2012; pp. 61–70. [ Google Scholar ] [ CrossRef ]
  • Kim, H.; Kim, K.; Kim, H. Vision-Based Object-Centric Safety Assessment Using Fuzzy Inference: Monitoring Struck-By Accidents with Moving Objects. J. Comput. Civil Eng. 2016 , 30 . [ Google Scholar ] [ CrossRef ]
  • Zhou, Y.; Ding, L.Y.; Chen, L.J. Application of 4D visualization technology for safety management in metro construction. Automation Constr. 2013 , 34 , 25–36. [ Google Scholar ] [ CrossRef ]
  • Wanberg, J.; Harper, C.; Hallowell, M.R.; Rajendran, S. Relationship between Construction Safety and Quality Performance. J. Constr. Eng. Manag. 2013 , 139 . [ Google Scholar ] [ CrossRef ]
  • Shrestha, P.P.; O’Connor, J.T.; Gibson, G.E. Performance comparison of large Design-Build and Design-Bid-Build highway projects. J. Constr. Eng. Manag. 2012 , 138 , 1–13. [ Google Scholar ] [ CrossRef ]
  • Torabi, S.A.; Hassini, E. Multi-site production planning integrating procurement and distribution plans in multi-echelon supply chains: An interactive fuzzy goal programming approach. Int. J. Prod. Res. 2009 , 47 , 5475–5499. [ Google Scholar ] [ CrossRef ]
  • Baradan, S.; Usmen, M. Comparative Injury and Fatality Risk Analysis of Building Trades. J. Constr. Eng. Manag.-ASCE 2006 , 132 . [ Google Scholar ] [ CrossRef ]
  • Levitt, R.E. CEM Research for the Next 50 Years: Maximizing Economic, Environmental, and Societal Value of the Built Environment1. J. Constr. Eng. Manag. 2007 , 133 , 619–628. [ Google Scholar ] [ CrossRef ]
  • Araya, F. Modeling the spread of COVID-19 on construction workers: An agent-based approach. Saf. Sci. 2021 , 133 , 105022. [ Google Scholar ] [ CrossRef ]
  • Zheng, X.; Le, Y.; Chan, A.P.; Hu, Y.; Li, Y. Review of the application of social network analysis (SNA) in construction project management research. Int. J. Proj. Manag. 2016 , 34 , 1214–1225. [ Google Scholar ] [ CrossRef ]
  • Elghaish, F.; Abrishami, S. A centralised cost management system: Exploiting EVM and ABC within IPD. Eng. Constr. Archit. Manag. 2021 , 28 , 549–569. [ Google Scholar ] [ CrossRef ]
  • Smith, R.E.; Mossman, A.; Emmitt, S. Lean and integrated project delivery. Lean Constr. J. 2011 , 1–16. [ Google Scholar ]
  • Bröchner, J.; Badenfelt, U. Changes and change management in construction and IT projects. Autom. Constr. 2011 , 20 , 767–775. [ Google Scholar ] [ CrossRef ]
  • Monteiro, A.; Mêda, P.; Martins, J.P. Framework for the coordinated application of two different integrated project delivery platforms. Autom. Constr. 2014 , 38 , 87–99. [ Google Scholar ] [ CrossRef ]
  • Azhar, N.; Kang, Y.; Ahmad, I.U. Factors influencing integrated project delivery in publicly owned construction projects: An information modelling perspective. Procedia Eng. 2014 , 77 , 213–221. [ Google Scholar ] [ CrossRef ]
  • Mihic, M.; Sertic, J.; Zavrski, I. Integrated Project Delivery as Integration between Solution Development and Solution Implementation. Procedia Soc. Behav. Sci. 2014 , 119 , 557–565. [ Google Scholar ] [ CrossRef ]
  • Nawi, M.N.M.; Haron, A.T.; Hamid, Z.A.; Kamar, K.A.M.; Baharuddin, Y. Improving integrated practice through building information modeling-integrated project delivery (BIM-IPD) for Malaysian industrialised building system (IBS) Construction Projects. Malays. Constr. Res. J. 2014 , 15 , 29–38. Available online: https://dsgate.uum.edu.my/jspui/handle/123456789/1651 (accessed on 24 April 2024).
  • Ma, Z.; Zhang, D.; Li, J. A dedicated collaboration platform for Integrated Project Delivery. Autom. Constr. 2018 , 86 , 199–209. [ Google Scholar ] [ CrossRef ]
  • Yadav, S.; Kanade, G. Application of Revit as Building Information Modeling (BIM) for Integrated Project Delivery (IPD) to Building Construction Project—A Review. Int. Res. J. Eng. Technol. 2018 , 5 , 11–14. [ Google Scholar ]
  • Salim, M.S.; Mahjoob, A.M.R. Integrated project delivery (IPD) method with BIM to improve the project performance: A case study in the Republic of Iraq. Asian J. Civ. Eng. 2020 , 21 , 947–957. [ Google Scholar ] [ CrossRef ]
  • Ling, Y.Y.; Lau, B.S.Y. A case study on the management of the development of a large-scale power plant project in East Asia based on design-build arrangement. Int. J. Proj. Manag. 2002 , 20 , 413–423. [ Google Scholar ] [ CrossRef ]
  • Dalui, P.; Elghaish, F.; Brooks, T.; McIlwaine, S. Integrated Project Delivery with BIM: A Methodical Approach Within the UK Consulting Sector. J. Inf. Technol. Constr. 2021 , 26 , 922–935. [ Google Scholar ] [ CrossRef ]
  • Pishdad-Bozorgi, P. Case Studies on the Role of Integrated Project Delivery (IPD) Approach on the Establishment and Promotion of Trust. Int. J. Constr. Educ. Res. 2017 , 13 , 102–124. [ Google Scholar ] [ CrossRef ]
  • Singleton, M.S.; Hamzeh, F.R. Implementing integrated project delivery on department of the navy construction projects. Lean Constr. J. 2011 , 17–31. [ Google Scholar ]
  • Tran, D.Q.; Nguyen, L.D.; Faught, A. Examination of communication processes in design-build project delivery in building construction. Eng. Constr. Archit. Manag. 2017 , 24 , 1319–1336. [ Google Scholar ] [ CrossRef ]
  • Park, J.; Kwak, Y.H. Design-Bid-Build (DBB) vs. Design-Build (DB) in the U.S. public transportation projects: The choice and consequences. Int. J. Proj. Manag. 2017 , 35 , 280–295. [ Google Scholar ] [ CrossRef ]
  • Wiss, R.A.; Roberts, R.T.; Phraner, S.D. Beyond Design-Build-Operate-Maintain: New Partnership Approach Toward Fixed Guideway Transit Projects. Transp. Res. Rec. J. Transp. Res. Board 2000 , 1704 , 13–18. [ Google Scholar ] [ CrossRef ]
  • Xia, B.; Chan, A.P. Key competences of design-build clients in China. J. Facil. Manag. 2010 , 8 , 114–129. [ Google Scholar ] [ CrossRef ]
  • DeBernard, D.M. Beyond Collaboration—The Benefits of Integrated Project Delivery ; AIA Soloso Website: Washington, DC, USA, 2008. [ Google Scholar ]
  • Chen, Q.; Jin, Z.; Xia, B.; Wu, P.; Skitmore, M. Time and Cost Performance of Design–Build Projects. J. Constr. Eng. Manag. 2016 , 142 , 04015074. [ Google Scholar ] [ CrossRef ]
  • Xia, B.; Chan, P. Review of the design-build market in the People’s Republic of China. J. Constr. Procure. 2008 , 14 , 108–117. [ Google Scholar ]
  • Mcwhirt, D.; Ahn, J.; Shane, J.S.; Strong, K.C. Military construction projects: Comparison of project delivery methods. J. Facil. Manag. 2011 , 9 , 157–169. [ Google Scholar ] [ CrossRef ]
  • Minchin, R.E.; Li, X.; Issa, R.R.; Vargas, G.G. Comparison of Cost and Time Performance of Design-Build and Design-Bid-Build Delivery Systems in Florida. J. Constr. Eng. Manag. 2013 , 139 , 04013007. [ Google Scholar ] [ CrossRef ]
  • Adamtey, S.; Onsarigo, L. Effective tools for projects delivered by progressive design-build method. In Proceedings of the CSCE Annual Conference 2019, Laval, QC, Canada, 12–15 June 2019; pp. 1–10. [ Google Scholar ]
  • Adamtey, S.A. A Case Study Performance Analysis of Design-Build and Integrated Project Delivery Methods. Int. J. Constr. Educ. Res. 2021 , 17 , 68–84. [ Google Scholar ] [ CrossRef ]
  • Gad, G.M.; Adamtey, S.A.; Gransberg, D.D. Trends in Quality Management Approaches to Design–Build Transportation Projects. Transp. Res. Rec. J. Transp. Res. Board 2015 , 2504 , 87–92. [ Google Scholar ] [ CrossRef ]
  • Sari, E.M.; Irawan, A.P.; Wibowo, M.A.; Siregar, J.P.; Praja, A.K.A. Project delivery systems: The partnering concept in integrated and non-integrated construction projects. Sustainability 2022 , 15 , 86. [ Google Scholar ] [ CrossRef ]
  • Chakra, H.A.; Ashi, A. Comparative analysis of design/build and design/bid/build project delivery systems in Lebanon. J. Ind. Eng. Int. 2019 , 15 , 147–152. [ Google Scholar ] [ CrossRef ]
  • Perkins, R.A. Sources of Changes in Design–Build Contracts for a Governmental Owner. J. Constr. Eng. Manag. 2009 , 135 , 588–593. [ Google Scholar ] [ CrossRef ]
  • Palaneeswaran, E.; Kumaraswamy, M.M. Contractor Selection for Design/Build Projects. J. Constr. Eng. Manag. 2000 , 126 , 331–339. [ Google Scholar ] [ CrossRef ]
  • Chan, A.P.C. Evaluation of enhanced design and build system a case study of a hospital project. Constr. Manag. Econ. 2000 , 18 , 863–871. [ Google Scholar ] [ CrossRef ]
  • Shrestha, P.P.; Davis, B.; Gad, G.M. Investigation of Legal Issues in Construction-Manager-at-Risk Projects: Case Study of Airport Projects. J. Leg. Aff. Dispute Resolut. Eng. Constr. 2020 , 12 , 04520022. [ Google Scholar ] [ CrossRef ]
  • Marston, S. CMAR Project Delivery Method Generates Team Orientated Project Management with Win/Win Mentality. In Proceedings of the Pipelines 2020, San Antonio, TX, USA, 9–12 August 2020; pp. 167–170. [ Google Scholar ] [ CrossRef ]
  • Francom, T.; El Asmar, M.; Ariaratnam, S.T. Longitudinal Study of Construction Manager at Risk for Pipeline Rehabilitation. J. Pipeline Syst. Eng. Pract. 2017 , 8 , 04017001. [ Google Scholar ] [ CrossRef ]
  • Peña-Mora, F.; Tamaki, T. Effect of Delivery Systems on Collaborative Negotiations for Large-Scale Infrastructure Projects. J. Manag. Eng. 2001 , 17 , 105–121. [ Google Scholar ] [ CrossRef ]
  • Mahdi, I.M.; Alreshaid, K. Decision support system for selecting the proper project delivery method using analytical hierarchy process (AHP). Int. J. Proj. Manag. 2005 , 23 , 564–572. [ Google Scholar ] [ CrossRef ]
  • Randall, T.; Pool, S.; Limke, J.; Bradney, A. CMaR Delivery of Critical Water and Wastewater Pipelines. In Proceedings of the Pipelines 2020, San Antonio, TX, USA, 9–12 August 2020 (conference held virtually); pp. 280–289. [ Google Scholar ] [ CrossRef ]
  • Perrenoud, A.; Reyes, M.; Ghosh, S.; Coetzee, M. Collaborative Risk Management of the Approval Process of Building Envelope Materials. In Proceedings of the AEI 2017, Oklahoma City, OK, USA, 11–13 April 2017; pp. 806–816. [ Google Scholar ] [ CrossRef ]
  • Parrott, B.C.; Bomba, M.B. Integrated Project Delivery and Building Information Modeling: A New Breed of Contract. 2010. Available online: https://content.aia.org/sites/default/files/2017-03/Integrated%20project%20delivery%20and%20BIM-%20A%20new%20breed%20of%20contract.pdf (accessed on 18 November 2023).
  • Cheng, R. IPD Case Studies. Report. March 2012. Available online: http://conservancy.umn.edu/handle/11299/201408 (accessed on 1 May 2024).
  • Lee, H.W.; Anderson, S.M.; Kim, Y.-W.; Ballard, G. Advancing Impact of Education, Training, and Professional Experience on Integrated Project Delivery. Pract. Period. Struct. Des. Constr. 2014 , 19 , 8–14. [ Google Scholar ] [ CrossRef ]
  • Hoseingholi, M.; Jalal, M.P. Identification and Analysis of Owner-Induced Problems in Design–Build Project Lifecycle. J. Leg. Aff. Dispute Resolut. Eng. Constr. 2017 , 9 , 04516013. [ Google Scholar ] [ CrossRef ]
  • Öztaş, A.; Ökmen, Ö. Risk analysis in fixed-price design–build construction projects. Build. Environ. 2004 , 39 , 229–237. [ Google Scholar ] [ CrossRef ]
  • Lee, D.-E.; Arditi, D. Total Quality Performance of Design/Build Firms Using Quality Function Deployment. J. Constr. Eng. Manag. 2006 , 132 , 49–57. [ Google Scholar ] [ CrossRef ]
  • Garner, B.; Richardson, K.; Castro-Lacouture, D. Design-Build Project Delivery in Military Construction: Approach to Best Value Procurement. J. Adv. Perform. Inf. Value 2008 , 1 , 35–50. [ Google Scholar ] [ CrossRef ]
  • Graham, P. Evaluation of Design-Build Practice in Colorado Project IR IM(CX) 025-3(113) ; Colorado Department of Transportation: Denver, CO, USA, 2001. [ Google Scholar ]
  • Parami Dewi, A.; Too, E.; Trigunarsyah, B. Implementing design build project delivery system in Indonesian road infrastructure projects. In Innovation and Sustainable Construction in Developing Countries (CIB W107 Conference 2011) ; Uwakweh, B.O., Ed.; Construction Publishing House/International Council for Research and Innovation in Building and C: Hanoi, Vietnam, 2011; pp. 108–117. [ Google Scholar ]
  • Arditi, D.; Lee, D.-E. Assessing the corporate service quality performance of design-build contractors using quality function deployment. Constr. Manag. Econ. 2003 , 21 , 175–185. [ Google Scholar ] [ CrossRef ]
  • Rao, T. Is Design-Build Right for Your Next WWW Project? In Proceedings of WEFTEC 2009, Water Environment Federation, January 2009; pp. 6444–6458. Available online: https://www.accesswater.org/publications/proceedings/-297075/is-design-build-right-for-your-next-www-project- (accessed on 3 April 2024).
  • Touran, A.; Molenaar, K.R.; Gransberg, D.D.; Ghavamifar, K. Decision Support System for Selection of Project Delivery Method in Transit. Transp. Res. Rec. 2009 , 2111 , 148–157. [ Google Scholar ] [ CrossRef ]
  • Culp, G. Alternative Project Delivery Methods for Water and Wastewater Projects: Do They Save Time and Money? Leadersh. Manag. Eng. 2011 , 11 , 231–240. [ Google Scholar ] [ CrossRef ]
  • Ling, F.Y.Y.; Poh, B.H.M. Problems encountered by owners of design–build projects in Singapore. Int. J. Proj. Manag. 2008 , 26 , 164–173. [ Google Scholar ] [ CrossRef ]
  • Pishdad-Bozorgi, P.; de la Garza, J.M. Comparative Analysis of Design-Bid-Build and Design-Build from the Standpoint of Claims. In Proceedings of the Construction Research Congress 2012, West Lafayette, IN, USA, 21–23 May 2012. [ Google Scholar ] [ CrossRef ]
  • Walewski, J.; Gibson, G.E., Jr.; Jasper, J. Project Delivery Methods and Contracting Approaches Available for Implementation by the Texas Department of Transportation. University of Texas at Austin. Center for Transportation Research. 2001. Available online: https://rosap.ntl.bts.gov/view/dot/14863 (accessed on 3 April 2024).
  • Alleman, D.; Antoine, A.; Gransberg, D.D.; Molenaar, K.R. Comparison of Qualifications-Based Selection and Best-Value Procurement for Construction Manager–General Contractor Highway Construction. 2017. Available online: https://journals.sagepub.com/doi/abs/10.3141/2630-08 (accessed on 2 April 2024).
  • Gransberg, N.J.; Gransberg, D.D. Public Project Construction Manager-at-Risk Contracts: Lessons Learned from a Comparison of Commercial and Infrastructure Projects. J. Leg. Aff. Dispute Resolut. Eng. Constr. 2020 , 12 , 04519039. [ Google Scholar ] [ CrossRef ]
  • Anderson, S.D.; Damnjanovic, I. Selection and Evaluation of Alternative Contracting Methods to Accelerate Project Completion ; The National Academies Press: Washington, DC, USA, 2008; Available online: http://elibrary.pcu.edu.ph:9000/digi/NA02/2008/23075.pdf (accessed on 26 April 2024).
  • Shrestha, P.P.; Batista, J.; Maharjan, R. Impediments in Using Design-Build or Construction Management-at-Risk Delivery Methods for Water and Wastewater Projects. In Proceedings of the Construction Research Congress 2016, San Juan, PR, USA, 31 May–2 June 2016; pp. 380–387. [ Google Scholar ] [ CrossRef ]
  • Chateau, L. Environmental acceptability of beneficial use of waste as construction material—State of knowledge, current practices and future developments in Europe and in France. J. Hazard. Mater. 2007 , 139 , 556–562. [ Google Scholar ] [ CrossRef ]
  • Lam, T.I.; Chan, H.W.E.; Chau, C.K.; Poon, C.S. An Overview of the Development of Green Specifications in the Construction Industry. In Proceedings of the International Conference on Urban Sustainability [ICONUS], 1 January 2008; pp. 295–301. Available online: https://research.polyu.edu.hk/en/publications/an-overview-of-the-development-of-green-specifications-in-the-con (accessed on 2 May 2024).
  • Tabish, S.Z.S.; Jha, K.N. Success Traits for a Construction Project. J. Constr. Eng. Manag. 2012 , 138 , 1131–1138. [ Google Scholar ] [ CrossRef ]
  • Niroumand, H.; Zain, M.; Jamil, M. A guideline for assessing of critical parameters on Earth architecture and Earth buildings as a sustainable architecture in various countries. Renew. Sustain. Energy Rev. 2013 , 28 , 130–165. [ Google Scholar ] [ CrossRef ]
  • Rogulj, K.; Jajac, N. Achieving a Construction Barrier–Free Environment: Decision Support to Policy Selection. J. Manag. Eng. 2018 , 34 , 04018020. [ Google Scholar ] [ CrossRef ]
  • Sackey, S.; Kim, B.-S. Environmental and Economic Performance of Asphalt Shingle and Clay Tile Roofing Sheets Using Life Cycle Assessment Approach and TOPSIS. J. Constr. Eng. Manag. 2018 , 144 , 04018104. [ Google Scholar ] [ CrossRef ]
  • Carretero-Ayuso, M.J.; García-Sanz-Calcedo, J.; Rodríguez-Jiménez, C.E. Characterization and Appraisal of Technical Specifications in Brick Façade Projects in Spain. J. Perform. Constr. Facil. 2018 , 32 , 04018012. [ Google Scholar ] [ CrossRef ]
  • Golabchi, A.; Guo, X.; Liu, M.; Han, S.; Lee, S.; AbouRizk, S. An integrated ergonomics framework for evaluation and design of construction operations. Autom. Constr. 2018 , 95 , 72–85. [ Google Scholar ] [ CrossRef ]
  • Jha, K.; Iyer, K. Commitment, coordination, competence and the iron triangle. Int. J. Proj. Manag. 2007 , 25 , 527–540. [ Google Scholar ] [ CrossRef ]
  • Tabassi, A.A.; Ramli, M.; Roufechaei, K.M.; Tabasi, A.A. Team development and performance in construction design teams: An assessment of a hierarchical model with mediating effect of compensation. Constr. Manag. Econ. 2014 , 32 , 932–949. [ Google Scholar ] [ CrossRef ]
  • Chen, Y.; Okudan, G.E.; Riley, D.R. Sustainable performance criteria for construction method selection in concrete buildings. Autom. Constr. 2010 , 19 , 235–244. [ Google Scholar ] [ CrossRef ]
  • Doloi, H.; Sawhney, A.; Iyer, K.; Rentala, S. Analysing factors affecting delays in Indian construction projects. Int. J. Proj. Manag. 2012 , 30 , 479–489. [ Google Scholar ] [ CrossRef ]
  • Kog, Y.C.; Loh, P.K. Critical Success Factors for Different Components of Construction Projects. J. Constr. Eng. Manag. 2012 , 138 , 520–528. [ Google Scholar ] [ CrossRef ]
  • Gunduz, M.; Almuajebh, M. Critical success factors for sustainable construction project management. Sustainability 2020 , 12 , 1990. [ Google Scholar ] [ CrossRef ]
  • Cao, D.; Li, H.; Wang, G.; Luo, X.; Tan, D. Relationship Network Structure and Organizational Competitiveness: Evidence from BIM Implementation Practices in the Construction Industry. J. Manag. Eng. 2018 , 34 , 04018005. [ Google Scholar ] [ CrossRef ]
  • Clevenger, C.M. Development of a Project Management Certification Plan for a DOT. J. Manag. Eng. 2018 , 34 , 06018002. [ Google Scholar ] [ CrossRef ]
  • Bygballe, L.E.; Swärd, A. Collaborative Project Delivery Models and the Role of Routines in Institutionalizing Partnering. Proj. Manag. J. 2019 , 50 , 161–176. [ Google Scholar ] [ CrossRef ]
  • Collins, W.; Parrish, K. The Need for Integrated Project Delivery in the Public Sector. In Proceedings of the Construction Research Congress 2014, Atlanta, GA, USA, 19–21 May 2014; pp. 719–728. [ Google Scholar ] [ CrossRef ]
  • Turk, Ž.; Klinc, R. Potentials of Blockchain Technology for Construction Management. Procedia Eng. 2017 , 196 , 638–645. [ Google Scholar ] [ CrossRef ]
  • Elghaish, F.; Abrishami, S.; Hosseini, M.R. Integrated project delivery with blockchain: An automated financial system. Autom. Constr. 2020 , 114 , 103182. [ Google Scholar ] [ CrossRef ]
  • Fish, A. Integrated Project Delivery: The Obstacles of Implementation. May 2011. Available online: http://hdl.handle.net/2097/8554 (accessed on 3 April 2024).
  • Pan, Y.; Zhang, L. Roles of artificial intelligence in construction engineering and management: A critical review and future trends. Autom. Constr. 2020 , 122 , 103517. [ Google Scholar ] [ CrossRef ]
  • Mellit, A.; Kalogirou, S.A. Artificial intelligence techniques for photovoltaic applications: A review. Prog. Energy Combust. Sci. 2008 , 34 , 574–632. [ Google Scholar ] [ CrossRef ]
  • Smith, C.J.; Wong, A.T.C. Advancements in Artificial Intelligence-Based Decision Support Systems for Improving Construction Project Sustainability: A Systematic Literature Review. Informatics 2022 , 9 , 43. [ Google Scholar ] [ CrossRef ]
  • Villa, F. Semantically driven meta-modelling: Automating model construction in an environmental decision support system for the assessment of ecosystem services flows. In Information Technologies in Environmental Engineering ; Athanasiadis, I.N., Rizzoli, A.E., Mitkas, P.A., Gómez, J.M., Eds.; Springer: Berlin, Heidelberg, 2009; pp. 23–36. [ Google Scholar ]
  • Minhas, M.R.; Potdar, V. Decision Support Systems in Construction: A Bibliometric Analysis. Buildings 2020 , 10 , 108. [ Google Scholar ] [ CrossRef ]


| Paper | Reference | Total Citations (TC) | TC per Year | Normalized TC |
| --- | --- | --- | --- | --- |
| Kent D.C., 2010, J Constr Eng Manag | (Kent and Becerik-Gerber, 2010) [ ] | 300 | 21.43 | 7.67 |
| Ugwu O.O., 2007, Build Environ | (Ugwu and Haupt, 2007) [ ] | 269 | 15.82 | 7.69 |
| Kines P., 2010, J Saf Res | (Kines et al., 2010) [ ] | 238 | 17.00 | 6.08 |
| Asmar M., 2013, J Constr Eng Manag | (Asmar et al., 2013) [ ] | 226 | 20.55 | 5.01 |
| Ballard G., 2008, Lean Constr J | (Ballard, 2008) [ ] | 221 | 13.81 | 6.85 |
| Hale D.R., 2009, J Constr Eng Manag | (Hale et al., 2009) [ ] | 211 | 14.07 | 6.95 |
| Bynum P., 2013, J Constr Eng Manag | (Bynum et al., 2013) [ ] | 185 | 16.82 | 4.11 |
| Ibbs C.W., 2003, J Constr Eng Manag | (Ibbs et al., 2003) [ ] | 183 | 8.71 | 8.58 |
| Choudhry R.M., 2009, J Constr Eng Manag | (Choudhry et al., 2009) [ ] | 182 | 12.13 | 6.00 |
| Mollaoglu-Korkmaz S., 2013, J Manag Eng | (Mollaoglu-Korkmaz et al., 2013) [ ] | 152 | 13.82 | 3.37 |
| El Wardani M.A., 2006, J Constr Eng Manag | (El Wardani et al., 2006) [ ] | 144 | 8.00 | 4.65 |
| Ghassemi R., 2011, Lean Constr J | (Ghassemi and Becerik-Gerber, 2011) [ ] | 143 | 11.00 | 5.54 |
| Liu J., 2016, J Constr Eng Manag | (Liu et al., 2016) [ ] | 140 | 17.50 | 5.12 |
| El-Sayegh S.M., 2015, J Manag Eng | (El-Sayegh and Mansour, 2015) [ ] | 135 | 15.00 | 6.59 |
| Fang C., 2012, Reliab Eng Syst Saf | (Fang et al., 2012) [ ] | 131 | 10.92 | 4.05 |
| Franz B., 2017, J Constr Eng Manag | (Franz et al., 2017) [ ] | 126 | 18.00 | 5.56 |
| Kim H., 2016, J Comput Civ Eng | (Kim et al., 2016) [ ] | 125 | 15.63 | 4.57 |
| Ding L.Y., 2013, Autom Constr | (Ding and Zhou, 2013) [ ] | 118 | 10.73 | 2.62 |
| Wanberg J., 2013, J Constr Eng Manag | (Wanberg et al., 2013) [ ] | 116 | 10.55 | 2.57 |
| Shrestha P.P., 2012, J Constr Eng Manag | (Shrestha et al., 2012) [ ] | 112 | 9.33 | 3.47 |
| Torabi S.A., 2009, Int J Prod Res | (Torabi and Hassini, 2009) [ ] | 105 | 7.00 | 3.46 |
| Baradan S., 2006, J Constr Eng Manag | (Baradan and Usmen, 2006) [ ] | 99 | 5.50 | 3.20 |
| Levitt R.E., 2007, J Constr Eng Manag | (Levitt, 2007) [ ] | 97 | 5.71 | 2.77 |
| Sullivan J., 2017, J Constr Eng Manag | (Sullivan et al., 2017) [ ] | 93 | 13.29 | 4.11 |
| Araya F., 2021, Saf Sci | (Araya, 2021) [ ] | 92 | 30.67 | 9.50 |
| Country | Frequency |
| --- | --- |
| USA | 584 |
| CHINA | 167 |
| UK | 101 |
| AUSTRALIA | 71 |
| SOUTH KOREA | 56 |
| CANADA | 51 |
| IRAN | 39 |
| MALAYSIA | 39 |
| INDIA | 30 |
| SOUTH AFRICA | 22 |
| SPAIN | 22 |
| FINLAND | 18 |
| FRANCE | 17 |
| DENMARK | 16 |
| EGYPT | 16 |
| SWEDEN | 16 |
| INDONESIA | 15 |
| NETHERLANDS | 14 |
| NEW ZEALAND | 14 |
| BRAZIL | 13 |
| GERMANY | 13 |
| NIGERIA | 13 |
| UNITED ARAB EMIRATES | 13 |
| JORDAN | 12 |
| SAUDI ARABIA | 12 |
| Country | TC | Average Article Citations |
| --- | --- | --- |
| USA | 4933 | 23.70 |
| CHINA | 1106 | 18.10 |
| UNITED KINGDOM | 763 | 19.10 |
| HONG KONG | 703 | 37.00 |
| AUSTRALIA | 494 | 21.50 |
| SOUTH KOREA | 312 | 16.00 |
| IRAN | 198 | 52.00 |
| SPAIN | 191 | 15.20 |
| SWEDEN | 188 | 21.20 |
| PAKISTAN | 182 | 20.90 |
| FRANCE | 164 | 182.00 |
| UNITED ARAB EMIRATES | 163 | 32.80 |
| MALAYSIA | 154 | 32.60 |
| INDIA | 145 | 15.40 |
| SINGAPORE | 130 | 13.20 |
| CANADA | 107 | 43.30 |
| ITALY | 92 | 7.60 |
| LEBANON | 92 | 18.40 |
| NETHERLANDS | 91 | 18.40 |
| NORWAY | 74 | 18.20 |
IPD Advantages

| Advantages | % of Publications | Publication List |
| --- | --- | --- |
| Collaborative atmosphere and fairness | 79 | B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V |
| Early involvement of stakeholders | 63 | B, C, D, E, F, G, H, I, J, L, M, N, O, U, V, W |
| Promoting trust | 25 | R, S, U, V, W, X |
| Reduce schedule time | 42 | C, D, E, F, G, H, I, J, S, T |
| Reduce waste | 42 | C, D, E, F, G, H, I, J, S, T |
| Shared cost, risk reward, and responsibilities | 75 | C, D, E, F, G, H, I, J, S, T, U, V, W, X |
| Multi-party agreement and noncompetitive bidding | 54 | C, D, E, F, G, H, I, J, K, N, Q, T, V |
| Integrated decision-making for designs and shared design responsibilities | 38 | C, D, E, H, I, J, L, P, T |
| Open communication and time management | 38 | D, E, F, O, R, S, T, U, V |
| Reduce project duration and liability by fast-tracking design and construction | 25 | F, G, L, O, S, V |
| Shared manpower and changes in SOW, equipment rentage, and change orders | 17 | A, F, G, Q |
| Information sharing and technological impact | 38 | A, D, G, K, L, M, P, R, V |
| Fast problem resolution through an integrated approach | 21 | B, C, D, E, S |
| Lowest cost delivery and project cost | 33 | A, C, F, G, L, P, Q, S, T, U |
| Improved efficiency and reduced errors | 29 | B, C, F, L, Q, S, T |
| Combined risk pool estimated maximum price (allowable cost) | 17 | A, L, P, Q |
| Cooperation innovation and coordination | 46 | C, E, F, L, P, Q, R, S, T, U, V |
| Combined labor material cost estimation, budgeting, and profits | 25 | A, D, P, S, T, U, V |
| Strengthened relationship and self-governance | 17 | C, D, F |
| Fewer change orders, schedules, and requests for information | 21 | L, O, Q, T, V |
Ordered list of publication A = [ ] B = [ ] C = [ ] D = [ ] E = [ ] F = [ ] G = [ ] H = [ ] I = [ ] J = [ ] K = [ ] L = [ ] M = [ ] N = [ ] O = [ ] P = [ ] Q = [ ] R = [ ] S = [ ] T = [ ] U = [ ] V = [ ] W = [ ] X = [ ]
DB Advantages

| Advantages | % of Publications | Publication List |
| --- | --- | --- |
| Single point of accountability for the design and construction | 39 | C, D, I, J, M, O, Q, R, T |
| Produces time-saving schedule | 52 | C, D, H, J, K, L, M, O, R, S, T, V |
| Cost-effective projects | 39 | C, K, L, M, N, O, P, Q, S, V |
| Design-build functions as a single entity | 8 | D, F |
| Enhances quality and mitigates design errors | 21 | F, J, S, V, W |
| Facilitates teamwork between owner and design builder | 30 | J, N, P, S, U, V, W |
| Insight into constructability of the design-build contractor (early involvement of contractor) | 13 | H, I, T |
| Enhances fast tracking | 4 | R |
| Good coordination and decision-making | 27 | C, D, E, M, O, Q |
| Clients’/owner credibility | 13 | A, C, G |
| Mitigates disputes | 21 | B, H, I, J, Q |
Ordered list of publication A = [ ] B = [ ] C = [ ] D = [ ] E = [ ] F = [ ] G = [ ] H = [ ] I = [ ] J = [ ] K = [ ] L = [ ] M = [ ] N = [ ] O = [ ] P = [ ] Q = [ ] R = [ ] S = [ ] T = [ ] U = [ ] V = [ ] W = [ ]
CMAR Advantages

| Advantages | % of Publications | Publication List |
| --- | --- | --- |
| Early stakeholder involvement | 31 | H, I, L, M, O |
| Fast-tracking cost savings and delivery within budget | 50 | A, B, C, D, F, I, M, O |
| Reduce project duration by fast-tracking design and construction | 6 | C |
| Clients have control over the design details and early knowledge of costs | 50 | B, C, D, H, I, K, M, P |
| Mitigates against change orders | 50 | A, C, E, H, I, K, M, P |
| Provides a GMP by considering the risk of price | 31 | A, B, C, M, O |
| Reduces design cost and redesigning cost | 25 | C, D, E, H |
| Facilitates schedule management | 75 | B, C, D, E, F, G, H, I, J, K, M, N |
| Facilitates cost control and transparency | 69 | C, D, E, F, G, H, I, J, K, M, N |
| Single point of responsibility for construction and joint team orientation for accountability | 44 | A, B, E, F, I, M, N |
| Facilitates collaboration | 25 | E, F, I, J |
Ordered list of publication A = [ ] B = [ ] C = [ ] D = [ ] E = [ ] F = [ ] G = [ ] H = [ ] I = [ ] J = [ ] K = [ ] L = [ ] M = [ ] N = [ ] O = [ ] P = [ ]
IPD Disadvantages

| Disadvantages | % of Publications | Publication List |
| --- | --- | --- |
| Impossibility of being sued internally over disputes and mistrust, alongside complexities in compensation and resource distribution | 42 | C, E, F, I, L |
| Skepticism of the added value of IPD, and owners’ inability to tap into financial reserves from shared risk funds | 50 | E, F, G, J, K, L |
| Difficulty in deciding scope | 17 | A, H |
| Difficulty in deciding target cost/budgeting | 25 | A, D, H |
| Adversarial team relationships and legality issues | 50 | B, C, D, F, K, L |
| Immature insurance policy for IPD and difficulty in producing a coordinating document | 25 | A, J, K |
| Fabricated drawings in place of engineering drawings because of too-early interactions | 8 | F |
| High initial cost of investment in setting up an IPD team and difficulty in replacing a member of the IPD team | 16 | J, L |
| Inexperience in initiating/developing an IPD team and knowledge level | 16 | K, L |
| Low adoption of IPD due to cultural, financial, and technological barriers | 33 | E, F, K, L |
| High degree of risk amongst teams coming together for IPD, and owners responsible for claims, damages, and expenses (liabilities) | 25 | D, F, L |
| Issues with poor collaboration | 8 | H |
| Non-adaptability to the IPD environment | 42 | E, G, J, K, L |
Ordered list of publication A = [ ] B = [ ] C = [ ] D = [ ] E = [ ] F = [ ] G = [ ] H = [ ] I = [ ] J = [ ] K = [ ] L = [ ]
DB Disadvantages

| Disadvantages | % of Publications | Publication List |
| --- | --- | --- |
| Non-competitive selection of team not dependent on best designs of professionals and general contractors | 35 | B, C, D, E, G, I, J, K, L, M, O, P, Q, R, S |
| Deficient checks, balances, and insurance among the designer, general contractor, and owner | 30 | A, B, C, D, E, F, G, H, I, J, L, M, N, U, V |
| Unfair allocation of risk and high startup cost | 40 | C, R, S |
| Architect/Engineer (A/E) not related to clients/owners, with less control or influence over the final design and project requirements | 60 | C, D, E, F, G, H, I, J, S |
| Owner cannot guarantee the quality of the finished project | 35 | C, D, E, F, G, H, I, J, S |
| Difficulty in defining SOW, and alterations in the designs after the contract and during construction with decrease in time | 35 | C, D, E, F, G, H, I, J, K, M, N |
| Difficulty in providing a track record for design and construction | 40 | C, D, E, F, G, H, I, J, K, N |
| Discrepancy in quality control and testing intensive from the owner’s viewpoint | 25 | C, D, E, H, I, J, K, N |
| Delay in design changes, inflexibility, and the absence of a detailed design | 35 | D, E, F, O, R, S |
| Owner/client needs external support to develop the SOW/preliminary design of the project | 10 | E, F, L, O, S |
| Increased labour costs and tender prices | 5 | A, F, G, Q |
| Guaranteed maximum price is established with incomplete designs and work requirements | 25 | A, D, G, K, L, M, P, R |
| Responsibility of contractor for omissions and changes in design | 20 | A, B, C, D, S |
Ordered list of publication A = [ ] B = [ ] C = [ ] D = [ ] E = [ ] F = [ ] G = [ ] H = [ ] I = [ ] J = [ ] K = [ ] L = [ ] M = [ ] N = [ ] O = [ ] P = [ ] Q = [ ] R = [ ] S = [ ]
CMAR Disadvantages

| Disadvantages | % of Publications | Publication List |
| --- | --- | --- |
| Unclear definition and relationship of roles and responsibilities of CM and design professionals | 78 | A, B, C, D, G, H, I |
| Difficult to enforce GMP, SOW, and construction based on incomplete documents | 67 | A, D, E, G, H, I |
| Not suitable for small projects, or holding trade contractors over GMP tradeoffs and prices | 56 | B, C, G, H, I |
| Improper education on CMAR methodology, policies, and regulations | 56 | E, F, G, H, I |
| Knowledge, conflicts, and communication issues between the designer and the CM | 56 | B, E, F, G, H |
| Shift of responsibilities (including money) from owners/clients to CM | 44 | A, B, E, I |
| Additional cost due to design and construction and design defects | 56 | A, C, D, G, H |
| Inability of CMAR to self-perform during preconstruction | 11 | C |
| Disputes/issues concerning construction quality and the completeness of the design | 22 | A, D |
| No information exchange/alignment between the A/E and the CMAR | 11 | A |
Ordered list of publication A = [ ] B = [ ] C = [ ] D = [ ] E = [ ] F = [ ] G = [ ] H = [ ] I = [ ]
Critical Success Factors for Sustainable Construction

| Success Factors | % of Publications | Publication List |
| --- | --- | --- |
| Collaborative atmosphere | 47 | A, C, G, H, K, N, O |
| Early stakeholder involvement | 26 | I, J, N |
| Reduce design errors | 13 | N, O |
| Cost savings and delivery within budget/client representative | 33 | A, B, C, E, F |
| Influence of client | 13 | B, J |
Ordered list of publication A = [ ] B = [ ] C = [ ] D = [ ] E = [ ] F = [ ] G = [ ] H = [ ] I = [ ] J = [ ] K = [ ] L = [ ] M = [ ] N = [ ] O = [ ] P = [ ] Q = [ ]

Babalola, O.G.; Alam Bhuiyan, M.M.; Hammad, A. Literature Review on Collaborative Project Delivery for Sustainable Construction: Bibliometric Analysis. Sustainability 2024, 16, 7707. https://doi.org/10.3390/su16177707



Simulating learning methodology (SLeM): an approach to machine learning automation


Zongben Xu, Jun Shu, Deyu Meng, Simulating learning methodology (SLeM): an approach to machine learning automation, National Science Review , Volume 11, Issue 8, August 2024, nwae277, https://doi.org/10.1093/nsr/nwae277


Machine learning (ML) is a fundamental technology of artificial intelligence (AI) that focuses on searching for a possibly existing mapping $f:\mathcal{X}\rightarrow\mathcal{Y}$ to fit a given dataset $\mathcal{D}=\{(x_i,y_i)\}_{i=1}^{N}$, where each $(x,y)\in\mathcal{X}\times\mathcal{Y}\subset\mathbb{R}^{d}\times\mathbb{R}$. The traditional learning paradigm of ML research is to find a mapping $f^{*}$ from a predefined hypothesis space $\mathcal{F}=\{f_{\theta}:\mathcal{X}\rightarrow\mathcal{Y},\ \theta\in\Theta\}$ by solving the following problem, based on a given optimality criterion (i.e. a loss function $\ell:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}^{+}$):

$$f^{*}=\mathop{\arg\min}_{f_{\theta}\in\mathcal{F}}\ \mathbb{E}_{\mathcal{D}}\big[\ell\big(f_{\theta}(x),y\big)\big]+\lambda R(\theta).\qquad(1)$$

Here $\mathbb{E}_{\mathcal{D}}$ denotes expectation with respect to $\mathcal{D}$, $R(\cdot)$ is a regularizer that controls the property of the solution, $\lambda\ge 0$ is its trade-off hyperparameter and $\Theta$ is the set of parameters $\theta\in\mathbb{R}^{p}$.
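To make the traditional paradigm concrete, here is a hedged toy sketch (not from the paper): a one-dimensional ridge-regression instance of minimizing the empirical loss plus a regularizer $\lambda R(\theta)$, with $R(\theta)=\theta^2$. The function name `fit_ridge_1d`, the synthetic data and all constants are illustrative assumptions.

```python
import random

# Toy instance of the traditional ML paradigm: fit f_theta(x) = theta * x
# to data D = {(x_i, y_i)} by minimizing the empirical squared loss plus
# lambda * R(theta), with R(theta) = theta^2 (ridge regularizer).

def fit_ridge_1d(data, lam, steps=2000, lr=0.01):
    """Gradient descent on E_D[(theta*x - y)^2] + lam * theta^2."""
    theta = 0.0
    n = len(data)
    for _ in range(steps):
        grad = sum(2 * (theta * x - y) * x for x, y in data) / n
        grad += 2 * lam * theta            # gradient of the regularizer
        theta -= lr * grad
    return theta

random.seed(0)
# Data generated from y = 2x plus small Gaussian noise.
data = [(x, 2 * x + random.gauss(0, 0.1)) for x in [i / 10 for i in range(1, 11)]]

theta_unreg = fit_ridge_1d(data, lam=0.0)   # close to the true slope 2
theta_reg = fit_ridge_1d(data, lam=1.0)     # shrunk towards zero
```

Note how the preset loss, hypothesis space, regularizer and optimizer are all fixed before training, which is exactly the set of prerequisites the text questions.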

Under such a learning paradigm, ML techniques like deep learning have revolutionized various fields of AI, e.g. computer vision, natural language processing and speech recognition, by effectively addressing complex problems that were once considered intractable. However, the effectiveness of ML relies heavily on several prerequisites imposed on ML’s fundamental components before the above formulation is solved. Some examples are as follows.

The independence hypothesis on the loss function. The loss function $\ell$ is preset before implementation, independent of the data distribution and the application problem.

The large capacity hypothesis on the hypothesis space. The hypothesis space $\mathcal{F}$ should have a capacity large enough to contain the optimal solution to be found. It is preset independently of the application problem.

The completeness hypothesis on training data. Samples $(x,y)$ in the training dataset should be well labeled, low in noise, class balanced and sufficient in number.

The prior determination hypothesis on the regularizer. The regularizer $R$ is fixed and preset from a prior on the hypothesis space $\mathcal{F}$, while only the hyperparameter $\lambda$ is adjusted.

The Euclidean space hypothesis on the analysis tool. The performance of ML can be analyzed in Euclidean space, meaning that the optimization algorithm (i.e. the $\arg\min$) for solving the parameters $\theta$ can always be naturally embedded in $\mathbb{R}^{p}$ with the Euclidean norm.

All of these prerequisites are standard settings in ML research. They can be seen both as prompts for the rapid development of ML and as restraints on its progress. To improve the performance of existing AI technologies, it is necessary to break through these prior hypotheses of ML. However, these components can be set optimally if and only if the optimal solution to the problem is known in advance, which leads to a ‘chicken or egg’ dilemma. It is therefore fundamental to establish best-fitting strategies for setting up ML in applications. Recently, there have been a series of strategies towards breaking through these hypotheses of ML with a best-fitting theory, e.g. model-driven deep learning for the large capacity/regularizer hypotheses, the noise modeling principle for the independence hypothesis, self-paced learning for the completeness hypothesis and Banach space geometry for the Euclidean hypothesis (see [1] and the references therein).

Though these strategies have been demonstrated to be effective and powerful, they still rely heavily on manual presetting rather than automatic design purely from data. Specifically, at the data level, we still rely on human effort to collect, select and annotate data: humans must determine which data should be used for training and testing. At the model and algorithm level, we have to manually construct the fundamental structure of learning models (e.g. deep neural networks), predefine the basic forms of loss functions, and determine the types of optimization algorithms and their hyperparameters. Moreover, at the task and environment level, current techniques are good at solving single tasks in a closed environment, but are limited in handling complex and varying multi-tasks in the more realistic open and evolutionary environment. In a nutshell, the current learning paradigm, which relies on extensive manual intervention in ML’s components, struggles to handle complex data and diverse tasks in the real world, resulting in degraded and unsatisfactory learning capability of current ML techniques.

A natural approach to addressing the aforementioned challenges is to reduce manual intervention in the ML process via learning strategies aimed at the automation of ML. In other words, we hope to design ML’s fundamental components so as to enhance the adaptive learning capability of ML in an open and evolutionary environment with diverse tasks, thereby achieving the so-called machine learning automation (Auto$^{6}$ML) [1]. We can summarize Auto$^{6}$ML as the following six automation goals.

Data and sample level: automatically generate data and select samples.

Model and algorithm level: automatically construct models/losses and design algorithms.

Task and environment level: automatically transfer between varying tasks and adapt to dynamic environments.

Achieving Auto$^{6}$ML can be understood as the automated regulation and design of ML’s fundamental components such as data, models, losses and algorithms, which intrinsically calls for the determination of a ‘learning methodology’ mapping. In the following, we propose a ‘simulating learning methodology’ (SLeM) approach for learning methodology determination in general and for Auto$^{6}$ML in particular. We report the SLeM framework, approaches, algorithms and applications in Fig. 1.

Figure 1. Illustration of the SLeM framework, theories, algorithms and applications for machine learning automation.

In this section, we propose a SLeM framework by formalizing the learning task, learning method and learning methodology, and then we present three possible computations to implement SLeM.

Learning task

Machine learning summarizes the observable laws of the real world. From the viewpoint of mathematics and statistics, a learning task can thus be defined as the work of inferring the underlying law (i.e. a probability density function) from observed data. In some sense, it is equivalent to a statistical inference task, and its specific forms include classification, regression, clustering, dimensionality reduction, etc. A learning task can be represented in many different ways. (i) A task can be described by prompts via natural language instructions/demonstrations [2], i.e. $T=(t_1,\dots,t_n)$, where $t_i$ is a task demonstration. The currently popular large language model (LLM) solves the problem via text-prompt interaction with the model. Moreover, a task can be decomposed into a series of sub-tasks, i.e. $T=t_1\circ t_{2|1}\circ t_{3|2,1}\circ\cdots$. Such a hierarchical prompt representation of a learning task can help an LLM solve a complicated reasoning task [3]. (ii) A task can be characterized by small-size high-quality data, called meta-data [4], denoted $D^{(q)}=\{(x_i^{(q)},y_i^{(q)})\}_{i=1}^{m}$, which is popularly used in meta learning. (iii) A task can also be defined by a set of logic rules/knowledge, called meta-knowledge [5], which can likewise be used to quantify the task representation. More forms of task representation are still required, and research on the precise mathematical formulation of a learning task is ongoing.

Learning method

We define a learning method as a specification of all four elements of ML in equation (1). More precisely, we define the learning space $\mathcal{K}=(\mathcal{D},\mathcal{F},\mathcal{L},\mathcal{A})$, where $\mathcal{D},\mathcal{F},\mathcal{L},\mathcal{A}$ denote the data (distribution functions), model (hypothesis), loss (loss functions) and algorithm spaces, respectively, and we define a learning method as an element of $\mathcal{K}$ when a learning task is given, with the elements of $\mathcal{D},\mathcal{F},\mathcal{L},\mathcal{A}$ representing a proper data scheme, a learner’s architecture, a specific loss function and an optimization algorithm, respectively. Determining the learning method can be considered as designing ML’s components for the task, which can potentially alleviate the above ML prerequisites. To make the computation tractable, we suppose that $\mathcal{K}$ is separable, that is, each element of the learning space $\mathcal{K}$ can be expanded in a countably infinite number of base functions; $\mathcal{K}$ can then be represented by the product of four infinite sequence spaces $\Psi=(\Psi_{\mathcal{D}},\Psi_{\mathcal{F}},\Psi_{\mathcal{L}},\Psi_{\mathcal{A}})$. From this perspective, a learning method corresponds to a hyperparameter assignment of $\mathcal{K}$. In other words, an effective hyperparameter configuration involved in the ML process can be interpreted as a proper ‘learning method’ imposed on a learning task [6]. In practice, we employ finite hyperparameter assignment sequences to approximate $\Psi$.
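As an illustration of the idea that a learning method is a finite hyperparameter assignment $\psi=(\psi_{\mathcal{D}},\psi_{\mathcal{F}},\psi_{\mathcal{L}},\psi_{\mathcal{A}})$ over the four component spaces, here is a hedged sketch; the field names and the trivial task-size rule are invented for illustration and are not part of the paper.

```python
from dataclasses import dataclass

# Hypothetical sketch: a "learning method" as a finite hyperparameter
# assignment psi = (psi_D, psi_F, psi_L, psi_A) over the four component
# spaces of the learning space K. Field names are illustrative only.

@dataclass(frozen=True)
class LearningMethod:
    data_scheme: dict      # psi_D: e.g. sampling / augmentation choices
    architecture: dict     # psi_F: e.g. depth, width of the learner
    loss: dict             # psi_L: e.g. loss family and its parameters
    algorithm: dict        # psi_A: e.g. optimizer type, learning rate

# A learning methodology LM: T -> Psi would map a task description to
# such an assignment; here, a trivial rule keyed on task size.
def toy_methodology(task_size: int) -> LearningMethod:
    small = task_size < 1000
    return LearningMethod(
        data_scheme={"augment": small},
        architecture={"depth": 2 if small else 8},
        loss={"family": "huber" if small else "squared"},
        algorithm={"optimizer": "sgd", "lr": 0.1 if small else 0.01},
    )

method = toy_methodology(500)   # a concrete learning method for a small task
```

The point of the sketch is only the typing: a method is one point in $\Psi$, and a methodology is a function producing such points from task descriptions.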

Learning methodology

The learning methodology is a mapping from the task space $\mathcal{T}$ to the learning space $\mathcal{K}$ or $\Psi$, denoted $\mathcal{LM}:\mathcal{T}\rightarrow(\mathcal{K}$ or $\Psi)$. A learning methodology can thus be understood as a hyperparameter assignment rule for the learning method. Determining the learning methodology is, however, an intrinsically infinite-dimensional ML problem.

SLeM aims to learn the learning methodology mapping $\mathcal{LM}$ or, in other words, to learn the hyperparameter assignment rule of ML. To this end, we can employ an explicit hyperparameter setting mapping $h:\mathcal{T}\rightarrow\Psi$, conditioned on learning tasks, that maps from the learning task space $\mathcal{T}$ to the hyperparameter space $\Psi$, covering the whole learning process to simulate the ‘learning methodology’. Formally, we propose solving the following formulation to obtain the ‘learning methodology’ mapping $h$ shared among various learning tasks:

$$h^{*}=\mathop{\arg\min}_{h\in\mathcal{H}}\ \mathbb{E}_{(T,\psi)\sim\mathcal{S}}\big[\boldsymbol{L}\big(h(T),\psi\big)\big].\qquad(2)$$

Here $\boldsymbol{L}$ is a metric evaluating the learning method $\psi=(\psi_{\mathcal{D}},\psi_{\mathcal{F}},\psi_{\mathcal{L}},\psi_{\mathcal{A}})\in\Psi$ for learning task $T\in\mathcal{T}$, $\mathcal{S}$ is the joint probability distribution over $\mathcal{T}\times\Psi$ and $\mathcal{H}$ is the hypothesis space of $h$.

The obtained learning methodology mapping promises to help ML models finely adapt to varying tasks from dynamic environments with fewer human interventions, thereby achieving Auto$^{6}$ML. Note that the formulation in equation (2) is computationally intractable; a natural way to solve it is to collect observations $\{(T_i,\psi_i)\}_{i=1}^{t}$ from $\mathcal{S}$. We propose three typical realization approaches for SLeM according to different task representation forms, which have been verified to be effective for achieving Auto$^{6}$ML in practice.

Prompt-based SLeM

Suppose that we have access to observations $S=\{(T_i,\psi_i)\}_{i=1}^{M}$, given as task prompts and their corresponding learning methods; then we can rewrite equation (2) as

$$h^{*}=\mathop{\arg\min}_{h\in\mathcal{H}}\ \frac{1}{M}\sum_{i=1}^{M}\boldsymbol{L}\big(h(T_i),\psi_i\big).\qquad(3)$$

This approach is closely related to recent LLM techniques [2]. When given a task prompt, an LLM directly predicts the solution, whereas SLeM first predicts the learning method and then produces the solution based on that learning method. This understanding potentially reveals insight into the task generalization ability of LLM techniques. However, such a ‘brute-force’ learning paradigm is cumbersome and labor intensive; how to develop lightweight implementations of this formulation is left for future study.

Meta-data-based SLeM

Suppose that we have enough meta-data $D_i^{(q)}$ to properly evaluate learning methods adapted to learning task $T_i$; then we can rewrite equation (2) as

$$h^{*}=\mathop{\arg\min}_{h\in\mathcal{H}}\ \sum_{i=1}^{M}\ell^{meta}\big(f_i^{*}(h),D_i^{(q)}\big),\quad\text{s.t.}\ \ f_i^{*}(h)=\mathop{\arg\min}_{f\in\mathcal{F}}\ \ell^{task}\big(f,D_i^{(s)}\big),\qquad(4)$$

where $\ell^{meta}$ and $\ell^{task}$ are the meta and task losses, respectively, $\ell(f,D)=\frac{1}{|D|}\sum_{i=1}^{|D|}\ell(f(x_i),y_i)$ and $f_i^{*}(h)$ is the optimal learner for task $T_i$ given the hyperparameter configuration predicted by $h(T_i)$. To better distinguish $f$ and $h$, we usually call $h$ a meta-learner. Here $D_i^{(s)}$ is the training set for task $T_i$, and we drop its explicit dependence on $h(T_i)$. Formulation (4) can very easily be integrated into the traditional ML framework to provide a fresh understanding and extension of the original framework. In the next section, we further show that such a meta-data-based SLeM formulation can greatly enhance the adaptive learning capability of existing ML methods. We have provided a statistical learning guarantee for the task transfer generalization ability of the learning methodology so obtained in [6], which makes Auto$^{6}$ML directly tractable and more solid.
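The bilevel structure of the meta-data-based formulation can be sketched in miniature: an inner problem fits a learner per task, and an outer problem picks the meta-learner $h$ (here a single scalar rule mapping a task's noise level to a ridge weight) by its meta-loss on held-out meta-data. All task constructions, names and the grid search are illustrative assumptions, not the paper's algorithm.

```python
import random

# Hedged sketch of the meta-data-based SLeM formulation: the meta-learner
# h maps a task feature (its label-noise level) to a hyperparameter (a
# ridge weight lambda); h is chosen by the meta-loss of the resulting
# task learners on clean meta-data.

def ridge_1d(data, lam):
    # Closed-form minimizer of sum (theta*x - y)^2 + lam * theta^2.
    sxy = sum(x * y for x, y in data)
    sxx = sum(x * x for x, _ in data)
    return sxy / (sxx + lam)

def make_task(noise, rng, n=30):
    xs = [rng.uniform(0, 1) for _ in range(n)]
    data = [(x, 1.5 * x + rng.gauss(0, noise)) for x in xs]  # noisy training set
    meta = [(x, 1.5 * x) for x in (0.25, 0.5, 0.75)]         # clean meta-data
    return noise, data, meta

def meta_loss(h_slope, tasks):
    total = 0.0
    for noise, data, meta in tasks:
        lam = h_slope * noise              # h(T): lambda from the task feature
        theta = ridge_1d(data, lam)        # inner problem: the task learner
        total += sum((theta * x - y) ** 2 for x, y in meta)  # outer meta-loss
    return total

rng = random.Random(1)
tasks = [make_task(noise, rng) for noise in (0.05, 0.2, 0.5, 1.0)]
# Outer problem: pick h (a single slope) by grid search over [0, 10].
best = min((s / 2 for s in range(21)), key=lambda s: meta_loss(s, tasks))
```

Grid search stands in for the gradient-based meta-optimization used in practice; the structure (inner arg min nested in an outer meta-objective) is the point.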

Meta-knowledge-based SLeM

Collecting meta-data may be costly and difficult in some applications. Instead, we suggest utilizing meta-knowledge to evaluate the learning methodology [5]. Specifically, we propose the following meta-regularization (MR) approach for computing the learning methodology $h$:

$$h^{*}=\mathop{\arg\min}_{h\in\mathcal{H}}\ \lambda\sum_{i=1}^{M}\ell^{meta}\big(f_i^{*}(h),D_i^{(q)}\big)+\gamma\,\mathcal{MR}(h).\qquad(5)$$

Here $\mathcal{MR}(h)$ is a meta-regularizer that confines the meta-learner functions in terms of data augmentation consistency (DAC), regulated by meta-knowledge, and $\lambda,\gamma\ge 0$ are hyperparameters that trade off the meta-loss against the meta-regularizer. In [5], we theoretically showed that the DAC-MR approach can be treated as a proxy meta-objective for evaluating the meta-learner without high-quality meta-data (i.e. $\lambda=0$, $\gamma>0$). Moreover, the meta-loss combined with the DAC-MR approach is capable of achieving better meta-level generalization (i.e. $\lambda>0$, $\gamma>0$). We have also empirically demonstrated that the DAC-MR approach can learn well-performing meta-learners from training tasks with noisy, sparse or even unavailable meta-data, in line with the theoretical insights.
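The consistency idea behind such a meta-regularizer can be sketched as follows: without meta-data, a meta-learner $h$ can still be scored by how consistently it maps a task representation and its augmented variants to the same learning method. The penalty function, the scalar task feature and both candidate rules below are illustrative assumptions, not the paper's actual $\mathcal{MR}$ term.

```python
# Hedged sketch of a data-augmentation-consistency (DAC) style
# meta-regularizer: h is penalized for mapping slightly perturbed task
# representations to very different learning methods.

def dac_penalty(h, task_feats, augment, n_aug=4):
    """Average squared deviation of h across augmented task features."""
    total = 0.0
    for feat in task_feats:
        base = h(feat)
        for k in range(1, n_aug + 1):
            total += (h(augment(feat, k)) - base) ** 2
    return total / (len(task_feats) * n_aug)

# Task feature: a scalar noise level; augmentation: a tiny perturbation
# that should barely change the appropriate learning method.
augment = lambda feat, k: feat * (1 + 0.01 * k)

smooth_h = lambda feat: 2.0 * feat           # stable hyperparameter rule
jumpy_h = lambda feat: 100.0 * round(feat)   # rule sensitive to tiny changes

feats = [0.498, 0.6, 1.2]
p_smooth = dac_penalty(smooth_h, feats, augment)
p_jumpy = dac_penalty(jumpy_h, feats, augment)
```

A smooth assignment rule incurs a near-zero penalty, while a rule that jumps under tiny task perturbations is heavily penalized, which is the sense in which consistency can proxy for missing meta-data.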

The learning process of SLeM comprises a meta-training stage and a meta-test stage. In the meta-training stage, we extract the learning methodology from given meta-training tasks. This often still requires human intervention, such as collecting the meta-training tasks, designing the architecture of the learning methodology mapping and configuring the hyperparameters of the meta-training algorithms. We emphasize, however, that in the meta-test stage the meta-learned learning methodology is fixed and can be used to tune the hyperparameters of ML in a plug-and-play manner. In this sense, it is more accurate to say that the SLeM scheme alleviates the workload of tuning additional hyperparameters of machine learning at the meta-test stage, and thus potentially achieves Auto$^{6}$ML at the data and sample and the model and algorithm levels. It is essential to note that SLeM still requires a human to specify what problem or task they want ML to solve, and to set the input task information for the learning methodology mapping. When the task information specified by users reflects the characteristics of the varying tasks, the learning methodology can adaptively predict the machine learning method for those tasks. In this sense, SLeM is potentially effective for addressing varying tasks from dynamic environments; in other words, SLeM can achieve Auto$^{6}$ML at the task and environment level given proper task information specified by a human.

Based on the proposed SLeM framework, we can readily develop a series of SLeM algorithms for Auto$^{6}$ML, as presented in the following. It is worth emphasizing that the realizations of Auto$^{6}$ML in this paper are mainly based on the meta-data-based SLeM approach.

Data auto-selection

We explore the assignment of a weight $v_i\in[0,1]$ to each candidate datum $x_i$, representing the possibility of $x_i$ being selected. In contrast to conventional methods that use pre-defined weighting schemes to assign the values of $v_i$, we adopt an MLP net, called MW-Net [4], to learn an explicit weighting scheme. It has been substantiated that weighting functions automatically extracted from data comply with those proposed in hand-designed studies for class imbalance or noisy labels. We further reform MW-Net by introducing a task feature as supplementary input information, denoted CMW-Net [7], to address real-world heterogeneous data bias. CMW-Net has been substantiated to perform well in various complicated data-bias cases, and helps improve sample selection and label correction across a series of data bias issues, including datasets with class imbalance, different synthetic label-noise forms and real-life complicated biased datasets. In particular, the meta-learned weighting scheme can be used in a plug-and-play manner and directly deployed on unseen datasets, without needing to specifically tune extra hyperparameters of the CMW-Net algorithm.
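A drastically simplified stand-in for this sample-weighting idea can be sketched as follows: each sample's weight is produced by a tiny parametric function of its current loss, and that function's single parameter (a loss threshold `b`) is selected on a small clean meta-set. The sigmoid weighting rule, the grid search and all data here are illustrative assumptions; MW-Net itself is an MLP trained by gradient-based meta-optimization.

```python
import math

# Illustrative stand-in for an MW-Net-style scheme: sample weight
# v_i = sigmoid(-(loss_i - b)) downweights high-loss (likely mislabeled)
# samples; the threshold b is picked on clean meta-data.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_weighted(data, b, steps=500, lr=0.05):
    theta = 0.0
    for _ in range(steps):
        grad, wsum = 0.0, 1e-12
        for x, y in data:
            loss = (theta * x - y) ** 2
            v = sigmoid(-(loss - b))       # weight: small for high-loss samples
            grad += v * 2 * (theta * x - y) * x
            wsum += v
        theta -= lr * grad / wsum
    return theta

clean = [(x, 2 * x) for x in [i / 10 for i in range(1, 9)]]
noisy = [(0.5, -5.0), (0.9, -5.0)]        # corrupted labels
data = clean + noisy
meta = [(0.3, 0.6), (0.7, 1.4)]           # small clean meta-set

def meta_err(b):
    theta = train_weighted(data, b)
    return sum((theta * x - y) ** 2 for x, y in meta)

# Pick the threshold on meta-data (grid search as a stand-in for
# gradient-based meta-optimization).
best_b = min((0.5, 1.0, 2.0, 50.0), key=meta_err)
```

With a huge threshold every sample gets full weight and the corrupted labels drag the fit away from the truth; a moderate threshold suppresses them, which is the behavior the learned weighting net automates.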

Model auto-adjustment

Existing backbone networks have limited ability to adapt to different distribution shifts. A common remedy for noisy labels is to use a noise transition matrix to adjust the prediction of the deep classifier. Whereas previous methods are specifically designed based on prior knowledge of the transition matrix, we use a transformer network, called IDCS-NTM [ 8 ], to automatically predict the noise transition for adjusting the prediction of the deep classifier, adapting to various forms of label noise. The meta-learned noise transition network can also adjust the classifier's predictions on unseen real noisy datasets, achieving better performance than manually designed noise transitions.
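The standard noise-transition adjustment maps the classifier's clean-class posterior to a posterior over the observed noisy labels. The sketch below shows this adjustment for a 2-class case; the particular 2x2 matrix is an illustrative assumption, and in the paper's method the transition itself is predicted by a meta-learned network rather than fixed by hand.

```python
# Prediction adjustment with a noise transition matrix T, where
# T[i][j] = P(observed label j | true label i).  Training on noisy
# labels computes the loss against q = p @ T, the posterior over the
# observed (noisy) labels, instead of the clean posterior p.

def adjust_prediction(p, T):
    """Noisy-label posterior: q[j] = sum_i p[i] * T[i][j]."""
    k = len(T[0])
    return [sum(p[i] * T[i][j] for i in range(len(p))) for j in range(k)]

# 20% symmetric label flips between two classes:
T = [[0.8, 0.2],
     [0.2, 0.8]]
p = [0.9, 0.1]                 # classifier's clean-class posterior
q = adjust_prediction(p, T)    # posterior over observed noisy labels
```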

Loss auto-setting

For a regression task, the form of the loss function corresponds to the distribution of the underlying noise, and setting the loss function can be formulated as a weighted loss optimization problem. Conventional methods solve the weighted loss by assigning the unknown distribution subjectively or fixing the weight vector empirically, which makes it hard to address complex scenarios adaptively and effectively. We use a hyper-weight network (HWnet) [ 9 ] to predict the weight vector. HWnet automatically adjusts the weights for different learning tasks, so as to auto-set the loss function in compliance with the task at hand. The meta-learned HWnet can be explicitly plugged into unseen tasks to adapt to various complex noise scenarios and improve their performance. For classification tasks, we also explore a loss adjuster [ 10 ] that automatically sets a robust loss function for every instance across various noisy-label tasks. The meta-learned loss adjuster likewise transfers to unseen real-life noisy datasets, achieving better performance than hand-designed robust loss functions with carefully tuned hyperparameters.
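One concrete way to read "weighted loss optimization" is as a weighted mixture of base losses whose forms match different noise distributions (e.g. L2 for Gaussian noise, L1 for heavy-tailed noise). The sketch below uses hand-fixed weight vectors standing in for HWnet's predictions; it is an illustrative assumption, not the paper's exact formulation.

```python
# Loss auto-setting as a weighted combination of base losses.  A weight
# vector (here fixed by hand; HWnet would predict it from the task)
# mixes candidate losses so the effective loss matches the noise at hand.

def l2(residual):
    return residual ** 2

def l1(residual):
    return abs(residual)

def weighted_loss(residuals, weights):
    """weights = (w_l2, w_l1); returns the mixed empirical loss."""
    w2, w1 = weights
    return sum(w2 * l2(r) + w1 * l1(r) for r in residuals) / len(residuals)

residuals = [0.1, -0.2, 5.0]                           # 5.0: an outlier
gaussian_like = weighted_loss(residuals, (1.0, 0.0))   # pure L2
robust       = weighted_loss(residuals, (0.0, 1.0))    # pure L1
```

The outlier dominates the L2 mixture but not the L1 mixture, which is why matching the weight vector to the noise distribution matters.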

Algorithm auto-designing

The stochastic gradient descent algorithm requires manually presetting a learning rate (LR) schedule (i.e. $\{\alpha_t\}_{t=1}^T$, with $T$ the total number of iteration steps) for the task at hand. We use a long short-term memory-based net, called MLR-SNet [ 11 ], to adaptively set the LR schedule. MLR-SNet automatically learns a proper LR schedule that complies with the training dynamics of different deep neural network (DNN) training problems, and is more flexible than hand-designed policies for specific learning tasks. The meta-learned LR schedule is plug and play and can readily be transferred to unseen heterogeneous tasks. MLR-SNet has been substantiated to transfer among DNN training tasks with different training epochs, datasets and network architectures, including large-scale ImageNet, achieving performance comparable to the best hand-designed LR schedules on the test data.
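The contrast between a hand-designed schedule and an adaptive one can be sketched as follows. The adaptive function below is a crude stand-in for a learned policy like MLR-SNet (which is an LSTM trained by meta-learning, not this closed form); its functional form and constants are invented for illustration.

```python
import math

# Hand-designed versus adaptive LR schedules.  step_decay presets the
# whole schedule; adaptive_lr sets alpha_t from the step and the current
# loss, mimicking a policy that reacts to training dynamics.

def step_decay(t, alpha0=0.1, drop=0.5, every=30):
    """Hand-designed: halve the LR every `every` steps."""
    return alpha0 * drop ** (t // every)

def adaptive_lr(t, loss, alpha0=0.1):
    """Stand-in for a learned policy: decay with steps, but keep the
    LR higher while the loss is still large."""
    return alpha0 * math.exp(-0.01 * t) * (1.0 - math.exp(-loss))

schedule = [step_decay(t) for t in (0, 30, 60)]
lr_early = adaptive_lr(10, loss=2.0)    # early, loss still high
lr_late  = adaptive_lr(200, loss=0.1)   # late, loss nearly converged
```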

SLeM applications

We have released the aforementioned SLeM algorithms as an open-source toolkit at https://github.com/xjtushujun/Auto-6ML, built on Jittor, to help users handle real-life Auto$^6$ML problems. Recently, our CMW-Net algorithm [ 12 ] won the 2022 International Algorithm Case Competition, achieving competitive sample selection and label correction performance on real-life heterogeneous and diverse label noise tasks, which shows its potential usefulness for practical datasets and tasks. SLeM algorithms can also be applied to real-world problems featuring varying multiple tasks from dynamic environments. For example, the visual unmanned navigation problem calls for reliable feature extraction and matching techniques that generalize to different geophysical scenarios and multimodal data; the smart education problem calls for effective visual recognition, detection and analysis techniques that generalize to diverse teaching scenarios and analysis tasks; and so on.

AutoML [ 13 , 14 ] encompasses a wide range of methods that automate traditionally manual aspects of the machine learning process, such as data preparation, algorithm selection, hyperparameter tuning and architecture search, but it has seen limited research on automatic transfer between varying tasks, which is emphasized by the aforementioned Auto$^6$ML. Existing AutoML methods are mostly heuristic, making it difficult to develop theoretical evidence. In comparison, our SLeM framework establishes a unified mathematical formulation for Auto$^6$ML, and provides theoretical insight into the task transfer generalization ability of SLeM [ 6 ].

Algorithm selection [ 15 ] learns a mapping from the problem space to the algorithm space by searching for the optimal algorithm from a finite pool of algorithms for the task at hand, and is usually too inflexible to address varying tasks. SLeM instead adopts bi-level optimization tools to extract a learning methodology mapping that predicts the proper learning method for different tasks with a sound theoretical guarantee, and can thus fit query tasks more flexibly and adaptively.

Existing SLeM algorithms only realize automation for individual components of ML, which is still far from the full goal of Auto$^6$ML. In particular, the learning process of SLeM still requires extensive human intervention and selection. Achieving SLeM algorithms with stronger automation capabilities, and handling more complex automation problems and scenarios, remain important directions for future research. Moreover, the lightweight prompt-based SLeM approach, which reduces the cost of large language models (LLMs), is worth deeper and more comprehensive exploration. Besides, we aim to construct a novel learning theory on infinite-dimensional function spaces to reveal the insights of SLeM more finely, and to develop a task-generalized transfer learning theory that provides a theoretical foundation for handling varying tasks and dynamic environments in real-world applications. Building connections between SLeM and other techniques exploring task-transferable generalization, such as meta-learning, in-context learning and large foundation models, is also valuable for future research.

This work was supported by the National Key Research and Development Program of China (2022YFA1004100) and in part by the National Natural Science Foundation of China (12326606 and 12226004).

Conflict of interest statement: None declared.

[1] Xu Z. Sci Sin Inform 2021; 51: 1967–78.

[2] Brown T, Mann B, Ryder N et al. Language models are few-shot learners. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates, 2020; 1877–1901.

[3] Wei J, Wang X, Schuurmans D et al. Chain-of-thought prompting elicits reasoning in large language models. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates, 2022; 24824–37.

[4] Shu J, Xie Q, Yi L et al. Meta-Weight-Net: learning an explicit mapping for sample weighting. In: Proceedings of the 33rd International Conference on Neural Information Processing Systems. Red Hook: Curran Associates, 2019; 1919–30.

[5] Shu J, Yuan X, Meng D et al. arXiv:2305.07892.

[6] Shu J, Meng D, Xu Z. J Mach Learn Res 2023; 24: 186.

[7] Shu J, Yuan X, Meng D et al. IEEE Trans Pattern Anal Mach Intell 2023; 45: 11521–39. 10.1109/TPAMI.2023.3271451

[8] Shu J, Zhao Q, Xu Z et al. arXiv:2006.05697.

[9] Rui X, Cao X, Shu J et al. arXiv:2301.06081.

[10] Ding K, Shu J, Meng D et al. arXiv:2301.07306.

[11] Shu J, Zhu Y, Zhao Q et al. IEEE Trans Pattern Anal Mach Intell 2023; 45: 3505–21. 10.1109/TPAMI.2022.3184315

[12] Shu J, Yuan X, Meng D. Natl Sci Rev 2023; 10: nwad084. 10.1093/nsr/nwad084

[13] Hutter F, Kotthoff L, Vanschoren J. Automated Machine Learning: Methods, Systems, Challenges. Cham: Springer, 2019. 10.1007/978-3-030-05318-5

[14] Baratchi M, Wang C, Limmer S et al. Artif Intell Rev 2024; 57: 122.

[15] Rice JR. Adv Comput 1976; 15: 65–118. 10.1016/S0065-2458(08)60520-3


  • Online ISSN 2053-714X
  • Print ISSN 2095-5138
  • Copyright © 2024 China Science Publishing & Media Ltd. (Science Press)

Peer Reviewed

GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation


Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI. They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing. Google Scholar easily locates and lists these questionable papers alongside reputable, quality-controlled research. Our analysis of a selection of questionable GPT-fabricated scientific papers found in Google Scholar shows that many are about applied, often controversial topics susceptible to disinformation: the environment, health, and computing. The resulting enhanced potential for malicious manipulation of society’s evidence base, particularly in politically divisive domains, is a growing concern.

Swedish School of Library and Information Science, University of Borås, Sweden

Department of Arts and Cultural Sciences, Lund University, Sweden

Division of Environmental Communication, Swedish University of Agricultural Sciences, Sweden


Research Questions

  • Where are questionable publications produced with generative pre-trained transformers (GPTs) that can be found via Google Scholar published or deposited?
  • What are the main characteristics of these publications in relation to predominant subject categories?
  • How are these publications spread in the research infrastructure for scholarly communication?
  • How is the role of the scholarly communication infrastructure challenged in maintaining public trust in science and evidence through inappropriate use of generative AI?

Research Note Summary

  • A sample of scientific papers with signs of GPT use found on Google Scholar was retrieved, downloaded, and analyzed using a combination of qualitative coding and descriptive statistics. All papers contained at least one of two common phrases returned by conversational agents that use large language models (LLMs) like OpenAI’s ChatGPT. Google Search was then used to determine the extent to which copies of questionable, GPT-fabricated papers were available in various repositories, archives, citation databases, and social media platforms.
  • Roughly two-thirds of the retrieved papers were found to have been produced, at least in part, through undisclosed, potentially deceptive use of GPT. The majority (57%) of these questionable papers dealt with policy-relevant subjects (i.e., environment, health, computing), susceptible to influence operations. Most were available in several copies on different domains (e.g., social media, archives, and repositories).
  • Two main risks arise from the increasingly common use of GPT to (mass-)produce fake, scientific publications. First, the abundance of fabricated “studies” seeping into all areas of the research infrastructure threatens to overwhelm the scholarly communication system and jeopardize the integrity of the scientific record. A second risk lies in the increased possibility that convincingly scientific-looking content was in fact deceitfully created with AI tools and is also optimized to be retrieved by publicly available academic search engines, particularly Google Scholar. However small, this possibility and awareness of it risks undermining the basis for trust in scientific knowledge and poses serious societal risks.

Implications

The use of ChatGPT to generate text for academic papers has raised concerns about research integrity. Discussion of this phenomenon is ongoing in editorials, commentaries, opinion pieces, and on social media (Bom, 2023; Stokel-Walker, 2024; Thorp, 2023). There are now several lists of papers suspected of GPT misuse, and new papers are constantly being added (see, for example, Academ-AI, https://www.academ-ai.info/, and Retraction Watch, https://retractionwatch.com/papers-and-peer-reviews-with-evidence-of-chatgpt-writing/). While many legitimate uses of GPT for research and academic writing exist (Huang & Tan, 2023; Kitamura, 2023; Lund et al., 2023), its undeclared use (beyond proofreading) has potentially far-reaching implications for both science and society, and especially for their relationship. It therefore seems important to extend the discussion to one of the most accessible and well-known intermediaries between science (but also certain types of misinformation) and the public, namely Google Scholar, not least in response to legitimate concerns that the discussion of generative AI and misinformation needs to be more nuanced and empirically substantiated (Simon et al., 2023).

Google Scholar, https://scholar.google.com , is an easy-to-use academic search engine. It is available for free, and its index is extensive (Gusenbauer & Haddaway, 2020). It is also often touted as a credible source for academic literature and even recommended in library guides, by media and information literacy initiatives, and fact checkers (Tripodi et al., 2023). However, Google Scholar lacks the transparency and adherence to standards that usually characterize citation databases. Instead, Google Scholar uses automated crawlers, like Google’s web search engine (Martín-Martín et al., 2021), and its inclusion criteria are based primarily on technical standards, allowing any individual author—with or without scientific affiliation—to upload papers to be indexed (Google Scholar Help, n.d.). It has been shown that Google Scholar is susceptible to manipulation through citation exploits (Antkare, 2020) and by providing access to fake scientific papers (Dadkhah et al., 2017). A large part of Google Scholar’s index consists of publications from established scientific journals or other forms of quality-controlled, scholarly literature. However, the index also contains a large amount of gray literature, including student papers, working papers, reports, preprint servers, and academic networking sites, as well as material from so-called “questionable” academic journals, including paper mills. The search interface does not offer the possibility to filter the results meaningfully by material type, publication status, or form of quality control, such as limiting the search to peer-reviewed material.

To understand the occurrence of ChatGPT (co-)authored work in Google Scholar’s index, we scraped it for publications including one of two common ChatGPT responses (see Appendix A) that we encountered on social media and in media reports (DeGeurin, 2024). The results of our descriptive statistical analyses showed that around 62% did not declare the use of GPTs. Most of these GPT-fabricated papers were found in non-indexed journals and working papers, but some cases included research published in mainstream scientific journals and conference proceedings. (Indexed journals are scholarly journals indexed by abstract and citation databases such as Scopus and Web of Science, an indexation that implies high scientific quality; non-indexed journals fall outside this indexation.) More than half (57%) of these GPT-fabricated papers concerned policy-relevant subject areas susceptible to influence operations. To avoid increasing the visibility of these publications, we abstained from referencing them in this research note. However, we have made the data available in the Harvard Dataverse repository.

The publications were related to three issue areas—health (14.5%), environment (19.5%) and computing (23%)—with key terms such as “healthcare,” “COVID-19,” or “infection” for health-related papers, and “analysis,” “sustainable,” and “global” for environment-related papers. In several cases, the papers had titles that strung together general keywords and buzzwords, thus alluding to very broad and current research. These terms included “biology,” “telehealth,” “climate policy,” “diversity,” and “disrupting,” to name just a few. While the study’s scope and design did not include a detailed analysis of which parts of the articles included fabricated text, our dataset did contain the surrounding sentences for each occurrence of the suspicious phrases that formed the basis for our search and subsequent selection. Based on that, we can say that the phrases occurred in most sections typically found in scientific publications, including the literature review, methods, conceptual and theoretical frameworks, background, motivation or societal relevance, and even discussion. This was confirmed during the joint coding, where we read and discussed all articles. It became clear that not just the text related to the telltale phrases was created by GPT; almost all articles in our sample of questionable articles likely contained traces of GPT-fabricated text throughout.

Evidence hacking and backfiring effects

Generative pre-trained transformers (GPTs) can be used to produce texts that mimic scientific writing. These texts, when made available online—as we demonstrate—leak into the databases of academic search engines and other parts of the research infrastructure for scholarly communication. This development exacerbates problems that were already present with less sophisticated text generators (Antkare, 2020; Cabanac & Labbé, 2021). Yet, the public release of ChatGPT in 2022, together with the way Google Scholar works, has increased the likelihood of lay people (e.g., media, politicians, patients, students) coming across questionable (or even entirely GPT-fabricated) papers and other problematic research findings. Previous research has emphasized that the ability to determine the value and status of scientific publications for lay people is at stake when misleading articles are passed off as reputable (Haider & Åström, 2017) and that systematic literature reviews risk being compromised (Dadkhah et al., 2017). It has also been highlighted that Google Scholar, in particular, can be and has been exploited for manipulating the evidence base for politically charged issues and to fuel conspiracy narratives (Tripodi et al., 2023). Both concerns are likely to be magnified in the future, increasing the risk of what we suggest calling evidence hacking —the strategic and coordinated malicious manipulation of society’s evidence base.

The authority of quality-controlled research as evidence to support legislation, policy, politics, and other forms of decision-making is undermined by the presence of undeclared GPT-fabricated content in publications professing to be scientific. Due to the large number of archives, repositories, mirror sites, and shadow libraries to which they spread, there is a clear risk that GPT-fabricated, questionable papers will reach audiences even after a possible retraction. There are considerable technical difficulties involved in identifying and tracing computer-fabricated papers (Cabanac & Labbé, 2021; Dadkhah et al., 2023; Jones, 2024), not to mention preventing and curbing their spread and uptake.

However, as the rise of the so-called anti-vaxx movement during the COVID-19 pandemic and the ongoing obstruction and denial of climate change show, retracting erroneous publications often fuels conspiracies and increases the following of these movements rather than stopping them. To illustrate this mechanism, climate deniers frequently question established scientific consensus by pointing to other, supposedly scientific, studies that support their claims. Usually, these are poorly executed, not peer-reviewed, based on obsolete data, or even fraudulent (Dunlap & Brulle, 2020). A similar strategy is successful in the alternative epistemic world of the global anti-vaccination movement (Carrion, 2018) and the persistence of flawed and questionable publications in the scientific record already poses significant problems for health research, policy, and lawmakers, and thus for society as a whole (Littell et al., 2024). Considering that a person’s support for “doing your own research” is associated with increased mistrust in scientific institutions (Chinn & Hasell, 2023), it will be of utmost importance to anticipate and consider such backfiring effects already when designing a technical solution, when suggesting industry or legal regulation, and in the planning of educational measures.

Recommendations

Solutions should be based on simultaneous considerations of technical, educational, and regulatory approaches, as well as incentives, including social ones, across the entire research infrastructure. Paying attention to how these approaches and incentives relate to each other can help identify points and mechanisms for disruption. Recognizing fraudulent academic papers must happen alongside understanding how they reach their audiences and what reasons there might be for some of these papers successfully “sticking around.” A possible way to mitigate some of the risks associated with GPT-fabricated scholarly texts finding their way into academic search engine results would be to provide filtering options for facets such as indexed journals, gray literature, peer review, and similar on the interface of publicly available academic search engines. Furthermore, evaluation tools for indexed journals (such as LiU Journal CheckUp, https://ep.liu.se/JournalCheckup/default.aspx?lang=eng) could be integrated into the graphical user interfaces and the crawlers of these academic search engines. To enable accountability, it is important that the index (database) of such a search engine is populated according to criteria that are transparent, open to scrutiny, and appropriate to the workings of science and other forms of academic research. Moreover, considering that Google Scholar has no real competitor, there is a strong case for establishing a freely accessible, non-specialized academic search engine that is not run for commercial reasons but for reasons of public interest. Such measures, together with educational initiatives aimed particularly at policymakers, science communicators, journalists, and other media workers, will be crucial to reducing the possibilities for and effects of malicious manipulation or evidence hacking.
It is important not to present this as a technical problem that exists only because of AI text generators but to relate it to the wider concerns in which it is embedded. These range from a largely dysfunctional scholarly publishing system (Haider & Åström, 2017) and academia’s “publish or perish” paradigm to Google’s near-monopoly and ideological battles over the control of information and ultimately knowledge. Any intervention is likely to have systemic effects; these effects need to be considered and assessed in advance and, ideally, followed up on.

Our study focused on a selection of papers that were easily recognizable as fraudulent. We used this relatively small sample as a magnifying glass to examine, delineate, and understand a problem that goes beyond the scope of the sample itself and points towards larger concerns requiring further investigation. The work of ongoing whistleblowing initiatives (such as Academ-AI, https://www.academ-ai.info/, and Retraction Watch, https://retractionwatch.com/papers-and-peer-reviews-with-evidence-of-chatgpt-writing/), recent media reports of journal closures (Subbaraman, 2024), and GPT-related changes in word use and writing style (Cabanac et al., 2021; Stokel-Walker, 2024) suggest that we only see the tip of the iceberg. There are already more sophisticated cases (Dadkhah et al., 2023) as well as cases involving fabricated images (Gu et al., 2022). Our analysis shows that questionable and potentially manipulative GPT-fabricated papers permeate the research infrastructure and are likely to become a widespread phenomenon. Our findings underline that the risk of fake scientific papers being used to maliciously manipulate evidence (see Dadkhah et al., 2017) must be taken seriously. Manipulation may involve undeclared automatic summaries of texts, inclusion in literature reviews, explicit scientific claims, or the concealment of errors in studies so that they are difficult to detect in peer review. However, the mere possibility of these things happening is a significant risk in its own right that can be strategically exploited and will have ramifications for trust in and perception of science. Society’s methods of evaluating sources and the foundations of media and information literacy are under threat, and public trust in science is at risk of further erosion, with far-reaching consequences for society’s ability to deal with information disorders. To address this multifaceted problem, we first need to understand why it exists and proliferates.

Finding 1: 139 GPT-fabricated, questionable papers were found and listed as regular results on the Google Scholar results page. Non-indexed journals dominate.

Most questionable papers we found were in non-indexed journals or were working papers, but we did also find some in established journals, publications, conferences, and repositories. We found a total of 139 papers with a suspected deceptive use of ChatGPT or similar LLM applications (see Table 1). Out of these, 19 were in indexed journals, 89 were in non-indexed journals, 19 were student papers found in university databases, and 12 were working papers (mostly in preprint databases). Table 1 divides these papers into categories. Health and environment papers made up around 34% (47) of the sample. Of these, 66% were present in non-indexed journals.

Table 1. Number of GPT-fabricated, questionable papers per publication type and subject category.

                       Computing  Environment  Health  Others  Total
Indexed journals*              5            3       4       7     19
Non-indexed journals          18           18      13      40     89
Student papers                 4            3       1      11     19
Working papers                 5            3       2       2     12
Total                         32           27      20      60    139

Finding 2: GPT-fabricated, questionable papers are disseminated online, permeating the research infrastructure for scholarly communication, often in multiple copies. Applied topics with practical implications dominate.

The 20 papers concerning health-related issues are distributed across 20 unique domains, accounting for 46 URLs. The 27 papers dealing with environmental issues can be found across 26 unique domains, accounting for 56 URLs.  Most of the identified papers exist in multiple copies and have already spread to several archives, repositories, and social media. It would be difficult, or impossible, to remove them from the scientific record.

As apparent from Table 2, GPT-fabricated, questionable papers are seeping into most parts of the online research infrastructure for scholarly communication. Platforms on which identified papers have appeared include ResearchGate, ORCiD, Journal of Population Therapeutics and Clinical Pharmacology (JPTCP), Easychair, Frontiers, the Institute of Electrical and Electronics Engineer (IEEE), and X/Twitter. Thus, even if they are retracted from their original source, it will prove very difficult to track, remove, or even just mark them up on other platforms. Moreover, unless regulated, Google Scholar will enable their continued and most likely unlabeled discoverability.

Table 2. Top five domains hosting the GPT-fabricated papers, by category (number of papers in parentheses).

Environment: researchgate.net (13), orcid.org (4), easychair.org (3), ijope.com* (3), publikasiindonesia.id (3)
Health: researchgate.net (15), ieee.org (4), twitter.com (3), jptcp.com** (2), frontiersin.org (2)

A word rain visualization (Centre for Digital Humanities Uppsala, 2023), which combines word prominences through TF-IDF scores (term frequency–inverse document frequency, a method for measuring the significance of a word in a document compared to its frequency across all documents in a collection) with semantic similarity of the full texts of our sample of GPT-generated articles in the “Environment” and “Health” categories, reflects the two categories in question. However, as can be seen in Figure 1, it also reveals overlap and sub-areas. The y-axis shows word prominence through word position and font size, while the x-axis indicates semantic similarity. In addition to a certain amount of overlap, this reveals sub-areas, best described as two distinct events within the word rain. The event on the left bundles terms related to the development and management of health and healthcare, with “challenges,” “impact,” and “potential of artificial intelligence” emerging as semantically related terms. Terms related to research infrastructures and to environmental, epistemic, and technological concepts are arranged further down in the same event (e.g., “system,” “climate,” “understanding,” “knowledge,” “learning,” “education,” “sustainable”). A second distinct event further to the right bundles terms associated with fish farming and aquatic medicinal plants, highlighting the presence of an aquaculture cluster. Here, the prominence of groups of terms such as “used,” “model,” “-based,” and “traditional” suggests the presence of applied research on these topics. The two events making up the word rain visualization are linked by a less dominant but overlapping cluster of terms related to “energy” and “water.”
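The TF-IDF prominence scores underlying such a visualization can be computed in a few lines; the tiny corpus below is invented for illustration.

```python
import math

# Minimal TF-IDF computation of the kind underlying the word-prominence
# axis: a term's score in a document is its relative frequency times the
# log-inverse of how many documents in the collection contain it.

def tf_idf(term, doc, corpus):
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / df)
    return tf * idf

corpus = [
    ["healthcare", "covid", "infection", "analysis"],
    ["sustainable", "climate", "global", "analysis"],
    ["model", "traditional", "aquaculture", "analysis"],
]
# "analysis" appears in every document -> idf = log(1) = 0 -> no prominence.
common = tf_idf("analysis", corpus[0], corpus)
# "covid" appears in only one document -> high prominence there.
rare = tf_idf("covid", corpus[0], corpus)
```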

[Figure 1. Word rain visualization of the environment- and health-related GPT-fabricated papers: word prominence (TF-IDF) on the y-axis, semantic similarity on the x-axis.]

The bar chart of the terms in the paper subset (see Figure 2) complements the word rain visualization by depicting the most prominent terms in the full texts along the y-axis. Here, word prominences across health and environment papers are arranged in descending order; values outside parentheses are TF-IDF values (relative frequencies) and values inside parentheses are raw term frequencies (absolute frequencies).

[Figure 2. Bar chart of the most prominent terms across the health and environment papers.]

Finding 3: Google Scholar presents results from quality-controlled and non-controlled citation databases on the same interface, providing unfiltered access to GPT-fabricated questionable papers.

Google Scholar’s central position in the publicly accessible scholarly communication infrastructure, as well as its lack of standards, transparency, and accountability in terms of inclusion criteria, has potentially serious implications for public trust in science. This is likely to exacerbate the already-known potential to exploit Google Scholar for evidence hacking (Tripodi et al., 2023) and will have implications for any attempts to retract or remove fraudulent papers from their original publication venues. Any solution must consider the entirety of the research infrastructure for scholarly communication and the interplay of different actors, interests, and incentives.

We searched and scraped Google Scholar using the Python library Scholarly (Cholewiak et al., 2023) for papers that included specific phrases known to be common responses from ChatGPT and similar applications with the same underlying model (GPT-3.5 or GPT-4): “as of my last knowledge update” and/or “I don’t have access to real-time data” (see Appendix A). This facilitated the identification of papers that likely used generative AI to produce text, resulting in 227 retrieved papers. The papers’ bibliographic information was automatically added to a spreadsheet and downloaded into Zotero, an open-source reference manager (https://zotero.org).
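The phrase-based flagging step can be sketched as follows. The function name is ours, and a production version would also need to normalize curly apostrophes, hyphenation, and line breaks before matching.

```python
# Flag a text if it contains either of the two telltale ChatGPT
# responses that the Google Scholar search was built on.

TELLTALE_PHRASES = (
    "as of my last knowledge update",
    "i don't have access to real-time data",
)

def flag_gpt_phrases(text):
    """Return the telltale phrases found in `text` (case-insensitive)."""
    lowered = text.lower()
    return [p for p in TELLTALE_PHRASES if p in lowered]

flags = flag_gpt_phrases(
    "As of my last knowledge update in September 2021, this field ..."
)
clean = flag_gpt_phrases("We measured chlorophyll content in 40 samples.")
```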

We employed multiple coding (Barbour, 2001) to classify the papers based on their content. First, we jointly assessed whether the paper was suspected of fraudulent use of ChatGPT (or similar) based on how the text was integrated into the papers and whether the paper was presented as original research output or the AI tool’s role was acknowledged. Second, in analyzing the content of the papers, we continued the multiple coding by classifying the fraudulent papers into four categories identified during an initial round of analysis—health, environment, computing, and others—and then determining which subjects were most affected by this issue (see Table 1). Out of the 227 retrieved papers, 88 papers were written with legitimate and/or declared use of GPTs (i.e., false positives, which were excluded from further analysis), and 139 papers were written with undeclared and/or fraudulent use (i.e., true positives, which were included in further analysis). The multiple coding was conducted jointly by all authors of the present article, who collaboratively coded and cross-checked each other’s interpretation of the data simultaneously in a shared spreadsheet file. This was done to single out coding discrepancies and settle coding disagreements, which in turn ensured methodological thoroughness and analytical consensus (see Barbour, 2001). Redoing the category coding later based on our established coding schedule, we achieved an intercoder reliability (Cohen’s kappa) of 0.806 after resolving obvious discrepancies.
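Cohen's kappa adjusts raw agreement for the agreement expected by chance given each coder's label rates. A minimal sketch of the computation (the category labels below are illustrative, not the study's actual codings):

```python
def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labelling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each coder's marginal label rates.
    """
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    labels = set(coder_a) | set(coder_b)
    p_e = sum(
        (coder_a.count(lab) / n) * (coder_b.count(lab) / n) for lab in labels
    )
    return (p_o - p_e) / (1 - p_e)

# Toy example: 3/4 raw agreement, chance agreement 0.5 -> kappa 0.5.
a = ["health", "health", "environment", "environment"]
b = ["health", "environment", "environment", "environment"]
kappa = cohens_kappa(a, b)
```

A kappa of 0.806, as reported, is conventionally read as strong agreement beyond chance.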

The ranking algorithm of Google Scholar prioritizes highly cited and older publications (Martín-Martín et al., 2016). Therefore, the position of the articles on the search engine results pages was not particularly informative, considering the relatively small number of results in combination with the recency of the publications. Only the query “as of my last knowledge update” had more than two search engine result pages. On those, questionable articles with undeclared use of GPTs were evenly distributed across all result pages (min: 4, max: 9, mode: 8), with the proportion of undeclared use being slightly higher on average on later search result pages.

To understand how the papers making fraudulent use of generative AI were disseminated online, we programmatically searched for the paper titles (with exact string matching) in Google Search from our local IP address (see Appendix B) using the googlesearch-python library (Vikramaditya, 2020). We manually verified each search result to filter out false positives—results that were not related to the paper—and then compiled the most prominent URLs by field. This enabled the identification of other platforms through which the papers had been spread. We did not, however, investigate whether copies had spread into SciHub or other shadow libraries, or if they were referenced in Wikipedia.
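The exact-string queries and the false-positive screen can be approximated as follows. The substring-based title check is a simplified, hypothetical stand-in for the authors' manual verification, and the actual network call through googlesearch-python is only indicated in a comment:

```python
def exact_match_query(title: str) -> str:
    """Wrap a paper title in double quotes for exact string matching."""
    return f'"{title}"'

def keep_true_hits(title: str, result_titles: list) -> list:
    """Drop results that do not mention the paper title.

    A simplified automated proxy for the study's manual screening step.
    """
    needle = title.lower()
    return [t for t in result_titles if needle in t.lower()]

# In the study, the query is issued against live Google Search, roughly:
#   from googlesearch import search
#   urls = list(search(exact_match_query(title)))
```

The verified URLs per paper can then be grouped by field to surface the most prominent dissemination platforms.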

We used descriptive statistics to count the prevalence of GPT-fabricated papers across topics and venues and top domains by subject. The pandas software library for the Python programming language (The pandas development team, 2024) was used for this part of the analysis. Based on the multiple coding, paper occurrences were counted in relation to their categories, divided into indexed journals, non-indexed journals, student papers, and working papers. The schemes, subdomains, and subdirectories of the URL strings were filtered out, while top-level and second-level domains were kept, thereby normalizing the domain names. This, in turn, allowed the counting of domain frequencies in the environment and health categories. To distinguish word prominences and meanings in the environment and health-related GPT-fabricated questionable papers, a semantically aware word cloud visualization was produced using a word rain (Centre for Digital Humanities Uppsala, 2023) for full-text versions of the papers. Font size and y-axis positions indicate word prominences through TF-IDF scores for the environment and health papers (also visualized in a separate bar chart with raw term frequencies in parentheses), and words are positioned along the x-axis to reflect semantic similarity (Skeppstedt et al., 2024), with an English Word2vec skip-gram model space (Fares et al., 2017). An English stop word list was used, along with a manually produced list including terms such as “https,” “volume,” or “years.”
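The URL normalization step, reducing each link to its second-level plus top-level domain before counting frequencies, can be sketched as below. The two-label heuristic is an assumption and would miscount multi-label public suffixes such as .co.uk:

```python
from collections import Counter
from urllib.parse import urlparse

def normalize_domain(url: str) -> str:
    """Strip scheme, subdomains, paths, and ports; keep SLD + TLD."""
    host = urlparse(url).netloc.lower().split(":")[0]
    labels = host.split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host

# Illustrative URLs, not ones from the study's dataset.
urls = [
    "https://www.example.com/paper.pdf",
    "http://repo.example.com/item?id=1",
    "https://another.org/abs/123",
]
domain_counts = Counter(normalize_domain(u) for u in urls)
```

Grouping by normalized domain is what makes two links into different corners of the same site count toward one platform.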


Cite this Essay

Haider, J., Söderström, K. R., Ekström, B., & Rödl, M. (2024). GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation. Harvard Kennedy School (HKS) Misinformation Review. https://doi.org/10.37016/mr-2020-156

Bibliography

Antkare, I. (2020). Ike Antkare, his publications, and those of his disciples. In M. Biagioli & A. Lippman (Eds.), Gaming the metrics (pp. 177–200). The MIT Press. https://doi.org/10.7551/mitpress/11087.003.0018

Barbour, R. S. (2001). Checklists for improving rigour in qualitative research: A case of the tail wagging the dog? BMJ , 322 (7294), 1115–1117. https://doi.org/10.1136/bmj.322.7294.1115

Bom, H.-S. H. (2023). Exploring the opportunities and challenges of ChatGPT in academic writing: A roundtable discussion. Nuclear Medicine and Molecular Imaging , 57 (4), 165–167. https://doi.org/10.1007/s13139-023-00809-2

Cabanac, G., & Labbé, C. (2021). Prevalence of nonsensical algorithmically generated papers in the scientific literature. Journal of the Association for Information Science and Technology , 72 (12), 1461–1476. https://doi.org/10.1002/asi.24495

Cabanac, G., Labbé, C., & Magazinov, A. (2021). Tortured phrases: A dubious writing style emerging in science. Evidence of critical issues affecting established journals . arXiv. https://doi.org/10.48550/arXiv.2107.06751

Carrion, M. L. (2018). “You need to do your research”: Vaccines, contestable science, and maternal epistemology. Public Understanding of Science , 27 (3), 310–324. https://doi.org/10.1177/0963662517728024

Centre for Digital Humanities Uppsala (2023). CDHUppsala/word-rain [Computer software]. https://github.com/CDHUppsala/word-rain

Chinn, S., & Hasell, A. (2023). Support for “doing your own research” is associated with COVID-19 misperceptions and scientific mistrust. Harvard Kennedy School (HKS) Misinformation Review, 4 (3). https://doi.org/10.37016/mr-2020-117

Cholewiak, S. A., Ipeirotis, P., Silva, V., & Kannawadi, A. (2023). SCHOLARLY: Simple access to Google Scholar authors and citation using Python (1.5.0) [Computer software]. https://doi.org/10.5281/zenodo.5764801

Dadkhah, M., Lagzian, M., & Borchardt, G. (2017). Questionable papers in citation databases as an issue for literature review. Journal of Cell Communication and Signaling , 11 (2), 181–185. https://doi.org/10.1007/s12079-016-0370-6

Dadkhah, M., Oermann, M. H., Hegedüs, M., Raman, R., & Dávid, L. D. (2023). Detection of fake papers in the era of artificial intelligence. Diagnosis , 10 (4), 390–397. https://doi.org/10.1515/dx-2023-0090

DeGeurin, M. (2024, March 19). AI-generated nonsense is leaking into scientific journals. Popular Science. https://www.popsci.com/technology/ai-generated-text-scientific-journals/

Dunlap, R. E., & Brulle, R. J. (2020). Sources and amplifiers of climate change denial. In D.C. Holmes & L. M. Richardson (Eds.), Research handbook on communicating climate change (pp. 49–61). Edward Elgar Publishing. https://doi.org/10.4337/9781789900408.00013

Fares, M., Kutuzov, A., Oepen, S., & Velldal, E. (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources. In J. Tiedemann & N. Tahmasebi (Eds.), Proceedings of the 21st Nordic Conference on Computational Linguistics (pp. 271–276). Association for Computational Linguistics. https://aclanthology.org/W17-0237

Google Scholar Help. (n.d.). Inclusion guidelines for webmasters . https://scholar.google.com/intl/en/scholar/inclusion.html

Gu, J., Wang, X., Li, C., Zhao, J., Fu, W., Liang, G., & Qiu, J. (2022). AI-enabled image fraud in scientific publications. Patterns , 3 (7), 100511. https://doi.org/10.1016/j.patter.2022.100511

Gusenbauer, M., & Haddaway, N. R. (2020). Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Research Synthesis Methods , 11 (2), 181–217.   https://doi.org/10.1002/jrsm.1378

Haider, J., & Åström, F. (2017). Dimensions of trust in scholarly communication: Problematizing peer review in the aftermath of John Bohannon’s “Sting” in science. Journal of the Association for Information Science and Technology , 68 (2), 450–467. https://doi.org/10.1002/asi.23669

Huang, J., & Tan, M. (2023). The role of ChatGPT in scientific communication: Writing better scientific review articles. American Journal of Cancer Research , 13 (4), 1148–1154. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10164801/

Jones, N. (2024). How journals are fighting back against a wave of questionable images. Nature , 626 (8000), 697–698. https://doi.org/10.1038/d41586-024-00372-6

Kitamura, F. C. (2023). ChatGPT is shaping the future of medical writing but still requires human judgment. Radiology , 307 (2), e230171. https://doi.org/10.1148/radiol.230171

Littell, J. H., Abel, K. M., Biggs, M. A., Blum, R. W., Foster, D. G., Haddad, L. B., Major, B., Munk-Olsen, T., Polis, C. B., Robinson, G. E., Rocca, C. H., Russo, N. F., Steinberg, J. R., Stewart, D. E., Stotland, N. L., Upadhyay, U. D., & Ditzhuijzen, J. van. (2024). Correcting the scientific record on abortion and mental health outcomes. BMJ , 384 , e076518. https://doi.org/10.1136/bmj-2023-076518

Lund, B. D., Wang, T., Mannuru, N. R., Nie, B., Shimray, S., & Wang, Z. (2023). ChatGPT and a new academic reality: Artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology, 74 (5), 570–581. https://doi.org/10.1002/asi.24750

Martín-Martín, A., Orduna-Malea, E., Ayllón, J. M., & Delgado López-Cózar, E. (2016). Back to the past: On the shoulders of an academic search engine giant. Scientometrics , 107 , 1477–1487. https://doi.org/10.1007/s11192-016-1917-2

Martín-Martín, A., Thelwall, M., Orduna-Malea, E., & Delgado López-Cózar, E. (2021). Google Scholar, Microsoft Academic, Scopus, Dimensions, Web of Science, and OpenCitations’ COCI: A multidisciplinary comparison of coverage via citations. Scientometrics , 126 (1), 871–906. https://doi.org/10.1007/s11192-020-03690-4

Simon, F. M., Altay, S., & Mercier, H. (2023). Misinformation reloaded? Fears about the impact of generative AI on misinformation are overblown. Harvard Kennedy School (HKS) Misinformation Review, 4 (5). https://doi.org/10.37016/mr-2020-127

Skeppstedt, M., Ahltorp, M., Kucher, K., & Lindström, M. (2024). From word clouds to Word Rain: Revisiting the classic word cloud to visualize climate change texts. Information Visualization , 23 (3), 217–238. https://doi.org/10.1177/14738716241236188

Stokel-Walker, C. (2024, May 1). AI chatbots have thoroughly infiltrated scientific publishing. Scientific American. https://www.scientificamerican.com/article/chatbots-have-thoroughly-infiltrated-scientific-publishing/

Subbaraman, N. (2024, May 14). Flood of fake science forces multiple journal closures: Wiley to shutter 19 more journals, some tainted by fraud. The Wall Street Journal. https://www.wsj.com/science/academic-studies-research-paper-mills-journals-publishing-f5a3d4bc

Swedish Research Council. (2017). Good research practice. Vetenskapsrådet.

The pandas development team. (2024). pandas-dev/pandas: Pandas (v2.2.2) [Computer software]. Zenodo. https://doi.org/10.5281/zenodo.10957263

Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science , 379 (6630), 313–313. https://doi.org/10.1126/science.adg7879

Tripodi, F. B., Garcia, L. C., & Marwick, A. E. (2023). ‘Do your own research’: Affordance activation and disinformation spread. Information, Communication & Society , 27 (6), 1212–1228. https://doi.org/10.1080/1369118X.2023.2245869

Vikramaditya, N. (2020). Nv7-GitHub/googlesearch [Computer software]. https://github.com/Nv7-GitHub/googlesearch

Funding

This research has been supported by Mistra, the Swedish Foundation for Strategic Environmental Research, through the research program Mistra Environmental Communication (Haider, Ekström, Rödl) and the Marcus and Amalia Wallenberg Foundation [2020.0004] (Söderström).

Competing Interests

The authors declare no competing interests.

Ethics

The research described in this article was carried out under Swedish legislation. According to the relevant EU and Swedish legislation (2003:460) on the ethical review of research involving humans (“Ethical Review Act”), the research reported on here is not subject to authorization by the Swedish Ethical Review Authority (“etikprövningsmyndigheten”) (SRC, 2017).

This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided that the original author and source are properly credited.

Data Availability

All data needed to replicate this study are available at the Harvard Dataverse: https://doi.org/10.7910/DVN/WUVD8X

Acknowledgements

The authors wish to thank two anonymous reviewers for their valuable comments on the article manuscript as well as the editorial group of Harvard Kennedy School (HKS) Misinformation Review for their thoughtful feedback and input.
