
How to Write a Peer Review


When you write a peer review for a manuscript, what should you include in your comments? What should you leave out? And how should the review be formatted?

This guide provides quick tips for writing and organizing your reviewer report.

Review Outline

Use an outline for your reviewer report so it’s easy for the editors and author to follow. This will also help you keep your comments organized.

Think about structuring your review like an inverted pyramid. Put the most important information at the top, followed by details and examples in the center, and any additional points at the very bottom.


Here’s how your outline might look:

1. Summary of the research and your overall impression

In your own words, summarize what the manuscript claims to report. This shows the editor how you interpreted the manuscript and will highlight any major differences in perspective between you and the other reviewers. Give an overview of the manuscript’s strengths and weaknesses. Think about this as your “take-home” message for the editors. End this section with your recommended course of action.

2. Discussion of specific areas for improvement

It’s helpful to divide this section into two parts: one for major issues and one for minor issues. Within each section, you can talk about the biggest issues first or go systematically figure-by-figure or claim-by-claim. Number each item so that your points are easy to follow (this will also make it easier for the authors to respond to each point). Refer to specific lines, pages, sections, or figure and table numbers so the authors (and editors) know exactly what you’re talking about.
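For instance, numbered, line-referenced comments (the study details here are invented, purely for illustration) might read:

Major issues

1. Results, lines 142–150: The claim that the treatment outperforms the control is not supported by the statistics reported in Table 2. Please report effect sizes and confidence intervals alongside the p-values.

Minor issues

2. Line 87: “casual” should read “causal.”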

Major vs. minor issues

What’s the difference between a major and a minor issue? Major issues should consist of the essential points the authors need to address before the manuscript can proceed. Make sure you focus on what is fundamental for the current study. In other words, it’s not helpful to recommend additional work that would be considered the “next step” in the study. Minor issues are still important but typically will not affect the overall conclusions of the manuscript. Here are some examples of what might go in the “minor” category:

  • Missing references (but depending on what is missing, this could also be a major issue)
  • Technical clarifications (e.g., the authors should clarify how a reagent works)
  • Data presentation (e.g., the authors should present p-values differently)
  • Typos, spelling, grammar, and phrasing issues

3. Any other points

Confidential comments for the editors.

Some journals have a space for reviewers to enter confidential comments about the manuscript. Use this space to mention concerns about the submission that you’d want the editors to consider before sharing your feedback with the authors, such as concerns about ethical guidelines or language quality. Any serious issues should be raised directly and immediately with the journal as well.

This section is also where you will disclose any potentially competing interests, and mention whether you’re willing to look at a revised version of the manuscript.

Do not use this space to critique the manuscript, since comments entered here will not be passed along to the authors. If you’re not sure what should go in the confidential comments, read the reviewer instructions or check with the journal first before submitting your review. If you are reviewing for a journal that does not offer a space for confidential comments, consider writing to the editorial office directly with your concerns.


Giving Feedback

Giving feedback is hard. Giving effective feedback can be even more challenging. Remember that your ultimate goal is to discuss what the authors would need to do in order to qualify for publication. The point is not to nitpick every piece of the manuscript. Your focus should be on providing constructive and critical feedback that the authors can use to improve their study.

If you’ve ever had your own work reviewed, you already know that it’s not always easy to receive feedback. Follow the golden rule: Write the type of review you’d want to receive if you were the author. Even if you decide not to identify yourself in the review, you should write comments that you would be comfortable signing your name to.

In your comments, use phrases like “the authors’ discussion of X” instead of “your discussion of X.” This will depersonalize the feedback and keep the focus on the manuscript instead of the authors.

General guidelines for effective feedback

Do

  • Justify your recommendation with concrete evidence and specific examples.
  • Be specific so the authors know what they need to do to improve.
  • Be thorough. This might be the only time you read the manuscript.
  • Be professional and respectful. The authors will be reading these comments too.
  • Remember to say what you liked about the manuscript!


Don’t

  • Recommend additional experiments or unnecessary elements that are out of scope for the study or for the journal criteria.
  • Tell the authors exactly how to revise their manuscript—you don’t need to do their work for them.
  • Use the review to promote your own research or hypotheses.
  • Focus on typos and grammar. If the manuscript needs significant editing for language and writing quality, just mention this in your comments.
  • Submit your review without proofreading it and checking everything one more time.

Before and After: Sample Reviewer Comments

Keeping in mind the guidelines above, how do you put your thoughts into words? Here are some sample “before” and “after” reviewer comments:

✗ Before

“The authors appear to have no idea what they are talking about. I don’t think they have read any of the literature on this topic.”

✓ After

“The study fails to address how the findings relate to previous research in this area. The authors should rewrite their Introduction and Discussion to reference the related literature, especially recently published work such as Darwin et al.”

✗ Before

“The writing is so bad, it is practically unreadable. I could barely bring myself to finish it.”

✓ After

“While the study appears to be sound, the language is unclear, making it difficult to follow. I advise the authors to work with a writing coach or copyeditor to improve the flow and readability of the text.”

✗ Before

“It’s obvious that this type of experiment should have been included. I have no idea why the authors didn’t use it. This is a big mistake.”

✓ After

“The authors are off to a good start; however, this study requires additional experiments, particularly [type of experiment]. Alternatively, the authors should include more information that clarifies and justifies their choice of methods.”

Suggested Language for Tricky Situations

You might find yourself in a situation where you’re not sure how to explain the problem or provide feedback in a constructive and respectful way. Here is some suggested language for common issues you might experience.

What you think: The manuscript is fatally flawed. What you could say: “The study does not appear to be sound” or “the authors have missed something crucial.”

What you think: You don’t completely understand the manuscript. What you could say: “The authors should clarify the following sections to avoid confusion…”

What you think: The technical details don’t make sense. What you could say: “The technical details should be expanded and clarified to ensure that readers understand exactly what the researchers studied.”

What you think: The writing is terrible. What you could say: “The authors should revise the language to improve readability.”

What you think: The authors have over-interpreted the findings. What you could say: “The authors aim to demonstrate [XYZ]; however, the data do not fully support this conclusion. Specifically…”

What does a good review look like?

Check out the peer review examples at F1000 Research to see how other reviewers write up their reports and give constructive feedback to authors.

Time to Submit the Review!

Be sure you turn in your report on time. Need an extension? Tell the journal so that they know what to expect. If you need a lot of extra time, the journal might need to contact other reviewers or notify the author about the delay.

Tip: Building a relationship with an editor

You’ll be more likely to be asked to review again if you provide high-quality feedback and if you turn in the review on time. Especially if it’s your first review for a journal, it’s important to show that you are reliable. Prove yourself once and you’ll get asked to review again!



How to write a superb literature review

Andy Tay is a freelance writer based in Singapore.


Literature reviews are important resources for scientists. They provide historical context for a field while offering opinions on its future trajectory. Creating them can provide inspiration for one’s own research, as well as some practice in writing. But few scientists are trained in how to write a review — or in what constitutes an excellent one. Even picking the appropriate software to use can be an involved decision (see ‘Tools and techniques’). So Nature asked editors and working scientists with well-cited reviews for their tips.


doi: https://doi.org/10.1038/d41586-020-03422-x

Interviews have been edited for length and clarity.



How to write a good scientific review article

Affiliation

  • The FEBS Journal Editorial Office, Cambridge, UK.
  • PMID: 35792782
  • DOI: 10.1111/febs.16565

Literature reviews are valuable resources for the scientific community. With research accelerating at an unprecedented speed in recent years and more and more original papers being published, review articles have become increasingly important as a means to keep up to date with developments in a particular area of research. A good review article provides readers with an in-depth understanding of a field and highlights key gaps and challenges to address with future research. Writing a review article also helps to expand the writer's knowledge of their specialist area and to develop their analytical and communication skills, amongst other benefits. Thus, the importance of building review-writing into a scientific career cannot be overstated. In this instalment of The FEBS Journal's Words of Advice series, I provide detailed guidance on planning and writing an informative and engaging literature review.

© 2022 Federation of European Biochemical Societies.


Purdue Online Writing Lab (Purdue OWL)

Writing a Literature Review


A literature review is a document or section of a document that collects key sources on a topic and discusses those sources in conversation with each other (also called synthesis). The lit review is an important genre in many disciplines, not just literature (i.e., the study of works of literature such as novels and plays). When we say “literature review” or refer to “the literature,” we are talking about the research (scholarship) in a given field. You will often see the terms “the research,” “the scholarship,” and “the literature” used mostly interchangeably.

Where, when, and why would I write a lit review?

There are a number of different situations where you might write a literature review, each with slightly different expectations; different disciplines, too, have field-specific expectations for what a literature review is and does. For instance, in the humanities, authors might include more overt argumentation and interpretation of source material in their literature reviews, whereas in the sciences, authors are more likely to report study designs and results in their literature reviews; these differences reflect these disciplines’ purposes and conventions in scholarship. You should always look at examples from your own discipline and talk to professors or mentors in your field to be sure you understand your discipline’s conventions, for literature reviews as well as for any other genre.

A literature review can be a part of a research paper or scholarly article, usually falling after the introduction and before the research methods section. In these cases, the lit review just needs to cover scholarship that is important to the issue you are writing about; sometimes it will also cover key sources that informed your research methodology.

Lit reviews can also be standalone pieces, either as assignments in a class or as publications. In a class, a lit review may be assigned to help students familiarize themselves with a topic and with scholarship in their field, get an idea of the other researchers working on the topic they’re interested in, find gaps in existing research in order to propose new projects, and/or develop a theoretical framework and methodology for later research. As a publication, a lit review usually is meant to help make other scholars’ lives easier by collecting and summarizing, synthesizing, and analyzing existing research on a topic. This can be especially helpful for students or scholars getting into a new research area, or for directing an entire community of scholars toward questions that have not yet been answered.

What are the parts of a lit review?

Most lit reviews use a basic introduction-body-conclusion structure; if your lit review is part of a larger paper, the introduction and conclusion pieces may be just a few sentences while you focus most of your attention on the body. If your lit review is a standalone piece, the introduction and conclusion take up more space and give you a place to discuss your goals, research methods, and conclusions separately from where you discuss the literature itself.

Introduction:

  • An introductory paragraph that explains what your working topic and thesis are
  • A forecast of key topics or texts that will appear in the review
  • Potentially, a description of how you found sources and how you analyzed them for inclusion and discussion in the review (more often found in published, standalone literature reviews than in lit review sections in an article or research paper)

Body:

  • Summarize and synthesize: Give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: Don’t just paraphrase other researchers – add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: Mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: Use transition words and topic sentences to draw connections, comparisons, and contrasts

Conclusion:

  • Summarize the key findings you have taken from the literature and emphasize their significance
  • Connect it back to your primary research question

How should I organize my lit review?

Lit reviews can take many different organizational patterns depending on what you are trying to accomplish with the review. Here are some examples:

  • Chronological: The simplest approach is to trace the development of the topic over time, which helps familiarize the audience with the topic (for instance if you are introducing something that is not commonly known in your field). If you choose this strategy, be careful to avoid simply listing and summarizing sources in order. Try to analyze the patterns, turning points, and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred (as mentioned previously, this may not be appropriate in your discipline — check with a teacher or mentor if you’re unsure).
  • Thematic: If you have found some recurring central themes that you will continue working with throughout your piece, you can organize your literature review into subsections that address different aspects of the topic. For example, if you are reviewing literature about women and religion, key themes can include the role of women in churches and the religious attitude towards women.
  • Methodological: If your sources were produced in different fields or with different research methods, you can group and compare them by approach, for example:
      • Qualitative versus quantitative research
      • Empirical versus theoretical scholarship
      • Research divided by sociological, historical, or cultural sources
  • Theoretical: In many humanities articles, the literature review is the foundation for the theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts. You can argue for the relevance of a specific theoretical approach or combine various theoretical concepts to create a framework for your research.

What are some strategies or tips I can use while writing my lit review?

Any lit review is only as good as the research it discusses; make sure your sources are well-chosen and your research is thorough. Don’t be afraid to do more research if you discover a new thread as you’re writing. More info on the research process is available in our "Conducting Research" resources.

As you’re doing your research, create an annotated bibliography (see our page on this type of document). Much of the information used in an annotated bibliography can also be used in a literature review, so you’ll be not only partially drafting your lit review as you research, but also developing your sense of the larger conversation going on among scholars, professionals, and any other stakeholders in your topic.

Usually you will need to synthesize research rather than just summarizing it. This means drawing connections between sources to create a picture of the scholarly conversation on a topic over time. Many student writers struggle to synthesize because they feel they don’t have anything to add to the scholars they are citing; here are some strategies to help you:

  • It often helps to remember that the point of these kinds of syntheses is to show your readers how you understand your research, to help them read the rest of your paper.
  • Writing teachers often say synthesis is like hosting a dinner party: imagine all your sources are together in a room, discussing your topic. What are they saying to each other?
  • Look at the in-text citations in each paragraph. Are you citing just one source for each paragraph? This usually indicates summary only. When you have multiple sources cited in a paragraph, you are more likely to be synthesizing them (not always, but often; see the example below).
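For example, compare a summary-only sentence with a synthesized one (the sources named here are invented, purely for illustration). Summary: “Smith (2018) found that tutoring improves retention.” Synthesis: “While Smith (2018) and Lee (2020) both link tutoring to improved retention, they disagree about the mechanism, and Chen (2022) attributes that disagreement to differences in how retention was measured.”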

The most interesting literature reviews are often written as arguments (again, as mentioned at the beginning of the page, this is discipline-specific and doesn’t work for all situations). Often, the literature review is where you can establish your research as filling a particular gap or as relevant in a particular way. You have some chance to do this in your introduction in an article, but the literature review section gives a more extended opportunity to establish the conversation in the way you would like your readers to see it. You can choose the intellectual lineage you would like to be part of and whose definitions matter most to your thinking (mostly humanities-specific, but this goes for sciences as well). In addressing these points, you argue for your place in the conversation, which tends to make the lit review more compelling than a simple reporting of other sources.


How to Write an Article Review (With Examples)


This article was co-authored by Jake Adams, an academic tutor and the owner of Simplifi EDU, a Santa Monica, California-based online tutoring business. Jake holds a BS in International Business and Marketing from Pepperdine University. There are 12 references cited in this article, which can be found at the bottom of the page.

An article review is both a summary and an evaluation of another writer's article. Teachers often assign article reviews to introduce students to the work of experts in the field. Experts are also often asked to review the work of other professionals. Understanding the main points and arguments of the article is essential for an accurate summation. Logical evaluation of the article's main theme, supporting arguments, and implications for further research is an important element of a review. Here are a few guidelines for writing an article review.

Education specialist Alexander Peterman recommends: "In the case of a review, your objective should be to reflect on the effectiveness of what has already been written, rather than writing to inform your audience about a subject."

Article Review 101

  • Read the article very closely, and then take time to reflect on your evaluation. Consider whether the article effectively achieves what it set out to.
  • Write out a full article review by completing your intro, summary, evaluation, and conclusion. Don't forget to add a title, too!
  • Proofread your review for mistakes (like grammar and usage), while also cutting down on needless information.

Preparing to Write Your Review

Step 1: Understand what an article review is.

  • Article reviews present more than just an opinion. You will engage with the text to create a response to the scholarly writer's ideas. You will respond to and use ideas, theories, and research from your studies. Your critique of the article will be based on proof and your own thoughtful reasoning.
  • An article review only responds to the author's research. It typically does not provide any new research. However, if you are correcting misleading or otherwise incorrect points, some new data may be presented.
  • An article review both summarizes and evaluates the article.

Step 2: Think about the organization of the review article.

  • Summarize the article. Focus on the important points, claims, and information.
  • Discuss the positive aspects of the article. Think about what the author does well, good points she makes, and insightful observations.
  • Identify contradictions, gaps, and inconsistencies in the text. Determine if there is enough data or research included to support the author's claims. Find any unanswered questions left in the article.

Step 3: Preview the article.

  • Make note of words or issues you don't understand and questions you have.
  • Look up terms or concepts you are unfamiliar with, so you can fully understand the article. Read about concepts in-depth to make sure you understand their full context.

Step 4: Read the article closely.

  • Pay careful attention to the meaning of the article. Make sure you fully understand the article. The only way to write a good article review is to understand the article.

Step 5: Put the article into your own words.

  • With either method, make an outline of the main points made in the article and the supporting research or arguments. It is strictly a restatement of the main points of the article and does not include your opinions.
  • After putting the article in your own words, decide which parts of the article you want to discuss in your review. You can focus on the theoretical approach, the content, the presentation or interpretation of evidence, or the style. You will always discuss the main issues of the article, but you can sometimes also focus on certain aspects. This comes in handy if you want to focus the review towards the content of a course.
  • Review the summary outline to eliminate unnecessary items. Erase or cross out the less important arguments or supplemental information. Your revised summary can serve as the basis for the summary you provide at the beginning of your review.

Step 6: Write an outline of your evaluation.

  • What does the article set out to do?
  • What is the theoretical framework or assumptions?
  • Are the central concepts clearly defined?
  • How adequate is the evidence?
  • How does the article fit into the literature and field?
  • Does it advance the knowledge of the subject?
  • How clear is the author's writing?

Don’t: include superficial opinions or your personal reaction. Do: pay attention to your biases, so you can overcome them.

Writing the Article Review

Step 1: Come up with...

  • For example, in MLA, a citation may look like: Duvall, John N. "The (Super)Marketplace of Images: Television as Unmediated Mediation in DeLillo's White Noise." Arizona Quarterly 50.3 (1994): 127-53. Print. [9]

Step 3: Identify the article.

  • For example: The article, "Condom use will increase the spread of AIDS," was written by Anthony Zimmerman, a Catholic priest.

Step 4: Write the introduction...

  • Your introduction should only be 10-25% of your review.
  • End the introduction with your thesis. Your thesis should address the above issues. For example: Although the author has some good points, his article is biased and contains some misinterpretation of data from others’ analysis of the effectiveness of the condom.

Step 5: Summarize the article.

  • Use direct quotes from the author sparingly.
  • Review the summary you have written. Read over your summary many times to ensure that your words are an accurate description of the author's article.

Step 6: Write your critique.

  • Support your critique with evidence from the article or other texts.
  • The summary portion is very important for your critique. You must make the author's argument clear in the summary section for your evaluation to make sense.
  • Remember, this is not where you say if you liked the article or not. You are assessing the significance and relevance of the article.
  • Use a topic sentence and supportive arguments for each opinion. For example, you might address a particular strength in the first sentence of the opinion section, followed by several sentences elaborating on the significance of the point.

Step 7: Conclude the article review.

  • This should only be about 10% of your overall essay.
  • For example: This critical review has evaluated the article "Condom use will increase the spread of AIDS" by Anthony Zimmerman. The arguments in the article show the presence of bias, prejudice, argumentative writing without supporting details, and misinformation. These points weaken the author’s arguments and reduce his credibility.

Step 8: Proofread.

  • Make sure you have identified and discussed the 3-4 key issues in the article.


References

  • https://libguides.cmich.edu/writinghelp/articlereview
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4548566/
  • Jake Adams. Academic Tutor & Test Prep Specialist. Expert Interview. 24 July 2020.
  • https://guides.library.queensu.ca/introduction-research/writing/critical
  • https://www.iup.edu/writingcenter/writing-resources/organization-and-structure/creating-an-outline.html
  • https://writing.umn.edu/sws/assets/pdf/quicktips/titles.pdf
  • https://owl.purdue.edu/owl/research_and_citation/mla_style/mla_formatting_and_style_guide/mla_works_cited_periodicals.html
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4548565/
  • https://writingcenter.uconn.edu/wp-content/uploads/sites/593/2014/06/How_to_Summarize_a_Research_Article1.pdf
  • https://www.uis.edu/learning-hub/writing-resources/handouts/learning-hub/how-to-review-a-journal-article
  • https://writingcenter.unc.edu/tips-and-tools/editing-and-proofreading/

About This Article


If you have to write an article review, read through the original article closely, taking notes and highlighting important sections as you read. Next, rewrite the article in your own words, either in a long paragraph or as an outline. Open your article review by citing the article, then write an introduction which states the article’s thesis. Next, summarize the article, followed by your opinion about whether the article was clear, thorough, and useful. Finish with a paragraph that summarizes the main points of the article and your opinions.


Step by Step Guide to Reviewing a Manuscript

When you receive an invitation to peer review, you should be sent a copy of the paper's abstract to help you decide whether you wish to do the review. Try to respond to invitations promptly - it will prevent delays. It is also important at this stage to declare any potential Conflict of Interest.

Overview of the Review Report Format

The structure of the review report varies between journals. Some follow an informal structure, while others have a more formal approach.

" Number your comments!!! " (Jonathon Halbesleben, former Editor of Journal of Occupational and Organizational Psychology)

Informal Structure

Many journals don't provide criteria for reviews beyond asking for your 'analysis of merits'. In this case, you may wish to familiarize yourself with examples of other reviews done for the journal, which the editor should be able to provide or, as you gain experience, rely on your own evolving style.

Formal Structure

Other journals require a more formal approach. Sometimes they will ask you to address specific questions in your review via a questionnaire. Or they might want you to rate the manuscript on various attributes using a scorecard. Often you can't see these until you log in to submit your review. So when you agree to the work, it's worth checking for any journal-specific guidelines and requirements. If there are formal guidelines, let them direct the structure of your review.

In Both Cases

Whether specifically required by the reporting format or not, you should expect to compile comments to authors and possibly confidential ones to editors only.

The First Read-Through

Following the invitation to review, when you'll have received the article abstract, you should already understand the aims, key data and conclusions of the manuscript. If you don't, make a note now that you need to give feedback on how to improve those sections.

The first read-through is a skim-read. It will help you form an initial impression of the paper and get a sense of whether your eventual recommendation will be to accept or reject the paper.

Keep a pen and paper handy when skim-reading.

First Read Considerations

Try to bear in mind the following questions - they'll help you form your overall impression:

  • What is the main question addressed by the research? Is it relevant and interesting?
  • How original is the topic? What does it add to the subject area compared with other published material?
  • Is the paper well written? Is the text clear and easy to read?
  • Are the conclusions consistent with the evidence and arguments presented? Do they address the main question posed?
  • If the author is disagreeing significantly with the current academic consensus, do they have a substantial case? If not, what would be required to make their case credible?
  • If the paper includes tables or figures, what do they add to the paper? Do they aid understanding or are they superfluous?

Spotting Potential Major Flaws

While you should read the whole paper, making the right choice of what to read first can save time by flagging major problems early on.

Editors say, “Specific recommendations for remedying flaws are VERY welcome.”

Examples of possibly major flaws include:

  • Drawing a conclusion that is contradicted by the author's own statistical or qualitative evidence
  • The use of a discredited method
  • Ignoring a process that is known to have a strong influence on the area under study

If experimental design features prominently in the paper, first check that the methodology is sound - if not, this is likely to be a major flaw.

You might examine:

  • The sampling in analytical papers
  • The sufficient use of control experiments
  • The precision of process data
  • The regularity of sampling in time-dependent studies
  • The validity of questions, the use of a detailed methodology and the data analysis being done systematically (in qualitative research)
  • That qualitative research extends beyond the author's opinions, with sufficient descriptive elements and appropriate quotes from interviews or focus groups

Major Flaws in Information

If methodology is less of an issue, it's often a good idea to look at the data tables, figures or images first. Especially in science research, it's all about the information gathered. If there are critical flaws in this, it's very likely the manuscript will need to be rejected. Such issues include:

  • Insufficient data
  • Unclear data tables
  • Contradictory data that either are not self-consistent or disagree with the conclusions
  • Confirmatory data that adds little, if anything, to current understanding - unless strong arguments for such repetition are made

If you find a major problem, note your reasoning and clear supporting evidence (including citations).

Concluding the First Reading

After the initial read and using your notes, including those of any major flaws you found, draft the first two paragraphs of your review - the first summarizing the research question addressed and the second the contribution of the work. If the journal has a prescribed reporting format, this draft will still help you compose your thoughts.

The First Paragraph

This should state the main question addressed by the research and summarize the goals, approaches, and conclusions of the paper. It should:

  • Help the editor properly contextualize the research and add weight to your judgement
  • Show the author what key messages are conveyed to the reader, so they can be sure they are achieving what they set out to do
  • Focus on successful aspects of the paper so the author gets a sense of what they've done well

The Second Paragraph

This should provide a conceptual overview of the contribution of the research. So consider:

  • Is the paper's premise interesting and important?
  • Are the methods used appropriate?
  • Do the data support the conclusions?

After drafting these two paragraphs, you should be in a position to decide whether this manuscript is seriously flawed and should be rejected (see the next section). Or whether it is publishable in principle and merits a detailed, careful read through.

Rejection After the First Reading

Even if you are coming to the opinion that an article has serious flaws, make sure you read the whole paper. This is very important because you may find some really positive aspects that can be communicated to the author. This could help them with future submissions.

A full read-through will also make sure that any initial concerns are indeed correct and fair. After all, you need the context of the whole paper before deciding to reject. If you still intend to recommend rejection, see the section "When recommending rejection."

Before Starting the Second Read-Through

Once the paper has passed your first read and you've decided the article is publishable in principle, one purpose of the second, detailed read-through is to help prepare the manuscript for publication. You may still decide to recommend rejection following a second reading.

" Offer clear suggestions for how the authors can address the concerns raised. In other words, if you're going to raise a problem, provide a solution ." (Jonathon Halbesleben, Editor of Journal of Occupational and Organizational Psychology)

Preparation

To save time and simplify the review:

  • Don't rely solely upon inserting comments on the manuscript document - make separate notes
  • Try to group similar concerns or praise together
  • If using a review program to note directly onto the manuscript, still try grouping the concerns and praise in separate notes - it helps later
  • Note line numbers of text upon which your notes are based - this helps you find items again and also aids those reading your review

Now that you have completed your preparations, you're ready to spend an hour or so reading carefully through the manuscript.

Doing the Second Read-Through

As you're reading through the manuscript for a second time, you'll need to keep in mind the construction of the argument, the clarity of the language, and the content.

With regard to the argument’s construction, you should identify:

  • Any places where the meaning is unclear or ambiguous
  • Any factual errors
  • Any invalid arguments

You may also wish to consider:

  • Does the title properly reflect the subject of the paper?
  • Does the abstract provide an accessible summary of the paper?
  • Do the keywords accurately reflect the content?
  • Is the paper an appropriate length?
  • Are the key messages short, accurate and clear?

Not every submission is well written. Part of your role is to make sure that the text’s meaning is clear.

Editors say, “If a manuscript has many English language and editing issues, please do not try and fix it. If it is too bad, note that in your review and it should be up to the authors to have the manuscript edited.”

If the article is difficult to understand, you should have rejected it already. However, if the language is poor but you understand the core message, see if you can suggest improvements to fix the problem:

  • Are there certain aspects that could be communicated better, such as parts of the discussion?
  • Should the authors consider resubmitting to the same journal after language improvements?
  • Would you consider looking at the paper again once these issues are dealt with?

On Grammar and Punctuation

Your primary role is judging the research content. Don't spend time polishing grammar or spelling. Editors will make sure that the text is at a high standard before publication. However, if you spot grammatical errors that affect clarity of meaning, then it's important to highlight these. Expect to suggest such amendments - it's rare for a manuscript to pass review with no corrections.

A 2010 study of nursing journals found that 79% of recommendations by reviewers were influenced by grammar and writing style (Shattel et al., 2010).

The Second Read-Through: Section by Section Guidance

1. The Introduction

A well-written introduction:

  • Sets out the argument
  • Summarizes recent research related to the topic
  • Highlights gaps in current understanding or conflicts in current knowledge
  • Establishes the originality of the research aims by demonstrating the need for investigations in the topic area
  • Gives a clear idea of the target readership, why the research was carried out and the novelty and topicality of the manuscript

Originality and Topicality

Originality and topicality can only be established in the light of recent authoritative research. For example, it's impossible to argue that there is a conflict in current understanding by referencing articles that are 10 years old.

Authors may make the case that a topic hasn't been investigated in several years and that new research is required. This point is only valid if researchers can point to recent developments in data gathering techniques or to research in indirectly related fields that suggest the topic needs revisiting. Clearly, authors can only do this by referencing recent literature. Obviously, where older research is seminal or where aspects of the methodology rely upon it, then it is perfectly appropriate for authors to cite some older papers.

Editors say, “Is the report providing new information; is it novel or just confirmatory of well-known outcomes?”

It's common for the introduction to end by stating the research aims. By this point you should already have a good impression of them - if the explicit aims come as a surprise, then the introduction needs improvement.

2. Materials and Methods

Academic research should be replicable, repeatable and robust - and follow best practice.

Replicable Research

This makes sufficient use of:

  • Control experiments
  • Repeated analyses
  • Repeated experiments

These are used to make sure observed trends are not due to chance and that the same experiment could be repeated by other researchers - and result in the same outcome. Statistical analyses will not be sound if methods are not replicable. Where research is not replicable, the paper should be recommended for rejection.

Repeatable Methods

These give enough detail so that other researchers are able to carry out the same research. For example, equipment used or sampling methods should all be described in detail so that others could follow the same steps. Where methods are not detailed enough, it's usual to ask for the methods section to be revised.

Robust Research

This has enough data points to make sure the data are reliable. If there are insufficient data, it might be appropriate to recommend revision. You should also consider whether there is any in-built bias not nullified by the control experiments.

Best Practice

During these checks you should keep in mind best practice:

  • Standard guidelines were followed (e.g. the CONSORT Statement for reporting randomized trials)
  • The health and safety of all participants in the study was not compromised
  • Ethical standards were maintained

If the research fails to reach relevant best practice standards, it's usual to recommend rejection. What's more, you don't then need to read any further.

3. Results and Discussion

This section should tell a coherent story - What happened? What was discovered or confirmed?

Certain patterns of good reporting need to be followed by the author:

  • They should start by describing in simple terms what the data show
  • They should make reference to statistical analyses, such as significance or goodness of fit
  • Once described, they should evaluate the trends observed and explain the significance of the results to wider understanding. This can only be done by referencing published research
  • The outcome should be a critical analysis of the data collected

Discussion should always, at some point, gather all the information together into a single whole. Authors should describe and discuss the overall story formed. If there are gaps or inconsistencies in the story, they should address these and suggest ways future research might confirm the findings or take the research forward.

4. Conclusions

This section is usually no more than a few paragraphs and may be presented as part of the results and discussion, or in a separate section. The conclusions should reflect upon the aims - whether they were achieved or not - and, just like the aims, should not be surprising. If the conclusions are not evidence-based, it's appropriate to ask for them to be re-written.

5. Information Gathered: Images, Graphs and Data Tables

If you find yourself looking at a piece of information from which you cannot discern a story, then you should ask for improvements in presentation. This could be an issue with titles, labels, statistical notation or image quality.

Where information is clear, you should check that:

  • The results seem plausible, in case there is an error in data gathering
  • The trends you can see support the paper's discussion and conclusions
  • There are sufficient data. For example, in studies carried out over time are there sufficient data points to support the trends described by the author?

You should also check whether images have been edited or manipulated to emphasize the story they tell. This may be appropriate but only if authors report on how the image has been edited (e.g. by highlighting certain parts of an image). Where you feel that an image has been edited or manipulated without explanation, you should highlight this in a confidential comment to the editor in your report.

6. List of References

You will need to check referencing for accuracy, adequacy and balance.

Where a cited article is central to the author's argument, you should check the accuracy and format of the reference - and bear in mind different subject areas may use citations differently. Otherwise, it's the editor’s role to exhaustively check the reference section for accuracy and format.

You should consider if the referencing is adequate:

  • Are important parts of the argument poorly supported?
  • Are there published studies that show similar or dissimilar trends that should be discussed?
  • If a manuscript only uses half the citations typical in its field, this may be an indicator that referencing should be improved - but don't be guided solely by quantity
  • References should be relevant, recent and readily retrievable

Check for a well-balanced list of references that is:

  • Helpful to the reader
  • Fair to competing authors
  • Not over-reliant on self-citation
  • Gives due recognition to the initial discoveries and related work that led to the work under assessment

You should be able to evaluate whether the article meets the criteria for balanced referencing without looking up every reference.

7. Plagiarism

By now you will have a deep understanding of the paper's content - and you may have some concerns about plagiarism.

Identified Concern

If you find - or already knew of - a very similar paper, this may be because the author overlooked it in their own literature search. Or it may be because it is very recent or published in a journal slightly outside their usual field.

You may feel you can advise the author how to emphasize the novel aspects of their own study, so as to better differentiate it from similar research. If so, you may ask the author to discuss their aims and results, or modify their conclusions, in light of the similar article. Of course, the research similarities may be so great that they render the work unoriginal and you have no choice but to recommend rejection.

"It's very helpful when a reviewer can point out recent similar publications on the same topic by other groups, or that the authors have already published some data elsewhere ." (Editor feedback)

Suspected Concern

If you suspect plagiarism, including self-plagiarism, but cannot recall or locate exactly what is being plagiarized, notify the editor of your suspicion and ask for guidance.

Most editors have access to software that can check for plagiarism.

Editors are not out to police every paper, but when plagiarism is discovered during peer review it can be properly addressed ahead of publication. If plagiarism is discovered only after publication, the consequences are worse for both authors and readers, because a retraction may be necessary.

For detailed guidelines see COPE's Ethical guidelines for reviewers and Wiley's Best Practice Guidelines on Publishing Ethics.

8. Search Engine Optimization (SEO)

After the detailed read-through, you will be in a position to advise whether the title, abstract and key words are optimized for search purposes. In order to be effective, good SEO terms will reflect the aims of the research.

A clear title and abstract will improve the paper's search engine rankings and will influence whether the user finds and then decides to navigate to the main article. The title should contain the relevant SEO terms early on. This has a major effect on the impact of a paper, since it helps it appear in search results. A poor abstract can then lose the reader's interest and undo the benefit of an effective title - whilst the paper's abstract may appear in search results, the potential reader may go no further.

So ask yourself, while the abstract may have seemed adequate during earlier checks, does it:

  • Do justice to the manuscript in this context?
  • Highlight important findings sufficiently?
  • Present the most interesting data?

Editors say, “Does the Abstract highlight the important findings of the study?”

How to Structure Your Report

If there is a formal report format, remember to follow it. This will often comprise a range of questions followed by comment sections. Try to answer all the questions. They are there because the editor felt that they are important. If you're following an informal report format you could structure your report in three sections: summary, major issues, minor issues (a skeleton example follows these lists).

Summary

  • Give positive feedback first. Authors are more likely to read your review if you do so. But don't overdo it if you will be recommending rejection
  • Briefly summarize what the paper is about and what the findings are
  • Try to put the findings of the paper into the context of the existing literature and current knowledge
  • Indicate the significance of the work and if it is novel or mainly confirmatory
  • Indicate the work's strengths, its quality and completeness
  • State any major flaws or weaknesses and note any special considerations - for example, if previously held theories are being overlooked

Major Issues

  • Are there any major flaws? State what they are and how severely they affect the paper
  • Has similar work already been published without the authors acknowledging this?
  • Are the authors presenting findings that challenge current thinking? Is the evidence they present strong enough to prove their case? Have they cited all the relevant work that would contradict their thinking and addressed it appropriately?
  • If major revisions are required, try to indicate clearly what they are
  • Are there any major presentational problems? Are figures & tables, language and manuscript structure all clear enough for you to accurately assess the work?
  • Are there any ethical issues? If you are unsure, it may be better to disclose these in the confidential comments section

Minor Issues

  • Are there places where meaning is ambiguous? How can this be corrected?
  • Are the correct references cited? If not, which should be cited instead/also? Are citations excessive, limited, or biased?
  • Are there any factual, numerical or unit errors? If so, what are they?
  • Are all tables and figures appropriate, sufficient, and correctly labelled? If not, say which are not

Your review should ultimately help the author improve their article. So be polite, honest and clear. You should also try to be objective and constructive, not subjective and destructive.

You should also:

  • Write clearly, so that you can be understood by people whose first language is not English
  • Avoid complex or unusual words, especially ones that would even confuse native speakers
  • Number your points and refer to page and line numbers in the manuscript when making specific comments
  • If you have been asked to only comment on specific parts or aspects of the manuscript, you should indicate clearly which these are
  • Treat the author's work the way you would like your own to be treated

Most journals give reviewers the option to provide some confidential comments to editors. Often this is where editors will want reviewers to state their recommendation - see the next section - but otherwise this area is best reserved for communicating malpractice such as suspected plagiarism, fraud, unattributed work, unethical procedures, duplicate publication, bias or other conflicts of interest.

However, this doesn't give reviewers permission to 'backstab' the author. Authors can't see this feedback and are unable to give their side of the story unless the editor asks them to. So in the spirit of fairness, write comments to editors as though authors might read them too.

Reviewers should check the preferences of individual journals as to where they want review decisions to be stated. In particular, bear in mind that some journals will not want the recommendation included in any comments to authors, as this can cause editors difficulty later - see Section 11 for more advice about working with editors.

You will normally be asked to indicate your recommendation (e.g. accept, reject, revise and resubmit, etc.) from a fixed-choice list and then to enter your comments into a separate text box.

Recommending Acceptance

If you're recommending acceptance, give details outlining why, and note any areas that could still be improved. Don't just give a short, cursory remark such as 'great, accept'. See Improving the Manuscript.

Recommending Revision

Where improvements are needed, a recommendation for major or minor revision is typical. You may also choose to state whether you opt in or out of reviewing the revised manuscript. If recommending revision, state the specific changes you feel need to be made, so the author can reply to each point in turn.

Some journals offer the option to recommend rejection with the possibility of resubmission – this is most relevant where substantial, major revision is necessary.

What can reviewers do to help? "Be clear in their comments to the author (or editor) which points are absolutely critical if the paper is given an opportunity for revision." (Jonathon Halbesleben, Editor of Journal of Occupational and Organizational Psychology)

Recommending Rejection

If recommending rejection or major revision, state this clearly in your review (and see the next section, 'When recommending rejection').

Where a manuscript has serious flaws, there is no need to spend time polishing your drafted review or giving detailed advice on presentation; focus on the fundamental problems.

Editors say, " If a reviewer suggests a rejection, but her/his comments are not detailed or helpful, it does not help the editor in making a decision ."

In your recommendations for the author, you should:

  • Give constructive feedback describing ways that they could improve the research
  • Keep the focus on the research and not the author. This is an extremely important part of your job as a reviewer
  • Avoid pairing critical confidential comments to the editor with politeness and encouragement to the author - a mismatch means the author may not understand why their manuscript has been rejected, gets no feedback on how to improve their research, and may be prompted to appeal

Remember to give constructive criticism even if recommending rejection. This helps developing researchers improve their work and explains to the editor why you felt the manuscript should not be published.

" When the comments seem really positive, but the recommendation is rejection…it puts the editor in a tough position of having to reject a paper when the comments make it sound like a great paper ." (Jonathon Halbesleben, Editor of Journal of Occupational and Organizational Psychology)

Visit our Wiley Author Learning and Training Channel for expert advice on peer review.

Watch the video, Ethical considerations of Peer Review


Peer review guidance: a primer for researchers

Olena Zimba

1 Department of Internal Medicine No. 2, Danylo Halytsky Lviv National Medical University, Lviv, Ukraine

Armen Yuri Gasparyan

2 Departments of Rheumatology and Research and Development, Dudley Group NHS Foundation Trust (Teaching Trust of the University of Birmingham, UK), Russells Hall Hospital, Dudley, West Midlands, UK

The peer review process is essential for quality checks and validation of journal submissions. Although it has some limitations, including manipulations and biased and unfair evaluations, there is currently no alternative to the system. Several peer review models are now practised, with public review being the most appropriate in view of the open science movement. Constructive reviewer comments are increasingly recognised as scholarly contributions which should meet certain ethics and reporting standards. The Publons platform, which is now part of the Web of Science Group (Clarivate Analytics), credits validated reviewer accomplishments and serves as an instrument for selecting and promoting the best reviewers. All authors with relevant profiles may act as reviewers. Adherence to research reporting standards and access to bibliographic databases are recommended to help reviewers draft evidence-based and detailed comments.

Introduction

The peer review process is essential for evaluating the quality of scholarly works, suggesting corrections, and learning from other authors’ mistakes. The principles of peer review are largely based on professionalism, eloquence, and a collegial attitude. As such, reviewing journal submissions is a privilege and responsibility for ‘elite’ research fellows who contribute to their professional societies and add value by voluntarily sharing their knowledge and experience.

Since the launch of the first academic periodicals back in 1665, peer review has been mandatory for validating scientific facts, selecting influential works, and minimizing chances of publishing erroneous research reports [ 1 ]. Over the past centuries, peer review models have evolved from single-handed editorial evaluations to collegial discussions, with numerous strengths and inevitable limitations of each practised model [ 2 , 3 ]. With the multiplication of periodicals and editorial management platforms, the reviewer pool has expanded and internationalized. Various sets of rules have been proposed to select skilled reviewers and employ globally acceptable tools and language styles [ 4 , 5 ].

In the era of digitization, the ethical dimension of peer review has emerged, necessitating the involvement of peers with a full understanding of research and publication ethics to exclude unethical articles from the pool of evidence-based research and reviews [ 6 ]. In the time of the COVID-19 pandemic, some, if not most, journals face the unavailability of skilled reviewers, resulting in an unprecedented increase in articles without a history of peer review or with surprisingly short evaluation timelines [ 7 ].

Editorial recommendations and the best reviewers

Guidance on peer review and selection of reviewers is currently available in the recommendations of global editorial associations, which can be consulted by journal editors for updating their ethics statements and by research managers for crediting the evaluators. The International Committee of Medical Journal Editors (ICMJE) qualifies peer review as a continuation of the scientific process that should involve experts who respond to reviewer invitations in a timely manner, submit unbiased and constructive comments, and maintain confidentiality [ 8 ].

The reviewer roles and responsibilities are listed in the updated recommendations of the Council of Science Editors (CSE) [ 9 ] where ethical conduct is viewed as a premise of the quality evaluations. The Committee on Publication Ethics (COPE) further emphasizes editorial strategies that ensure transparent and unbiased reviewer evaluations by trained professionals [ 10 ]. Finally, the World Association of Medical Editors (WAME) prioritizes selecting the best reviewers with validated profiles to avoid substandard or fraudulent reviewer comments [ 11 ]. Accordingly, the Sarajevo Declaration on Integrity and Visibility of Scholarly Publications encourages reviewers to register with the Open Researcher and Contributor ID (ORCID) platform to validate and publicize their scholarly activities [ 12 ].

Although the best reviewer criteria are not listed in the editorial recommendations, it is apparent that the manuscript evaluators should be active researchers with extensive experience in the subject matter and an impressive list of relevant and recent publications [ 13 ]. All authors embarking on an academic career and publishing articles with active contact details can be involved in the evaluation of others’ scholarly works [ 14 ]. Ideally, the reviewers should be peers of the manuscript authors with equal scholarly ranks and credentials.

However, journal editors may employ schemes that engage junior research fellows as co-reviewers along with their mentors and senior fellows [ 15 ]. Such a scheme is successfully practised within the framework of the Emerging EULAR (European League Against Rheumatism) Network (EMEUNET), where seasoned authors (mentors) train early-career researchers (mentees) in how to evaluate submissions to the top rheumatology journals, and select the best evaluators to become regular contributors to these journals [ 16 ].

Awareness of the EQUATOR Network reporting standards may help reviewers evaluate methodology and suggest related revisions. Statistical skills help reviewers detect basic mistakes and suggest additional analyses. For example, scanning data presentation and revealing mistakes in the reporting of means and standard deviations often prompts re-analyses of distributions and replacement of parametric tests with non-parametric ones [ 17 , 18 ].
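As a minimal, hypothetical sketch of the kind of check implied here (simulated data, not drawn from any cited study): test each group for normality, then choose a parametric or non-parametric comparison accordingly.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    group_a = rng.lognormal(mean=2.0, sigma=0.6, size=40)  # skewed, non-normal data
    group_b = rng.lognormal(mean=2.3, sigma=0.6, size=40)

    # Shapiro-Wilk: a low p-value suggests the data are not normally distributed.
    normal = all(stats.shapiro(g).pvalue > 0.05 for g in (group_a, group_b))

    if normal:
        p = stats.ttest_ind(group_a, group_b).pvalue  # parametric comparison
        print(f"t-test: p = {p:.4f}; report means and standard deviations")
    else:
        p = stats.mannwhitneyu(group_a, group_b).pvalue  # non-parametric comparison
        print(f"Mann-Whitney U: p = {p:.4f}; report medians and interquartile ranges")

Pointing out such a mismatch (for example, means reported for clearly skewed data) gives authors a concrete, actionable revision rather than a vague criticism.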

Constructive reviewer comments

The main goal of peer review is to support authors in their attempt to publish ethically sound and professionally validated works that may attract readers’ attention and positively influence healthcare research and practice. As such, an optimal reviewer comment has to comprehensively examine all parts of the research and review work (Table I). The best reviewers are viewed as contributors who guide authors on how to correct mistakes, discuss the study’s limitations, and highlight its strengths [ 19 ].

Structure of a reviewer comment to be forwarded to authors

Some of the currently practised review models are well positioned to help authors reveal and correct their mistakes at pre- or post-publication stages (Table II). The global move toward open science is particularly instrumental for increasing the quality and transparency of reviewer contributions.

Advantages and disadvantages of common manuscript evaluation models

Since there are no universally acceptable criteria for selecting reviewers and structuring their comments, instructions of all peer-reviewed journals should specify priorities, models, and expected review outcomes [ 20 ]. Monitoring and reporting average peer review timelines is also required to encourage timely evaluations and avoid delays. Depending on journal policies and article types, the first round of peer review may last from a few days to a few weeks. The fast-track review (up to 3 days) is practised by some top journals which process clinical trial reports and other priority items.
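As a small illustrative sketch of the timeline monitoring recommended above (the dates are made up), a journal office could compute its first-round review times from submission and decision dates:

    from datetime import date
    from statistics import mean, median

    # (submission date, first decision date) for a sample of manuscripts
    manuscripts = [
        (date(2021, 1, 4), date(2021, 2, 1)),
        (date(2021, 1, 11), date(2021, 3, 2)),
        (date(2021, 2, 1), date(2021, 2, 15)),
    ]

    days = [(decided - submitted).days for submitted, decided in manuscripts]
    print(f"Mean: {mean(days):.1f} days; median: {median(days)} days")
    # Mean: 30.7 days; median: 28 days

Reporting the median alongside the mean guards against a few unusually slow reviews distorting the headline figure.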

In exceptional cases, reviewer contributions may result in substantive changes, appreciated by authors in the official acknowledgments. In most cases, however, reviewers should avoid engaging in the authors’ research and writing. They should refrain from instructing the authors on additional tests and data collection as these may delay publication of original submissions with conclusive results.

Established publishers often employ advanced editorial management systems that support reviewers by providing instantaneous access to the review instructions, online structured forms, and some bibliographic databases. Such support enables drafting of evidence-based comments that examine the novelty, ethical soundness, and implications of the reviewed manuscripts [ 21 ].

Encouraging reviewers to submit their recommendations on manuscript acceptance/rejection and related editorial tasks is now a common practice. Skilled reviewers may prompt the editors to reject or transfer manuscripts which fall outside the journal scope, perform additional ethics checks, and minimize chances of publishing erroneous and unethical articles. They may also raise concerns over the editorial strategies in their comments to the editors.

Since reviewer and editor roles are distinct, reviewer recommendations are aimed at helping editors, not at replacing their decision-making functions. The final decisions rest with handling editors, who weigh not only reviewer comments but also priorities related to article types and geographic origins, space limitations in certain periods, and the envisaged influence in terms of social media attention and citations. This is why rejections of even flawless manuscripts are likely at early rounds of internal and external evaluations across most peer-reviewed journals.

Reviewers are often requested to comment on language correctness and overall readability of the evaluated manuscripts. Given the wide availability of in-house and external editing services, reviewer comments on language mistakes and typos are categorized as minor. At the same time, non-Anglophone experts’ poor language skills often exclude them from contributing to the peer review in the most influential journals [ 22 ]. Comments should be properly edited to convey messages in positive or neutral tones, express ideas of varying degrees of certainty, and present a logical order of words, sentences, and paragraphs [ 23 , 24 ]. Consulting linguists on communication culture, passing advanced language courses, and honing commenting skills may increase the overall quality and appeal of the reviewer accomplishments [ 5 , 25 ].

Peer reviewer credits

Various crediting mechanisms have been proposed to motivate reviewers and maintain the integrity of science communication [ 26 ]. Annual reviewer acknowledgments are widely practised for naming manuscript evaluators and appreciating their scholarly contributions. Given the need to weigh reviewer contributions, some journal editors distinguish ‘elite’ reviewers with numerous evaluations and award those with timely and outstanding accomplishments [ 27 ]. Such targeted recognition ensures ethical soundness of the peer review and facilitates promotion of the best candidates for grant funding and academic job appointments [ 28 ].

Also, large publishers and learned societies issue certificates of excellence in reviewing which may include Continuing Professional Development (CPD) points [ 29 ]. Finally, an entirely new crediting mechanism is proposed to award bonus points to active reviewers who may collect, transfer, and use these points to discount gold open-access charges within the publisher consortia [ 30 ].

With the launch of Publons (http://publons.com/) and its integration with the Web of Science Group (Clarivate Analytics), reviewer recognition has become a matter of scientific prestige. Reviewers can now freely open their Publons accounts and record their contributions to online journals with Digital Object Identifiers (DOI). Journal editors, in turn, may generate official reviewer acknowledgments and encourage reviewers to forward them to Publons for building up individual reviewer and journal profiles. All published articles maintain e-links to their review records and post-publication promotion on social media, allowing the reviewers to continuously track expert evaluations and comments. A paid-up partnership is also available to journals and publishers for automatically transferring peer-review records to Publons upon mutually acceptable arrangements.

Listing reviewer accomplishments on an individual Publons profile showcases scholarly contributions of the account holder. The reviewer accomplishments placed next to the account holders’ own articles and editorial accomplishments point to the diversity of scholarly contributions. Researchers may establish links between their Publons and ORCID accounts to further benefit from complementary services of both platforms. Publons Academy (https://publons.com/community/academy/) additionally offers an online training course to novice researchers who may improve their reviewing skills under the guidance of experienced mentors and journal editors. Finally, journal editors may conduct searches through the Publons platform to select the best reviewers across academic disciplines.

Peer review ethics

Prior to accepting reviewer invitations, scholars need to weigh a number of factors which may compromise their evaluations. First of all, they should accept reviewer invitations only if they are capable of submitting their comments on time. Peer review timelines depend on article type and vary widely across journals. The rules of transparent publishing necessitate recording manuscript submission and acceptance dates in article footnotes, to inform readers of the evaluation speed and to help investigators in the event of multiple unethical submissions. Timely reviewer accomplishments often enable fast publication of valuable works with positive implications for healthcare. Unjustifiably long peer review, on the contrary, delays dissemination of influential reports and can result in ethical misconduct, such as plagiarism of a manuscript under evaluation [ 31 ].

In the times of proliferation of open-access journals relying on article processing charges, unjustifiably short review may point to the absence of quality evaluation and apparently ‘predatory’ publishing practice [ 32 , 33 ]. Authors when choosing their target journals should take into account the peer review strategy and associated timelines to avoid substandard periodicals.

Reviewers’ primary interests (unbiased evaluation of manuscripts) may come into conflict with their secondary interests (promotion of their own scholarly works), necessitating disclosures by filling in the related parts of the online reviewer form or uploading the ICMJE conflict of interest forms. Biomedical reviewers, who are directly or indirectly supported by the pharmaceutical industry, may encounter conflicts while evaluating drug research. Such instances require explicit disclosure of the conflicts and/or rejection of the reviewer invitation.

Journal editors are obliged to employ mechanisms for disclosing reviewer financial and non-financial conflicts of interest to avoid processing of biased comments [ 34 ]. They should also cautiously process negative comments that oppose dissenting, but still valid, scientific ideas [ 35 ]. Reviewer conflicts that stem from academic activities in a competitive environment may introduce biases, resulting in unfair rejections of manuscripts with opposing concepts, results, and interpretations. The same academic conflicts may lead to coercive reviewer self-citations, forcing authors to incorporate suggested reviewer references or face negative feedback and an unjustified rejection [ 36 ]. Notably, several publisher investigations have demonstrated a global scale of such misconduct, involving some highly cited researchers and top scientific journals [ 37 ].

Fake peer review, an extreme example of conflict of interest, is another form of misconduct that has surfaced in the time of mass proliferation of gold open-access journals and publication of articles without quality checks [ 38 ]. Fake reviews are generated by manipulating authors and commercial editing agencies with full access to their own manuscripts and peer review evaluations in the journal editorial management systems. The sole aim of these reviews is to break the manuscript evaluation process and to pave the way for publication of pseudoscientific articles. Authors of these articles are often supported by funds intended for the growth of science in non-Anglophone countries [ 39 ]. Iranian and Chinese authors are often caught submitting fake reviews, resulting in mass retractions by large publishers [ 38 ]. Several suggestions have been made to overcome this issue, with assigning independent reviewers and requesting their ORCID IDs viewed as the most practical options [ 40 ].

Conclusions

The peer review process is regulated by publishers and editors, enforcing updated global editorial recommendations. Selecting the best reviewers and providing authors with constructive comments may improve the quality of published articles. Reviewers are selected in view of their professional backgrounds and skills in research reporting, statistics, ethics, and language. Quality reviewer comments attract superior submissions and add to the journal’s scientific prestige [ 41 ].

In the era of digitization and open science, various online tools and platforms are available to upgrade the peer review and credit experts for their scholarly contributions. With its links to the ORCID platform and social media channels, Publons now offers the optimal model for crediting and keeping track of the best and most active reviewers. Publons Academy additionally offers online training for novice researchers who may benefit from the experience of their mentoring editors. Overall, reviewer training in how to evaluate journal submissions and avoid related misconduct is an important process, which some indexed journals are experimenting with [ 42 ].

The timelines and rigour of the peer review may change during the current pandemic. However, journal editors should mobilize their resources to avoid publication of unchecked and misleading reports. Additional efforts are required to monitor published contents and encourage readers to post their comments on publishers’ online platforms (blogs) and other social media channels [ 43 , 44 ].

The authors declare no conflict of interest.


Research ethics and artificial intelligence for global health: perspectives from the global forum on bioethics in research

  • James Shaw 1,13,
  • Joseph Ali 2,3,
  • Caesar A. Atuire 4,5,
  • Phaik Yeong Cheah 6,
  • Armando Guio Español 7,
  • Judy Wawira Gichoya 8,
  • Adrienne Hunt 9,
  • Daudi Jjingo 10,
  • Katherine Littler 9,
  • Daniela Paolotti 11 &
  • Effy Vayena 12

BMC Medical Ethics, volume 25, Article number: 46 (2024)


The ethical governance of Artificial Intelligence (AI) in health care and public health continues to be an urgent issue for attention in policy, research, and practice. In this paper we report on central themes related to challenges and strategies for promoting ethics in research involving AI in global health, arising from the Global Forum on Bioethics in Research (GFBR), held in Cape Town, South Africa in November 2022.

The GFBR is an annual meeting organized by the World Health Organization and supported by the Wellcome Trust, the US National Institutes of Health, the UK Medical Research Council (MRC) and the South African MRC. The forum aims to bring together ethicists, researchers, policymakers, research ethics committee members and other actors to engage with challenges and opportunities specifically related to research ethics. In 2022 the focus of the GFBR was “Ethics of AI in Global Health Research”. The forum consisted of 6 case study presentations, 16 governance presentations, and a series of small group and large group discussions. A total of 87 participants attended the forum from 31 countries around the world, representing disciplines of bioethics, AI, health policy, health professional practice, research funding, and bioinformatics. In this paper, we highlight central insights arising from GFBR 2022.

We describe the significance of four thematic insights arising from the forum: (1) Appropriateness of building AI, (2) Transferability of AI systems, (3) Accountability for AI decision-making and outcomes, and (4) Individual consent. We then describe eight recommendations for governance leaders to enhance the ethical governance of AI in global health research, addressing issues such as AI impact assessments, environmental values, and fair partnerships.

Conclusions

The 2022 Global Forum on Bioethics in Research illustrated several innovations in ethical governance of AI for global health research, as well as several areas in need of urgent attention internationally. This summary is intended to inform international and domestic efforts to strengthen research ethics and support the evolution of governance leadership to meet the demands of AI in global health research.


Introduction

The ethical governance of Artificial Intelligence (AI) in health care and public health continues to be an urgent issue for attention in policy, research, and practice [ 1 , 2 , 3 ]. Beyond the growing number of AI applications being implemented in health care, capabilities of AI models such as Large Language Models (LLMs) expand the potential reach and significance of AI technologies across health-related fields [ 4 , 5 ]. Discussion about effective, ethical governance of AI technologies has spanned a range of governance approaches, including government regulation, organizational decision-making, professional self-regulation, and research ethics review [ 6 , 7 , 8 ]. In this paper, we report on central themes related to challenges and strategies for promoting ethics in research involving AI in global health, arising from the Global Forum on Bioethics in Research (GFBR), held in Cape Town, South Africa in November 2022. Although applications of AI for research, health care, and public health are diverse and advancing rapidly, the insights generated at the forum remain highly relevant from a global health perspective. After summarizing important context for work in this domain, we highlight categories of ethical issues emphasized at the forum for attention from a research ethics perspective internationally. We then outline strategies proposed for research, innovation, and governance to support more ethical AI for global health.

In this paper, we adopt the definition of AI systems provided by the Organization for Economic Cooperation and Development (OECD) as our starting point. Their definition states that an AI system is “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy” [ 9 ]. The conceptualization of an algorithm as helping to constitute an AI system, along with hardware, other elements of software, and a particular context of use, illustrates the wide variety of ways in which AI can be applied. We have found it useful to differentiate applications of AI in research as those classified as “AI systems for discovery” and “AI systems for intervention”. An AI system for discovery is one that is intended to generate new knowledge, for example in drug discovery or public health research in which researchers are seeking potential targets for intervention, innovation, or further research. An AI system for intervention is one that directly contributes to enacting an intervention in a particular context, for example informing decision-making at the point of care or assisting with accuracy in a surgical procedure.

The mandate of the GFBR is to take a broad view of what constitutes research and its regulation in global health, with special attention to bioethics in Low- and Middle- Income Countries. AI as a group of technologies demands such a broad view. AI development for health occurs in a variety of environments, including universities and academic health sciences centers where research ethics review remains an important element of the governance of science and innovation internationally [ 10 , 11 ]. In these settings, research ethics committees (RECs; also known by different names such as Institutional Review Boards or IRBs) make decisions about the ethical appropriateness of projects proposed by researchers and other institutional members, ultimately determining whether a given project is allowed to proceed on ethical grounds [ 12 ].

However, research involving AI for health also takes place in large corporations and smaller scale start-ups, which in some jurisdictions fall outside the scope of research ethics regulation. In the domain of AI, the question of what constitutes research also becomes blurred. For example, is the development of an algorithm itself considered a part of the research process? Or only when that algorithm is tested under the formal constraints of a systematic research methodology? In this paper we take an inclusive view, in which AI development is included in the definition of research activity and within scope for our inquiry, regardless of the setting in which it takes place. This broad perspective characterizes the approach to “research ethics” we take in this paper, extending beyond the work of RECs to include the ethical analysis of the wide range of activities that constitute research as the generation of new knowledge and intervention in the world.

Ethical governance of AI in global health

The ethical governance of AI for global health has been widely discussed in recent years. The World Health Organization (WHO) released its guidelines on ethics and governance of AI for health in 2021, endorsing a set of six ethical principles and exploring the relevance of those principles through a variety of use cases. The WHO guidelines also provided an overview of AI governance, defining governance as covering “a range of steering and rule-making functions of governments and other decision-makers, including international health agencies, for the achievement of national health policy objectives conducive to universal health coverage.” (p. 81) The report usefully provided a series of recommendations related to governance of seven domains pertaining to AI for health: data, benefit sharing, the private sector, the public sector, regulation, policy observatories/model legislation, and global governance. The report acknowledges that much work is yet to be done to advance international cooperation on AI governance, especially related to prioritizing voices from Low- and Middle-Income Countries (LMICs) in global dialogue.

One important point emphasized in the WHO report that reinforces the broader literature on global governance of AI is the distribution of responsibility across a wide range of actors in the AI ecosystem. This is especially important to highlight when focused on research for global health, which is specifically about work that transcends national borders. Alami et al. (2020) discussed the unique risks raised by AI research in global health, ranging from the unavailability of data in many LMICs required to train locally relevant AI models to the capacity of health systems to absorb new AI technologies that demand the use of resources from elsewhere in the system. These observations illustrate the need to identify the unique issues posed by AI research for global health specifically, and the strategies that can be employed by all those implicated in AI governance to promote ethically responsible use of AI in global health research.

RECs and the regulation of research involving AI

RECs represent an important element of the governance of AI for global health research, and thus warrant further commentary as background to our paper. Despite the importance of RECs, foundational questions have been raised about their capabilities to accurately understand and address ethical issues raised by studies involving AI. Rahimzadeh et al. (2023) outlined how RECs in the United States are under-prepared to align with recent federal policy requiring that RECs review data sharing and management plans with attention to the unique ethical issues raised in AI research for health [ 13 ]. Similar research in South Africa identified variability in understanding of existing regulations and ethical issues associated with health-related big data sharing and management among research ethics committee members [ 14 , 15 ]. The effort to address harms accruing to groups or communities as opposed to individuals whose data are included in AI research has also been identified as a unique challenge for RECs [ 16 , 17 ]. Doerr and Meeder (2022) suggested that current regulatory frameworks for research ethics might actually prevent RECs from adequately addressing such issues, as they are deemed out of scope of REC review [ 16 ]. Furthermore, research in the United Kingdom and Canada has suggested that researchers using AI methods for health tend to distinguish between ethical issues and social impact of their research, adopting an overly narrow view of what constitutes ethical issues in their work [ 18 ].

The challenges for RECs in adequately addressing ethical issues in AI research for health care and public health exceed a straightforward survey of ethical considerations. As Ferretti et al. (2021) contend, some capabilities of RECs adequately cover certain issues in AI-based health research, such as the common occurrence of conflicts of interest where researchers who accept funds from commercial technology providers are implicitly incentivized to produce results that align with commercial interests [ 12 ]. However, some features of REC review require reform to adequately meet ethical needs. Ferretti et al. outlined weaknesses of RECs that are longstanding and those that are novel to AI-related projects, proposing a series of directions for development that are regulatory, procedural, and complementary to REC functionality. The work required on a global scale to update the REC function in response to the demands of research involving AI is substantial.

These issues take greater urgency in the context of global health [ 19 ]. Teixeira da Silva (2022) described the global practice of “ethics dumping”, where researchers from high income countries bring ethically contentious practices to RECs in low-income countries as a strategy to gain approval and move projects forward [ 20 ]. Although not yet systematically documented in AI research for health, risk of ethics dumping in AI research is high. Evidence is already emerging of practices of “health data colonialism”, in which AI researchers and developers from large organizations in high-income countries acquire data to build algorithms in LMICs to avoid stricter regulations [ 21 ]. This specific practice is part of a larger collection of practices that characterize health data colonialism, involving the broader exploitation of data and the populations they represent primarily for commercial gain [ 21 , 22 ]. As an additional complication, AI algorithms trained on data from high-income contexts are unlikely to apply in straightforward ways to LMIC settings [ 21 , 23 ]. In the context of global health, there is widespread acknowledgement about the need to not only enhance the knowledge base of REC members about AI-based methods internationally, but to acknowledge the broader shifts required to encourage their capabilities to more fully address these and other ethical issues associated with AI research for health [ 8 ].

Although RECs are an important part of the story of the ethical governance of AI for global health research, they are not the only part. The responsibilities of supra-national entities such as the World Health Organization, national governments, organizational leaders, commercial AI technology providers, health care professionals, and other groups continue to be worked out internationally. In this context of ongoing work, examining issues that demand attention and strategies to address them remains an urgent and valuable task.

The GFBR is an annual meeting organized by the World Health Organization and supported by the Wellcome Trust, the US National Institutes of Health, the UK Medical Research Council (MRC) and the South African MRC. The forum aims to bring together ethicists, researchers, policymakers, REC members and other actors to engage with challenges and opportunities specifically related to research ethics. Each year the GFBR meeting includes a series of case studies and keynotes presented in plenary format to an audience of approximately 100 people who have applied and been competitively selected to attend, along with small-group breakout discussions to advance thinking on related issues. The specific topic of the forum changes each year, with past topics including ethical issues in research with people living with mental health conditions (2021), genome editing (2019), and biobanking/data sharing (2018). The forum is intended to remain grounded in the practical challenges of engaging in research ethics, with special interest in low resource settings from a global health perspective. A post-meeting fellowship scheme is open to all LMIC participants, providing a unique opportunity to apply for funding to further explore and address the ethical challenges that are identified during the meeting.

In 2022, the focus of the GFBR was “Ethics of AI in Global Health Research”. The forum consisted of 6 case study presentations (both short and long form) reporting on specific initiatives related to research ethics and AI for health, and 16 governance presentations (both short and long form) reporting on actual approaches to governing AI in different country settings. A keynote presentation from Professor Effy Vayena addressed the topic of the broader context for AI ethics in a rapidly evolving field. A total of 87 participants attended the forum from 31 countries around the world, representing disciplines of bioethics, AI, health policy, health professional practice, research funding, and bioinformatics. The 2-day forum addressed a wide range of themes. The conference report provides a detailed overview of each of the specific topics addressed while a policy paper outlines the cross-cutting themes (both documents are available at the GFBR website: https://www.gfbr.global/past-meetings/16th-forum-cape-town-south-africa-29-30-november-2022/ ). As opposed to providing a detailed summary in this paper, we aim to briefly highlight central issues raised, solutions proposed, and the challenges facing the research ethics community in the years to come.

In this way, our primary aim in this paper is to present a synthesis of the challenges and opportunities raised at the GFBR meeting and in the planning process, followed by our reflections as a group of authors on their significance for governance leaders in the coming years. We acknowledge that the views represented at the meeting and in our results are a partial representation of the universe of views on this topic; however, the GFBR leadership invested a great deal of resources in convening a deeply diverse and thoughtful group of researchers and practitioners working on themes of bioethics related to AI for global health including those based in LMICs. We contend that it remains rare to convene such a strong group for an extended time and believe that many of the challenges and opportunities raised demand attention for more ethical futures of AI for health. Nonetheless, our results are primarily descriptive and are thus not explicitly grounded in a normative argument. We make effort in the Discussion section to contextualize our results by describing their significance and connecting them to broader efforts to reform global health research and practice.

Uniquely important ethical issues for AI in global health research

Presentations and group dialogue over the course of the forum raised several issues for consideration, and here we describe four overarching themes for the ethical governance of AI in global health research. Brief descriptions of each issue can be found in Table  1 . Reports referred to throughout the paper are available at the GFBR website provided above.

The first overarching thematic issue relates to the appropriateness of building AI technologies in response to health-related challenges in the first place. Case study presentations referred to initiatives where AI technologies were highly appropriate, such as in ear shape biometric identification to more accurately link electronic health care records to individual patients in Zambia (Alinani Simukanga). Although important ethical issues were raised with respect to privacy, trust, and community engagement in this initiative, the AI-based solution was appropriately matched to the challenge of accurately linking electronic records to specific patient identities. In contrast, forum participants raised questions about the appropriateness of an initiative using AI to improve the quality of handwashing practices in an acute care hospital in India (Niyoshi Shah), which led to gaming the algorithm. Overall, participants acknowledged the dangers of techno-solutionism, in which AI researchers and developers treat AI technologies as the most obvious solutions to problems that in actuality demand much more complex strategies to address [ 24 ]. However, forum participants agreed that RECs in different contexts have differing degrees of power to raise issues of the appropriateness of an AI-based intervention.

The second overarching thematic issue related to whether and how AI-based systems transfer from one national health context to another. One central issue raised by a number of case study presentations related to the challenges of validating an algorithm with data collected in a local environment. For example, one case study presentation described a project that would involve the collection of personally identifiable data for sensitive group identities, such as tribe, clan, or religion, in the jurisdictions involved (South Africa, Nigeria, Tanzania, Uganda and the US; Gakii Masunga). Doing so would enable the team to ensure that those groups were adequately represented in the dataset, so that the resulting algorithm was not biased against specific community groups when deployed in that context. However, some members of these communities might desire to be represented in the dataset, whereas others might not, illustrating the need to balance autonomy and inclusivity. It was also widely recognized that collecting these data is an immense challenge, particularly when historically oppressive practices have led to a low-trust environment for international organizations and the technologies they produce. It is important to note that in some countries such as South Africa and Rwanda, it is illegal to collect information such as race and tribal identities, re-emphasizing the importance of cultural awareness and of avoiding "one size fits all" solutions.

The third overarching thematic issue is related to understanding accountabilities for both the impacts of AI technologies and governance decision-making regarding their use. Where global health research involving AI leads to longer-term harms that might fall outside the usual scope of issues considered by a REC, who is to be held accountable, and how? This question was raised as one that requires much further attention, since the law is mixed internationally regarding the mechanisms available to hold researchers, innovators, and their institutions accountable over the longer term. However, it was recognized in breakout group discussion that many jurisdictions are developing strong data protection regimes related specifically to international collaboration for research involving health data. For example, Kenya’s Data Protection Act requires that any internationally funded projects have a local principal investigator who will hold accountability for how data are shared and used [ 25 ]. The issue of research partnerships with commercial entities was raised by many participants in the context of accountability, pointing toward the urgent need for clear principles related to strategies for engagement with commercial technology companies in global health research.

The fourth and final overarching thematic issue raised here is that of consent. The issue of consent was framed by the widely shared recognition that models of individual, explicit consent might not produce a supportive environment for AI innovation that relies on the secondary uses of health-related datasets to build AI algorithms. Given this recognition, approaches such as community oversight of health data uses were suggested as a potential solution. However, the details of implementing such community oversight mechanisms require much further attention, particularly given the unique perspectives on health data in different country settings in global health research. Furthermore, some uses of health data do continue to require consent. One case study of South Africa, Nigeria, Kenya, Ethiopia and Uganda suggested that when health data are shared across borders, individual consent remains necessary when data is transferred from certain countries (Nezerith Cengiz). Broader clarity is necessary to support the ethical governance of health data uses for AI in global health research.

Recommendations for ethical governance of AI in global health research

Dialogue at the forum led to a range of suggestions for promoting ethical conduct of AI research for global health, related to the various roles of actors involved in the governance of AI research broadly defined. The strategies are written for actors we refer to as “governance leaders”, those people distributed throughout the AI for global health research ecosystem who are responsible for ensuring the ethical and socially responsible conduct of global health research involving AI (including researchers themselves). These include RECs, government regulators, health care leaders, health professionals, corporate social accountability officers, and others. Enacting these strategies would bolster the ethical governance of AI for global health more generally, enabling multiple actors to fulfill their roles related to governing research and development activities carried out across multiple organizations, including universities, academic health sciences centers, start-ups, and technology corporations. Specific suggestions are summarized in Table  2 .

First, forum participants suggested that governance leaders including RECs, should remain up to date on recent advances in the regulation of AI for health. Regulation of AI for health advances rapidly and takes on different forms in jurisdictions around the world. RECs play an important role in governance, but only a partial role; it was deemed important for RECs to acknowledge how they fit within a broader governance ecosystem in order to more effectively address the issues within their scope. Not only RECs but organizational leaders responsible for procurement, researchers, and commercial actors should all commit to efforts to remain up to date about the relevant approaches to regulating AI for health care and public health in jurisdictions internationally. In this way, governance can more adequately remain up to date with advances in regulation.

Second, forum participants suggested that governance leaders should focus on ethical governance of health data as a basis for ethical global health AI research. Health data are considered the foundation of AI development, being used to train AI algorithms for various uses [ 26 ]. By focusing on ethical governance of health data generation, sharing, and use, multiple actors will help to build an ethical foundation for AI development among global health researchers.

Third, forum participants believed that governance processes should incorporate AI impact assessments where appropriate. An AI impact assessment is the process of evaluating the potential effects, both positive and negative, of implementing an AI algorithm on individuals, society, and various stakeholders, generally over time frames specified in advance of implementation [ 27 ]. Although not all types of AI research in global health would warrant an AI impact assessment, this is especially relevant for those studies aiming to implement an AI system for intervention into health care or public health. Organizations such as RECs can use AI impact assessments to boost understanding of potential harms at the outset of a research project, encouraging researchers to more deeply consider potential harms in the development of their study.

Fourth, forum participants suggested that governance decisions should incorporate the use of environmental impact assessments, or at least the incorporation of environment values when assessing the potential impact of an AI system. An environmental impact assessment involves evaluating and anticipating the potential environmental effects of a proposed project to inform ethical decision-making that supports sustainability [ 28 ]. Although a relatively new consideration in research ethics conversations [ 29 ], the environmental impact of building technologies is a crucial consideration for the public health commitment to environmental sustainability. Governance leaders can use environmental impact assessments to boost understanding of potential environmental harms linked to AI research projects in global health over both the shorter and longer terms.

Fifth, forum participants suggested that governance leaders should require stronger transparency in the development of AI algorithms in global health research. Transparency was considered essential in the design and development of AI algorithms for global health to ensure ethical and accountable decision-making throughout the process. Furthermore, whether and how researchers have considered the unique contexts into which such algorithms may be deployed can be surfaced through stronger transparency, for example in describing what primary considerations were made at the outset of the project and which stakeholders were consulted along the way. Sharing information about data provenance and methods used in AI development will also enhance the trustworthiness of the AI-based research process.

Sixth, forum participants suggested that governance leaders can encourage or require community engagement at various points throughout an AI project. It was considered that engaging patients and communities is crucial in AI algorithm development to ensure that the technology aligns with community needs and values. However, participants acknowledged that this is not a straightforward process. Effective community engagement requires lengthy commitments to meeting with and hearing from diverse communities in a given setting, and demands a particular set of skills in communication and dialogue that are not possessed by all researchers. Encouraging AI researchers to begin this process early and build long-term partnerships with community members is a promising strategy to deepen community engagement in AI research for global health. One notable recommendation was that research funders have an opportunity to incentivize and enable community engagement with funds dedicated to these activities in AI research in global health.

Seventh, forum participants suggested that governance leaders can encourage researchers to build strong, fair partnerships between institutions and individuals across country settings. In a context of longstanding imbalances in geopolitical and economic power, fair partnerships in global health demand a priori commitments to share benefits related to advances in medical technologies, knowledge, and financial gains. Although enforcement of this point might be beyond the remit of RECs, commentary will encourage researchers to consider stronger, fairer partnerships in global health in the longer term.

Eighth, it became evident that it is necessary to explore new forms of regulatory experimentation given the complexity of regulating a technology of this nature. In addition, the health sector has a series of particularities that make it especially complicated to generate rules that have not been previously tested. Several participants highlighted the desire to promote spaces for experimentation such as regulatory sandboxes or innovation hubs in health. These spaces can have several benefits for addressing issues surrounding the regulation of AI in the health sector, such as: (i) increasing the capacities and knowledge of health authorities about this technology; (ii) identifying the major problems surrounding AI regulation in the health sector; (iii) establishing possibilities for exchange and learning with other authorities; (iv) promoting innovation and entrepreneurship in AI in health; and (v) identifying the need to regulate AI in this sector and update other existing regulations.

Ninth and finally, forum participants believed that the capabilities of governance leaders need to evolve to better incorporate expertise related to AI in ways that make sense within a given jurisdiction. With respect to RECs, for example, it might not make sense for every REC to recruit a member with expertise in AI methods. Rather, it will make more sense in some jurisdictions to consult with members of the scientific community with expertise in AI when research protocols are submitted that demand such expertise. Furthermore, RECs and other approaches to research governance in jurisdictions around the world will need to evolve in order to adopt the suggestions outlined above, developing processes that apply specifically to the ethical governance of research using AI methods in global health.

Discussion

Research involving the development and implementation of AI technologies continues to grow in global health, posing important challenges for ethical governance of AI in global health research around the world. In this paper we have summarized insights from the 2022 GFBR, focused specifically on issues in research ethics related to AI for global health research. We summarized four thematic challenges for governance related to AI in global health research and nine suggestions arising from presentations and dialogue at the forum. In this brief discussion section, we present an overarching observation about power imbalances that frames efforts to evolve the role of governance in global health research, and then outline two important opportunity areas as the field develops to meet the challenges of AI in global health research.

Dialogue about power is not unfamiliar in global health, especially given recent contributions exploring what it would mean to de-colonize global health research, funding, and practice [30, 31]. Discussions of research ethics applied to AI research in global health contexts are deeply infused with power imbalances. The existing context of global health is one in which high-income countries primarily located in the “Global North” charitably invest in projects taking place primarily in the “Global South” while recouping knowledge, financial, and reputational benefits [32]. With respect to AI development in particular, recent examples of digital colonialism frame dialogue about global partnerships, drawing attention to the role of large commercial entities and global financial capitalism in global health research [21, 22]. Furthermore, the power of governance organizations such as RECs to intervene in the process of AI research in global health varies widely around the world, depending on the authorities assigned to them by domestic research governance policies. These observations frame the challenges outlined in our paper, highlighting the difficulties associated with making meaningful change in this field.

Despite these overarching challenges of the global health research context, there are clear strategies for progress in this domain. Firstly, AI innovation is rapidly evolving, which means approaches to the governance of AI for health are rapidly evolving too. Such rapid evolution presents an important opportunity for governance leaders to clarify their vision and influence over AI innovation in global health research, boosting the expertise, structure, and functionality required to meet the demands of research involving AI. Secondly, the research ethics community has strong international ties, linked to a global scholarly community that is committed to sharing insights and best practices around the world. This global community can be leveraged to coordinate efforts to produce advances in the capabilities and authorities of governance leaders to meaningfully govern AI research for global health given the challenges summarized in our paper.

Limitations

Our paper includes two specific limitations that we address explicitly here. First, it is still early in the development of AI applications for use in global health, and as such, the global community has had limited opportunity to learn from experience. For example, far fewer case studies detailing experiences with the actual implementation of an AI technology were submitted to GFBR 2022 for consideration than expected. In contrast, many more governance reports were submitted, detailing the processes and outputs of governance initiatives that anticipate the development and dissemination of AI technologies. This observation represents both a success and a challenge. It is a success that so many groups are engaging in anticipatory governance of AI technologies, exploring evidence of their likely impacts and governing technologies in novel and well-designed ways. It is a challenge that there is little experience to build on regarding the successful implementation of AI technologies in ways that limit harms while promoting innovation. Further experience with AI technologies in global health will contribute to revising and enhancing the challenges and recommendations we have outlined in our paper.

Second, global trends in the politics and economics of AI technologies are evolving rapidly. Although some nations are advancing detailed policy approaches to regulating AI more generally, including for uses in health care and public health, the impacts of corporate investments in AI and political responses related to governance remain to be seen. The excitement around large language models (LLMs) and large multimodal models (LMMs) has drawn deeper attention to the challenges of regulating AI in any general sense, opening dialogue about health sector-specific regulations. The direction of this global dialogue, strongly linked to high-profile corporate actors and multi-national governance institutions, will strongly influence the development of boundaries around what is possible for the ethical governance of AI for global health. We have written this paper at a point when these developments are proceeding rapidly, and as such, we acknowledge that our recommendations will need updating as the broader field evolves.

Ultimately, coordination and collaboration between many stakeholders in the research ethics ecosystem will be necessary to strengthen the ethical governance of AI in global health research. The 2022 GFBR illustrated several innovations in ethical governance of AI for global health research, as well as several areas in need of urgent attention internationally. This summary is intended to inform international and domestic efforts to strengthen research ethics and support the evolution of governance leadership to meet the demands of AI in global health research.

Data availability

All data and materials analyzed to produce this paper are available on the GFBR website: https://www.gfbr.global/past-meetings/16th-forum-cape-town-south-africa-29-30-november-2022/ .

Clark P, Kim J, Aphinyanaphongs Y. Marketing and US Food and Drug Administration clearance of artificial intelligence and machine learning enabled software in and as medical devices: a systematic review. JAMA Netw Open. 2023;6(7):e2321792.


Potnis KC, Ross JS, Aneja S, Gross CP, Richman IB. Artificial intelligence in breast cancer screening: evaluation of FDA device regulation and future recommendations. JAMA Intern Med. 2022;182(12):1306–12.

Siala H, Wang Y. SHIFTing artificial intelligence to be responsible in healthcare: a systematic review. Soc Sci Med. 2022;296:114782.

Yang X, Chen A, PourNejatian N, Shin HC, Smith KE, Parisien C, et al. A large language model for electronic health records. NPJ Digit Med. 2022;5(1):194.

Meskó B, Topol EJ. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. NPJ Digit Med. 2023;6(1):120.

Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell. 2019;1(9):389–99.

Minssen T, Vayena E, Cohen IG. The challenges for regulating medical use of ChatGPT and other large language models. JAMA. 2023.

Ho CWL, Malpani R. Scaling up the research ethics framework for healthcare machine learning as global health ethics and governance. Am J Bioeth. 2022;22(5):36–8.

Yeung K. Recommendation of the council on artificial intelligence (OECD). Int Leg Mater. 2020;59(1):27–34.

Maddox TM, Rumsfeld JS, Payne PR. Questions for artificial intelligence in health care. JAMA. 2019;321(1):31–2.

Dzau VJ, Balatbat CA, Ellaissi WF. Revisiting academic health sciences systems a decade later: discovery to health to population to society. Lancet. 2021;398(10318):2300–4.

Ferretti A, Ienca M, Sheehan M, Blasimme A, Dove ES, Farsides B, et al. Ethics review of big data research: what should stay and what should be reformed? BMC Med Ethics. 2021;22(1):1–13.

Rahimzadeh V, Serpico K, Gelinas L. Institutional review boards need new skills to review data sharing and management plans. Nat Med. 2023;1–3.

Kling S, Singh S, Burgess TL, Nair G. The role of an ethics advisory committee in data science research in sub-Saharan Africa. South Afr J Sci. 2023;119(5–6):1–3.


Cengiz N, Kabanda SM, Esterhuizen TM, Moodley K. Exploring perspectives of research ethics committee members on the governance of big data in sub-Saharan Africa. South Afr J Sci. 2023;119(5–6):1–9.

Doerr M, Meeder S. Big health data research and group harm: the scope of IRB review. Ethics Hum Res. 2022;44(4):34–8.

Ballantyne A, Stewart C. Big data and public-private partnerships in healthcare and research: the application of an ethics framework for big data in health and research. Asian Bioeth Rev. 2019;11(3):315–26.

Samuel G, Chubb J, Derrick G. Boundaries between research ethics and ethical research use in artificial intelligence health research. J Empir Res Hum Res Ethics. 2021;16(3):325–37.

Murphy K, Di Ruggiero E, Upshur R, Willison DJ, Malhotra N, Cai JC, et al. Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics. 2021;22(1):1–17.

Teixeira da Silva JA. Handling ethics dumping and neo-colonial research: from the laboratory to the academic literature. J Bioethical Inq. 2022;19(3):433–43.

Ferryman K. The dangers of data colonialism in precision public health. Glob Policy. 2021;12:90–2.

Couldry N, Mejias UA. Data colonialism: rethinking big data’s relation to the contemporary subject. Telev New Media. 2019;20(4):336–49.

World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. Geneva: World Health Organization; 2021.

Metcalf J, Moss E. Owning ethics: corporate logics, silicon valley, and the institutionalization of ethics. Soc Res Int Q. 2019;86(2):449–76.

Office of the Data Protection Commissioner, Kenya. Data Protection Act [Internet]. 2021 [cited 2023 Sep 30]. Available from: https://www.odpc.go.ke/dpa-act/

Sharon T, Lucivero F. Introduction to the special theme: the expansion of the health data ecosystem – rethinking data ethics and governance. Big Data Soc. 2019;6(2):2053951719852969.

Reisman D, Schultz J, Crawford K, Whittaker M. Algorithmic impact assessments: a practical framework for public agency accountability. New York: AI Now Institute; 2018.

Morgan RK. Environmental impact assessment: the state of the art. Impact Assess Proj Apprais. 2012;30(1):5–14.

Samuel G, Richie C. Reimagining research ethics to include environmental sustainability: a principled approach, including a case study of data-driven health research. J Med Ethics. 2023;49(6):428–33.

Kwete X, Tang K, Chen L, Ren R, Chen Q, Wu Z, et al. Decolonizing global health: what should be the target of this movement and where does it lead us? Glob Health Res Policy. 2022;7(1):3.

Abimbola S, Asthana S, Montenegro C, Guinto RR, Jumbam DT, Louskieter L, et al. Addressing power asymmetries in global health: imperatives in the wake of the COVID-19 pandemic. PLoS Med. 2021;18(4):e1003604.

Benatar S. Politics, power, poverty and global health: systems and frames. Int J Health Policy Manag. 2016;5(10):599.


Acknowledgements

We would like to acknowledge the outstanding contributions of the attendees of GFBR 2022 in Cape Town, South Africa. This paper is authored by members of the GFBR 2022 Planning Committee. We would like to acknowledge additional members Tamra Lysaght, National University of Singapore, and Niresh Bhagwandin, South African Medical Research Council, for their input during the planning stages and as reviewers of the applications to attend the Forum.

This work was supported by Wellcome [222525/Z/21/Z], the US National Institutes of Health, the UK Medical Research Council (part of UK Research and Innovation), and the South African Medical Research Council through funding to the Global Forum on Bioethics in Research.

Author information

Authors and Affiliations

Department of Physical Therapy, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada

James Shaw

Berman Institute of Bioethics, Johns Hopkins University, Baltimore, MD, USA

Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA

Joseph Ali

Department of Philosophy and Classics, University of Ghana, Legon-Accra, Ghana

Caesar A. Atuire

Centre for Tropical Medicine and Global Health, Nuffield Department of Medicine, University of Oxford, Oxford, UK

Mahidol Oxford Tropical Medicine Research Unit, Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand

Phaik Yeong Cheah

Berkman Klein Center, Harvard University, Bogotá, Colombia

Armando Guio Español

Department of Radiology and Informatics, Emory University School of Medicine, Atlanta, GA, USA

Judy Wawira Gichoya

Health Ethics & Governance Unit, Research for Health Department, Science Division, World Health Organization, Geneva, Switzerland

Adrienne Hunt & Katherine Littler

African Center of Excellence in Bioinformatics and Data Intensive Science, Infectious Diseases Institute, Makerere University, Kampala, Uganda

Daudi Jjingo

ISI Foundation, Turin, Italy

Daniela Paolotti

Department of Health Sciences and Technology, ETH Zurich, Zürich, Switzerland

Effy Vayena

Joint Centre for Bioethics, Dalla Lana School of Public Health, University of Toronto, Toronto, Canada

James Shaw

Contributions

JS led the writing, contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. JA, CA, PYC, AE, JWG, AH, DJ, KL, DP, and EV each contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper.

Corresponding author

Correspondence to James Shaw .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Shaw, J., Ali, J., Atuire, C.A. et al. Research ethics and artificial intelligence for global health: perspectives from the global forum on bioethics in research. BMC Med Ethics 25, 46 (2024). https://doi.org/10.1186/s12910-024-01044-w


Received: 31 October 2023

Accepted: 01 April 2024

Published: 18 April 2024

DOI: https://doi.org/10.1186/s12910-024-01044-w


  • Artificial intelligence
  • Machine learning
  • Research ethics
  • Global health



Published on 29.4.2024 in Vol 8 (2024)

Attributes, Quality, and Downloads of Dementia-Related Mobile Apps for Patients With Dementia and Their Caregivers: App Review and Evaluation Study

Original Paper

Authors of this article:

  • Tzu Han Chen1;
  • Shin-Da Lee2, PhD;
  • Wei-Fen Ma3,4, PhD

1 PhD Program for Health Science and Industry, China Medical University, Taichung, Taiwan

2 PhD Program in Healthcare Science, Department of Physical Therapy, China Medical University, Taichung, Taiwan

3 PhD Program in Healthcare Science, School of Nursing, China Medical University, Taichung, Taiwan

4 Department of Nursing, China Medical University Hospital, Taichung, Taiwan

Corresponding Author:

Wei-Fen Ma, PhD

PhD Program in Healthcare Science

School of Nursing

China Medical University

No 100, Sec 1, Jingmao Road

Beitun District

Taichung, 406040

Phone: 886 4 22053366 ext 7107

Fax: 886 4 22053748

Email: [email protected]

Background: The adoption of mobile health (mHealth) apps among older adults (>65 years) is rapidly increasing. However, use of such apps has not been fully effective in supporting people with dementia and their caregivers in their daily lives. This is mainly attributed to the heterogeneous quality of mHealth apps, highlighting the need for improved app quality in the development of dementia-related mHealth apps.

Objective: The aims of this study were (1) to assess the quality and content of mobile apps for dementia management and (2) to investigate the relationship between app quality and download numbers.

Methods: We reviewed dementia-related mHealth apps available in the Google Play Store and Apple App Store in Taiwan. The identified mobile apps were stratified according to a random sampling approach and evaluated by five independent reviewers with sufficient training and proficiency in the field of mHealth and the related health care sector. App quality was scored according to the user version of the Mobile Application Rating Scale. A correlation analysis was then performed between the app quality score and number of app downloads.

Results: Among the 17 apps that were evaluated, only one was specifically designed to provide dementia-related education. The mean score for overall app quality was 3.35 (SD 0.56), with the engagement (mean 3.04, SD 0.82) and information (mean 3.14, SD 0.88) sections of the scale receiving the lowest ratings. Our analyses showed clear differences between the top three– and bottom three–rated apps, particularly in the entertainment and interest subsections of the engagement category, where the ratings ranged from 1.4 to 5. The top three apps had a common feature in their interface, which included memory, attention, focus, calculation, and speed-training games, whereas the apps that received lower ratings were found to be deficient in providing adequate information. Although apps with 5000 or more downloads had significantly higher quality scores (t15=4.087, P<.001), the number of downloads may not be a significant determinant of an app's perceived impact.

Conclusions: The quality of dementia-related mHealth apps is highly variable. In particular, our results show that the top three quality apps performed well in terms of engagement and information, and all received more than 5000 downloads. The findings of this study are limited by the small sample size and the possibility that exceptional apps were overlooked. Publicly available expert ratings of mobile apps could help people with dementia and their caregivers choose a quality mHealth app.

Introduction

The global population is aging at an astonishing rate, which will inevitably result in a significant rise in the prevalence of dementia [1]. Consequently, it has become crucial to identify efficacious strategies to support people affected by dementia and enhance the well-being of their caregivers [2]. In addition, numerous studies have shown that mobile health (mHealth) apps can effectively reduce medical costs and improve quality of life for middle-aged and older adults, especially after COVID-19 [3, 4].

The use of technology among older adults (aged >65 years) has triggered noteworthy transformations in health care provision [5]. One area where technology has proven especially valuable is dementia management, with mHealth apps at the forefront of this field [6]. In addition, the UK government has shown support for the advancement of intelligent assistive technology for individuals with dementia [7], including endorsing the development of mHealth apps specifically tailored to patients with early-stage dementia and their caregivers [8]. These apps are believed to have significant potential in aiding cognitive function and facilitating self-care among those living with dementia [9].

However, the constant emergence of mHealth apps has made it challenging for both patients with dementia and their caregivers to differentiate, evaluate, and use mHealth apps that promote healthy behaviors [ 10 , 11 ]. Therefore, information pertaining to dementia-related mHealth apps and their functionalities should be effectively evaluated and made publicly available.

There is significant heterogeneity in the quality of dementia-related mHealth apps [ 12 ], and most studies assessing app quality have used criteria that focused on general characteristics that could be assessed without downloading or using the app itself [ 13 , 14 ]. Therefore, there is a need for a human-centered, multidimensional measure that includes usability components and relatively more domains to identify high-quality mHealth apps [ 15 ]. Ideally, better features and functionality would drive high-quality apps; however, efforts to identify the differences between high- and low-quality apps have been hampered by scarce research.

Moreover, the factors that contribute to the popularity of specific mHealth apps remain largely unknown, although there is some evidence of a relationship between an app’s star rating and its number of downloads [ 16 ]. However, few studies have evaluated dementia-related mHealth apps to date. Therefore, the specific metrics of app quality that are likely to be associated with a higher number of downloads remain to be identified.

This study had several goals. The first goal was to analyze the content of mobile apps for people with dementia and their caregivers across different categories. The second goal was to assess the quality of individual apps using the user version of the Mobile Application Rating Scale (uMARS). The third objective was to perform a comparative analysis of the highest- and lowest-quality dementia-related mHealth apps, with the broader goal of establishing guidelines to facilitate future app development. Finally, the study aimed to explore the correlation between app quality and downloads. This was done to help identify the gaps in the currently available dementia-related mHealth apps and to provide recommendations for patients with dementia and their caregivers on how to select high-quality apps.

Search Strategy and Inclusion Criteria

Apps were identified from the Taiwan Apple App Store and Google Play Store. Between July 2022 and November 2022, the following search terms (in Mandarin and English) were used in the app stores: dementia, cognitive dysfunction, dementia caregiver, Alzheimer disease, dementia care, cognitive games, and memory games. The screening criteria and process are illustrated in Figure 1 .

Apps were included if they met all of the following inclusion criteria: (1) exists in the Google Play Store for Android mobile devices and the App Store for Apple mobile devices; (2) addresses daily-life topics related to neurocognitive disorders [17]; (3) was purposefully developed with the primary goal of supporting patients or caregivers (including health care workers) with the topic of mild cognitive impairment; (4) can be downloaded and used for free; (5) mainly uses Mandarin, or the English version can be translated into Mandarin and is easy to understand; and (6) has been updated within the last 5 years.

[Figure 1. Screening criteria and selection process for the included apps.]

Stratified Random Sampling of Apps by Average Download Numbers

In November 2022, searches were conducted on the two platforms to find apps that met the above criteria. Of the 407 apps found, 332 were deemed ineligible after screening (Figure 1). The remaining 75 apps were thoroughly screened, resulting in 52 apps included for preliminary evaluation. Since the length of time an app has been available on a platform can affect its number of downloads, we calculated the ratio of download numbers with respect to time on the platform. Additionally, to account for uneven allocation and a lack of continuity in stratification, the apps were sorted according to the ratio of downloads relative to the number of days since the release date on the platform. Thus, the average number of downloads was calculated as the total number of downloads/number of days on platform since the release date. The apps were then ranked according to the average number of downloads in ascending order, and we randomly selected 1 out of every 3 apps for a total of 17 apps that were subject to detailed quality assessment and review.
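To make the sampling arithmetic concrete, the following minimal Python sketch (not the authors' actual code) ranks a hypothetical candidate pool by average daily downloads and draws one app at random from each consecutive block of three; all app names and numbers are invented, and dropping the final partial block is our assumption about how 52 candidates yielded exactly 17 sampled apps.

```python
import random
from dataclasses import dataclass

@dataclass
class App:
    name: str
    total_downloads: int
    days_on_platform: int  # days since the app's release date

    @property
    def avg_daily_downloads(self) -> float:
        # Average downloads = total downloads / days on the platform.
        return self.total_downloads / self.days_on_platform

# Hypothetical pool standing in for the 52 apps that passed screening.
random.seed(0)
pool = [
    App(f"app_{i:02d}", random.randint(100, 500_000), random.randint(30, 2_000))
    for i in range(52)
]

# Rank in ascending order of average daily downloads, then randomly
# select 1 app from each consecutive block of 3.
ranked = sorted(pool, key=lambda a: a.avg_daily_downloads)
blocks = [ranked[i:i + 3] for i in range(0, len(ranked), 3)]
sample = [random.choice(block) for block in blocks if len(block) == 3]
print(len(sample))  # 17 apps retained for detailed review
```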

General Characteristics and Classification

Each app was used by two authors (THC and WFM) independently. According to their content subcategory, the selected apps were categorized into four different types using the guidelines provided by the National Institute for Health and Care Excellence and the National Health Service in the United Kingdom [ 18 , 19 ]. Any conflicts in app classification were adjudicated by discussion between the two reviewers regarding each domain within the extraction form to reach consensus. Details on the main characteristics and comments of the included apps are provided in Multimedia Appendix 1 .

mHealth App Quality Evaluation

The uMARS is a tool for evaluating the quality of mHealth apps across four objective subdomains: engagement, functionality, esthetics, and information. There is also a domain for subjective quality and another for perceived impact. Stoyanov et al [20] developed the uMARS in 2016, and the scale showed excellent internal consistency (Cronbach α=.90). The uMARS items are rated on a 5-point Likert scale ranging from 1 (“strongly disagree”) to 5 (“strongly agree”).

The objective quality score is calculated as the average of the scores of the four subdomains. Engagement assesses whether the app is fun, interesting, customizable, and interactive, and whether it has prompts (eg, sends alerts, messages, reminders, or feedback, or allows sharing). Functionality refers to overall app functioning, ease of learning, navigation, flow logic, and the gestural design of the app. Esthetics covers the graphic design, overall visual appeal, color scheme, and stylistic consistency. Finally, the information domain assesses whether the app contains high-quality information (eg, text, feedback, measures, and references from a credible source). The subjective quality score reflects the rater’s personal interest in the app. The final uMARS subscale includes 6 items designed to assess the perceived impact of the app on the user’s awareness, knowledge, attitude, intention to change, help-seeking, and likelihood to change the target health behavior.
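To make the scoring explicit, here is a minimal sketch of how an objective uMARS score can be computed from per-item Likert ratings. The item values below are invented for illustration; only the item counts per subdomain (5, 4, 3, and 4) follow the published scale.

```python
from statistics import mean

# Hypothetical 1-5 Likert ratings for one app from one reviewer,
# grouped by the four objective uMARS subdomains.
ratings = {
    "engagement":    [3, 4, 3, 2, 3],  # 5 items
    "functionality": [4, 4, 5, 4],     # 4 items
    "esthetics":     [3, 4, 3],        # 3 items
    "information":   [3, 2, 4, 3],     # 4 items
}

# Each subdomain score is the mean of its items; the objective app
# quality score is the mean of the four subdomain scores.
subdomain_scores = {name: mean(items) for name, items in ratings.items()}
objective_quality = mean(subdomain_scores.values())
print(subdomain_scores, round(objective_quality, 2))
```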

Reviewer Recruitment and Selection

Reviewers recruited for this study were required to have a professional background in clinical treatment, the health care industry, or information engineering. Additionally, they were required to have at least 3 years of work experience in elderly health care or health technology–related fields, as well as experience using digital mobile devices. Exclusion criteria included no relevant work experience in elderly health care or health technology–related fields in the past 5 years.

Five reviewers were recruited as an interdisciplinary group of experts. The first reviewer had experience creating a content management system for a dementia management app. The second reviewer was a health informatics researcher with training and expertise in health care technology fields focused on dementia. The third reviewer also had extensive experience in dementia and in the mHealth industry. The fourth reviewer was a psychiatric nurse with experience in caring for older adults along with clinical experience in dementia. The final reviewer was a nurse practitioner who had been providing care for older adults and patients with dementia for over a decade.

Evaluation Process

Each of the apps was assessed by the five reviewers, and the evaluation process was conducted between December 17, 2022, and January 3, 2023. All 17 apps were available on the Android platform; hence, all apps were reviewed while running on the same Android tablet. The experts were blinded to the download numbers, year, and country of development of the apps, and they were not allowed to discuss their assessments with each other, to ensure independence in their ratings. We ensured an equal distribution of app assessments in each round by applying the download-to-time ratio described above. Furthermore, each reviewer allocated a minimum of 30 minutes and a maximum of 1 hour to thoroughly evaluate each included app.

Ethical Considerations

The study received ethical approval from the ethics committee of China Medical Hospital, Taiwan, on November 8, 2022 (approval number: CMUH111-REC2-151) and was conducted according to the guidelines of the Declaration of Helsinki.

Participation in this study was voluntary: the experts were free to decide whether to take part and could discontinue their participation at any point without providing a justification.

This study used legally obtained, publicly available information, and its use was consistent with the information's intended public purpose. Data collected from the research and expert evaluations are stored encrypted on a hard drive. The evaluation process was fully anonymous, with no face-to-face interactions among experts, and the evaluation of the apps was a non-nominal, noninteractive, and noninvasive study. The original data for this research will be preserved for at least 3 years after the execution period, securely locked in the principal investigator's office cabinet.

The clinical trial protocol developed by the research institute stipulates that in the event of adverse reactions resulting in damages, China Medical University Hospital is responsible for providing compensation. Nonetheless, adverse reactions explicitly disclosed in the informed consent form signed by the experts are not eligible for compensation. This study was not covered by liability insurance and the per-expert evaluation cost was US $170.

Statistical Analysis

The number and proportion of information displayed in the apps, including the country of app development, download number, and app type, were summarized using descriptive statistics. The uMARS scores, along with the scores for each domain and subscale, are presented as the mean and SD. The t test was used to examine the association between downloads and each domain of the uMARS. Statistical analyses were conducted using IBM SPSS Statistics v28 (IBM Corp). We considered P <.05 to indicate statistical significance in all analyses.
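As an illustration of the group comparison (the study itself used SPSS), the sketch below runs an independent-samples t test on fabricated placeholder scores; with 6 and 11 apps per download stratum, the pooled test has 6 + 11 − 2 = 15 degrees of freedom, matching the t15 reported in the abstract.

```python
import numpy as np
from scipy.stats import ttest_ind

# Placeholder overall uMARS quality scores for the two download strata.
high_dl = np.array([4.07, 3.92, 3.81, 3.70, 3.58, 3.49])  # >=5000 downloads (n=6)
low_dl = np.array([3.42, 3.35, 3.28, 3.15, 3.05, 2.96,
                   2.88, 2.74, 2.60, 2.48, 2.25])          # <5000 downloads (n=11)

# Independent-samples (pooled-variance) t test; df = 6 + 11 - 2 = 15.
t_stat, p_value = ttest_ind(high_dl, low_dl, equal_var=True)
print(f"t(15) = {t_stat:.3f}, P = {p_value:.4f}")
```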

App Attributes

The apps were primarily developed in the United States, and 11 of the 17 dementia-related mobile apps had been downloaded fewer than 5000 times. Among the 17 apps, 8 were designed to improve clinical outcomes from established treatment pathways through behavior change and to enhance patient adherence and compliance with treatment; 5 were designed as standalone digital game therapeutics; 3 were classified as supporting clinical diagnosis and/or decision-making; and 1 app was primarily designed to provide disease-related education (Table 1).

App Quality Assessment by Interdisciplinary Experts

There was a statistically significant, though modest, level of agreement among the reviewers in their app evaluations, as indicated by a Kendall W statistic of 0.143 (P=.05).
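Kendall's W is computed from within-rater ranks rather than raw scores. As a sketch only (the reported statistic came from SPSS), the function below implements the standard formula W = 12S / (m²(n³ − n)) without the tie correction a full implementation would add; the ratings matrix is invented.

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings: np.ndarray) -> float:
    """Kendall's coefficient of concordance (no tie correction).

    ratings has shape (n_items, m_raters); each column holds one
    rater's scores for all items.
    """
    n, m = ratings.shape
    # Convert each rater's raw scores into within-rater ranks.
    ranks = np.column_stack([rankdata(ratings[:, j]) for j in range(m)])
    rank_sums = ranks.sum(axis=1)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    # W ranges from 0 (no agreement) to 1 (perfect agreement).
    return 12.0 * s / (m**2 * (n**3 - n))

# Illustrative matrix: 17 apps rated by 5 reviewers on a 1-5 scale.
rng = np.random.default_rng(0)
example = rng.integers(1, 6, size=(17, 5)).astype(float)
print(round(kendalls_w(example), 3))
```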

Overall, the mean app quality score was 3.35 (SD 0.56), which ranged from 2.25 (worst-rated app) to 4.07 (best-rated app). For engagement, the mean score was 3.04 (SD 0.81). Furthermore, functionality had the highest mean score of 3.76 (SD 0.38) and showed the smallest variation in minimum and maximum scores among the apps evaluated. In other words, these apps were considered to have relatively high levels of functionality and usability by the interdisciplinary expert reviewers. The esthetic quality of the interface received a mean score of 3.45 (SD 0.65), indicating that visual design elements such as button size, icon clarity, and content arrangement were perceived as being well organized. Additionally, the information domain received a mean score of 3.14 (SD 0.88), suggesting that the presentation and accessibility of information on the screen could be improved. Multimedia Appendix 2 provides the complete details of app quality scores.

Top Three and Bottom Three Performers in App Quality Score

The apps ranked in the top three positions according to app quality scores included Memorado Brain Games, NeuroNation-Brain Training & Brain Games, and Brain Track. The common characteristic among these apps is that their interface consists of training games focused on memory, attention, concentration, calculation, and speed. Conversely, Alz Test, American Caregiver Association, and Dementia and Me ranked in the bottom three; these three apps performed poorly on both engagement and information.

The overall scores for each item for the top three and bottom three apps are provided in Table 2 . The functionality domain received the highest average ratings, particularly for gestural design, navigation, and performance. The largest discrepancies in app quality ratings between the top three and bottom three apps were found in the areas of entertainment and interest, where the scores ranged from 1.4 (worst-rated app) to 5 (best-rated app). Similarly, in the subscale of perceived impact, there was a significant difference in attitude, with ratings ranging from 1.2 (worst-rated app) to 4.2 (best-rated app).

[Table 2. Overall uMARS item scores for the top three– and bottom three–rated apps. N/A: not applicable.]

Association Between Downloads and Quality of Mobile Apps

The Connectivity in Digital Health survey of global mHealth apps reported that 55% of the apps available on the Google Play Store, Apple App Store, Windows Phone Store, Amazon Appstore, and BlackBerry World had fewer than 5000 total downloads [21]. Therefore, the 17 apps included in our study were divided into two subgroups based on the total number of downloads. The first subgroup consisted of 6 apps with more than 5000 total downloads, representing 35.3% of all apps. The mean app quality score for this subgroup was significantly higher than that of the group of apps with fewer than 5000 downloads (Table 3). In addition, apps with more than 5000 downloads generally had higher scores in each domain. However, neither information nor perceived impact scores were significantly associated with the number of downloads (Table 3).

[Table 3. uMARS scores by download subgroup. uMARS: user version of the Mobile Application Rating Scale.]

Principal Findings

According to our results, only one included app primarily focused on delivering dementia-related education. Furthermore, the top three quality apps were all of the same app type, serving as standalone digital game therapeutics. In general, the dementia-related mHealth apps were of moderate quality, with a common characteristic of high functionality. Nonetheless, these apps performed poorly in the engagement and information credibility domains. Although we found a correlation between the number of downloads and app quality, downloads may not be a significant determinant of the information provided or the app's perceived impact.

Comparison With Prior Work

mHealth apps offer a new way to support people with dementia and their caregivers [22]. However, previous studies have pointed out that the scientific literature on the design and evaluation of web- and mobile-based health apps remains scarce [23, 24]. To address this issue, our study directly assessed app types in a practical setting and found a lack of dementia management apps that deliver disease-related education. A randomized controlled trial indicated that mHealth apps can be of educational value to patients by providing structured disease- and treatment-related education; therefore, future app developers could focus on increasing the availability of this app type with educational value [25].

A previous study suggested that research collaboration between health care and software engineering experts could help advance our knowledge of app functionality and effectiveness [ 16 ]. Therefore, we established a panel of experts to obtain accurate results on the quality of currently available dementia-related mHealth apps and further identified their subjective quality and perceived impact. The pattern of high functionality and low information quality is in accordance with the findings of other studies on mobile apps designed for older adults [ 26 ]. Additionally, the inadequacy of credibility was associated with several risks, particularly in the areas of self-diagnosis, prevention, and health promotion [ 27 ].

High-quality mHealth apps offer self-management features, relaxation, recreation, and trustworthy information [ 28 , 29 ]. The uMARS consists of elements of usability and a broader range of areas that are used in the assessment of mHealth apps with superior quality. Notably, a consensus was reached among the reviewers in both the engagement and esthetics domains. However, there was no correlation or similarity among reviewers with respect to assessments on functionality and information of the apps. This discrepancy may be due to the different backgrounds of the reviewers [ 30 ]; health care providers may perceive the app’s information as inadequate, whereas experienced developers of dementia apps may find its functionality to be lacking.

Currently, little is known about why some health apps become popular and others do not, and researchers have demonstrated that the number of downloads on app marketplaces does not correlate with clinical utility or validity for mental health apps [31]. A study from the Netherlands and Portugal identified predictors that might influence the number of downloads for urology apps [32]. However, there is little research in the PubMed database on the predictors of app downloads for dementia-related mHealth apps. Hence, to gain a more comprehensive understanding, the apps were stratified using a random sampling approach. Although mHealth apps differ in theme, our study found a positive relationship between app quality and the number of downloads. Nevertheless, the download number appears to offer only limited guidance for selecting an mHealth app, and future studies should take this aspect into account.

Limitations

This study has several limitations. First, the search for mobile apps was conducted within a limited time frame and focused on apps that had been updated within the last 5 years; as such, the study falls short of establishing causal relationships. In addition, the rapidly expanding and ever-changing mobile app marketplace is difficult to keep pace with; hence, some of the apps evaluated in this study may have since changed, or new alternatives may have been developed. Furthermore, the search for mobile apps was confined to app stores in Taiwan, which may not accurately represent app offerings in other countries due to regional disparities in developers' decisions regarding app availability.

Previous research indicated that the cost associated with using mHealth apps acts as a major obstacle for older individuals when it comes to embracing mobile technologies [9, 33]. Furthermore, a recent study found that 96% of mHealth apps accessible on the Chinese market can be downloaded without cost [34]. Because this study included only free apps, roughly one-quarter of the identified apps were overlooked because they required payment. It is possible that this group of paid apps included some high-quality apps that were unintentionally excluded from consideration.

Additionally, the stratification method represents both less popular and highly downloaded apps, mirroring real-world data [ 21 ]. However, this method resulted in a smaller sample size, which could potentially lead to some superior apps being overlooked by chance. With only 17 apps remaining for evaluation, it is possible that there may not have been sufficient statistical power to establish a significant relationship between app quality and download frequency.

Finally, to ensure a rigorous evaluation of the app content, experts from different fields were recruited to review the apps. However, the limited number of reviewers could have influenced the results of the study, and the degree of agreement may not be strong, given that the reviewers came from different disciplines and that the time each allocated to evaluating an app could have affected the reliability of agreement.

Despite these limitations, this study helps to fill the gap in the evaluation of dementia-related mobile apps. The results can still be used to guide the selection of such apps in Taiwan and possibly other regions with similar app marketplaces, while also highlighting the need for ongoing evaluation of mobile apps for dementia care.

Conclusions

This study set out to gain a better understanding of the characteristics, quality, and downloads of dementia-related mHealth apps. In particular, the top three quality apps were all offered as standalone digital game therapeutics, scored well on both engagement and information quality, and received more than 5000 total downloads. Nevertheless, the findings of our investigation do not offer a comprehensive solution, owing to the small sample size and the potential for overlooking exceptional apps. Consequently, annual reviews and publicly available expert ratings of mobile apps could help people with dementia and their caregivers choose a high-quality mobile app.

Acknowledgments

The authors acknowledge all staff and participants for their contributions to the study. This study was supported by the Ministry of Science and Technology (MOST 110-2314-B-039-041-MY2; NSTC112-2314-B-039-015) and China Medical University (CMU111-MF-108), Taiwan. The funders reviewed the study as part of the grant application but had no further role in study design; data collection, analysis, and interpretation; manuscript preparation; and paper publication.

Data Availability

The study data are identified participant data. The data that support the findings of this study will be available beginning 12 months and ending 36 months following the article publication from the corresponding author (WFM) upon reasonable request.

Authors' Contributions

THC and WFM designed the study and were responsible for data collection and analysis. THC, SDL, and WFM all contributed to manuscript preparation and critical revisions.

Conflicts of Interest

None declared.

Multimedia Appendix 1: Description, classification, and overall comments of reviewers after using the selected dementia-related mobile health apps.

Multimedia Appendix 2: User version of the Mobile Application Rating Scale (uMARS) scoring of the dementia-related mHealth apps.

Multimedia Appendix 3: CONSORT-EHEALTH checklist (V 1.6.1).

  • GBD 2019 Dementia Forecasting Collaborators. Estimation of the global prevalence of dementia in 2019 and forecasted prevalence in 2050: an analysis for the Global Burden of Disease Study 2019. Lancet Public Health. Feb 2022;7(2):e105-e125. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Astell AJ, Bouranis N, Hoey J, Lindauer A, Mihailidis A, Nugent C, et al; Technology and Dementia Professional Interest Area (…). Technology and dementia: the future is now. Dement Geriatr Cogn Disord. 2019;47(3):131-139. [ CrossRef ] [ Medline ]
  • Chiu C, Hu Y, Lin D, Chang F, Chang C, Lai C. The attitudes, impact, and learning needs of older adults using apps on touchscreen mobile devices: results from a pilot study. Comput Hum Behav. Oct 2016;63:189-197. [ CrossRef ]
  • Almalki M, Giannicchi A. Health apps for combating COVID-19: descriptive review and taxonomy. JMIR Mhealth Uhealth. Mar 02, 2021;9(3):e24322. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Chiu C, Liu C. Understanding older adult's technology adoption and withdrawal for elderly care and education: mixed method analysis from national survey. J Med Internet Res. Nov 03, 2017;19(11):e374. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ambegaonkar A, Ritchie C, de la Fuente Garcia S. The use of mobile applications as communication aids for people with dementia: opportunities and limitations. J Alzheimers Dis Rep. 2021;5(1):681-692. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • David MCB, Kolanko M, Del Giovane M, Lai H, True J, Beal E, et al. Remote monitoring of physiology in people living with dementia: an observational cohort study. JMIR Aging. Mar 09, 2023;6:e43777. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lee AR, Csipke E, Yates L, Moniz-Cook E, McDermott O, Taylor S, et al. A web-based self-management app for living well with dementia: user-centered development study. JMIR Hum Factors. Feb 24, 2023;10:e40785. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kruse CS, Mileski M, Moreno J. Mobile health solutions for the aging population: a systematic narrative analysis. J Telemed Telecare. May 2017;23(4):439-451. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Øksnebjerg L, Woods B, Ruth K, Lauridsen A, Kristiansen S, Holst HD, et al. A tablet app supporting self-management for people with dementia: explorative study of adoption and use patterns. JMIR Mhealth Uhealth. Jan 17, 2020;8(1):e14694. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Brown A, O'Connor S. Mobile health applications for people with dementia: a systematic review and synthesis of qualitative studies. Inform Health Soc Care. Oct 01, 2020;45(4):343-359. [ CrossRef ] [ Medline ]
  • Agarwal P, Gordon D, Griffith J, Kithulegoda N, Witteman HO, Sacha Bhatia R, et al. Assessing the quality of mobile applications in chronic disease management: a scoping review. NPJ Digit Med. Mar 10, 2021;4(1):46. [ CrossRef ] [ Medline ]
  • Kuo H, Chang C, Ma W. A survey of mobile apps for the care management of patients with dementia. Healthcare. Jun 23, 2022;10(7):1173. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Guessi Margarido M, Shah A, Seto E. Smartphone applications for informal caregivers of chronically ill patients: a scoping review. NPJ Digit Med. Mar 21, 2022;5(1):33. [ CrossRef ] [ Medline ]
  • Azad-Khaneghah P, Neubauer N, Miguel Cruz A, Liu L. Mobile health app usability and quality rating scales: a systematic review. Disabil Rehabil Assist Technol. Oct 2021;16(7):712-721. [ CrossRef ] [ Medline ]
  • Wisniewski H, Liu G, Henson P, Vaidyam A, Hajratalli NK, Onnela J, et al. Understanding the quality, effectiveness and attributes of top-rated smartphone health apps. Evid Based Ment Health. Feb 2019;22(1):4-9. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sachdev PS, Blacker D, Blazer DG, Ganguli M, Jeste DV, Paulsen JS, et al. Classifying neurocognitive disorders: the DSM-5 approach. Nat Rev Neurol. Nov 30, 2014;10(11):634-642. [ CrossRef ] [ Medline ]
  • Unsworth H, Dillon B, Collinson L, Powell H, Salmon M, Oladapo T, et al. The NICE Evidence Standards Framework for digital health and care technologies - Developing and maintaining an innovative evidence framework with global impact. Digit Health. Jun 24, 2021;7:20552076211018617. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rowland SP, Fitzgerald JE, Holme T, Powell J, McGregor A. What is the clinical value of mHealth for patients? NPJ Digit Med. Jan 13, 2020;3(1):4. [ CrossRef ] [ Medline ]
  • Stoyanov SR, Hides L, Kavanagh DJ, Wilson H. Development and validation of the user version of the Mobile Application Rating Scale (uMARS). JMIR Mhealth Uhealth. Jun 10, 2016;4(2):e72. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • mHealth Economics 2017/2018 – Connectivity in Digital Health. research2guidance. URL: https://research2guidance.com/product/connectivity-in-digital-health/ [accessed 2023-11-02]
  • Krafft J, Barisch-Fritz B, Krell-Roesch J, Trautwein S, Scharpf A, Woll A. A tablet-based app to support nursing home staff in delivering an individualized cognitive and physical exercise program for individuals with dementia: mixed methods usability study. JMIR Aging. Aug 22, 2023;6:e46480. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Perakslis E, Ginsburg GS. Digital health-The need to assess benefits, risks, and value. JAMA. Jan 12, 2021;325(2):127-128. [ CrossRef ] [ Medline ]
  • Lorca-Cabrera J, Grau C, Martí-Arques R, Raigal-Aran L, Falcó-Pegueroles A, Albacar-Riobóo N. Effectiveness of health web-based and mobile app-based interventions designed to improve informal caregiver's well-being and quality of life: a systematic review. Int J Med Inform. Feb 2020;134:104003. [ CrossRef ] [ Medline ]
  • Timmers T, Janssen L, Pronk Y, van der Zwaard BC, Koëter S, van Oostveen D, et al. Assessing the efficacy of an educational smartphone or tablet app with subdivided and interactive content to increase patients' medical knowledge: randomized controlled trial. JMIR Mhealth Uhealth. Dec 21, 2018;6(12):e10742. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Portenhauser AA, Terhorst Y, Schultchen D, Sander LB, Denkinger MD, Stach M, et al. Mobile apps for older adults: systematic search and evaluation within online stores. JMIR Aging. Feb 19, 2021;4(1):e23313. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Li Y, Ding J, Wang Y, Tang C, Zhang P. Nutrition-related mobile apps in the China App Store: assessment of functionality and quality. JMIR Mhealth Uhealth. Jul 30, 2019;7(7):e13261. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Yousaf K, Mehmood Z, Saba T, Rehman A, Munshi A, Alharbey R, et al. Mobile-health applications for the efficient delivery of health care facility to people with dementia (PwD) and support to their carers: a survey. Biomed Res Int. 2019;2019:7151475. [ CrossRef ] [ Medline ]
  • Hoogendoorn P, Versluis A, van Kampen S, McCay C, Leahy M, Bijlsma M, et al. What makes a quality health app-Developing a global research-based health app quality assessment framework for CEN-ISO/TS 82304-2: Delphi Study. JMIR Form Res. Jan 23, 2023;7:e43905. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Martin W, Sarro F, Jia Y, Zhang Y, Harman M. A survey of App Store analysis for software engineering. IEEE Trans Soft Eng. Sep 1, 2017;43(9):817-847. [ CrossRef ]
  • Singh K, Drouin K, Newmark LP, Lee J, Faxvaag A, Rozenblum R, et al. Many mobile health apps target high-need, high-cost populations, but gaps remain. Health Aff. Dec 01, 2016;35(12):2310-2318. [ CrossRef ] [ Medline ]
  • Pereira-Azevedo N, Osório L, Cavadas V, Fraga A, Carrasquinho E, Cardoso de Oliveira E, et al. Expert involvement predicts mHealth app downloads: multivariate regression analysis of urology apps. JMIR Mhealth Uhealth. Jul 15, 2016;4(3):e86. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rasche P, Wille M, Bröhl C, Theis S, Schäfer K, Knobe M, et al. Prevalence of health app use among older adults in Germany: national survey. JMIR Mhealth Uhealth. Jan 23, 2018;6(1):e26. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Yang L, Wu J, Mo X, Chen Y, Huang S, Zhou L, et al. Changes in mobile health apps usage before and after the COVID-19 outbreak in China: semilongitudinal survey. JMIR Public Health Surveill. Feb 22, 2023;9:e40552. [ FREE Full text ] [ CrossRef ] [ Medline ]

Abbreviations

mHealth: mobile health

uMARS: user version of the Mobile Application Rating Scale

Edited by A Mavragani, H LaMonica; submitted 24.07.23; peer-reviewed by A Kaplin, A Ranerup; comments to author 10.10.23; revised version received 09.11.23; accepted 03.04.24; published 29.04.24.

©Tzu Han Chen, Shin-Da Lee, Wei-Fen Ma. Originally published in JMIR Formative Research (https://formative.jmir.org), 29.04.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.

Smithsonian Voices: National Museum of Natural History

A Glowing Review: Meet the Museum Scientist Who Studies the Evolution of Bioluminescence in Corals

Deepwater coral specialist Andrea Quattrini’s new paper pins the origin of bioluminescence in corals to more than 500 million years ago

Naomi Greenberg


When humans picture corals, we tend to think of words like “colorful,” “intricate,” and “bustling.” But the shallow tropical reefs that come to mind in the vivid style of “Finding Nemo” are only a small percentage of coral diversity on planet Earth.

Coral reefs have been around for hundreds of millions of years. Today, there are more than six thousand known species of coral that can be found almost anywhere in the ocean. Deep-sea corals can thrive as many as 10,000 feet below the surface, a realm of the ocean only accessible to researchers through remotely operated vehicles.

According to research zoologist Andrea Quattrini, the curator of corals at the National Museum of Natural History, corals are a great system for understanding evolution in the deep sea because they occur across such a broad range of depths. Quattrini’s newest paper, published this week in the journal Proceedings of the Royal Society B: Biological Sciences, pushes back the origin of bioluminescence in corals to at least 540 million years ago. This is much earlier than previously predicted and provides clues into how these ecosystem engineers colonized the abyss.


Quattrini also studies the origin of coral skeletal types and the current state of coral biodiversity worldwide. Understanding corals past and present provides the necessary knowledge to support coral conservation in the future.

I had no idea there were so many coral species living in the deep ocean. What is it about deepwater corals that captivates your research interests?

There’s something about the remoteness and the unknown of the deep sea that absolutely intrigues me. I fell in love with the ocean at a very young age and by the time I was twelve, I decided I wanted to be a marine biologist.

In 2003, I went on my first dive in a submersible, called the Johnson Sea Link. We went to a deepwater coral reef off South Carolina.  When I was 700 meters down under the surface of the ocean, exploring this deepwater coral reef that had been around far longer than I had, I felt humbled that I had the opportunity as a human to visit this place and see all the incredible life. And then we saw all this fantastic bioluminescence on the way back up. It made me think: “this world is an amazing place.”

After that, I told myself that I would study these ecosystems for as long as I could.


Why are corals such good subjects for studying evolution?

When people picture corals, they think of shallow, warm, sunny environments with these colorful coral reefs. But that’s actually not the majority of corals. There’s so much morphological diversity amongst corals. Some are arboreal, tree-like shapes. Some are sea fans. Others are soft corals, and some are solitary cup corals. The diversity of corals is much more than just reef-building corals.

They’re a great system to understand evolution in the deep sea because of this diversity and because they occur across this broad range of depths. They occur from shallow waters to the deep abyss, from the poles to the tropics. They face various environmental variables that change across depth. Because of this, they can help us answer questions about evolution in the deep sea.

Your recent finding about the origin of bioluminescence is making a big splash. Can you tell us how that research started?

About a decade ago, I was working on a coral sample while on a research cruise. The coral released a light, and that was amazing to see. Since then, I’ve had an interest in bioluminescence. In 2014, I started testing various coral species for bioluminescence, and since then I’ve been compiling a list. So now, every time I go out to sea, I test various corals for this trait.

I worked with a team of collaborators to use those observations of living corals to get a sense of the evolution of bioluminescence, through a process called ancestral state reconstruction. We don’t have any fossil records of bioluminescence, so you take the information from today in extant species, and you move backwards in time. The more living species that share a trait, the more likely it is that their ancestors are going to share that trait as well.

By working backwards, we found that the ability to bioluminesce has been in the genomes of these corals for hundreds of millions of years.
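The reconstruction Quattrini describes can be pictured with a toy calculation. The sketch below (Python; the tree, species names, and trait values are all invented for illustration) propagates a binary bioluminescence trait from living species up a small phylogeny by averaging over descendants, a crude stand-in for the probabilistic ancestral state reconstruction the study actually used:

```python
# Toy illustration of ancestral state reconstruction: propagate a binary
# trait (1 = bioluminescent, 0 = not) from living species up a small tree.
# Hypothetical tree: leaves are extant corals, internal nodes are ancestors.
tree = {
    "root": ["ancestorA", "ancestorB"],
    "ancestorA": ["sea_pen", "bamboo_coral"],
    "ancestorB": ["sea_fan", "cup_coral"],
}

# Observed trait states in the living species (invented values).
observed = {"sea_pen": 1, "bamboo_coral": 1, "sea_fan": 1, "cup_coral": 0}

def trait_probability(node):
    """Estimate the chance an ancestor had the trait as the mean over its
    descendants' estimates (a crude stand-in for likelihood methods)."""
    if node in observed:
        return observed[node]
    children = tree[node]
    return sum(trait_probability(c) for c in children) / len(children)

for ancestor in ("ancestorA", "ancestorB", "root"):
    print(ancestor, round(trait_probability(ancestor), 2))
# The more living descendants that share the trait, the higher the estimate
# for their common ancestor -- the intuition Quattrini describes.
```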


What do you hope people will learn from this paper?

The key point of our study is that bioluminescence arose 540 million years ago and has been retained all this time. That tells us that it must have been important for these organisms’ fitness.

What is also interesting is that it arose when animals were exploding and diversifying across the planet during the Cambrian period. It is likely that bioluminescence either enabled diversification of corals in the deep sea or was retained in those families that are most diverse in the deep sea.

Now that the paper is out, I’m excited that people will know that corals are pretty cool animals that can communicate with light. We're hoping that our paper will help also get more people to look for bioluminescence and further explore why it has been so important for so long.

Why is it important to have a detailed picture of coral diversity?

When I started my career, we didn’t really know a lot about deepwater corals, and we didn’t know a lot about coral genetics. But biodiversity is critical for ecosystem functioning. Biodiverse systems equal healthy ecosystems.

This past summer, there were huge heat waves that occurred off Florida, where corals went locally extinct. If a coral species goes locally extinct in one place, you want to be able to understand that species distribution to help protect it somewhere else. Species’ identities are fundamental to this understanding.

In addition, NMNH is leading efforts to help characterize that biodiversity across the northern Gulf of Mexico at sites where corals were injured from the oil spill. We're part of this project to help characterize that biodiversity so that scientists can restore corals and areas that were injured by the spill.

"Biodiversity is critical for ecosystem functioning. Biodiverse systems equal healthy ecosystems." 

— Andrea Quattrini, curator of corals NMNH


What do you like about working as a curator at the museum?

I like the fact that I can incorporate the coral collection into my research, and I can help grow the collection as well. But I also think being at a museum is so unique in terms of education and outreach for the public. And we have people from all over the world that come to our collections, so I get to interact with collaborators from across the world.

In our lab, we’re working toward a collective goal of better understanding coral systematics, but also just working to become better human beings and better scientists. 

This interview has been edited for length and clarity.  

Meet a SI-entist: The Smithsonian is so much more than its world-renowned exhibits and artifacts. It is a hub of scientific exploration for hundreds of researchers from around the world. Once a month, we’ll introduce you to a Smithsonian Institution scientist (or SI-entist) and the fascinating work they do behind the scenes at the National Museum of Natural History.



Naomi Greenberg is a Science Writing Intern with the Smithsonian’s National Museum of Natural History. She translates natural history research for general consumption in her writing for Smithsonian Voices as well as for the Smithsonian Ocean Portal. She is a senior at Georgetown University, where she founded and led the science section of the campus newspaper, The Hoya, in addition to studying biology and journalism. You can find more of her work here.


Published on 29.4.2024 in Vol 26 (2024)

The Applications of Artificial Intelligence for Assessing Fall Risk: Systematic Review

Authors of this article:


  • Ana González-Castro 1, PT, MSc;
  • Raquel Leirós-Rodríguez 2, PT, PhD;
  • Camino Prada-García 3, MD, PhD;
  • José Alberto Benítez-Andrades 4, PhD

1 Nursing and Physical Therapy Department, Universidad de León, Ponferrada, Spain

2 SALBIS Research Group, Nursing and Physical Therapy Department, Universidad de León, Ponferrada, Spain

3 Department of Preventive Medicine and Public Health, Universidad de Valladolid, Valladolid, Spain

4 SALBIS Research Group, Department of Electric, Systems and Automatics Engineering, Universidad de León, León, Spain

Corresponding Author:

Ana González-Castro, PT, MSc

Nursing and Physical Therapy Department

Universidad de León

Astorga Ave

Ponferrada, 24401

Phone: 34 987442000

Email: [email protected]

Background: Falls and their consequences are a serious public health problem worldwide. Each year, 37.3 million falls requiring medical attention occur. Therefore, the analysis of fall risk is of great importance for prevention. Artificial intelligence (AI) represents an innovative tool for creating predictive statistical models of fall risk through data analysis.

Objective: The aim of this review was to analyze the available evidence on the applications of AI in the analysis of data related to postural control and fall risk.

Methods: A literature search was conducted in 6 databases with the following inclusion criteria: the articles had to be published within the last 5 years (from 2018 to 2024), they had to apply some method of AI, AI analyses had to be applied to data from samples consisting of humans, and the analyzed sample had to consist of individuals with independent walking with or without the assistance of external orthopedic devices.

Results: We obtained a total of 3858 articles, of which 22 were finally selected. Data extraction for subsequent analysis varied across the studies: 82% (18/22) of them extracted data through tests or functional assessments, and the remaining 18% (4/22) extracted data from existing medical records. Different AI techniques were used throughout the articles. All the research included in the review obtained accuracy values of >70% in the predictive models obtained through AI.

Conclusions: The use of AI proves to be a valuable tool for creating predictive models of fall risk. The use of this tool could have a significant socioeconomic impact as it enables the development of low-cost predictive models with a high level of accuracy.

Trial Registration: PROSPERO CRD42023443277; https://tinyurl.com/4sb72ssv

Introduction

According to alarming figures reported by the World Health Organization in 2021, falls cause 37.3 million injuries annually that require medical attention and result in 684,000 deaths [ 1 ]. These figures indicate a significant impact of falls on the health care system and on society, both directly and indirectly [ 2 , 3 ].

Life expectancy has progressively increased over the years, leading to an aging population [ 4 ]. By 2050, it is estimated that 16% of the population will be >65 years of age. In this group, the incidence of falls has steadily risen, becoming the leading cause of accidental injury and death (accounting for 55.8% of such deaths, according to some research) [ 5 , 6 ]. It is estimated that 30% of this population falls at least once a year, negatively impacting their physical and psychological well-being [ 7 , 8 ].

Physically, falls are often associated with severe complications that can lead to extended hospitalizations [ 9 ]. These hospitalizations are usually due to serious injuries, often cranioencephalic trauma, fractures, or soft tissue injuries [ 10 , 11 ]. Psychologically, falls among the older adult population tend to result in self-imposed limitations due to the fear of falling again [ 10 , 12 ]. These limitations lead to social isolation as individuals avoid participating in activities or even individual mobility [ 13 ]. Consequently, falls can lead to psychological conditions such as anxiety and depression [ 14 , 15 ]. Numerous research studies on the risk of falls are currently underway, with ongoing investigations into various innovations and intervention ideas [ 16 - 19 ]. These studies encompass the identification of fall risk factors [ 20 , 21 ], strategies for prevention [ 22 , 23 ], and the outcomes following rehabilitation [ 23 , 24 ].

In the health care field, artificial intelligence (AI) is characterized by data management and processing, offering new possibilities to the health care paradigm [ 24 ]. Some applications of AI in the health care domain include assessing tumor interaction processes [ 25 ], serving as a tool for image-based diagnostics [ 26 , 27 ], participating in virus detection [ 28 ], and, most importantly, as a statistical and predictive method [ 29 - 32 ].

Several publications have combined AI techniques to address health care issues [ 33 - 35 ]. Within the field of predictive models, it is important to understand certain differentiations. In AI, we have machine learning and deep learning [ 36 - 38 ]. Machine learning encompasses a set of techniques applied to data and can be done in a supervised or unsupervised manner [ 39 , 40 ]. On the other hand, deep learning is typically used to work with larger data sets compared to machine learning, and its computational cost is higher [ 41 , 42 ].

Some examples of AI techniques include the gradient boosting machine [43], a machine learning method, and the long short-term memory (LSTM) network [44] and the convolutional neural network (CNN) [45], both of which are deep learning methods.
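To make the distinction concrete, here is a minimal sketch of one of the techniques named above, a gradient boosting classifier applied to tabular fall-risk data. Everything in it (the features, the synthetic labels, the thresholds) is invented for illustration; the reviewed studies trained on real gait and clinical variables:

```python
# A minimal sketch: a gradient boosting machine classifying fall risk
# from tabular features. All data here are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: age (years), gait speed (m/s), stride length (m).
X = np.column_stack([
    rng.normal(75, 8, n),
    rng.normal(1.0, 0.25, n),
    rng.normal(1.2, 0.2, n),
])
# Synthetic label: older age and slower gait raise fall risk.
risk = (X[:, 0] - 75) / 8 - (X[:, 1] - 1.0) / 0.25 + rng.normal(0, 1, n)
y = (risk > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```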

For all the reasons mentioned in the preceding section, it was considered necessary to conduct a systematic review to analyze the scientific evidence of AI applications in the analysis of data related to postural control and the risk of falls.

Data Sources and Searches

This systematic review was prospectively registered in PROSPERO (ID CRD42023443277) and followed the Meta-Analyses of Observational Studies in Epidemiology (MOOSE) checklist [46] and the recommendations of the Cochrane Collaboration [47].

The search was conducted in January 2024 in the following databases: PubMed, Scopus, ScienceDirect, Web of Science, CINAHL, and Cochrane Library. The Medical Subject Headings (MeSH) terms used for the search included machine learning, artificial intelligent, accidental falls, rehabilitation, and physical therapy specialty. The terms “predictive model” and “algorithms” were also used. These terms were combined using the Boolean operators AND and OR (Textbox 1); a sketch of running one of these strings programmatically follows the textbox.

PubMed

  • (“machine learning”[MeSH Terms] OR “artificial intelligent”[MeSH Terms]) AND “accidental falls”[MeSH Terms]
  • (“machine learning”[MeSH Terms] OR “artificial intelligent”) AND (“rehabilitation”[MeSH Terms] OR “physical therapy specialty”[MeSH Terms])
  • “accidental falls” [Title/Abstract] AND “algorithms” [Title/Abstract]
  • “accidental falls”[Title/Abstract] AND “predictive model” [Title/Abstract]
Scopus

  • TITLE-ABS-KEY (“machine learning” OR “artificial intelligent”) AND TITLE-ABS-KEY (“accidental falls”)
  • TITLE-ABS-KEY (“machine learning” OR “artificial intelligent”) AND TITLE-ABS-KEY (“rehabilitation” OR “physical therapy specialty”)
  • TITLE-ABS-KEY (“accidental falls” AND “algorithms”)
  • TITLE-ABS-KEY (“accidental falls” AND “predictive model”)

ScienceDirect

  • Title, abstract, keywords: (“machine learning” OR “artificial intelligent”) AND “accidental falls”
  • Title, abstract, keywords: (“machine learning” OR “artificial intelligent”) AND (“rehabilitation” OR “physical therapy specialty”)
  • Title, abstract, keywords: (“accidental falls” AND “algorithms”)
  • Title, abstract, keywords: (“accidental falls” AND “predictive model”)

Web of Science

  • TS=(“machine learning” OR “artificial intelligent”) AND TS=“accidental falls”
  • TS=(“machine learning” OR “artificial intelligent”) AND TS= (“rehabilitation” OR “physical therapy specialty”)
  • AB= (“accidental falls” AND “algorithms”)
  • AB= (“accidental falls” AND “predictive model”)
CINAHL

  • (MH “machine learning” OR MH “artificial intelligent”) AND MH “accidental falls”
  • (MH “machine learning” OR MH “artificial intelligent”) AND (MH “rehabilitation” OR MH “physical therapy specialty”)
  • (AB “accidental falls”) AND (AB “algorithms”)
  • (AB “accidental falls”) AND (AB “predictive model”)

Cochrane Library

  • (“machine learning” OR “artificial intelligent”) in Title Abstract Keyword AND “accidental falls” in Title Abstract Keyword
  • (“machine learning” OR “artificial intelligent”) in Title Abstract Keyword AND (“rehabilitation” OR “physical therapy specialty”) in Title Abstract Keyword
  • “accidental falls” in Title Abstract Keyword AND “algorithms” in Title Abstract Keyword
  • “accidental falls” in Title Abstract Keyword AND “predictive model” in Title Abstract Keyword

Study Selection

After removing duplicates, 2 reviewers (AGC and RLR) independently screened articles for eligibility. In the case of disagreement, a third reviewer (JABA) made the final decision on whether the study should be included. We calculated the κ coefficient and percentage agreement to assess reliability before any consensus discussion. Interrater reliability was interpreted as follows: κ>0.7 indicated a high level of agreement between the reviewers, κ of 0.5 to 0.7 a moderate level of agreement, and κ<0.5 a low level of agreement [48].
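For illustration, the κ calculation described above can be reproduced in a few lines; the decision vectors here are invented, not the reviewers' actual screening data:

```python
# Cohen's kappa for two reviewers' include/exclude decisions.
from sklearn.metrics import cohen_kappa_score

reviewer_1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]  # 1 = include, 0 = exclude
reviewer_2 = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]

kappa = cohen_kappa_score(reviewer_1, reviewer_2)
agreement = sum(a == b for a, b in zip(reviewer_1, reviewer_2)) / len(reviewer_1)

print(f"kappa = {kappa:.2f}, raw agreement = {agreement:.0%}")
# Interpreted with the thresholds above: >0.7 high, 0.5-0.7 moderate, <0.5 low.
```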

For the selection of results, the inclusion criteria were established as follows: (1) articles should have been published in the last 5 years (from 2018 to the present); (2) they must apply some AI method; (3) AI analyses should be applied to data from samples of humans; and (4) the sample analyzed should consist of people with independent walking, with or without the use of external orthopedic devices.

Titles and abstracts were screened against the inclusion criteria, and those selected were obtained as full texts. Titles and abstracts lacking sufficient information regarding the inclusion criteria were also obtained as full texts. Full-text articles that met the inclusion criteria were then selected by the 2 reviewers using a data extraction form.

Data Extraction and Quality Assessment

The 2 reviewers mentioned above independently extracted data from the included studies using a customized data extraction table in Excel (Microsoft Corporation). In case of disagreement, both reviewers debated until an agreement was reached.

The data extracted from the included articles for further analysis were: demographic information (title, authors, journal, and year), characteristics of the sample (age, inclusion and exclusion criteria, and number of participants), study-specific parameters (study type, AI techniques applied, and data analyzed), and the results obtained. Tables were used to describe both the studies’ characteristics and the extracted data.

Assessment of Risk of Bias

The methodological quality of the selected articles was evaluated using the Critical Review Form for Quantitative Studies [ 49 ]. The ROBINS-E (Risk of Bias in Nonrandomized Studies of Exposures) tool was used to evaluate the risk of bias [ 50 ].

Characteristics of the Selected Studies

A total of 3858 articles were initially retrieved, with 1563 duplicates removed. From the remaining 2295 articles, 2271 were excluded based on the initial selection criteria, leaving 24 articles for the subsequent analysis. In this second analysis, 2 articles were removed as they were systematic reviews, and 22 articles were finally selected [ 51 - 72 ] ( Figure 1 ). After the first reading of all candidate full texts, the kappa score for inclusion of the results of reviewers 1 and 2 was 0.98, indicating a very high level of agreement.

The methodological quality of the 22 analyzed studies (Table S1 in Multimedia Appendix 1 [51,52,54,56,58,59,61,63,64,69,70,72]) ranged from 11 points in 2 (9.1%) studies [52,65] to 16 points in 7 (31.8%) studies [53,54,56,63,69-71].

Figure 1. Flow diagram of the study selection process.

Study Characteristics and Risk of Bias

All the selected articles were cross-sectional observational studies ( Table 1 ).

In total, 34 characteristics affecting the risk of falls were extracted and classified into high fall-risk and low fall-risk groups. Sample sizes differed markedly across studies: those based on data collected from various health care systems had the largest samples, ranging from 22,515 to 265,225 participants [60,65,67], whereas studies that applied some form of evaluation test had sample sizes ranging from 8 participants [56] to 746 participants [55].

It is worth noting the studies conducted by Dubois et al [54,72], whose publications on fall risk and machine learning ran from 2018 to 2021. In total, 9.1% (2/22) of the included articles were by this group [54,72]. Both articles used samples with the same characteristics, although the first comprised 43 participants [54] and the second 30 participants [72]. In total, 86.4% (19/22) of the articles used samples of individuals aged ≥65 years [51-60,62-65,68-72]. In the remaining 13.6% (3/22) of the articles, ages ranged between 16 and 62 years [61,66,67].

Althobaiti et al [ 61 ] used a sample of participants between the ages of 19 and 35 years for their research, where these participants had to reproduce examples of falls for subsequent analysis. In 2022, Ladios-Martin et al [ 67 ] extracted medical data from participants aged >16 years for their research. Finally, in 2023, the study by Maray et al [ 66 ] used 3 types of samples, with ages ranging from 21 to 62 years. Among the 22 selected articles, only 1 (4.5%) of them did not describe the characteristics of its sample [ 52 ].

Finally, regarding the sex of the samples, 13.6% (3/22) of the articles specified in the characteristics of their samples that only female individuals were included among their participants [ 53 , 59 , 70 ].

Table 1 footnotes: a AI: artificial intelligence; b ML: machine learning; c nd: none described; d ADL: activities of daily living; e TUG: Timed Up and Go; f BBS: Berg Balance Scale; g ASM: associative skill memories; h CNN: convolutional neural network; i FP: fall prevention; j IMU: inertial measurement unit; k AUROC: area under the receiver operating characteristic curve; l AUPR: area under the precision-recall curve; m MFS: Morse Fall Scale; n XGB: extreme gradient boosting; o MCT: motor control test; p GBM: gradient boosting machine; q RF: random forest; r LOOCV: leave-one-out cross-validation; s LSTM: long short-term memory.

Applied Assessment Procedures

All articles initially analyzed the characteristics of their samples to subsequently create a predictive model of the risk of falls. However, they did not all follow the same evaluation process.

Regarding the applied assessment procedures, 3 main options stood out: studies with tests or assessments accompanied by sensors or accelerometers [51-57,59,61-63,66,70-72], studies with tests or assessments accompanied by cameras [68,69], and studies based on medical records [58,60,65,67] (Figure 2). Gillain et al [64] performed a physical and functional evaluation of the participants, assessing parameters such as walking speed, stride frequency and length, and the minimum space between the toes. Afterward, the participants were asked to record, in a personal diary, the fall events they had experienced during the previous 2 years.

Figure 2. Assessment procedures applied in the included studies.

In total, 22.7% (5/22) of the studies used the Timed Up and Go test [ 53 , 54 , 69 , 71 , 72 ]. In 18.2% (4/22) of them, the participants performed the test while wearing a sensor to collect data [ 53 , 54 , 71 , 72 ]. In 1 (4.5%) study, the test was recorded with a camera for later analysis [ 69 ]. Another commonly used method in studies was to ask participants to perform everyday tasks or activities of daily living while a sensor collected data for subsequent analysis. Specifically, 18.2% (4/22) of the studies used this method to gather data [ 51 , 56 , 61 , 62 ].

A total of 22.7% (5/22) of the studies asked participants to simulate falls and nonfalls while a sensor collected data [ 52 , 61 - 63 , 66 ]. In this way, the data obtained were used to create the predictive model of falls. As for the tests used, Eichler et al [ 68 ] asked participants to perform the Berg Balance Scale while a camera recorded their performance.

Finally, other authors created their own battery of tests for data extraction [ 55 , 59 , 64 , 70 ]. Gillain et al [ 64 ] used gait records to analyze speed, stride length, frequency, symmetry, regularity, and foot separation. Hu et al [ 59 ] asked their participants to perform normal walking, the postural reflexive response test, and the motor control test. In the study by Noh et al [ 55 ], gait tests were conducted, involving walking 20 m at different speeds. Finally, Greene et al [ 70 ] created a 12-question questionnaire and asked their participants to maintain balance while holding a mobile phone in their hand.
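Most of these assessment procedures reduce to extracting gait features from wearable sensor signals. The sketch below (synthetic signal, assumed sampling rate; not any included study's pipeline) shows the kind of step detection and cadence estimate such data support:

```python
# Illustrative gait-feature extraction from a (synthetic) vertical
# accelerometer trace using simple peak detection.
import numpy as np
from scipy.signal import find_peaks

fs = 100  # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)
# Synthetic signal: gravity plus a ~1.8 Hz step oscillation and noise.
acc_v = (9.81 + 2.0 * np.sin(2 * np.pi * 1.8 * t)
         + np.random.default_rng(0).normal(0, 0.3, t.size))

# Each peak approximates one step (heel strike).
peaks, _ = find_peaks(acc_v, height=10.5, distance=int(0.4 * fs))
step_times = np.diff(peaks) / fs

print(f"steps detected: {peaks.size}")
print(f"cadence: {60 / step_times.mean():.1f} steps/min")
print(f"step-time variability (SD): {step_times.std():.3f} s")
```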

AI Techniques

The selected articles used various techniques within AI. They all had the same objective in applying these techniques, which was to achieve a predictive and classification model for the risk of falls [ 51 - 72 ].

In chronological order, in 2018, Nait Aicha et al [51] compared single-task learning models with multitask learning, obtaining better evaluation results through multitask learning. In the same year, Dubois et al [54] applied AI techniques that analyzed multiple parameters to classify the risk of falls in their sample. Qiu et al [53], also in the same year, used 6 machine learning models (logistic regression, naïve Bayes, decision tree, random forest [RF], boosted tree, and support vector machine) in their research.

In contrast, in 2019, Ribeiro et al [52] compared the applicability of 2 different deep learning models: a classifier based on associative skill memories and a CNN classifier. In the same year, after confirming the applicability of AI as a predictive method for the risk of falls, various authors used methods such as the RF to identify factors that can predict and quantify the risk of falls [63,65].

Among the selected articles, 5 (22.7%) were published in 2020 [ 58 - 62 ]. The research conducted by Tunca et al [ 62 ] compared the applicability of deep learning LSTM networks with traditional machine learning applied to the risk of falls. Hu et al [ 59 ] first used cross-validation, where algorithms were trained randomly, and then used the gradient boosting machine algorithm to classify participants as high or low risk. Ye et al [ 60 ] and Hsu et al [ 58 ] both used the extreme gradient boosting (XGBoost) algorithm based on machine learning to create their predictive model. In the same year, Althobaiti et al [ 61 ] trained machine learning models for their research.

In 2021, Lockhart et al [57] used 3 machine learning techniques simultaneously with the same goal as before: to create a predictive model for the risk of falls. Specifically, they used the RF, the RF with feature engineering, and the RF with feature engineering and linear and nonlinear variables. Noh et al [55], in the same year, used the XGBoost algorithm, while Roshdibenam et al [71] used a CNN for each location of the wearable sensors in their research. Various machine learning techniques were used for classifying the risk of falls and balance loss events in the research by Hauth et al [56], who applied regularized logistic regression and bidirectional LSTM networks. Dubois et al [72] used the following algorithms: decision tree, adaptive boosting, neural net, naïve Bayes, k-nearest neighbors, linear support vector machine, radial basis function support vector machine, RF, and quadratic discriminant analysis. In the research conducted by Greene et al [70], AI was used, but the specific procedure followed is not described.

Tang et al [69] published research that was, at that point, innovative: they used a smart gait analyzer supported by deep learning techniques to assess the diagnostic accuracy of vision-based fall risk screening. Months later, in August 2022, Ladios-Martin et al [67] published their research, in which they compared 2 deep learning models to achieve the best results in terms of specificity and sensitivity in detecting fall risk. The first model used the Bayesian Point Machine algorithm with a fall prevention variable, and the second one did not use the variable. They obtained better results when using that variable, a mitigating factor defined as a set of care interventions carried out by professionals to prevent the patient from experiencing a fall during hospitalization. This variable is particularly controversial, as its exclusion could obscure the model’s performance. Eichler et al [68], on the other hand, used machine learning–based classifier training and later tested the performance of RFs in score predictions.

Finally, in January 2023, Maray et al [ 66 ] published their research, linking the previously mentioned terms (AI and fall risk) with 3 wearable devices that are commonly used today. They collected data through these devices and applied transfer learning to generalize the model across heterogeneous devices.

The results of the 22 articles provided promising data, and all of them agreed on the feasibility of applying various AI techniques as a method for predicting and classifying the risk of falls. Specifically, the accuracy values obtained in the studies exceeded 70%. Noh et al [55] achieved the “lowest” accuracy among the studies conducted, with a 70% accuracy rate. Ribeiro et al [52] obtained an accuracy of 92.7% when using a CNN to differentiate between normal gait and fall events. Hsu et al [58] further demonstrated that the XGBoost model is more sensitive than the Morse Fall Scale. Similarly, in their comparative study, Nait Aicha et al [51] showed that a predictive model created from accelerometer data with AI is comparable to conventional models for assessing the risk of falls. More specifically, Dubois et al [54] concluded that using 1 gait-related parameter (excluding velocity) in combination with another parameter related to seated position allowed for the correct classification of individuals according to their risk of falls.

Principal Findings

The aim of this research was to analyze the scientific evidence regarding the applications of AI in the analysis of data related to postural control and the risk of falls. On the basis of the analysis of results, the following risk factors were identified in the analyzed studies: age [65], daily habits [65], clinical diagnoses [65], environmental and hygiene factors [65], sex [64], stride length [55,72], gait speed [55], and posture [55]. This aligns with other research that also identifies sex [73,74], age [73], and gait speed [75] as risk factors.

On the other hand, the “fear of falling” has been identified in various studies as a risk factor and a predictor of falls [ 73 , 76 ], but it was not identified in any of the studies included in this review.

As for the characteristics of the analyzed samples, only 9.1% (2/22) of the articles used a sample composed exclusively of women [ 53 , 59 ], and no article used a sample composed exclusively of men. This fact is incongruent with reality, as women have a longer life expectancy than men, and therefore, the number of women aged >65 years is greater than the number of men of the same age [ 77 ]. Furthermore, women experience more falls than men [ 78 ]. The connection between menopause and its consequences, including osteopenia, suggests a higher risk of falls among older women than among men of the same age [ 79 , 80 ].

Within the realm of analysis tools, the most frequently used devices to analyze participants were accelerometers [ 51 - 57 , 59 , 61 - 63 , 66 , 70 - 72 ]. However, only 36.4% (8/22) of the studies provided all the information regarding the characteristics of these devices [ 51 , 53 , 59 , 61 , 63 , 66 , 70 , 72 ]. On the other hand, 18.2% (4/22) of the studies used the term “inertial measurement unit” as the sole description of the devices used [ 55 - 57 , 71 ].

The fact that most of the analyzed procedures involved the use of inertial sensors reflects the current widespread use of these devices for postural control analysis. These sensors, in general (and triaxial accelerometers in particular), have demonstrated great diagnostic capacity for balance [ 81 ]. In addition, they exhibit good sensitivity and reliability, combined with their portability and low economic cost [ 82 ]. Another advantage of triaxial accelerometers is their versatility in both adult and pediatric populations [ 83 - 86 ], although the studies included in this review did not address the pediatric population.

The remaining studies extracted data from cameras [ 68 , 69 ], medical records [ 58 , 60 , 65 , 67 ], and other functional and clinical tests [ 59 , 64 , 70 ]. Regarding the AI techniques used, out of the 18.2% (4/22) of articles that used deep learning techniques [ 52 , 57 , 62 , 71 ], only 4.5% (1/22) did not provide a description of the sample characteristics [ 52 ]. In this case, the authors focused on the AI landscape, while the rest of the articles strike a balance between AI and health sciences.

Regarding the validity of the generated models, only 40.9% (9/22) of the articles assessed this characteristic [52,53,55,61-64,68,69]. The authors of these 9 articles evaluated the validity of the models through accuracy. All the results obtained reflected accuracies exceeding 70%, with Ribeiro et al [52] achieving notable accuracies of 92.7% and 100%. Specifically, they obtained 92.7% accuracy through the CNN model for distinguishing normal gait, the prefall condition, and the falling situation when considering the step before the fall, and 100% when not considering it [52].

The positive results for sensitivity and specificity can only be compared between the studies of Qiu et al [53] and Gillain et al [64], as they were the only ones to report them, and in both investigations they were very high. Similarly, in the case of the F1-score, only Althobaiti et al [61] examined this validity measure. This measure combines precision and recall into a single figure, and the outcome obtained by these researchers was promising.
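For reference, the validity measures discussed in this section relate to one another through the confusion matrix; the sketch below computes them for one invented set of counts:

```python
# Validity measures for a binary faller / non-faller model,
# computed from a hypothetical confusion matrix.
TP, FP, FN, TN = 42, 6, 8, 44  # invented counts

accuracy = (TP + TN) / (TP + FP + FN + TN)
sensitivity = TP / (TP + FN)   # recall
specificity = TN / (TN + FP)
precision = TP / (TP + FP)
f1 = 2 * precision * sensitivity / (precision + sensitivity)

print(f"accuracy={accuracy:.2f} sensitivity={sensitivity:.2f} "
      f"specificity={specificity:.2f} F1={f1:.2f}")
```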

Despite these differences, the 22 studies obtained promising results in the health care field [ 51 - 72 ]. Specifically, their outcomes highlight the potential of AI integration into clinical settings. However, further research is necessary to explore how health care professionals can effectively use these predictive models. Consequently, future research should focus on studying the application and integration of the already-developed models. In this context, fall prevention plans could be implemented for the target populations identified by the predictive models. This approach would allow for a retrospective analysis to determine if the combination of predictive models with prevention programs effectively reduces the prevalence of falls in the population.

Limitations

Regarding limitations, the articles showed significant variation in the sample sizes selected. Moreover, even in the study with the largest sample size (with 265,225 participants [ 60 ]), the amount of data analyzed was relatively small. In addition, several of the databases used were not generated specifically for the published research but rather derived from existing medical records [ 58 , 60 , 65 , 67 ]. This could explain the significant variability in the variables analyzed across different studies.

Despite the limitations, this research has strengths, such as being the first systematic review on the use of AI as a tool to analyze postural control and the risk of falls. Furthermore, a total of 6 databases were used for the literature search, and a comprehensive article selection process was carried out by 3 researchers. Finally, only cross-sectional observational studies were selected, and they shared the same objective.

Conclusions

The use of AI in the analysis of data related to postural control and the risk of falls proves to be a valuable tool for creating predictive models of fall risk. It has been identified that most AI studies analyze accelerometer data from sensors, with triaxial accelerometers being the most frequently used.

For future research, it would be beneficial to provide more detailed descriptions of the measurement procedures and the AI techniques used. In addition, exploring larger databases could lead to the development of more robust models.

Conflicts of Interest

None declared.

Multimedia Appendix 1: Quality scores of reviewed studies (Critical Review Form for Quantitative Studies tool results).

Multimedia Appendix 2: PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist.

  • Step safely: strategies for preventing and managing falls across the life-course. World Health Organization. 2021. URL: https://www.who.int/publications/i/item/978924002191-4 [accessed 2024-04-02]
  • Keall MD, Pierse N, Howden-Chapman P, Guria J, Cunningham CW, Baker MG. Cost-benefit analysis of fall injuries prevented by a programme of home modifications: a cluster randomised controlled trial. Inj Prev. Feb 2017;23(1):22-26. [ CrossRef ] [ Medline ]
  • Almada M, Brochado P, Portela D, Midão L, Costa E. Prevalence of falls and associated factors among community-dwelling older adults: a cross-sectional study. J Frailty Aging. 2021;10(1):10-16. [ CrossRef ] [ Medline ]
  • Menéndez-González L, Izaguirre-Riesgo A, Tranche-Iparraguirre S, Montero-Rodríguez Á, Orts-Cortés MI. [Prevalence and associated factors of frailty in adults over 70 years in the community]. Aten Primaria. Dec 2021;53(10):102128. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Guirguis-Blake JM, Michael YL, Perdue LA, Coppola EL, Beil TL. Interventions to prevent falls in older adults: updated evidence report and systematic review for the US preventive services task force. JAMA. Apr 24, 2018;319(16):1705-1716. [ CrossRef ] [ Medline ]
  • Pereira CB, Kanashiro AM. Falls in older adults: a practical approach. Arq Neuropsiquiatr. May 2022;80(5 Suppl 1):313-323. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Byun M, Kim J, Kim M. Physical and psychological factors affecting falls in older patients with arthritis. Int J Environ Res Public Health. Feb 09, 2020;17(3):1098. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Goh HT, Nadarajah M, Hamzah NB, Varadan P, Tan MP. Falls and fear of falling after stroke: a case-control study. PM R. Dec 04, 2016;8(12):1173-1180. [ CrossRef ] [ Medline ]
  • Alanazi FK, Lapkin S, Molloy L, Sim J. The impact of safety culture, quality of care, missed care and nurse staffing on patient falls: a multisource association study. J Clin Nurs. Oct 12, 2023;32(19-20):7260-7272. [ CrossRef ] [ Medline ]
  • Hossain A, Lall R, Ji C, Bruce J, Underwood M, Lamb SE. Comparison of different statistical models for the analysis of fracture events: findings from the Prevention of Falls Injury Trial (PreFIT). BMC Med Res Methodol. Oct 02, 2023;23(1):216. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Williams CT, Whyman J, Loewenthal J, Chahal K. Managing geriatric patients with falls and fractures. Orthop Clin North Am. Jul 2023;54(3S):e1-12. [ CrossRef ] [ Medline ]
  • Gadhvi C, Bean D, Rice D. A systematic review of fear of falling and related constructs after hip fracture: prevalence, measurement, associations with physical function, and interventions. BMC Geriatr. Jun 23, 2023;23(1):385. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lohman MC, Fallahi A, Mishio Bawa E, Wei J, Merchant AT. Social mediators of the association between depression and falls among older adults. J Aging Health. Aug 12, 2023;35(7-8):593-603. [ CrossRef ] [ Medline ]
  • Smith AD, Silva AO, Rodrigues RA, Moreira MA, Nogueira JD, Tura LF. Assessment of risk of falls in elderly living at home. Rev Lat Am Enfermagem. Apr 06, 2017;25:e2754. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Koh V, Matchar DB, Chan A. Physical strength and mental health mediate the association between pain and falls (recurrent and/or injurious) among community-dwelling older adults in Singapore. Arch Gerontol Geriatr. Sep 2023;112:105015. [ CrossRef ] [ Medline ]
  • Soh SE, Morgan PE, Hopmans R, Barker AL, Ackerman IN. The feasibility and acceptability of a falls prevention e-learning program for physiotherapists. Physiother Theory Pract. Mar 18, 2023;39(3):631-640. [ CrossRef ] [ Medline ]
  • Morat T, Snyders M, Kroeber P, De Luca A, Squeri V, Hochheim M, et al. Evaluation of a novel technology-supported fall prevention intervention - study protocol of a multi-centre randomised controlled trial in older adults at increased risk of falls. BMC Geriatr. Feb 18, 2023;23(1):103. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • You T, Koren Y, Butts WJ, Moraes CA, Yeh GY, Wayne PM, et al. Pilot studies of recruitment and feasibility of remote Tai Chi in racially diverse older adults with multisite pain. Contemp Clin Trials. May 2023;128:107164. [ CrossRef ] [ Medline ]
  • Aldana-Benítez D, Caicedo-Pareja MJ, Sánchez DP, Ordoñez-Mora LT. Dance as a neurorehabilitation strategy: a systematic review. J Bodyw Mov Ther. Jul 2023;35:348-363. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Jawad A, Baattaiah BA, Alharbi MD, Chevidikunnan MF, Khan F. Factors contributing to falls in people with multiple sclerosis: the exploration of the moderation and mediation effects. Mult Scler Relat Disord. Aug 2023;76:104838. [ CrossRef ] [ Medline ]
  • Warren C, Rizo E, Decker E, Hasse A. A comprehensive analysis of risk factors associated with inpatient falls. J Patient Saf. Oct 01, 2023;19(6):396-402. [ CrossRef ] [ Medline ]
  • Gross M, Roigk P, Schoene D, Ritter Y, Pauly P, Becker C, et al. Bundesinitiative Sturzprävention. [Update of the recommendations of the federal falls prevention initiative-identification and prevention of the risk of falling in older people living at home]. Z Gerontol Geriatr. Oct 11, 2023;56(6):448-457. [ CrossRef ] [ Medline ]
  • Li S, Li Y, Liang Q, Yang WJ, Zi R, Wu X, et al. Effects of tele-exercise rehabilitation intervention on women at high risk of osteoporotic fractures: study protocol for a randomised controlled trial. BMJ Open. Nov 07, 2022;12(11):e064328. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. Dec 2017;2(4):230-243. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ye Y, Wu X, Wang H, Ye H, Zhao K, Yao S, et al. Artificial intelligence-assisted analysis for tumor-immune interaction within the invasive margin of colorectal cancer. Ann Med. Dec 2023;55(1):2215541. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kuwahara T, Hara K, Mizuno N, Haba S, Okuno N, Fukui T, et al. Current status of artificial intelligence analysis for the treatment of pancreaticobiliary diseases using endoscopic ultrasonography and endoscopic retrograde cholangiopancreatography. DEN Open. Apr 30, 2024;4(1):e267. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Yokote A, Umeno J, Kawasaki K, Fujioka S, Fuyuno Y, Matsuno Y, et al. Small bowel capsule endoscopy examination and open access database with artificial intelligence: the SEE-artificial intelligence project. DEN Open. Apr 22, 2024;4(1):e258. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ramalingam M, Jaisankar A, Cheng L, Krishnan S, Lan L, Hassan A, et al. Impact of nanotechnology on conventional and artificial intelligence-based biosensing strategies for the detection of viruses. Discov Nano. Dec 01, 2023;18(1):58. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Yerukala Sathipati S, Tsai MJ, Shukla SK, Ho SY. Artificial intelligence-driven pan-cancer analysis reveals miRNA signatures for cancer stage prediction. HGG Adv. Jul 13, 2023;4(3):100190. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Liu J, Dan W, Liu X, Zhong X, Chen C, He Q, et al. Development and validation of predictive model based on deep learning method for classification of dyslipidemia in Chinese medicine. Health Inf Sci Syst. Dec 06, 2023;11(1):21. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Carou-Senra P, Ong JJ, Castro BM, Seoane-Viaño I, Rodríguez-Pombo L, Cabalar P, et al. Predicting pharmaceutical inkjet printing outcomes using machine learning. Int J Pharm X. Dec 2023;5:100181. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Li X, Zhu Y, Zhao W, Shi R, Wang Z, Pan H, et al. Machine learning algorithm to predict the in-hospital mortality in critically ill patients with chronic kidney disease. Ren Fail. Dec 2023;45(1):2212790. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Bonnin M, Müller-Fouarge F, Estienne T, Bekadar S, Pouchy C, Ait Si Selmi T. Artificial intelligence radiographic analysis tool for total knee arthroplasty. J Arthroplasty. Jul 2023;38(7 Suppl 2):S199-207.e2. [ CrossRef ] [ Medline ]
  • Kao DP. Intelligent artificial intelligence: present considerations and future implications of machine learning applied to electrocardiogram interpretation. Circ Cardiovasc Qual Outcomes. Sep 2019;12(9):e006021. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • van der Stigchel B, van den Bosch K, van Diggelen J, Haselager P. Intelligent decision support in medical triage: are people robust to biased advice? J Public Health (Oxf). Aug 28, 2023;45(3):689-696. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Jakhar D, Kaur I. Artificial intelligence, machine learning and deep learning: definitions and differences. Clin Exp Dermatol. Jan 09, 2020;45(1):131-132. [ CrossRef ] [ Medline ]
  • Ghosh M, Thirugnanam A. Introduction to artificial intelligence. In: Srinivasa KG, Siddesh GM, Sekhar SR, editors. Artificial Intelligence for Information Management: A Healthcare Perspective. Cham, Switzerland. Springer; 2021;88-44.
  • Taulli T. Artificial Intelligence Basics: A Non-Technical Introduction. Berkeley, CA. Apress Berkeley; 2019.
  • Patil S, Joda T, Soffe B, Awan KH, Fageeh HN, Tovani-Palone MR, et al. Efficacy of artificial intelligence in the detection of periodontal bone loss and classification of periodontal diseases: a systematic review. J Am Dent Assoc. Sep 2023;154(9):795-804.e1. [ CrossRef ] [ Medline ]
  • Quek LJ, Heikkonen MR, Lau Y. Use of artificial intelligence techniques for detection of mild cognitive impairment: a systematic scoping review. J Clin Nurs. Sep 10, 2023;32(17-18):5752-5762. [ CrossRef ] [ Medline ]
  • Tan D, Mohd Nasir NF, Abdul Manan H, Yahya N. Prediction of toxicity outcomes following radiotherapy using deep learning-based models: a systematic review. Cancer Radiother. Sep 2023;27(5):398-406. [ CrossRef ] [ Medline ]
  • Rabilloud N, Allaume P, Acosta O, De Crevoisier R, Bourgade R, Loussouarn D, et al. Deep learning methodologies applied to digital pathology in prostate cancer: a systematic review. Diagnostics (Basel). Aug 14, 2023;13(16):2676. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Li K, Yao S, Zhang Z, Cao B, Wilson C, Kalos D, et al. Efficient gradient boosting for prognostic biomarker discovery. Bioinformatics. Mar 04, 2022;38(6):1631-1638. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Chen T, Chen Y, Li H, Gao T, Tu H, Li S. Driver intent-based intersection autonomous driving collision avoidance reinforcement learning algorithm. Sensors (Basel). Dec 16, 2022;22(24):9943. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Huynh QT, Nguyen PH, Le HX, Ngo LT, Trinh NT, Tran MT, et al. Automatic acne object detection and acne severity grading using smartphone images and artificial intelligence. Diagnostics (Basel). Aug 03, 2022;12(8):1879. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Brooke BS, Schwartz TA, Pawlik TM. MOOSE reporting guidelines for meta-analyses of observational studies. JAMA Surg. Aug 01, 2021;156(8):787-788. [ CrossRef ] [ Medline ]
  • Scholten RJ, Clarke M, Hetherington J. The Cochrane collaboration. Eur J Clin Nutr. Aug 28, 2005;59 Suppl 1(S1):S147-S196. [ CrossRef ] [ Medline ]
  • Warrens MJ. Kappa coefficients for dichotomous-nominal classifications. Adv Data Anal Classif. Apr 07, 2020;15(1):193-208. [ CrossRef ]
  • Law M, Stewart D, Letts L, Pollock N, Bosch J. Guidelines for critical review of qualitative studies. McMaster University Occupational Therapy Evidence-Based Practice Research Group. URL: https://www.canchild.ca/system/tenon/assets/attachments/000/000/360/original/qualguide.pdf [accessed 2024-04-05]
  • Higgins JP, Morgan RL, Rooney AA, Taylor KW, Thayer KA, Silva RA, et al. Risk of bias in non-randomized studies - of exposure (ROBINS-E). ROBINS-E tool. URL: https://www.riskofbias.info/welcome/robins-e-tool [accessed 2024-04-02]
  • Nait Aicha A, Englebienne G, van Schooten KS, Pijnappels M, Kröse B. Deep learning to predict falls in older adults based on daily-life trunk accelerometry. Sensors (Basel). May 22, 2018;18(5):1654. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ribeiro NF, André J, Costa L, Santos CP. Development of a strategy to predict and detect falls using wearable sensors. J Med Syst. Apr 04, 2019;43(5):134. [ CrossRef ] [ Medline ]
  • Qiu H, Rehman RZ, Yu X, Xiong S. Application of wearable inertial sensors and a new test battery for distinguishing retrospective fallers from non-fallers among community-dwelling older people. Sci Rep. Nov 05, 2018;8(1):16349. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Dubois A, Bihl T, Bresciani JP. Automatic measurement of fall risk indicators in timed up and go test. Inform Health Soc Care. Sep 13, 2019;44(3):237-245. [ CrossRef ] [ Medline ]
  • Noh B, Youm C, Goh E, Lee M, Park H, Jeon H, et al. XGBoost based machine learning approach to predict the risk of fall in older adults using gait outcomes. Sci Rep. Jun 09, 2021;11(1):12183. [ CrossRef ] [ Medline ]
  • Hauth J, Jabri S, Kamran F, Feleke EW, Nigusie K, Ojeda LV, et al. Automated loss-of-balance event identification in older adults at risk of falls during real-world walking using wearable inertial measurement units. Sensors (Basel). Jul 07, 2021;21(14):4661. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lockhart TE, Soangra R, Yoon H, Wu T, Frames CW, Weaver R. Prediction of fall risk among community-dwelling older adults using a wearable system. Sci Rep. 2021;11(1):20976. [ CrossRef ]
  • Hsu YC, Weng HH, Kuo CY, Chu TP, Tsai YH. Prediction of fall events during admission using eXtreme gradient boosting: a comparative validation study. Sci Rep. Oct 08, 2020;10(1):16777. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hu Y, Bishnoi A, Kaur R, Sowers R, Hernandez ME. Exploration of machine learning to identify community dwelling older adults with balance dysfunction using short duration accelerometer data. In: Proceedings of the 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society. 2020. Presented at: EMBC '20; July 20-24, 2020;812-815; Montreal, QC. URL: https://ieeexplore.ieee.org/document/9175871 [ CrossRef ]
  • Ye C, Li J, Hao S, Liu M, Jin H, Zheng L, et al. Identification of elders at higher risk for fall with statewide electronic health records and a machine learning algorithm. Int J Med Inform. May 2020;137:104105. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Althobaiti T, Katsigiannis S, Ramzan N. Triaxial accelerometer-based falls and activities of daily life detection using machine learning. Sensors (Basel). Jul 06, 2020;20(13):3777. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Tunca C, Salur G, Ersoy C. Deep learning for fall risk assessment with inertial sensors: utilizing domain knowledge in spatio-temporal gait parameters. IEEE J Biomed Health Inform. Jul 2020;24(7):1994-2005. [ CrossRef ]
  • Kim K, Yun G, Park SK, Kim DH. Fall detection for the elderly based on 3-axis accelerometer and depth sensor fusion with random forest classifier. In: Proceedings of the 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 2019. Presented at: EMBC '19; July 23-27, 2019;4611-4614; Berlin, Germany. URL: https://ieeexplore.ieee.org/document/8856698 [ CrossRef ]
  • Gillain S, Boutaayamou M, Schwartz C, Brüls O, Bruyère O, Croisier JL, et al. Using supervised learning machine algorithm to identify future fallers based on gait patterns: a two-year longitudinal study. Exp Gerontol. Nov 2019;127:110730. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lo Y, Lynch SF, Urbanowicz RJ, Olson RS, Ritter AZ, Whitehouse CR, et al. Using machine learning on home health care assessments to predict fall risk. Stud Health Technol Inform. Aug 21, 2019;264:684-688. [ CrossRef ] [ Medline ]
  • Maray N, Ngu AH, Ni J, Debnath M, Wang L. Transfer learning on small datasets for improved fall detection. Sensors (Basel). Jan 18, 2023;23(3):1105. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ladios-Martin M, Cabañero-Martínez MJ, Fernández-de-Maya J, Ballesta-López FJ, Belso-Garzas A, Zamora-Aznar FM, et al. Development of a predictive inpatient falls risk model using machine learning. J Nurs Manag. Nov 30, 2022;30(8):3777-3786. [ CrossRef ] [ Medline ]
  • Eichler N, Raz S, Toledano-Shubi A, Livne D, Shimshoni I, Hel-Or H. Automatic and efficient fall risk assessment based on machine learning. Sensors (Basel). Feb 17, 2022;22(4):1557. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Tang YM, Wang YH, Feng XY, Zou QS, Wang Q, Ding J, et al. Diagnostic value of a vision-based intelligent gait analyzer in screening for gait abnormalities. Gait Posture. Jan 2022;91:205-211. [ CrossRef ] [ Medline ]
  • Greene BR, McManus K, Ader LG, Caulfield B. Unsupervised assessment of balance and falls risk using a smartphone and machine learning. Sensors (Basel). Jul 13, 2021;21(14):4770. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Roshdibenam V, Jogerst GJ, Butler NR, Baek S. Machine learning prediction of fall risk in older adults using timed up and go test kinematics. Sensors (Basel). May 17, 2021;21(10):3481. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Dubois A, Bihl T, Bresciani JP. Identifying fall risk predictors by monitoring daily activities at home using a depth sensor coupled to machine learning algorithms. Sensors (Basel). Mar 11, 2021;21(6):1957. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Vo MT, Thonglor R, Moncatar TJ, Han TD, Tejativaddhana P, Nakamura K. Fear of falling and associated factors among older adults in Southeast Asia: a systematic review. Public Health. Sep 2023;222:215-228. [ CrossRef ] [ Medline ]
  • Torun E, Az A, Akdemir T, Solakoğlu GA, Açiksari K, Güngörer B. Evaluation of the risk factors for falls in the geriatric population presenting to the emergency department. Ulus Travma Acil Cerrahi Derg. Aug 2023;29(8):897-903. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Son NK, Ryu YU, Jeong HW, Jang YH, Kim HD. Comparison of 2 different exercise approaches: Tai Chi versus Otago, in community-dwelling older women. J Geriatr Phys Ther. 2016;39(2):51-57. [ CrossRef ] [ Medline ]
  • Sawa R, Doi T, Tsutsumimoto K, Nakakubo S, Kurita S, Kiuchi Y, et al. Overlapping status of frailty and fear of falling: an elevated risk of incident disability in community-dwelling older adults. Aging Clin Exp Res. Sep 11, 2023;35(9):1937-1944. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Calazans JA, Permanyer I. Levels, trends, and determinants of cause-of-death diversity in a global perspective: 1990-2019. BMC Public Health. Apr 05, 2023;23(1):650. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kakara R, Bergen G, Burns E, Stevens M. Nonfatal and fatal falls among adults aged ≥65 years - United States, 2020-2021. MMWR Morb Mortal Wkly Rep. Sep 01, 2023;72(35):938-943. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Dostan A, Dobson CA, Vanicek N. Relationship between stair ascent gait speed, bone density and gait characteristics of postmenopausal women. PLoS One. Mar 22, 2023;18(3):e0283333. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Zheng Y, Wang X, Zhang ZK, Guo B, Dang L, He B, et al. Bushen Yijing Fang reduces fall risk in late postmenopausal women with osteopenia: a randomized double-blind and placebo-controlled trial. Sci Rep. Feb 14, 2019;9(1):2089. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Woelfle T, Bourguignon L, Lorscheider J, Kappos L, Naegelin Y, Jutzeler CR. Wearable sensor technologies to assess motor functions in people with multiple sclerosis: systematic scoping review and perspective. J Med Internet Res. Jul 27, 2023;25:e44428. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Abdollah V, Dief TN, Ralston J, Ho C, Rouhani H. Investigating the validity of a single tri-axial accelerometer mounted on the head for monitoring the activities of daily living and the timed-up and go test. Gait Posture. Oct 2021;90:137-140. [ CrossRef ] [ Medline ]
  • Mielke GI, de Almeida Mendes M, Ekelund U, Rowlands AV, Reichert FF, Crochemore-Silva I. Absolute intensity thresholds for tri-axial wrist and waist accelerometer-measured movement behaviors in adults. Scand J Med Sci Sports. Sep 12, 2023;33(9):1752-1764. [ CrossRef ] [ Medline ]
  • Löppönen A, Delecluse C, Suorsa K, Karavirta L, Leskinen T, Meulemans L, et al. Association of sit-to-stand capacity and free-living performance using Thigh-Worn accelerometers among 60- to 90-yr-old adults. Med Sci Sports Exerc. Sep 01, 2023;55(9):1525-1532. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • García-Soidán JL, Leirós-Rodríguez R, Romo-Pérez V, García-Liñeira J. Accelerometric assessment of postural balance in children: a systematic review. Diagnostics (Basel). Dec 22, 2020;11(1):8. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Leirós-Rodríguez R, García-Soidán JL, Romo-Pérez V. Analyzing the use of accelerometers as a method of early diagnosis of alterations in balance in elderly people: a systematic review. Sensors (Basel). Sep 09, 2019;19(18):3883. [ FREE Full text ] [ CrossRef ] [ Medline ]


Edited by A Mavragani; submitted 28.11.23; peer-reviewed by E Andrade, M Behzadifar, A Suárez; comments to author 09.01.24; revised version received 30.01.24; accepted 13.02.24; published 29.04.24.

©Ana González-Castro, Raquel Leirós-Rodríguez, Camino Prada-García, José Alberto Benítez-Andrades. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 29.04.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.


NeurIPS 2024

Conference Dates: (In person) 9 December - 15 December, 2024

Homepage: https://neurips.cc/Conferences/2024/

Call For Papers 

Author notification: Sep 25, 2024

Camera-ready, poster, and video submission: Oct 30, 2024 AOE

Submit at: https://openreview.net/group?id=NeurIPS.cc/2024/Conference  

The site will start accepting submissions on Apr 22, 2024.

Subscribe to these and other dates on the 2024 dates page.

The Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024) is an interdisciplinary conference that brings together researchers in machine learning, neuroscience, statistics, optimization, computer vision, natural language processing, life sciences, natural sciences, social sciences, and other adjacent fields. We invite submissions presenting new and original research on topics including but not limited to the following:

  • Applications (e.g., vision, language, speech and audio, Creative AI)
  • Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
  • Evaluation (e.g., methodology, meta studies, replicability and validity, human-in-the-loop)
  • General machine learning (supervised, unsupervised, online, active, etc.)
  • Infrastructure (e.g., libraries, improved implementation and scalability, distributed solutions)
  • Machine learning for sciences (e.g. climate, health, life sciences, physics, social sciences)
  • Neuroscience and cognitive science (e.g., neural coding, brain-computer interfaces)
  • Optimization (e.g., convex and non-convex, stochastic, robust)
  • Probabilistic methods (e.g., variational inference, causal inference, Gaussian processes)
  • Reinforcement learning (e.g., decision and control, planning, hierarchical RL, robotics)
  • Social and economic aspects of machine learning (e.g., fairness, interpretability, human-AI interaction, privacy, safety, strategic behavior)
  • Theory (e.g., control theory, learning theory, algorithmic game theory)

Machine learning is a rapidly evolving field, and so we welcome interdisciplinary submissions that do not fit neatly into existing categories.

Authors are asked to confirm that their submissions accord with the NeurIPS code of conduct.

Formatting instructions: All submissions must be in PDF format; the single PDF file must include, in this order:

  • The submitted paper
  • Technical appendices that support the paper with additional proofs, derivations, or results 
  • The NeurIPS paper checklist  

Other supplementary materials, such as data and code, can be uploaded as a ZIP file.

The main text of a submitted paper is limited to nine content pages, including all figures and tables. Additional pages containing references don't count as content pages. If your submission is accepted, you will be allowed an additional content page for the camera-ready version.

The main text and references may be followed by technical appendices, for which there is no page limit.

The maximum file size for a full submission, which includes technical appendices, is 50MB.

Authors are encouraged to submit a separate ZIP file that contains further supplementary material like data or source code, when applicable.

You must format your submission using the NeurIPS 2024 LaTeX style file (https://www.overleaf.com/read/kcffhyrygkqc#85f742), which includes a “preprint” option for non-anonymous preprints posted online. Submissions that violate the NeurIPS style (e.g., by decreasing margins or font sizes) or page limits may be rejected without further review. Papers may be rejected without consideration of their merits if they fail to meet the submission requirements, as described in this document.
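For orientation, using the style file amounts to loading it in an otherwise standard LaTeX article. The following is a minimal illustrative sketch, not part of the official call: it assumes the style file is named neurips_2024.sty and supports the “preprint” and “final” options, as recent NeurIPS templates do; consult the downloaded template for the authoritative usage.

    \documentclass{article}

    % Default (no option): produces the anonymized submission version for review.
    \usepackage{neurips_2024}

    % For a non-anonymous preprint posted online, load it instead as:
    %   \usepackage[preprint]{neurips_2024}
    % For the camera-ready version of an accepted paper:
    %   \usepackage[final]{neurips_2024}

    \title{An Illustrative Title}

    \begin{document}

    \maketitle

    \begin{abstract}
      A short abstract.
    \end{abstract}

    \section{Introduction}
    Body text goes here, within the nine-content-page limit.

    \end{document}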

Paper checklist: In order to improve the rigor and transparency of research submitted to and published at NeurIPS, authors are required to complete a paper checklist. The paper checklist is intended to help authors reflect on a wide variety of issues relating to responsible machine learning research, including reproducibility, transparency, research ethics, and societal impact. The checklist forms part of the paper submission, but does not count towards the page limit.

Supplementary material: While all technical appendices should be included as part of the main paper submission PDF, authors may submit up to 100MB of supplementary material, such as data or source code, in ZIP format. Supplementary material should be material created by the authors that directly supports the submission content. Like submissions, supplementary material must be anonymized. Looking at supplementary material is at the discretion of the reviewers.

We encourage authors to upload their code and data as part of their supplementary material in order to help reviewers assess the quality of the work. Check the policy as well as code submission guidelines and templates for further details.

Use of Large Language Models (LLMs): We welcome authors to use any tool that is suitable for preparing high-quality papers and research. However, we ask authors to keep in mind two important criteria. First, we expect papers to fully describe their methodology, and any tool that is important to that methodology, including the use of LLMs, should also be described. For example, authors should mention tools (including LLMs) that were used for data processing or filtering, visualization, facilitating or running experiments, and proving theorems. It may also be advisable to describe the use of LLMs in implementing the method (if this corresponds to an important, original, or non-standard component of the approach). Second, authors are responsible for the entire content of the paper, including all text and figures, so while authors are welcome to use any tool they wish for writing the paper, they must ensure that all text is correct and original.

Double-blind reviewing: All submissions must be anonymized and may not contain any identifying information that may violate the double-blind reviewing policy. This policy applies to any supplementary or linked material as well, including code. If you are including links to any external material, it is your responsibility to guarantee anonymous browsing. Please do not include acknowledgements at submission time. If you need to cite one of your own papers, you should do so with adequate anonymization to preserve double-blind reviewing. For instance, write “In the previous work of Smith et al. [1]…” rather than “In our previous work [1]…”. If you need to cite one of your own papers that is in submission to NeurIPS and not available as a non-anonymous preprint, then include a copy of the cited anonymized submission in the supplementary material and write “Anonymous et al. [1] concurrently show…”. Any papers found to be violating this policy will be rejected.

OpenReview: We are using OpenReview to manage submissions. The reviews and author responses will not be public initially (but may be made public later; see below). As in previous years, submissions under review will be visible only to their assigned program committee. We will not be soliciting comments from the general public during the reviewing process. Anyone who plans to submit a paper as an author or a co-author will need to create (or update) their OpenReview profile by the full paper submission deadline. Your OpenReview profile can be edited by logging in and clicking on your name at https://openreview.net/. This takes you to a URL "https://openreview.net/profile?id=~[Firstname]_[Lastname][n]" where the last part is your profile name, e.g., ~Wei_Zhang1. OpenReview profiles must be up to date, listing all of the authors' publications and their current affiliations. The easiest way to import publications is through DBLP, though this is not required; see the FAQ. Submissions without updated OpenReview profiles will be desk rejected. The information entered in the profile is critical for ensuring that conflicts of interest and reviewer matching are handled properly. Because of the rapid growth of NeurIPS, we request that all authors help with reviewing papers, if asked to do so. We need everyone's help in maintaining the high scientific quality of NeurIPS.

Please be aware that OpenReview has a moderation policy for newly created profiles: New profiles created without an institutional email will go through a moderation process that can take up to two weeks. New profiles created with an institutional email will be activated automatically.

Venue home page: https://openreview.net/group?id=NeurIPS.cc/2024/Conference

If you have any questions, please refer to the FAQ: https://openreview.net/faq

Ethics review: Reviewers and ACs may flag submissions for ethics review. Flagged submissions will be sent to an ethics review committee for comments. Comments from ethics reviewers will be considered by the primary reviewers and AC as part of their deliberation. They will also be visible to authors, who will have an opportunity to respond. Ethics reviewers do not have the authority to reject papers, but in extreme cases papers may be rejected by the program chairs on ethical grounds, regardless of scientific quality or contribution.

Preprints: The existence of non-anonymous preprints (on arXiv or other online repositories, personal websites, social media) will not result in rejection. If you choose to use the NeurIPS style for the preprint version, you must use the “preprint” option rather than the “final” option. Reviewers will be instructed not to actively look for such preprints, but encountering them will not constitute a conflict of interest. Authors may submit anonymized work to NeurIPS that is already available as a preprint (e.g., on arXiv) without citing it. Note that public versions of the submission should not say "Under review at NeurIPS" or similar.

Dual submissions: Submissions that are substantially similar to papers that the authors have previously published or submitted in parallel to other peer-reviewed venues with proceedings or journals may not be submitted to NeurIPS. Papers previously presented at workshops are permitted, so long as they did not appear in a conference proceedings (e.g., CVPRW proceedings), a journal or a book.  NeurIPS coordinates with other conferences to identify dual submissions.  The NeurIPS policy on dual submissions applies for the entire duration of the reviewing process.  Slicing contributions too thinly is discouraged.  The reviewing process will treat any other submission by an overlapping set of authors as prior work. If publishing one would render the other too incremental, both may be rejected.

Anti-collusion: NeurIPS does not tolerate any collusion whereby authors secretly cooperate with reviewers, ACs or SACs to obtain favorable reviews. 

Author responses: Authors will have one week to view and respond to initial reviews. Author responses may not contain any identifying information that may violate the double-blind reviewing policy. Authors may not submit revisions of their paper or supplementary material, but may post their responses as a discussion in OpenReview. This is intended to reduce the burden on authors of having to revise their paper in a rush during the short rebuttal period.

After the initial response period, authors will be able to respond to any further reviewer/AC questions and comments by posting on the submission’s forum page. The program chairs reserve the right to solicit additional reviews after the initial author response period.  These reviews will become visible to the authors as they are added to OpenReview, and authors will have a chance to respond to them.

After the notification deadline, accepted and opted-in rejected papers will be made public and open for non-anonymous public commenting. Their anonymous reviews, meta-reviews, author responses and reviewer responses will also be made public. Authors of rejected papers will have two weeks after the notification deadline to opt in to make their deanonymized rejected papers public in OpenReview.  These papers are not counted as NeurIPS publications and will be shown as rejected in OpenReview.

Publication of accepted submissions: Reviews, meta-reviews, and any discussion with the authors will be made public for accepted papers (but reviewer, area chair, and senior area chair identities will remain anonymous). Camera-ready papers will be due in advance of the conference. All camera-ready papers must include a funding disclosure. We strongly encourage accompanying code and data to be submitted with accepted papers when appropriate, as per the code submission policy. Authors will be allowed to make minor changes for a short period of time after the conference.

Contemporaneous Work: For the purpose of the reviewing process, papers that appeared online within two months of a submission will generally be considered "contemporaneous" in the sense that the submission will not be rejected on the basis of the comparison to contemporaneous work. Authors are still expected to cite and discuss contemporaneous work and perform empirical comparisons to the degree feasible. Any paper that influenced the submission is considered prior work and must be cited and discussed as such. Submissions that are very similar to contemporaneous work will undergo additional scrutiny to prevent cases of plagiarism and missing credit to prior work.

Plagiarism is prohibited by the NeurIPS Code of Conduct.

Other Tracks: As in earlier years, we will host multiple tracks, such as datasets, competitions, tutorials, and workshops, in addition to the main track for which this call for papers is intended. See the conference homepage for updates and calls for participation in these tracks.

Experiments: As in past years, the program chairs will be measuring the quality and effectiveness of the review process via randomized controlled experiments. All experiments are independently reviewed and approved by an Institutional Review Board (IRB).

Financial Aid: Each paper may designate up to one (1) NeurIPS.cc account email address of a corresponding student author who confirms that they would need the support to attend the conference, and agrees to volunteer if selected. To be considered for Financial Aid, the student will also need to fill out the Financial Aid application when it becomes available.


