
AI is writing convincing fake research papers. How one scientist is fighting back


By StudyFinds Staff

Reviewed by Chris Melore

Research led by Ahmed Abdeen Hamed, Binghamton University and Xindong Wu, Hefei University of Technology

Sep 04, 2024


AI robot writing (© Emmy Ljs - stock.adobe.com)

BINGHAMTON, N.Y. —  “Publish or perish” has long been the mantra of academia. But what happens when the publications are penned not by perishing professors but by perpetually productive AIs? As artificial intelligence muscles its way into scientific writing, one researcher is fighting back with a tool that could change the game.

Large language models like ChatGPT are becoming increasingly sophisticated, and there’s growing concern about their potential misuse in academic and scientific circles. These models can produce text that mimics human writing, raising fears about the integrity of scientific literature. Now, Ahmed Abdeen Hamed, a visiting research fellow at Binghamton University, has developed a groundbreaking algorithm that might just be the silver bullet in this high-stakes game of academic authenticity.

Hamed’s creation, aptly named xFakeSci, is not just another run-of-the-mill detection tool. It’s a sophisticated machine-learning algorithm that can sniff out AI-generated papers with an astonishing accuracy of up to 94%. This isn’t just a marginal improvement; it’s a quantum leap, nearly doubling the success rate of conventional data-mining techniques.

“My main research is biomedical informatics, but because I work with medical publications, clinical trials, online resources and mining social media, I’m always concerned about the authenticity of the knowledge somebody is propagating,” Hamed explains in a statement.

His concern isn’t unfounded. The recent global pandemic saw a surge in false research, particularly in biomedical articles, highlighting the urgent need for robust verification methods.

In a study published in Scientific Reports, Hamed and his collaborator, Professor Xindong Wu from Hefei University of Technology in China, put xFakeSci through its paces. They created a testbed of 150 fake articles using ChatGPT, evenly distributed across three hot medical topics: Alzheimer’s, cancer, and depression. These AI-generated papers were then pitted against an equal number of genuine articles on the same subjects.

The algorithm uncovered distinctive patterns that set apart the AI-generated content from human-authored papers. One key difference lies in the use of bigrams – pairs of words that frequently appear together, such as “clinical trials” or “biomedical literature.” Surprisingly, the AI-generated papers contained fewer unique bigrams but used them more pervasively throughout the text.
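The published xFakeSci code is not reproduced in this article, but the bigram statistic itself is easy to illustrate. Below is a minimal Python sketch, an assumption-laden stand-in rather than the authors' implementation, showing how one might count a document's unique bigrams — the quantity Hamed found to be markedly lower in ChatGPT-written abstracts. The two sample strings are invented placeholders.

```python
import re
from collections import Counter

def bigram_stats(text: str) -> dict:
    """Count adjacent word pairs (bigrams) in a document. Illustrative only."""
    tokens = re.findall(r"[a-z]+", text.lower())   # crude word tokenizer
    bigrams = list(zip(tokens, tokens[1:]))        # adjacent word pairs
    counts = Counter(bigrams)
    return {
        "total_bigrams": len(bigrams),
        "unique_bigrams": len(counts),
        "most_reused": counts.most_common(3),      # e.g. ("clinical", "trials")
    }

# Placeholder texts; in the study these would be full ChatGPT and PubMed abstracts.
suspect = "Clinical trials show that clinical trials in the biomedical literature confirm clinical trials."
genuine = "We enrolled 120 participants in a randomised trial and measured plasma biomarkers at baseline."
print(bigram_stats(suspect))   # few unique bigrams, reused heavily
print(bigram_stats(genuine))   # a richer, less repetitive bigram vocabulary
```

On real abstracts, the ratio of unique to total bigrams is the kind of signal a classifier could learn from.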


“The first striking thing was that the number of bigrams were very few in the fake world, but in the real world, the bigrams were much more rich,” Hamed notes. “Also, in the fake world, despite the fact that there were very few bigrams, they were so connected to everything else.”

This pattern, the researchers theorize, stems from the fundamental difference in the objectives of AI models and human scientists. While ChatGPT aims to produce convincing text on a given topic, real scientists focus on accurately reporting their experimental methods and results.

“Because ChatGPT is still limited in its knowledge, it tries to convince you by using the most significant words,” Hamed explains. “It is not the job of a scientist to make a convincing argument to you. A real research paper reports honestly about what happened during an experiment and the method used. ChatGPT is about depth on a single point, while real science is about breadth.”

Study authors warn that as AI language models grow more sophisticated, the line between genuine and fake scientific literature could blur further. Tools like xFakeSci could become crucial gatekeepers, helping maintain the integrity of scientific publications in an age of ubiquitous AI-generated content.

However, Hamed remains cautiously optimistic. While proud of xFakeSci’s impressive 94% detection rate, he’s quick to point out that this still leaves room for improvement.

“We need to be humble about what we’ve accomplished. We’ve done something very important by raising awareness,” the researcher notes, acknowledging that six out of 100 fake papers still slip through the net.

Looking ahead, Hamed plans to expand xFakeSci’s capabilities beyond medicine, venturing into other scientific domains and even the humanities. The ultimate goal? A universal algorithm capable of detecting AI-generated content across all fields — regardless of the AI model used to create it.

Meanwhile, one thing is clear: the battle against AI-generated fake science is just beginning. With tools like xFakeSci, however, the scientific community is better equipped to face this challenge head-on, ensuring that the pursuit of knowledge remains firmly in human hands.

Paper Summary

Methodology

The researchers employed a two-pronged approach in their study. First, they used ChatGPT to generate 150 fake scientific abstracts, equally distributed across three medical topics: Alzheimer’s, cancer, and depression. These AI-generated abstracts were then compared to an equal number of genuine scientific abstracts from PubMed on the same topics.
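For the genuine half of such a testbed, abstracts can be pulled from PubMed programmatically. The sketch below uses Biopython's Entrez wrapper as one plausible way to do this; it is not taken from the paper, and the e-mail address, queries, and output file names are placeholders.

```python
from Bio import Entrez  # pip install biopython

Entrez.email = "you@example.org"  # NCBI requests a contact address (placeholder)

def fetch_abstracts(query: str, n: int = 50) -> str:
    """Return up to n PubMed abstracts matching the query as plain text."""
    ids = Entrez.read(Entrez.esearch(db="pubmed", term=query, retmax=n))["IdList"]
    handle = Entrez.efetch(db="pubmed", id=",".join(ids),
                           rettype="abstract", retmode="text")
    return handle.read()

# One file of genuine abstracts per topic used in the study.
for topic in ("Alzheimer disease", "cancer", "depression"):
    with open(f"{topic}_abstracts.txt", "w", encoding="utf-8") as f:
        f.write(fetch_abstracts(f"{topic}[Title/Abstract]"))
```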

The xFakeSci algorithm was developed to analyze these texts, focusing on two main features: the frequency and distribution of bigrams (pairs of words that often appear together) and how these bigrams connect to other words and concepts in the text. The algorithm uses machine learning techniques to identify patterns that differentiate AI-generated text from human-written scientific articles.
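As a concrete, heavily simplified illustration of that pipeline, the sketch below computes two stand-in features — bigram richness and a rough bigram-connectivity score — and feeds them to a scikit-learn logistic regression. The feature definitions, the choice of classifier, and the tiny placeholder training set are all assumptions for illustration; this is not the published xFakeSci implementation.

```python
import re
from collections import Counter, defaultdict
from sklearn.linear_model import LogisticRegression

def fake_science_features(text: str) -> list:
    """Two toy features inspired by the paper: bigram richness and bigram connectivity."""
    tokens = re.findall(r"[a-z]+", text.lower())
    bigrams = list(zip(tokens, tokens[1:]))
    if not bigrams:
        return [0.0, 0.0]
    counts = Counter(bigrams)
    richness = len(counts) / len(bigrams)  # distinct bigrams per bigram occurrence
    # Rough "connectivity": how many distinct neighbouring words each bigram touches.
    neighbours = defaultdict(set)
    for i, bg in enumerate(bigrams):
        if i > 0:
            neighbours[bg].add(tokens[i - 1])
        if i + 2 < len(tokens):
            neighbours[bg].add(tokens[i + 2])
    connectivity = sum(len(v) for v in neighbours.values()) / len(counts)
    return [richness, connectivity]

# Placeholder corpus; the study used 150 ChatGPT abstracts and 150 PubMed abstracts.
train_texts = [
    "Clinical trials show that clinical trials confirm clinical trials in depression.",
    "We randomised 80 outpatients with depression and assessed remission at 12 weeks.",
]
train_labels = [1, 0]  # 1 = AI-generated, 0 = genuine

clf = LogisticRegression().fit([fake_science_features(t) for t in train_texts], train_labels)
new_abstract = "Paste the abstract under scrutiny here."
print(clf.predict([fake_science_features(new_abstract)]))  # 1 flags likely AI text
```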

Key Results

The study revealed significant differences between AI-generated and human-written scientific articles. AI-generated texts tended to have fewer unique bigrams but used them more extensively throughout the document. The xFakeSci algorithm demonstrated an impressive accuracy rate of up to 94% in identifying AI-generated fake science, substantially outperforming traditional data analysis methods, which typically achieve accuracy rates between 38% and 52%.

Study Limitations

The research primarily focused on scientific abstracts rather than full-length articles, which might exhibit different patterns. The AI-generated content was created using a specific version of ChatGPT, and results may vary with different AI models or as these models evolve.

Additionally, the study currently covers only three medical topics, and its applicability to other scientific fields remains to be tested. The researchers also acknowledge that even with its high accuracy, xFakeSci still misses 6% of fake papers, indicating room for improvement.

Discussion & Takeaways

The study highlights the growing challenge of maintaining scientific integrity in an era of advanced AI language models. It suggests that tools like xFakeSci could play a crucial role in the scientific publishing process, helping to filter out AI-generated fake science. The researchers emphasize the need for ongoing development of such tools to keep pace with evolving AI capabilities. They also stress the importance of raising awareness about this issue in the scientific community and call for the development of ethical guidelines and policies regarding the use of AI in scientific writing and publishing.

Funding & Disclosures

The research was supported by the European Union’s Horizon 2020 research and innovation program, the Foundation for Polish Science, the European Regional Development Fund, and the National Natural Science Foundation of China. The authors declared no competing interests. Ahmed Abdeen Hamed’s work was conducted as part of the Complex Adaptive Systems and Computational Intelligence Lab at Binghamton University, under the supervision of George J. Klir Professor of Systems Science Luis M. Rocha.




Detecting manuscripts written by generative AI and AI-assisted technologies in the field of pharmacy practice

Ammar Abdulrahman Jairoun

a Health and Safety Department, Dubai Municipality, Dubai, UAE;

b Discipline of Clinical Pharmacy, School of Pharmaceutical Sciences, Universiti Sains Malaysia (USM), George Town, Malaysia;

Faris El-Dahiyat

c Clinical Pharmacy Program, College of Pharmacy, Al Ain University, Al Ain, UAE;

d Artificial Intelligence Research Center, Al Ain University, Al Ain, UAE

Ghaleb A. ElRefae

Sabaa Saleh Al-Hemyari

e Pharmacy Department, Emirates Health Services, Dubai, UAE

Moyad Shahwan

f Centre of Medical and Bio-allied Health Sciences Research, Ajman University, Ajman, UAE;

g Department of Clinical Sciences, College of Pharmacy and Health Sciences, Ajman University, Ajman, UAE

Samer H. Zyoud

h Department of Mathematics and Sciences, Ajman University, Ajman, UAE

Khawla Abu Hammour

i Department of Biopharmaceutics and Clinical Pharmacy, Faculty of Pharmacy, The University of Jordan, Amman, Jordan

Zaheer-Ud-Din Babar

j Department of Pharmacy, School of Applied Sciences, University of Huddersfield, Huddersfield, UK

Generative AI can be a powerful research tool, but researchers must employ it ethically and transparently. This commentary addresses how the editors of pharmacy practice journals can identify manuscripts generated by generative AI and AI-assisted technologies. Editors and reviewers must stay well-informed about developments in AI technologies to effectively recognise AI-written papers. Editors should safeguard the reliability of journal publishing and sustain industry standards for pharmacy practice by implementing the crucial strategies outlined in this editorial. Although obstacles, including ignorance, time constraints, and protean AI strategies, might hinder detection efforts, several facilitators can help overcome those obstacles. Pharmacy practice journal editors and reviewers would benefit from educational programmes, collaborations with AI experts, and sophisticated plagiarism-detection techniques geared toward accurately identifying AI-generated text. Academics and practitioners can further uphold the integrity of published research through transparent reporting and ethical standards. Pharmacy practice journal staffs can sustain academic rigour and guarantee the validity of scholarly work by recognising and addressing the relevant barriers and utilising the proper enablers. Navigating the changing world of AI-generated content and preserving standards of excellence in pharmaceutical research and practice requires a proactive strategy of constant learning and community participation.

1. The detection of artificial intelligence–generated manuscripts

Large language models are highly advanced generative artificial intelligence (AI) algorithms trained on vast amounts of language data. These models have progressed remarkably in recent years and have been applied in widely used writing tools like OpenAI’s ChatGPT, a popular chatbot capable of analysing text and generating new content in response to user prompts. These tools have had an immediate and profound impact on academics who write articles and the journals that publish them.

Language-based AI can create responses that flow naturally during conversations. It can also rapidly produce written works, from poems to fan fiction to children’s books (Nolan, 2023). ChatGPT has passed the theoretical portion of the United States Medical Licensing Examination without spending years in medical school (DePeau-Wilson, 2023). Furthermore, language-based AI has already entered the scientific world; according to a Nature article, ChatGPT has been listed as an author on four preprint manuscripts (Stokel-Walker, 2023). Additionally, AI-generated documents have been referenced in various articles (Getahun, 2022).

Healthcare academics, like any other researchers, are also influenced by these tools, which can be deceptive and carry significant drawbacks alongside their benefits. This is underscored by ChatGPT’s ability to pass the theory portion of the United States Medical Licensing Examination without any formal training or years of attending medical school (Anderson et al., 2023).

A study focused on research involving AI chatbots such as ChatGPT identified notable drawbacks: medicine is not a one-person endeavour, and it draws on the expertise, cognitive abilities, and practice-based learning of many healthcare workers (HCWs). Human involvement therefore remains necessary alongside AI, especially in medical tasks such as providing medical consultations, supporting clinical decision-making, assisting with patient discharge summaries, writing and translating, and mediating interactions among HCW team members when formulating effective patient care and policy decisions (Khosravi et al., 2023).

AI based on natural language models, such as ChatGPT, is a promising tool for producing conversational writing for different types of articles in sports and exercise medicine (SEM). Scientific integrity, however, may be threatened by issues related to its use, including equity, accuracy, detection, and ethics. Even if fabricated references would cause such papers to be rejected by high-ranking peer-reviewed journals, the SEM community still needs to be aware of these dangers to scientific integrity and to safeguard its intellectual property, and academic institutions and scientific publishers should strengthen their safeguards in light of this threat (Anderson et al., 2023).

The rise of AI-generated content has spurred efforts to distinguish it from human-created content (Else, 2023). Several tools like GPTZero, GPT-2 Output Detector, and AI Detector have been developed to determine whether a given text was produced using current AI language models. These tools assess whether a text is “Real” (human-generated) or “Fake” (AI-generated) and provide a confidence percentage (Campagnola, 2022).
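Several of these detectors reportedly lean on perplexity, a measure of how “surprised” a language model is by a passage (the statistic the Campagnola reference explains). Below is a hedged sketch of that idea using the openly available GPT-2 model from Hugging Face; the 60.0 threshold is purely illustrative, the sample passage is a placeholder, and none of the commercial tools named above publish their exact methods.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tok(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

# Low perplexity means highly predictable text, which some detectors treat as a
# weak signal of machine generation; the cut-off below is illustrative only.
passage = "Paste the manuscript text to be screened here."
print("Possibly AI-generated" if perplexity(passage) < 60.0 else "Likely human-written")
```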

Due to the controversies surrounding ChatGPT’s risks and potential benefits (Jairoun et al., 2023; Abu Hammour et al., 2023), pharmacy practitioners have had a mixed reaction toward its practical and academic applications.

Recognising the rapid proliferation of language-based AI technologies, the International Committee of Medical Journal Editors (ICMJE) updated its guidelines in May 2023 to include specific recommendations concerning AI-assisted technology. These revised guidelines now apply to all articles submitted to CMAJ, which follows the ICMJE policy (Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals, 2023).

Language-based AI technologies present both opportunities and challenges for researchers, publishers, and the wider scientific community.

This commentary addresses how the editors of pharmacy practice journals can identify manuscripts generated by generative AI and AI-assisted technologies. Editors and reviewers must stay well-informed about developments in AI technologies to effectively recognise AI-written papers. The following methods can assist them in their efforts:

  • Keep Up-To-Date with AI Developments: Keeping abreast of the latest developments in generative AI is essential. Editors and reviewers should read extensively, attend conferences, and rely on reputable sources to understand AI systems’ potential and limitations.
  • Identify Strange Language Patterns: AI-generated manuscripts will typically contain inconsistent or unexpected language patterns. Editors should watch for abrupt changes in writing style, sentence construction, or vocabulary that do not match authors’ experience or previous contributions to the journal.
  • Use Plagiarism-Detection Tools: Employ plagiarism-detection software to identify potential duplicates of previously published content. While AI-generated texts may not be exact copies, they can still contain content similar to that from various sources.
  • Scrutinise References and Citations: Thoroughly examine the references and citations provided. AI-generated content can include inconsistencies, reference unrelated and obscure sources, or fail to adhere to the journal’s formatting requirements.
  • Compare Articles with Existing AI Literature: Compare submitted articles with existing AI-generated articles to identify specific terms or patterns commonly used by AI models (a minimal sketch of one way to automate this comparison follows this list).
  • Examine Figures and Tables: Verify the accuracy of data presented in figures and tables. AI-generated manuscripts can include fabricated or misleading data inconsistent with the study’s objectives and findings.
  • Verify Authorship: Confirm the affiliations, email addresses, and prior publications of the corresponding author and co-authors. Contact authors to corroborate their genuine participation in a study.
  • Evaluate Submission Metadata: Check the manuscript’s metadata, such as file characteristics and creation date. AI-generated documents can exhibit unusual metadata patterns.
  • Request AI Model Code and Raw Data: Encourage authors to provide the AI model code and raw data they used in their study. Legitimate authors should have access to these details, whereas AI-generated texts may lack them.
  • Continuously Monitor Published Articles: Monitor published articles for any signs of AI-generated material even after initial checks. Some AI-generated articles may pass initial scrutiny but can be identified through further analysis later on.
  • Seek Assistance from AI Experts: If unsure about a manuscript’s origin, seek advice from AI or natural language processing professionals.
  • Encourage Ethical Engagement With AI: Educate scholars on the potential academic abuses of AI technologies and define a set of ethical guidelines to support their use and development.
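As a concrete illustration of the comparison step flagged in the list above, the sketch below ranks a submission against a reference corpus of known AI-generated abstracts using TF-IDF vectors and cosine similarity from scikit-learn. This is an assumed workflow rather than a tool any journal named here actually uses; `ai_corpus` and `submission` are placeholders an editorial office would replace with its own texts.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder data an editorial office might keep on file.
ai_corpus = [
    "Known ChatGPT-generated abstract about community pharmacy interventions.",
    "Known ChatGPT-generated abstract about medication adherence programmes.",
]
submission = "Text of the manuscript currently under review."

# Unigrams and bigrams tend to capture the repetitive phrasing flagged by detectors.
vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
matrix = vec.fit_transform(ai_corpus + [submission])

# Cosine similarity between the submission (last row) and each reference text.
scores = cosine_similarity(matrix[len(ai_corpus)], matrix[:len(ai_corpus)]).ravel()
print("Closest match similarity:", round(float(scores.max()), 3))  # high values merit a closer look
```

A high similarity score is only a prompt for closer human scrutiny, not proof of AI authorship.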

2. Enablers and challenges associated with the implementation of AI-generated manuscript detection

To efficiently identify AI-generated articles, journal editors can adopt a multi-layered approach. In addition to plagiarism-detection tools, they can invest in AI-based content analysis tools that examine language patterns and writing styles to identify characteristics unique to AI-generated texts. Additionally, editors can encourage authors to disclose their use of AI and to provide access to their AI model code and raw data to facilitate verification. Journals can create specific guidelines and checklists for reviewers to aid in assessing articles for potential AI involvement. These guidelines should be updated regularly as AI tools develop.

One significant obstacle to identifying articles created by generative AI and AI-assisted technologies in pharmacy practice journals is the potential lack of awareness among journal editors and reviewers about recent developments in generative AI technologies and methods for detecting their products. Staying up-to-date with these rapidly developing technologies can be daunting. Additionally, the limited time available for manuscript reviews may impede the in-depth analysis required to reliably identify AI-generated text. Moreover, editors and reviewers may lack ready access to specialised AI tools and resources that facilitate detection operations, especially with high submission volumes.

Nonetheless, several facilitators can aid in successfully implementing detection tactics. Educational programmes and training sessions on AI developments can enhance journal editors’ and reviewers’ knowledge and skills. AI professionals or experts in natural language processing can provide valuable assistance and insights. Powerful plagiarism-detection technologies can help identify potential AI-generated work by comparing submissions with existing text. Journals could provide peer reviewer training that includes AI detection to equip reviewers with the necessary tools.

Transparent communication between authors and editors regarding any use of AI can promote openness and facilitate detection efforts. Establishing clear expectations and revising editorial policies to enforce ethical standards related to AI-generated material can further promote compliance. By being aware of the challenges and utilising facilitators, pharmacy practice editors and reviewers can improve their ability to detect AI-generated papers and safeguard the integrity of their field’s published research.

Successfully implementing these techniques will require a concerted effort from the academic community and journal publishers. Regular workshops and seminars on AI developments should be organised to keep editors and reviewers informed. Collaborative networks of journal staffs and AI experts would encourage the sharing of knowledge and best practices in AI detection. Journal publishers can also explore partnerships with AI software developers to access specialised tools and resources.

While identifying AI-generated content submitted to pharmacy practice journals is challenging, editors and reviewers can significantly enhance their detection capabilities by adopting appropriate strategies and collaborating with AI experts. By proactively addressing issues raised by AI-generated content and staying informed about AI developments, journals can ensure the integrity of their publications and maintain the trust of their readers and the broader academic community.

3. Optimal and prudent utilisation of artificial intelligence

Authors incorporating artificial intelligence (AI) and AI-assisted technology into their writing should adhere to the following guidelines:

  • Leverage these tools to augment language and improve readability exclusively; refrain from substituting them for critical research tasks such as data interpretation or scientific conclusion formulation.
  • Employ the technology under human supervision and control, meticulously reviewing and editing the output. AI, while capable of producing seemingly authoritative information, may introduce biases, inaccuracies, or incompleteness.
  • Avoid attributing authorship to AI or including AI and AI-assisted technologies as authors or co-authors. As per Elsevier's AI author policy, authorship responsibilities and tasks are exclusive to and executed by humans.
  • Transparently communicate the use of artificial intelligence (AI) and AI-assisted technologies in the writing process. Declarations about the utilisation of AI will be incorporated into the published work when authors make such statements. Ultimately, authors bear responsibility and accountability for the content of their work.

4. Conclusion

Generative AI can be a powerful research tool, but researchers must employ it ethically and transparently. Editors should safeguard the reliability of journal publishing and sustain industry standards for pharmacy practice by implementing the crucial strategies outlined in this editorial. Although obstacles, including ignorance, time constraints, and protean AI strategies, might hinder detection efforts, several facilitators can help overcome those obstacles. Pharmacy practice journal editors and reviewers would benefit from educational programmes, collaborations with AI experts, and sophisticated plagiarism-detection techniques geared toward accurately identifying AI-generated text. Academics and practitioners can further uphold the integrity of published research through transparent reporting and ethical standards. Pharmacy practice journal staffs can sustain academic rigour and guarantee the validity of scholarly work by recognising and addressing the relevant barriers and utilising the proper enablers. Navigating the changing world of AI-generated content and preserving standards of excellence in pharmaceutical research and practice requires a proactive strategy of constant learning and community participation.

  • Abu Hammour, K., Alhamad, H., Al-Ashwal, F. Y., Halboup, A., Abu Farha, R., & Abu Hammour, A. (2023). ChatGPT in pharmacy practice: A cross-sectional exploration of Jordanian pharmacists’ perception, practice, and concerns. Journal of Pharmaceutical Policy and Practice, 16(1), 115. doi: 10.1186/s40545-023-00624-2
  • Anderson, N., Belavy, D. L., Perle, S. M., Hendricks, S., Hespanhol, L., Verhagen, E., & Memon, A. R. (2023). AI did not write this manuscript, or did it? Can we trick the AI text detector into generated texts? The potential future of ChatGPT and AI in sports & exercise medicine manuscript generation. BMJ Specialist Journals, e001568.
  • Campagnola, C. (2022). Perplexity in language models. Towards Data Science. https://towardsdatascience.com/perplexity-in-language-models-87a196019a94
  • DePeau-Wilson, M. (2023). AI passes U.S. Medical Licensing Exam. MedPage Today. https://www.medpagetoday.com/special-reports/exclusives/102705
  • Else, H. (2023). Abstracts written by ChatGPT fool scientists. Nature, 613(7944), 423. doi: 10.1038/d41586-023-00056-7
  • Getahun, H. (2022). After an AI bot wrote a scientific paper on itself, the researcher behind the experiment says she hopes she didn’t open a “pandora’s box”. Insider. https://www.insider.com/artificial-intelligence-bot-wrote-scientific-paperon-
  • Jairoun, A. A., Al-Hemyari, S. S., Shahwan, M., Alnuaimi, G. R., Sa’ed, H. Z., & Jairoun, M. (2023). ChatGPT: Threat or boon to the future of pharmacy practice? Research in Social & Administrative Pharmacy, S1551–7411.
  • Khosravi, H., Shafie, M. R., Hajiabadi, M., Raihan, A. S., & Ahmed, I. (2023). Chatbots and ChatGPT: A bibliometric analysis and systematic review of publications in Web of Science and Scopus databases. arXiv preprint arXiv:2304.05436, 1–30.
  • Nolan, B. (2023). This man used AI to write and illustrate a children’s book in one weekend. He wasn’t prepared for the backlash. Business Insider. https://www.businessinsider.com/
  • Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. (2023). International Committee of Medical Journal Editors; updated May 2023. https://www.icmje.org/recommendations/ (accessed 6 July 2023).
  • Stokel-Walker, C. (2023). ChatGPT listed as author on research papers: Many scientists disapprove. Nature, 613(7945), 620–621. doi: 10.1038/d41586-023-00107-z


NATURE BRIEFING

13 January 2023

Daily briefing: AI-generated abstracts fool scientists

Flora Graham




Scientists and publishing specialists are concerned that the increasing sophistication of chatbots could undermine research integrity and accuracy. Credit: Ted Hsu/Alamy

AI-generated abstracts fool scientists

The artificial-intelligence (AI) chatbot ChatGPT can write fake abstracts that scientists have trouble distinguishing from those written by humans. The chatbot was asked to create 50 abstracts on the basis of the titles of articles in five high-impact medical journals. Reviewers spotted only 68% of the ChatGPT abstracts — performing roughly the same as AI-detector software. Researchers are divided over the implications: some find it worrying, but others think that serious scientists are unlikely to use AI-generated abstracts.

Nature | 4 min read

Reference: bioRxiv paper (not peer-reviewed)

Modified mRNA wins for COVID vaccines

The debate over the best design for mRNA in COVID-19 vaccines has been settled: chemically modified mRNA comes out on top. A vaccine made with modified mRNA elicited the same immune protection and caused fewer side effects at the same dose as a version with ‘natural’ mRNA in a comparison by vaccine maker CureVac. This means that the dosage can safely be increased for maximum protection. CureVac had long remained a proponent of ‘unmodified’ mRNA, even after Pfizer–BioNTech and Moderna had success with next-generation mRNA COVID-19 vaccines. The company has now switched its entire infectious-disease vaccine portfolio, leaving only a few ‘unmodified’ COVID-19 jabs under development in Asia.

Nature | 6 min read

US government’s scientific-integrity plan

US President Joe Biden’s administration has unveiled a long-awaited plan to prevent political interference in science conducted at government agencies. The plan aims to strengthen, expand and standardize scientific-integrity policies across agencies and establish an integrity panel to investigate violations by senior officials and political appointees. Government watchdogs praised the plan, but say further steps are needed to secure the role of scientists in government decision-making and prevent the type of political meddling that was reported under former president Donald Trump.

Nature | 5 min read

Reference: White House framework document

Image of the week

Flower inclusion of Symplocos kowalewskii preserved in Baltic amber

This nearly 40-million-year-old flower is by far the largest floral fossil ever discovered preserved in amber. Flower inclusions usually do not exceed 10 millimetres — this one is 28 millimetres across. But the sample, from the Baltic forests of northern Europe, sat in a German museum case and hadn’t been analysed for more than 150 years. Researchers extracted pollen from the sample and say it is closely related to the Asian species of Symplocos. They propose a new name for the flower: Symplocos kowalewskii. (Scientific American | 4 min read)

Reference: Scientific Reports paper. Image credit: Carola Radke, MfN (Museum für Naturkunde Berlin)

Features & opinion

Vaccine incentives do not backfire

Policymakers can stop worrying that offering cash to people who get their jabs could have unintended negative consequences. Trials in Sweden and the United States have shown that monetary incentives don’t reduce people’s trust in vaccine safety or erode their altruism. Communities that can afford incentives can now consider this approach, alongside improving vaccine access, without having to rely on untested assumptions, argues a Nature editorial.

Futures: science fiction from Nature

In the latest short stories for Nature ’s Futures series:

• The echo of a space-weather report prompts some homespun wisdom in ‘Sailors take warning’.

• There are lessons for us all in ‘Excerpts from the User Guide for the SynaTech-3411 3D Bio-Printer (the bits you actually bothered to read)’.

Five best science books this week

Andrew Robinson’s pick of the top five science books to read this week includes a breathless scientific narrative of the COVID-19 pandemic and an exploration of the ancient world power of Nubia.

Nature | 3 min read

Podcast: get caught up on science

This week, I spoke to the Nature Podcast about some of the most compelling science stories that you might have missed during the holiday season, including the new president of Brazil’s environmental policies, how glass frogs switch on their ‘invisibility cloak’ and what noises dinosaurs might have made.

Nature Podcast | 26 min listen

Subscribe to the Nature Podcast on Apple Podcasts, Google Podcasts or Spotify.

Quote of the day

“It was a moment when I could see the future and just leapt because it was so beautiful.”

An experience with Python and a set of iconic sea-level-rise figures transformed NASA computational oceanographer Chelle Gentemann into an open-science advocate. Now she is helping to spearhead the Year of Open Science in the United States. (Nature | 5 min read)

doi: https://doi.org/10.1038/d41586-023-00092-3

Today Leif Penguinson is concealed in the Murchison River Gorge in Australia’s Kalbarri National Park. Can you find the penguin?

The answer will be in Monday’s e-mail, all thanks to Briefing photo editor and penguin wrangler Tom Houghton. This newsletter is always evolving — tell us what you think! Please send your feedback to [email protected].

Thanks for reading,

Flora Graham, senior editor, Nature Briefing

With contributions by Katrina Krämer and Dyani Lewis

We’ve recently launched two new e-mails you might like. They’re free, and of course you can unsubscribe at any time.

• Nature Briefing: Cancer — a new weekly newsletter written with cancer researchers in mind. Sign up here to receive the next one.

• Nature Briefing: Translational Research covers biotechnology, drug discovery and pharma. Sign up here to get it free in your inbox each week.


The State of the AI Arms Race

Author Jeremy Kahn helps us think about the future.

Jeremy Kahn is the AI editor at Fortune Magazine and the author of the new book Mastering AI: A Survival Guide to Our Superpowered Future. In this podcast, Motley Fool employee Alex Friedman caught up with Kahn to talk about the current AI landscape.

They also discuss:

  • Bill Gates' initial hesitancy to invest in OpenAI.
  • Where LLMs go from here.
  • Developments in biotech.

To catch full episodes of all The Motley Fool's free podcasts, check out our podcast center. To get started investing, check out our beginner's guide to investing in stocks. A full transcript follows the video.

This video was recorded on August 31, 2024.

Jeremy Kahn: What is it that the human does best and what is it the machine can do best? Let's each be pre-eminent in its own realm and pair the two together. If we think about it more like that, then we are able to master AI, and we will be able to reap the rewards of the technology while minimizing a lot of the downside risks.

Mary Long: I'm Mary Long, and that's Jeremy Kahn. He's the AI editor at Fortune Magazine, and the author of the new book Mastering AI: A Survival Guide to Our Superpowered Future. My colleague Alex Friedman caught up with Kahn earlier this week to discuss the current state of the AI arms race and to take a look to the future. They also talk about what convinced Bill Gates to move forward with Microsoft's initial OpenAI investment, how LLMs are being used to shorten clinical trials, and the changing relationship between man and machine.

Alex Friedman: You are the Fortune Magazine AI editor, and you were a tech reporter before this. At what point did you first hear the term artificial intelligence, and when did you really start taking it seriously?

Jeremy Kahn: I guess I first heard the term probably sometime in 2015. Even before I had become a tech reporter at Bloomberg, I was doing some finance coverage and working for a magazine Bloomberg had, and I was doing a story about London's emerging tech hub. At the time, people said the most successful exit, but in some ways the most disappointing exit, from the London tech scene was this company called DeepMind, which I knew very little about. But it had just been acquired a couple of years before by [Alphabet's] Google for $650 million, which was the best exit that the London tech hub had had at the time. But people were upset because they thought that this could actually potentially be a huge future company, and they thought maybe it sold out too early. I didn't know anything about DeepMind, but I started to look into it, and that's when I first heard about artificial intelligence. Then a few months after writing that story, I got a chance to move over to the tech reporting team at Bloomberg, and then I actually started covering AI at that point. That was basically the beginning of 2016.

Alex Friedman: You've now been covering AI for years. I'm curious, after ChatGPT was released, were you surprised by the reaction and the adoption of the technology, or was this something you'd been waiting for a long time?

Jeremy Kahn: Well, yeah, I think all of us who had been following this for a while were wondering when this would break through into the general public consciousness. But I was surprised that it was ChatGPT that was the thing that did it, and I was surprised by the reaction to ChatGPT. I think in retrospect, I probably shouldn't have been, because I'd been following it for so long, and it seemed like the technology was making fairly constant progress. OpenAI, which I'd been following as well for years, had previously, months before ChatGPT was released, created a model called GPT-3 Instruct, which was a version of their GPT-3 large language model, which itself had been out even earlier than that. But it was one that was much easier to control. One of the things you could do with the Instruct model was have it function as a chatbot, have it engage in dialogue.

But OpenAI had not released this as a consumer-facing product; instead, they'd made it available to developers in this little thing they had called the AI Playground, a sandbox where developers could use their technology. They let some reporters play around with it, and I had played around with it a little bit and thought, this is interesting, but I didn't think it was going to be a huge thing. Then when ChatGPT initially came out, it looked like the same thing. I thought, this is just like an updated version of this GPT-3 Instruct model. But actually, I think the simpleness of the interface and the fact that they made it available freely for anyone to play around with just made the thing go viral. It was the first time people realized that they could actually interact with this AI model, and that you could do almost anything with it. I think the fact that it was designed to be in this dialogue through this very simple interface that looked like a Google Search bar made all the difference. When the GPT-3 Instruct model was out, it was actually much harder to use; it had all these dials that let you control the output, which were great things for developers, but actually made it much more confusing for the average person to use.

Alex Friedman: You tell a great story in Mastering AI about Bill Gates' skepticism about Microsoft's huge investment in OpenAI. Why was he so skeptical, and how did Satya Nadella get Gates to change his mind?

Jeremy Kahn: Gates had been a big skeptic of these large language models. He thought they were never going to work, that they were not the path forward to super-powerful AI. They seemed too fragile. They didn't get things right. He had played around with some earlier versions of OpenAI's technology. OpenAI had created a system called GPT-2, which was the first system that could write a bit like a person. But if you asked it to write more than a few sentences, it went off in strange directions and stopped making sense. He played around with GPT-3, and he thought GPT-3 was slightly better, but it still had some of the same problems. In particular, Gates thought the real test of a system would be whether it could solve hard questions from the AP (Advanced Placement) biology test. He had played around with GPT-3 on this, and it had failed on those AP biology test questions, and as a result, he just really didn't think it was going to go anywhere. But Satya Nadella knew this, and he let the OpenAI guys know that this was the case, that Gates was skeptical, and that Gates in particular had this interest in AP biology. Then, when OpenAI created its even more powerful model, GPT-4, which is now out and is the most powerful model currently available, one of the things it did before release was go to Khan Academy, the online tutoring organization that is a nonprofit.

They had asked if they could partner with Khan Academy, and it turned out one of the reasons they wanted to do this is that Khan Academy had really good data on AP biology test questions. It had lots of examples of those questions and lots of examples walking through how to solve and answer those questions successfully. They made sure that GPT-4 was trained on those questions and answers from Khan Academy. As a result, GPT-4 was able to totally ace the AP biology questions. When they brought that system back in to try out with Bill Gates and he tried his AP biology questions on GPT-4, it completely aced them, and Gates was blown away. That's what really convinced Gates that large language models maybe were a path toward super-powerful artificial intelligence. Since then, Gates has rowed back a little bit; he has said he thinks that this is a big step in that direction, but probably won't take us all the way to systems that can really reason as well as humans can across a whole range of tasks. But it definitely impressed him and convinced him to allow Satya Nadella to continue to invest in OpenAI.

Alex Friedman: How do you think Microsoft's $1 billion initial investment in OpenAI impacted the development of generative AI and the overall AI business landscape?

Jeremy Kahn: It was hugely important because it allowed OpenAI to go ahead and train first GPT-3 and then later GPT-4. It was really those models that helped create the landscape of generative AI systems that have come out from competitors and from researchers. Without that investment, it's not clear what would have happened. There were other people working on large language models, but the progress was much slower. There was no one that had devoted as much emphasis to them as OpenAI. I think without that billion dollar investment from Microsoft, it would have been difficult for that to happen as quickly as it did.

Alex Friedman: We're recording this interview at the end of August 2024. I'd love to hear your current analysis of the Big Tech AI arms race that's been taking place over the last decade and where you think it's headed.

Jeremy Kahn: It's fascinating. There's definitely a race on, and it's not over yet, and it's unclear who's going to win, but it does seem like the competitors are familiar ones, and that they're mostly these really big tech companies that have been around for the last two decades and dominated the Internet and mobile era. For the most part, it's Microsoft, it's Google, it's Meta, those three in particular, and then maybe trying to catch up are Apple and Amazon. Those companies really are the ones that are at the forefront of this, and then you have this one new entrant, which is OpenAI, but even OpenAI is very closely partnered with Microsoft. That's basically the constellation you have. All of these companies are racing toward ever more powerful AI models, basically around the same architecture, which is based on something called a neural network, which is, again, software loosely based on how the human brain works. Within neural networks, they are all using something called transformers, which was a system that Google actually invented in 2017 and started to implement behind the scenes in Google Search. It helped basically clarify what users' intent was when they were searching for things, because it could understand natural language much better. But Google did not scale up the systems as much as OpenAI did, at least initially, and did not try to create systems that could generate content and write the way OpenAI did. But of course, once ChatGPT came out, Google very quickly was under all this pressure to catch up. I think at this point they've shown that they can catch up and have caught up. Gemini, which is Google's most powerful model, is very close to, if not completely competitive with, OpenAI's GPT-4; on some metrics, it may even be ahead.

There are some other players in this race. There's a smaller company called Anthropic that was founded by people who broke away from OpenAI; it's closely aligned with Amazon at this point, and it is very much part of Amazon's efforts to try to catch up in this race. They have a model called Claude that's very competitive and powerful. Meta has jumped into this with both feet, and it's taken the approach that it wants these models to be open source and it wants everyone building on its technology. It thought the best way to do that was to offer the models for free. It doesn't have a big cloud computing business that it is trying to support by offering proprietary models. Instead, it thinks it's going to benefit the most by open-sourcing these models, and it's created a model called Llama that's very powerful and equally competitive. It's just interesting to see where this is going to go. The models keep getting larger; they are multimodal now, meaning that they can take in audio and video and output audio, video and still images as well.

They can reason about what they're seeing in imagery and in videos. They can engage in very natural conversation over a mobile phone or through audio. The models are very interesting, but it's not clear that they're going to overcome some of these fundamental limitations. You may have heard about things called hallucinations, where models make up information that seems plausible but is not accurate. It turns out that as the models have gotten more powerful, they haven't necessarily been hallucinating that much less, and some people think that's a fundamental problem that we're going to need some other technique to solve before we actually get to this Holy Grail of the AI field called artificial general intelligence. Again, that's AI that could think and reason like a person across almost any cognitive task. It's not clear how close we are to that, but we're clearly a lot closer than we were before ChatGPT came out in late 2022.

Alex Friedman: In your book, you talk about how Apple was slower than Microsoft or Google in rolling out AI. Since you sent Mastering AI to print, Apple has released their version of AI, creatively called Apple Intelligence. That's been in large part driven by a partnership between Apple and OpenAI. I'm curious, what do you think about Apple's rollout of their own AI platform?

Jeremy Kahn: Apple was behind, and I think they needed to catch up. I think Apple's instinct is always to try to do everything in house. They have been trying for years to work on advanced AI models of their own. They were not as successful, in part because I don't think they ever devoted quite the computing resources to it. Then also they had a problem with hiring some of the best talent, actually, even though Apple has a very good reputation. Among AI researchers in particular, they were not seen as at the cutting edge, and then it became a self-reinforcing problem, and they really needed to get ahead in this game. They ultimately decided to partner with OpenAI, which in some ways was an admission that they were behind. That has allowed them to get back in the game, though. They have so many devices out there that they have a huge distribution channel. Distribution channels do matter, and that's an advantage that they know they have, and they're trying to leverage it. We'll see what happens.

I think there's a chance that people will want to use whatever Apple is offering just because they like Apple products, and they already are embedded in the Apple ecosystem. It's a pain, as everyone knows, to switch your phone or switch to a different operating system for your laptop. I think most people don't want to do that. If they can have a product that's pretty good or very close to the top of the market without having to switch devices, that's what they're going to go for. Apple's been smart by partnering with OpenAI, which does have the leading models in the market. Apple's also taking an approach that is very much in keeping with its own strategic position around user privacy and data privacy, which is that they're going to try to keep as much as possible of any data that you're feeding to an AI chatbot or AI system on your device, and not have it transmitted over Wi-Fi or over your phone network to the cloud, because that introduces all sorts of security concerns and data privacy concerns. They've said they're only going to hand off the hardest queries to OpenAI's technology. Ultimately, they may try to have something that runs completely on device.

The way AI is developing, the most powerful models tend to be very large and have to be run in a data center, so you have to use them over the Cloud. But people are very quickly figuring out within six months, how to shrink those models down considerably, and in some cases, be able to mimic some of the capabilities of the largest models with models that are small enough to fit on your phone. I think Apple is betting that that trend is going to continue, and that for what most users are going to want to use a digital assistant for, what they can put on the phone is going to be sufficient.

Alex Friedman: What do you think about the partnership between Apple and OpenAI, and what this means for the space, especially considering the large stake that Microsoft has in OpenAI?

Jeremy Kahn: I don't know how stable a partnership it is. I can't imagine Microsoft's thrilled about it, given its rivalry with Apple, but it's a funny world in Silicon Valley. There's a lot of frenemy relationships. There's already quite a lot of tension in the Microsoft-OpenAI relationship, because OpenAI sells services directly to some of the same corporate customers that Microsoft is also trying to sell to. Microsoft wants those people to use OpenAI services, but on its own Azure cloud; it doesn't want them necessarily buying those services directly from OpenAI. You already had that tension, and then the Apple relationship just sort of adds to that tension. But it's also not clear how long-lasting that Apple-OpenAI relationship will be. I don't think Apple necessarily wants to be in a position where it's dependent on OpenAI for what is going to be maybe the most important piece of software that's on your device. While Apple is primarily a device company, it has always known that software helps sell those devices and helps cement people to those devices. I think if that glue or that cement is being provided by a third party, that's going to be problematic for Apple strategically in the longer run. Apple is trying very hard still to develop its own models that will be competitive in the marketplace. It just hasn't managed to do so yet. That's why I think it had to partner with OpenAI. But how long-lasting that partnership will be? We'll see.

Alex Friedman: Most people know OpenAI and ChatGPT. What comes next after ChatGPT? Where are we headed?

Jeremy Kahn: Well, I think the next thing we're going to see in the very near term is what they call AI agents. It'll probably be an interface that looks a lot like ChatGPT, but instead of just producing content for you, you can prompt the system to go out and take action for you, and it can take action for you using other software or across the Internet. It will become, I think, the main interface with the digital world for most people. Now you can ask ChatGPT to suggest an itinerary for a vacation, but you still have to go and book the vacation yourself. What these new systems will do is suggest the itinerary, and then you can say, that sounds great, go out and make all those bookings, and it will go out and do that for you. It may go out and research things for you and then take actions that you want it to take. It might go out and negotiate on your behalf. There are already some systems out there that are doing insurance negotiations on behalf of doctors to get pre-approvals for patients. I think that's an example of where this is all heading. Then within corporations, you're going to have these systems perform lots of tasks for you across different software, tasks that now have to be performed manually by people, often cutting and pasting things between different pieces of software and doing something with the thing you create. That's all going to be streamlined by these new AI agents.

Alex Friedman: Other than these agents, what are some of the other trends in AI that get you the most excited?

Jeremy Kahn: Agents are interesting, but I think in order to have agents that are really going to be effective, we're going to need AI that is more reliable and has better reasoning abilities. There are certainly some hints that that is coming. You hear tantalizing rumors and stories suggesting we're getting closer to agents that really will be able to reason much better than today's large language models have. We'll see where that goes. Some AI researchers really doubt this will be possible with the current types of architectures and algorithms, and think we're going to need new algorithms to achieve that reasoning ability. We'll see. But I think that's really interesting. I'm very excited about what AI in general is going to do for certain big fields of human endeavor. One is science and medicine. I'm very excited about AI being used to discover new drugs to treat conditions. I think we're going to make tremendous progress in curing and treating diseases through AI in the next couple of years. There are already systems today, a bit like large language models, that you can prompt in natural language to give you the recipe for a protein that will do a particular thing: it will bind to a particular site, it will have a certain toxicity profile. That's going to tremendously speed up drug discovery. Then across the sciences, you see people using AI to make new discoveries.

I think there's potential to discover new chemical compounds, which may have big implications for sustainability and our fight against climate change. I think we're going to see big breakthroughs in science. Then in medicine more generally, coupling AI with more wearable devices will give us many more opportunities for personalized medicine. That's one of the areas I'm most excited about. The other one I'm really excited about is the use of AI in education, despite the panic among a lot of teachers when ChatGPT came out that everyone was just going to use it to cheat. I think if we go a few years ahead and look back, we're going to see a tremendous transformation of education, where every student has a personal tutor that can walk them through how to solve problems and, designed the right way, not give away the answer but use a Socratic method to lead the student to the answer and really teach the student.
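The tutoring behavior Kahn describes is, to a first approximation, a prompting pattern rather than a special kind of model. A minimal sketch, assuming the OpenAI Python client and an illustrative model name, might look like the following; the system prompt, whose wording is an assumption rather than anything from the interview, is what keeps the tutor asking guiding questions instead of handing over the answer.

# A minimal "Socratic tutor" sketch on top of a chat model.
# Assumes the OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name and the prompt wording
# are illustrative assumptions.

from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a patient tutor. Never state the final answer directly. "
    "Ask one guiding question at a time, check the student's reasoning, "
    "and offer a small hint only when the student is stuck."
)

def tutor_reply(history, student_message, model="gpt-4o-mini"):
    """Return the tutor's next guiding question given the conversation so far."""
    messages = (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": student_message}]
    )
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content

# Example turn:
# print(tutor_reply([], "How do I solve 3x + 5 = 20?"))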

Alex Friedman: You mentioned biotech companies really being able to do some cutting edge research to develop new treatments. Are there any biotech companies in your mind right now that are leading the way?

Jeremy Kahn: Yeah, some of the ones I really like are small, private ones. I talk about a company called Pfluen in the book, and there's another company called LabGenius that's very good, but those are smaller companies. If you look at the bigger ones that are publicly traded, there's BioNTech, which is famous for its work on the COVID vaccine but has also invested very heavily in these AI models and has done some really amazing stuff. I've been very impressed. I heard one of their lead scientists give a talk at a conference just a couple of months ago, and it was very impressive what they're doing, using these same large language model-based systems to discover new drugs. I definitely think they're one to watch. But the whole industry is moving in this direction. Recursion is another one out there doing lots of interesting stuff, and they're also publicly traded. I would just watch the whole space in general.

Alex Friedman: What is it in particular that those companies are doing that you find so interesting, in terms of how they use AI?

Jeremy Kahn: Well, I think it's just that they are using these large language model-based approaches to discover new compounds and to accelerate all the preclinical work needed to bring a drug to the clinical trial stage. You can't really shorten the clinical trial stage that much. There are places in clinical trials where AI can help as well: it can help select the best sites, and it can potentially help run the trial slightly more efficiently. But you can't really shortcut the clinical trial process, because it's absolutely necessary for human safety and to make sure things work. There's a lot that happens before a compound can even make it to a clinical trial, though, and most of that can be accelerated or shortcut through the use of these new generative AI models. I think looking at companies that have really invested heavily in those approaches is interesting. The pharmaceutical industry has actually been very slow to adopt AI. If you look at the big pharma companies, they've been very slow. A lot of their data is very siloed, and they've been very wedded to traditional drug discovery techniques, which are more human- and intuition-led. They're now, I think, playing catch-up, mostly through partnerships with these smaller venture-backed private companies.

Alex Friedman: Switching gears, what aspects of AI keep you up at night?

Jeremy Kahn: There are lots of risks that I'm worried about, and they're probably not the ones that get the most attention. When I go on podcasts like this, I almost always get asked about mass unemployment, which is a risk I'm not really worried about. I don't think we are going to see mass unemployment from AI. There's going to be disruption, and some people may lose their jobs, but on a net basis I think we will see, as we have seen with every other technology, that more jobs will be created in the long term than are lost. The other one I get asked about a lot is, of course, the existential risk of AI that's somehow going to become sentient and kill us all. I think that's a very remote possibility, and it's not within the capability of the systems we're going to see in the next five years. I think we're starting to take some sensible steps to take that risk off the table; at least I hope we take those steps. Those are not the ones I'm most worried about. I really worry about our overuse of this technology in our daily lives, and how that may strip us of some of our most important human cognitive abilities. That includes critical thinking. It's just too easy, when you get a very pat capsule answer from a chatbot or a generative AI search engine that gives you a whole summarized answer, to accept that answer as the truth and not think too hard about the source of the information.

Even more so than with a Google search, where you still have links and you still have the idea that the information has some provenance, and you have to think a little bit about where the information is coming from. When you get these capsule summary answers from an AI chatbot, the tendency is not to think too hard about it, and I worry about us losing some critical thinking skills. I also worry about the loss of writing ability, because one of the dangerous things about generative AI is that it creates a world where it's easy to imagine that writing is somehow separable from thinking. I don't think the two are separable at all. It's through writing that we actually refine our thinking and refine our arguments, and if we end up in a world where people don't write anymore, where they just jot off a couple of bullet points to give to the chatbot and have it write the document for them, then I think our arguments will get weaker, and we're going to lose a lot of our writing and thinking ability. I also worry about people using AI chatbots as social companions. There's already a significant subpopulation of people who do this and become very reliant on these AI companion bots. I worry about that because it's not a real relationship with a real person, although these chatbots are pretty good at simulating a real conversation.

They actually have no real wants or desires or needs. They're generally trained to be very pleasing to people and not to challenge us too much. That's very unlike a relationship with a real person, who does have needs and desires, isn't always pleasant, is sometimes in a bad mood, and certainly isn't always trying to please us. I think some people are going to ask, well, why should I bother with real human relationships? They're so much messier and more complicated and harder than a relationship with a chatbot. The chatbot gives me everything I need: I can offload my feelings to it, it gives me affirmation, and that's what I want. I worry that we're going to have a generation of people who increasingly do not seek out human contact. I think we're going to have to guard against that danger, and I think we should actually have time limits on how long you can use an AI system as a companion chatbot, particularly for children and teenagers. I worry about those risks. I also worry about the consolidation of power, to some extent, in the hands of just a very few companies. I do think that's a concern. In general, there's a tendency with this technology to create winner-take-all economics. For the most part, that means the biggest firms out there right now, the ones with the most data, which they can use to refine AI systems and create systems that are more capable than others, will accrue more and more power. I think we need to be worried about that a bit. Those are some of the risks I worry about most.

Alex Friedman: One last question, given all of those challenges, what does it mean to truly master AI?

Jeremy Kahn: I think mastering AI is all about putting the human at the center of this and thinking very hard about what we want humans to do in our organizations and in our society. What processes should really be reserved exclusively for humans because they require human empathy? I talk a lot in the book about how one of the challenges with AI is that we will put it into places where it really doesn't belong, because the decisions are so dependent on human empathy. In the judicial system, you want to be able to appeal to a human judge; you do not want the judge simply blindly following some algorithm. I worry that we're increasingly going to be in a world where we put AI systems in places where they're acting as judges and arbiters on human matters where empathy is required, and these systems don't have any empathy. I also worry that we're going to look at these systems as a direct substitute for humans in lots of places within businesses and companies, when actually we get the most from them when we use them as complements to human labor, when they're assistants, and when we ask what the human does best and what the machine does best, let each be pre-eminent in its own realm, and pair the two together. If we think about it more like that, then we are able to master AI, and we will be able to reap the rewards of the technology while minimizing a lot of the downside risk.

Mary Long: As always, people on the program may have interests in the stocks they talk about, and the Motley Fool may have formal recommendations for or against, so don't buy or sell stocks based solely on what you hear. I'm Mary Long. Thanks for listening. We'll see you tomorrow.

Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Alex Friedman has positions in Apple. Mary Long has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet, Amazon, Apple, Meta Platforms, and Microsoft. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy .




January 13, 2023

Research Summaries Written by AI Fool Scientists

Scientists cannot always differentiate between research abstracts generated by the AI ChatGPT and those written by humans

By Holly Else & Nature magazine

AI Brain over keyboard illustration (Olemedia/Getty Images)

An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December [1]. Researchers are divided over the implications for science.

“I am very worried,” says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. “If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” she adds.

The chatbot, ChatGPT, creates realistic and intelligent-sounding text in response to user prompts. It is a 'large language model', a system based on neural networks that learn to perform a task by digesting huge amounts of existing human-generated text. Software company OpenAI, based in San Francisco, California, released the tool on 30 November, and it is free to use.


Since its release, researchers have been grappling with the ethical issues surrounding its use, because much of its output can be difficult to distinguish from human-written text. Scientists have published a preprint [2] and an editorial [3] written by ChatGPT. Now, a group led by Catherine Gao at Northwestern University in Chicago, Illinois, has used ChatGPT to generate artificial research-paper abstracts to test whether scientists can spot them.

The researchers asked the chatbot to write 50 medical-research abstracts based on a selection published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine. They then compared these with the original abstracts by running them through a plagiarism detector and an AI-output detector, and they asked a group of medical researchers to spot the fabricated abstracts.

Under the radar

The ChatGPT-generated abstracts sailed through the plagiarism checker: the median originality score was 100%, which indicates that no plagiarism was detected. The AI-output detector spotted 66% of the generated abstracts. But the human reviewers didn't do much better: they correctly identified only 68% of the generated abstracts and 86% of the genuine abstracts. They incorrectly identified 32% of the generated abstracts as being real and 14% of the genuine abstracts as being generated.
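As a quick check of how those figures relate, the error rates in the last sentence are simply the complements of the hit rates in the sentence before it. A short illustrative calculation, using only the percentages quoted above from the preprint:

# The reviewers' mistakes are the complements of their correct identifications,
# i.e. the two rows of the same confusion matrix, using the rates quoted above.

hit_rate_generated = 0.68   # generated abstracts correctly flagged as fake
hit_rate_genuine = 0.86     # genuine abstracts correctly accepted as real

mistaken_for_real = 1 - hit_rate_generated       # 0.32 of generated abstracts
mistaken_for_generated = 1 - hit_rate_genuine    # 0.14 of genuine abstracts

print(f"Generated abstracts mistaken for real: {mistaken_for_real:.0%}")
print(f"Genuine abstracts mistaken for generated: {mistaken_for_generated:.0%}")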

“ChatGPT writes believable scientific abstracts,” say Gao and colleagues in the preprint. “The boundaries of ethical and acceptable use of large language models to help scientific writing remain to be determined.”

Wachter says that, if scientists can’t determine whether research is true, there could be “dire consequences”. As well as being problematic for researchers, who could be pulled down flawed routes of investigation, because the research they are reading has been fabricated, there are “implications for society at large because scientific research plays such a huge role in our society”. For example, it could mean that research-informed policy decisions are incorrect, she adds.

But Arvind Narayanan, a computer scientist at Princeton University in New Jersey, says: “It is unlikely that any serious scientist will use ChatGPT to generate abstracts.” He adds that whether generated abstracts can be detected is “irrelevant”. “The question is whether the tool can generate an abstract that is accurate and compelling. It can’t, and so the upside of using ChatGPT is minuscule, and the downside is significant,” he says.

Irene Solaiman, who researches the social impact of AI at Hugging Face, an AI company with headquarters in New York and Paris, has fears about any reliance on large language models for scientific thinking. “These models are trained on past information and social and scientific progress can often come from thinking, or being open to thinking, differently from the past,” she adds.

The authors suggest that those evaluating scientific communications, such as research papers and conference proceedings, should put policies in place to stamp out the use of AI-generated texts. If institutions choose to allow use of the technology in certain cases, they should establish clear rules around disclosure. Earlier this month, the Fortieth International Conference on Machine Learning, a large AI conference that will be held in Honolulu, Hawaii, in July, announced that it has banned papers written by ChatGPT and other AI language tools.

Solaiman adds that in fields where fake information can endanger people’s safety, such as medicine, journals may have to take a more rigorous approach to verifying information as accurate.

Narayanan says that the solutions to these issues should not focus on the chatbot itself, “but rather the perverse incentives that lead to this behaviour, such as universities conducting hiring and promotion reviews by counting papers with no regard to their quality or impact”.

This article is reproduced with permission and was first published on January 12, 2023.

