Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.

  • View all journals
  • Explore content
  • About the journal
  • Publish with us
  • Sign up for alerts
  • NEWS EXPLAINER
  • 09 December 2022

AI bot ChatGPT writes smart essays — should professors worry?

  • Chris Stokel-Walker

You can also search for this author in PubMed   Google Scholar

Between overwork, underpayment and the pressure to publish, academics have plenty to worry about. Now there’s a fresh concern: ChatGPT , an artificial intelligence (AI) powered chatbot that creates surprisingly intelligent-sounding text in response to user prompts, including homework assignments and exam-style questions. The replies are so lucid, well-researched and decently referenced that some academics are calling the bot the death knell for conventional forms of educational assessment. How worried should professors and lecturers be?

Access options

Access Nature and 54 other Nature Portfolio journals

Get Nature+, our best-value online-access subscription

24,99 € / 30 days

cancel any time

Subscribe to this journal

Receive 51 print issues and online access

185,98 € per year

only 3,65 € per issue

Rent or buy this article

Prices vary by article type

Prices may be subject to local taxes which are calculated during checkout

doi: https://doi.org/10.1038/d41586-022-04397-7

Reprints and permissions

Related Articles

essay bot chatgpt

Are ChatGPT and AlphaCode going to replace programmers?

How language-generation AIs could transform science

Open-source language AI challenges big tech’s models

  • Computer science

Guide, don’t hide: reprogramming learning in the wake of AI

Guide, don’t hide: reprogramming learning in the wake of AI

Career Guide 04 SEP 24

A day in the life of the world’s fastest supercomputer

A day in the life of the world’s fastest supercomputer

News Feature 04 SEP 24

AI generates covertly racist decisions about people based on their dialect

AI generates covertly racist decisions about people based on their dialect

Article 28 AUG 24

Wildfires are spreading fast in Canada — we must strengthen forests for the future

Wildfires are spreading fast in Canada — we must strengthen forests for the future

Comment 09 SEP 24

Massive Attack’s science-led drive to lower music’s carbon footprint

Massive Attack’s science-led drive to lower music’s carbon footprint

Career Feature 04 SEP 24

Live music is a major carbon sinner — but it could be a catalyst for change

Live music is a major carbon sinner — but it could be a catalyst for change

Editorial 04 SEP 24

What I learnt from running a coding bootcamp

What I learnt from running a coding bootcamp

Career Column 21 AUG 24

The Taliban said women could study — three years on they still can’t

The Taliban said women could study — three years on they still can’t

News 14 AUG 24

Faculty Positions in Biology and Biological Engineering: Caltech, Pasadena, CA, United States

The Division of Biology and Biological Engineering (BBE) at Caltech is seeking new faculty in the area of Neuroscience.

Pasadena, California

California Institute of Technology (Caltech)

essay bot chatgpt

Assistant Professor

Rutgers Biomedical and Health Sciences (RBHS), Center for Cell Signaling, and Rutgers New Jersey Medical School (NJMS) are jointly recruiting multi...

Newark, New Jersey (US)

Rutgers Biomedical and Health Sciences NJMS-Center for Cell Signaling

Postdoctoral fellow in computational biology

We are searching for a highly motivated postdoc interested in developing and applying computational approaches to understand how blood cell clones ...

Gothenburg (Kommun), Västra Götaland (SE)

University of Gothenburg

Faculty Positions in School of Engineering, Westlake University

The School of Engineering (SOE) at Westlake University is seeking to fill multiple tenured or tenure-track faculty positions in all ranks.

Hangzhou, Zhejiang, China

Westlake University

essay bot chatgpt

Postdoctoral Associate- Genetic Epidemiology

Houston, Texas (US)

Baylor College of Medicine (BCM)

essay bot chatgpt

Sign up for the Nature Briefing newsletter — what matters in science, free to your inbox daily.

Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies

Celebrating 150 years of Harvard Summer School. Learn about our history.

Should I Use ChatGPT to Write My Essays?

Everything high school and college students need to know about using — and not using — ChatGPT for writing essays.

Jessica A. Kent

ChatGPT is one of the most buzzworthy technologies today.

In addition to other generative artificial intelligence (AI) models, it is expected to change the world. In academia, students and professors are preparing for the ways that ChatGPT will shape education, and especially how it will impact a fundamental element of any course: the academic essay.

Students can use ChatGPT to generate full essays based on a few simple prompts. But can AI actually produce high quality work, or is the technology just not there yet to deliver on its promise? Students may also be asking themselves if they should use AI to write their essays for them and what they might be losing out on if they did.

AI is here to stay, and it can either be a help or a hindrance depending on how you use it. Read on to become better informed about what ChatGPT can and can’t do, how to use it responsibly to support your academic assignments, and the benefits of writing your own essays.

What is Generative AI?

Artificial intelligence isn’t a twenty-first century invention. Beginning in the 1950s, data scientists started programming computers to solve problems and understand spoken language. AI’s capabilities grew as computer speeds increased and today we use AI for data analysis, finding patterns, and providing insights on the data it collects.

But why the sudden popularity in recent applications like ChatGPT? This new generation of AI goes further than just data analysis. Instead, generative AI creates new content. It does this by analyzing large amounts of data — GPT-3 was trained on 45 terabytes of data, or a quarter of the Library of Congress — and then generating new content based on the patterns it sees in the original data.

It’s like the predictive text feature on your phone; as you start typing a new message, predictive text makes suggestions of what should come next based on data from past conversations. Similarly, ChatGPT creates new text based on past data. With the right prompts, ChatGPT can write marketing content, code, business forecasts, and even entire academic essays on any subject within seconds.

But is generative AI as revolutionary as people think it is, or is it lacking in real intelligence?

The Drawbacks of Generative AI

It seems simple. You’ve been assigned an essay to write for class. You go to ChatGPT and ask it to write a five-paragraph academic essay on the topic you’ve been assigned. You wait a few seconds and it generates the essay for you!

But ChatGPT is still in its early stages of development, and that essay is likely not as accurate or well-written as you’d expect it to be. Be aware of the drawbacks of having ChatGPT complete your assignments.

It’s not intelligence, it’s statistics

One of the misconceptions about AI is that it has a degree of human intelligence. However, its intelligence is actually statistical analysis, as it can only generate “original” content based on the patterns it sees in already existing data and work.

It “hallucinates”

Generative AI models often provide false information — so much so that there’s a term for it: “AI hallucination.” OpenAI even has a warning on its home screen , saying that “ChatGPT may produce inaccurate information about people, places, or facts.” This may be due to gaps in its data, or because it lacks the ability to verify what it’s generating. 

It doesn’t do research  

If you ask ChatGPT to find and cite sources for you, it will do so, but they could be inaccurate or even made up.

This is because AI doesn’t know how to look for relevant research that can be applied to your thesis. Instead, it generates content based on past content, so if a number of papers cite certain sources, it will generate new content that sounds like it’s a credible source — except it likely may not be.

There are data privacy concerns

When you input your data into a public generative AI model like ChatGPT, where does that data go and who has access to it? 

Prompting ChatGPT with original research should be a cause for concern — especially if you’re inputting study participants’ personal information into the third-party, public application. 

JPMorgan has restricted use of ChatGPT due to privacy concerns, Italy temporarily blocked ChatGPT in March 2023 after a data breach, and Security Intelligence advises that “if [a user’s] notes include sensitive data … it enters the chatbot library. The user no longer has control over the information.”

It is important to be aware of these issues and take steps to ensure that you’re using the technology responsibly and ethically. 

It skirts the plagiarism issue

AI creates content by drawing on a large library of information that’s already been created, but is it plagiarizing? Could there be instances where ChatGPT “borrows” from previous work and places it into your work without citing it? Schools and universities today are wrestling with this question of what’s plagiarism and what’s not when it comes to AI-generated work.

To demonstrate this, one Elon University professor gave his class an assignment: Ask ChatGPT to write an essay for you, and then grade it yourself. 

“Many students expressed shock and dismay upon learning the AI could fabricate bogus information,” he writes, adding that he expected some essays to contain errors, but all of them did. 

His students were disappointed that “major tech companies had pushed out AI technology without ensuring that the general population understands its drawbacks” and were concerned about how many embraced such a flawed tool.

Explore Our High School Programs

How to Use AI as a Tool to Support Your Work

As more students are discovering, generative AI models like ChatGPT just aren’t as advanced or intelligent as they may believe. While AI may be a poor option for writing your essay, it can be a great tool to support your work.

Generate ideas for essays

Have ChatGPT help you come up with ideas for essays. For example, input specific prompts, such as, “Please give me five ideas for essays I can write on topics related to WWII,” or “Please give me five ideas for essays I can write comparing characters in twentieth century novels.” Then, use what it provides as a starting point for your original research.

Generate outlines

You can also use ChatGPT to help you create an outline for an essay. Ask it, “Can you create an outline for a five paragraph essay based on the following topic” and it will create an outline with an introduction, body paragraphs, conclusion, and a suggested thesis statement. Then, you can expand upon the outline with your own research and original thought.

Generate titles for your essays

Titles should draw a reader into your essay, yet they’re often hard to get right. Have ChatGPT help you by prompting it with, “Can you suggest five titles that would be good for a college essay about [topic]?”

The Benefits of Writing Your Essays Yourself

Asking a robot to write your essays for you may seem like an easy way to get ahead in your studies or save some time on assignments. But, outsourcing your work to ChatGPT can negatively impact not just your grades, but your ability to communicate and think critically as well. It’s always the best approach to write your essays yourself.

Create your own ideas

Writing an essay yourself means that you’re developing your own thoughts, opinions, and questions about the subject matter, then testing, proving, and defending those thoughts. 

When you complete school and start your career, projects aren’t simply about getting a good grade or checking a box, but can instead affect the company you’re working for — or even impact society. Being able to think for yourself is necessary to create change and not just cross work off your to-do list.

Building a foundation of original thinking and ideas now will help you carve your unique career path in the future.

Develop your critical thinking and analysis skills

In order to test or examine your opinions or questions about a subject matter, you need to analyze a problem or text, and then use your critical thinking skills to determine the argument you want to make to support your thesis. Critical thinking and analysis skills aren’t just necessary in school — they’re skills you’ll apply throughout your career and your life.

Improve your research skills

Writing your own essays will train you in how to conduct research, including where to find sources, how to determine if they’re credible, and their relevance in supporting or refuting your argument. Knowing how to do research is another key skill required throughout a wide variety of professional fields.

Learn to be a great communicator

Writing an essay involves communicating an idea clearly to your audience, structuring an argument that a reader can follow, and making a conclusion that challenges them to think differently about a subject. Effective and clear communication is necessary in every industry.

Be impacted by what you’re learning about : 

Engaging with the topic, conducting your own research, and developing original arguments allows you to really learn about a subject you may not have encountered before. Maybe a simple essay assignment around a work of literature, historical time period, or scientific study will spark a passion that can lead you to a new major or career.

Resources to Improve Your Essay Writing Skills

While there are many rewards to writing your essays yourself, the act of writing an essay can still be challenging, and the process may come easier for some students than others. But essay writing is a skill that you can hone, and students at Harvard Summer School have access to a number of on-campus and online resources to assist them.

Students can start with the Harvard Summer School Writing Center , where writing tutors can offer you help and guidance on any writing assignment in one-on-one meetings. Tutors can help you strengthen your argument, clarify your ideas, improve the essay’s structure, and lead you through revisions. 

The Harvard libraries are a great place to conduct your research, and its librarians can help you define your essay topic, plan and execute a research strategy, and locate sources. 

Finally, review the “ The Harvard Guide to Using Sources ,” which can guide you on what to cite in your essay and how to do it. Be sure to review the “Tips For Avoiding Plagiarism” on the “ Resources to Support Academic Integrity ” webpage as well to help ensure your success.

Sign up to our mailing list to learn more about Harvard Summer School

The Future of AI in the Classroom

ChatGPT and other generative AI models are here to stay, so it’s worthwhile to learn how you can leverage the technology responsibly and wisely so that it can be a tool to support your academic pursuits. However, nothing can replace the experience and achievement gained from communicating your own ideas and research in your own academic essays.

About the Author

Jessica A. Kent is a freelance writer based in Boston, Mass. and a Harvard Extension School alum. Her digital marketing content has been featured on Fast Company, Forbes, Nasdaq, and other industry websites; her essays and short stories have been featured in North American Review, Emerson Review, Writer’s Bone, and others.

5 Key Qualities of Students Who Succeed at Harvard Summer School (and in College!)

This guide outlines the kinds of students who thrive at Harvard Summer School and what the programs offer in return.

Harvard Division of Continuing Education

The Division of Continuing Education (DCE) at Harvard University is dedicated to bringing rigorous academics and innovative teaching capabilities to those seeking to improve their lives through education. We make Harvard education accessible to lifelong learners from high school to retirement.

Harvard Division of Continuing Education Logo

ChatGPT: Everything you need to know about the AI-powered chatbot

ChatGPT welcome screen

ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to hyper-charge productivity through writing essays and code with short text prompts has evolved into a behemoth used by more than 92% of Fortune 500 companies .

That growth has propelled OpenAI itself into becoming one of the most-hyped companies in recent memory. And its latest partnership with Apple for its upcoming generative AI offering, Apple Intelligence, has given the company another significant bump in the AI race.

2024 also saw the release of GPT-4o, OpenAI’s new flagship omni model for ChatGPT. GPT-4o is now the default free model, complete with voice and vision capabilities. But after demoing GPT-4o, OpenAI paused one of its voices , Sky, after allegations that it was mimicking Scarlett Johansson’s voice in “Her.”

OpenAI is facing internal drama, including the sizable exit of co-founder and longtime chief scientist Ilya Sutskever as the company dissolved its Superalignment team. OpenAI is also facing a lawsuit from Alden Global Capital-owned newspapers , including the New York Daily News and the Chicago Tribune, for alleged copyright infringement, following a similar suit filed by The New York Times last year.

Here’s a timeline of ChatGPT product updates and releases, starting with the latest, which we’ve been updating throughout the year. And if you have any other questions, check out our ChatGPT FAQ here.

Timeline of the most recent ChatGPT updates

September 2024, august 2024, february 2024, january 2024.

  • ChatGPT FAQs

OpenAI reaches 1 million paid users of its corporate offerings

OpenAI announced it has surpassed 1 million paid users for its versions of ChatGPT intended for businesses, including ChatGPT Team, ChatGPT Enterprise and its educational offering, ChatGPT Edu. The company said that nearly half of OpenAI’s corporate users are based in the US.

Volkswagen rolls out its ChatGPT assistant to the US

Volkswagen is taking its ChatGPT voice assistant experiment to vehicles in the United States. Its ChatGPT-integrated Plus Speech voice assistant is an AI chatbot based on Cerence’s Chat Pro product and a LLM from OpenAI and will begin rolling out on September 6 with the 2025 Jetta and Jetta GLI models.

OpenAI inks content deal with Condé Nast

As part of the new deal, OpenAI will surface stories from Condé Nast properties like The New Yorker, Vogue, Vanity Fair, Bon Appétit and Wired in ChatGPT and SearchGPT. Condé Nast CEO Roger Lynch implied that the “multi-year” deal will involve payment from OpenAI in some form and a Condé Nast spokesperson told TechCrunch that OpenAI will have permission to train on Condé Nast content.

We’re partnering with Condé Nast to deepen the integration of quality journalism into ChatGPT and our SearchGPT prototype. https://t.co/tiXqSOTNAl — OpenAI (@OpenAI) August 20, 2024

Our first impressions of ChatGPT’s Advanced Voice Mode

TechCrunch’s Maxwell Zeff has been playing around with OpenAI’s Advanced Voice Mode, in what he describes as “the most convincing taste I’ve had of an AI-powered future yet.” Compared to Siri or Alexa, Advanced Voice Mode stands out with faster response times, unique answers and the ability to answer complex questions. But the feature falls short as an effective replacement for virtual assistants.

OpenAI shuts down election influence operation that used ChatGPT

OpenAI has banned a cluster of ChatGPT accounts linked to an Iranian influence operation that was generating content about the U.S. presidential election. OpenAI identified five website fronts presenting as both progressive and conservative news outlets that used ChatGPT to draft several long-form articles, though it doesn’t seem that it reached much of an audience.

OpenAI finds that GPT-4o does some weird stuff sometimes

OpenAI has found that GPT-4o, which powers the recently launched alpha of Advanced Voice Mode in ChatGPT, can behave in strange ways. In a new “red teaming” report, OpenAI reveals some of GPT-4o’s weirder quirks, like mimicking the voice of the person speaking to it or randomly shouting in the middle of a conversation.

ChatGPT’s mobile app reports its biggest month yet

After a big jump following the release of OpenAI’s new GPT-4o “omni” model, the mobile version of ChatGPT has now seen its biggest month of revenue yet. The app pulled in $28 million in net revenue from the App Store and Google Play in July, according to data provided by app intelligence firm Appfigures.

OpenAI could potentially catch students who cheat with ChatGPT

OpenAI has built a watermarking tool that could potentially catch students who cheat by using ChatGPT — but The Wall Street Journal reports that the company is debating whether to actually release it. An OpenAI spokesperson confirmed to TechCrunch that the company is researching tools that can detect writing from ChatGPT, but said it’s taking a “deliberate approach” to releasing it.

ChatGPT’s advanced Voice Mode starts rolling out to some users

OpenAI is giving users their first access to GPT-4o’s updated realistic audio responses. The alpha version is now available to a small group of ChatGPT Plus users, and the company says the feature will gradually roll out to all Plus users in the fall of 2024. The release follows controversy surrounding the voice’s similarity to Scarlett Johansson, leading OpenAI to delay its release.

We’re starting to roll out advanced Voice Mode to a small group of ChatGPT Plus users. Advanced Voice Mode offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions. pic.twitter.com/64O94EhhXK — OpenAI (@OpenAI) July 30, 2024

OpenAI announces new search prototype, SearchGPT

OpenAI is testing SearchGPT, a new AI search experience to compete with Google. SearchGPT aims to elevate search queries with “timely answers” from across the internet, as well as the ability to ask follow-up questions. The temporary prototype is currently only available to a small group of users and its publisher partners, like The Atlantic, for testing and feedback.

We’re testing SearchGPT, a temporary prototype of new AI search features that give you fast and timely answers with clear and relevant sources. We’re launching with a small group of users for feedback and plan to integrate the experience into ChatGPT. https://t.co/dRRnxXVlGh pic.twitter.com/iQpADXmllH — OpenAI (@OpenAI) July 25, 2024

OpenAI could lose $5 billion this year, report claims

A new report from The Information , based on undisclosed financial information, claims OpenAI could lose up to $5 billion due to how costly the business is to operate. The report also says the company could spend as much as $7 billion in 2024 to train and operate ChatGPT.

OpenAI unveils GPT-4o mini

OpenAI released its latest small AI model, GPT-4o mini . The company says GPT-4o mini, which is cheaper and faster than OpenAI’s current AI models, outperforms industry leading small AI models on reasoning tasks involving text and vision. GPT-4o mini will replace GPT-3.5 Turbo as the smallest model OpenAI offers. 

OpenAI partners with Los Alamos National Laboratory for bioscience research

OpenAI announced a partnership with the Los Alamos National Laboratory to study how AI can be employed by scientists in order to advance research in healthcare and bioscience. This follows other health-related research collaborations at OpenAI, including Moderna and Color Health.

OpenAI and Los Alamos National Laboratory announce partnership to study AI for bioscience research https://t.co/WV4XMZsHBA — OpenAI (@OpenAI) July 10, 2024

OpenAI makes CriticGPT to find mistakes in GPT-4

OpenAI announced it has trained a model off of GPT-4, dubbed CriticGPT , which aims to find errors in ChatGPT’s code output so they can make improvements and better help so-called human “AI trainers” rate the quality and accuracy of ChatGPT responses.

We’ve trained a model, CriticGPT, to catch bugs in GPT-4’s code. We’re starting to integrate such models into our RLHF alignment pipeline to help humans supervise AI on difficult tasks: https://t.co/5oQYfrpVBu — OpenAI (@OpenAI) June 27, 2024

OpenAI inks content deal with TIME

OpenAI and TIME announced a multi-year strategic partnership that brings the magazine’s content, both modern and archival, to ChatGPT. As part of the deal, TIME will also gain access to OpenAI’s technology in order to develop new audience-based products.

We’re partnering with TIME and its 101 years of archival content to enhance responses and provide links to stories on https://t.co/LgvmZUae9M : https://t.co/xHAYkYLxA9 — OpenAI (@OpenAI) June 27, 2024

OpenAI delays ChatGPT’s new Voice Mode

OpenAI planned to start rolling out its advanced Voice Mode feature to a small group of ChatGPT Plus users in late June, but it says lingering issues forced it to postpone the launch to July. OpenAI says Advanced Voice Mode might not launch for all ChatGPT Plus customers until the fall, depending on whether it meets certain internal safety and reliability checks.

ChatGPT releases app for Mac

ChatGPT for macOS is now available for all users . With the app, users can quickly call up ChatGPT by using the keyboard combination of Option + Space. The app allows users to upload files and other photos, as well as speak to ChatGPT from their desktop and search through their past conversations.

The ChatGPT desktop app for macOS is now available for all users. Get faster access to ChatGPT to chat about email, screenshots, and anything on your screen with the Option + Space shortcut: https://t.co/2rEx3PmMqg pic.twitter.com/x9sT8AnjDm — OpenAI (@OpenAI) June 25, 2024

Apple brings ChatGPT to its apps, including Siri

Apple announced at WWDC 2024 that it is bringing ChatGPT to Siri and other first-party apps and capabilities across its operating systems. The ChatGPT integrations, powered by GPT-4o, will arrive on iOS 18, iPadOS 18 and macOS Sequoia later this year, and will be free without the need to create a ChatGPT or OpenAI account. Features exclusive to paying ChatGPT users will also be available through Apple devices .

Apple is bringing ChatGPT to Siri and other first-party apps and capabilities across its operating systems #WWDC24 Read more: https://t.co/0NJipSNJoS pic.twitter.com/EjQdPBuyy4 — TechCrunch (@TechCrunch) June 10, 2024

House Oversight subcommittee invites Scarlett Johansson to testify about ‘Sky’ controversy

Scarlett Johansson has been invited to testify about the controversy surrounding OpenAI’s Sky voice at a hearing for the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation. In a letter, Rep. Nancy Mace said Johansson’s testimony could “provide a platform” for concerns around deepfakes.

ChatGPT experiences two outages in a single day

ChatGPT was down twice in one day: one multi-hour outage in the early hours of the morning Tuesday and another outage later in the day that is still ongoing. Anthropic’s Claude and Perplexity also experienced some issues.

You're not alone, ChatGPT is down once again. pic.twitter.com/Ydk2vNOOK6 — TechCrunch (@TechCrunch) June 4, 2024

The Atlantic and Vox Media ink content deals with OpenAI

The Atlantic and Vox Media have announced licensing and product partnerships with OpenAI . Both agreements allow OpenAI to use the publishers’ current content to generate responses in ChatGPT, which will feature citations to relevant articles. Vox Media says it will use OpenAI’s technology to build “audience-facing and internal applications,” while The Atlantic will build a new experimental product called Atlantic Labs .

I am delighted that @theatlantic now has a strategic content & product partnership with @openai . Our stories will be discoverable in their new products and we'll be working with them to figure out new ways that AI can help serious, independent media : https://t.co/nfSVXW9KpB — nxthompson (@nxthompson) May 29, 2024

OpenAI signs 100K PwC workers to ChatGPT’s enterprise tier

OpenAI announced a new deal with management consulting giant PwC . The company will become OpenAI’s biggest customer to date, covering 100,000 users, and will become OpenAI’s first partner for selling its enterprise offerings to other businesses.

OpenAI says it is training its GPT-4 successor

OpenAI announced in a blog post that it has recently begun training its next flagship model to succeed GPT-4. The news came in an announcement of its new safety and security committee, which is responsible for informing safety and security decisions across OpenAI’s products.

Former OpenAI director claims the board found out about ChatGPT on Twitter

On the The TED AI Show podcast, former OpenAI board member Helen Toner revealed that the board did not know about ChatGPT until its launch in November 2022. Toner also said that Sam Altman gave the board inaccurate information about the safety processes the company had in place and that he didn’t disclose his involvement in the OpenAI Startup Fund.

Sharing this, recorded a few weeks ago. Most of the episode is about AI policy more broadly, but this was my first longform interview since the OpenAI investigation closed, so we also talked a bit about November. Thanks to @bilawalsidhu for a fun conversation! https://t.co/h0PtK06T0K — Helen Toner (@hlntnr) May 28, 2024

ChatGPT’s mobile app revenue saw biggest spike yet following GPT-4o launch

The launch of GPT-4o has driven the company’s biggest-ever spike in revenue on mobile , despite the model being freely available on the web. Mobile users are being pushed to upgrade to its $19.99 monthly subscription, ChatGPT Plus, if they want to experiment with OpenAI’s most recent launch.

OpenAI to remove ChatGPT’s Scarlett Johansson-like voice

After demoing its new GPT-4o model last week, OpenAI announced it is pausing one of its voices , Sky, after users found that it sounded similar to Scarlett Johansson in “Her.”

OpenAI explained in a blog post that Sky’s voice is “not an imitation” of the actress and that AI voices should not intentionally mimic the voice of a celebrity. The blog post went on to explain how the company chose its voices: Breeze, Cove, Ember, Juniper and Sky.

We’ve heard questions about how we chose the voices in ChatGPT, especially Sky. We are working to pause the use of Sky while we address them. Read more about how we chose these voices: https://t.co/R8wwZjU36L — OpenAI (@OpenAI) May 20, 2024

ChatGPT lets you add files from Google Drive and Microsoft OneDrive

OpenAI announced new updates for easier data analysis within ChatGPT . Users can now upload files directly from Google Drive and Microsoft OneDrive, interact with tables and charts, and export customized charts for presentations. The company says these improvements will be added to GPT-4o in the coming weeks.

We're rolling out interactive tables and charts along with the ability to add files directly from Google Drive and Microsoft OneDrive into ChatGPT. Available to ChatGPT Plus, Team, and Enterprise users over the coming weeks. https://t.co/Fu2bgMChXt pic.twitter.com/M9AHLx5BKr — OpenAI (@OpenAI) May 16, 2024

OpenAI inks deal to train AI on Reddit data

OpenAI announced a partnership with Reddit that will give the company access to “real-time, structured and unique content” from the social network. Content from Reddit will be incorporated into ChatGPT, and the companies will work together to bring new AI-powered features to Reddit users and moderators.

We’re partnering with Reddit to bring its content to ChatGPT and new products: https://t.co/xHgBZ8ptOE — OpenAI (@OpenAI) May 16, 2024

OpenAI debuts GPT-4o “omni” model now powering ChatGPT

OpenAI’s spring update event saw the reveal of its new omni model, GPT-4o, which has a black hole-like interface , as well as voice and vision capabilities that feel eerily like something out of “Her.” GPT-4o is set to roll out “iteratively” across its developer and consumer-facing products over the next few weeks.

OpenAI demos real-time language translation with its latest GPT-4o model. pic.twitter.com/pXtHQ9mKGc — TechCrunch (@TechCrunch) May 13, 2024

OpenAI to build a tool that lets content creators opt out of AI training

The company announced it’s building a tool, Media Manager, that will allow creators to better control how their content is being used to train generative AI models — and give them an option to opt out. The goal is to have the new tool in place and ready to use by 2025.

OpenAI explores allowing AI porn

In a new peek behind the curtain of its AI’s secret instructions , OpenAI also released a new NSFW policy . Though it’s intended to start a conversation about how it might allow explicit images and text in its AI products, it raises questions about whether OpenAI — or any generative AI vendor — can be trusted to handle sensitive content ethically.

OpenAI and Stack Overflow announce partnership

In a new partnership, OpenAI will get access to developer platform Stack Overflow’s API and will get feedback from developers to improve the performance of their AI models. In return, OpenAI will include attributions to Stack Overflow in ChatGPT. However, the deal was not favorable to some Stack Overflow users — leading to some sabotaging their answer in protest .

U.S. newspapers file copyright lawsuit against OpenAI and Microsoft

Alden Global Capital-owned newspapers, including the New York Daily News, the Chicago Tribune, and the Denver Post, are suing OpenAI and Microsoft for copyright infringement. The lawsuit alleges that the companies stole millions of copyrighted articles “without permission and without payment” to bolster ChatGPT and Copilot.

OpenAI inks content licensing deal with Financial Times

OpenAI has partnered with another news publisher in Europe, London’s Financial Times , that the company will be paying for content access. “Through the partnership, ChatGPT users will be able to see select attributed summaries, quotes and rich links to FT journalism in response to relevant queries,” the FT wrote in a press release.

OpenAI opens Tokyo hub, adds GPT-4 model optimized for Japanese

OpenAI is opening a new office in Tokyo and has plans for a GPT-4 model optimized specifically for the Japanese language. The move underscores how OpenAI will likely need to localize its technology to different languages as it expands.

Sam Altman pitches ChatGPT Enterprise to Fortune 500 companies

According to Reuters, OpenAI’s Sam Altman hosted hundreds of executives from Fortune 500 companies across several cities in April, pitching versions of its AI services intended for corporate use.

OpenAI releases “more direct, less verbose” version of GPT-4 Turbo

Premium ChatGPT users — customers paying for ChatGPT Plus, Team or Enterprise — can now use an updated and enhanced version of GPT-4 Turbo . The new model brings with it improvements in writing, math, logical reasoning and coding, OpenAI claims, as well as a more up-to-date knowledge base.

Our new GPT-4 Turbo is now available to paid ChatGPT users. We’ve improved capabilities in writing, math, logical reasoning, and coding. Source: https://t.co/fjoXDCOnPr pic.twitter.com/I4fg4aDq1T — OpenAI (@OpenAI) April 12, 2024

ChatGPT no longer requires an account — but there’s a catch

You can now use ChatGPT without signing up for an account , but it won’t be quite the same experience. You won’t be able to save or share chats, use custom instructions, or other features associated with a persistent account. This version of ChatGPT will have “slightly more restrictive content policies,” according to OpenAI. When TechCrunch asked for more details, however, the response was unclear:

“The signed out experience will benefit from the existing safety mitigations that are already built into the model, such as refusing to generate harmful content. In addition to these existing mitigations, we are also implementing additional safeguards specifically designed to address other forms of content that may be inappropriate for a signed out experience,” a spokesperson said.

OpenAI’s chatbot store is filling up with spam

TechCrunch found that the OpenAI’s GPT Store is flooded with bizarre, potentially copyright-infringing GPTs . A cursory search pulls up GPTs that claim to generate art in the style of Disney and Marvel properties, but serve as little more than funnels to third-party paid services and advertise themselves as being able to bypass AI content detection tools.

The New York Times responds to OpenAI’s claims that it “hacked” ChatGPT for its copyright lawsuit

In a court filing opposing OpenAI’s motion to dismiss The New York Times’ lawsuit alleging copyright infringement, the newspaper asserted that “OpenAI’s attention-grabbing claim that The Times ‘hacked’ its products is as irrelevant as it is false.” The New York Times also claimed that some users of ChatGPT used the tool to bypass its paywalls.

OpenAI VP doesn’t say whether artists should be paid for training data

At a SXSW 2024 panel, Peter Deng, OpenAI’s VP of consumer product dodged a question on whether artists whose work was used to train generative AI models should be compensated . While OpenAI lets artists “opt out” of and remove their work from the datasets that the company uses to train its image-generating models, some artists have described the tool as onerous.

A new report estimates that ChatGPT uses more than half a million kilowatt-hours of electricity per day

ChatGPT’s environmental impact appears to be massive. According to a report from The New Yorker , ChatGPT uses an estimated 17,000 times the amount of electricity than the average U.S. household to respond to roughly 200 million requests each day.

ChatGPT can now read its answers aloud

OpenAI released a new Read Aloud feature for the web version of ChatGPT as well as the iOS and Android apps. The feature allows ChatGPT to read its responses to queries in one of five voice options and can speak 37 languages, according to the company. Read aloud is available on both GPT-4 and GPT-3.5 models.

ChatGPT can now read responses to you. On iOS or Android, tap and hold the message and then tap “Read Aloud”. We’ve also started rolling on web – click the "Read Aloud" button below the message. pic.twitter.com/KevIkgAFbG — OpenAI (@OpenAI) March 4, 2024

OpenAI partners with Dublin City Council to use GPT-4 for tourism

As part of a new partnership with OpenAI, the Dublin City Council will use GPT-4 to craft personalized itineraries for travelers, including recommendations of unique and cultural destinations, in an effort to support tourism across Europe.

A law firm used ChatGPT to justify a six-figure bill for legal services

New York-based law firm Cuddy Law was criticized by a judge for using ChatGPT to calculate their hourly billing rate . The firm submitted a $113,500 bill to the court, which was then halved by District Judge Paul Engelmayer, who called the figure “well above” reasonable demands.

ChatGPT experienced a bizarre bug for several hours

ChatGPT users found that ChatGPT was giving nonsensical answers for several hours , prompting OpenAI to investigate the issue. Incidents varied from repetitive phrases to confusing and incorrect answers to queries. The issue was resolved by OpenAI the following morning.

Match Group announced deal with OpenAI with a press release co-written by ChatGPT

The dating app giant home to Tinder, Match and OkCupid announced an enterprise agreement with OpenAI in an enthusiastic press release written with the help of ChatGPT . The AI tech will be used to help employees with work-related tasks and come as part of Match’s $20 million-plus bet on AI in 2024.

ChatGPT will now remember — and forget — things you tell it to

As part of a test, OpenAI began rolling out new “memory” controls for a small portion of ChatGPT free and paid users, with a broader rollout to follow. The controls let you tell ChatGPT explicitly to remember something, see what it remembers or turn off its memory altogether. Note that deleting a chat from chat history won’t erase ChatGPT’s or a custom GPT’s memories — you must delete the memory itself.

We’re testing ChatGPT's ability to remember things you discuss to make future chats more helpful. This feature is being rolled out to a small portion of Free and Plus users, and it's easy to turn on or off. https://t.co/1Tv355oa7V pic.twitter.com/BsFinBSTbs — OpenAI (@OpenAI) February 13, 2024

OpenAI begins rolling out “Temporary Chat” feature

Initially limited to a small subset of free and subscription users, Temporary Chat lets you have a dialogue with a blank slate. With Temporary Chat, ChatGPT won’t be aware of previous conversations or access memories but will follow custom instructions if they’re enabled.

But, OpenAI says it may keep a copy of Temporary Chat conversations for up to 30 days for “safety reasons.”

Use temporary chat for conversations in which you don’t want to use memory or appear in history. pic.twitter.com/H1U82zoXyC — OpenAI (@OpenAI) February 13, 2024

ChatGPT users can now invoke GPTs directly in chats

Paid users of ChatGPT can now bring GPTs into a conversation by typing “@” and selecting a GPT from the list. The chosen GPT will have an understanding of the full conversation, and different GPTs can be “tagged in” for different use cases and needs.

You can now bring GPTs into any conversation in ChatGPT – simply type @ and select the GPT. This allows you to add relevant GPTs with the full context of the conversation. pic.twitter.com/Pjn5uIy9NF — OpenAI (@OpenAI) January 30, 2024

ChatGPT is reportedly leaking usernames and passwords from users’ private conversations

Screenshots provided to Ars Technica found that ChatGPT is potentially leaking unpublished research papers, login credentials and private information from its users. An OpenAI representative told Ars Technica that the company was investigating the report.

ChatGPT is violating Europe’s privacy laws, Italian DPA tells OpenAI

OpenAI has been told it’s suspected of violating European Union privacy , following a multi-month investigation of ChatGPT by Italy’s data protection authority. Details of the draft findings haven’t been disclosed, but in a response, OpenAI said: “We want our AI to learn about the world, not about private individuals.”

OpenAI partners with Common Sense Media to collaborate on AI guidelines

In an effort to win the trust of parents and policymakers, OpenAI announced it’s partnering with Common Sense Media to collaborate on AI guidelines and education materials for parents, educators and young adults. The organization works to identify and minimize tech harms to young people and previously flagged ChatGPT as lacking in transparency and privacy .

OpenAI responds to Congressional Black Caucus about lack of diversity on its board

After a letter from the Congressional Black Caucus questioned the lack of diversity in OpenAI’s board, the company responded . The response, signed by CEO Sam Altman and Chairman of the Board Bret Taylor, said building a complete and diverse board was one of the company’s top priorities and that it was working with an executive search firm to assist it in finding talent. 

OpenAI drops prices and fixes ‘lazy’ GPT-4 that refused to work

In a blog post , OpenAI announced price drops for GPT-3.5’s API, with input prices dropping to 50% and output by 25%, to $0.0005 per thousand tokens in, and $0.0015 per thousand tokens out. GPT-4 Turbo also got a new preview model for API use, which includes an interesting fix that aims to reduce “laziness” that users have experienced.

Expanding the platform for @OpenAIDevs : new generation of embedding models, updated GPT-4 Turbo, and lower pricing on GPT-3.5 Turbo. https://t.co/7wzCLwB1ax — OpenAI (@OpenAI) January 25, 2024

OpenAI bans developer of a bot impersonating a presidential candidate

OpenAI has suspended AI startup Delphi, which developed a bot impersonating Rep. Dean Phillips (D-Minn.) to help bolster his presidential campaign. The ban comes just weeks after OpenAI published a plan to combat election misinformation, which listed “chatbots impersonating candidates” as against its policy.

OpenAI announces partnership with Arizona State University

Beginning in February, Arizona State University will have full access to ChatGPT’s Enterprise tier , which the university plans to use to build a personalized AI tutor, develop AI avatars, bolster their prompt engineering course and more. It marks OpenAI’s first partnership with a higher education institution.

Winner of a literary prize reveals around 5% her novel was written by ChatGPT

After receiving the prestigious Akutagawa Prize for her novel The Tokyo Tower of Sympathy, author Rie Kudan admitted that around 5% of the book quoted ChatGPT-generated sentences “verbatim.” Interestingly enough, the novel revolves around a futuristic world with a pervasive presence of AI.

Sam Altman teases video capabilities for ChatGPT and the release of GPT-5

In a conversation with Bill Gates on the Unconfuse Me podcast, Sam Altman confirmed an upcoming release of GPT-5 that will be “fully multimodal with speech, image, code, and video support.” Altman said users can expect to see GPT-5 drop sometime in 2024.

OpenAI announces team to build ‘crowdsourced’ governance ideas into its models

OpenAI is forming a Collective Alignment team of researchers and engineers to create a system for collecting and “encoding” public input on its models’ behaviors into OpenAI products and services. This comes as a part of OpenAI’s public program to award grants to fund experiments in setting up a “democratic process” for determining the rules AI systems follow.

OpenAI unveils plan to combat election misinformation

In a blog post, OpenAI announced users will not be allowed to build applications for political campaigning and lobbying until the company works out how effective their tools are for “personalized persuasion.”

Users will also be banned from creating chatbots that impersonate candidates or government institutions, and from using OpenAI tools to misrepresent the voting process or otherwise discourage voting.

The company is also testing out a tool that detects DALL-E generated images and will incorporate access to real-time news, with attribution, in ChatGPT.

Snapshot of how we’re preparing for 2024’s worldwide elections: • Working to prevent abuse, including misleading deepfakes • Providing transparency on AI-generated content • Improving access to authoritative voting information https://t.co/qsysYy5l0L — OpenAI (@OpenAI) January 15, 2024

OpenAI changes policy to allow military applications

In an unannounced update to its usage policy , OpenAI removed language previously prohibiting the use of its products for the purposes of “military and warfare.” In an additional statement, OpenAI confirmed that the language was changed in order to accommodate military customers and projects that do not violate their ban on efforts to use their tools to “harm people, develop weapons, for communications surveillance, or to injure others or destroy property.”

ChatGPT subscription aimed at small teams debuts

Aptly called ChatGPT Team , the new plan provides a dedicated workspace for teams of up to 149 people using ChatGPT as well as admin tools for team management. In addition to gaining access to GPT-4, GPT-4 with Vision and DALL-E3, ChatGPT Team lets teams build and share GPTs for their business needs.

OpenAI’s GPT store officially launches

After some back and forth over the last few months, OpenAI’s GPT Store is finally here . The feature lives in a new tab in the ChatGPT web client, and includes a range of GPTs developed both by OpenAI’s partners and the wider dev community.

To access the GPT Store, users must be subscribed to one of OpenAI’s premium ChatGPT plans — ChatGPT Plus, ChatGPT Enterprise or the newly launched ChatGPT Team.

the GPT store is live! https://t.co/AKg1mjlvo2 fun speculation last night about which GPTs will be doing the best by the end of today. — Sam Altman (@sama) January 10, 2024

Developing AI models would be “impossible” without copyrighted materials, OpenAI claims

Following a proposed ban on using news publications and books to train AI chatbots in the U.K., OpenAI submitted a plea to the House of Lords communications and digital committee. OpenAI argued that it would be “impossible” to train AI models without using copyrighted materials, and that they believe copyright law “does not forbid training.”

OpenAI claims The New York Times’ copyright lawsuit is without merit

OpenAI published a public response to The New York Times’s lawsuit against them and Microsoft for allegedly violating copyright law, claiming that the case is without merit.

In the response , OpenAI reiterates its view that training AI models using publicly available data from the web is fair use. It also makes the case that regurgitation is less likely to occur with training data from a single source and places the onus on users to “act responsibly.”

We build AI to empower people, including journalists. Our position on the @nytimes lawsuit: • Training is fair use, but we provide an opt-out • "Regurgitation" is a rare bug we're driving to zero • The New York Times is not telling the full story https://t.co/S6fSaDsfKb — OpenAI (@OpenAI) January 8, 2024

OpenAI’s app store for GPTs planned to launch next week

After being delayed in December , OpenAI plans to launch its GPT Store sometime in the coming week, according to an email viewed by TechCrunch. OpenAI says developers building GPTs will have to review the company’s updated usage policies and GPT brand guidelines to ensure their GPTs are compliant before they’re eligible for listing in the GPT Store. OpenAI’s update notably didn’t include any information on the expected monetization opportunities for developers listing their apps on the storefront.

GPT Store launching next week – OpenAI pic.twitter.com/I6mkZKtgZG — Manish Singh (@refsrc) January 4, 2024

OpenAI moves to shrink regulatory risk in EU around data privacy

In an email, OpenAI detailed an incoming update to its terms, including changing the OpenAI entity providing services to EEA and Swiss residents to OpenAI Ireland Limited. The move appears to be intended to shrink its regulatory risk in the European Union, where the company has been under scrutiny over ChatGPT’s impact on people’s privacy.

What is ChatGPT? How does it work?

ChatGPT is a general-purpose chatbot that uses artificial intelligence to generate text after a user enters a prompt, developed by tech startup OpenAI . The chatbot uses GPT-4, a large language model that uses deep learning to produce human-like text.

When did ChatGPT get released?

November 30, 2022 is when ChatGPT was released for public use.

What is the latest version of ChatGPT?

Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4o .

Can I use ChatGPT for free?

There is a free version of ChatGPT that only requires a sign-in in addition to the paid version, ChatGPT Plus .

Who uses ChatGPT?

Anyone can use ChatGPT! More and more tech companies and search engines are utilizing the chatbot to automate text or quickly answer user questions/concerns.

What companies use ChatGPT?

Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool .

Most recently, Microsoft announced at it’s 2023 Build conference that it is integrating it ChatGPT-based Bing experience into Windows 11. A Brooklyn-based 3D display startup Looking Glass utilizes ChatGPT to produce holograms you can communicate with by using ChatGPT.  And nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users to help onboard into the web3 space.

What does GPT mean in ChatGPT?

GPT stands for Generative Pre-Trained Transformer.

What is the difference between ChatGPT and a chatbot?

A chatbot can be any software/system that holds dialogue with you/a person but doesn’t necessarily have to be AI-powered. For example, there are chatbots that are rules-based in the sense that they’ll give canned responses to questions.

ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt.

Can ChatGPT write essays?

Can chatgpt commit libel.

Due to the nature of how these models work , they don’t know or care whether something is true, only that it looks true. That’s a problem when you’re using it to do your homework, sure, but when it accuses you of a crime you didn’t commit, that may well at this point be libel.

We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest moving target in the industry.

Does ChatGPT have an app?

Yes, there is a free ChatGPT mobile app for iOS and Android users.

What is the ChatGPT character limit?

It’s not documented anywhere that ChatGPT has a character limit. However, users have noted that there are some character limitations after around 500 words.

Does ChatGPT have an API?

Yes, it was released March 1, 2023.

What are some sample everyday uses for ChatGPT?

Everyday examples include programing, scripts, email replies, listicles, blog ideas, summarization, etc.

What are some advanced uses for ChatGPT?

Advanced use examples include debugging code, programming languages, scientific concepts, complex problem solving, etc.

How good is ChatGPT at writing code?

It depends on the nature of the program. While ChatGPT can write workable Python code, it can’t necessarily program an entire app’s worth of code. That’s because ChatGPT lacks context awareness — in other words, the generated code isn’t always appropriate for the specific context in which it’s being used.

Can you save a ChatGPT chat?

Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. There are no built-in sharing features yet.

Are there alternatives to ChatGPT?

Yes. There are multiple AI-powered chatbot competitors such as Together , Google’s Gemini and Anthropic’s Claude , and developers are creating open source alternatives .

How does ChatGPT handle data privacy?

OpenAI has said that individuals in “certain jurisdictions” (such as the EU) can object to the processing of their personal information by its AI models by filling out  this form . This includes the ability to make requests for deletion of AI-generated references about you. Although OpenAI notes it may not grant every request since it must balance privacy requests against freedom of expression “in accordance with applicable laws”.

The web form for making a deletion of data about you request is entitled “ OpenAI Personal Data Removal Request ”.

In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on “legitimate interest” (LI), pointing users towards more information about requesting an opt out — when it writes: “See here  for instructions on how you can opt out of our use of your information to train our models.”

What controversies have surrounded ChatGPT?

Recently, Discord announced that it had integrated OpenAI’s technology into its bot named Clyde where two users tricked Clyde into providing them with instructions for making the illegal drug methamphetamine (meth) and the incendiary mixture napalm.

An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service.

CNET found itself in the midst of controversy after Futurism reported the publication was publishing articles under a mysterious byline completely generated by AI. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even if the information was incorrect.

Several major school systems and colleges, including New York City Public Schools , have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with .

There have also been cases of ChatGPT falsely accusing individuals of crimes.

Where can I find examples of ChatGPT prompts?

Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day.

Can ChatGPT be detected?

Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they're inconsistent at best.

Are ChatGPT chats public?

No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users’ conversations to other people on the service.

What lawsuits are there surrounding ChatGPT?

None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT.

Are there issues regarding plagiarism with ChatGPT?

Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.



Using ChatGPT to Write a College Essay | Tips & Examples

Published on June 26, 2023 by Koen Driessen. Revised on December 8, 2023.

A good college essay helps to distinguish your application from those of other candidates. It conveys your experiences, values, and character traits and indicates why you would make a good addition to the university you’re applying to.

Your college essay should be your own work. However, AI tools like ChatGPT may be helpful during the preparation and revision stages of your writing.

You can also use the essay checker, so you don't have to revise your writing yourself.

You can use these tools to:

  • Brainstorm essay ideas
  • Develop an outline
  • Generate feedback


Table of contents

  • Brainstorming ideas
  • Developing an outline
  • Generating feedback
  • Limitations of using ChatGPT
  • Other interesting articles
  • Frequently asked questions

Your college essay topic should reveal something distinctive about your personality and have a unique angle.

Brainstorming a range of potential ideas is a good place to start. This can help to stimulate creative thinking and prompt self-reflection. It can also help you draw connections between seemingly unrelated experiences or ideas.

You can use ChatGPT to generate questions to help you brainstorm.

Example: Using ChatGPT to brainstorm ideas

  • Generate 10 questions to help me brainstorm topics for my college admission essay.
  • How has a particular life experience shaped your perspective on a specific issue or topic?
  • What challenges or obstacles have you faced, and how have they influenced your personal growth?
  • How has your involvement in extracurricular activities impacted your character development and contributed to your identity?
  • In what ways have you demonstrated leadership skills and made a positive impact on your community?
  • Describe a significant academic achievement or project that showcases your intellectual curiosity and passion.
  • How have your cultural background and family values influenced your aspirations and goals for the future?
  • Reflect on a time when you had to step outside of your comfort zone to achieve a meaningful outcome. What did you learn from this experience?
  • How do you envision using your education to make a difference in the world or contribute to a specific field or industry?
  • What do you believe sets you apart from other applicants, and how would your unique perspective enrich the college community?
  • Discuss a book, artwork, or scientific discovery that has profoundly impacted your thinking and helped shape your intellectual interests.


College admission essays don’t follow one predefined structure, so you can approach them creatively and incorporate storytelling techniques.

Two common approaches to structuring your essay are the narrative structure (a chronological narrative that shows your character development over time) and the vignette or montage structure (multiple scenes connected by a common topic or theme).

Regardless of the structure you choose, it’s important to establish a coherent narrative and ensure that your essay flows smoothly.

You can use ChatGPT as a source of inspiration and to generate potential ideas on how to structure your college essay . Do this by prompting the tool to create an outline using your chosen topic. You can make the prompt as specific as you like, mentioning any facts or viewpoints that should be incorporated into the output.

Examples: Using ChatGPT to generate an essay outline

  • Provide a very short outline for a college admission essay. The essay will be about my experience working at an animal shelter. The essay will be 500 words long.
  • Hook: Share a brief and engaging anecdote about your experience at the animal shelter.
  • Thesis statement: Working at an animal shelter has been a transformative experience, teaching me valuable lessons about compassion, resilience, and community impact.
  • Describe your first impressions and emotions upon starting work at the shelter.
  • Discuss the challenges you encountered, such as the condition of the animals or the emotional toll of their situations.
  • Highlight the relationships you formed with the animals and their impact on you.
  • Discuss specific instances where you witnessed the transformative power of love and care on the animals’ well-being.
  • Explore the collaborative efforts within the shelter community.
  • Discuss how working with others led to positive outcomes and inspired you to make a difference.
  • Reflect on how working at the shelter has shaped your character and values.
  • Discuss the skills and lessons learned that are applicable beyond the shelter environment.
  • Connect your experiences to your future goals and aspirations.
  • Summarize the main points discussed in the essay.
  • Reinforce the transformative nature of your experience and your commitment to making a positive impact in the future.

You can use ChatGPT to help revise your essay before submitting it to ensure that it’s of a high quality. However, ChatGPT cannot provide the same kind of informed feedback as a human, and it’s not able to adequately judge qualities such as vulnerability and authenticity. For this reason, it’s important to also ask for feedback from two to three people who have experience with college essays and who know you well.

Nevertheless, you can use ChatGPT to generate feedback on certain aspects, such as consistency of tone, clarity of structure, and grammar and punctuation. You can also use  Scribbr’s free paraphrasing tool to help vary the tone of your writing and Scribbr’s free grammar checker to ensure your text is free of grammar, spelling, and punctuation errors.

Alternatively, you can seek the advice of a human essay coach or editor. We recommend using Scribbr’s essay editing service .

Example: Using ChatGPT to get feedback

  • Provide constructive feedback on how to improve the above essay with regard to appropriateness of tone, clarity of structure, and grammar and punctuation.
  • The tone of the essay is generally appropriate, conveying a sense of personal growth and passion. However, consider injecting more enthusiasm and emotion into certain parts of the essay to make it more engaging and captivating.
  • The overall structure of the essay is clear and follows a logical progression. However, it would benefit from a stronger introduction that hooks the reader and clearly outlines the main points to be discussed.
  • Consider using clear topic sentences at the beginning of each paragraph to provide a preview of the main idea or argument that will be presented.
  • Pay attention to subject-verb agreement and verb tenses throughout the essay to ensure consistency.
  • Proofread the essay to eliminate any repetitive phrases or sentences.
  • Check for the correct use of commas and other punctuation marks, ensuring they are used appropriately to enhance clarity and flow.

Additionally, here are a few constructive suggestions for improvement:

  • Expand on the ways in which working at the animal shelter influenced your future aspirations and academic/professional pursuits.
  • Be mindful of using diverse vocabulary and sentence structures to enhance the overall quality of the essay.

While ChatGPT can help you brainstorm, outline, and revise your essay, having it write your essay for you can negatively impact your application in several ways.

Unoriginal content

ChatGPT responses are based on a wide range of sources and sometimes resemble existing texts. Furthermore, they are not always unique, since other users may receive the same responses to similar prompts.

As the point of a college essay is to showcase your unique experiences and perspective and make you stand out from other candidates, having ChatGPT write your essay is not a good idea.

Formulaic style

ChatGPT outputs don’t reflect your personality, are highly formulaic, and contain many clichés (e.g., outputs often use hackneyed phrases like “burning desire,” “insatiable curiosity,” and “thirst for knowledge”).

Furthermore, ChatGPT outputs often “tell” rather than “show” (i.e., they state a fact instead of illustrating it using examples and vivid imagery).

Lack of personal insight

Human-written text tends to be more unpredictable and contain more creative language choices than AI-generated writing.

While the connections you make in your writing should not be nonsensical, drawing unique and unexpected connections between different experiences can help show character development and display your creativity and critical thinking skills.

ChatGPT can’t do this. Furthermore, it can’t express authentic emotion or vulnerability about specific memories that are, after all, your memories, not ChatGPT’s.

Risk of plagiarism

Passing off AI-generated text as your own work is usually considered plagiarism (or at least academic dishonesty ). AI detectors may be used to detect this offense.

It’s highly unlikely that a university will accept your application if you are caught submitting an AI-generated college essay.

If you want more tips on using AI tools , understanding plagiarism , and citing sources , make sure to check out some of our other articles with explanations, examples, and formats.

  • Citing ChatGPT
  • Best grammar checker
  • Best paraphrasing tool
  • ChatGPT in your studies
  • Is ChatGPT trustworthy?
  • Types of plagiarism
  • Self-plagiarism
  • Avoiding plagiarism
  • Academic integrity
  • Best plagiarism checker

Citing sources

  • Citation styles
  • In-text citation
  • Citation examples
  • Annotated bibliography

No, having ChatGPT write your college essay can negatively impact your application in numerous ways. ChatGPT outputs are unoriginal and lack personal insight.

Furthermore, passing off AI-generated text as your own work is considered academically dishonest. AI detectors may be used to detect this offense, and it's highly unlikely that any university will accept you if you are caught submitting an AI-generated admission essay.

However, you can use ChatGPT to help write your college essay during the preparation and revision stages (e.g., for brainstorming ideas and generating feedback).

Yes, you can use ChatGPT to help write your college essay by having it generate feedback on certain aspects of your work (consistency of tone, clarity of structure, etc.).

However, ChatGPT is not able to adequately judge qualities like vulnerability and authenticity. For this reason, it’s important to also ask for feedback from people who have experience with college essays and who know you well. Alternatively, you can get advice using Scribbr’s essay editing service .


AI bot ChatGPT writes smart essays — should academics worry?

Sandra Wachter

Sandra Wachter, BKC Faculty Associate, discusses ChatGPT and the concerns it raises for academics and education.

The situation both worries and excites Sandra Wachter, who studies technology and regulation at the Oxford Internet Institute, UK. "I'm really impressed by the capability," she says. But she's concerned about the potential effect on human knowledge and ability. If students start to use ChatGPT, they will be outsourcing not only their writing, but also their thinking.

She’s hopeful that education providers will adapt. “Whenever there’s a new technology, there’s a panic around it,” she says. “It’s the responsibility of academics to have a healthy amount of distrust — but I don’t feel like this is an insurmountable challenge.”

Read more in Nature.



AI bot ChatGPT stuns academics with essay-writing skills and usability

Latest chatbot from Elon Musk-founded OpenAI can identify incorrect premises and refuse to answer inappropriate requests

Professors, programmers and journalists could all be out of a job in just a few years, after the latest chatbot from the Elon Musk-founded OpenAI foundation stunned onlookers with its writing ability, proficiency at complex tasks, and ease of use.

The system, called ChatGPT, is the latest evolution of the GPT family of text-generating AIs. Two years ago, the team’s previous AI, GPT3, was able to generate an opinion piece for the Guardian , and ChatGPT has significant further capabilities.

In the days since it was released, academics have generated responses to exam queries that they say would result in full marks if submitted by an undergraduate, and programmers have used the tool to solve coding challenges in obscure programming languages in a matter of seconds – before writing limericks explaining the functionality.

Dan Gillmor, a journalism professor at Arizona State University, asked the AI to handle one of the assignments he gives his students: writing a letter to a relative giving advice regarding online security and privacy. “If you’re unsure about the legitimacy of a website or email, you can do a quick search to see if others have reported it as being a scam,” the AI advised in part.

“I would have given this a good grade,” Gillmor said. “Academia has some very serious issues to confront.”

OpenAI said the new AI was created with a focus on ease of use. “The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests,” OpenAI said in a post announcing the release.

Unlike previous AI from the company, ChatGPT has been released for anyone to use, for free, during a "feedback" period. The company hopes to use this feedback to improve the final version of the tool.

ChatGPT is good at self-censoring, and at realising when it is being asked an impossible question. Asked, for instance, to describe what happened when Columbus arrived in America in 2015, older models may have willingly presented an entirely fictitious account, but ChatGPT recognises the falsehood and warns that any answer would be fictional.

The bot is also capable of refusing to answer queries altogether. Ask it for advice on stealing a car, for example, and the bot will say that “stealing a car is a serious crime that can have severe consequences”, and instead give advice such as “using public transportation”.

But the limits are easy to evade. Ask the AI instead for advice on how to beat the car-stealing mission in a fictional VR game called Car World and it will merrily give users detailed guidance on how to steal a car, and answer increasingly specific questions on problems like how to disable an immobiliser, how to hotwire the engine, and how to change the licence plates – all while insisting that the advice is only for use in the game Car World.

The AI is trained on a huge sample of text taken from the internet, generally without explicit permission from the authors of the material used. That has led to controversy, with some arguing that the technology is most useful for “copyright laundering” – making works derivative of existing material without breaking copyright.

One unusual critic was Elon Musk, who co-founded OpenAI in 2015 before parting ways in 2017 due to conflicts of interest between the organisation and Tesla. In a post on Twitter on Sunday, Musk revealed that the organisation "had access to [the] Twitter database for training", but that he had "put that on pause for now".

“Need to understand more about governance structure & revenue plans going forward,” Musk added. “OpenAI was started as open-source & non-profit. Neither are still true.”


New bot ChatGPT will force colleges to get creative to prevent cheating, experts say

After its viral launch last week, the chatbot ChatGPT was lauded online by some as a dramatic step forward for artificial intelligence and the potential future of web search.

But with such praise also came concern regarding its potential usage in academic settings. Could the chatbot, which provides coherent, quirky and conversational responses to simple language inquiries, inspire more students to cheat?

Students have been able to cheat on assignments using the internet for decades, giving rise to tools meant to check if their work was original. But the fear now is that ChatGPT could render those resources obsolete.

Already, some people online have tested out whether it's possible to have the bot complete an assignment. "holyyyy, solved my computer networks assignment using chatGPT," one person, who later clarified the assignment was old, tweeted. Others suggested that its existence could result in the death of the college essay. One technologist went as far as saying that with ChatGPT, "College as we know it will cease to exist."

Artificial intelligence company OpenAI, which developed ChatGPT, did not immediately respond to a request for comment regarding cheating concerns.

However, several experts who teach in the field of AI and humanities said the chatbot, while impressive, is not something they’re ready to sound the alarm about when it comes to possible widespread student cheating.

"We’re not there, but we’re also not that far away," said Andrew Piper, a professor of language, literatures and culture and a professor of AI and storytelling at McGill University. "We’re definitely not at the stage of like, out-of-the-box, it’ll write a bunch of student essays and no one will be able to tell the difference."

Piper and other experts who spoke with NBC News likened the fear around cheating and ChatGPT to concerns that arose when the calculator was invented, when people thought it would be the death of humans learning math.

Lauren Klein, an associate professor in the Departments of English and Quantitative Theory and Methods at Emory University, even compared the panic to the philosopher Plato’s fears that writing would dissolve human memory.

“There’s always been this concern that technologies will do away with what people do best, and the reality is that people have had to learn how to use these technologies to enhance what they do best,” Klein said.

Academic institutions will need to get creative and find ways to integrate new technologies like ChatGPT into their curriculum just like they did during the rise of the calculator, Piper noted.

In reality, AI tools like ChatGPT could actually be used to enhance education, according to Paul Fyfe, an associate professor of English at North Carolina State University.

He said there’s plenty of room for collaboration between AI and educators.

“It’s important to be talking about this right now and to bring students into the conversation," Fyfe said. "Rather than try to legislate from the get-go that this is strange and scary, therefore we need to shut it down."

And some teachers are already embracing AI programs in the classroom.

Piper, who runs .txtlab, a research laboratory for artificial intelligence and storytelling, said he’s had students analyze AI writing and found they can often tell which papers were written by a machine and which were written by a human.

As for educators who are concerned about the rise of AI, Fyfe and Piper said the technology is already used in many facets of education.

Computer-assisted writing tools, such as Grammarly or Google Docs' Smart Compose, already exist — and have long been utilized by many students. Platforms like Grammarly and Chegg also offer plagiarism checker tools, so both students and teachers can assess if an essay has been, in part or in total, lifted from somewhere else. A spokesperson for Grammarly did not return a request for comment. A spokesperson for Chegg declined to comment.

Those who spoke with NBC News said they're not aware of any technology that detects if an AI wrote an essay, but they predict that someone will soon capitalize on building that technology.

As of right now, Piper said the best defense against AI essays is teachers getting to know their students and how they write in order to catch a discrepancy in the work they're turning in.

When an AI does reach the level of meeting all the requirements of academic assignments and if students use that technology to coast through college, Piper warned that could be a major detriment to students' education.

For now, he suggested an older technology to combat fears of students using ChatGPT to cheat.

"It will reinvigorate the love of pen and paper," he said.


Kalhan Rosenblatt is a reporter covering youth and internet culture for NBC News, based in New York.

The College Essay Is Dead

Nobody is prepared for how AI will transform academia.


Suppose you are a professor of pedagogy, and you assign an essay on learning styles. A student hands in an essay with the following opening paragraph:

The construct of “learning styles” is problematic because it fails to account for the processes through which learning styles are shaped. Some students might develop a particular learning style because they have had particular experiences. Others might develop a particular learning style by trying to accommodate to a learning environment that was not well suited to their learning needs. Ultimately, we need to understand the interactions among learning styles and environmental and personal factors, and how these shape how we learn and the kinds of learning we experience.

Pass or fail? A- or B+? And how would your grade change if you knew a human student hadn’t written it at all? Because Mike Sharples, a professor in the U.K., used GPT-3, a large language model from OpenAI that automatically generates text from a prompt, to write it. (The whole essay, which Sharples considered graduate-level, is available, complete with references, here.) Personally, I lean toward a B+. The passage reads like filler, but so do most student essays.

Sharples’s intent was to urge educators to “rethink teaching and assessment” in light of the technology, which he said “could become a gift for student cheats, or a powerful teaching assistant, or a tool for creativity.” Essay generation is neither theoretical nor futuristic at this point. In May, a student in New Zealand confessed to using AI to write their papers, justifying it as a tool like Grammarly or spell-check: “I have the knowledge, I have the lived experience, I’m a good student, I go to all the tutorials and I go to all the lectures and I read everything we have to read but I kind of felt I was being penalised because I don’t write eloquently and I didn’t feel that was right,” they told a student paper in Christchurch. They don’t feel like they’re cheating, because the student guidelines at their university state only that you’re not allowed to get somebody else to do your work for you. GPT-3 isn’t “somebody else”—it’s a program.

The world of generative AI is progressing furiously. Last week, OpenAI released an advanced chatbot named ChatGPT that has spawned a new wave of marveling and hand-wringing, plus an upgrade to GPT-3 that allows for complex rhyming poetry; Google previewed new applications last month that will allow people to describe concepts in text and see them rendered as images; and the creative-AI firm Jasper received a $1.5 billion valuation in October. It still takes a little initiative for a kid to find a text generator, but not for long.

The essay, in particular the undergraduate essay, has been the center of humanistic pedagogy for generations. It is the way we teach children how to research, think, and write. That entire tradition is about to be disrupted from the ground up. Kevin Bryan, an associate professor at the University of Toronto, tweeted in astonishment about OpenAI’s new chatbot last week: “You can no longer give take-home exams/homework … Even on specific questions that involve combining knowledge across domains, the OpenAI chat is frankly better than the average MBA at this point. It is frankly amazing.” Neither the engineers building the linguistic tech nor the educators who will encounter the resulting language are prepared for the fallout.

A chasm has existed between humanists and technologists for a long time. In the 1950s, C. P. Snow gave his famous lecture, later the essay “The Two Cultures,” describing the humanistic and scientific communities as tribes losing contact with each other. “Literary intellectuals at one pole—at the other scientists,” Snow wrote. “Between the two a gulf of mutual incomprehension—sometimes (particularly among the young) hostility and dislike, but most of all lack of understanding. They have a curious distorted image of each other.” Snow’s argument was a plea for a kind of intellectual cosmopolitanism: Literary people were missing the essential insights of the laws of thermodynamics, and scientific people were ignoring the glories of Shakespeare and Dickens.

The rupture that Snow identified has only deepened. In the modern tech world, the value of a humanistic education shows up in evidence of its absence. Sam Bankman-Fried, the disgraced founder of the crypto exchange FTX who recently lost his $16 billion fortune in a few days, is a famously proud illiterate. “I would never read a book,” he once told an interviewer. “I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that.” Elon Musk and Twitter are another excellent case in point. It’s painful and extraordinary to watch the ham-fisted way a brilliant engineering mind like Musk deals with even relatively simple literary concepts such as parody and satire. He obviously has never thought about them before. He probably didn’t imagine there was much to think about.

The extraordinary ignorance on questions of society and history displayed by the men and women reshaping society and history has been the defining feature of the social-media era. Apparently, Mark Zuckerberg has read a great deal about Caesar Augustus, but I wish he’d read about the regulation of the pamphlet press in 17th-century Europe. It might have spared America the annihilation of social trust.

These failures don’t derive from mean-spiritedness or even greed, but from a willful obliviousness. The engineers do not recognize that humanistic questions—like, say, hermeneutics or the historical contingency of freedom of speech or the genealogy of morality—are real questions with real consequences. Everybody is entitled to their opinion about politics and culture, it’s true, but an opinion is different from a grounded understanding. The most direct path to catastrophe is to treat complex problems as if they’re obvious to everyone. You can lose billions of dollars pretty quickly that way.

As the technologists have ignored humanistic questions to their peril, the humanists have greeted the technological revolutions of the past 50 years by committing soft suicide. As of 2017, the number of English majors had nearly halved since the 1990s. History enrollments have declined by 45 percent since 2007 alone. Needless to say, humanists’ understanding of technology is partial at best. The state of digital humanities is always several categories of obsolescence behind, which is inevitable. (Nobody expects them to teach via Instagram Stories.) But more crucially, the humanities have not fundamentally changed their approach in decades, despite technology altering the entire world around them. They are still exploding meta-narratives like it’s 1979, an exercise in self-defeat.

Read: The humanities are in crisis

Contemporary academia engages, more or less permanently, in self-critique on any and every front it can imagine. In a tech-centered world, language matters, voice and style matter, the study of eloquence matters, history matters, ethical systems matter. But the situation requires humanists to explain why they matter, not constantly undermine their own intellectual foundations. The humanities promise students a journey to an irrelevant, self-consuming future; then they wonder why their enrollments are collapsing. Is it any surprise that nearly half of humanities graduates regret their choice of major?

The case for the value of humanities in a technologically determined world has been made before. Steve Jobs always credited a significant part of Apple’s success to his time as a dropout hanger-on at Reed College, where he fooled around with Shakespeare and modern dance, along with the famous calligraphy class that provided the aesthetic basis for the Mac’s design. “A lot of people in our industry haven’t had very diverse experiences. So they don’t have enough dots to connect, and they end up with very linear solutions without a broad perspective on the problem,” Jobs said. “The broader one’s understanding of the human experience, the better design we will have.” Apple is a humanistic tech company. It’s also the largest company in the world.

Despite the clear value of a humanistic education, its decline continues. Over the past 10 years, STEM has triumphed, and the humanities have collapsed. The number of students enrolled in computer science is now nearly the same as the number of students enrolled in all of the humanities combined.

And now there’s GPT-3. Natural-language processing presents the academic humanities with a whole series of unprecedented problems. Practical matters are at stake: Humanities departments judge their undergraduate students on the basis of their essays. They give Ph.D.s on the basis of a dissertation’s composition. What happens when both processes can be significantly automated? Going by my experience as a former Shakespeare professor, I figure it will take 10 years for academia to face this new reality: two years for the students to figure out the tech, three more years for the professors to recognize that students are using the tech, and then five years for university administrators to decide what, if anything, to do about it. Teachers are already some of the most overworked, underpaid people in the world. They are already dealing with a humanities in crisis. And now this. I feel for them.

And yet, despite the drastic divide of the moment, natural-language processing is going to force engineers and humanists together. They are going to need each other despite everything. Computer scientists will require basic, systematic education in general humanism: The philosophy of language, sociology, history, and ethics are not amusing questions of theoretical speculation anymore. They will be essential in determining the ethical and creative use of chatbots, to take only an obvious example.

The humanists will need to understand natural-language processing because it’s the future of language, but also because there is more than just the possibility of disruption here. Natural-language processing can throw light on a huge number of scholarly problems. It is going to clarify matters of attribution and literary dating that no system ever devised will approach; the parameters in large language models are much more sophisticated than the current systems used to determine which plays Shakespeare wrote, for example. It may even allow for certain types of restorations, filling the gaps in damaged texts by means of text-prediction models. It will reformulate questions of literary style and philology; if you can teach a machine to write like Samuel Taylor Coleridge, that machine must be able to inform you, in some way, about how Samuel Taylor Coleridge wrote.

The connection between humanism and technology will require people and institutions with a breadth of vision and a commitment to interests that transcend their field. Before that space for collaboration can exist, both sides will have to take the most difficult leaps for highly educated people: Understand that they need the other side, and admit their basic ignorance. But that’s always been the beginning of wisdom, no matter what technological era we happen to inhabit.



Did a Fourth Grader Write This? Or the New Chatbot?

By Claire Cain Miller, Adam Playford, Larry Buchanan and Aaron Krolik, Dec. 26, 2022

Don’t be surprised if you can’t always tell. Neither could two teachers, a professor, nor even the renowned children's author Judy Blume.

“I’m just gonna say it’s a student and prepare for my soul to be crushed.”


It’s hard to fully grasp the enormous potential of ChatGPT, a new artificial intelligence chatbot released last month. The bot doesn’t just search and summarize information that already exists. It creates new content, tailored to your request, often with a startling degree of nuance, humor and creativity. Most of us have never seen anything like it outside of science fiction.

To better understand what ChatGPT can do, we decided to see if people could tell the difference between the bot’s writing and a child’s.

We used real essay prompts from the National Assessment of Educational Progress (the standardized test from the Department of Education, known as the nation’s report card). We asked the bot to produce essays based on those prompts — sometimes with a little coaching, and always telling it to write like a student of the appropriate age. We put what it wrote side by side with sample answers written by real children.

We asked some experts on children’s writing to take our variation on the Turing test, live on a call with us. They were a fourth-grade teacher; a professional writing tutor; a Stanford education professor; and Judy Blume, the beloved children’s author. None of them could tell every time whether a child or a bot wrote the essay. See how you do.


How to Make ChatGPT Write Longer


Are you wondering, “How do I get ChatGPT to write longer?” Well, you’re not the only one.

The chatbot’s word limit is approximately 500 words per answer. Many students use this tool to generate essay samples, and they often need texts that are much longer than the word limit permits.

If you want to know how to overcome this issue, this article is for you. It will examine ways to make ChatGPT come up with longer texts and explain how to utilize AI tools as assistants. Ready? Let’s roll!

Disclaimer: We don’t encourage using ChatGPT to cheat or create academic papers. This article will discuss ethical ways of using AI.

  • ✍️ Different Essay Lengths
  • 🤔 How to Make ChatGPT Write Longer
  • 🚀 Other Ways to Expand Your Texts
  • 🙅‍♀️ 12 Mistakes to Avoid
  • 🎁 Extending Your Essay Without AI
  • 🔗 References

✍️ Different Essay Lengths (by Study Level and Genre)

The length of an academic paper depends on the topic’s complexity, the essay’s type, and the guidelines given by your instructor. Before you start working on a project, consider these factors to ensure it meets the requirements.


Here’s how an essay’s length differs depending on the academic level:

  • Middle school papers are normally simple and short, between 300 and 1000 words long.
  • High school-level essays range between 300 and 1000 words.
  • Undergraduate program assignments often fall within the range of 1500 to 5000 words.
  • Graduate program papers are often specialized and focused on research. Their word count ranges from 3000 to 10,000 words.

If we’re talking about essay genres, this is what you should keep in mind:

  • Argumentative: 1500-4000 words
  • Narrative: 800-1500 words
  • Compare-contrast: 1000-3000 words
  • Descriptive: 500-1000 words
  • Expository: 1000-3000 words
  • Persuasive: 1000-3000 words
  • College admission: 200-650 words

As you can see, many essay types run right up against the maximum number of words ChatGPT will produce in a single answer, and some are even longer than that. If you wish to generate a sample essay for inspiration, you’ll have to use hacks and tricks to make the chatbot write more. We’ll tell you all about them in the following sections.

🤔 How to Actually Make ChatGPT Write Longer

Now, let’s take a look at how to make ChatGPT write longer stories, code, essays, and other forms of content.


Writing a Prompt That Makes ChatGPT Write Longer

ChatGPT’s answers are only as good as your prompts. If you learn how to compose them properly, you will have no problem bypassing the chatbot’s word limit.

Check out the top 11 tried-and-true tips to get longer answers from ChatGPT below:

1. Make Your Prompt Specific and Logical

The scope of the bot’s response depends on how precise your prompt is. Try to use straightforward language and avoid fluff in your requests.

2. Request Examples

Examples add context and clarity to a text. Ask the algorithm to provide them, and it will add real-life instances and details, extending the response.

3. Provide Specific Word Count

One way to get ChatGPT to create more extended essays is to specify a word count. For instance, instead of typing “write an article about France,” you can say “write an 800-word article about the French Revolution.” Additional details will guide the tool and help it produce a longer response.

4. Break Down the Prompt

Another good strategy is to divide your instructions into several parts. This lets the algorithm take more things into account.

5. Ask Open-Ended Questions

Open-ended questions invite deeper discussion. If you use them with ChatGPT, it’ll provide you with a detailed reply instead of a brief answer.

6. Set a Personality

You will get better and more detailed results if you get the chatbot to behave as a particular persona. For instance, if you work on an argumentative essay, you can ask the AI to act as an animal rights activist who argues that animal testing should be banned.
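To illustrate how several of these tips (a specific word count, an open-ended question, and a persona) can be combined in one request, here's a hedged sketch using the openai Python client; the topic, word count, and model name are assumptions chosen for the example, not recommendations:

```python
# Sketch: one prompt combining a persona, a target word count, and an open-ended question.
# Assumes the openai v1.x Python client with an API key in the environment; details are illustrative.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Act as an animal rights activist. "
    "Write an 800-word argumentative essay on why animal testing should be banned. "
    "What are the strongest counterarguments, and why do they fall short?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```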

Upgrade Your Prompt with These Tricks

So, you’ve entered a well-written prompt, but you may still need to tweak the response to make it fit your requirements. Here’s what you can do as a follow-up to your query:

7. Regenerate a Response

Sometimes, to get a lengthier answer, all you need to do is click the “regenerate response” button. If the rewritten version is still too short, regenerate it again.

8. Ask to Elaborate

Another option is to throw in follow-up requests like “elaborate more on this.” The chatbot will then give you a fuller, more detailed answer than the previous one.

9. Ask to Explain & Continue

If ChatGPT hits its word limit and leaves the answer incomplete, use “continue” or “go on” in your next request to nudge the algorithm to keep generating.
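If you're scripting this against the API rather than typing into the web interface, the same trick amounts to feeding the partial answer back into the conversation and asking the model to continue. Here's a rough sketch under the same assumptions as the earlier snippets (openai v1.x client, illustrative model name and prompt):

```python
# Sketch: stitching a longer text together by repeatedly asking the model to continue.
# Assumes the openai v1.x Python client; model name, prompt, and loop count are illustrative.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "user", "content": "Write a 2,000-word essay on the history of the printing press."}
]
essay_parts = []

for _ in range(3):  # the initial answer plus up to two continuations
    response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    chunk = response.choices[0].message.content
    essay_parts.append(chunk)
    # Feed the partial answer back and nudge the model to keep going.
    messages.append({"role": "assistant", "content": chunk})
    messages.append({"role": "user", "content": "Continue exactly where you left off."})

print("\n".join(essay_parts))
```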

10. Provide Extra Questions

You can incorporate follow-up questions to maintain engagement and encourage the AI model to expand on the responses.

11. Inquire a Comparison

This is a handy option when working on an analysis. When you ask ChatGPT to compare two things, the output will likely be comprehensive and detailed. Comparisons involve benefits, drawbacks, and other dimensions that will make the response longer and your study more comprehensive.

The following tools will help you improve your analysis even further:

  • Rhetorical device finder will locate ethos, pathos, and logos in a piece of text.
  • Quote explainer will assist in understanding famous sayings or literary quotations.
  • Poem analyzer will be a great helper whenever you need to understand literary devices in poetry.

Now, you might be curious whether there are ways to extend a text without help from ChatGPT. The answer is—there definitely are! More and more tools with great features are being designed to make writing easier. Check out the options below to learn more.

IvyPanda Essay Extender

IvyPanda’s essay extender is specifically designed to help users reach the desired number of words. All you need to do is copy the text into the box, choose the word count you want, and press the “extend” button.

The tool can also be used to generate additional segments for an essay. For instance, if you struggle with the conclusion, the tool can create a sample paragraph based on your prompt.

Is IvyPanda’s extender better than ChatGPT? Well, there are plenty of aspects that make our tool a superior option:

  • Unlike the chatbot, our tool generates its answers immediately and never glitches.
  • Our generator is easy to apply: there’s no need to write long instructions and explain everything to get relevant responses.

Try it and see the benefits for yourself!

Question Generator from Text

IvyPanda’s AI question generator from text is super handy whenever you need to extend your text. This tool creates a list of questions related to the text you paste into it. These questions will help you look at the subject from a different perspective and find new ideas with which to lengthen your essay.

Of course, you can also use ChatGPT for this task. However, the chatbot tends to get too general with the topic and may generate irrelevant questions.

In contrast, our question generator will develop just the right number of questions. You can then feed them into ChatGPT to get sample answers and use them to extend your essay.

Feel free to use other smart tools developed by IvyPanda to add parts to your essay:

  • Essay hook generator will help you come up with a catchy opening paragraph.
  • Research introduction maker will generate the background of your study.
  • Free ChatGPT essay writer will provide you with additional samples.
  • Abstract generator is a great tool for creating a summary of a research paper.


ChatGPT Playground

ChatGPT Playground is powered by OpenAI’s API services. It lets users experiment with natural language models, parameters, and more without writing code.

Here are some of the tool’s main features:

  • Temperature controls how random the text that ChatGPT generates is. Assigning it a higher value will make a response more creative, while lowering it will result in plainer answers.
  • Top P is another way to control a text’s predictability. It determines how many candidate words can be considered to continue the phrase.
  • The maximum length parameter specifies the highest number of output tokens (the units of text generated by the AI).
  • Stop sequences let you implicitly control the length of the content the algorithm generates.
  • The frequency and presence penalty features let you control how often specific tokens appear in the text.
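These Playground controls correspond to parameters of the underlying API, so a Playground configuration can also be reproduced in code. Here's a hedged sketch using the openai v1.x Python client; every value shown is illustrative rather than a recommendation:

```python
# Sketch: Playground-style generation controls expressed as Chat Completions parameters.
# Assumes the openai v1.x Python client; all values below are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a short scene set in a lighthouse."}],
    temperature=1.2,        # higher values make the output more random and creative
    top_p=0.9,              # nucleus sampling: restricts how many candidate tokens are considered
    max_tokens=600,         # caps the length of the generated output, in tokens
    stop=["THE END"],       # generation halts if this sequence appears
    frequency_penalty=0.5,  # discourages repeating the same tokens too often
    presence_penalty=0.3,   # encourages introducing new tokens and topics
)

print(response.choices[0].message.content)
```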

ChatGPT Playground is a fantastic solution, but it’s not free. For the first 3 months, users have $18 of free credits to test the tool. Once they run out of credits, they have to purchase a subscription.

AI Services & Plugins

Even though ChatGPT is widely considered the best free generative AI, there are numerous other tools and extensions from which you can benefit. They will assist you when working with the chatbot or help you generate new information for your project.

Some great ChatGPT alternatives are tools like Gemini, Bing, and Perplexity AI. And in case you want to improve your prompts and get more relevant responses, check out services such as AI Prompt Genius, WebChatGPT, Merlin, ChatOnAI, and PromptPerfect.

We will describe some of our favorite solutions below – be sure to check them out!

Gemini is an excellent alternative to ChatGPT if you want to make an essay longer. It has impressive features that simply can’t be overlooked:

  • Multiple drafts. It gives more than one response to your input. Use any one you like to lengthen your text.
  • Real-time data. It provides you with the latest information and real-time data instantly.
  • Text input enriched with images. It can generate visuals and offer detailed descriptions of existing pictures or videos.
  • Adjustable responses. It allows you to customize answers by marking them as creative or straightforward, based on your preference.
  • Advanced language model. Gemini uses the LaMDA model, which aims to generate content that sounds very human-like. In contrast, ChatGPT uses GPT models, which focus on supplying information and tend to sound robotic.

Bing search engine has undergone impressive positive changes. Here are the features that make it so attractive:

  • Bing Compose . The feature provides ready-made options to help you develop a high-quality prompt.
  • Knowledge Cards 2.0 . The cards are displayed on the right side of your search. They provide additional information you can use for an essay, such as a timeline of related events.
  • Stories . AI-generated stories combine text, images, video, and audio to deliver information on your search.
  • AI-enhanced search . This feature offers additional information, widgets, and tools for every search.
  • Cutting-edge chatbot . Bing’s very own chatbot Copilot uses OpenAI’s language model that can easily generate high-quality texts.
  • Access to sources . Bing’s biggest advantage is that it provides sources it uses to generate answers. You can double-check them yourself and include them in an essay as references.

Perplexity AI

Perplexity AI is a search engine similar to ChatGPT, but it has some unique features that make all the difference:

  • The users can see where the information in the answer came from. It helps to evaluate the accuracy and allows you to quickly locate sources for your paper’s additional parts.
  • The tool uses NLP and machine learning algorithms, ensuring relevant results.
  • It has a related queries feature similar to Google’s “related searches.” It can help you find fresh ideas to incorporate into your essay and lengthen it.
  • By default, Perplexity AI uses the GPT-3 model, but you can access GPT-4 if you upgrade to Pro.
  • Unlike ChatGPT, Perplexity has access to the entire internet and can retrieve latest data.

AI Prompt Genius

You can think of AI Prompt Genius as a cheat sheet for prompts to apply when working with AI. Here are its benefits:

  • It gathers user-generated requests and sorts them into categories. This way, you can easily browse through topics such as business, academic writing, and others.
  • Another great feature is that you can customize and save recently used prompts to reuse them later.
  • Being able to access a variety of prompts will definitely improve your writing experience. So, if you need to write papers on a constant basis, this tool is a saver.

WebChatGPT is an extension that allows users to access ChatGPT through web browsers. It’s a convenient option if you want to use the chatbot’s capabilities while surfing the internet.

Check out its other benefits below:

  • This tool is compatible with all popular search platforms. If you’re trying to extend an essay, you may find it helpful that the bot answers questions right beside the search results.
  • The extension is also user-friendly when it comes to prompt features. You can access a library with one click and manage your own prompts.

ChatOnAI extension integrates ChatGPT right into your browsing window. It displays responses from the algorithm alongside search results. This feature lets you quickly gather ideas for additional content and run them through the chatbot, providing you with more content to extend your essay.

Another aspect that makes the tool super convenient is that it gives users access to top ChatGPT prompts for SEO, art, programming, marketing, and more. Templates at hand can significantly boost productivity and help you improve a paper.

PromptPerfect

PromptPerfect is a must-have tool for students, content creators, AI developers, or engineers.

The primary goal of PromptPerfect is to optimize prompts for language models like LLMs, LMs, and LMOps. Here are its main features and benefits:

  • It allows users to automatically enhance prompts for various AI models.
  • It supports SOTA AI models and lets users improve prompts for Stable Diffusion, ChatGPT, GPT-3/3.5, GPT-4, DALL-E 2, and MidJourney.
  • It lets users customize prompts – for example, to tailor them to yield longer responses.

Whether you’re a student or a writer, this Chrome extension is perfect if you’re looking to extend your generated text. With this tool, you’ll gain access to meticulously crafted prompt templates. You name a domain, and you get to it in just a click.

What makes AIPRM different from other extensions is the big prompt engineering community. This community has curated an extensive collection that covers various topics and is waiting to be explored.

🙅‍♀ How to NOT Get ChatGPT to Write Longer Essays – 12 Common Mistakes

Even though AI is getting smarter and may one day take over the world, it still remains a technology that carries out human requests and instructions. That’s also one of the reasons why it’s essential to create high-quality prompts to get relevant results when working with ChatGPT.

Whenever you work with the chatbot, make sure to avoid these widespread mistakes:

  • Requesting factual data from ChatGPT. What to do instead: An AI tool is not an encyclopedia. Use it to brainstorm but rely on traditional research methods to get facts.
  • Giving up too soon. What to do instead: If the results are not what you’ve envisioned, try rewording or simplifying your request.
  • Giving insufficient context. What to do instead: The more context you can give to the AI, the better.
  • Having too many instructions in a single prompt. What to do instead: Instead of putting everything in one request, split it into smaller ones.
  • Omitting examples. What to do instead: Add specific references to enhance the accuracy of ChatGPT’s responses.
  • Failing to guide the AI in a structured manner. What to do instead: Break down the writing process into stages.
  • Mixing different topics or tasks in one chat. What to do instead: Since ChatGPT remembers the context throughout a conversation, it’s best to use different chats for each topic.
  • Creating lengthy sessions that lead to irrelevant answers. What to do instead: When you see that a chat is getting too long, create a new one.
  • Set for the first output and limit the potential. What to do instead: Generate several responses and select the best one.
  • Confusing ChatGPT by being overly polite. What to do instead: ChatGPT is not a person, so use direct and unambiguous language instead of being mannerly.
  • Making prompts too broad. What to do instead: Be specific to ensure accurate results.
  • Never cleaning up the conversations. What to do instead: Remove old conversations you don’t plan to use and keep those you’ll refer to again.

🎁 How to Make Your Essay Longer Without AI: 7 Bonus Tips

So, we’ve talked about how to get ChatGPT to write more extended essays or use other AI tools to do the task. But what do you do if you need to lengthen a text without having access to any software?

Well, that’s not a problem at all! Check out this list of bonus tips that’ll help you create longer papers on your own:

  • Be descriptive. When your essay is short on words, add details and elaborate on the topic.
  • Draw a conclusion. Making a final point about your thesis can add several paragraphs to your piece.
  • Minimize pronouns. Use specific names whenever possible to extend your paper, but avoid getting wordy.
  • Support your ideas. Add relevant quotations, statistics, or real-life examples to lengthen your essay.
  • Incorporate a concluding sentence in each paragraph. This will help you structure the paper as well as increase the number of words.
  • Add transitional phrases. This will extend the text and make it flow better.
  • Provide background information. Giving more details on the subject will help you increase the word count.

Whenever you work on a writing assignment, you can improve the outcome with the help of our study tools:

  • Use our free plagiarism checker to ensure your writing is 100% unique.
  • Run your text through an essay reader to ensure it has a good flow.
  • Check the results with our AI detector for essays for free.

Now, you know all about generating longer texts and extending essays with or without AI. Here’s a quick recap:

  • Write specific, detailed instructions.
  • Request the algorithm to provide examples.
  • Mention the desired word count in your query.
  • Formulate the request as an open-ended question.
  • Ask the chatbot to act as a specific personality.
  • Break down the prompt into several smaller ones.
  • Use follow-up requests to elaborate or continue writing.

And once your sample essay is ready, you can edit and improve it. Want to know how? Check out our guide on using ChatGPT to edit essays .

Remember to only use AI as assistance! Artificial intelligence can be a great helper for those who want to improve their work rather than generate entire papers. You can always use bots ethically and without breaking the school’s rules.

If you need help to make your essay reach the required word count, check out IvyPanda essay extender or other tools suggested in this article. Good luck with your future projects!

Recommended for you:

  • How to Use ChatGPT for Research & Paper Writing: Prompts & Ideas
  • Best AI Tools for Writing & Editing Essays
  • How to Make AI-Generated Text Undetectable
  • Four Tricks to Get Around ChatGPT’s Character Limit: Lifehacker
  • Do ChatGPT Responses Have a Character or Word Limit?: MakeUseOf
  • Getting Started with Prompts for Text-Based Generative AI Tools: Harvard University
  • Making Your Paper Longer: George Mason University
  • The 7 Types of Essays Every Student Needs to Know: Grammarly
  • 5 Mistakes I Learned to Avoid When Working with ChatGPT: Entrepreneur
  • How to Make ChatGPT Write Longer: Nerds Chalk
  • Share via Facebook
  • Share via X
  • Share via LinkedIn
  • Share via email

By clicking "Post Comment" you agree to IvyPanda’s Privacy Policy and Terms and Conditions . Your posts, along with your name, can be seen by all users.

essay bot chatgpt

ChatGPT essay writing: How to talk to students about AI and integrity

Christine Lee

By completing this form, you agree to Turnitin's Privacy Policy . Turnitin uses the information you provide to contact you with relevant information. You may unsubscribe from these communications at any time.

ChatGPT essay writing is a hot topic, especially in the realm of teaching and learning where it can impact academic integrity. Should teachers incorporate ChatGPT and AI writing into their teaching curriculum? Or should AI writing be entirely banned and eschewed? The teacher response is incredibly bifurcated , with those who are using it or thinking of using it to facilitate early drafts. And then in the other camp, there are those who don’t want to use it at all and see any use of AI writing as a form of misconduct.

We are at the point of no return when it comes to AI writing. Companies are using it for blog writing and other purposes. Newspapers have long used AI writing. Consequently, AI writing may become a skill students need to enter the workplace. At the same time, AI writing misuse—when students use AI writing and represent it as their own original work—is a real concern within education.

To that end, we want to focus not on AI writing but on AI writing misuse, beginning with what ChatGPT is, what students understand about ChatGPT, discussing concerns around ChatGPT misuse, and how to then talk to students about AI writing and academic integrity.

OpenAI, an AI and research company, launched ChatGPT on November 30, 2022 and became the fastest growing consumer app in history , corralling 100 million active users within two months of availability. In comparison, TikTok took nine months to hit that number of users; Instagram, two and a half years ( Ortiz, 2023 ).

ChatGPT is a natural language processing tool powered by AI that allows users to interact with a chatbot using human-like discussion. It can answer questions in human language and compose emails, essays, and code. ChatGPT runs on a language model architecture called the Generative Pre-trained Transformer (the “GPT” in ChatGPT). An interesting tidbit is that ChatGPT only has access to information up to 2021, compared to Google, which has access to the latest data.

It’s important not to make assumptions and instead know what it is your students understand about ChatGPT before opening a discussion about AI writing. Students are extremely aware of ChatGPT’s existence; and many have a very astute grasp of AI writing and its intersection with academic integrity.

The New York Times asked students to respond with their thoughts about ChatGPT , and student responses were insightful and very much a reflection of the discussions educators are also having. They, like educators, acknowledged ChatGPT as a powerful and sometimes inaccurate tool, expressing concerns about its impact on learning should students rely too much on AI writing. “One of my biggest worries is that I would rely too much on these tools and lose the capacity for critical and creative thought,” states one student. “I personally want to learn how to communicate myself clearly and to find my own distinctive voice as a student. If I always rely on ChatGPT to generate material for me, I might not be challenged to improve as a writer” ( Faris ).

Some students said that AI writing is flat-out a form of cheating. And some students embraced AI writing, stating, “If students are never taught about, and never learn how to operate ChatGPT in their schools, they will be unprepared for their life ahead, which will be filled with writing AIs” ( Whit, Byfield, MA ).

The above reflects the results of a survey of K-12 teachers and students aged 12-17 conducted by Impact Research for the Walton Family Foundation . The results from both groups were not dissimilar. For instance, the survey found that “Teachers and students agree ChatGPT will be important to incorporate into schooling. Two-thirds of students (65%) and three-quarters of teachers (76%) agree that integrating ChatGPT for schools will be important for the future” ( Impact Research, 2023 ).

Another survey by Intelligent also falls in line with The New York Times student insights, stating, “Three-quarters of students who have used ChatGPT for homework say it is ‘somewhat’ (46%) or ‘definitely’ (29%) cheating. These numbers increase when including students who say they are familiar with ChatGPT but haven’t used it themselves. With this group included, 80% say it is ‘somewhat’ (48%) or ‘definitely’ (32%) cheating.”

And yet, “When asked how popular they believe the use of the tool is among other students, 76% say its use is ‘somewhat’ (50%) or ‘very’ (26%) popular.” In other words, even though they believe AI writing use is a form of misconduct, students use it anyway ( Intelligent, 2023 ).

Bottom line? Students are ready to talk about ChatGPT, AI writing, and academic integrity. They have awareness of ChatGPT and their own preliminary thoughts; it’s up to educators to bring nuance and clarity about ChatGPT and academic integrity and define AI writing misuse.

As with all academic integrity concerns, it is important to make your students feel seen and supported in a conversation about AI writing and ChatGPT. Research has shown that students who feel connected to their teacher are less likely to cheat ( Orosz, Tóth-Király, Böthe, Kusztor, Kovács, & Jánvári 2015 ). Ergo, how a discussion about AI writing and ChatGPT use and misuse is executed is an opportunity to strengthen student-teacher relationships, deepen bonds to academic integrity, and sharpen understanding of AI writing and ChatGPT use. A few key points to keep in mind as you discuss ChatGPT essay writing with your students include:

  • Center students
  • Create a safe space
  • Offer clarity and nuance
  • Provide room for reflection

One of the most straightforward ways to begin discussing ChatGPT is to ask your students what they know or to have them write you a letter about what they know and what their concerns and questions may be. This letter, for the record, can even be anonymous to engender more honesty. Centering their established understanding of AI writing and addressing their questions is one effective way to begin a discussion about AI writing, ChatGPT, and academic integrity. Some students may still want an introduction to the concept, and of course, educators should share what they know, but by interviewing students first, engagement can increase.

The topic of AI writing and ChatGPT within education is so heated that students may feel it is entirely taboo or too risky to discuss. Create a safe space around an AI writing conversation by having students discuss questions and opinions about ChatGPT and AI writing within small groups before sharing within a larger classroom discussion (this way, it’s harder to pinpoint who said what). Another way to create a safe space might be to ask students to anonymously submit questions about AI writing and ChatGPT on slips of paper beforehand.

It is also safer to discuss a topic that is presented more objectively. Perhaps, too, have students respond to a third-party journalism article on AI writing so that any insights are a reaction to words written by someone they do not know.

Before discussion begins, provide ground rules or guidelines , whether you provide them or invite the class to form them. They can include rules about courtesy and respect and safety, such as critiquing ideas and not people, and allowing everyone a turn to speak. Having ground rules prior to discussion gives everyone an opportunity to refer back to them should things get tense.

Focusing on how to properly use AI writing generators is the crux of how to talk to students about AI writing, ChatGPT, and academic integrity. While it’s tempting to paint a black and white picture of AI writing use or ban it altogether, the reality is that AI writing and ChatGPT are a large presence that should be addressed head on and with nuance. The reality, too, is that students will be eventually faced with AI writing use; providing them with tools and knowledge to understand what constitutes misuse versus valid use of ChatGPT is critical to lifelong learning and academic integrity.

While a reliance on AI writing tools can impact critical thinking, talking about AI writing tools has incredible space for critical thinking and analysis. This is also an opportunity to check out resources on AI writing and academic integrity , including a guide to approaching AI-generated text . What is its impact on learning, both positive and negative? When AI writing is prevalent outside of academia—for instance, in journalism—what is the difference between use and misuse? How is it being used properly? How is it being used improperly? These questions and more can lead to a deeper understanding of how, when, where, and why to use or not use AI writing tools.

Reflection is an important part of learning. Consider having students write their reflections on the class discussion, whether anonymously or as a graded assignment, so that different types of learners have a chance to synthesize class discussion and demonstrate their knowledge. This is, coincidentally, an opportunity for educators to have insights on what exactly students felt were the key takeaways from a class discussion on AI writing. The assignment can focus on class discussion points (which coincidentally, makes it very difficult for ChatGPT misuse).

This summary reflection can also be an opportunity for students to express how they felt about the discussion or to share any questions that remain unanswered for them.

While a discussion with students about AI writing and integrity is not something that can be controlled to the minute, it is important to have discussion points. And those discussion points will center around concerns and benefits around ChatGPT use in education. The following are some talking points you may want to consider.

According to the National Education Association, “The biggest concern is cheating. More than one-fourth of teachers have caught students using the chatbot to cheat, the Study.com survey found. An influx of ChatGPT generated work has left many educators spending more time checking for AI plagiarism and revamping lesson plans to be ‘AI-proof’” ( Blose, 2023 ).

A discussion with students may involve asking them if they agree with this concern and what they feel “AI-proof” assignments might look like.

AI writing misconduct is muddied by how it is used in education versus industry. The most analogous example of this conflict is in software development and programming.

In software development, it is standard procedure for engineers to work in collaboration with others. It is also standard procedure to use the code of other engineers made available on Github or via open source software without attribution. On the other hand, when students in software engineering courses use others’ source code, it is called programming plagiarism and in the case of collaboration for an assignment that is meant for individual assessment, collusion.

In both cases, collaboration and using shared resources aren’t “bad things,” depending on the context, particularly in industry. AI writing, when not disclosed and when students represent it as their own, can often qualify as misconduct just as when software engineering students use an open repository like Github for answers to their individual assignments.

And unlike contract cheating , there are viable use cases for AI writing, such as when teachers incorporate it into their curriculum and students use it in a way that is transparent.

The difference between journalists using AI writing and students using AI writing, of course, is that students are still learning and need to demonstrate their understanding to instructors; AI writing in classrooms, when used as a shortcut solution without transparency, impacts learning and thus qualifies as misuse.

It is important to highlight that students engage in learning; any AI writing use should be disclosed. Bottom line: if students are using AI writing tools and claiming that work as their own individual, original work without attribution, that may be a case of misuse and misconduct.

In the early weeks and days following the launch of OpenAI’s ChatGPT, many schools throughout the world, from the United States to Australia to the UK banned ChatGPT use outright.

This is a discussion point that provides an opportunity to discuss nuances in AI writing use. And it’s a way to discuss AI writing in a way that isn’t as personal and instead, more academic. We now see that AI writing tool bans aren’t effective; both students and teachers have slowly come to accept ChatGPT as having useful potential. What might those positive uses be?

AI writing can help students with brainstorming and structure. It can help those who are learning a new language like English, overcome language barriers while absorbing new concepts, because AI writing tools can translate content for students. Discussing ways in which AI writing can harm or benefit learning gives students ownership in the discussion and helps them understand AI writing’s impact on learning more deeply.

But even when AI writing tools can be used, a reliance on ChatGPT can shortcut learning.

One way to emphasize the shortcomings of ChatGPT misuse is to ask it the same questions you ask your students within discussion and show how their discussion was more nuanced and detailed and comprehensive. Here, we asked ChatGPT about impacts on learning (yes, we used ChatGPT). This is ChatGPT’s response:

“ChatGPT, as a language model, can have several impacts on learning:

1. Information Access: ChatGPT has access to a vast amount of knowledge and information, making it a valuable tool for learning. It can provide quick and accurate answers to questions, helping users gain knowledge on a wide range of topics.

2. Learning Assistance: ChatGPT can act as a learning assistant, providing explanations, examples, and clarifications. It can help users understand complex concepts, solve problems, and guide them through the learning process.

3. Personalized Learning: ChatGPT can adapt to individual learners' needs and preferences. It can provide tailored recommendations, resources, and study materials based on a user's specific requirements, making the learning experience more personalized and effective.

4. Language Practice: ChatGPT can engage in conversation and provide language practice opportunities. Users can improve their writing, speaking, and communication skills by interacting with ChatGPT and receiving feedback on their language usage.

5. Creative Exploration: ChatGPT's ability to generate text can spark creativity and encourage exploration. Users can brainstorm ideas, discuss possibilities, and receive suggestions, fostering a more innovative and open-minded approach to learning.

6. Collaboration and Peer Learning: ChatGPT can facilitate collaboration among learners. Multiple users can interact with ChatGPT simultaneously, creating a shared learning environment where they can exchange ideas, discuss concepts, and learn from one another.

7. Continuous Learning: ChatGPT can be continuously updated with new information, ensuring that learners have access to the most up-to-date knowledge. It can keep up with emerging trends, advancements, and discoveries, providing users with the latest insights and developments.

It's important to note that while ChatGPT can be a valuable learning tool, it should not be seen as a replacement for traditional forms of learning or human instructors. Its effectiveness depends on how it is utilized, and it should be used in conjunction with other learning resources and strategies for a well-rounded learning experience.”

Note the vague and basic language, and lack of examples, let alone ChatGPT’s self-promotion; not only is the content self-laudatory without criticism, it doesn’t center learning or students in its response. Furthermore, the claim that ChatGPT has access to the “most up-to-date knowledge,” is questionable, as various sources report that ChatGPT only has access to information up to 2021 (it is now 2023). Even if Microsoft, as of late March 2023, integrated Bing into ChatGPT , this is an evolving situation.

Rewriting this is imperative. Here’s the list again, written by a human (me).

ChatGPT has access to many information venues, mostly via the internet and has the potential to provide a breadth of answers; however, ChatGPT is dependent on human inputs and resources, so it can also be flawed and thereby further misinformation. If students are fully reliant on ChatGPT and AI writing, they may walk away with erroneous data that goes against the goals of education. It is important to know that ChatGPT is reliant on the average or median of a data set and if that data is biased, then the output is not based on facts but on opinions.

According to neaToday, ChatGPT’s own website states that “ChatGPT occasionally generates misinformation, untimely and biased responses. The program is only as knowledgeable as the information it has been introduced to and trained with. Even creators acknowledge that the program is not a credible source of factual information and should not be treated as an academic source. Many teachers worry that ChatGPT will make teaching and learning—particularly writing assignments— more formulaic” ( Blose, 2023 ).

Particularly for students who are learning a new language, ChatGPT can provide scaffolding to write in a new language; students can practice, receive feedback, and ask for help from ChatGPT. AI writing tools can help students express their ideas while bypassing language barriers. At the same time, this scaffolding must not be a replacement for learning, because students will eventually have to have an awareness, if not a deep understanding, of all components of writing.

ChatGPT may be useful for the brainstorming process; however, there is a limit to its output and it is up to the student to come up with their own analysis and introduce their original ideas. Furthermore, ChatGPT is intelligent, but can misunderstand context, leading to incorrect output. Again, it is up to the user to discern whether or not the output is correct or offers new insights. That said, ChatGPT is very good at providing structure to arguments and can help students formulate their narratives.

ChatGPT is available 24/7 whereas teachers cannot be available at all hours. When a student has a question, ChatGPT can be a resource at any time. Furthermore, students can formulate questions according to their interests and ChatGPT can then examine that interest further. Even when the output might be cursory, this helps students ask “good questions,” and fine-tune their thinking in different ways.

Students can become reliant on technology and “skip” important steps in the learning process. In the example of calculators, students need to be able to add and subtract and perform the functions of a calculator; a calculator, in sum, is not a substitute for knowledge but an aid. Similarly, ChatGPT can provide scaffolding, but it shouldn’t replace learning.

Critical thinking is an important learning outcome. When AI writing tools and ChatGPT essay writing are misused extensively, students may not be able to think for themselves. This concern is a great one. The Brookings Institution addresses this issue, adding, “More centrally, however, is that the bot is more of a synthesizer than a critical thinker. It would do well on a compare-and-contrast essay, but is less able to create a unique thesis and to defend that thesis” ( Hirsh-Pasek & Blinkoff, 2023 ).

Because ChatGPT can produce whole essays, and when students don’ t disclose their use of AI writing tools and don’t revise ChatGPT output extensively, it is not only a lack of original thought but academic dishonesty and misconduct. This is one of the most significant concerns surrounding ChatGPT and the reason that AI writing detection tools are in development. Without writing practice, students don’t have the opportunity to develop their authorial voice and writing confidence. Providing a space to “fail safely” is important and a way to mitigate AI writing misuse. Furthermore, ChatGPT’s writing output is less than engaging and in many ways, vague and passive; it isn’t the optimal model from which students should learn.

When ChatGPT is used as a shortcut solution, it can negatively impact student learning outcomes for myriad reasons. Students may learn incorrect facts, and when dependent on ChatGPT, not advance to original and critical thinking. But when used correctly, it can aid students in learning, much like tools like calculators.

Without advocating AI writing tools as a blanket solution for all learning, how do you communicate positive impacts of ChatGPT to students?

Students understand that ChatGPT essay writing is here to stay; and that it is a skill they will most likely be asked to use in the workplace after their academic journey. By highlighting the nuances of positive AI writing use, it is easier to help students understand how AI writing can help them learn (but also avoid misconduct).

The unique situation of students today is that they experienced the launch of ChatGPT simultaneously with their teachers. Some educators felt that ChatGPT would severely compromise the existence of essays as an assessment format or the entire way students are taught. As a result, students witnessed the initial panic from the education sector, with some school districts banning AI writing tools outright. In a humorous example of this panic, one person on Reddit joked that ChatGPT will bring typewriters back as a writing platform ( Fonky_Fesh, 2022 ).

But months out, educators are reevaluating what AI writing tools mean for the educational journey. The unique situation of students today, too, is that they can be a part of this discussion.

While AI writing tools aren’t undermining education as much as previously imagined, it will make assessments more thoughtful. According to an Insider article, Dr. Leah Henrickson, a lecturer at the University of Leeds, “thinks that artificial intelligence, if used carefully, might even make education fairer.” Henrickson states that the University of Leeds is also “looking at modifying its assessments in reaction to the rise in artificial intelligence. It hopes to focus more on critical analysis and judgement [sic]—a human skill— rather than straightforward information retention, which a chatbot like ChatGPT can easily replicate” ( Stacey, 2023 ).

The goal of education is to help students understand concepts and come up with their own original thoughts so that they can then enter the global marketplace of ideas . And if assessments move towards measuring a depth of knowledge, then we are all in a better place.

According to The New York Times, one student states, “A teacher at my school recently asked her class to use ChatGPT to write papers on the novel they were reading in class. The students also wrote their own papers, and compared the results. I found this teaching method to be extremely accommodating and productive. Rather than framing ChatGPT as a way to cheat, and therefore encouraging students to secretly use the forbidden program, teachers can show their students how to use it to their advantage, while still keeping their own original ideas. In today’s world, technology is quickly becoming more intelligent, but I don’t think we have to fear it” ( Devin ).

By modeling proper AI writing tool use within a classroom, students can learn how best to use tools like ChatGPT.

ChatGPT provides opportunities to make the writing process more transparent; for instance, using it as a way for students to critique ChatGPT output helps them refine their reviewing skills. Having students revise ChatGPT-produced first drafts gives students direct access to the revision process and to demystify revision.

Emily Donahoe, a writing tutor and educational developer at the University of Mississippi, says that “ChatGPT could help teachers shift away from an excessive focus on final results,” and now “asks her students to use ChatGPT to generate an argument and then had them annotate it according to how effective they thought the argument was for a specific audience. Then they turned in a rewrite based on their criticism” ( Heaven, 2023 ).

Using ChatGPT to provide more insights into writing can help teachers communicate to students how best to use AI writing tools.

ChatGPT can help language learners because it has the ability to translate assignments into a student’s native language. By prompting ChatGPT with a simple “translate this reading into Korean,” educators can bridge learning gaps ( Blose, 2023 ).

Furthermore, because ChatGPT engages in conversation, learners can practice their grammar and syntax by conversing with ChatGPT, which can provide feedback to increase language proficiency. In their Medium post, Michael King states, “I started testing ChatGPT as a German tutor, I quickly realized that it was much more than just a source of information. ChatGPT’s unique ability to actively test my learned skills and provide suggestions for improvement made it a true game-changer.”

ChatGPT’s impact on language learning has many more benefits, including instantaneous feedback, personalized learning, and increasing exposure to the target language ( Cai, 2023 ). Using ChatGPT as an actual learning tool can help students understand its benefits while becoming aware of its pitfalls.

Students come to our classrooms with an awareness of AI writing tools. While many students comprehend that AI writing can be misused, it’s important to define the difference between proper and improper use of tools like ChatGPT. Having a discussion about learning and the ways in which ChatGPT can help or inhibit the ways in which students absorb information can highlight the intersection of AI writing tools and academic integrity.

Students know how to use new AI writing tools. And what they need to learn is why ChatGPT may result in lower grades. What they need is to be in synchronization with the learning journey and educator goals. Having an in-depth discussion that is nuanced helps clarify the gray areas regarding AI writing tool use; for instance, that while used in professional writing, they are still learning and need to produce their own original writing, even if ChatGPT might be used to generate a first draft.

Pedagogy, too, is evolving. Students are at an amazing flex point in this point in history and they should know they are privileged to be a part of the conversation. And that it is going to change the future, including the ways in which knowledge is measured.

The Brookings Institution quotes Adam Stevens as saying, “ChatGPT is only a threat if our education system continues to ‘pursue rubric points and not knowledge.’ It is critical for all educators to follow their colleague’s example…The old education model in which teachers deliver information to later be condensed and repeated will not prepare our students for success in the classroom—or the jobs of tomorrow. We should allow that model to die a peaceful death. Used in the right way, ChatGPT can be a friend to the classroom and an amazing tool for our students, not something to be feared” ( Hirsh-Pasek & Blinkoff, Brookings Institute, 2023 ).

Peer Reviewed

GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation

Article metrics.

CrossRef

CrossRef Citations

Altmetric Score

PDF Downloads

Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI. They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing. Google Scholar easily locates and lists these questionable papers alongside reputable, quality-controlled research. Our analysis of a selection of questionable GPT-fabricated scientific papers found in Google Scholar shows that many are about applied, often controversial topics susceptible to disinformation: the environment, health, and computing. The resulting enhanced potential for malicious manipulation of society’s evidence base, particularly in politically divisive domains, is a growing concern.

Swedish School of Library and Information Science, University of Borås, Sweden

Department of Arts and Cultural Sciences, Lund University, Sweden

Division of Environmental Communication, Swedish University of Agricultural Sciences, Sweden

essay bot chatgpt

Research Questions

  • Where are questionable publications produced with generative pre-trained transformers (GPTs) that can be found via Google Scholar published or deposited?
  • What are the main characteristics of these publications in relation to predominant subject categories?
  • How are these publications spread in the research infrastructure for scholarly communication?
  • How is the role of the scholarly communication infrastructure challenged in maintaining public trust in science and evidence through inappropriate use of generative AI?

research note Summary

  • A sample of scientific papers with signs of GPT-use found on Google Scholar was retrieved, downloaded, and analyzed using a combination of qualitative coding and descriptive statistics. All papers contained at least one of two common phrases returned by conversational agents that use large language models (LLM) like OpenAI’s ChatGPT. Google Search was then used to determine the extent to which copies of questionable, GPT-fabricated papers were available in various repositories, archives, citation databases, and social media platforms.
  • Roughly two-thirds of the retrieved papers were found to have been produced, at least in part, through undisclosed, potentially deceptive use of GPT. The majority (57%) of these questionable papers dealt with policy-relevant subjects (i.e., environment, health, computing), susceptible to influence operations. Most were available in several copies on different domains (e.g., social media, archives, and repositories).
  • Two main risks arise from the increasingly common use of GPT to (mass-)produce fake, scientific publications. First, the abundance of fabricated “studies” seeping into all areas of the research infrastructure threatens to overwhelm the scholarly communication system and jeopardize the integrity of the scientific record. A second risk lies in the increased possibility that convincingly scientific-looking content was in fact deceitfully created with AI tools and is also optimized to be retrieved by publicly available academic search engines, particularly Google Scholar. However small, this possibility and awareness of it risks undermining the basis for trust in scientific knowledge and poses serious societal risks.

Implications

The use of ChatGPT to generate text for academic papers has raised concerns about research integrity. Discussion of this phenomenon is ongoing in editorials, commentaries, opinion pieces, and on social media (Bom, 2023; Stokel-Walker, 2024; Thorp, 2023). There are now several lists of papers suspected of GPT misuse, and new papers are constantly being added. 1 See for example Academ-AI, https://www.academ-ai.info/ , and Retraction Watch, https://retractionwatch.com/papers-and-peer-reviews-with-evidence-of-chatgpt-writing/ . While many legitimate uses of GPT for research and academic writing exist (Huang & Tan, 2023; Kitamura, 2023; Lund et al., 2023), its undeclared use—beyond proofreading—has potentially far-reaching implications for both science and society, but especially for their relationship. It, therefore, seems important to extend the discussion to one of the most accessible and well-known intermediaries between science, but also certain types of misinformation, and the public, namely Google Scholar, also in response to the legitimate concerns that the discussion of generative AI and misinformation needs to be more nuanced and empirically substantiated  (Simon et al., 2023).

Google Scholar, https://scholar.google.com , is an easy-to-use academic search engine. It is available for free, and its index is extensive (Gusenbauer & Haddaway, 2020). It is also often touted as a credible source for academic literature and even recommended in library guides, by media and information literacy initiatives, and fact checkers (Tripodi et al., 2023). However, Google Scholar lacks the transparency and adherence to standards that usually characterize citation databases. Instead, Google Scholar uses automated crawlers, like Google’s web search engine (Martín-Martín et al., 2021), and the inclusion criteria are based on primarily technical standards, allowing any individual author—with or without scientific affiliation—to upload papers to be indexed (Google Scholar Help, n.d.). It has been shown that Google Scholar is susceptible to manipulation through citation exploits (Antkare, 2020) and by providing access to fake scientific papers (Dadkhah et al., 2017). A large part of Google Scholar’s index consists of publications from established scientific journals or other forms of quality-controlled, scholarly literature. However, the index also contains a large amount of gray literature, including student papers, working papers, reports, preprint servers, and academic networking sites, as well as material from so-called “questionable” academic journals, including paper mills. The search interface does not offer the possibility to filter the results meaningfully by material type, publication status, or form of quality control, such as limiting the search to peer-reviewed material.

To understand the occurrence of ChatGPT (co-)authored work in Google Scholar’s index, we scraped it for publications, including one of two common ChatGPT responses (see Appendix A) that we encountered on social media and in media reports (DeGeurin, 2024). The results of our descriptive statistical analyses showed that around 62% did not declare the use of GPTs. Most of these GPT-fabricated papers were found in non-indexed journals and working papers, but some cases included research published in mainstream scientific journals and conference proceedings. 2 Indexed journals mean scholarly journals indexed by abstract and citation databases such as Scopus and Web of Science, where the indexation implies journals with high scientific quality. Non-indexed journals are journals that fall outside of this indexation. More than half (57%) of these GPT-fabricated papers concerned policy-relevant subject areas susceptible to influence operations. To avoid increasing the visibility of these publications, we abstained from referencing them in this research note. However, we have made the data available in the Harvard Dataverse repository.

The publications were related to three issue areas—health (14.5%), environment (19.5%) and computing (23%)—with key terms such “healthcare,” “COVID-19,” or “infection”for health-related papers, and “analysis,” “sustainable,” and “global” for environment-related papers. In several cases, the papers had titles that strung together general keywords and buzzwords, thus alluding to very broad and current research. These terms included “biology,” “telehealth,” “climate policy,” “diversity,” and “disrupting,” to name just a few.  While the study’s scope and design did not include a detailed analysis of which parts of the articles included fabricated text, our dataset did contain the surrounding sentences for each occurrence of the suspicious phrases that formed the basis for our search and subsequent selection. Based on that, we can say that the phrases occurred in most sections typically found in scientific publications, including the literature review, methods, conceptual and theoretical frameworks, background, motivation or societal relevance, and even discussion. This was confirmed during the joint coding, where we read and discussed all articles. It became clear that not just the text related to the telltale phrases was created by GPT, but that almost all articles in our sample of questionable articles likely contained traces of GPT-fabricated text everywhere.

Evidence hacking and backfiring effects

Generative pre-trained transformers (GPTs) can be used to produce texts that mimic scientific writing. These texts, when made available online—as we demonstrate—leak into the databases of academic search engines and other parts of the research infrastructure for scholarly communication. This development exacerbates problems that were already present with less sophisticated text generators (Antkare, 2020; Cabanac & Labbé, 2021). Yet, the public release of ChatGPT in 2022, together with the way Google Scholar works, has increased the likelihood of lay people (e.g., media, politicians, patients, students) coming across questionable (or even entirely GPT-fabricated) papers and other problematic research findings. Previous research has emphasized that the ability to determine the value and status of scientific publications for lay people is at stake when misleading articles are passed off as reputable (Haider & Åström, 2017) and that systematic literature reviews risk being compromised (Dadkhah et al., 2017). It has also been highlighted that Google Scholar, in particular, can be and has been exploited for manipulating the evidence base for politically charged issues and to fuel conspiracy narratives (Tripodi et al., 2023). Both concerns are likely to be magnified in the future, increasing the risk of what we suggest calling evidence hacking —the strategic and coordinated malicious manipulation of society’s evidence base.

The authority of quality-controlled research as evidence to support legislation, policy, politics, and other forms of decision-making is undermined by the presence of undeclared GPT-fabricated content in publications professing to be scientific. Due to the large number of archives, repositories, mirror sites, and shadow libraries to which they spread, there is a clear risk that GPT-fabricated, questionable papers will reach audiences even after a possible retraction. There are considerable technical difficulties involved in identifying and tracing computer-fabricated papers (Cabanac & Labbé, 2021; Dadkhah et al., 2023; Jones, 2024), not to mention preventing and curbing their spread and uptake.

However, as the rise of the so-called anti-vaxx movement during the COVID-19 pandemic and the ongoing obstruction and denial of climate change show, retracting erroneous publications often fuels conspiracies and increases the following of these movements rather than stopping them. To illustrate this mechanism, climate deniers frequently question established scientific consensus by pointing to other, supposedly scientific, studies that support their claims. Usually, these are poorly executed, not peer-reviewed, based on obsolete data, or even fraudulent (Dunlap & Brulle, 2020). A similar strategy is successful in the alternative epistemic world of the global anti-vaccination movement (Carrion, 2018) and the persistence of flawed and questionable publications in the scientific record already poses significant problems for health research, policy, and lawmakers, and thus for society as a whole (Littell et al., 2024). Considering that a person’s support for “doing your own research” is associated with increased mistrust in scientific institutions (Chinn & Hasell, 2023), it will be of utmost importance to anticipate and consider such backfiring effects already when designing a technical solution, when suggesting industry or legal regulation, and in the planning of educational measures.

Recommendations

Solutions should be based on simultaneous considerations of technical, educational, and regulatory approaches, as well as incentives, including social ones, across the entire research infrastructure. Paying attention to how these approaches and incentives relate to each other can help identify points and mechanisms for disruption. Recognizing fraudulent academic papers must happen alongside understanding how they reach their audiences and what reasons there might be for some of these papers successfully “sticking around.” A possible way to mitigate some of the risks associated with GPT-fabricated scholarly texts finding their way into academic search engine results would be to provide filtering options for facets such as indexed journals, gray literature, peer-review, and similar on the interface of publicly available academic search engines. Furthermore, evaluation tools for indexed journals 3 Such as LiU Journal CheckUp, https://ep.liu.se/JournalCheckup/default.aspx?lang=eng . could be integrated into the graphical user interfaces and the crawlers of these academic search engines. To enable accountability, it is important that the index (database) of such a search engine is populated according to criteria that are transparent, open to scrutiny, and appropriate to the workings of  science and other forms of academic research. Moreover, considering that Google Scholar has no real competitor, there is a strong case for establishing a freely accessible, non-specialized academic search engine that is not run for commercial reasons but for reasons of public interest. Such measures, together with educational initiatives aimed particularly at policymakers, science communicators, journalists, and other media workers, will be crucial to reducing the possibilities for and effects of malicious manipulation or evidence hacking. It is important not to present this as a technical problem that exists only because of AI text generators but to relate it to the wider concerns in which it is embedded. These range from a largely dysfunctional scholarly publishing system (Haider & Åström, 2017) and academia’s “publish or perish” paradigm to Google’s near-monopoly and ideological battles over the control of information and ultimately knowledge. Any intervention is likely to have systemic effects; these effects need to be considered and assessed in advance and, ideally, followed up on.

Our study focused on a selection of papers that were easily recognizable as fraudulent. We used this relatively small sample as a magnifying glass to examine, delineate, and understand a problem that goes beyond the scope of the sample itself, which however points towards larger concerns that require further investigation. The work of ongoing whistleblowing initiatives 4 Such as Academ-AI, https://www.academ-ai.info/ , and Retraction Watch, https://retractionwatch.com/papers-and-peer-reviews-with-evidence-of-chatgpt-writing/ . , recent media reports of journal closures (Subbaraman, 2024), or GPT-related changes in word use and writing style (Cabanac et al., 2021; Stokel-Walker, 2024) suggest that we only see the tip of the iceberg. There are already more sophisticated cases (Dadkhah et al., 2023) as well as cases involving fabricated images (Gu et al., 2022). Our analysis shows that questionable and potentially manipulative GPT-fabricated papers permeate the research infrastructure and are likely to become a widespread phenomenon. Our findings underline that the risk of fake scientific papers being used to maliciously manipulate evidence (see Dadkhah et al., 2017) must be taken seriously. Manipulation may involve undeclared automatic summaries of texts, inclusion in literature reviews, explicit scientific claims, or the concealment of errors in studies so that they are difficult to detect in peer review. However, the mere possibility of these things happening is a significant risk in its own right that can be strategically exploited and will have ramifications for trust in and perception of science. Society’s methods of evaluating sources and the foundations of media and information literacy are under threat and public trust in science is at risk of further erosion, with far-reaching consequences for society in dealing with information disorders. To address this multifaceted problem, we first need to understand why it exists and proliferates.

Finding 1: 139 GPT-fabricated, questionable papers were found and listed as regular results on the Google Scholar results page. Non-indexed journals dominate.

Most questionable papers we found were in non-indexed journals or were working papers, but we did also find some in established journals, publications, conferences, and repositories. We found a total of 139 papers with a suspected deceptive use of ChatGPT or similar LLM applications (see Table 1). Out of these, 19 were in indexed journals, 89 were in non-indexed journals, 19 were student papers found in university databases, and 12 were working papers (mostly in preprint databases). Table 1 divides these papers into categories. Health and environment papers made up around 34% (47) of the sample. Of these, 66% were present in non-indexed journals.

Indexed journals*534719
Non-indexed journals1818134089
Student papers4311119
Working papers532212
Total32272060139

Finding 2: GPT-fabricated, questionable papers are disseminated online, permeating the research infrastructure for scholarly communication, often in multiple copies. Applied topics with practical implications dominate.

The 20 papers concerning health-related issues are distributed across 20 unique domains, accounting for 46 URLs. The 27 papers dealing with environmental issues can be found across 26 unique domains, accounting for 56 URLs.  Most of the identified papers exist in multiple copies and have already spread to several archives, repositories, and social media. It would be difficult, or impossible, to remove them from the scientific record.

As apparent from Table 2, GPT-fabricated, questionable papers are seeping into most parts of the online research infrastructure for scholarly communication. Platforms on which identified papers have appeared include ResearchGate, ORCiD, Journal of Population Therapeutics and Clinical Pharmacology (JPTCP), Easychair, Frontiers, the Institute of Electrical and Electronics Engineer (IEEE), and X/Twitter. Thus, even if they are retracted from their original source, it will prove very difficult to track, remove, or even just mark them up on other platforms. Moreover, unless regulated, Google Scholar will enable their continued and most likely unlabeled discoverability.

Environmentresearchgate.net (13)orcid.org (4)easychair.org (3)ijope.com* (3)publikasiindonesia.id (3)
Healthresearchgate.net (15)ieee.org (4)twitter.com (3)jptcp.com** (2)frontiersin.org
(2)

A word rain visualization (Centre for Digital Humanities Uppsala, 2023), which combines word prominences through TF-IDF 5 Term frequency–inverse document frequency , a method for measuring the significance of a word in a document compared to its frequency across all documents in a collection. scores with semantic similarity of the full texts of our sample of GPT-generated articles that fall into the “Environment” and “Health” categories, reflects the two categories in question. However, as can be seen in Figure 1, it also reveals overlap and sub-areas. The y-axis shows word prominences through word positions and font sizes, while the x-axis indicates semantic similarity. In addition to a certain amount of overlap, this reveals sub-areas, which are best described as two distinct events within the word rain. The event on the left bundles terms related to the development and management of health and healthcare with “challenges,” “impact,” and “potential of artificial intelligence”emerging as semantically related terms. Terms related to research infrastructures, environmental, epistemic, and technological concepts are arranged further down in the same event (e.g., “system,” “climate,” “understanding,” “knowledge,” “learning,” “education,” “sustainable”). A second distinct event further to the right bundles terms associated with fish farming and aquatic medicinal plants, highlighting the presence of an aquaculture cluster.  Here, the prominence of groups of terms such as “used,” “model,” “-based,” and “traditional” suggests the presence of applied research on these topics. The two events making up the word rain visualization, are linked by a less dominant but overlapping cluster of terms related to “energy” and “water.”

essay bot chatgpt

The bar chart of the terms in the paper subset (see Figure 2) complements the word rain visualization by depicting the most prominent terms in the full texts along the y-axis. Here, word prominences across health and environment papers are arranged descendingly, where values outside parentheses are TF-IDF values (relative frequencies) and values inside parentheses are raw term frequencies (absolute frequencies).

essay bot chatgpt

Finding 3: Google Scholar presents results from quality-controlled and non-controlled citation databases on the same interface, providing unfiltered access to GPT-fabricated questionable papers.

Google Scholar’s central position in the publicly accessible scholarly communication infrastructure, as well as its lack of standards, transparency, and accountability in terms of inclusion criteria, has potentially serious implications for public trust in science. This is likely to exacerbate the already-known potential to exploit Google Scholar for evidence hacking (Tripodi et al., 2023) and will have implications for any attempts to retract or remove fraudulent papers from their original publication venues. Any solution must consider the entirety of the research infrastructure for scholarly communication and the interplay of different actors, interests, and incentives.

We searched and scraped Google Scholar using the Python library Scholarly (Cholewiak et al., 2023) for papers that included specific phrases known to be common responses from ChatGPT and similar applications with the same underlying model (GPT3.5 or GPT4): “as of my last knowledge update” and/or “I don’t have access to real-time data” (see Appendix A). This facilitated the identification of papers that likely used generative AI to produce text, resulting in 227 retrieved papers. The papers’ bibliographic information was automatically added to a spreadsheet and downloaded into Zotero. 6 An open-source reference manager, https://zotero.org .

We employed multiple coding (Barbour, 2001) to classify the papers based on their content. First, we jointly assessed whether the paper was suspected of fraudulent use of ChatGPT (or similar) based on how the text was integrated into the papers and whether the paper was presented as original research output or the AI tool’s role was acknowledged. Second, in analyzing the content of the papers, we continued the multiple coding by classifying the fraudulent papers into four categories identified during an initial round of analysis—health, environment, computing, and others—and then determining which subjects were most affected by this issue (see Table 1). Out of the 227 retrieved papers, 88 papers were written with legitimate and/or declared use of GPTs (i.e., false positives, which were excluded from further analysis), and 139 papers were written with undeclared and/or fraudulent use (i.e., true positives, which were included in further analysis). The multiple coding was conducted jointly by all authors of the present article, who collaboratively coded and cross-checked each other’s interpretation of the data simultaneously in a shared spreadsheet file. This was done to single out coding discrepancies and settle coding disagreements, which in turn ensured methodological thoroughness and analytical consensus (see Barbour, 2001). Redoing the category coding later based on our established coding schedule, we achieved an intercoder reliability (Cohen’s kappa) of 0.806 after eradicating obvious differences.

The ranking algorithm of Google Scholar prioritizes highly cited and older publications (Martín-Martín et al., 2016). Therefore, the position of the articles on the search engine results pages was not particularly informative, considering the relatively small number of results in combination with the recency of the publications. Only the query “as of my last knowledge update” had more than two search engine result pages. On those, questionable articles with undeclared use of GPTs were evenly distributed across all result pages (min: 4, max: 9, mode: 8), with the proportion of undeclared use being slightly higher on average on later search result pages.

To understand how the papers making fraudulent use of generative AI were disseminated online, we programmatically searched for the paper titles (with exact string matching) in Google Search from our local IP address (see Appendix B) using the googlesearch – python library(Vikramaditya, 2020). We manually verified each search result to filter out false positives—results that were not related to the paper—and then compiled the most prominent URLs by field. This enabled the identification of other platforms through which the papers had been spread. We did not, however, investigate whether copies had spread into SciHub or other shadow libraries, or if they were referenced in Wikipedia.

We used descriptive statistics to count the prevalence of the number of GPT-fabricated papers across topics and venues and top domains by subject. The pandas software library for the Python programming language (The pandas development team, 2024) was used for this part of the analysis. Based on the multiple coding, paper occurrences were counted in relation to their categories, divided into indexed journals, non-indexed journals, student papers, and working papers. The schemes, subdomains, and subdirectories of the URL strings were filtered out while top-level domains and second-level domains were kept, which led to normalizing domain names. This, in turn, allowed the counting of domain frequencies in the environment and health categories. To distinguish word prominences and meanings in the environment and health-related GPT-fabricated questionable papers, a semantically-aware word cloud visualization was produced through the use of a word rain (Centre for Digital Humanities Uppsala, 2023) for full-text versions of the papers. Font size and y-axis positions indicate word prominences through TF-IDF scores for the environment and health papers (also visualized in a separate bar chart with raw term frequencies in parentheses), and words are positioned along the x-axis to reflect semantic similarity (Skeppstedt et al., 2024), with an English Word2vec skip gram model space (Fares et al., 2017). An English stop word list was used, along with a manually produced list including terms such as “https,” “volume,” or “years.”

  • Artificial Intelligence
  • / Search engines

Cite this Essay

Haider, J., Söderström, K. R., Ekström, B., & Rödl, M. (2024). GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation. Harvard Kennedy School (HKS) Misinformation Review . https://doi.org/10.37016/mr-2020-156

  • / Appendix B

Bibliography

Antkare, I. (2020). Ike Antkare, his publications, and those of his disciples. In M. Biagioli & A. Lippman (Eds.), Gaming the metrics (pp. 177–200). The MIT Press. https://doi.org/10.7551/mitpress/11087.003.0018

Barbour, R. S. (2001). Checklists for improving rigour in qualitative research: A case of the tail wagging the dog? BMJ , 322 (7294), 1115–1117. https://doi.org/10.1136/bmj.322.7294.1115

Bom, H.-S. H. (2023). Exploring the opportunities and challenges of ChatGPT in academic writing: A roundtable discussion. Nuclear Medicine and Molecular Imaging , 57 (4), 165–167. https://doi.org/10.1007/s13139-023-00809-2

Cabanac, G., & Labbé, C. (2021). Prevalence of nonsensical algorithmically generated papers in the scientific literature. Journal of the Association for Information Science and Technology , 72 (12), 1461–1476. https://doi.org/10.1002/asi.24495

Cabanac, G., Labbé, C., & Magazinov, A. (2021). Tortured phrases: A dubious writing style emerging in science. Evidence of critical issues affecting established journals . arXiv. https://doi.org/10.48550/arXiv.2107.06751

Carrion, M. L. (2018). “You need to do your research”: Vaccines, contestable science, and maternal epistemology. Public Understanding of Science , 27 (3), 310–324. https://doi.org/10.1177/0963662517728024

Centre for Digital Humanities Uppsala (2023). CDHUppsala/word-rain [Computer software]. https://github.com/CDHUppsala/word-rain

Chinn, S., & Hasell, A. (2023). Support for “doing your own research” is associated with COVID-19 misperceptions and scientific mistrust. Harvard Kennedy School (HSK) Misinformation Review, 4 (3). https://doi.org/10.37016/mr-2020-117

Cholewiak, S. A., Ipeirotis, P., Silva, V., & Kannawadi, A. (2023). SCHOLARLY: Simple access to Google Scholar authors and citation using Python (1.5.0) [Computer software]. https://doi.org/10.5281/zenodo.5764801

Dadkhah, M., Lagzian, M., & Borchardt, G. (2017). Questionable papers in citation databases as an issue for literature review. Journal of Cell Communication and Signaling , 11 (2), 181–185. https://doi.org/10.1007/s12079-016-0370-6

Dadkhah, M., Oermann, M. H., Hegedüs, M., Raman, R., & Dávid, L. D. (2023). Detection of fake papers in the era of artificial intelligence. Diagnosis , 10 (4), 390–397. https://doi.org/10.1515/dx-2023-0090

DeGeurin, M. (2024, March 19). AI-generated nonsense is leaking into scientific journals. Popular Science. https://www.popsci.com/technology/ai-generated-text-scientific-journals/

Dunlap, R. E., & Brulle, R. J. (2020). Sources and amplifiers of climate change denial. In D.C. Holmes & L. M. Richardson (Eds.), Research handbook on communicating climate change (pp. 49–61). Edward Elgar Publishing. https://doi.org/10.4337/9781789900408.00013

Fares, M., Kutuzov, A., Oepen, S., & Velldal, E. (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources. In J. Tiedemann & N. Tahmasebi (Eds.), Proceedings of the 21st Nordic Conference on Computational Linguistics (pp. 271–276). Association for Computational Linguistics. https://aclanthology.org/W17-0237

Google Scholar Help. (n.d.). Inclusion guidelines for webmasters . https://scholar.google.com/intl/en/scholar/inclusion.html

Gu, J., Wang, X., Li, C., Zhao, J., Fu, W., Liang, G., & Qiu, J. (2022). AI-enabled image fraud in scientific publications. Patterns , 3 (7), 100511. https://doi.org/10.1016/j.patter.2022.100511

Gusenbauer, M., & Haddaway, N. R. (2020). Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Research Synthesis Methods , 11 (2), 181–217.   https://doi.org/10.1002/jrsm.1378

Haider, J., & Åström, F. (2017). Dimensions of trust in scholarly communication: Problematizing peer review in the aftermath of John Bohannon’s “Sting” in science. Journal of the Association for Information Science and Technology , 68 (2), 450–467. https://doi.org/10.1002/asi.23669

Huang, J., & Tan, M. (2023). The role of ChatGPT in scientific communication: Writing better scientific review articles. American Journal of Cancer Research , 13 (4), 1148–1154. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10164801/

Jones, N. (2024). How journals are fighting back against a wave of questionable images. Nature , 626 (8000), 697–698. https://doi.org/10.1038/d41586-024-00372-6

Kitamura, F. C. (2023). ChatGPT is shaping the future of medical writing but still requires human judgment. Radiology , 307 (2), e230171. https://doi.org/10.1148/radiol.230171

Littell, J. H., Abel, K. M., Biggs, M. A., Blum, R. W., Foster, D. G., Haddad, L. B., Major, B., Munk-Olsen, T., Polis, C. B., Robinson, G. E., Rocca, C. H., Russo, N. F., Steinberg, J. R., Stewart, D. E., Stotland, N. L., Upadhyay, U. D., & Ditzhuijzen, J. van. (2024). Correcting the scientific record on abortion and mental health outcomes. BMJ , 384 , e076518. https://doi.org/10.1136/bmj-2023-076518

Lund, B. D., Wang, T., Mannuru, N. R., Nie, B., Shimray, S., & Wang, Z. (2023). ChatGPT and a new academic reality: Artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology, 74 (5), 570–581. https://doi.org/10.1002/asi.24750

Martín-Martín, A., Orduna-Malea, E., Ayllón, J. M., & Delgado López-Cózar, E. (2016). Back to the past: On the shoulders of an academic search engine giant. Scientometrics , 107 , 1477–1487. https://doi.org/10.1007/s11192-016-1917-2

Martín-Martín, A., Thelwall, M., Orduna-Malea, E., & Delgado López-Cózar, E. (2021). Google Scholar, Microsoft Academic, Scopus, Dimensions, Web of Science, and OpenCitations’ COCI: A multidisciplinary comparison of coverage via citations. Scientometrics , 126 (1), 871–906. https://doi.org/10.1007/s11192-020-03690-4

Simon, F. M., Altay, S., & Mercier, H. (2023). Misinformation reloaded? Fears about the impact of generative AI on misinformation are overblown. Harvard Kennedy School (HKS) Misinformation Review, 4 (5). https://doi.org/10.37016/mr-2020-127

Skeppstedt, M., Ahltorp, M., Kucher, K., & Lindström, M. (2024). From word clouds to Word Rain: Revisiting the classic word cloud to visualize climate change texts. Information Visualization , 23 (3), 217–238. https://doi.org/10.1177/14738716241236188

Swedish Research Council. (2017). Good research practice. Vetenskapsrådet.

Stokel-Walker, C. (2024, May 1.). AI Chatbots Have Thoroughly Infiltrated Scientific Publishing . Scientific American. https://www.scientificamerican.com/article/chatbots-have-thoroughly-infiltrated-scientific-publishing/

Subbaraman, N. (2024, May 14). Flood of fake science forces multiple journal closures: Wiley to shutter 19 more journals, some tainted by fraud. The Wall Street Journal . https://www.wsj.com/science/academic-studies-research-paper-mills-journals-publishing-f5a3d4bc

The pandas development team. (2024). pandas-dev/pandas: Pandas (v2.2.2) [Computer software]. Zenodo. https://doi.org/10.5281/zenodo.10957263

Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science , 379 (6630), 313–313. https://doi.org/10.1126/science.adg7879

Tripodi, F. B., Garcia, L. C., & Marwick, A. E. (2023). ‘Do your own research’: Affordance activation and disinformation spread. Information, Communication & Society , 27 (6), 1212–1228. https://doi.org/10.1080/1369118X.2023.2245869

Vikramaditya, N. (2020). Nv7-GitHub/googlesearch [Computer software]. https://github.com/Nv7-GitHub/googlesearch

This research has been supported by Mistra, the Swedish Foundation for Strategic Environmental Research, through the research program Mistra Environmental Communication (Haider, Ekström, Rödl) and the Marcus and Amalia Wallenberg Foundation [2020.0004] (Söderström).

Competing Interests

The authors declare no competing interests.

The research described in this article was carried out under Swedish legislation. According to the relevant EU and Swedish legislation (2003:460) on the ethical review of research involving humans (“Ethical Review Act”), the research reported on here is not subject to authorization by the Swedish Ethical Review Authority (“etikprövningsmyndigheten”) (SRC, 2017).

This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided that the original author and source are properly credited.

Data Availability

All data needed to replicate this study are available at the Harvard Dataverse: https://doi.org/10.7910/DVN/WUVD8X

Acknowledgements

The authors wish to thank two anonymous reviewers for their valuable comments on the article manuscript as well as the editorial group of Harvard Kennedy School (HKS) Misinformation Review for their thoughtful feedback and input.

  • Installer /

An all-in-one app for the internet

Plus, in this week’s installer: the best new ps5 game in a while, a great new line of bags, the latest gopro, and much more..

By David Pierce , editor-at-large and Vergecast co-host with over a decade of experience covering consumer tech. Previously, at Protocol, The Wall Street Journal, and Wired.

Share this story

If you buy something from a Verge link, Vox Media may earn a commission. See our ethics statement.

Pictures of Reeder, Astro Bot, the DJI Neo, and Peak Design bags, on an Installer illustration.

Hi, friends! Welcome to Installer No. 51, your guide to the best and Verge -iest stuff in the world. (If you’re new here, welcome, I hope you love spy shows as much as I do, and also you can read all the old editions at the Installer homepage .) 

This week, I’ve been reading about cobalt miners and religious AI and the history of the Moleskine , listening to pop-punk Disney songs on repeat, finally starting The Sopranos (I know, I know), soundtracking my workday with the Earth.fm app , giving Tweek another whirl for simple task and calendar stuff, and spending too much time debating whether NFL Sunday Ticket is worth the money. It’s definitely not, but it also, like, kind of is.

I also have for you everybody’s favorite new PlayStation game, a new documentary about money in sports, two new cameras worth a look, and a terrific E Ink note-taking device. Oh, and apologies: I included the wrong link to my homescreen icon pack last week. (Luckily, the one I did link to is also very good.) Mine are actually from Nate Wren’s Lines pack .

Anyway, so many gadgets this week! Let’s dig in.

(As always, the best part of Installer is your ideas and tips. What are you into right now? What should everyone else be watching, reading, playing, cooking, downloading, or building this weekend? Tell me everything: [email protected] . And if you know someone else who might enjoy Installer , tell them to subscribe here .)

  • Reeder . It’s a podcast app, a YouTube queue, an RSS reader, a social feed, and a whole bunch more, all in one place. (And for Apple devices only.) It’s a reading app for feeds from all over the internet, it’s gorgeous, and as we talk about the fediverse and the future of the web, this is the kind of app that gets me really excited.
  • Slow Horses season 4 . I avoided this Apple TV Plus show for a while because I didn’t love the book, which is very well done but just kind of… slow. (I eventually soldiered through the first third of the first book, at which point it does start moving.) The show is not slow. It’s just funny and smart and exactly the right amount of action-packed.
  • Astro Bot . I promised myself I was going to wait until Black Friday to buy a PS5… but this new game, an adorable platformer that seems to have charmed absolutely everyone who tried it, is going to make me pull the trigger sooner. Seriously, people love this game.
  • The Remarkable Paper Pro . This thing is pure E Ink overkill , with an overengineered color display and a bunch of extremely fancy features. It’s like $800 for the whole setup including the keyboard, which is an awful lot, but this is a heck of a note-taking device.
  • The Money Game . A miniseries about the NIL revolution in college sports, which is actually a story about social media and the internet and the way that everybody, including athletes, struggles to balance life and content.
  • Circle to Search for music . Forget Gemini — Google’s most clever AI features are all happening through Circle to Search. Google’s song lookup tool is the best in the biz, and with Circle to Search and Android 15 , you can get at it from anywhere. 
  • The Peak Design Outdoor Line . Peak Design’s stuff is just always great, and the new line of rugged gear looks like a winner . (Don’t be thrown by the Kickstarter, it’s just how the company rolls for some reason. PD is legit.) I’ve recently become a sling convert for daily use, and I’m very into the new one here.
  • The GoPro Hero 13 Black . Every new GoPro is kind of the same: a little better in a bunch of ways but still a GoPro . And that’s fine! Adding some new lenses and improving transfer speeds goes a long way here, too.
  • The DJI Neo . The other nifty camera launch of the week : a simple selfie drone that follows you around, does cool camera moves, and lands in your hand. Getting real Snap Pixy vibes from this thing, but I think this one might be a hit. 
  • The PlugBug with Find My . My first reaction to the idea of a wall plug with Find My support was, just, why? But then I thought about how many hotels and coffee shops I’ve left chargers in, and I think this $70 gizmo might pay for itself pretty fast. Plus, Twelve South stuff rarely disappoints.

Group project

Over the last couple of weeks, I’ve asked you all to share your favorite lesser-known creators. This was, as always, at least partly selfish: I love my favorites on YouTube, TikTok, Instagram, and the rest, but my feeds are starting to feel a little static. I had a feeling you all might be able to help me liven things up — and that maybe a few of you were looking for some good new stuff, too.

As ever, you all delivered. I got hundreds of emails, messages, Threads posts, and comments from folks with great ideas and recommendations. I hardly got any duplicate recommendations, either, which is just delightful — there’s so much good stuff out there. And rather than keep it all for myself, I figured I’d share some of the best folks I’ve been turned onto in the last couple of weeks. I’m going to mostly link to YouTube channels, both because that’s what most people recommended but also because you can find folks’ other social links from their channel pages.

(One small caveat before we dive in: I haven’t been able to thoroughly vet everything all these folks have done forever. So if someone turns out to be the worst, in whatever way, I’m really sorry. If I’m missing something I should know about, definitely let me know. But y’all rarely steer me wrong, so I have high hopes.)

No surprise, a lot of what I heard about were tech creators. Here are a bunch of the folks you recommended, in no particular order:

  • Work From Hype : all about desk setups and WFH gear
  • Macho Nacho : retro gaming
  • Elliotisacoolguy : a very funny graphic designer
  • Brandon Talbot : fitness and health tech reviews
  • MobileTechReview : gadget reviews
  • Brandon James Greer : a pixel artist sharing their process
  • Easy Allies : video game news and reviews
  • Ritvikmath : data science and predictions
  • Technology Connections : everyday tech explainers
  • Enrico Tartarotti : essays on how tech works and how we use it
  • Byte Review : gadget lifestyle videos, especially Apple stuff
  • Salem Techsperts : stories from a computer repair shop
  • The Serial Port : all things tech history

Also, it turns out you all love a good video essay and explainer as much as I do, and you shared a bunch of great channels for just that:

  • Paul E.T. (on movies)
  • Not Just Bikes (on urban planning)
  • What’s So Great About That? (on… everything)
  • Answer in Progress (also on… everything)
  • Kaptainkristian (on movies and TV)
  • Golden Owl (on video game design)
  • Blast Processing (on video game history)
  • Middle 8 (on music culture)
  • MinuteFood (on food)
  • Calum (on transportation history)

Then there were just a lot of miscellaneous folks you all shared. Here are a few I’ve been enjoying:

  • Berm Peak , all about bikes and scooters and anything you can ride on
  • Slickerdrips , with lots of board game tutorials and playthroughs
  • Vanwives , on van life, DIY, and off-grid living
  • Boulder Creek Railroad : a model builder half working and half teaching
  • Escape to rural France , a guy chronicling his attempt to rebuild a chateau
  • Jelle’s Marble Runs , because marble runs are never not fun
  • Adventure Chasing , a guy and his dog exploring the outdoors

I could keep going, and I’m sure I will at some point. Thanks again to everyone who sent stuff in! My watch list will never be the same, and I couldn’t be happier about it.

Screen share

From now on, if you’re a Vergecast listener and you like the way the show sounds, you’ll have Erick Gomez to thank. He joined the team this week as our audio engineer and has, in his career, worked on some of the best podcasts and radio shows on the planet. 

I like to have new folks share their homescreen with us, both because I think it’s a fun way to get to know someone and I like having a work-appropriate reason to make fun of people’s silly wallpapers and app choices. But from the first time I met Erick, it became abundantly clear he was a person who brokered no chaos and would have a homescreen that Just Made Sense.

Here’s Erick’s homescreen, plus some info on the apps he uses and why:

essay bot chatgpt

The phone: iPhone 14 Pro Max

The wallpaper: I’ve been gaming since I was about five years old, and it is still one of my greatest hobbies and passions, so my lock screen is that of a Game Boy Advance SP — a cherished handheld from my childhood. I love how the widgets make it seem as though they’re part of the Game Boy display.

My homescreen wallpaper is my daughter Charlie in her Elmo costume from last year. We are a BIG Halloween family, and we are always counting the days until spooky season.

The apps: Eufy Security, Dot, Passwords, Delta, Journal, Parcel, Overcast, Hey Email, Arc Search.

I’m a big fan of widgets and glanceable information — the less I have to pop into an app, the better. I don’t have any social media apps installed on my phone and will only reinstall them whenever I feel compelled to post something. As I’ve gotten older, I’ve realized the insane amounts of time I’ve spent mindlessly scrolling Instagram and the like, so I’ve channeled that energy into journaling apps.

Dot : It’s an AI companion chatbot thing. It’s actually really cool. It’s very much like a journal or friend that responds to you and can give you some feedback and spark ideas. It’s great until it confuses some facts about you, which breaks the immersion. It will often text me during the day when I’m super busy so I don’t get a chance to respond. If you ignore it for long enough, it will stop reaching out to you. Just like a friend in real life!

Delta : Did I mention I’m a gamer? It was LONG overdue for emulators to come into the App Store, and this is by far the best for playing retro games. If I find myself waiting around for something, I’ll quickly pop into a game of NBA Jam or Streets of Rage .

Hey : Hey is my favorite email client and service. The email screener and privacy-focused approach make it well worth the yearly subscription. And it looks better than most email apps.

Arc : This is a new addition to the homescreen and has quickly replaced Safari for me in most cases. Its “Browse For Me” feature is a godsend. I also love the app’s alternate icon, which pays homage to Netscape Navigator. It’s 1994 all over again!

I also asked Erick to share a few things he’s into right now. Here’s what he sent back:

  • I’m currently reading MCU: The Reign of Marvel Studios . As a Marvel zombie from way back, I’m a huge fan of the movies and am fascinated by the behind-the-scenes look at how the sausage was made.
  • I’m currently watching Dark Matter on Apple TV Plus. A thought-provoking sci-fi show that more people need to be watching! It’s been renewed for a second season, so that should tell you something.
  • I’m trying to get through my ginormous backlog of video games (there are TOO MANY games, y’all), and I’m currently playing the Paper Mario: The Thousand-Year Door remake on Switch. It’s an absolutely beautiful gem of a game that needs to be experienced by everyone.

Crowdsourced

Here’s what the Installer community is into this week. I want to know what you’re into right now as well! Email [email protected] or message me on Signal — @davidpierce.11 — with your recommendations for anything and everything, and we’ll feature some of our favorites here every week. For even more great recommendations, check out the replies to this post on Threads .

“Just moved my work Windows PC to Zen Browser . A Firefox version that has a very strong Arc feeling.” – Maciej

“Was away at a cabin with family and we did not bring a camera. When the baby fell asleep, we popped in one AirPod and turned on Live Listen , left the phone in the room, and went about our business! Two birds with one stone, really, since then I didn’t have my phone. I am curious if Android has anything similar!” – Saad

“ Four Thousand Weeks by Oliver Burkeman is rocking my world right now. For those in constant search of productivity optimization (speaking of no one in particular, of course…), this book hits home.” – Ryan

“ Diggnation came back. So that’s been some fun nostalgia.” – Brian

“The HandleStash — a shock absorbing beverage holder for bikes. Game-changer for morning commute / weekend morning rides with coffee.” – Jim

“I switched from Spotify to YouTube Music a few years ago after Joe Rogan-gate and haven’t looked back! The audio quality is SO MUCH BETTER than Spotify. Feel that bass. There aren’t as many playlists, but just enough, and I tend to really enjoy checking out the new releases each week. It’s like a digital Best Buy from the ’90s. Plus, if you’re already paying for YouTube Premium, it’s included.” – Ellen

“I just got a rice cooker, and I’m eyeballing rice cooker cornbread for this weekend.” – Daniel

“ Kaos on Netflix is amazing. A perfect blend of Greek mythology and modern-day politics. Highly recommended.” – Sacchi

“This week, I’m procrastinating by deciding whether I should stick with Bear , go back to Obsidian , or accept my fate and move back to Apple Notes . It is torture.” – Eitan

“ A Short History of Nearly Everything . I know that this book is pretty old and many people have already read it, but if you haven’t read it, you should surely give it a try.” – Aashwath

“I lost most of my Labor Day weekend to Shapez 2 . It’s incredible!” – Alec

Signing off

Apple’s annual iPhone launch event is on Monday, and we’re going to get a whole bunch of new gear: sounds like phones, watches, headphones, and maybe even desktops are all on the docket. We’re also going to get upgrades to all of Apple’s software, too. Which is mostly good news! But after watching this video about what Apple’s updates do to the very oldest devices they technically support, I’m also thinking of everyone whose devices are supposedly about to be “upgraded” and might instead just begin to fall apart. Best of luck to all the Intel-powered Mac owners out there. 

See you next week!

iPhone 16 event live blog: all the news from Apple’s keynote

‘it’s glowtime’: all the news from apple’s iphone 16 event, apple announces the iphone 16 pro, apple announces the iphone 16, apple watch series 10 announced with bigger screen and thinner design.

Sponsor logo

More from Apps

Photo illustration of Telegram CEO Pavel Durov with the Telegram logo.

Telegram disables ‘misused’ features as CEO faces criminal charges

Google logo and black swirls

Google tests its ‘Ask Photos’ AI assistant that understands what’s in your pictures

An image showing the new AI search feature in Zillow

Zillow’s upgraded AI search will show you more homes you can’t afford

A picture of a touchscreen inside a vehicle, with text on the screen indicating a question has been asked of the AI assistant.

New VWs will answer some of your questions with ChatGPT

'ZDNET Recommends': What exactly does it mean?

ZDNET's recommendations are based on many hours of testing, research, and comparison shopping. We gather data from the best available sources, including vendor and retailer listings as well as other relevant and independent reviews sites. And we pore over customer reviews to find out what matters to real people who already own and use the products and services we’re assessing.

When you click through from our site to a retailer and buy a product or service, we may earn affiliate commissions. This helps support our work, but does not affect what we cover or how, and it does not affect the price you pay. Neither ZDNET nor the author are compensated for these independent reviews. Indeed, we follow strict guidelines that ensure our editorial content is never influenced by advertisers.

ZDNET's editorial team writes on behalf of you, our reader. Our goal is to deliver the most accurate information and the most knowledgeable advice possible in order to help you make smarter buying decisions on tech gear and a wide array of products and services. Our editors thoroughly review and fact-check every article to ensure that our content meets the highest standards. If we have made an error or published misleading information, we will correct or clarify the article. If you see inaccuracies in our content, please report the mistake via this form .

What is ChatGPT? The world's most popular AI chatbot explained

screenshot-2024-03-27-at-4-28-37pm.png

What is ChatGPT?

ChatGPT is an AI chatbot with advanced natural language processing (NLP) that allows you to have human-like conversations to complete various tasks. The  generative AI  tool can answer questions and assist you with composing text, code, and much more.

Also :  How to use ChatGPT

It's currently  open to use for free . OpenAI launched a paid subscription version called ChatGPT Plus  in February 2023, which guarantees users access to the company's latest models, exclusive features, and updates.

What is ChatGPT used for?

ChatGPT offers many functions in addition to answering simple questions. ChatGPT can compose essays , have philosophical conversations, do math, and even code for you . 

The tasks ChatGPT can help with also don't have to be so ambitious. For example, my favorite use of ChatGPT is for help creating basic lists for chores, such as packing and grocery shopping, and to-do lists that make my daily life more productive. The possibilities are endless. 

ZDNET has published many ChatGPT how-to guides. Below are some of the most popular ones. 

Use ChatGPT to: 

  • Write an essay
  • Create an app
  • Build your resume
  • Write Excel formulas
  • Summarize content
  • Write a cover letter
  • Start an Etsy business
  • Create charts and tables
  • Browse the web
  • Create custom AI assistants
  • Analyze PDFs
  • Digitize handwritten notes
  • Write Arduino drivers

Is ChatGPT available for free?

ChatGPT is free, regardless of what you use it for, including writing, coding, and much more. 

There is a subscription option , ChatGPT Plus, that costs $20 per month. The paid subscription model gives you extra perks, such as priority access to GPT-4o , DALL-E 3 , and the latest upgrades. 

Also: ChatGPT vs ChatGPT Plus: Is it worth the subscription fee?

Although the subscription price may seem steep, it is the same amount as Microsoft Copilot Pro and Google One AI Premium , which are Microsoft's and Google's paid AI offerings. 

How can you access ChatGPT?

On April 1, 2024, OpenAI stopped requiring you to log in to ChatGPT. Now, you can access ChatGPT simply by visiting  chat.openai.com . You can also access ChatGPT  via an app on your iPhone  or  Android  device.

Once you visit the site, you can start chatting away with ChatGPT. A great way to get started is by asking a question, similar to what you would do with Google. You can ask as many questions as you'd like.

Also: ChatGPT no longer requires a login, but you might want one anyway. Here's why

Creating an OpenAI account still offers some perks, such as saving and reviewing your chat history, accessing custom instructions, and, most importantly, getting free access to GPT-4o. Signing up is free and easy; you can use your existing Google login.

For step-by-step instructions, check out ZDNET's guide on  how to start using ChatGPT . 

Is there a ChatGPT app?

Yes, an official ChatGPT app is available for  iPhone and Android users. Make sure to download OpenAI's app, as many copycat fake apps are listed on Apple's App Store and the Google Play Store that are not affiliated with OpenAI.

Also: ChatGPT dropped a free app for iPhones. Does it live up to the hype?

There's also a ChatGPT app for  MacOS , which lets you access the chatbot quickly via a keyboard shortcut. The Mac app was initially only available for ChatGPT Plus subscribers, but OpenAI recently made it available to all users.

Is ChatGPT safe?

People have expressed concerns about AI chatbots replacing or atrophying human intelligence. 

For example, chatbots can write an entire essay in seconds, raising concerns about students cheating and not learning how to write properly. These fears even led  some school districts to block access  when ChatGPT initially launched.

Now, not only have many of those schools decided to unblock the technology, but some higher education institutions have been  catering their academic offerings  to AI-related coursework. 

Also:  Generative AI can be the academic assistant an underserved student needs

Another concern with AI chatbots is the possible spread of misinformation. ChatGPT says: "My responses are not intended to be taken as fact, and I always encourage people to verify any information they receive from me or any other source." OpenAI also notes that ChatGPT sometimes writes "plausible-sounding but incorrect or nonsensical answers."

Lastly, there are ethical and privacy concerns regarding the information ChatGPT was trained on. OpenAI scraped the internet to train the chatbot without asking content owners for permission to use their content, which brings up many copyright and intellectual property concerns. 

Also: OpenAI and Anthropic to collaborate with US government on AI safety

There are also privacy concerns regarding generative AI companies using your data to fine-tune their models further, which has become a common practice. OpenAI lets you turn off training in ChatGPT's settings.

So, is ChatGPT safe? If your main concern is privacy, OpenAI has implemented several options to give users peace of mind that their data will not be used to train models. The company even allows you to turn off your chat history. If you are concerned about the moral and ethical problems, those are still being hotly debated.

Will my conversations with ChatGPT be used for training?

Companies train generative AI models on user inputs. Therefore, when familiarizing yourself with how to use ChatGPT, you might wonder if your specific conversations will be used for training and, if so, who can view your chats.

Also:  This ChatGPT update fixed one of my biggest productivity issues with the AI chatbot

OpenAI will, by default, use your conversations with the free chatbot to train data and refine its models. You can opt out of it using your data for model training by clicking on the question mark in the bottom left-hand corner, Settings, and turning off "Improve the model for everyone."

Can ChatGPT help me apply for jobs?

Yes, ChatGPT is a great resource for helping with job applications. Undertaking a job search can be tedious and difficult, and ChatGPT can help you lighten the load. ChatGPT can build your resume  and write a cover letter .

Also :  How to use ChatGPT to write an essay

If your application has any written supplements, you can use ChatGPT to help you write those essays or personal statements . You can also use ChatGPT to prep for your interviews by asking ChatGPT to provide you mock interview questions, background on the company, or questions that you can ask. 

Can you use ChatGPT for schoolwork?

ChatGPT can be an excellent resource in assisting students with their work. A popular misconception is that ChatGPT and other AI resources will do students' work for them. However, it can be used as a personal tutor or editor, giving students assistance outside of the classroom. 

Also: 5 free AI tools for school that students, teachers, and parents can use, too

For example, a student can drop their essay into ChatGPT and have it copyedit it, upload class handwritten notes and have them digitized, or even generate study outlines from class materials. 

What does ChatGPT stand for?

The last three letters in ChatGPT's namesake stand for Generative Pre-trained Transformer (GPT), a family of large language models created by OpenAI that uses deep learning to generate human-like, conversational text. 

Also: What does GPT stand for? Understanding GPT 3.5, GPT 4, GPT-4 Turbo, and more

The "Chat" part of the name is simply a callout to its chatting capabilities. 

Can ChatGPT generate images?

Yes, ChatGPT can generate images using DALL-E 3. However, the limits vary depending on whether you are a ChatGPT Plus subscriber or a free user. ChatGPT Plus subscribers are allowed 50 generations per day, while free users are allotted two. 

Also: Free ChatGPT users can now create images with DALL-E 3, but there's a catch

Since OpenAI discontinued DALL-E 2 in February 2024, the only way to access its most advanced AI image generator, DALL-E 3, through OpenAI's offerings is via its chatbot.

Microsoft's Copilot offers free image generation, also powered by DALL-E 3, in its chatbot. This is a great alternative if you don't want to pay for ChatGPT Plus but want high-quality image outputs.

How does ChatGPT work?

ChatGPT runs on a large language model (LLM) architecture created by OpenAI called the  Generative Pre-trained Transformer  (GPT). Since its launch, the free version of ChatGPT ran on a fine-tuned model in the GPT-3.5 series until May 2024, when OpenAI upgraded the model to GPT-4o. Now, the free version runs on GPT-4o mini, with limited access to GPT-4o. 

Also:   How does ChatGPT actually work?  

With a subscription to ChatGPT Plus , you can access GPT-4, GPT-4o mini or  GPT-4o . Plus, users also have priority access to GPT-4o, even at capacity, while free users get booted down to GPT-4o mini. 

Generative AI models of this type are trained on vast amounts of information from the internet, including websites, books, news articles, and more.

Who owns ChatGPT currently?

OpenAI launched ChatGPT on November 30, 2022. OpenAI has also developed  DALL-E 2  and  DALL-E 3 , popular  AI image generators , and Whisper, an automatic speech recognition system. 

Also: OpenAI inks deal with ride-sharing operator to develop AI tools

As a result, OpenAI owns ChatGPT. Microsoft is a major investor in OpenAI thanks to multiyear, multi-billion dollar  investments. Elon Musk was an investor when OpenAI was first founded in 2015 but has since completely severed ties with the startup and created his own AI chatbot, Grok .

Is ChatGPT better than a search engine?

ChatGPT is an AI chatbot created to converse with the end user. A search engine indexes web pages on the internet to help users find information. One is not better than the other, as each suit different purposes. 

When searching for as much up-to-date, accurate information as possible, your best bet is a search engine. It will provide you with pages upon pages of sources you can peruse. 

Also: The best AI search engines of 2024: Google, Perplexity, and more

As of May 2024, the free version of ChatGPT can get responses from both the GPT-4o model and the web. It will only pull its answer from, and ultimately list, a handful of sources instead of showing nearly endless search results.

For example, I used GPT-4o to ask, "What is the weather today in San Francisco?" In its response, ChatGPT told me that it searched four sites and provided links to them. 

Also: How to get ChatGPT to browse the web for free

If you are looking for a platform that can explain complex topics in an easy-to-understand manner, then ChatGPT might be what you want. If you want the best of both worlds, plenty of AI search engines combine both.

What is SearchGPT?

SearchGPT is an experimental offering from OpenAI that functions as an AI-powered search engine that is aware of current events and uses real-time information from the Internet. The experience is a prototype, and OpenAI plans to integrate the best features directly into ChatGPT in the future.

Also: You wanted to try OpenAI's SearchGPT? It's time to look for AI alternatives

Upon launching the prototype, users were given a waitlist to sign up for. However, shortly after,  OpenAI closed its waitlist. 

What are ChatGPT's limitations?

Despite its impressive capabilities, ChatGPT still has limitations. Users sometimes need to reword questions multiple times for ChatGPT to understand their intent. A bigger limitation is a lack of quality in responses, which can sometimes be plausible-sounding but are verbose or make no practical sense.

Also: How AI hallucinations could help create life-saving antibiotics

Instead of asking for clarification on ambiguous questions, the model guesses what your question means, which can lead to poor responses. Generative AI models are also subject to hallucinations, which can result in inaccurate responses.

Does ChatGPT give wrong answers?

As mentioned above, ChatGPT, like all language models, has  limitations  and can give nonsensical answers and incorrect information, so it's important to double-check the answers it gives you.

Also: 8 ways to reduce ChatGPT hallucinations

OpenAI recommends you provide feedback on what ChatGPT generates by using the thumbs-up and thumbs-down buttons to improve its underlying model. You can also join the startup's Bug Bounty program , which offers up to $20,000 for reporting security bugs and safety issues.

Can ChatGPT refuse to answer my prompts?

AI systems like ChatGPT can and do reject  inappropriate requests . The AI assistant can identify inappropriate submissions to prevent unsafe content generation.

Also:  6 things ChatGPT can't do (and another 20 it refuses to do)

These submissions include questions that violate someone's rights, are offensive, are discriminatory, or involve illegal activities. The ChatGPT model can also challenge incorrect premises, answer follow-up questions, and even admit mistakes when you point them out.

These guardrails are important. AI models can generate advanced, realistic content that can be exploited by bad actors for harm, such as spreading misinformation about public figures and influencing elections .

Does ChatGPT plagiarize?

Yes, sort of. OpenAI scraped the internet to train ChatGPT's models. Therefore, the technology's knowledge is influenced by other people's work. Since there is no guarantee that ChatGPT's outputs are entirely original, the chatbot may regurgitate someone else's work in your answer, which is considered plagiarism. 

Also, technically speaking, if you, as a user, copy and paste ChatGPT's response, that is an act of plagiarism because you are claiming someone else's work as your own. This act could have repercussions. 

Is there a ChatGPT detector?

In short, the answer is no, not because people haven't tried, but because none do it efficiently. 

In January 2023, OpenAI released a free tool to detect AI-generated text. Unfortunately, OpenAI's classifier tool could only correctly identify 26% of AI-written text with a "likely AI-written" designation. Furthermore, it provided false positives 9% of the time, incorrectly identifying human-written work as AI-produced. 

Also: I tested 7 AI content detectors - they're getting dramatically better at identifying plagiarism

The tool performed so poorly  that, six months after its release, OpenAI shut it down "due to its low rate of accuracy." Despite the tool's failure, the startup claims to be researching more effective techniques for AI text identification.

Other AI detectors exist on the market, including GPT-2 Output Detector ,  Writer AI Content Detector , and Content at Scale's AI Content Detection  tool. ZDNET tested these tools, and the results were underwhelming: all three were unreliable sources for spotting AI, repeatedly giving false negatives. Here are  ZDNET's full test results .

What is GPT-4?

GPT-4 is OpenAI's language model, much more advanced than its predecessor, GPT-3.5. GPT-4 outperforms GPT-3.5 in a series of simulated benchmark exams and produces fewer hallucinations. 

What is GPT-4o?

GPT-4o is OpenAI's latest, fastest, and most advanced flagship model. As the name implies, GPT-4o has the same intelligence as GPT-4. However, the "o" in the title stands for "omni", referring to its multimodal capabilities, which allow the model to understand text, audio, image, and video inputs and output text, audio, and image outputs.

Also:  6 ways OpenAI just supercharged ChatGPT for free users

The model is 50% cheaper in OpenAI's API than GPT-4 Turbo while still matching its English and coding capabilities and outperforming it in non-English language, vision, and audio understanding -- a big win for developers.

What are the best ChatGPT alternatives?

Although ChatGPT gets the most buzz, other options are just as good—and might even be better suited to your needs.  ZDNET has created a list of the best chatbots , all of which we have tested to identify the best tool for your requirements.

Also: 4 things Claude AI can do that ChatGPT can't

Despite ChatGPT's extensive abilities, other chatbots have advantages that might be better suited for your use case, including  Copilot , Claude , Perplexity ,  Jasper , and more.

What are GPTs?

OpenAI once offered plugins for ChatGPT to connect to third-party applications and access real-time information on the web. The plugins expanded ChatGPT's abilities , allowing it to assist with many more activities, such as planning a trip or finding a place to eat. 

Also:  My two favorite ChatGPT Plus features and the remarkable things I can do with them

However, on March 19, 2024, OpenAI stopped letting users install new plugins or start new conversations with existing ones. Instead, OpenAI replaced plugins with GPTs , which are easier for developers to build. 

With the latest update, all users, including those on the free plan, can access the GPT Store and find 3 million customized ChatGPT chatbots. Unfortunately, there is also a lot of spam in the GPT store, so be careful which ones you use.

What is Microsoft's involvement with ChatGPT?

Microsoft was an early investor in OpenAI, the AI startup behind ChatGPT, long before ChatGPT was released to the public. Microsoft's first involvement with OpenAI was in 2019 when the company invested $1 billion. The company invested another $2 billion in the years after. In January 2023, Microsoft extended its partnership with OpenAI through a multiyear, multi-billion dollar investment .

Also: ChatGPT vs. Copilot: Which AI chatbot is better for you?

Neither company disclosed the investment value, but unnamed sources told Bloomberg that it could total $10 billion over multiple years. In return, OpenAI's exclusive cloud-computing provider is Microsoft Azure, powering all OpenAI workloads across research, products, and API services.

Microsoft has also used its OpenAI partnership to revamp its Bing search engine and improve its browser. On February 7, 2023, Microsoft unveiled a new Bing tool , now known as Copilot, that runs on OpenAI's GPT-4, customized specifically for search.

How does Copilot compare to ChatGPT?

Copilot uses OpenAI's GPT-4, which means that since its launch, it has been more efficient and capable than the standard, free version of ChatGPT, which was powered by GPT 3.5 at the time. At the time, Copilot boasted several other features over ChatGPT, such as access to the internet, knowledge of current information, and footnotes.

Also: I was a Copilot diehard until ChatGPT added these 5 features

In May 2024, however, OpenAI supercharged the free version of its chatbot with GPT-4o. The upgrade gave users GPT-4 level intelligence, the ability to get responses from the web, analyze data, chat about photos and documents, use GPTs, and access the GPT Store and Voice Mode. After the upgrade, ChatGPT reclaimed its crown as the best AI chatbot.

What is Gemini and how does it relate to ChatGPT?

Gemini is Google's AI chat service, a rival to ChatGPT. On February 6, 2023, Google introduced its experimental AI chat service, which was then called Google Bard. 

Also: ChatGPT vs. Microsoft Copilot vs. Gemini: Which is the best AI chatbot?

Over a month after the announcement, Google began rolling out  access to Bard first via a waitlist . Now, the tool is available to the public. The biggest perk of Gemini is that it has Google Search at its core and has the same feel as Google products. Therefore, if you are an avid Google user, Gemini might be the best AI chatbot for you. 

What is Apple's involvement with OpenAI?

At Apple's Worldwide Developer's Conference  in June 2024, the company announced a partnership with OpenAI that will integrate ChatGPT with Siri. With the user's permission, Siri can request ChatGPT for help if Siri deems a task is better suited for ChatGPT.

Artificial Intelligence

Ai startup cerebras debuts 'world's fastest inference' service - with a twist, 1 in 3 workers are using ai multiple times a week - and they're shouting about it, the most popular programming languages in 2024 (and what that even means).

IMAGES

  1. How To Use Chatgpt To Write An Essay?

    essay bot chatgpt

  2. How To Use Chat Gpt To Write An Essay With Ease

    essay bot chatgpt

  3. How to Use ChatGPT to Write Essays That Impress

    essay bot chatgpt

  4. Writing an Essay with ChatGPT

    essay bot chatgpt

  5. How To Use ChatGPT To Write An Essay in 3 Easy Steps

    essay bot chatgpt

  6. How to Make ChatGPT Write an Essay

    essay bot chatgpt

VIDEO

  1. How To Create ChatGPT

  2. when the bot comment gets a lot of likes

  3. chatgpt kya hai aur kaise use kare in Hindi 2024

  4. Interesting Facts You Shouldn't Miss!

  5. Write English Essay on Chat GPT

  6. How to Use ChatGPT to Write an Essay

COMMENTS

  1. ChatGPT

    ChatGPT - OpenAI ... ChatGPT

  2. How to Write an Essay with ChatGPT

    How to Write an Essay with ChatGPT | Tips & Examples

  3. Introducing ChatGPT

    Introducing ChatGPT

  4. Using ChatGPT for Assignments

    Using ChatGPT for Assignments | Tips & Examples

  5. 5 ways ChatGPT can help you write an essay

    How ChatGPT (and other AI chatbots) can help you write ...

  6. A large-scale comparison of human-written versus ChatGPT-generated essays

    A large-scale comparison of human-written versus ...

  7. AI bot ChatGPT writes smart essays

    AI bot ChatGPT writes smart essays — should professors ...

  8. Should I Use ChatGPT to Write My Essays?

    Should I Use ChatGPT to Write My Essays?

  9. ChatGPT: Everything you need to know about the AI chatbot

    ChatGPT, OpenAI's text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to hyper-charge productivity through writing essays and code ...

  10. Using ChatGPT to Write a College Essay

    Using ChatGPT to Write a College Essay | Tips & Examples

  11. Can ChatGPT get into Harvard? We tested its admissions essay

    We tested its admissions essay. By Pranshu Verma and. Rekha Tenjarla. Jan. 8 at 6:00 a.m. ChatGPT's release a year ago triggered a wave of panic among educators. Now, universities are in the ...

  12. Should Students Let ChatGPT Help Them Write Their College Essays?

    Should Students Let ChatGPT Help Them Write Their ...

  13. AI bot ChatGPT writes smart essays

    AI bot ChatGPT writes smart essays — should academics worry? Sandra Wachter, BKC Faculty Associate, discusses ChatGPT and its concerns for academics and education. "The situation both worries and excites Sandra Wachter, who studies technology and regulation at the Oxford Internet Institute, UK. "I'm really impressed by the capability ...

  14. News, sport and opinion from the Guardian's US edition

    Learn how ChatGPT, a powerful AI bot from OpenAI, can write flawless essays and handle complex queries in this fascinating report.

  15. ChatGPT can generate an essay. But could it generate an "A"?

    New bot ChatGPT will force colleges to get creative to prevent cheating, experts say ... Piper said the best defense against AI essays is teachers getting to know their students and how they write ...

  16. ChatGPT

    Revolutionize essay writing with our AI-driven tool: Generate unique, plagiarism-free essays in minutes, catering to all formats and topics effortlessly. ChatGPT. Create new account. Essay generator. By aiseo.ai. Revolutionize essay writing with our AI-driven tool: Generate unique, plagiarism-free essays in minutes, catering to all formats and ...

  17. ChatGPT: Student builds app to sniff out AI-written essays

    ChatGPT: Student builds app to sniff out AI-written essays. Edward Tian has been thinking about the power of artificial intelligence for a number of years. But it was in a packed lecture last year ...

  18. Will ChatGPT Kill the Student Essay?

    The College Essay Is Dead. Nobody is prepared for how AI will transform academia. By Stephen Marche. Paul Spella / The Atlantic; Getty. December 6, 2022. Suppose you are a professor of pedagogy ...

  19. ChatGPT

    Just ask and ChatGPT can help with writing, learning, brainstorming and more. Meet your new essay writing ally! Our clever essay bot uses cutting-edge AI to eliminate the pain of writing killer academic papers.

  20. EssayBot

    It is free to use and easy to try. Just ask and ChatGPT can help with writing, learning, brainstorming and more. I'm Essaybot, your assistant for essay writing. I specialize in brainstorming ideas, structuring essays, and providing grammar and style tips. With a formal tone, I ensure your essays are academically sound and effectively ...

  21. AI ChatGPT: OpenAI, DALL-E Maker's New Essay-Writing Bot Blowing People

    A new chatbot created by artificial intelligence non-profit OpenAI Inc.has taken the internet by storm, as users speculated on its ability to replace everything from playwrights to college essays ...

  22. Did a Fourth Grader Write This? Or the New Chatbot?

    It's hard to fully grasp the enormous potential of ChatGPT, a new artificial intelligence chatbot released last month. The bot doesn't just search and summarize information that already exists.

  23. How to Make ChatGPT Write Longer

    Read it to learn actionable ways of getting ChatGPT write longer essays. IvyPanda® Blog. Free Essays; Study Hub. Study Blog. Q&A by Experts. Literature Guides. Essay Writing Scholarship. Video Scholarship. Tools. Essay Writing Tools GPT Essay Checker ... If you're trying to extend an essay, you may find it helpful that the bot answers ...

  24. ChatGPT essay writing: How to talk to students about AI and ...

    ChatGPT essay writing is a hot topic, especially in the realm of teaching and learning where it can impact academic integrity. ... "More centrally, however, is that the bot is more of a synthesizer than a critical thinker. It would do well on a compare-and-contrast essay, but is less able to create a unique thesis and to defend that thesis ...

  25. GPT-fabricated scientific papers on Google Scholar: Key features

    Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI. They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing. Google Scholar easily locates and lists these questionable papers alongside reputable, quality-controlled research.

  26. A new app for reading, watching, and listening to the internet

    Plus: Astro Bot for the PS5, the DJI Neo, Slow Horses Season 4, the GoPro Hero 13, and the best lesser-known creators on the web, in this week's Installer newsletter.

  27. What is ChatGPT? The world's most popular AI chatbot explained

    Also: How to use ChatGPT to write an essay. If your application has any written supplements, you can use ChatGPT to help you write those essays or personal statements. You can also use ChatGPT to ...