Chat GPT Essay Example: Enhancing Communication and Creativity

Introduction

In the realm of artificial intelligence (AI), the emergence of cutting-edge technologies has revolutionized various aspects of human life. One such remarkable innovation is the application of Chat Generative Pre-trained Transformers (Chat GPT), a sophisticated AI model developed to enhance communication and foster creativity. This article delves into a comprehensive chat GPT essay example, shedding light on its potential to revolutionize essay writing, nurture creativity, and reshape the boundaries of human-machine interaction.

Unleashing the Power of Chat GPT in Essay Writing

Crafting Engaging Introductions

Essay writing often hinges on the ability to captivate readers from the outset. With chat GPT, crafting captivating introductions becomes a seamless endeavor. By analyzing vast databases of literary masterpieces, historical narratives, and contemporary discourse, chat GPT generates introductions that effortlessly grab readers' attention. For instance, when tasked with an essay about climate change, chat GPT might begin with a thought-provoking quote or a startling statistic to immediately immerse the audience.

Developing Well-Structured Arguments

The hallmark of a compelling essay lies in its coherent structure and logical progression of arguments. Chat GPT excels in this domain by employing its intricate algorithms to organize ideas seamlessly. It sifts through a plethora of information, identifying key points and arranging them into a structured framework. As a result, essay writers can leverage chat GPT to streamline the process of outlining and presenting arguments in a clear and organized manner.

Fostering Creativity and Originality

Creativity is the lifeblood of impactful essay writing. Chat GPT serves as a wellspring of inspiration by generating novel perspectives and creative insights. By amalgamating diverse concepts and innovative viewpoints, chat GPT empowers essayists to infuse their compositions with fresh ideas and imaginative flair. Writers can collaborate with chat GPT to explore unconventional angles, injecting a unique and captivating essence into their essays.

Enhancing Language Proficiency

A hallmark of exceptional essayists is their command over language and eloquence in expression. Chat GPT functions as a linguistic virtuoso, providing writers with an extensive lexicon and refined syntax. As writers engage in a collaborative dance with chat GPT, they absorb linguistic nuances and expand their vocabulary. This symbiotic relationship elevates the quality of written communication, enabling writers to convey complex ideas with eloquence and precision.

Realizing the Benefits of Chat GPT Essay Example

Efficiency and Time Savings

The utilization of chat GPT in essay writing translates to unparalleled efficiency and time savings. Traditional research and idea generation often consume significant hours. However, chat GPT's rapid data analysis and idea synthesis expedite these preliminary phases. Writers can harness this efficiency to focus on refining their arguments, conducting deeper analyses, and perfecting the overall essay structure.

Diverse Essay Topics and Styles

Chat GPT transcends the limitations of human expertise by delving into a multitude of subjects and writing styles. Whether crafting a persuasive argument, a reflective personal essay, or a comprehensive research paper, chat GPT adapts to the desired tone and style. This adaptability equips writers with a versatile tool that seamlessly tailors its output to suit the requirements of diverse essay genres.

Optimized Research and Data Utilization

The chat GPT essay example illustrates the AI's prowess in data-driven research and information synthesis. By swiftly scouring vast repositories of information, chat GPT extracts relevant data points, statistics, and scholarly references. Writers can integrate these meticulously curated sources to bolster their arguments, thereby enhancing the essay's credibility and substantiating key claims.

Promotion of Critical Thinking

Contrary to misconceptions about AI stifling human creativity, chat GPT fosters critical thinking and analytical prowess. Collaborating with chat GPT prompts writers to engage in thoughtful deliberation, as they evaluate the AI-generated content and mold it to align with their vision. This iterative process stimulates cognitive faculties, encouraging essayists to critically assess, modify, and augment the AI-generated material.

Applications Beyond Essay Writing

Educational Tool for Learning

The chat GPT essay example not only revolutionizes essay composition but also serves as an invaluable educational tool. Students can interact with chat GPT to explore intricate concepts, seek clarifications, and brainstorm ideas. This AI-driven learning experience cultivates a dynamic environment that nurtures intellectual curiosity and facilitates holistic understanding.

Innovative Content Generation

Beyond academia, chat GPT finds its footing in creative content creation. From crafting compelling marketing copy to generating engaging blog posts, chat GPT proves its mettle in diverse content-generation endeavors. Brands and businesses can harness its capabilities to resonate with target audiences, amplify brand messaging, and establish a distinctive online presence.

Language Translation and Cross-Cultural Communication

Chat GPT's language proficiency extends beyond essay writing, offering seamless translation services and fostering cross-cultural communication. In an increasingly interconnected world, chat GPT bridges language barriers, enabling individuals to communicate effortlessly across diverse linguistic landscapes.

Virtual Collaborative Writing Partner

Imagine embarking on a writing journey with an AI companion that understands your voice, style, and preferences. Chat GPT evolves into a virtual collaborative writing partner, providing real-time suggestions, refining sentence structures, and injecting creative sparks into your narrative. This partnership promises to elevate the art of writing and stimulate unparalleled literary synergies.

Frequently Asked Questions (FAQs)

How does Chat GPT enhance creativity in essay writing?

Chat GPT enhances creativity by generating novel perspectives, unique insights, and imaginative angles, empowering writers to infuse their essays with fresh ideas.

Is chat GPT proficient in various writing styles?

Absolutely, chat GPT adapts to diverse writing styles, whether persuasive, informative, reflective, or analytical, ensuring seamless alignment with the desired tone and genre.

Can chat GPT assist in refining essay arguments?

Yes, chat GPT excels in structuring arguments by organizing ideas coherently and logically, offering writers a streamlined approach to presenting their viewpoints.

Does chat GPT replace human critical thinking?

No, chat GPT complements human critical thinking by stimulating thoughtful evaluation and iterative refinement of AI-generated content, fostering cognitive engagement.

What are the practical applications of chat GPT beyond essay writing?

Chat GPT finds utility in education as a learning tool, content creation for marketing and branding, language translation, cross-cultural communication, and virtual collaborative writing partnerships.

How can chat GPT revolutionize language translation?

Chat GPT breaks down language barriers by providing accurate and contextually relevant translations, facilitating seamless communication across diverse linguistic landscapes.

In the dynamic landscape of AI-driven innovation, the chat GPT essay example stands as a testament to the transformative potential of technology in the realm of communication and creativity. As writers embark on a collaborative journey with chat GPT, they unlock a tapestry of benefits, ranging from enhanced efficiency and creativity to optimized research and critical thinking. Beyond essay writing, chat GPT's applications span realms as diverse as education, content creation, language translation, and collaborative writing partnerships. As we traverse this exhilarating era of human-AI symbiosis, the chat GPT essay example beckons writers to explore uncharted horizons, ushering in a new chapter of limitless expression and intellectual evolution.

ChatGPT by OpenAI as an AI Essay Writer: Is It Helpful?

Table of contents

  • What Is ChatGPT?
  • Pros of Utilizing ChatGPT
  • Helps When You're Stuck with Ideas
  • Can Write Catchy Headings
  • Helps With an Outline
  • Helps to Find Relevant Sources and Citations
  • Providing Explanations and Examples of Various Writing Techniques
  • Everything Depends on Your Prompt
  • Cons of Utilizing ChatGPT
  • Based on Pre-Existing Training Data
  • There Are Plenty of AI Detectors
  • Your Essay May Not Be Plagiarism-Free
  • Chat GPT Uses Generic Phrases and Truisms
  • Has Repetitive Elements and Noticeably Generated Sentence Structure
  • Can’t Synthesize Properly
  • The Information Given by ChatGPT Is Not Always Credible
  • Can’t Show Your Reflections and Insights
  • Evaluate the Content to Ensure It Meets Assignment Requirements
  • Bottom Line

ChatGPT is a language model that can be used to generate essay ideas and content. This article explores how to use ChatGPT to write an essay, including tips on generating relevant and useful content. Here we answer the question: can ChatGPT write essays like a real writer?

What Is ChatGPT?

With the huge popularity of ChatGPT, almost everyone knows what it is. However, if not, you are at the right place to find out what it is.

OpenAI developed ChatGPT, an Artificial Intelligence (AI) model that utilizes Natural Language Processing (NLP) to produce text that resembles human writing. The name combines "Chat" with Generative Pre-trained Transformer, and the model has been trained on vast amounts of data to generate responses.

Its abilities are intended to assist users in obtaining helpful information and generating new ideas on a wide range of topics. ChatGPT’s sophisticated features are based on the GPT-3.5 architecture, which allows it to understand natural language and generate human-like responses. This means that users can interact with ChatGPT more intuitively and conversationally. With many possibilities to use ChatGPT, we will look deeper into its usage for essay and assignment writing.

Pros of Utilizing ChatGPT

By using this AI chatbot appropriately, students can benefit from a powerful tool to enhance the quality of their projects. Let’s explore how you can use ChatGPT in studying and writing.

Helps When You're Stuck with Ideas

It is common for even the most talented creatives to struggle to produce novel and captivating subjects for their essays. An AI chatbot can be beneficial for overcoming writer's block and helping to create innovative ideas.

With ChatGPT, you can input your query and receive instant suggestions for possible angles to explore. Its vast knowledge base and understanding of context allow it to create relevant and unique ideas that inspire you to take your essay in a new and exciting direction.

Can Write Catchy Headings

Everyone who has written an essay knows the significance of headings. They concisely summarize the material and captivate the reader's interest. With the help of an AI model, you can get numerous innovative and captivating hints on how to title an essay. This saves time and energy, especially when composing attractive headings.

Below is an example of how you can use Chat GPT to write an essay title:

[Image: ChatGPT-written essay title]

Helps With an Outline

Another point for using Chat GPT to write an essay is that it can create a thorough outline for your topic. Given the article topic, it can produce a range of related subtopics and key points which you could develop in your writing.

Using ChatGPT, you can make an essay outline with a clear and logical structure. You know how important it is for your readers to follow and easily understand your arguments.

Here is an example of using ChatGPT to write an essay outline:

[Image: ChatGPT essay outline]

Helps to Find Relevant Sources and Citations

ChatGPT can offer a synopsis of a source's main ideas, enabling you to quickly assess its utility to your essay. Moreover, it can partly serve as a citation generator for scholarly articles, books, and other materials on your article's subject matter. Furthermore, students can input a question or topic into ChatGPT, and the AI can provide relevant information or suggest sources that they can explore further.

An example of how to use ChatGPT to find relevant sources:

[Image: ChatGPT finding sources]

Providing Explanations and Examples of Various Writing Techniques

ChatGPT has been extensively trained on a vast dataset and can utilize a diverse array of essay styles and techniques. It can provide coherent explanations of these techniques and practical examples to demonstrate their usage when prompted.

For instance, if a user struggles to understand the concept of foreshadowing, they can ask the ChatGPT to explain it. It can define foreshadowing and give examples from literature or film to help the user understand how it works.

Also, it can suggest techniques that may be appropriate for a specific topic. For example, if the user is writing a persuasive essay, ChatGPT can suggest techniques such as emotional appeals or logical arguments to help the user effectively persuade their audience.

Everything Depends on Your Prompt

ChatGPT aims to produce replies derived from the information it is provided, implying that the initial input’s quality primarily determines its output’s effectiveness.

This feature can be beneficial since it compels users to carefully consider their essay’s subject. By knowing how to write a well-crafted prompt, individuals can assist ChatGPT in generating more precise and suitable responses that cater to their particular requirements.

As we can see, prompt A is not detailed enough, and the chatbot can't contextualize the query. Prompt B, on the other hand, is particular and focused. This makes it easy for the model to provide a relevant response.

Cons of Utilizing ChatGPT

Although an AI model can be a lifesaver for certain students, it's crucial to recognize its potential limitations. This section explores the shortcomings of employing ChatGPT to write an essay.

Based on Pre-Existing Training Data

Even though it is a valuable AI chatbot, it is not genuine artificial intelligence and relies solely on training data collected up to 2021. Consequently, it cannot incorporate new information or context from beyond that cutoff date.

As a result, ChatGPT essays can be inaccurate or irrelevant to the research’s subject matter, which can put students in an unfavorable position, especially those who seek to create up-to-date content.

There Are Plenty of AI Detectors

One of the potential drawbacks of using ChatGPT for writing is that there are plenty of AI detectors that can catch whether a chatbot produced the paper.

If an academic paper is developed entirely by Chat GPT, it may contain punctuation, coherence, and synthesis mistakes. This is where Papersowl comes in. It provides editing, proofreading, writing from scratch, and comprehensive research papers for students. It also contains many articles on how to handle common student issues in writing assignments.

Your Essay May Not Be Plagiarism-Free

While Chat GPT is designed to produce unique and original responses based on the input it receives, there is still the possibility that some of the content it produces may bear similarities to existing references.

This is especially true if the user inputs specific phrases or sentences already present in published works. Therefore, users need to learn how to write without plagiarism and edit the content created by the bot. This ensures that it is entirely original and cited adequately if necessary. However, if this is a problem, Papersowl can provide plagiarism-free assignments.

Chat GPT Uses Generic Phrases and Truisms

The chatbot has been trained on extensive data, encompassing a considerable quantity of text featuring customary phrases and expressions. Although these phrases can prove advantageous in informal discussions or marketing contexts, they may not be suitable or productive for scholarly essays.

Also, relying too heavily on these phrases can lead to a lack of originality and creativity in the essay. Therefore, it's important to use Chat GPT's suggestions only as a starting point. You must review and edit your essay carefully to ensure it is appropriate and effective for the intended audience and purpose.

Here is an example:

[Image: weak thesis statement from ChatGPT]

Has Repetitive Elements and Noticeably Generated Sentence Structure

Because Chat GPT is a machine learning algorithm, it creates responses based on patterns in the input data it has been trained on. This can often result in repetitive phrasing or sentence structures that may be noticeable to a human reader.

For example, if a user requests multiple paragraphs on a specific subject, the machine can produce comparable phrases or sentence patterns throughout each paragraph. This could result in essays that seem formulaic or deficient in ingenuity, potentially harming the paper's overall quality and, of course, your grade.
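To make this kind of repetition concrete, it can be measured with a rough heuristic. The sketch below is purely illustrative (it is not how real AI detectors work): it counts how often sentences reuse the same opening words, which is one of the repetitive patterns described above.

```python
from collections import Counter
import re

def opening_repetition_score(text, n_words=3):
    """Fraction of sentences whose first n_words opening also appears
    in another sentence. A crude, illustrative heuristic only."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    openings = [" ".join(s.lower().split()[:n_words]) for s in sentences]
    counts = Counter(openings)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(sentences) if sentences else 0.0

varied = "Cats sleep all day. Dogs bark at night. Birds sing at dawn."
repetitive = ("It is important to note that cats sleep. "
              "It is important to note that dogs bark. "
              "It is important to note that birds sing.")
print(opening_repetition_score(varied))      # 0.0
print(opening_repetition_score(repetitive))  # 1.0
```

A human editor scanning for "It is important to note that..." three times in a row is doing essentially this check by eye, which is why varying sentence openings is one of the easiest edits to make AI-assisted text read more naturally.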

Can’t Synthesize Properly

Another potential drawback of employing ChatGPT to write your paper is that it may not always be able to perform the essential step of synthesizing sources. While it tries to understand natural language and context, it can often produce awkward or confusing sentences that require editing or revision. Not to mention its inability to synthesize sources for your literature review, because it can't highlight what is important specifically for your paper. It can only summarize findings, and it is prone to errors.

[Image: ChatGPT dissertation literature review]

Note: Although ChatGPT generated this statement in response to the prompt, there are instances where such a response is unintended by the AI model.

The Information Given by ChatGPT Is Not Always Credible

The bot employs patterns from past examples to create responses rather than relying on comprehensive topic comprehension. As a result, the information it supplies may be partial, biased, or untrue. Thus, users must exercise critical thinking and fact-checking when using ChatGPT for research purposes.

Can’t Show Your Reflections and Insights

While GPT-3 essay writers can provide helpful information and develop content, they may lack the ability to offer the unique perspectives and original thoughts essential to writing an engaging essay.

As an AI bot, it is limited to the data it has been trained on and cannot draw from personal experiences or emotions that can add depth and richness to an academic report. Therefore, if it requires a more personal touch or a unique perspective, it may require additional input from the writer.

Evaluate the Content to Ensure It Meets Assignment Requirements

Lastly, ChatGPT essay writing may not consistently fulfill a project's specific criteria. There is a possibility that the created content could be inadequate, either by deviating from the assigned topic, being too general, or lacking sufficient detail, which could lower the final grade or even result in a failed exam.

Thus, it is essential to evaluate the Chat GPT-produced content meticulously to guarantee its precision, appropriateness, and depth in compliance with the assignment’s requirements.

Bottom Line

Besides being a good tool for coping with topics and essay structure, we cannot suggest using ChatGPT as a panacea for your writing assignments. It is still a machine, not a human, and not true AI. It is still essential to provide clear and specific questions, review and edit the rendered text, avoid copy-and-paste habits, and evaluate the relevance and accuracy of the information provided. As a result, getting an A without proper editing and proofreading is almost impossible, but PapersOwl is always here to help you with any task.

Should I Use ChatGPT to Write My Essays?

Everything high school and college students need to know about using — and not using — ChatGPT for writing essays.

Jessica A. Kent

ChatGPT is one of the most buzzworthy technologies today.

In addition to other generative artificial intelligence (AI) models, it is expected to change the world. In academia, students and professors are preparing for the ways that ChatGPT will shape education, and especially how it will impact a fundamental element of any course: the academic essay.

Students can use ChatGPT to generate full essays based on a few simple prompts. But can AI actually produce high quality work, or is the technology just not there yet to deliver on its promise? Students may also be asking themselves if they should use AI to write their essays for them and what they might be losing out on if they did.

AI is here to stay, and it can either be a help or a hindrance depending on how you use it. Read on to become better informed about what ChatGPT can and can’t do, how to use it responsibly to support your academic assignments, and the benefits of writing your own essays.

What is Generative AI?

Artificial intelligence isn't a twenty-first century invention. Beginning in the 1950s, data scientists started programming computers to solve problems and understand spoken language. AI's capabilities grew as computer speeds increased, and today we use AI for data analysis, finding patterns, and providing insights on the data it collects.

But why the sudden popularity of recent applications like ChatGPT? This new generation of AI goes further than just data analysis. Instead, generative AI creates new content. It does this by analyzing large amounts of data — GPT-3 was trained on 45 terabytes of data, or a quarter of the Library of Congress — and then generating new content based on the patterns it sees in the original data.

It’s like the predictive text feature on your phone; as you start typing a new message, predictive text makes suggestions of what should come next based on data from past conversations. Similarly, ChatGPT creates new text based on past data. With the right prompts, ChatGPT can write marketing content, code, business forecasts, and even entire academic essays on any subject within seconds.
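The predictive-text analogy can be made concrete with a toy model. The sketch below is illustrative only — real systems like ChatGPT use neural networks over tokens, not word-count tables — but it shows the core idea of predicting the next word from patterns in past text:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count which word follows which in the training text."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent continuation seen in training."""
    followers = model.get(word.lower())
    if not followers:
        return None  # never saw this word during training
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # 'cat' (follows 'the' most often)
```

Like your phone's keyboard, this model can only suggest continuations it has already seen; scaled up over enormous corpora and far longer contexts, the same statistical principle underlies generative AI's fluent but pattern-bound output.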

But is generative AI as revolutionary as people think it is, or is it lacking in real intelligence?

The Drawbacks of Generative AI

It seems simple. You’ve been assigned an essay to write for class. You go to ChatGPT and ask it to write a five-paragraph academic essay on the topic you’ve been assigned. You wait a few seconds and it generates the essay for you!

But ChatGPT is still in its early stages of development, and that essay is likely not as accurate or well-written as you’d expect it to be. Be aware of the drawbacks of having ChatGPT complete your assignments.

It’s not intelligence, it’s statistics

One of the misconceptions about AI is that it has a degree of human intelligence. However, its intelligence is actually statistical analysis, as it can only generate “original” content based on the patterns it sees in already existing data and work.

It “hallucinates”

Generative AI models often provide false information — so much so that there’s a term for it: “AI hallucination.” OpenAI even has a warning on its home screen, saying that “ChatGPT may produce inaccurate information about people, places, or facts.” This may be due to gaps in its data, or because it lacks the ability to verify what it’s generating.

It doesn’t do research  

If you ask ChatGPT to find and cite sources for you, it will do so, but they could be inaccurate or even made up.

This is because AI doesn’t know how to look for relevant research that can be applied to your thesis. Instead, it generates content based on past content, so if a number of papers cite certain sources, it will generate new content that sounds like it’s a credible source — except it likely may not be.

There are data privacy concerns

When you input your data into a public generative AI model like ChatGPT, where does that data go and who has access to it? 

Prompting ChatGPT with original research should be a cause for concern — especially if you’re inputting study participants’ personal information into the third-party, public application. 

JPMorgan has restricted use of ChatGPT due to privacy concerns, Italy temporarily blocked ChatGPT in March 2023 after a data breach, and Security Intelligence advises that “if [a user’s] notes include sensitive data … it enters the chatbot library. The user no longer has control over the information.”

It is important to be aware of these issues and take steps to ensure that you’re using the technology responsibly and ethically. 

It skirts the plagiarism issue

AI creates content by drawing on a large library of information that’s already been created, but is it plagiarizing? Could there be instances where ChatGPT “borrows” from previous work and places it into your work without citing it? Schools and universities today are wrestling with this question of what’s plagiarism and what’s not when it comes to AI-generated work.

To demonstrate this, one Elon University professor gave his class an assignment: Ask ChatGPT to write an essay for you, and then grade it yourself. 

“Many students expressed shock and dismay upon learning the AI could fabricate bogus information,” he writes, adding that he expected some essays to contain errors, but all of them did. 

His students were disappointed that “major tech companies had pushed out AI technology without ensuring that the general population understands its drawbacks” and were concerned about how many embraced such a flawed tool.

Explore Our High School Programs

How to Use AI as a Tool to Support Your Work

As more students are discovering, generative AI models like ChatGPT just aren’t as advanced or intelligent as they may believe. While AI may be a poor option for writing your essay, it can be a great tool to support your work.

Generate ideas for essays

Have ChatGPT help you come up with ideas for essays. For example, input specific prompts, such as, “Please give me five ideas for essays I can write on topics related to WWII,” or “Please give me five ideas for essays I can write comparing characters in twentieth century novels.” Then, use what it provides as a starting point for your original research.

Generate outlines

You can also use ChatGPT to help you create an outline for an essay. Ask it, “Can you create an outline for a five paragraph essay based on the following topic” and it will create an outline with an introduction, body paragraphs, conclusion, and a suggested thesis statement. Then, you can expand upon the outline with your own research and original thought.

Generate titles for your essays

Titles should draw a reader into your essay, yet they’re often hard to get right. Have ChatGPT help you by prompting it with, “Can you suggest five titles that would be good for a college essay about [topic]?”
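The three prompt patterns above (ideas, outlines, titles) can be collected into a small helper so you reuse them consistently. The function name and template wording below are just an illustrative wrapper around the prompts quoted in this article, not any official API:

```python
def build_prompt(task, topic, count=5):
    """Assemble one of the three essay-support prompt patterns.
    'task' is one of: 'ideas', 'outline', 'titles'."""
    templates = {
        "ideas": (f"Please give me {count} ideas for essays I can write "
                  f"on topics related to {topic}."),
        "outline": ("Can you create an outline for a five paragraph essay "
                    f"based on the following topic: {topic}?"),
        "titles": (f"Can you suggest {count} titles that would be good "
                   f"for a college essay about {topic}?"),
    }
    if task not in templates:
        raise ValueError(f"unknown task: {task}")
    return templates[task]

print(build_prompt("titles", "the Industrial Revolution"))
```

Keeping prompts in one place like this also makes it easy to refine the wording over time, which matters because, as noted above, the quality of the output depends heavily on the quality of the prompt.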

The Benefits of Writing Your Essays Yourself

Asking a robot to write your essays for you may seem like an easy way to get ahead in your studies or save some time on assignments. But, outsourcing your work to ChatGPT can negatively impact not just your grades, but your ability to communicate and think critically as well. It’s always the best approach to write your essays yourself.

Create your own ideas

Writing an essay yourself means that you’re developing your own thoughts, opinions, and questions about the subject matter, then testing, proving, and defending those thoughts. 

When you complete school and start your career, projects aren’t simply about getting a good grade or checking a box, but can instead affect the company you’re working for — or even impact society. Being able to think for yourself is necessary to create change and not just cross work off your to-do list.

Building a foundation of original thinking and ideas now will help you carve your unique career path in the future.

Develop your critical thinking and analysis skills

In order to test or examine your opinions or questions about a subject matter, you need to analyze a problem or text, and then use your critical thinking skills to determine the argument you want to make to support your thesis. Critical thinking and analysis skills aren’t just necessary in school — they’re skills you’ll apply throughout your career and your life.

Improve your research skills

Writing your own essays will train you in how to conduct research, including where to find sources, how to determine if they’re credible, and their relevance in supporting or refuting your argument. Knowing how to do research is another key skill required throughout a wide variety of professional fields.

Learn to be a great communicator

Writing an essay involves communicating an idea clearly to your audience, structuring an argument that a reader can follow, and making a conclusion that challenges them to think differently about a subject. Effective and clear communication is necessary in every industry.

Be impacted by what you’re learning about

Engaging with the topic, conducting your own research, and developing original arguments allows you to really learn about a subject you may not have encountered before. Maybe a simple essay assignment around a work of literature, historical time period, or scientific study will spark a passion that can lead you to a new major or career.

Resources to Improve Your Essay Writing Skills

While there are many rewards to writing your essays yourself, the act of writing an essay can still be challenging, and the process may come easier for some students than others. But essay writing is a skill that you can hone, and students at Harvard Summer School have access to a number of on-campus and online resources to assist them.

Students can start with the Harvard Summer School Writing Center, where writing tutors can offer you help and guidance on any writing assignment in one-on-one meetings. Tutors can help you strengthen your argument, clarify your ideas, improve the essay’s structure, and lead you through revisions.

The Harvard libraries are a great place to conduct your research, and its librarians can help you define your essay topic, plan and execute a research strategy, and locate sources. 

Finally, review “The Harvard Guide to Using Sources,” which can guide you on what to cite in your essay and how to do it. Be sure to review the “Tips For Avoiding Plagiarism” on the “Resources to Support Academic Integrity” webpage as well to help ensure your success.

Sign up to our mailing list to learn more about Harvard Summer School

The Future of AI in the Classroom

ChatGPT and other generative AI models are here to stay, so it’s worthwhile to learn how you can leverage the technology responsibly and wisely so that it can be a tool to support your academic pursuits. However, nothing can replace the experience and achievement gained from communicating your own ideas and research in your own academic essays.

About the Author

Jessica A. Kent is a freelance writer based in Boston, Mass. and a Harvard Extension School alum. Her digital marketing content has been featured on Fast Company, Forbes, Nasdaq, and other industry websites; her essays and short stories have been featured in North American Review, Emerson Review, Writer’s Bone, and others.


Open access | Published: 30 October 2023

A large-scale comparison of human-written versus ChatGPT-generated essays

Steffen Herbold, Annette Hautli-Janisz, Ute Heuer, Zlata Kikteva & Alexander Trautsch

Scientific Reports, volume 13, Article number: 18617 (2023)

25k Accesses | 40 Citations | 98 Altmetric

Subjects: Computer science, Information technology
ChatGPT and similar generative AI models have attracted hundreds of millions of users and have become part of the public discourse. Many believe that such models will disrupt society and lead to significant changes in the education system and information generation. So far, this belief is based on either anecdotal evidence or benchmarks from the owners of the models—both lack scientific rigor. We systematically assess the quality of AI-generated content through a large-scale study comparing human-written versus ChatGPT-generated argumentative student essays. We use essays that were rated by a large number of human experts (teachers). We augment the analysis by considering a set of linguistic characteristics of the generated essays. Our results demonstrate that ChatGPT generates essays that are rated higher in quality than human-written essays. The writing style of the AI models exhibits linguistic characteristics that are different from those of the human-written essays. Since the technology is readily available, we believe that educators must act immediately. We must re-invent homework and develop teaching concepts that utilize these AI models in the same way as math utilizes the calculator: teach the general concepts first and then use AI tools to free up time for other learning objectives.


Introduction

The massive uptake in the development and deployment of large-scale Natural Language Generation (NLG) systems in recent months has yielded an almost unprecedented worldwide discussion of the future of society. The ChatGPT service, which serves as a web front-end to GPT-3.5 1 and GPT-4, was the fastest-growing service in history to break the 100 million user milestone in January 2023 and had 1 billion visits by February 2023 2 .

Driven by the upheaval that is particularly anticipated for education 3 and knowledge transfer for future generations, we conduct the first independent, systematic study of AI-generated language content that is typically dealt with in high-school education: argumentative essays, i.e. essays in which students discuss a position on a controversial topic by collecting and reflecting on evidence (e.g. ‘Should students be taught to cooperate or compete?’). Learning to write such essays is a crucial aspect of education, as students learn to systematically assess and reflect on a problem from different perspectives. Understanding the capability of generative AI to perform this task increases our understanding of the skills of the models, as well as of the challenges educators face when it comes to teaching this crucial skill. While there is a multitude of individual examples and anecdotal evidence for the quality of AI-generated content in this genre (e.g. 4 ) this paper is the first to systematically assess the quality of human-written and AI-generated argumentative texts across different versions of ChatGPT 5 . We use a fine-grained essay quality scoring rubric based on content and language mastery and employ a significant pool of domain experts, i.e. high school teachers across disciplines, to perform the evaluation. Using computational linguistic methods and rigorous statistical analysis, we arrive at several key findings:

AI models generate significantly higher-quality argumentative essays than the users of an essay-writing online forum frequented by German high-school students across all criteria in our scoring rubric.

ChatGPT-4 (ChatGPT web interface with the GPT-4 model) significantly outperforms ChatGPT-3 (ChatGPT web interface with the GPT-3.5 default model) with respect to logical structure, language complexity, vocabulary richness and text linking.

Writing styles between humans and generative AI models differ significantly: for instance, the GPT models use more nominalizations and have higher sentence complexity (signaling more complex, ‘scientific’, language), whereas the students make more use of modal and epistemic constructions (which tend to convey speaker attitude).

The linguistic diversity of the NLG models seems to be improving over time: while ChatGPT-3 still has a significantly lower linguistic diversity than humans, ChatGPT-4 has a significantly higher diversity than the students.

Our work goes significantly beyond existing benchmarks. While OpenAI’s technical report on GPT-4 6 presents some benchmarks, their evaluation lacks scientific rigor: it fails to provide vital information such as the agreement between raters, and does not report details regarding the assessment criteria or to what extent and how a statistical analysis was conducted for a larger sample of essays. In contrast, our benchmark provides the first (statistically) rigorous and systematic study of essay quality, paired with a computational linguistic analysis of the language employed by humans and two different versions of ChatGPT, offering a glance at how these NLG models develop over time. While our work is focused on argumentative essays in education, the genre is also relevant beyond education. In general, studying argumentative essays is one important way to understand how good generative AI models are at conveying arguments and, consequently, at persuasive writing in general.

Related work

Natural language generation

The recent interest in generative AI models can be largely attributed to the public release of ChatGPT, a public interface in the form of an interactive chat based on the InstructGPT 1 model, more commonly referred to as GPT-3.5. In comparison to the original GPT-3 7 and other similar generative large language models based on the transformer architecture like GPT-J 8 , this model was not trained in a purely self-supervised manner (e.g. through masked language modeling). Instead, a pipeline that involved human-written content was used to fine-tune the model and improve the quality of the outputs to both mitigate biases and safety issues, as well as make the generated text more similar to text written by humans. Such models are referred to as Fine-tuned LAnguage Nets (FLANs). For details on their training, we refer to the literature 9 . Notably, this process was recently reproduced with publicly available models such as Alpaca 10 and Dolly (i.e. the complete models can be downloaded and not just accessed through an API). However, we can only assume that a similar process was used for the training of GPT-4 since the paper by OpenAI does not include any details on model training.

Testing of the language competency of large-scale NLG systems has only recently started. Cai et al. 11 show that ChatGPT reuses sentence structure, accesses the intended meaning of an ambiguous word, and identifies the thematic structure of a verb and its arguments, replicating human language use. Mahowald 12 compares ChatGPT’s acceptability judgments to human judgments on the Article + Adjective + Numeral + Noun construction in English. Dentella et al. 13 show that ChatGPT-3 fails to understand low-frequency grammatical constructions like complex nested hierarchies and self-embeddings. In another recent line of research, the structure of automatically generated language is evaluated. Guo et al. 14 show that in question-answer scenarios, ChatGPT-3 uses different linguistic devices than humans. Zhao et al. 15 show that ChatGPT generates longer and more diverse responses when the user is in an apparently negative emotional state.

Given that we aim to identify certain linguistic characteristics of human-written versus AI-generated content, we also draw on related work in the field of linguistic fingerprinting, which assumes that each human has a unique way of using language to express themselves, i.e. the linguistic means that are employed to communicate thoughts, opinions and ideas differ between humans. That these properties can be identified with computational linguistic means has been showcased across different tasks: the computation of a linguistic fingerprint makes it possible to distinguish authors of literary works 16 , to identify speaker profiles in large public debates 17 , 18 , 19 , 20 and to provide data for forensic voice comparison in broadcast debates 21 , 22 . For educational purposes, linguistic features are used to measure essay readability 23 , essay cohesion 24 and language performance scores for essay grading 25 . Integrating linguistic fingerprints also yields performance advantages for classification tasks, for instance in predicting user opinion 26 , 27 and identifying individual users 28 .

Limitations of OpenAI’s ChatGPT evaluations

OpenAI published a discussion of the models’ performance on several tasks, including Advanced Placement (AP) classes within the US educational system 6 . The subjects used in the performance evaluation are diverse and include arts, history, English literature, calculus, statistics, physics, chemistry, economics, and US politics. While the models achieved good or very good marks in most subjects, they did not perform well in English literature. GPT-3.5 also experienced problems with chemistry, macroeconomics, physics, and statistics. While the overall results are impressive, there are several significant issues: firstly, the conflict of interest of the model’s owners poses a problem for the interpretation of the performance. Secondly, there are issues with the soundness of the assessment beyond the conflict of interest, which make it hard to assess the generalizability of the results with respect to the models’ capability to write essays. Notably, the AP exams combine multiple-choice questions with free-text answers, and only the aggregated scores are publicly available. To the best of our knowledge, neither the generated free-text answers, their overall assessment, nor their assessment given specific criteria from the judgment rubric used are published. Thirdly, while the paper states that 1–2 qualified third-party contractors participated in the rating of the free-text answers, it is unclear how often multiple ratings were generated for the same answer and what the agreement between them was. This lack of information hinders a scientifically sound judgement regarding the capabilities of these models in general, but also specifically for essays. Lastly, the owners of the model conducted their study in a few-shot prompt setting, giving the models a very structured template as well as an example of a human-written high-quality essay to guide the generation of the answers. This further fine-tuning of what the models generate could also have influenced the output.
The results published by the owners go beyond the AP courses, which are directly comparable to our work, and also consider other student assessments like the Graduate Record Examinations (GREs). However, these evaluations suffer from the same problems with scientific rigor as the AP classes.

Scientific assessment of ChatGPT

Researchers across the globe are currently assessing the individual capabilities of these models with greater scientific rigor. We note that due to the recency and speed of these developments, the hereafter discussed literature has mostly only been published as pre-prints and has not yet been peer-reviewed. In addition to the above issues concretely related to the assessment of the capabilities to generate student essays, it is also worth noting that there are likely large problems with the trustworthiness of evaluations, because of data contamination, i.e. because the benchmark tasks are part of the training of the model, which enables memorization. For example, Aiyappa et al. 29 find evidence that this is likely the case for benchmark results regarding NLP tasks. This complicates the effort by researchers to assess the capabilities of the models beyond memorization.

Nevertheless, the first assessment results are already available – though mostly focused on ChatGPT-3 and not yet ChatGPT-4. Closest to our work is a study by Yeadon et al. 30 , who also investigate ChatGPT-3 performance when writing essays. They grade essays generated by ChatGPT-3 for five physics questions based on criteria that cover academic content, appreciation of the underlying physics, grasp of subject material, addressing the topic, and writing style. For each question, ten essays were generated and rated independently by five researchers. While the sample size precludes a statistical assessment, the results demonstrate that the AI model is capable of writing high-quality physics essays, but that the quality varies in a manner similar to human-written essays.

Guo et al. 14 create a set of free-text question answering tasks based on data they collected from the internet, e.g. question answering from Reddit. The authors then sample thirty triplets of a question, a human answer, and a ChatGPT-3 generated answer and ask human raters to assess if they can detect which was written by a human, and which was written by an AI. While this approach does not directly assess the quality of the output, it serves as a Turing test 31 designed to evaluate whether humans can distinguish between human- and AI-produced output. The results indicate that humans are in fact able to distinguish between the outputs when presented with a pair of answers. Humans familiar with ChatGPT are also able to identify over 80% of AI-generated answers without seeing a human answer in comparison. However, humans who are not yet familiar with ChatGPT-3 identify AI-written answers only about 50% of the time, i.e. at chance level. Moreover, the authors also find that the AI-generated outputs are deemed to be more helpful than the human answers in slightly more than half of the cases. This suggests that the strong results from OpenAI’s own benchmarks regarding the capabilities to generate free-text answers generalize beyond the benchmarks.

There are, however, some indicators that the benchmarks may be overly optimistic in their assessment of the model’s capabilities. For example, Kortemeyer 32 conducts a case study to assess how well ChatGPT-3 would perform in a physics class, simulating the tasks that students need to complete as part of the course: answer multiple-choice questions, do homework assignments, ask questions during a lesson, complete programming exercises, and write exams with free-text questions. Notably, ChatGPT-3 was allowed to interact with the instructor for many of the tasks, allowing for multiple attempts as well as feedback on preliminary solutions. The experiment shows that ChatGPT-3’s performance is in many aspects similar to that of the beginning learners and that the model makes similar mistakes, such as omitting units or simply plugging in results from equations. Overall, the AI would have passed the course with a low score of 1.5 out of 4.0. Similarly, Kung et al. 33 study the performance of ChatGPT-3 in the United States Medical Licensing Exam (USMLE) and find that the model performs at or near the passing threshold. Their assessment is a bit more optimistic than Kortemeyer’s as they state that this level of performance, comprehensible reasoning and valid clinical insights suggest that models such as ChatGPT may potentially assist human learning in clinical decision making.

Frieder et al. 34 evaluate the capabilities of ChatGPT-3 in solving graduate-level mathematical tasks. They find that while ChatGPT-3 seems to have some mathematical understanding, its level is well below that of an average student and in most cases is not sufficient to pass exams. Yuan et al. 35 consider the arithmetic abilities of language models, including ChatGPT-3 and ChatGPT-4. They find that they exhibit the best performance among other currently available language models (incl. Llama 36 , FLAN-T5 37 , and Bloom 38 ). However, the accuracy of basic arithmetic tasks is still only at 83% when considering correctness to the degree of \(10^{-3}\) , i.e. such models are still not capable of functioning reliably as calculators. In a slightly satirical, yet insightful take, Spencer et al. 39 assess what a scientific paper on gamma-ray astrophysics would look like if it were written largely with the assistance of ChatGPT-3. They find that while the language capabilities are good and the model is capable of generating equations, the arguments are often flawed and the references to scientific literature are full of hallucinations.

The general reasoning skills of the models may also not be at the level expected from the benchmarks. For example, Cherian et al. 40 evaluate how well ChatGPT-3 performs on eleven puzzles that second graders should be able to solve and find that ChatGPT is only able to solve them on average in 36.4% of attempts, whereas the second graders achieve a mean of 60.4%. However, their sample size is very small and the problem was posed as a multiple-choice question answering problem, which cannot be directly compared to the NLG we consider.

Research gap

Within this article, we address an important part of the current research gap regarding the capabilities of ChatGPT (and similar technologies), guided by the following research questions:

RQ1: How good is ChatGPT based on GPT-3 and GPT-4 at writing argumentative student essays?

RQ2: How do AI-generated essays compare to essays written by students?

RQ3: What are linguistic devices that are characteristic of student versus AI-generated content?

We study these aspects with the help of a large group of teaching professionals who systematically assess a large corpus of student essays. To the best of our knowledge, this is the first large-scale, independent scientific assessment of ChatGPT (or similar models) of this kind. Answering these questions is crucial to understanding the impact of ChatGPT on the future of education.

Materials and methods

The essay topics originate from a corpus of argumentative essays in the field of argument mining 41 . Argumentative essays require students to think critically about a topic and use evidence to establish a position on the topic in a concise manner. The corpus features essays for 90 topics from Essay Forum 42 , an active community for providing writing feedback on different kinds of text that is frequented by high-school students seeking feedback from native speakers on their essay-writing capabilities. Information about the age of the writers is not available, but the topics indicate that the essays were written in grades 11–13, indicating that the authors were likely at least 16. Topics range from ‘Should students be taught to cooperate or to compete?’ to ‘Will newspapers become a thing of the past?’. In the corpus, each topic features one human-written essay uploaded and discussed in the forum. The students who wrote the essays are not native speakers. The average length of these essays is 19 sentences with 388 tokens (an average of 2,089 characters); they will be termed ‘student essays’ in the remainder of the paper.

For the present study, we use the topics from Stab and Gurevych 41 and prompt ChatGPT with ‘Write an essay with about 200 words on “[ topic ]”’ to receive automatically-generated essays from the ChatGPT-3 and ChatGPT-4 versions from 22 March 2023 (‘ChatGPT-3 essays’, ‘ChatGPT-4 essays’). No additional prompts for getting the responses were used, i.e. the data was created with a basic prompt in a zero-shot scenario. This is in contrast to the benchmarks by OpenAI, who used an engineered prompt in a few-shot scenario to guide the generation of essays. We note that we decided to ask for 200 words because we noticed a tendency of ChatGPT to generate essays that are longer than the desired length. A prompt asking for 300 words typically yielded essays with more than 400 words. Thus, by using the shorter length of 200 words, we prevent a potential advantage for ChatGPT through longer essays and instead err on the side of brevity. Similar to the evaluations of free-text answers by OpenAI, we did not consider multiple configurations of the model due to the effort required to obtain human judgments. For the same reason, our data is restricted to ChatGPT and does not include other models available at that time, e.g. Alpaca. We use the browser versions of the tools because we consider this to be a more realistic scenario than using the API. Table 1 below shows the core statistics of the resulting dataset. Supplemental material S1 shows examples for essays from the data set.
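
As a concrete sketch of the data-collection protocol, the zero-shot prompt described above can be reproduced with a one-line helper. The two topic strings below are examples quoted from the corpus description; everything else simply follows the prompt template given in the text:

```python
def build_prompt(topic: str, words: int = 200) -> str:
    """Zero-shot essay prompt following the template used in the study."""
    return f'Write an essay with about {words} words on "{topic}"'

# Two example topics from the corpus described above.
topics = [
    "Should students be taught to cooperate or to compete?",
    "Will newspapers become a thing of the past?",
]
prompts = [build_prompt(t) for t in topics]
```

Each prompt was then pasted into the ChatGPT web interface in a fresh conversation, with no follow-up messages.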

Annotation study

Study participants

The participants had registered for a two-hour online training entitled ‘ChatGPT – Challenges and Opportunities’ conducted by the authors of this paper as a means to provide teachers with some of the technological background of NLG systems in general and ChatGPT in particular. Only teachers permanently employed at secondary schools were allowed to register for this training. Focusing on these experts alone allows us to receive meaningful results as those participants have a wide range of experience in assessing students’ writing. A total of 139 teachers registered for the training, 129 of them teach at grammar schools, and only 10 teachers hold a position at other secondary schools. About half of the registered teachers (68 teachers) have been in service for many years and have successfully applied for promotion. For data protection reasons, we do not know the subject combinations of the registered teachers. We only know that a variety of subjects are represented, including languages (English, French and German), religion/ethics, and science. Supplemental material S5 provides some general information regarding German teacher qualifications.

The training began with an online lecture followed by a discussion phase. Teachers were given an overview of language models and basic information on how ChatGPT was developed. After about 45 minutes, the teachers received both a written and an oral explanation of the questionnaire at the core of our study (see Supplementary material S3 ) and were informed that they had 30 minutes to finish the study tasks. The explanation included information on how the data was obtained, why we collected the self-assessment, how we chose the criteria for the rating of the essays, the overall goal of our research, and a walk-through of the questionnaire. Participation in the questionnaire was voluntary and did not affect the awarding of a training certificate. We further informed participants that all data was collected anonymously and that we would have no way of identifying who participated in the questionnaire. We orally informed participants that they consent to the use of the provided ratings for our research by participating in the survey.

Once these instructions were provided orally and in writing, the link to the online form was given to the participants. The online form was running on a local server that did not log any information that could identify the participants (e.g. IP address) to ensure anonymity. As per instructions, consent for participation was given by using the online form. Due to the full anonymity, we could by definition not document who exactly provided the consent. This was implemented as further insurance that non-participation could not possibly affect being awarded the training certificate.

About 20% of the training participants did not take part in the questionnaire study; the remaining participants consented based on the information provided and participated in the rating of essays. After the questionnaire, we continued with an online lecture on the opportunities of using ChatGPT for teaching as well as AI beyond chatbots. The study protocol was reviewed and approved by the Research Ethics Committee of the University of Passau. We further confirm that our study protocol is in accordance with all relevant guidelines.

Questionnaire

The questionnaire consists of three parts: first, a brief self-assessment regarding the English skills of the participants which is based on the Common European Framework of Reference for Languages (CEFR) 43 . We have six levels ranging from ‘comparable to a native speaker’ to ‘some basic skills’ (see supplementary material S3 ). Then each participant was shown six essays. The participants were only shown the generated text and were not provided with information on whether the text was human-written or AI-generated.

The questionnaire covers the seven categories relevant for essay assessment shown below (for details see supplementary material S3 ):

Topic and completeness

Logic and composition

Expressiveness and comprehensiveness

Language mastery

Complexity

Vocabulary and text linking

Language constructs

These categories are used as guidelines for essay assessment 44 established by the Ministry for Education of Lower Saxony, Germany. For each criterion, a seven-point Likert scale with scores from zero to six is defined, where zero is the worst score (e.g. no relation to the topic) and six is the best score (e.g. addressed the topic to a special degree). The questionnaire included a written description as guidance for the scoring.

After rating each essay, the participants were also asked to self-assess their confidence in the ratings. We used a five-point Likert scale based on the criteria for the self-assessment of peer-review scores from the Association for Computational Linguistics (ACL). Once a participant finished rating the six essays, they were shown a summary of their ratings, as well as the individual ratings for each of their essays and the information on how the essay was generated.

Computational linguistic analysis

In order to further explore and compare the quality of the essays written by students and ChatGPT, we consider the six following linguistic characteristics: lexical diversity, sentence complexity, nominalization, presence of modals, epistemic and discourse markers. Those are motivated by previous work: Weiss et al. 25 observe the correlation between measures of lexical, syntactic and discourse complexities to the essay gradings of German high-school examinations while McNamara et al. 45 explore cohesion (indicated, among other things, by connectives), syntactic complexity and lexical diversity in relation to the essay scoring.

Lexical diversity

We identify vocabulary richness by using a well-established measure of textual lexical diversity (MTLD) 46 which is often used in the field of automated essay grading 25 , 45 , 47 . It takes into account the number of unique words but unlike the best-known measure of lexical diversity, the type-token ratio (TTR), it is not as sensitive to the difference in the length of the texts. In fact, Koizumi and In’nami 48 find it to be least affected by the differences in the length of the texts compared to some other measures of lexical diversity. This is relevant to us due to the difference in average length between the human-written and ChatGPT-generated essays.
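
To make the factor-counting idea behind MTLD concrete, here is a simplified one-directional sketch (the full measure of McCarthy and Jarvis averages a forward and a backward pass, and our cut-off condition is an approximation of theirs). The 0.72 threshold is the standard value from the MTLD literature:

```python
def mtld_one_direction(tokens, threshold=0.72):
    """Simplified one-directional MTLD.

    Counts 'factors': stretches of text over which the running
    type-token ratio (TTR) stays above the threshold. MTLD is the
    total token count divided by the number of factors, so more
    diverse texts yield higher values."""
    factors = 0.0
    types = set()
    count = 0
    for tok in tokens:
        count += 1
        types.add(tok.lower())
        if len(types) / count <= threshold:
            factors += 1.0       # TTR dropped: close this factor
            types.clear()
            count = 0
    if count > 0:                # partial credit for the remainder
        ttr = len(types) / count
        factors += (1.0 - ttr) / (1.0 - threshold)
    return len(tokens) / factors if factors else float(len(tokens))
```

A maximally repetitive text closes a factor every couple of tokens and scores low, while a text with no repeated words never closes a factor and scores at its own length.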

Syntactic complexity

We use two measures in order to evaluate the syntactic complexity of the essays. One is based on the maximum depth of the sentence dependency tree which is produced using the spaCy 3.4.2 dependency parser 49 (‘Syntactic complexity (depth)’). For the second measure, we adopt an approach similar in nature to the one by Weiss et al. 25 who use clause structure to evaluate syntactic complexity. In our case, we count the number of conjuncts, clausal modifiers of nouns, adverbial clause modifiers, clausal complements, clausal subjects, and parataxes (‘Syntactic complexity (clauses)’). The supplementary material in S2 shows the difference between sentence complexity based on two examples from the data.
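
The study obtains the dependency trees from the spaCy 3.4.2 parser; the depth measure itself can be illustrated in pure Python on a head-index array, the representation a dependency parser exposes (each token stores the index of its head, and the root points to itself):

```python
def max_dependency_depth(heads):
    """Maximum depth of a dependency tree encoded as a head-index array.

    heads[i] is the index of token i's syntactic head; the root token
    points to itself (heads[r] == r)."""
    def depth(i, seen=frozenset()):
        if heads[i] == i or i in seen:  # root (with a defensive cycle guard)
            return 0
        return 1 + depth(heads[i], seen | {i})
    return max(depth(i) for i in range(len(heads)))

# "The cat sat": The -> cat -> sat (root), giving a maximum depth of 2.
```

Deeper trees indicate more embedded, syntactically complex sentences.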

Nominalization is a common feature of a more scientific style of writing 50 and is used as an additional measure for syntactic complexity. In order to explore this feature, we count occurrences of nouns with suffixes such as ‘-ion’, ‘-ment’, ‘-ance’ and a few others which are known to transform verbs into nouns.
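The suffix count can be sketched as follows. The text names ‘-ion’, ‘-ment’ and ‘-ance’ explicitly; ‘-ence’ below is an assumed stand-in for the unspecified “few others”, and the input is assumed to be a pre-filtered list of nouns (e.g. spaCy tokens with pos_ == 'NOUN'):

```python
import re

# '-ion', '-ment' and '-ance' are named in the text; '-ence' is an
# assumed addition standing in for the unspecified "few others".
NOMINAL_SUFFIXES = ("ion", "ment", "ance", "ence")
_PATTERN = re.compile(r"\w+(?:" + "|".join(NOMINAL_SUFFIXES) + r")s?$")

def count_nominalizations(nouns):
    """Count nouns whose surface form ends in a typical deverbal suffix."""
    return sum(1 for noun in nouns if _PATTERN.match(noun.lower()))
```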

Semantic properties

Both modals and epistemic markers signal the commitment of the writer to their statement. We identify modals using the POS-tagging module provided by spaCy as well as a list of epistemic expressions of modality, such as ‘definitely’ and ‘potentially’, also used in other approaches to identifying semantic properties 51 . For epistemic markers we adopt an empirically-driven approach and utilize the epistemic markers identified in a corpus of dialogical argumentation by Hautli-Janisz et al. 52 . We consider expressions such as ‘I think’, ‘it is believed’ and ‘in my opinion’ to be epistemic.
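
A minimal sketch of both counts follows. The modal list is the closed class of English modal verbs (in the study these are found via spaCy's POS tagger rather than a word list), and the epistemic inventory here is only a small stand-in for the corpus-derived list of Hautli-Janisz et al.:

```python
# Closed class of English modal verbs.
MODALS = {"can", "could", "may", "might", "must",
          "shall", "should", "will", "would"}

# Stand-in for the empirically derived epistemic inventory.
EPISTEMIC_MARKERS = ("i think", "in my opinion", "it is believed",
                     "definitely", "potentially")

def count_modals(tokens):
    """Count modal verbs in a pre-tokenized text (POS tagging would
    additionally rule out nominal uses such as 'free will')."""
    return sum(1 for t in tokens if t.lower() in MODALS)

def count_epistemic_markers(text):
    """Count (possibly multi-word) epistemic expressions in raw text."""
    text = text.lower()
    return sum(text.count(marker) for marker in EPISTEMIC_MARKERS)
```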

Discourse properties

Discourse markers can be used to measure the coherence quality of a text. This has been explored by Somasundaran et al. 53 who use discourse markers to evaluate the story-telling aspect of student writing while Nadeem et al. 54 incorporated them in their deep learning-based approach to automated essay scoring. In the present paper, we employ the PDTB list of discourse markers 55 which we adjust to exclude words that are often used for purposes other than indicating discourse relations, such as ‘like’, ‘for’, ‘in’ etc.

Statistical methods

We use a within-subjects design for our study. Each participant was shown six randomly selected essays. Results were submitted to the survey system after each essay was completed, in case participants ran out of time and did not finish scoring all six essays. Cronbach’s \(\alpha\) 56 allows us to determine the inter-rater reliability for the rating criterion and data source (human, ChatGPT-3, ChatGPT-4) in order to understand the reliability of our data not only overall, but also for each data source and rating criterion. We use two-sided Wilcoxon-rank-sum tests 57 to confirm the significance of the differences between the data sources for each criterion. We use the same tests to determine the significance of the linguistic characteristics. This results in three comparisons (human vs. ChatGPT-3, human vs. ChatGPT-4, ChatGPT-3 vs. ChatGPT-4) for each of the seven rating criteria and each of the seven linguistic characteristics, i.e. 42 tests. We use the Holm-Bonferroni method 58 for the correction for multiple tests to achieve a family-wise error rate of 0.05. We report the effect size using Cohen’s d 59 . While our data is not perfectly normal, it also does not have severe outliers, so we prefer the clear interpretation of Cohen’s d over the slightly more appropriate, but less accessible non-parametric effect size measures. We report point plots with estimates of the mean scores for each data source and criterion, incl. the 95% confidence interval of these mean values. The confidence intervals are estimated in a non-parametric manner based on bootstrap sampling. We further visualize the distribution for each criterion using violin plots to provide a visual indicator of the spread of the data (see Supplementary material S4 ).
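
Two supporting computations in this pipeline are easy to get wrong, so pure-Python sketches of both are given here (the Wilcoxon tests themselves come from scipy):

```python
import statistics

def holm_bonferroni(p_values, alpha=0.05):
    """Holm-Bonferroni step-down correction.

    Sort p-values ascending, compare the k-th smallest (0-based)
    against alpha / (m - k), and stop rejecting at the first failure.
    Returns one reject flag per test, in the original order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if p_values[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break  # every larger p-value is retained as well
    return reject

def cohens_d(a, b):
    """Cohen's d using the pooled sample standard deviation."""
    na, nb = len(a), len(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    pooled = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled
```

With 42 tests, the smallest p-value is compared against 0.05/42, the next against 0.05/41, and so on, which keeps the family-wise error rate at 0.05 while being uniformly more powerful than plain Bonferroni.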

Further, we use the self-assessments of English skills and of confidence in the essay ratings as confounding variables. Through this, we determine whether ratings are affected by language skills or confidence rather than by the actual quality of the essays. We control for the impact of these factors by measuring Pearson’s correlation coefficient r 60 between the self-assessments and the ratings. We also determine whether the linguistic features are correlated with the ratings as expected. The sentence complexity (both tree depth and dependency clauses), as well as the nominalization, are indicators of the complexity of the language. Similarly, the use of discourse markers should signal a proper logical structure. Finally, a large lexical diversity should be correlated with the ratings for the vocabulary. As above, we measure Pearson’s r and use a two-sided test for significance based on a \(\beta\)-distribution that models the expected correlations, as implemented by scipy 61. As above, we use the Holm-Bonferroni method to account for multiple tests. However, we note that it is likely that all correlations, even tiny ones, are significant given our amount of data. Consequently, our interpretation of these results focuses on the strength of the correlations.
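A sketch of this correlation analysis with scipy, whose `pearsonr` implements the two-sided test based on the \(\beta\)-distribution mentioned above; the feature and rating vectors are synthetic illustrations, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic stand-ins: a per-essay linguistic feature and the ratings
feature = rng.poisson(8, 500).astype(float)   # e.g. nominalization counts
ratings = 0.2 * feature + rng.normal(0.0, 2.0, 500)

# pearsonr returns r and a two-sided p-value from a beta distribution
r, p = stats.pearsonr(feature, ratings)
```

With 500 paired observations, even a weak correlation like this one comes out significant, which is why the interpretation should rest on the strength of r rather than on significance alone.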

Our statistical analysis of the data is implemented in Python. We use pandas 1.5.3 and numpy 1.24.2 for the processing of data, pingouin 0.5.3 for the calculation of Cronbach’s \(\alpha\), scipy 1.10.1 for the Wilcoxon rank-sum tests and Pearson’s r, and seaborn 0.12.2 for the generation of plots, including the calculation of error bars that visualize the confidence intervals.
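The paper computes Cronbach's \(\alpha\) with pingouin; as an illustration, the standard formula can also be written directly in numpy (a sketch, assuming a subjects-by-items score matrix).

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (subjects x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Three perfectly consistent "raters": alpha is exactly 1
perfect = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]], dtype=float)
cronbach_alpha(perfect)  # -> 1.0
```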

Out of the 111 teachers who completed the questionnaire, 108 rated all six essays, one rated five essays, one rated two essays, and one rated only one essay. This results in 656 ratings for 270 essays (90 topics for each essay type: human-, ChatGPT-3-, and ChatGPT-4-generated), with three ratings for 121 essays, two ratings for 144 essays, and one rating for five essays. The inter-rater agreement is consistently excellent ( \(\alpha >0.9\) ), with the exception of language mastery, where we have good agreement ( \(\alpha =0.89\) , see Table  2 ). Further, the correlation analysis depicted in supplementary material S4 shows weak positive correlations ( \(r \in [0.11, 0.28]\) ) between the self-assessed English skills and the ratings, and between the self-assessed confidence in the ratings and the ratings. Overall, this indicates that our ratings are reliable estimates of the actual quality of the essays, with a potential small tendency that higher confidence and better language skills yield better ratings, independent of the data source.

Table  2 and supplementary material S4 characterize the distribution of the ratings for the essays, grouped by the data source. We observe that for all criteria, we have a clear order of the mean values, with students having the worst ratings, ChatGPT-3 in the middle rank, and ChatGPT-4 with the best performance. We further observe that the standard deviations are fairly consistent and slightly larger than one, i.e. the spread is similar for all ratings and essays. This is further supported by the visual analysis of the violin plots.
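The non-parametric confidence intervals for these mean estimates (see the methods) follow the percentile bootstrap; a minimal sketch with illustrative ratings:

```python
import numpy as np

def bootstrap_ci(data, n_boot=10_000, ci=95, seed=0):
    """Percentile bootstrap confidence interval for the mean."""
    rng = np.random.default_rng(seed)
    means = np.array([rng.choice(data, size=len(data), replace=True).mean()
                      for _ in range(n_boot)])
    alpha = (100 - ci) / 2
    return tuple(np.percentile(means, [alpha, 100 - alpha]))

# Illustrative ratings, not the study data
ratings = np.array([4, 5, 5, 6, 4, 5, 6, 5, 4, 5], dtype=float)
lo, hi = bootstrap_ci(ratings)
```

Seaborn's point plots compute equivalent bootstrapped error bars internally, which is how the plots in the paper were generated.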

The statistical analysis of the ratings reported in Table  4 shows that differences between the human-written essays and the ones generated by both ChatGPT models are significant. The effect sizes for human versus ChatGPT-3 essays are between 0.52 and 1.15, i.e. a medium ( \(d \in [0.5,0.8)\) ) to large ( \(d \in [0.8, 1.2)\) ) effect. On the one hand, the smallest effects are observed for the expressiveness and complexity, i.e. when it comes to the overall comprehensiveness and complexity of the sentence structures, the differences between the humans and the ChatGPT-3 model are smallest. On the other hand, the difference in language mastery is larger than all other differences, which indicates that humans are more prone to making mistakes when writing than the NLG models. The magnitude of differences between humans and ChatGPT-4 is larger with effect sizes between 0.88 and 1.43, i.e., a large to very large ( \(d \in [1.2, 2)\) ) effect. Same as for ChatGPT-3, the differences are smallest for expressiveness and complexity and largest for language mastery. Please note that the difference in language mastery between humans and both GPT models does not mean that the humans have low scores for language mastery (M=3.90), but rather that the NLG models have exceptionally high scores (M=5.03 for ChatGPT-3, M=5.25 for ChatGPT-4).
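For reference, Cohen’s d as reported here is the standardized mean difference; assuming the common pooled-standard-deviation variant, \(d = (M_1 - M_2)/s_p\) with \(s_p = \sqrt{((n_1 - 1) s_1^2 + (n_2 - 1) s_2^2)/(n_1 + n_2 - 2)}\). The reported effects thus correspond to mean differences of roughly half to one and a half pooled standard deviations.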

When we consider the differences between the two GPT models, we observe that while ChatGPT-4 has consistently higher mean values for all criteria, only the differences for logic and composition, vocabulary and text linking, and complexity are significant. The effect sizes are between 0.45 and 0.5, i.e. small ( \(d \in [0.2, 0.5)\) ) and medium. Thus, while GPT-4 seems to be an improvement over GPT-3.5 in general, the only clear indicator of this is a better and clearer logical composition and more complex writing with a more diverse vocabulary.

We also observe significant differences in the distribution of linguistic characteristics between all three groups (see Table  3 ). Sentence complexity (depth) is the only category without a significant difference between humans and ChatGPT-3, as well as between ChatGPT-3 and ChatGPT-4. There is also no significant difference in the category of discourse markers between humans and ChatGPT-3. The magnitude of the effects varies considerably and lies between 0.39 and 1.93, i.e., between small ( \(d \in [0.2, 0.5)\) ) and very large. However, in comparison to the ratings, there is no clear tendency regarding the direction of the differences. For instance, while the ChatGPT models write more complex sentences and use more nominalizations, humans tend to use more modals and epistemic markers instead. The lexical diversity of humans is higher than that of ChatGPT-3 but lower than that of ChatGPT-4. While there is no difference in the use of discourse markers between humans and ChatGPT-3, ChatGPT-4 uses significantly fewer discourse markers.

We detect the expected positive correlations between the complexity ratings and the linguistic markers for sentence complexity ( \(r=0.16\) for depth, \(r=0.19\) for clauses) and nominalizations ( \(r=0.22\) ). However, we observe a negative correlation between the logic ratings and the discourse markers ( \(r=-0.14\) ), which counters our intuition that more frequent use of discourse indicators makes a text more logically coherent. However, this is in line with previous work: McNamara et al. 45 also find no indication that the use of cohesion indices such as discourse connectives correlates with high- and low-proficiency essays. Finally, we observe the expected positive correlation between the ratings for the vocabulary and the lexical diversity ( \(r=0.12\) ). All observed correlations are significant. However, we note that the strength of all these correlations is weak and that the significance itself should not be over-interpreted due to the large sample size.

Our results provide clear answers to the first two research questions that consider the quality of the generated essays: ChatGPT performs well at writing argumentative student essays and outperforms the quality of the human-written essays significantly. The ChatGPT-4 model has (at least) a large effect and is on average about one point better than humans on a seven-point Likert scale.

Regarding the third research question, we find that there are significant linguistic differences between human- and AI-generated content. The AI-generated essays are highly structured, which is for instance reflected by the identical beginnings of the concluding sections of all ChatGPT essays (‘In conclusion, [...]’). The initial sentences of each essay are also very similar, starting with a general statement using the main concepts of the essay topics. Although this corresponds to the general structure that is sought after for argumentative essays, it is striking that the ChatGPT models are so rigid in realizing it, whereas the human-written essays are looser in representing the guideline on the linguistic surface. Moreover, the linguistic fingerprint has the counter-intuitive property that the use of discourse markers is negatively correlated with logical coherence. We believe that this might be due to the rigid structure of the generated essays: instead of using discourse markers, the AI models provide a clear logical structure by separating the different arguments into paragraphs, thereby reducing the need for discourse markers.

Our data also shows that hallucinations are not a problem in the setting of argumentative essay writing: the essay topics are not really about factual correctness, but rather about argumentation and critical reflection on general concepts which seem to be contained within the knowledge of the AI model. The stochastic nature of the language generation is well-suited for this kind of task, as different plausible arguments can be seen as a sampling from all available arguments for a topic. Nevertheless, we need to perform a more systematic study of the argumentative structures in order to better understand the difference in argumentation between human-written and ChatGPT-generated essay content. Moreover, we also cannot rule out that subtle hallucinations may have been overlooked during the ratings. There are also essays with a low rating for the criteria related to factual correctness, indicating that there might be cases where the AI models still have problems, even if they are, on average, better than the students.

One of the issues with evaluations of the recent large-language models is not accounting for the impact of tainted data when benchmarking such models. While it is certainly possible that the essays that were sourced by Stab and Gurevych 41 from the internet were part of the training data of the GPT models, the proprietary nature of the model training means that we cannot confirm this. However, we note that the generated essays did not resemble the corpus of human essays at all. Moreover, the topics of the essays are general in the sense that any human should be able to reason and write about these topics, just by understanding concepts like ‘cooperation’. Consequently, a taint on these general topics, i.e. the fact that they might be present in the data, is not only possible but is actually expected and unproblematic, as it relates to the capability of the models to learn about concepts, rather than the memorization of specific task solutions.

While we did everything we could to ensure a sound construct and high validity for our study, certain issues may still affect our conclusions. Most importantly, neither the writers of the essays nor their raters were native English speakers. However, the students purposefully used a forum for English writing frequented by native speakers to ensure the language and content quality of their essays. This indicates that the resulting essays are likely above average for non-native speakers, as they went through at least one round of revisions with the help of native speakers. The teachers were informed that part of the training would be in English to prevent registrations from people without English language skills. Moreover, the self-assessment of language skills was only weakly correlated with the ratings, indicating that the threat to the soundness of our results is low. While we cannot definitively rule out that our results would fail to reproduce with other human raters, the high inter-rater agreement indicates that this is unlikely.

However, our reliance on essays written by non-native speakers affects the external validity and the generalizability of our results. It is certainly possible that native-speaking students would perform better in the criteria related to language skills, though it is unclear by how much. However, the language skills were particular strengths of the AI models, meaning that while the gap might be smaller, it is still reasonable to conclude that the AI models would have at least comparable, and possibly still better, performance than humans. While we cannot rule out a difference for the content-related criteria, we also see no strong argument why native speakers should have better arguments than non-native speakers. Thus, while our results might not fully translate to native speakers, we see no reason why aspects regarding the content should not be similar. Further, our results were obtained based on high-school-level essays. Native and non-native speakers with higher-education degrees, or experts in their fields, would likely achieve better performance, such that the difference in performance between the AI models and humans would likely also be smaller in such a setting.

We further note that the essay topics may not be an unbiased sample. While Stab and Gurevych 41 randomly sampled the essays from the writing feedback section of an essay forum, it is unclear whether the essays posted there are representative of the general population of essay topics. Nevertheless, we believe that the threat is fairly low because our results are consistent and do not seem to be influenced by certain topics. Further, we cannot conclude with certainty how our results generalize beyond ChatGPT-3 and ChatGPT-4 to similar models like Bard ( https://bard.google.com/?hl=en ), Alpaca, and Dolly. Especially the results for linguistic characteristics are hard to predict. However, to the best of our knowledge and given the proprietary nature of some of these models, the general approach to how these models work is similar, so the trends for essay quality should hold for models with comparable size and training procedures.

Finally, we want to note that the current speed of progress with generative AI is extremely fast and we are studying moving targets: ChatGPT 3.5 and 4 today are already not the same as the models we studied. Due to a lack of transparency regarding the specific incremental changes, we cannot know or predict how this might affect our results.

Our results provide a strong indication that the fear many teaching professionals have is warranted: the way students do homework and teachers assess it needs to change in a world of generative AI models. Our results show that non-native-speaking students who want to maximize their essay grades could easily do so by relying on results from AI models like ChatGPT. The very strong performance of the AI models indicates that this might also be the case for native speakers, though the difference in language skills is probably smaller. However, this is not and cannot be the goal of education. Consequently, educators need to change how they approach homework. Instead of just assigning and grading essays, we need to reflect more on the output of AI tools regarding their reasoning and correctness. AI models need to be seen as an integral part of education, but one which requires careful reflection and training of critical thinking skills.

Furthermore, teachers need to adapt strategies for teaching writing skills: as with the use of calculators, it is necessary to critically reflect with the students on when and how to use those tools. For instance, constructivists 62 argue that learning is enhanced by the active design and creation of unique artifacts by students themselves. In the present case this means that, in the long term, educational objectives may need to be adjusted. This is analogous to teaching good arithmetic skills to younger students and then allowing and encouraging students to use calculators freely in later stages of education. Similarly, once a sound level of literacy has been achieved, strongly integrating AI models in lesson plans may no longer run counter to reasonable learning goals.

In terms of shedding light on the quality and structure of AI-generated essays, this paper makes an important contribution by offering an independent, large-scale and statistically sound account of essay quality, comparing human-written and AI-generated texts. By comparing different versions of ChatGPT, we also offer a glance into the development of these models over time in terms of their linguistic properties and the quality they exhibit. Our results show that while the language generated by ChatGPT is considered very good by humans, there are also notable structural differences, e.g. in the use of discourse markers. This demonstrates that in-depth consideration is required not only of the capabilities of generative AI models (i.e. which tasks they can be used for), but also of the language they generate. For example, if we read many AI-generated texts that use fewer discourse markers, it raises the question if and how this would affect our human use of discourse markers. Understanding how AI-generated texts differ from human-written ones enables us to look for these differences, to reason about their potential impact, and to study and possibly mitigate this impact.

Data availability

The datasets generated during and/or analysed during the current study are available in the Zenodo repository, https://doi.org/10.5281/zenodo.8343644

Code availability

All materials are available online in form of a replication package that contains the data and the analysis code, https://doi.org/10.5281/zenodo.8343644 .

Ouyang, L. et al. Training language models to follow instructions with human feedback (2022). arXiv:2203.02155 .

Ruby, D. 30+ detailed chatgpt statistics–users & facts (sep 2023). https://www.demandsage.com/chatgpt-statistics/ (2023). Accessed 09 June 2023.

Leahy, S. & Mishra, P. TPACK and the Cambrian explosion of AI. In Society for Information Technology & Teacher Education International Conference , (ed. Langran, E.) 2465–2469 (Association for the Advancement of Computing in Education (AACE), 2023).

Ortiz, S. Need an ai essay writer? here’s how chatgpt (and other chatbots) can help. https://www.zdnet.com/article/how-to-use-chatgpt-to-write-an-essay/ (2023). Accessed 09 June 2023.

Openai chat interface. https://chat.openai.com/ . Accessed 09 June 2023.

OpenAI. Gpt-4 technical report (2023). arXiv:2303.08774 .

Brown, T. B. et al. Language models are few-shot learners (2020). arXiv:2005.14165 .

Wang, B. Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX. https://github.com/kingoflolz/mesh-transformer-jax (2021).

Wei, J. et al. Finetuned language models are zero-shot learners. In International Conference on Learning Representations (2022).

Taori, R. et al. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca (2023).

Cai, Z. G., Haslett, D. A., Duan, X., Wang, S. & Pickering, M. J. Does chatgpt resemble humans in language use? (2023). arXiv:2303.08014 .

Mahowald, K. A discerning several thousand judgments: Gpt-3 rates the article + adjective + numeral + noun construction (2023). arXiv:2301.12564 .

Dentella, V., Murphy, E., Marcus, G. & Leivada, E. Testing ai performance on less frequent aspects of language reveals insensitivity to underlying meaning (2023). arXiv:2302.12313 .

Guo, B. et al. How close is chatgpt to human experts? comparison corpus, evaluation, and detection (2023). arXiv:2301.07597 .

Zhao, W. et al. Is chatgpt equipped with emotional dialogue capabilities? (2023). arXiv:2304.09582 .

Keim, D. A. & Oelke, D. Literature fingerprinting : A new method for visual literary analysis. In 2007 IEEE Symposium on Visual Analytics Science and Technology , 115–122, https://doi.org/10.1109/VAST.2007.4389004 (IEEE, 2007).

El-Assady, M. et al. Interactive visual analysis of transcribed multi-party discourse. In Proceedings of ACL 2017, System Demonstrations , 49–54 (Association for Computational Linguistics, Vancouver, Canada, 2017).

El-Assady, M., Hautli-Janisz, A. & Butt, M. Discourse maps - feature encoding for the analysis of verbatim conversation transcripts. In Visual Analytics for Linguistics , CSLI Lecture Notes, Number 220, 115–147 (Stanford: CSLI Publications, 2020).

Foulis, M., Visser, J. & Reed, C. Dialogical fingerprinting of debaters. In Proceedings of COMMA 2020 , 465–466, https://doi.org/10.3233/FAIA200536 (Amsterdam: IOS Press, 2020).

Foulis, M., Visser, J. & Reed, C. Interactive visualisation of debater identification and characteristics. In Proceedings of the COMMA workshop on Argument Visualisation, COMMA , 1–7 (2020).

Chatzipanagiotidis, S., Giagkou, M. & Meurers, D. Broad linguistic complexity analysis for Greek readability classification. In Proceedings of the 16th Workshop on Innovative Use of NLP for Building Educational Applications , 48–58 (Association for Computational Linguistics, Online, 2021).

Ajili, M., Bonastre, J.-F., Kahn, J., Rossato, S. & Bernard, G. FABIOLE, a speech database for forensic speaker comparison. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16) , 726–733 (European Language Resources Association (ELRA), Portorož, Slovenia, 2016).

Deutsch, T., Jasbi, M. & Shieber, S. Linguistic features for readability assessment. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications , 1–17, https://doi.org/10.18653/v1/2020.bea-1.1 (Association for Computational Linguistics, Seattle, WA, USA \(\rightarrow\) Online, 2020).

Fiacco, J., Jiang, S., Adamson, D. & Rosé, C. Toward automatic discourse parsing of student writing motivated by neural interpretation. In Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022) , 204–215, https://doi.org/10.18653/v1/2022.bea-1.25 (Association for Computational Linguistics, Seattle, Washington, 2022).

Weiss, Z., Riemenschneider, A., Schröter, P. & Meurers, D. Computationally modeling the impact of task-appropriate language complexity and accuracy on human grading of German essays. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications , 30–45, https://doi.org/10.18653/v1/W19-4404 (Association for Computational Linguistics, Florence, Italy, 2019).

Yang, F., Dragut, E. & Mukherjee, A. Predicting personal opinion on future events with fingerprints. In Proceedings of the 28th International Conference on Computational Linguistics , 1802–1807, https://doi.org/10.18653/v1/2020.coling-main.162 (International Committee on Computational Linguistics, Barcelona, Spain (Online), 2020).

Tumarada, K. et al. Opinion prediction with user fingerprinting. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021) , 1423–1431 (INCOMA Ltd., Held Online, 2021).

Rocca, R. & Yarkoni, T. Language as a fingerprint: Self-supervised learning of user encodings using transformers. In Findings of the Association for Computational Linguistics: EMNLP . 1701–1714 (Association for Computational Linguistics, Abu Dhabi, United Arab Emirates, 2022).

Aiyappa, R., An, J., Kwak, H. & Ahn, Y.-Y. Can we trust the evaluation on chatgpt? (2023). arXiv:2303.12767 .

Yeadon, W., Inyang, O.-O., Mizouri, A., Peach, A. & Testrow, C. The death of the short-form physics essay in the coming ai revolution (2022). arXiv:2212.11661 .

Turing, A. M. Computing machinery and intelligence. Mind LIX , 433–460. https://doi.org/10.1093/mind/LIX.236.433 (1950).

Kortemeyer, G. Could an artificial-intelligence agent pass an introductory physics course? (2023). arXiv:2301.12127 .

Kung, T. H. et al. Performance of chatgpt on usmle: Potential for ai-assisted medical education using large language models. PLOS Digital Health 2 , 1–12. https://doi.org/10.1371/journal.pdig.0000198 (2023).


Frieder, S. et al. Mathematical capabilities of chatgpt (2023). arXiv:2301.13867 .

Yuan, Z., Yuan, H., Tan, C., Wang, W. & Huang, S. How well do large language models perform in arithmetic tasks? (2023). arXiv:2304.02015 .

Touvron, H. et al. Llama: Open and efficient foundation language models (2023). arXiv:2302.13971 .

Chung, H. W. et al. Scaling instruction-finetuned language models (2022). arXiv:2210.11416 .

Workshop, B. et al. Bloom: A 176b-parameter open-access multilingual language model (2023). arXiv:2211.05100 .

Spencer, S. T., Joshi, V. & Mitchell, A. M. W. Can ai put gamma-ray astrophysicists out of a job? (2023). arXiv:2303.17853 .

Cherian, A., Peng, K.-C., Lohit, S., Smith, K. & Tenenbaum, J. B. Are deep neural networks smarter than second graders? (2023). arXiv:2212.09993 .

Stab, C. & Gurevych, I. Annotating argument components and relations in persuasive essays. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers , 1501–1510 (Dublin City University and Association for Computational Linguistics, Dublin, Ireland, 2014).

Essay forum. https://essayforum.com/ . Last-accessed: 2023-09-07.

Common european framework of reference for languages (cefr). https://www.coe.int/en/web/common-european-framework-reference-languages . Accessed 09 July 2023.

Kmk guidelines for essay assessment. http://www.kmk-format.de/material/Fremdsprachen/5-3-2_Bewertungsskalen_Schreiben.pdf . Accessed 09 July 2023.

McNamara, D. S., Crossley, S. A. & McCarthy, P. M. Linguistic features of writing quality. Writ. Commun. 27 , 57–86 (2010).

McCarthy, P. M. & Jarvis, S. Mtld, vocd-d, and hd-d: A validation study of sophisticated approaches to lexical diversity assessment. Behav. Res. Methods 42 , 381–392 (2010).


Dasgupta, T., Naskar, A., Dey, L. & Saha, R. Augmenting textual qualitative features in deep convolution recurrent neural network for automatic essay scoring. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications , 93–102 (2018).

Koizumi, R. & In’nami, Y. Effects of text length on lexical diversity measures: Using short texts with less than 200 tokens. System 40 , 554–564 (2012).

spacy industrial-strength natural language processing in python. https://spacy.io/ .

Siskou, W., Friedrich, L., Eckhard, S., Espinoza, I. & Hautli-Janisz, A. Measuring plain language in public service encounters. In Proceedings of the 2nd Workshop on Computational Linguistics for Political Text Analysis (CPSS-2022) (Potsdam, Germany, 2022).

El-Assady, M. & Hautli-Janisz, A. Discourse Maps - Feature Encoding for the Analysis of Verbatim Conversation Transcripts. CSLI Lecture Notes (CSLI Publications, Center for the Study of Language and Information, 2019).

Hautli-Janisz, A. et al. QT30: A corpus of argument and conflict in broadcast debate. In Proceedings of the Thirteenth Language Resources and Evaluation Conference , 3291–3300 (European Language Resources Association, Marseille, France, 2022).

Somasundaran, S. et al. Towards evaluating narrative quality in student writing. Trans. Assoc. Comput. Linguist. 6 , 91–106 (2018).

Nadeem, F., Nguyen, H., Liu, Y. & Ostendorf, M. Automated essay scoring with discourse-aware neural models. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications , 484–493, https://doi.org/10.18653/v1/W19-4450 (Association for Computational Linguistics, Florence, Italy, 2019).

Prasad, R. et al. The Penn Discourse TreeBank 2.0. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08) (European Language Resources Association (ELRA), Marrakech, Morocco, 2008).

Cronbach, L. J. Coefficient alpha and the internal structure of tests. Psychometrika 16 , 297–334. https://doi.org/10.1007/bf02310555 (1951).


Wilcoxon, F. Individual comparisons by ranking methods. Biom. Bull. 1 , 80–83 (1945).

Holm, S. A simple sequentially rejective multiple test procedure. Scand. J. Stat. 6 , 65–70 (1979).


Cohen, J. Statistical power analysis for the behavioral sciences (Academic press, 2013).

Freedman, D., Pisani, R. & Purves, R. Statistics (international student edition). Pisani, R. Purves, 4th edn. WW Norton & Company, New York (2007).

Scipy documentation. https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html . Accessed 09 June 2023.

Windschitl, M. Framing constructivism in practice as the negotiation of dilemmas: An analysis of the conceptual, pedagogical, cultural, and political challenges facing teachers. Rev. Educ. Res. 72 , 131–175 (2002).


Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and affiliations

Faculty of Computer Science and Mathematics, University of Passau, Passau, Germany

Steffen Herbold, Annette Hautli-Janisz, Ute Heuer, Zlata Kikteva & Alexander Trautsch


Contributions

S.H., A.HJ., and U.H. conceived the experiment; S.H., A.HJ, and Z.K. collected the essays from ChatGPT; U.H. recruited the study participants; S.H., A.HJ., U.H. and A.T. conducted the training session and questionnaire; all authors contributed to the analysis of the results, the writing of the manuscript, and review of the manuscript.

Corresponding author

Correspondence to Steffen Herbold .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

  • Supplementary Information 1
  • Supplementary Information 2
  • Supplementary Information 3
  • Supplementary Tables
  • Supplementary Figures

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Herbold, S., Hautli-Janisz, A., Heuer, U. et al. A large-scale comparison of human-written versus ChatGPT-generated essays. Sci Rep 13 , 18617 (2023). https://doi.org/10.1038/s41598-023-45644-9


Received : 01 June 2023

Accepted : 22 October 2023

Published : 30 October 2023

DOI : https://doi.org/10.1038/s41598-023-45644-9



By submitting a comment you agree to abide by our Terms and Community Guidelines . If you find something abusive or that does not comply with our terms or guidelines please flag it as inappropriate.

Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies

Sign up for the Nature Briefing: AI and Robotics newsletter — what matters in AI and robotics research, free to your inbox weekly.

essay written by chat gpt example

essay written by chat gpt example

  • Gradehacker
  • Meet the Team
  • Essay Writing
  • Degree Accelerator
  • Entire Class Bundle
  • Learning Center
  • Gradehacker TV
  • Fill a Request
  • Book a Call
  • Video Tutorials
  • YouTube Video Tutorials

Write an Essay From Scratch With Chat GPT: Step-by-Step Tutorial

Santiago Mallea


Chief of Content at Gradehacker

  • Updated June 2024

How can I use ChatGPT to write an essay from scratch?

To write an essay with Chat GPT, you need to:

  • Understand your prompt
  • Choose a topic
  • Write the entire prompt in Chat GPT
  • Break down the arguments you got
  • Write one prompt at a time
  • Check the sources
  • Create your first draft
  • Edit your draft



How amazing would it be if there was a robot willing to help you write a college essay from scratch?

A few years ago, that may have sounded like something so futuristic it could only be seen in movies. But actually, we are closer than you might think.

Artificial Intelligence tools are everywhere , and college students have noticed it. Among all, there is one revolutionary AI that learns over time and writes all types of content, from typical conversations to academic texts.

But can Chat GPT write essays from scratch?

We tried it, and the answer is: kind of (for now, at least).

Here at Gradehacker, we have spent years being the #1 resource for non-traditional adult students.

We have lots of experience helping people like you write their essays on time or get their college degree sooner , and we know how important it is to be updated with the latest tools.

AIs and Chat GPT are going to stay for a while , so you better learn how to use them properly. If you ever wondered whether it was possible to write an essay from scratch with Chat GPT, you are about to find out!

Now, in case you aren't familiar with Chat GPT or don't know the basics of how it works, we recommend watching our video first!

How we Used ChatGPT to Write Essays

So, to try our experiment with ChatGPT, we created two different college assignments that any student might encounter:

  • An argumentative essay about America's healthcare system
  • A book review of George Orwell's 1984

Our main goal is to test ChatGPT’s essay-writing skills and see how much students can use it to write their academic assignments.

Now, we are pretty aware that this (or any) artificial intelligence can carry a wide range of problems such as:

  • Giving you incorrect premises and information
  • Delivering a piece of writing that is plagiarized from somewhere else
  • Failing to include citations or list the sources it used
  • Not always being available to use

That’s why after receiving our first rough draft, we’ll edit the parts of the text that are necessary and run what we get through our plagiarism checker. After our revision, we’ll ask the AI to expand on the information or make the changes we need.

We’ll consider that final version after our revision as the best possible work that ChatGPT could have done to write an essay from scratch.

And to cover the lack of citations, we’ll see what academic sources the chatbot considers worthy for us to use when writing our paper.

Now, we don’t think that AIs are ready to deliver fully edited and well-written academic writing assignments that you can simply submit to your professor without reading them first.

But is it possible to speed up the writing process and save time by asking Chat GPT to write essays?

Let’s see!

Can ChatGPT Write an Argumentative Paper?

First, we’ll see how it can handle one of the most common academic essays: an argumentative paper.

We chose the American healthcare system as our topic, but since we know we need a specific subject with a wide range of sources to write a strong and persuasive essay, we are focusing on structural racism in our healthcare system and how African Americans accessed it during COVID.

It’s a clear and specific topic that we included in our list of best topics for your research paper. If you want similar alternatives for college papers, be sure to watch our video !

Instructions and Essay Prompt

Take a position on an issue and compose a 5-page paper that supports it.

In the introduction, establish why your topic is important and present a specific, argumentative thesis statement that previews your argument.

The body of your essay should be logical, coherent, and purposeful. It should synthesize your research and your own informed opinions in order to support your thesis.

Address other positions on the topic along with arguments and evidence that support those positions. 

Write a conclusion that restates your thesis and reminds your reader of your main points.

First Results

After giving ChatGPT this prompt, this is what we received:

The first draft we received

To begin with, after copying and pasting these paragraphs into a Word document, it only covered two and a half pages.

While the introduction directly tackles the main topic, it fails to provide a clear thesis statement. And even if it’s included in a separate section, the thesis is broad and lacks factual evidence or statistics to support it.

Throughout the body of the text, the AI lists many real-life issues that contribute to the topic of the paper. Still, these are never fully explained nor supported with evidence.

For example, in the first paragraph, it says that “African Americans have long experienced poorer health outcomes compared to other racial groups.” Here it would be interesting to add statistics that prove this information is correct.

Something that really stood out to us was that ChatGPT credited a source to back up important data, even though it didn't cite it properly. It talks about a study conducted by the Kaiser Family Foundation that supports that in 2019, 11% of African Americans and 6% of non-Hispanic Whites were uninsured.

We checked the original article and found that the information was almost 100% accurate . The correct rates were 8% for White Americans and 10.9% for African Americans, but the biggest issue was that the study included more updated statistics from 2021.

essay written by chat gpt example

Then, when addressing other issues like transportation and discrimination, each problem is clearly presented, but once again, there are no sources to support it.

Once the essay starts developing the thesis statement on how these issues could be fixed, we can see the same problem.

But even if they lack supporting evidence , the arguments listed are cohesive and make sense . These were:

  • Expanding Medicaid coverage
  • Provide incentives for healthcare providers to practice in underserved areas
  • Invest in telehealth services
  • Improve transportation infrastructure, particularly in rural areas
  • Train healthcare providers on cultural competence and anti-racism
  • Increase diversity in the healthcare workforce
  • Implement patient-centered care models

These are all strong ideas that could be stronger and more persuasive with specific information and statistics.

Still, the main problem is that no counterargument against the essay's main points is ever addressed.

Overall, ChatGPT delivered a cohesive first draft that tackled the topic by explaining its multiple issues and listing possible solutions. However, there is a clear lack of evidence, no counter-arguments were included, and the essay we got was half the length we needed.

Changes and Final Results

In our second attempt, we asked the AI to expand on each section and subtopic of the essay . While the final result ended up repeating some parts on multiple occasions, ChatGPT wrote more extensively and even included in-text citations with their corresponding reference.

By pasting all these new texts (without editing) into a new document, we get more than seven pages, which is a great starting point for writing a better essay.

Explanation of the issues and use of sources

The new introduction stayed pretty much the same, but the difference is that now the thesis statement is stronger and even had a cited statistic to back it up . Unfortunately, while the information is correct, the source isn’t.

Clicking on the link included in the references took us to a non-existing page , and after looking for that data on Google, we found that it actually belonged to a study from the National Library of Medicine.

essay written by chat gpt example

The AI then did a solid job expanding on the issues related to the paper's topic. But again, while some sources were useful, sometimes the information reflected in the text didn't correspond to them.

For example, it cited an article posted on KFF as evidence that transportation is a critical factor in health disparities, but when we go to the site, we find no mention of that issue.

Similarly, when addressing the higher rates of infection and death compared to White Americans, the AI once again cited the wrong source. The statistics came from a study conducted by the CDC , but from a different article than the one that is credited.

And sometimes, the information displayed was incorrect.

In that same section, when listing the percentages of death in specific states, we see in the cited source that the statistics don’t match.

However, what’s interesting is that if we search for that data on Google, we find a different study that backs it up. So, even if Chat GPT didn’t include inaccurate information in the text, it failed to properly acknowledge the real source.

This problem of having correct information but citing the wrong source continued throughout the paper.


Solutions and counter-arguments

When we asked the AI to write more about the solutions it mentioned in the first draft, we received more extensive arguments with supporting evidence for each case.

As we were expecting , the statistics were real, but the source credited wasn’t the original and didn’t mention anything related to what was included in the text. 

And it wasn’t any different with the counterarguments. They made sense and had a strong point, but the sources credited weren’t correct. 

For instance, regarding telehealth services, it recognized the multiple barriers it would take for low-income areas to adopt this modality. It credited an article posted on KFF mainly written by "Gillespie," but after searching for the information, we see that the original study was conducted by other people.

Still, the fact that Chat GPT now provided us with multiple data and information we could use to develop counter-arguments and later refute them is excellent progress. 

Chat GPT wrote more detailed solutions

The good news is that none of the multiple paragraphs that Chat GPT delivered had plagiarism issues.

After running them through our plagiarism checker, it only found a few parts that had duplicated content, but these were sentences composed of commonly used phrases that other articles about different topics also had.

For example, multiple times it recognized as plagiarism phrases like “according to the CDC” or “according to a report by the Kaiser Family Foundation.” And even these “ plagiarism issues ” could be easily solved by rearranging the order or adding new words.

Checking for plagiarism is a critical part of the essay writing process. If you are not using one yet, be sure to pick one as soon as possible. We recommend checking our list of best plagiarism checkers.

Key Takeaways

So, what did we learn by asking Chat GPT to write an argumentative paper?

  • It's better if the AI writes section per section
  • It can give you accurate information related to issues, solutions, and counterarguments
  • There is a high chance the source credited won't be the right one
  • The texts, which can have duplicated content among themselves, don't appear to be plagiarized

It’s clear that we still need to do a lot of editing and writing.

However, considering that Chat GPT wrote this in less than an hour, the AI proved to be a solid tool. It gave us many strong arguments, interesting and accurate statistics, and an order that we can follow to structure our argumentative paper.

If writing these types of assignments isn’t your strength, be sure to watch our tutorial on how to write an exceptional argumentative essay!


Can Chat GPT Write a Book Review?

For our second experiment, we want to see if Chat GPT can write an essay for a literature class.

To do so, we picked one of the novels we consider one of the 5 must-read books any college student should read: 1984 by George Orwell. There is so much written and discussed about this literary classic that we thought it would be a perfect choice for an artificial intelligence chatbot like Chat GPT to write something about.

Write a book review of the book 1984 by George Orwell. The paper needs to include an introduction with the author’s title, publication information (Publisher, year, number of pages), genre, and a brief introduction to the review.

Then, write a summary of the plot with the basic parts of the plot: situation, conflict, development, climax, and resolution.

Continue by describing the setting and the point of view and discussing the book’s literary devices.

Lastly, analyze the book, and explain the particular style of writing or literary elements used.

And then write a conclusion.

This is the first draft we got:

The first draft we got

Starting with the introduction, all the information is correct, though including the number of pages is pointless, as it depends on the edition of the book.

The summary is also accurate, but it relies too heavily on the plot instead of the context and world described in the novel , which is arguably the reason 1984 transcended time. For example, there is no mention of Big Brother, the leader of the totalitarian superstate.

Now, the setting and point of view section is the poorest section written by Chat GPT . It is very short and lacks development.

The literary devices are not necessarily wrong, but it would be better to focus more on each . For instance, talk more about the importance of symbolism or explain how the book critiques propaganda, totalitarianism, and individual freedom.

The analysis of Orwell’s writing is simple , but the conclusion is clear and straightforward, so it might be the best piece that the AI wrote.

For the second draft, instead of submitting the entire prompt, we wrote one command per section . As a result, Chat GPT focused on each part of the review and tossed more paragraphs with more detailed information in every case.


It’s clear that this way, the AI can write better and more developed texts that are easier to edit and improve . Each section analyzes more in-depth the topic it’s reviewing, which facilitates the upcoming process of structuring the most useful paragraphs into a cohesive essay.

For example, it now added more literary devices used by Orwell and gave specific examples of the symbolism of the novel.

Of course, there are many sentences and ideas that are repeated throughout the different sections. But now, because each has more specific information, we can take these parts and structure a new paragraph that comprises the most valuable sentences.


Now, even if sometimes book reviews don’t need to include citations from external sources apart from the novel we are analyzing, Chat GPT gave us five different options for us to choose from.

The only problem was that we couldn’t find any of them on Google.

The names of the authors were real people, but the titles of the articles and essays were nowhere to be found. This made us think that the AI likely picked real-life writers and invented titles for fictional essays about 1984 or George Orwell.

essay written by chat gpt example

Finally, we need to see if the texts are original or plagiarized material.

After running it through our plagiarism detection software, we found that it was mostly original content with only a few minor issues. Nothing too big to worry about.

One easy-to-solve example is in the literary devices section, where it directly quotes a sentence from the book. In this case, we would just need to add the in-text citation.

The biggest plagiarism problem was with one sentence (or six words, to be more specific) from the conclusion that matched the introduction of a summary review. But by rearranging the word order or adding synonyms, this issue can be easily solved too.

So, what are the most important tips we can take from Chat GPT writing a book review?

  • It will review each section more in-depth if you ask it one prompt at a time
  • The analysis and summary of the book were accurate
  • If you ask it to list scholarly sources, the AI will invent nonexistent sources based on real authors
  • Very few plagiarism issues

Once again, there is still a lot of work to do.

The writing sample chat GPT gave us is a solid start, but we need to rearrange all the paragraphs into one cohesive essay that perfectly summarizes the different aspects of the novel. Plus, we would also have to find scholarly sources on our own.

Still, the AI can do the heavy lifting and give you a great starting point.

If writing book reviews isn’t your strong suit, you have our tutorial and tips!


Save Time And Use Chat GPT to Write Your Essay

We know that writing essays can be a tedious task.

Sometimes, kicking off the process can be harder than it looks. That's why understanding how to use a powerful tool like Chat GPT can truly make the difference.

It may not have the critical thinking skills you have or write a high-quality essay from scratch, but by using our tips, it can deliver you a solid first draft to start writing your entire essay.

But if you want to have an expert team of writers giving you personalized support or aren’t sure about editing an AI-written essay, you can trust Gradehacker to help you with your assignments.

You can also check out our related blog posts if you want to learn how to take your writing skills to the next level!


Santiago Mallea is a curious and creative journalist who first helped many college students as a Gradehacker consultant in subjects like literature, communications, ethics, and business. Now, as a Content Creator in our blog, YouTube channel, and TikTok, he assists non-traditional students improve their college experience by sharing the best tips. You can find him on LinkedIn .


How to Grade Essays with ChatGPT

Introduction

The rise of large language models (LLMs) like OpenAI’s ChatGPT has opened exciting possibilities in essay grading. With its advanced natural language processing capabilities, ChatGPT offers a new dimension in assessing written work, potentially revolutionizing the grading process for educators and researchers. Let’s delve into how ChatGPT could potentially make essay grading easier, more efficient, and more accurate.

ChatGPT can analyze written content for various parameters, including content quality, argument structure, coherence, and adherence to guidelines. Whether you use a continuous scoring system (e.g., quality of writing) or a discrete one (e.g., essay positions), ChatGPT can be tailored to your specific needs, offering customized feedback for different writing styles and assignments. Literature also suggests that LLMs can significantly increase grading efficiency, alleviating some of the burden on educators (Abedi et al., 2023; Okonkwo & Ade-Ibijola, 2021; Richter et al., 2019). Imagine grading hundreds of essays and providing feedback on them – a time-consuming and tiring task. ChatGPT can automate the initial assessment, flagging essays that require further attention based on specific criteria. Additionally, ChatGPT can identify stylistic strengths and weaknesses, analyze the use of literary devices, and even point out potential inconsistencies in an argument’s logic. This could free up valuable educator time for student interaction and curriculum development.

However, caution against over-reliance on this new technology is advised in scenarios where biased or inaccurate models could unfairly impact individual students. It is essential to recognize both the potential advantages and limitations of LLMs. This blog post aims to delve into and reflect on ChatGPT's capabilities for grading and classifying essays and to provide insights into the practical application of using ChatGPT in educational settings.

In this blog, we will explore:

  • Essay grading with ChatGPT and ChatGPT API
  • Steps for essay grading with ChatGPT API
  • Steps for essay classification with ChatGPT API
  • Cost & computation times

For steps 2 and 3, we will provide detailed instructions on how to access and set up the ChatGPT API, prepare and upload your text dataset, and efficiently grade or classify numerous essays. Additionally, we will compare the outcomes of human grading to those obtained through GPT grading.

Essay Grading with ChatGPT and ChatGPT API

For a single essay, we can simply ask ChatGPT to grade as follows:


For multiple essays, we could request ChatGPT to grade each one individually. However, when dealing with a large number of essays (e.g., 50, 100, 1000, etc.), manually grading them in this way becomes a laborious and time-consuming task. In such cases, we can leverage the ChatGPT API service to evaluate numerous essays at once, providing greater flexibility and efficiency. ChatGPT API is a versatile tool that enables developers to integrate ChatGPT into their own applications, services, or websites. When you use the API, you also gain more control over the interaction, such as the ability to adjust temperature, maximum tokens, and the presence of system messages.

It is important to understand the distinctions between ChatGPT’s web interface and the pretrained models accessible through the OpenAI API .

ChatGPT’s web version provides a user-friendly chat interface, requiring no coding knowledge and offering features like integrated system tools. However, it is less customizable and is not designed for managing high volumes of requests. Additionally, due to its internal short-term memory span, previous conversations can influence later responses. In contrast, the OpenAI API offers pretrained models without a built-in interface, necessitating coding experience for integration. These models excel at managing large request volumes, but lack ChatGPT’s conversational memory; they process each input independently. This fundamental difference can lead to variations in the outputs generated by ChatGPT’s web interface and the OpenAI API.

Here's an example of grading a single essay using the ChatGPT API with Python:
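The original snippet is not reproduced here, so the following is a minimal sketch of what such a call looks like, assuming the openai v1 Python client; the prompt wording, helper names, and the 1-to-5 scale are illustrative, not the exact ones used in this post.

```python
def build_grading_prompt(essay: str) -> str:
    """Illustrative prompt asking the model for a bare 1-5 score."""
    return (
        "Grade the following essay on a scale from 1 (poor) to 5 (excellent). "
        "Reply with the numeric score only.\n\nEssay:\n" + essay
    )

def grade_essay(essay: str, model: str = "gpt-3.5-turbo-0125") -> str:
    # Deferred import so the helpers above can be used without the package installed.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_grading_prompt(essay)}],
        temperature=0,   # keep the grading as deterministic as possible
        max_tokens=5,    # a bare score needs very few tokens
    )
    return response.choices[0].message.content.strip()
```

Because the prompt requests the numeric score only, the API tends to return just that, which matches the behavior discussed below.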

Interestingly, this example produces a single score rather than the sentence generated above via the ChatGPT web interface. This difference could be attributed to the ChatGPT API interpreting the prompt more directly than the ChatGPT online service, even though they both use the same pretrained model. Alternatively, the variability in ChatGPT’s results might be due to inherent randomness in its responses.

By implementing a loop with multiple texts, we can acquire scores for an entire set of essays. Let’s see how to do that.

Steps for Essay Grading with ChatGPT API

Get and set up a ChatGPT API key

We assume that you have already installed the Python OpenAI library on your system and have an active OpenAI account. Setting up and obtaining access to the ChatGPT API involves the following steps:

Obtain an OpenAI key: Visit the OpenAI API website at https://platform.openai.com/api-keys and click the +Create new secret key button. Save your key securely, as you cannot view the same key again due to OpenAI's security policies.

Set up the API key: In your Python script or notebook, set up the API key using the following code, replacing "YOUR-API-KEY" with your actual API key:
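A sketch of that setup step; "YOUR-API-KEY" is the placeholder from the text, and using an environment variable (rather than hard-coding the key in a script) is a common safety choice, not the only option.

```python
import os

# Placeholder key; replace with your real key, ideally by exporting
# OPENAI_API_KEY in your shell instead of writing it into the script.
os.environ.setdefault("OPENAI_API_KEY", "YOUR-API-KEY")

# The openai v1 client picks the key up from the environment automatically:
#   from openai import OpenAI
#   client = OpenAI()
#
# With the legacy module-level interface, the equivalent is:
#   import openai
#   openai.api_key = os.environ["OPENAI_API_KEY"]
```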

Load the text dataset

In this post, we will grade a series of essays about iPad usage in schools.
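Loading the dataset is a one-liner with pandas; the file name here is hypothetical, but the preview below suggests columns named Text, Stance_iPad, and Scores.

```python
import pandas as pd

def load_essays(path) -> pd.DataFrame:
    """Load the essay dataset; expected columns: Text, Stance_iPad, Scores."""
    return pd.read_csv(path)

# Example usage (file name is illustrative):
#   df = load_essays("ipad_essays.csv")
#   print(df.head())
```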

Text Stance_iPad Scores
0 Some people allow Ipads because some people ne… AMB 1
1 I have a tablet. But it is a lot of money. But… AMB 1
2 Do you think we should get rid of the Ipad wh… AMB 1
3 I said yes because the teacher will not be tal… AMB 2
4 Well I would like the idea . But then for it … AMB 4

Score the multiple essays
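The batch-scoring loop can be sketched as follows. `grade_fn` stands in for any callable that sends one essay to the model and returns its reply; the column names match the preview above, and the None fallback for non-numeric replies is an assumption of this sketch.

```python
def score_all(df, grade_fn):
    """Add a Scores_GPT column; replies that are not a bare number become None."""
    scores = []
    for text in df["Text"]:
        reply = grade_fn(text)          # one API call per essay
        try:
            scores.append(float(reply))
        except ValueError:
            scores.append(None)         # flag odd replies for manual review
    df = df.copy()
    df["Scores_GPT"] = scores
    return df
```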

Grading 50 essays takes only 25 seconds.

Text Stance_iPad Scores Scores_GPT
0 Some people allow Ipads because some people ne… AMB 1 2.0
1 I have a tablet. But it is a lot of money. But… AMB 1 2.0
2 Do you think we should get rid of the Ipad wh… AMB 1 2.0
3 I said yes because the teacher will not be tal… AMB 2 2.0
4 Well I would like the idea . But then for it … AMB 4 4.0

Compare human grading scores with GPT grading scores

For these data, we happened to have scores given by human raters as well, allowing us to see how similar the human scores are to the scores generated by ChatGPT.

Using the code provided in the accompanying script, we get the following:
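A sketch of what that comparison might compute, assuming a DataFrame with the Scores (human) and Scores_GPT columns; pandas covers the correlation and contingency table, and RMSE is a one-line formula.

```python
import pandas as pd

def compare_scores(df):
    """Correlation, RMSE, and a contingency table of human vs. GPT scores."""
    corr = df["Scores"].corr(df["Scores_GPT"])
    rmse = ((df["Scores"] - df["Scores_GPT"]) ** 2).mean() ** 0.5
    table = pd.crosstab(df["Scores"], df["Scores_GPT"])
    return corr, rmse, table
```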


A contingency table (confusion matrix) of the scores is:

Scores_GPT 1.0 2.0 3.0 4.0 5.0
Scores
0 1 7 0 0 0
1 0 9 0 0 0
2 0 4 1 0 0
3 0 8 2 0 0
4 0 8 3 2 0
5 0 0 2 2 0
6 0 0 0 0 1

The averages and standard deviations of human grading and GPT grading scores are 2.54 ( SD = 1.68) and 2.34 ( SD = 0.74), respectively. The correlation between them is 0.62, indicating a fairly strong positive linear relationship. Additionally, the Root Mean Squared Error (RMSE) is 1.36, providing a measure of the GPT’s prediction accuracy compared to the actual human grading scores.

Steps for Essay Classification with ChatGPT API

ChatGPT can be utilized not only for scoring essays but also for classifying essays based on some categorical variable such as writers’ opinions regarding iPad usage in schools. Here are the steps to guide you through the process, assuming you already have access to the ChatGPT API and have loaded your text dataset:

Classify multiple essays
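Classification works like grading, with the prompt asking for a label instead of a score. This sketch keeps the API call abstract behind `ask_fn` (any callable from prompt to reply); the prompt wording and the fallback to OTHER are assumptions of the sketch, and the labels match the dataset's stance categories.

```python
def classify_stance(essay, ask_fn):
    """Classify one essay's stance on iPad use in schools as AFF, NEG, or OTHER.

    ask_fn takes a prompt string and returns the model's reply; in practice
    it would wrap a chat.completions call.
    """
    prompt = (
        "Classify the writer's stance on iPad use in schools as AFF (in favor), "
        "NEG (against), or OTHER (ambiguous, balanced, or unclear). "
        "Reply with the label only.\n\nEssay:\n" + essay
    )
    reply = ask_fn(prompt).strip().upper()
    # Any reply outside the label set falls back to OTHER.
    return reply if reply in {"AFF", "NEG", "OTHER"} else "OTHER"
```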

Classifying 50 essays takes only 27 seconds.

We create a new column re_Stance_iPad based on a mapping of values from the existing Stance_iPad column. Unlike AFF and NEG, the AMB, BAL, and NAR stances express unclear opinions, so we combine them into a single OTHER category.
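The recoding step described above can be sketched with a pandas `map`; the helper name is illustrative.

```python
import pandas as pd

# AFF and NEG stay as-is; the unclear stances collapse into OTHER.
STANCE_MAP = {"AFF": "AFF", "NEG": "NEG",
              "AMB": "OTHER", "BAL": "OTHER", "NAR": "OTHER"}

def recode_stance(df: pd.DataFrame) -> pd.DataFrame:
    """Add re_Stance_iPad by collapsing AMB/BAL/NAR into OTHER."""
    df = df.copy()
    df["re_Stance_iPad"] = df["Stance_iPad"].map(STANCE_MAP)
    return df
```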

Text Stance_iPad Scores Scores_GPT re_Stance_iPad Stance_iPad_GPT
0 Some people allow Ipads because some people ne… AMB 1 2.0 OTHER OTHER
1 I have a tablet. But it is a lot of money. But… AMB 1 2.0 OTHER OTHER
2 Do you think we should get rid of the Ipad wh… AMB 1 2.0 OTHER OTHER
3 I said yes because the teacher will not be tal… AMB 2 2.0 OTHER OTHER
4 Well I would like the idea . But then for it … AMB 4 4.0 OTHER OTHER

Compare human classification with GPT classification

Stance_iPad_GPT AFF NEG OTHER
re_Stance_iPad
AFF 7 0 3
NEG 0 9 1
OTHER 3 1 26

ChatGPT achieves an accuracy of approximately 84%, demonstrating solid classification performance. An F1 score of 0.84, reflecting the harmonic mean of precision and recall, signifies a well-balanced performance in terms of both precision and recall. Additionally, the Cohen's Kappa value of 0.71, which measures the agreement between predicted and actual classifications while accounting for chance, indicates substantial agreement beyond what would be expected by chance alone.
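These metrics are easy to verify by hand. The pure-Python helpers below (scikit-learn's `accuracy_score` and `cohen_kappa_score` compute the same quantities) reproduce the 84% accuracy and the 0.71 kappa when fed labels matching the confusion matrix above.

```python
from collections import Counter

def accuracy(y_true, y_pred):
    """Fraction of exact label matches."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def cohens_kappa(y_true, y_pred):
    """Chance-corrected agreement: kappa = (po - pe) / (1 - pe)."""
    n = len(y_true)
    po = accuracy(y_true, y_pred)              # observed agreement
    true_counts, pred_counts = Counter(y_true), Counter(y_pred)
    # Expected agreement if both raters labeled at random with these marginals.
    pe = sum(true_counts[c] * pred_counts.get(c, 0) for c in true_counts) / (n * n)
    return (po - pe) / (1 - pe)
```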

Cost & Computation Times

How long does it take to assess all essays?

Grading and classifying 50 essays each took 25 and 27 seconds , resulting in a rate of about 2 essays per second.

What is the cost of assessing all essays?

In this blog, we utilized GPT-3.5-turbo-0125. According to OpenAI’s pricing page , the cost for input processing is $0.0005 per 1,000 tokens, and for output, it is $0.0015 per 1,000 tokens, indicating that the ChatGPT API charges for both tokens sent out and tokens received.

The total expenditure for grading all essays (50 assessing essay quality and 50 for essay classification) was approximately $0.01.

What are tokens and how to count them?

Tokens can be viewed as fragments of words. When the API receives prompts, it breaks down the input into tokens. These divisions do not always align with the beginning or end of words; tokens may include spaces and even parts of words. To grasp the concept of tokens and their length equivalencies better, here are some helpful rules of thumb:

  • 1 token ≈ 4 characters in English.
  • 1 token ≈ ¾ of a word.
  • 100 tokens ≈ 75 words.
  • 1 to 2 sentences ≈ 30 tokens.
  • 1 paragraph ≈ 100 tokens.
  • 1,500 words ≈ 2,048 tokens.
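The rules of thumb above can be turned into a quick pre-flight estimate before sending anything to the API. This sketch simply averages the character-based and word-based heuristics; for exact counts, OpenAI's tiktoken library tokenizes text with the model's own encoding.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: average of the ~4-chars and ~3/4-word heuristics."""
    by_chars = len(text) / 4          # 1 token ~ 4 characters
    by_words = len(text.split()) / 0.75  # 1 token ~ 3/4 of a word
    return round((by_chars + by_words) / 2)
```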

To get additional context on how tokens are counted, consider this:

The prompt at the beginning of this blog, requesting that OpenAI grade an essay, contains 129 tokens, and the output contains 12 tokens.

The input cost is $0.0000645, and the output cost is $0.000018.
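That arithmetic generalizes to any request; this small helper applies the per-1,000-token rates quoted above (the function name and defaults are illustrative).

```python
def request_cost(input_tokens, output_tokens,
                 in_rate_per_1k=0.0005, out_rate_per_1k=0.0015):
    """Cost in dollars at gpt-3.5-turbo-0125 rates: both directions are billed."""
    return (input_tokens / 1000) * in_rate_per_1k + (output_tokens / 1000) * out_rate_per_1k

# The worked example above: 129 input tokens and 12 output tokens.
print(f"${request_cost(129, 12):.7f}")
```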

ChatGPT provides an alternative approach to essay grading. This post has delved into the practical application of ChatGPT's natural language processing capabilities, demonstrating how it can be used for efficient and accurate essay grading, with a comparison to human grading. The flexibility of ChatGPT is particularly evident when handling large volumes of essays, making it a viable alternative tool for educators and researchers. By employing the ChatGPT API, the grading process becomes not only streamlined but also adaptable to varying scales, from individual essays to hundreds or even thousands.

This technology has the potential to significantly enhance the efficiency of the grading process. By automating the assessment of written work, teachers and researchers can devote more time to other critical aspects of education. However, it’s important to acknowledge the limitations of current LLMs in this context. While they can assist in grading, relying solely on LLMs for final grades could be problematic, especially if LLMs are biased or inaccurate. Such scenarios could lead to unfair outcomes for individual students, highlighting the need for human oversight in the grading process. For large-scale research, where we look at averages across many essays, this is less of a concern (see, e.g., Mozer et al., 2023).

The guide in this blog has provided a step-by-step walkthrough of setting up and accessing the ChatGPT API for essay grading.

We also explored the reliability of ChatGPT’s grading as compared to human grading. The moderate positive correlation of 0.62 attests to some consistency between human grading and ChatGPT’s evaluations. The classification results reveal that the model achieves an accuracy of approximately 84%, and the Cohen’s kappa value of 0.71 indicates substantial agreement beyond what would be expected by chance alone. See the related study (Kim et al., 2024) for more on this.
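For readers who want to run this kind of agreement analysis on their own data, Cohen's kappa is straightforward to compute by hand. The function below is a generic sketch on made-up pass/fail labels, not the study's data or code:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters' labels, corrected for chance."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items where the raters match.
    p_observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Expected agreement: chance overlap given each rater's label frequencies.
    p_expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_observed - p_expected) / (1 - p_expected)

# Toy example: human vs. ChatGPT pass/fail labels for ten essays.
human = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
model = ["pass", "pass", "fail", "fail", "fail", "pass", "pass", "fail", "pass", "fail"]
print(cohens_kappa(human, model))  # ~0.6: 80% raw agreement vs. 50% expected by chance
```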

In essence, this comprehensive guide underscores the transformative potential of ChatGPT in essay grading, presenting it as a valuable approach in an ever-evolving educational field. This post gives an overview; next we dig in a bit more, thinking about prompt engineering and providing examples to improve accuracy.

Writer’s Comments

The API Experience: A Blend of Ease and Challenge

Starting your journey with the ChatGPT API will be surprisingly smooth, especially if you have some Python experience. Copying and pasting code from this blog, followed by acquiring your own ChatGPT API and tweaking prompts and datasets, might seem like a breeze. However, this simplicity masks the underlying complexity. Bumps along the road are inevitable, reminding us that “mostly” easy does not mean entirely challenge-free.

The biggest hurdle you will likely face is mastering the art of crafting effective prompts. While ChatGPT’s responses are impressive, they can also be unpredictably variable. Conducting multiple pilot runs with 5-10 essays is crucial. Experimenting with diverse prompts on the same essays can act as a stepping stone, refining your approach and building confidence for wider application.
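One way to structure those pilot runs is to score the same handful of essays under each candidate prompt and compare the results. The sketch below is ours, not the blog's code: `grade_essay` is a stand-in you would replace with an actual ChatGPT API call that returns a numeric score.

```python
from statistics import mean, stdev

def pilot_prompts(prompts, essays, grade_essay):
    """Grade the same pilot essays under each prompt; report mean and spread."""
    summary = {}
    for prompt in prompts:
        grades = [grade_essay(prompt, essay) for essay in essays]
        summary[prompt] = (mean(grades), stdev(grades))
    return summary

# Deterministic stub for illustration only; a real grader would call the
# ChatGPT API and parse a 1-5 score out of the response.
def stub_grader(prompt, essay):
    return len(essay) % 5 + 1

essays = ["short essay", "a somewhat longer essay", "the longest essay of all three"]
for prompt, (avg, spread) in pilot_prompts(
    ["Grade this essay 1-5:", "Score the essay from 1 to 5:"], essays, stub_grader
).items():
    print(f"{prompt!r}: mean={avg:.2f}, sd={spread:.2f}")
```

Comparing the per-prompt means and spreads across a fixed pilot batch makes it easier to spot a prompt that produces unstable grades before scaling up.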

When things click, the benefits are undeniable. Automating the grading process with ChatGPT can save considerable time. Human graders, myself included, can struggle with maintaining consistent standards across a mountain of essays. ChatGPT, on the other hand, might be more stable when grading large batches in a row.

It is crucial to acknowledge that this method is not a magic bullet. Continuous scoring is not quite there yet, and limitations still exist. But the good news is that LLMs like ChatGPT are constantly improving, and new options are emerging.

Overall Reflections: A Journey of Discovery

The exploration of the ChatGPT API can be a blend of innovation, learning, and the occasional frustration. While AI grading systems like ChatGPT are not perfect, their ability to save time and provide a consistent grading scheme makes them an intriguing addition to the educational toolkit. As we explore and refine these tools, the horizon for their application in educational settings seems ever-expanding, offering a glimpse into a future where AI and human educators work together to enhance the learning experience. Who knows, maybe AI will become a valuable partner in the grading process in the future!

Call to Action

Have you experimented with using ChatGPT for grading? Share your experiences and questions in the comments below! We can all learn from each other as we explore the potential of AI in education.

  • Abedi, M., Alshybani, I., Shahadat, M. R. B., & Murillo, M. (2023). Beyond traditional teaching: The potential of large language models and chatbots in graduate engineering education. Qeios. https://doi.org/10.32388/MD04B0
  • Kim, Y., Mozer, R., Miratrix, L., & Al-Ademi, S. (2024). ChatGPT vs. machine learning: Assessing the efficacy and accuracy of large language models for automated essay scoring (in preparation).
  • Mozer, R., Miratrix, L., Relyea, J. E., & Kim, J. S. (2023). Combining human and automated scoring methods in experimental assessments of writing: A case study tutorial. Journal of Educational and Behavioral Statistics, 10769986231207886. https://doi.org/10.3102/10769986231207886
  • Okonkwo, C. W., & Ade-Ibijola, A. (2021). Chatbots applications in education: A systematic review. Computers and Education: Artificial Intelligence, 2, 100033. https://doi.org/10.1016/j.caeai.2021.100033
  • Pricing. (n.d.). OpenAI. Retrieved March 2, 2024, from https://openai.com/pricing#language-models
  • Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education: Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1–27. https://doi.org/10.1186/s41239-019-0171-0

How to Get ChatGPT to Write an Essay: Prompts, Outlines, & More

Last Updated: June 2, 2024 Fact Checked

Getting ChatGPT to Write the Essay

Using AI to Help You Write

This article was co-authored by Bryce Warwick, JD, and by wikiHow staff writer Nicole Levine, MFA. Bryce Warwick is currently the President of Warwick Strategies, an organization based in the San Francisco Bay Area offering premium, personalized private tutoring for the GMAT, LSAT, and GRE. Bryce has a JD from the George Washington University Law School. This article has been fact-checked, ensuring the accuracy of any cited facts and confirming the authority of its sources. This article has been viewed 51,341 times.

Are you curious about using ChatGPT to write an essay? While most instructors have tools that make it easy to detect AI-written essays, there are ways you can use OpenAI's ChatGPT to write papers without worrying about plagiarism or getting caught. In addition to writing essays for you, ChatGPT can also help you come up with topics, write outlines, find sources, check your grammar, and even format your citations. This wikiHow article will teach you the best ways to use ChatGPT to write essays, including helpful example prompts that will generate impressive papers.

Things You Should Know

  • To have ChatGPT write an essay, tell it your topic, word count, type of essay, and facts or viewpoints to include.
  • ChatGPT is also useful for generating essay topics, writing outlines, and checking grammar.
  • Because ChatGPT can make mistakes and trigger AI-detection alarms, it's better to use AI to assist with writing than have it do the writing.

Step 1 Create an account with ChatGPT.

  • Before using the OpenAI's ChatGPT to write your essay, make sure you understand your instructor's policies on AI tools. Using ChatGPT may be against the rules, and it's easy for instructors to detect AI-written essays.
  • While you can use ChatGPT to write a polished-looking essay, there are drawbacks. Most importantly, ChatGPT cannot verify facts or provide references. This means that essays created by ChatGPT may contain made-up facts and biased content. [1] It's best to use ChatGPT for inspiration and examples instead of having it write the essay for you.

Step 2 Gather your notes.

  • The topic you want to write about.
  • Essay length, such as word or page count. Whether you're writing an essay for a class, college application, or even a cover letter, you'll want to tell ChatGPT how much to write.
  • Other assignment details, such as type of essay (e.g., personal, book report, etc.) and points to mention.
  • If you're writing an argumentative or persuasive essay , know the stance you want to take so ChatGPT can argue your point.
  • If you have notes on the topic that you want to include, you can also provide those to ChatGPT.
  • When you plan an essay, think of a thesis, a topic sentence, a body paragraph, and the examples you expect to present in each paragraph.
  • It can be like an outline and not an extensive sentence-by-sentence structure. It should be a good overview of how the points relate.

Step 3 Ask ChatGPT to write the essay.

  • "Write a 2000-word college essay that covers different approaches to gun violence prevention in the United States. Include facts about gun laws and give ideas on how to improve them."
  • This prompt not only tells ChatGPT the topic, length, and grade level, but also that the essay is personal. ChatGPT will write the essay in the first-person point of view.
  • "Write a 4-page college application essay about an obstacle I have overcome. I am applying to the Geography program and want to be a cartographer. The obstacle is that I have dyslexia. Explain that I have always loved maps, and that having dyslexia makes me better at making them."

Tyrone Showers

Be specific when using ChatGPT. Clear and concise prompts outlining your exact needs help ChatGPT tailor its response. Specify the desired outcome (e.g., creative writing, informative summary, functional resume), any length constraints (word or character count), and the preferred emotional tone (formal, humorous, etc.)

Step 4 Add to or change the essay.

  • In our essay about gun control, ChatGPT did not mention school shootings. If we want to discuss this topic in the essay, we can use the prompt, "Discuss school shootings in the essay."
  • Let's say we review our college entrance essay and realize that we forgot to mention that we grew up without parents. Add to the essay by saying, "Mention that my parents died when I was young."
  • In the Israel-Palestine essay, ChatGPT explored two options for peace: A 2-state solution and a bi-state solution. If you'd rather the essay focus on a single option, ask ChatGPT to remove one. For example, "Change my essay so that it focuses on a bi-state solution."

Step 5 Ask for sources.

Pay close attention to the content ChatGPT generates. If you use ChatGPT often, you'll start noticing its patterns, like its tendency to begin articles with phrases like "in today's digital world." Once you spot patterns, you can refine your prompts to steer ChatGPT in a better direction and avoid repetitive content.

Step 1 Generate essay topics.

  • "Give me ideas for an essay about the Israel-Palestine conflict."
  • "Ideas for a persuasive essay about a current event."
  • "Give me a list of argumentative essay topics about COVID-19 for a Political Science 101 class."

Step 2 Create an outline.

  • "Create an outline for an argumentative essay called 'The Impact of COVID-19 on the Economy.'"
  • "Write an outline for an essay about positive uses of AI chatbots in schools."
  • "Create an outline for a short 2-page essay on disinformation in the 2016 election."

Step 3 Find sources.

  • "Find peer-reviewed sources for advances in using MRNA vaccines for cancer."
  • "Give me a list of sources from academic journals about Black feminism in the movie Black Panther."
  • "Give me sources for an essay on current efforts to ban children's books in US libraries."

Step 4 Create a sample essay.

  • "Write a 4-page college paper about how global warming is changing the automotive industry in the United States."
  • "Write a 750-word personal college entrance essay about how my experience with homelessness as a child has made me more resilient."
  • You can even refer to the outline you created with ChatGPT, as the AI bot can reference up to 3000 words from the current conversation. For example: "Write a 1000 word argumentative essay called 'The Impact of COVID-19 on the United States Economy' using the outline you provided. Argue that the government should take more action to support businesses affected by the pandemic."

Step 5 Use ChatGPT to proofread and tighten grammar.

  • One way to do this is to paste a list of the sources you've used, including URLs, book titles, authors, pages, publishers, and other details, into ChatGPT along with the instruction "Create an MLA Works Cited page for these sources."
  • You can also ask ChatGPT to provide a list of sources, and then build a Works Cited or References page that includes those sources. You can then replace sources you didn't use with the sources you did use.

Expert Q&A

  • Because it's easy for teachers, hiring managers, and college admissions offices to spot AI-written essays, it's best to use your ChatGPT-written essay as a guide to write your own essay. Using the structure and ideas from ChatGPT, write an essay in the same format, but using your own words.
  • Always double-check the facts in your essay, and make sure facts are backed up with legitimate sources.
  • If you see an error that says ChatGPT is at capacity, wait a few moments and try again.


  • Using ChatGPT to write or assist with your essay may be against your instructor's rules. Make sure you understand the consequences of using ChatGPT to write or assist with your essay.
  • ChatGPT-written essays may include factual inaccuracies, outdated information, and inadequate detail. [3]


Thanks for reading our article! If you’d like to learn more about completing school assignments, check out our in-depth interview with Bryce Warwick, JD .

  • ↑ https://help.openai.com/en/articles/6783457-what-is-chatgpt
  • ↑ https://platform.openai.com/examples/default-essay-outline
  • ↑ https://www.ipl.org/div/chatgpt/

About This Article

Bryce Warwick, JD



7 Surefire Signs That ChatGPT Has Written an Essay Revealed


Researchers at the University of Cambridge have revealed the seven telltale signs that a piece of written content was generated by ChatGPT, after carefully analyzing more than 150 essays written by high school students and undergraduates.

They found that ChatGPT loves an Oxford comma, repeats phrases, and spits out tautological statements practically empty of meaning at a much higher frequency than humans.

While the findings are interesting, the sample size is quite small. There’s also no guarantee that the linguistic habits and techniques identified couldn’t and wouldn’t be used by a human. What’s more, AI content detection tools are largely unreliable; there’s still no way to know for certain that any given written content is AI-generated.


The 7 Telltale Signs Content is AI-Generated

The researchers at Cambridge analyzed 164 essays written by high school students, four of which were written with help from ChatGPT.

The ChatGPT-assisted essays were generally more information-heavy and had more reflective elements, but the markers at Cambridge found that they lacked the level of comparison and analysis typically found in human-generated content. 

According to UK-based publication The Telegraph, which broke the story, the researchers identified seven key indicators of AI content:

  • Frequent use of Latin root words and “vocabulary above the expected level”
  • Paragraphs starting with singular words like “however”, and then a comma 
  • Lots of numbered lists with colons
  • Unnecessary clarificatory language (e.g. “true fact”)
  • Tautological language (“Let’s come together to unite”)
  • Repetition of the same word or phrase twice 
  • Consistent and frequent use of Oxford commas in sentences

Are There Any Other Ways to Spot ChatGPT Plagiarism?

Yes and no. There are many tools online that claim to be able to detect AI content, but when I tested a wide range of them last year, I found many to be wildly inaccurate.

For instance, OpenAI’s own text classifier – which was eventually shut down because it performed so poorly – was unable to identify that text written by ChatGPT (effectively itself) was AI-generated.

Turnitin has been using automated processes to detect plagiarized content in academic work for years, and it has also developed a powerful AI content checker. Even so, the company has always maintained that verdicts arrived at by its tools should be treated as an indication, not a cast-iron accusation.

“Given that our false positive rate is not zero,” Turnitin explains in a blog post discussing its AI content detection capabilities, “you as the instructor will need to apply your professional judgment, knowledge of your students, and the specific context surrounding the assignment.”

None of these tools are infallible – and worse still, many of the free ones you’ll find lurking at the top of the Google Search results are completely and utterly useless.

Is It Wrong to Use AI for School or College Work?

While asking AI tools like ChatGPT and Gemini to write you an essay isn’t quite “plagiarism” in the same way copying content written by other people and passing it off as your own is, it’s certainly not advised.

Whether it’s objectively plagiarism or not is likely irrelevant – the educational institution you’re enrolled in has probably created guidelines explicitly banning generative AI. Many universities have already taken a similar approach to peer review and other academic processes.

Besides, the whole point of writing an essay is to consider the range of ideas and views on the topic you’re writing about and evaluate them using your head. Getting an AI to do it for you defeats the whole point of writing the essay in the first place.

Our advice – considering the consequences of being accused of plagiarism while at university – is to stick to the rules. Who knows – you might learn something while you’re at it!



How to Use ChatGPT for Writing a Personal Statement


For the past two years since AI tools like ChatGPT became available, concerns have arisen about their potential to be unethical, make students less capable, and facilitate cheating.

However, universities are now recognizing that AI tools can be beneficial when used properly. It’s important to use AI ethically to enhance efficiency and overcome barriers to reaching your full potential.

According to Elliot Newstead, Head of UK Student Recruitment and Outreach at the University of Leicester, AI tools like ChatGPT can be beneficial for students applying to university if used correctly.

In the following section, we’ll explore how to use ChatGPT ethically to assist with writing your personal statement.

Using ChatGPT for Brainstorming

The first way to use ChatGPT is for brainstorming ideas.

When writing a personal statement, you need to create a document that showcases why you are a good fit for a particular degree. It’s essentially a CV for university admissions, highlighting your interests, passion, credentials, and future goals related to the degree.

Start by brainstorming career ideas.

For example, if you’re applying for a psychology degree but are unsure about future career paths, ask ChatGPT:

“What are some career paths that I can pursue with a degree in Psychology?”


ChatGPT can provide various options such as clinical psychologist, counseling psychologist, health psychologist, school psychologist, social worker and researcher. You can then choose a couple of these paths to focus on in your personal statement.

Another useful feature is to generate a list of key achievements or skills needed for your chosen degree.

For example, if you’re applying for a business management program, ask ChatGPT:

“List my key achievements and experiences relevant to a business administration program.”


It might suggest skills like leadership, project management, and strategic planning. You could then consider gaining relevant experience, like starting a podcast, to strengthen your application.

Gather Information

An essential part of your personal statement is discussing the subject in depth. Include information that interests you and is relevant, showing your passion and willingness to learn.

For example, if you’re applying for a degree in neuroscience or psychology and have a strong interest in dementia research, you might want to mention a new method, finding, or drug that has caught your attention.

You can use a platform called R Discovery, which has a tool called Ask R Discovery. It searches through the literature and provides you with relevant research findings.

You can ask a question like:

“What is a recent research finding in dementia research?”


The AI tool will generate a summary and provide literature for further exploration. This can be a great topic to include in your personal statement to demonstrate your genuine interest in the subject to the admissions tutor.

Structuring Your Statement

The next way to use ChatGPT is to structure and generate an outline for your personal statement.

For instance, ask it:

“Create an outline for a personal statement for a computer science degree.”

Include details about your background and experience for a more tailored outline.


Typically, a personal statement should start with a compelling hook, followed by your academic background, achievements, relevant coursework, practical experience, skills, career goals, and a strong conclusion.

You can copy this outline into your document and start writing. This method saves time while ensuring your statement remains original and ethically written.

Writing the Content

ChatGPT can also assist with writing the content of your personal statement.

For example, you might ask it:

“Explain in the first person how my internship at a tech company prepared me for a computer science degree.”


It can help draft strong paragraphs by relating your experience to the course you’re applying for, discussing practical applications, teamwork, and professional development.

Refining and Editing

Once you have a draft, use ChatGPT for refining and editing. For example, you can ask ChatGPT to paraphrase a sentence or simplify a paragraph. It can also help with critical discussion points, such as presenting challenges or opposing arguments related to your experiences.

ChatGPT can also assist with adding recent research references or discovering relevant literature to include in your statement. This can be particularly useful if you want to demonstrate a deeper understanding of your field.

I hope you find this information useful. If so, feel free to share it on your social media and let me know in the comments.

Christina Walker

A professional freelance web copywriter with several years’ experience in web marketing and SEO copywriting.



Using ChatGPT for Assignments | Tips & Examples

Published on February 13, 2023 by Jack Caulfield and Tobias Solis. Revised on November 16, 2023.

People are still figuring out the best use cases for ChatGPT, the popular chatbot based on a powerful AI language model. This article provides some ideas for how to use ChatGPT and other AI tools to assist with your academic writing.

These tools can assist you with the following steps when writing a paper, thesis, or dissertation:

  • Developing a research question
  • Creating an outline
  • Generating literature ideas
  • Paraphrasing and summarizing text
  • Getting feedback



You can use ChatGPT to help you formulate a clear and concise research question by generating and improving potential questions.

With the help of ChatGPT, you can develop a large number of possible research questions within a short time. This is a great way to generate ideas and different approaches to a topic.

However, make sure that the generated questions make sense and meet your needs. As you can see from our example, you can keep working with ChatGPT to adapt its suggestions to your needs (or just edit the text directly if you already have a clear idea of what you want).

First input

Develop three possible research questions on the following topic: “The influence of social media on political opinion formation.”

ChatGPT output

  • To what extent does exposure to diverse political perspectives on social media affect individuals’ political opinions and behaviors?
  • How does the personalized algorithm of social media platforms shape the formation of political opinions among its users?
  • What are the underlying factors that determine the effect of social media on political opinion formation, and how can they be measured and analyzed?

Second input

Based on the second research question, develop three further research questions on the topic of polarization in US politics.

  • Build my scientific method, qualitative discourse analysis, into each question
  • Include the term “filter bubble” in each question

ChatGPT output

  • How does the filter bubble effect on social media platforms contribute to the polarization of political attitudes in the US? A qualitative discourse analysis.
  • What is the role of filter bubbles in perpetuating and reinforcing political echo chambers in the US? A qualitative discourse analysis.
  • How does the influence of filter bubbles on social media affect the ability of individuals to engage in productive political discourse across ideological divides in the US? A qualitative discourse analysis.

Third input

Try to incorporate “qualitative discourse analysis” into the questions themselves, instead of using it as a separate sentence.

ChatGPT output

  • How does a qualitative discourse analysis of the filter bubble effect on social media platforms contribute to understanding the polarization of political attitudes in the US?
  • What is the contribution of a qualitative discourse analysis to understanding the role of filter bubbles in perpetuating and reinforcing political echo chambers in the US?
  • How does a qualitative discourse analysis of the influence of filter bubbles on social media inform our understanding of the ability of individuals to engage in productive political discourse across ideological divides in the US?


You can also use ChatGPT to help you draft a research paper outline or thesis outline . To do this, try generating possible headings and subheadings and then improving them. ChatGPT can help to generate a clear and well-structured outline, especially if you keep adjusting the structure with its help.

However, it doesn’t have a deep understanding of your topic. As a result, inappropriate headings may sometimes be generated. It’s important to check that the outline makes sense and covers all the points needed to answer your research question.

You should also ensure that the headings and subheadings follow a logical structure. Scribbr’s Proofreading & Editing service can help you with this.

Create an outline for an academic research paper with the following research question:

“What is the contribution of a qualitative discourse analysis to understanding the role of filter bubbles in perpetuating and reinforcing political echo chambers in the US?”

The paper will be about 8 pages long.

  • Introduction
    • Background information on filter bubbles and political echo chambers
    • Purpose of the study
    • Research question
    • Thesis statement
  • Literature review
    • Definition and concept of filter bubbles
    • Definition and concept of political echo chambers
    • Previous studies on the role of filter bubbles in perpetuating and reinforcing political echo chambers
    • Critiques and limitations of previous studies
  • Methodology
    • Research design
    • Participants
    • Data collection and analysis
    • Validity and reliability
  • Results
    • Findings from qualitative discourse analysis
    • Results related to the research question
  • Discussion
    • Implications of the findings
    • Interpretation of the results
    • Comparison with previous studies
    • Limitations of the study
    • Implications for future research
  • Conclusion
    • Summary of the results
    • Implications for understanding the role of filter bubbles in perpetuating and reinforcing political echo chambers
    • Conclusion and recommendations for future research
  • References
    • List of references used in the paper
  • Appendices
    • Data collection materials
    • Examples of qualitative data analysis

This seems like too much information to fit into only 8 pages. Can you condense the outline, for example removing the appendices and simplifying the structure?

  • Background on filter bubbles and political echo chambers
  • Definition of filter bubbles and political echo chambers
  • Previous studies on the role of filter bubbles
  • Limitations of previous studies
  • Implications for understanding the role of filter bubbles
  • Final thoughts
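For readers comfortable with a little scripting, the iterative prompting pattern shown above can also be expressed in code. The following is a minimal sketch, assuming nothing beyond the Python standard library: the `build_outline_prompt` helper is hypothetical, not a Scribbr or OpenAI tool, and it only assembles the prompts. In practice you would send them to ChatGPT one at a time, in the same conversation, so each follow-up refines the previous outline.

```python
def build_outline_prompt(research_question, pages, follow_ups=None):
    """Assemble the prompts for iterative outline refinement.

    Returns the initial outline request followed by any follow-up
    refinement instructions, in the order they would be sent to the chat.
    """
    prompts = [
        "Create an outline for an academic research paper "
        "with the following research question:\n\n"
        f'"{research_question}"\n\n'
        f"The paper will be about {pages} pages long."
    ]
    # Each follow-up is sent in the same conversation, so the model
    # refines its previous answer instead of starting over.
    prompts.extend(follow_ups or [])
    return prompts


# Reproduce the two-step exchange shown in this article.
conversation = build_outline_prompt(
    "What is the contribution of a qualitative discourse analysis to "
    "understanding the role of filter bubbles in perpetuating and "
    "reinforcing political echo chambers in the US?",
    pages=8,
    follow_ups=[
        "This seems like too much information to fit into only 8 pages. "
        "Can you condense the outline, for example removing the appendices "
        "and simplifying the structure?"
    ],
)
```

The point of keeping the follow-ups in one conversation is that ChatGPT retains context: "condense the outline" only makes sense if the model can see the outline it just produced.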

ChatGPT can be a useful way to generate ideas regarding what sources you might use in your work. Try asking it for suggestions about where to look and what to consider.

However, we advise against using it to find sources directly. It’s not designed for this and, if asked to do it, often just creates fake sources and presents them as real. Search a database like Google Scholar instead.

What types of sources could I use to write a paper on the following research question?

When writing a paper on the contribution of qualitative discourse analysis to understanding the role of filter bubbles in perpetuating and reinforcing political echo chambers in the US, you could use the following types of sources:

  • Academic journals and articles: These are a good place to start when you’re looking for information on specific topics and they can provide you with in-depth analysis and perspectives on the topic at hand.
  • Books and book chapters: These sources can provide comprehensive overviews of a particular topic and can also offer theoretical frameworks and historical perspectives.
  • Government reports and studies: Government agencies, such as the Federal Communications Commission (FCC), may have conducted studies on the role of filter bubbles in perpetuating political echo chambers.
  • News articles and media reports: These sources can provide insight into the current state of the issue, as well as opinions from various stakeholders.
  • Online sources: Websites, blogs, and online forums can provide a range of perspectives on the topic and can also be useful in exploring the experiences of individuals who are affected by filter bubbles and political echo chambers.
  • Data and statistics: Relevant data and statistics on the prevalence and impact of filter bubbles and political echo chambers in the US can be useful in providing evidence to support your arguments.
  • Expert interviews and focus groups: Conducting expert interviews or focus groups with individuals who have experience with or knowledge of the topic can provide valuable insights and perspectives that can be used to support your arguments.

You can also try using ChatGPT to paraphrase or summarize text. ChatGPT can quickly condense the text or generate a number of different paraphrases of the same idea. You can keep typing “again” in the chat until you are satisfied with the result.

This kind of paraphrasing or summarizing can be helpful, for example, to non-native speakers who want to improve the fluency of their text. However, be careful. The outputs often:

  • Contain grammatical errors
  • Include false information

So you should always check the output to make sure it’s correct. If you’re unsure, you can run the text through the Scribbr Grammar Checker and cross-reference the information against other, more credible sources.

An alternative is to use the Scribbr Paraphrasing Tool or Scribbr’s free text summarizer, which were designed specifically for these purposes.

Paraphrase the following text:

“A filter bubble results from a social media algorithm’s attempts to cater to an individual user based on data such as their search history and previous behavior on the site. It results in the user only being presented with information that confirms their biases.”

Restructure the sentences and use other phrasings for the following parts:

  • “results from”
  • “only being presented with”

“A filter bubble is created when a social media algorithm customizes its content for a particular user based on their data, such as their search history and past actions on the platform. This leads to the user being confined to information that reinforces their pre-existing beliefs.”

Note: We strongly advise against having ChatGPT write text for you and trying to pass it off as your own work. This is academically dishonest and is widely considered plagiarism. It will usually result in an automatic fail, as well as other potential consequences.
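The “keep typing ‘again’” workflow described above can also be automated. The sketch below is illustrative only, not an official Scribbr or OpenAI workflow: the function names and stopping rule are assumptions, and the chat model is abstracted as a `send` callable so the loop can be demonstrated without an API key (with the real OpenAI Python library, `send` would wrap a chat-completion call).

```python
def paraphrase_until_satisfied(text, send, is_satisfied, max_attempts=5):
    """Request paraphrases repeatedly, mimicking typing "again" in the chat.

    send(messages) returns the model's reply as a string;
    is_satisfied(reply) decides whether to stop.
    Returns the first acceptable paraphrase, or the last attempt.
    """
    messages = [{"role": "user",
                 "content": f"Paraphrase the following text:\n\n{text}"}]
    reply = ""
    for _ in range(max_attempts):
        reply = send(messages)
        if is_satisfied(reply):
            break
        # Keep the conversation history so "again" refers to the last output.
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": "again"})
    return reply


# A canned stand-in for a real chat model, for demonstration only.
_canned = iter([
    "A filter bubble arises when an algorithm tailors content to a user.",
    "A filter bubble is created when an algorithm customizes content for a user.",
])

result = paraphrase_until_satisfied(
    "A filter bubble results from a social media algorithm's attempts "
    "to cater to an individual user.",
    send=lambda messages: next(_canned),
    is_satisfied=lambda reply: reply.startswith("A filter bubble is created"),
)
```

Whether you loop manually in the chat or in a script like this, the caveat above still applies: check every output for grammatical errors and false information before using it.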


You can also use ChatGPT to get feedback on the argumentation and structure of your paper and suggestions for improvements.

An advantage of this is that you can get feedback from ChatGPT on different text excerpts at any time. However, one downside is that ChatGPT does not have a deep understanding of your research question, so its feedback and suggestions may be shallow or inappropriate.

This kind of feedback cannot replace input from an experienced academic supervisor who’s familiar with your topic. Use ChatGPT feedback as a supplement, not a substitute for human feedback.

If you need in-depth feedback on your writing from someone with academic expertise, try Scribbr’s Proofreading & Editing service.

[Text you want to get feedback on.]

Thank you for submitting your research paper for review. Let me provide you with my constructive criticism:

Clarity of Explanations: Overall, the explanations are clear and concise, but there are a few areas that could benefit from additional clarification. For example, in the first paragraph, you mention “the mental lexicon,” but do not provide a definition or further explanation for what this is. It would also be helpful to provide a definition of “lemma.”

Smoothness of Language: The language in the paper is generally smooth, but there are a few instances where it could be refined. For example, the sentence “The symptoms vary from adding superfluous words and taking pauses to hoarseness of the voice” could be rephrased to “The symptoms of these disorders can range from excessive use of words and pauses to hoarseness.”

Structure of Text: The structure of the text is clear, with each paragraph building on the previous one. However, the transition between the first two paragraphs could be smoother. Perhaps you could add a sentence or two to bridge the gap between the two topics.

Even though ChatGPT is a powerful writing tool, you should keep the following limitations in mind:

  • No original research
  • Outdated information
  • Risk of plagiarism
  • Fake sources and fake “facts”

ChatGPT was trained using existing texts, and the texts created by the AI are based on this data. It’s not capable of doing its own research, only of synthesizing sources that it was trained on.

Additionally, because the texts that it was trained on only go up to 2021, it’s unfamiliar with more recent world events, cultural trends, and developments in different fields of research. This means it may provide outdated information.

Because ChatGPT was trained using existing text, the AI tends to reuse that content in a way that may be considered plagiarism. The AI may copy text from an online publication in its responses.

Additionally, using ChatGPT to write your assignment for you is considered plagiarism regardless of where it got the content from, because you’re passing off something you didn’t write as your own work.

ChatGPT was trained on a wide variety of sources, but it’s not able to accurately tell you what sources it used for any given output. Worse, when asked, it has a tendency to make up sources that don’t actually exist.

In addition, ChatGPT is not designed as a citation generator and cannot add correctly formatted citations to your work. Instead, try a tool designed for this purpose, like the Scribbr Citation Generator.

ChatGPT is not always trustworthy when it comes to giving you factually correct information. While it doesn’t deliberately lie, it also doesn’t have “knowledge” in a traditional sense, and it lacks the ability to verify its claims.

Instead, its responses are based on recognizing and reproducing patterns that it has seen in other texts. This means that if it doesn’t “know” something, it tends to guess. And its guesses are presented with full confidence, just like established facts.

So it’s very important that you carefully check any factual claims made by ChatGPT against more credible sources, and don’t cite ChatGPT as a source of factual information for your research.

If you want more tips on using AI tools , understanding plagiarism , and citing sources , make sure to check out some of our other articles with explanations, examples, and formats.

  • Citing ChatGPT
  • Best grammar checker
  • Best paraphrasing tool
  • ChatGPT in your studies
  • Deep learning
  • Types of plagiarism
  • Self-plagiarism
  • Avoiding plagiarism
  • Academic integrity
  • Best plagiarism checker


You can use ChatGPT to assist in the writing process for your research paper, thesis, or dissertation in the following ways:

  • Developing an outline
  • Generating ideas for sources
  • Paraphrasing and summarizing text
  • Getting feedback on your writing

No, it’s not a good idea to do so in general—first, because it’s normally considered plagiarism or academic dishonesty to represent someone else’s work as your own (even if that “someone” is an AI language model). Even if you cite ChatGPT, you’ll still be penalized unless this is specifically allowed by your university. Institutions may use AI detectors to enforce these rules.

Second, ChatGPT can recombine existing texts, but it cannot really generate new knowledge. And it lacks specialist knowledge of academic topics. Therefore, it is not possible to obtain original research results, and the text produced may contain factual errors.

However, you can usually still use ChatGPT for assignments in other ways, as a source of inspiration and feedback.

No, it is not possible to cite your sources with ChatGPT. You can ask it to create citations, but it isn’t designed for this task and tends to make up sources that don’t exist or present information in the wrong format. ChatGPT also cannot add citations to direct quotes in your text.

Instead, use a tool designed for this purpose, like the Scribbr Citation Generator.

But you can use ChatGPT for assignments in other ways, to provide inspiration, feedback, and general writing advice.

Cite this Scribbr article


Caulfield, J. & Solis, T. (2023, November 16). Using ChatGPT for Assignments | Tips & Examples. Scribbr. Retrieved August 26, 2024, from https://www.scribbr.com/ai-tools/chatgpt-assignments/


Jack Caulfield



ChatGPT essay writing: How to talk to students about AI and integrity

Christine Lee


ChatGPT essay writing is a hot topic, especially in the realm of teaching and learning, where it can impact academic integrity. Should teachers incorporate ChatGPT and AI writing into their teaching curriculum? Or should AI writing be banned and eschewed entirely? Teacher responses are sharply bifurcated: in one camp are those who are using it, or thinking of using it, to facilitate early drafts; in the other are those who don’t want to use it at all and see any use of AI writing as a form of misconduct.

We are at the point of no return when it comes to AI writing. Companies are using it for blog writing and other purposes. Newspapers have long used AI writing. Consequently, AI writing may become a skill students need to enter the workplace. At the same time, AI writing misuse—when students use AI writing and represent it as their own original work—is a real concern within education.

To that end, we want to focus not on AI writing but on AI writing misuse, beginning with what ChatGPT is, what students understand about ChatGPT, discussing concerns around ChatGPT misuse, and how to then talk to students about AI writing and academic integrity.

OpenAI, an AI and research company, launched ChatGPT on November 30, 2022, and it became the fastest-growing consumer app in history, corralling 100 million active users within two months of availability. In comparison, TikTok took nine months to hit that number of users; Instagram, two and a half years ( Ortiz, 2023 ).

ChatGPT is a natural language processing tool powered by AI that allows users to interact with a chatbot using human-like discussion. It can answer questions in human language and compose emails, essays, and code. ChatGPT runs on a language model architecture called the Generative Pre-trained Transformer (the “GPT” in ChatGPT). An interesting tidbit is that ChatGPT only has access to information up to 2021, compared to Google, which has access to the latest data.

It’s important not to make assumptions and instead to learn what your students understand about ChatGPT before opening a discussion about AI writing. Students are extremely aware of ChatGPT’s existence, and many have a very astute grasp of AI writing and its intersection with academic integrity.

The New York Times asked students to respond with their thoughts about ChatGPT , and student responses were insightful and very much a reflection of the discussions educators are also having. They, like educators, acknowledged ChatGPT as a powerful and sometimes inaccurate tool, expressing concerns about its impact on learning should students rely too much on AI writing. “One of my biggest worries is that I would rely too much on these tools and lose the capacity for critical and creative thought,” states one student. “I personally want to learn how to communicate myself clearly and to find my own distinctive voice as a student. If I always rely on ChatGPT to generate material for me, I might not be challenged to improve as a writer” ( Faris ).

Some students said that AI writing is flat-out a form of cheating. And some students embraced AI writing, stating, “If students are never taught about, and never learn how to operate ChatGPT in their schools, they will be unprepared for their life ahead, which will be filled with writing AIs” ( Whit, Byfield, MA ).

These student views align with the results of a survey of K-12 teachers and students aged 12-17 conducted by Impact Research for the Walton Family Foundation . The results from the two groups were not dissimilar. For instance, the survey found that “Teachers and students agree ChatGPT will be important to incorporate into schooling. Two-thirds of students (65%) and three-quarters of teachers (76%) agree that integrating ChatGPT for schools will be important for the future” ( Impact Research, 2023 ).

Another survey by Intelligent also falls in line with The New York Times student insights, stating, “Three-quarters of students who have used ChatGPT for homework say it is ‘somewhat’ (46%) or ‘definitely’ (29%) cheating. These numbers increase when including students who say they are familiar with ChatGPT but haven’t used it themselves. With this group included, 80% say it is ‘somewhat’ (48%) or ‘definitely’ (32%) cheating.”

And yet, “When asked how popular they believe the use of the tool is among other students, 76% say its use is ‘somewhat’ (50%) or ‘very’ (26%) popular.” In other words, even though they believe AI writing use is a form of misconduct, students use it anyway ( Intelligent, 2023 ).

Bottom line? Students are ready to talk about ChatGPT, AI writing, and academic integrity. They have awareness of ChatGPT and their own preliminary thoughts; it’s up to educators to bring nuance and clarity about ChatGPT and academic integrity and define AI writing misuse.

As with all academic integrity concerns, it is important to make your students feel seen and supported in a conversation about AI writing and ChatGPT. Research has shown that students who feel connected to their teacher are less likely to cheat ( Orosz, Tóth-Király, Böthe, Kusztor, Kovács, & Jánvári 2015 ). Ergo, how a discussion about AI writing and ChatGPT use and misuse is executed is an opportunity to strengthen student-teacher relationships, deepen bonds to academic integrity, and sharpen understanding of AI writing and ChatGPT use. A few key points to keep in mind as you discuss ChatGPT essay writing with your students include:

  • Center students
  • Create a safe space
  • Offer clarity and nuance
  • Provide room for reflection

One of the most straightforward ways to begin discussing ChatGPT is to ask your students what they know or to have them write you a letter about what they know and what their concerns and questions may be. This letter, for the record, can even be anonymous to engender more honesty. Centering their established understanding of AI writing and addressing their questions is one effective way to begin a discussion about AI writing, ChatGPT, and academic integrity. Some students may still want an introduction to the concept, and of course, educators should share what they know, but by interviewing students first, engagement can increase.

The topic of AI writing and ChatGPT within education is so heated that students may feel it is entirely taboo or too risky to discuss. Create a safe space around an AI writing conversation by having students discuss questions and opinions about ChatGPT and AI writing within small groups before sharing within a larger classroom discussion (this way, it’s harder to pinpoint who said what). Another way to create a safe space might be to ask students to anonymously submit questions about AI writing and ChatGPT on slips of paper beforehand.

It is also safer to discuss a topic that is presented more objectively. Perhaps, too, have students respond to a third-party journalism article on AI writing so that any insights are a reaction to words written by someone they do not know.

Before discussion begins, establish ground rules or guidelines, whether you provide them yourself or invite the class to form them. They can include rules about courtesy, respect, and safety, such as critiquing ideas rather than people and allowing everyone a turn to speak. Having ground rules prior to discussion gives everyone a reference point should things get tense.

Focusing on how to properly use AI writing generators is the crux of how to talk to students about AI writing, ChatGPT, and academic integrity. While it’s tempting to paint a black-and-white picture of AI writing use or ban it altogether, the reality is that AI writing and ChatGPT are a pervasive presence that should be addressed head on and with nuance. The reality, too, is that students will eventually be faced with AI writing use; providing them with the tools and knowledge to distinguish misuse from valid use of ChatGPT is critical to lifelong learning and academic integrity.

While a reliance on AI writing tools can impact critical thinking, a discussion about AI writing tools offers incredible space for critical thinking and analysis. This is also an opportunity to check out resources on AI writing and academic integrity, including a guide to approaching AI-generated text. What is its impact on learning, both positive and negative? When AI writing is prevalent outside of academia—for instance, in journalism—what is the difference between use and misuse? How is it being used properly? How is it being used improperly? These questions and more can lead to a deeper understanding of how, when, where, and why to use or not use AI writing tools.

Reflection is an important part of learning. Consider having students write their reflections on the class discussion, whether anonymously or as a graded assignment, so that different types of learners have a chance to synthesize the discussion and demonstrate their knowledge. This also gives educators insight into what exactly students felt were the key takeaways from a class discussion on AI writing. The assignment can focus on class discussion points (which, incidentally, makes ChatGPT misuse very difficult).

This summary reflection can also be an opportunity for students to express how they felt about the discussion or to share any questions that remain unanswered for them.

While a discussion with students about AI writing and integrity is not something that can be controlled to the minute, it is important to have discussion points. And those discussion points will center around concerns and benefits around ChatGPT use in education. The following are some talking points you may want to consider.

According to the National Education Association, “The biggest concern is cheating. More than one-fourth of teachers have caught students using the chatbot to cheat, the Study.com survey found. An influx of ChatGPT generated work has left many educators spending more time checking for AI plagiarism and revamping lesson plans to be ‘AI-proof’” ( Blose, 2023 ).

A discussion with students may involve asking them if they agree with this concern and what they feel “AI-proof” assignments might look like.

AI writing misconduct is muddied by how it is used in education versus industry. The most analogous example of this conflict is in software development and programming.

In software development, it is standard procedure for engineers to work in collaboration with others. It is also standard procedure to use the code of other engineers made available on GitHub or via open-source software without attribution. On the other hand, when students in software engineering courses use others’ source code, it is called programming plagiarism; and when they collaborate on an assignment that is meant for individual assessment, it is called collusion.

In both cases, collaboration and using shared resources aren’t “bad things,” depending on the context, particularly in industry. AI writing, when not disclosed and when students represent it as their own, can often qualify as misconduct just as when software engineering students use an open repository like Github for answers to their individual assignments.

And unlike contract cheating, there are viable use cases for AI writing, such as when teachers incorporate it into their curriculum and students use it in a way that is transparent.

The difference between journalists using AI writing and students using AI writing, of course, is that students are still learning and need to demonstrate their understanding to instructors; AI writing in classrooms, when used as a shortcut solution without transparency, impacts learning and thus qualifies as misuse.

It is important to highlight that students are in school to learn; any AI writing use should be disclosed. Bottom line: if students use AI writing tools and claim that work as their own individual, original work without attribution, that may be a case of misuse and misconduct.

In the early weeks and days following the launch of OpenAI’s ChatGPT, many schools throughout the world, from the United States to Australia to the UK, banned ChatGPT use outright.

This is a discussion point that provides an opportunity to explore nuances in AI writing use, and to approach AI writing in a way that is less personal and more academic. We now see that AI writing tool bans aren’t effective; both students and teachers have slowly come to accept ChatGPT as having useful potential. What might those positive uses be?

AI writing can help students with brainstorming and structure. It can help those who are learning a new language, such as English, overcome language barriers while absorbing new concepts, because AI writing tools can translate content for students. Discussing ways in which AI writing can harm or benefit learning gives students ownership in the discussion and helps them understand AI writing’s impact on learning more deeply.

But even when AI writing tools can be used, a reliance on ChatGPT can shortcut learning.

One way to emphasize the shortcomings of ChatGPT misuse is to ask it the same questions you ask your students in discussion and show how the students’ discussion was more nuanced, detailed, and comprehensive. Here, we asked ChatGPT about impacts on learning (yes, we used ChatGPT). This is ChatGPT’s response:

“ChatGPT, as a language model, can have several impacts on learning:

1. Information Access: ChatGPT has access to a vast amount of knowledge and information, making it a valuable tool for learning. It can provide quick and accurate answers to questions, helping users gain knowledge on a wide range of topics.

2. Learning Assistance: ChatGPT can act as a learning assistant, providing explanations, examples, and clarifications. It can help users understand complex concepts, solve problems, and guide them through the learning process.

3. Personalized Learning: ChatGPT can adapt to individual learners' needs and preferences. It can provide tailored recommendations, resources, and study materials based on a user's specific requirements, making the learning experience more personalized and effective.

4. Language Practice: ChatGPT can engage in conversation and provide language practice opportunities. Users can improve their writing, speaking, and communication skills by interacting with ChatGPT and receiving feedback on their language usage.

5. Creative Exploration: ChatGPT's ability to generate text can spark creativity and encourage exploration. Users can brainstorm ideas, discuss possibilities, and receive suggestions, fostering a more innovative and open-minded approach to learning.

6. Collaboration and Peer Learning: ChatGPT can facilitate collaboration among learners. Multiple users can interact with ChatGPT simultaneously, creating a shared learning environment where they can exchange ideas, discuss concepts, and learn from one another.

7. Continuous Learning: ChatGPT can be continuously updated with new information, ensuring that learners have access to the most up-to-date knowledge. It can keep up with emerging trends, advancements, and discoveries, providing users with the latest insights and developments.

It's important to note that while ChatGPT can be a valuable learning tool, it should not be seen as a replacement for traditional forms of learning or human instructors. Its effectiveness depends on how it is utilized, and it should be used in conjunction with other learning resources and strategies for a well-rounded learning experience.”

Note the vague and basic language and the lack of examples, not to mention ChatGPT’s self-promotion; the content is not only self-laudatory and uncritical, it doesn’t center learning or students in its response. Furthermore, the claim that ChatGPT has access to the “most up-to-date knowledge” is questionable, as various sources report that ChatGPT only has access to information up to 2021 (it is now 2023). Even though Microsoft, as of late March 2023, integrated Bing into ChatGPT , this is an evolving situation.

Rewriting this is imperative. Here’s the list again, written by a human (me).

ChatGPT has access to many information venues, mostly via the internet, and has the potential to provide a breadth of answers; however, ChatGPT is dependent on human inputs and resources, so it can also be flawed and thereby further misinformation. If students are fully reliant on ChatGPT and AI writing, they may walk away with erroneous information, which goes against the goals of education. It is important to know that ChatGPT’s output reflects the patterns in its training data; if that data is biased, then the output is based not on facts but on opinions.

According to NEA Today, ChatGPT’s own website states that “ChatGPT occasionally generates misinformation, untimely and biased responses. The program is only as knowledgeable as the information it has been introduced to and trained with. Even creators acknowledge that the program is not a credible source of factual information and should not be treated as an academic source. Many teachers worry that ChatGPT will make teaching and learning—particularly writing assignments—more formulaic” ( Blose, 2023 ).

Particularly for students who are learning a new language, ChatGPT can provide scaffolding to write in a new language; students can practice, receive feedback, and ask for help from ChatGPT. AI writing tools can help students express their ideas while bypassing language barriers. At the same time, this scaffolding must not be a replacement for learning, because students will eventually have to have an awareness, if not a deep understanding, of all components of writing.

ChatGPT may be useful for the brainstorming process; however, there is a limit to its output and it is up to the student to come up with their own analysis and introduce their original ideas. Furthermore, ChatGPT is intelligent, but can misunderstand context, leading to incorrect output. Again, it is up to the user to discern whether or not the output is correct or offers new insights. That said, ChatGPT is very good at providing structure to arguments and can help students formulate their narratives.

ChatGPT is available 24/7 whereas teachers cannot be available at all hours. When a student has a question, ChatGPT can be a resource at any time. Furthermore, students can formulate questions according to their interests and ChatGPT can then examine that interest further. Even when the output might be cursory, this helps students ask “good questions,” and fine-tune their thinking in different ways.

Students can become reliant on technology and “skip” important steps in the learning process. In the example of calculators, students need to be able to add and subtract and perform the functions of a calculator; a calculator, in sum, is not a substitute for knowledge but an aid. Similarly, ChatGPT can provide scaffolding, but it shouldn’t replace learning.

Critical thinking is an important learning outcome. When AI writing tools and ChatGPT essay writing are misused extensively, students may not be able to think for themselves. This concern is a great one. The Brookings Institution addresses this issue, adding, “More centrally, however, is that the bot is more of a synthesizer than a critical thinker. It would do well on a compare-and-contrast essay, but is less able to create a unique thesis and to defend that thesis” ( Hirsh-Pasek & Blinkoff, 2023 ).

Because ChatGPT can produce whole essays, when students don’t disclose their use of AI writing tools and don’t revise ChatGPT output extensively, the result is not only a lack of original thought but also academic dishonesty and misconduct. This is one of the most significant concerns surrounding ChatGPT and the reason that AI writing detection tools are in development. Without writing practice, students don’t have the opportunity to develop their authorial voice and writing confidence. Providing a space to “fail safely” is important and a way to mitigate AI writing misuse. Furthermore, ChatGPT’s writing output is less than engaging and in many ways vague and passive; it isn’t the optimal model from which students should learn.

When ChatGPT is used as a shortcut, it can harm student learning outcomes for myriad reasons: students may learn incorrect facts and, if dependent on ChatGPT, never advance to original and critical thinking. But when used correctly, it can aid students in learning, much as calculators do.

Without advocating AI writing tools as a blanket solution for all learning, how do you communicate the positive impacts of ChatGPT to students?

Students understand that ChatGPT essay writing is here to stay, and that it is a skill they will most likely be asked to use in the workplace after their academic journey. By highlighting the nuances of positive AI writing use, it is easier to help students understand how AI writing can help them learn while avoiding misconduct.

The unique situation of students today is that they experienced the launch of ChatGPT simultaneously with their teachers. Some educators felt that ChatGPT would severely compromise the existence of essays as an assessment format, or even the entire way students are taught. As a result, students witnessed the initial panic from the education sector, with some school districts banning AI writing tools outright. In a humorous example of this panic, one person on Reddit joked that ChatGPT would bring typewriters back as a writing platform (Fonky_Fesh, 2022).

But months out, educators are reevaluating what AI writing tools mean for the educational journey. The unique situation of students today, too, is that they can be a part of this discussion.

While AI writing tools aren’t undermining education as much as previously imagined, they will make assessments more thoughtful. According to an Insider article, Dr. Leah Henrickson, a lecturer at the University of Leeds, “thinks that artificial intelligence, if used carefully, might even make education fairer.” Henrickson states that the University of Leeds is also “looking at modifying its assessments in reaction to the rise in artificial intelligence. It hopes to focus more on critical analysis and judgement [sic]—a human skill—rather than straightforward information retention, which a chatbot like ChatGPT can easily replicate” (Stacey, 2023).

The goal of education is to help students understand concepts and arrive at their own original thoughts so that they can enter the global marketplace of ideas. And if assessments move towards measuring depth of knowledge, then we are all in a better place.

According to The New York Times, one student states, “A teacher at my school recently asked her class to use ChatGPT to write papers on the novel they were reading in class. The students also wrote their own papers, and compared the results. I found this teaching method to be extremely accommodating and productive. Rather than framing ChatGPT as a way to cheat, and therefore encouraging students to secretly use the forbidden program, teachers can show their students how to use it to their advantage, while still keeping their own original ideas. In today’s world, technology is quickly becoming more intelligent, but I don’t think we have to fear it” (Devin).

By modeling proper AI writing tool use within a classroom, students can learn how best to use tools like ChatGPT.

ChatGPT provides opportunities to make the writing process more transparent. For instance, having students critique ChatGPT output helps them refine their reviewing skills, and having them revise ChatGPT-produced first drafts gives them direct access to the revision process and demystifies it.

Emily Donahoe, a writing tutor and educational developer at the University of Mississippi, says that “ChatGPT could help teachers shift away from an excessive focus on final results.” In one exercise, she asked her students to use ChatGPT to generate an argument and then “had them annotate it according to how effective they thought the argument was for a specific audience. Then they turned in a rewrite based on their criticism” (Heaven, 2023).

Using ChatGPT to provide more insights into writing can help teachers communicate to students how best to use AI writing tools.

ChatGPT can help language learners because it has the ability to translate assignments into a student’s native language. By prompting ChatGPT with a simple “translate this reading into Korean,” educators can bridge learning gaps (Blose, 2023).

Furthermore, because ChatGPT engages in conversation, learners can practice their grammar and syntax by conversing with it, and it can provide feedback to increase language proficiency. In their Medium post, Michael King states, “[When] I started testing ChatGPT as a German tutor, I quickly realized that it was much more than just a source of information. ChatGPT’s unique ability to actively test my learned skills and provide suggestions for improvement made it a true game-changer.”

ChatGPT’s impact on language learning has many more benefits, including instantaneous feedback, personalized learning, and increasing exposure to the target language (Cai, 2023). Using ChatGPT as an actual learning tool can help students understand its benefits while becoming aware of its pitfalls.
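The conversational practice described above depends on ChatGPT’s multi-turn message format: the full conversation history is re-sent with every request, which is what lets the model refer back to earlier mistakes. As a rough sketch only (the `openai` Python package, the model name, and the helper names are assumptions for illustration, not something the articles above specify), a tutoring exchange might be scripted like this:

```python
# Sketch of a language-practice loop using the chat message format.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name below is an illustrative assumption.

def make_tutor_history(target_language: str) -> list[dict]:
    """Start a conversation history with a tutoring system prompt."""
    return [{
        "role": "system",
        "content": (
            f"You are a patient {target_language} tutor. Converse in "
            f"{target_language}, then correct my grammar and syntax in English."
        ),
    }]

def add_turn(history: list[dict], role: str, content: str) -> list[dict]:
    """Append one turn; the whole history is re-sent on every request."""
    history.append({"role": role, "content": content})
    return history

def practice_once(history: list[dict], student_message: str) -> str:
    """Send the student's message and return the tutor's reply (network call)."""
    from openai import OpenAI
    client = OpenAI()
    add_turn(history, "user", student_message)
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    text = reply.choices[0].message.content
    add_turn(history, "assistant", text)
    return text

# Building the history alone needs no network access.
# The student sentence contains a deliberate grammar error for the tutor to catch.
history = make_tutor_history("German")
add_turn(history, "user", "Ich habe gestern ins Kino gegangen.")
```

Because each request carries the accumulated history, the feedback loop King describes (being tested on learned skills across a session) falls out of the message format itself rather than any special tutoring feature.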

Students come to our classrooms with an awareness of AI writing tools. While many students comprehend that AI writing can be misused, it’s important to define the difference between proper and improper use of tools like ChatGPT. Having a discussion about learning, and about the ways in which ChatGPT can help or inhibit how students absorb information, can highlight the intersection of AI writing tools and academic integrity.

Students know how to use new AI writing tools; what they need to learn is why ChatGPT may result in lower grades, and how to stay in sync with the learning journey and educator goals. A nuanced, in-depth discussion helps clarify the gray areas of AI writing tool use: for instance, that while these tools are used in professional writing, students are still learning and need to produce their own original writing, even if ChatGPT might be used to generate a first draft.

Pedagogy, too, is evolving. Students are at a remarkable inflection point in history, and they should know they are privileged to be part of the conversation, one that is going to change the future, including the ways in which knowledge is measured.

The Brookings Institution quotes Adam Stevens as saying that ChatGPT is only a threat if our education system continues to “pursue rubric points and not knowledge,” and adds: “It is critical for all educators to follow their colleague’s example…The old education model in which teachers deliver information to later be condensed and repeated will not prepare our students for success in the classroom—or the jobs of tomorrow. We should allow that model to die a peaceful death. Used in the right way, ChatGPT can be a friend to the classroom and an amazing tool for our students, not something to be feared” (Hirsh-Pasek & Blinkoff, 2023).


How to Write a Paper with ChatGPT | Tips & Examples

Published on 17 August 2023 by Koen Driessen.

A good research paper demonstrates your academic writing skills and knowledge of your chosen topic.

Your research paper should be based on in-depth independent research. However, generative AI tools like ChatGPT can be effectively used throughout the research process to:

  • Brainstorm research questions
  • Develop a methodology
  • Create an outline
  • Find sources
  • Summarise and paraphrase text
  • Provide feedback


Table of contents

  • Brainstorming research questions
  • Developing a methodology
  • Creating an outline
  • Finding sources
  • Summarising or paraphrasing text
  • Receiving feedback
  • Other interesting articles
  • Frequently asked questions

You can use ChatGPT to help formulate research questions, brainstorm potential thesis statements, or narrow down a broad subject area to a specific topic that interests you.

However, make sure that the outputs make sense and meet the requirements of your assignment. You can adapt these outputs manually, or use further prompts to ensure they meet your needs. For more information, see our tips for creating effective ChatGPT prompts.

  • Generate three possible research questions for a research paper on the following topic: “The role of virtual reality in enhancing training and skill development in various industries.”
  • “What are the cognitive and skill development benefits of virtual reality-based training compared to traditional training methods across different industries?”
  • “How does the level of immersion in virtual reality training impact knowledge retention and skill transfer in fields such as medicine, aviation, and manufacturing?”
  • “What are the key factors that influence the successful integration and adoption of virtual reality technology for training purposes in diverse industries, and what challenges hinder its widespread implementation?”
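Prompts like the one above can also be composed and sent programmatically, which is handy when brainstorming across several topics. The sketch below is a rough illustration only: the `openai` Python package, the model name, and the helper names are assumptions, not something this article prescribes. Only the prompt-building step is essential to the idea.

```python
# Sketch: building a brainstorming prompt and (optionally) sending it.
# The `openai` package and the model name are illustrative assumptions.

def build_brainstorm_prompt(topic: str, n: int = 3) -> str:
    """Compose a research-question brainstorming prompt for a given topic."""
    return (
        f"Generate {n} possible research questions for a research paper "
        f"on the following topic: \"{topic}\""
    )

def ask_chatgpt(prompt: str) -> str:
    """Send the prompt to the API (requires OPENAI_API_KEY; network call)."""
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Building the prompt needs no network access:
prompt = build_brainstorm_prompt(
    "The role of virtual reality in enhancing training and skill "
    "development in various industries."
)
```

Keeping the prompt template in one helper makes it easy to re-run the same request with a narrower topic once the first batch of questions suggests a direction.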


Your paper should clearly indicate the data collection and analysis methods you used in your research.

You can use ChatGPT to help decide what kind of research you will conduct (e.g., qualitative or quantitative), how you will analyse the data, and how you will mitigate potential research bias.

Bear in mind that you should be able to justify your chosen methodology: ensure that it is either a standard methodology in the field or can otherwise be defended.

  • Suggest three quantitative research methods appropriate to the following question: “What are the cognitive and skill development benefits of virtual reality-based training compared to traditional training methods in the field of healthcare?”
  • Experimental Study: Conduct a randomized controlled trial (RCT) where healthcare professionals are randomly assigned to either a virtual reality-based training group or a traditional training group. Both groups would receive training in specific healthcare tasks, and their cognitive and skill development outcomes would be measured and compared. Performance metrics, such as accuracy, speed, and efficiency, could be quantitatively assessed to determine the benefits of virtual reality-based training over traditional methods.
  • Survey Research: Design a comprehensive survey targeting healthcare professionals who have undergone both virtual reality-based training and traditional training. The survey could include Likert-scale questions and structured response options to gauge their perceived cognitive and skill development benefits from each training method. Additionally, open-ended questions could be included to gather qualitative insights on specific strengths and weaknesses of each training approach.
  • Task Performance Analysis: Implement a task-based assessment that simulates real-life healthcare scenarios, where participants are evaluated on their ability to handle critical situations. Participants would go through both virtual reality-based training and traditional training before being assessed in the simulated scenarios. Their performance in the assessment could be objectively measured using predefined criteria, allowing for a quantitative comparison of skill development between the two training methods.

A clear outline can help structure your paper, organise your arguments effectively, and make your research more efficient.

You can use generative AI tools like ChatGPT to formulate a potential outline to help guide your research. You can be as specific as you like, giving an overview of your research objectives, main arguments, and any evidence you intend to include in your paper.

However, ChatGPT outputs sometimes contain factual inaccuracies. It’s important to check the headings and subheadings suggested in the output and adapt them to meet your needs.

  • Create an outline for an academic research paper with the following research question: “What are the cognitive and skill development benefits of virtual reality-based training compared to traditional training methods in the field of healthcare?” The paper will be about 10 pages long.

B. Existing studies on cognitive and skill development benefits of virtual reality training

C. Comparison of virtual reality-based training with traditional training methods

D. Gaps in the literature and research limitations

III. Theoretical Framework

A. Theoretical basis for cognitive and skill development in training

B. Application of cognitive theories to virtual reality-based training

C. The role of immersive learning environments in skill acquisition

D. Conceptual framework for the study

IV. Research Methodology

A. Research design and approach (e.g., quasi-experimental, pre-post intervention)

B. Selection of participants and sample size calculation

C. Description of the virtual reality-based training and traditional training programs

D. Instruments and measures for assessing cognitive and skill development

E. Data collection procedures and ethical considerations

V. Data Analysis

A. Overview of statistical methods used for data analysis

B. Treatment of data and data cleaning

C. Quantitative analysis of cognitive and skill development outcomes

D. Presentation of findings with charts and tables

VI. Results

A. Presentation of the results of the cognitive and skill development assessments

B. Comparison of outcomes between virtual reality-based training and traditional training

C. Statistical significance and effect size analysis

D. Interpretation of the results and implications

VII. Discussion

A. Interpretation of findings in the context of the research question

B. Comparison of results with existing literature

C. Explanation of how virtual reality-based training impacts cognitive processes and skill development

D. Limitations of the study and areas for future research

VIII. Practical Implications

A. Practical applications of virtual reality-based training in healthcare settings

B. Recommendations for integrating virtual reality training in healthcare education

C. Benefits of virtual reality for continuous professional development in healthcare

IX. Conclusion

A. Summary of key findings

B. Reiteration of the research question and hypothesis

C. Contributions of the study to the field of healthcare training

D. Concluding remarks

X. References

You can use ChatGPT to help find sources relevant to your research. However, it’s important to be aware of the limitations of ChatGPT (for example, it occasionally cites nonexistent sources).

Instead of asking ChatGPT for specific source recommendations, try asking it for recommendations on the types of sources relevant to your research topic. You can also use it to generate a list of relevant keywords to use in your literature search.

  • What types of sources could I use to write a research paper on the following research question? “What are the cognitive and skill development benefits of virtual reality-based training compared to traditional training methods in the field of healthcare?”
  • Academic Journals: Peer-reviewed journals are valuable sources of scholarly research. Look for articles in medical education, cognitive psychology, virtual reality applications in healthcare, and technology-enhanced learning.
  • Research Studies: Seek out published research studies that specifically investigate the impact of virtual reality-based training on cognitive and skill development in healthcare professionals.
  • Systematic Reviews and Meta-Analyses: These studies provide comprehensive overviews of existing research on a specific topic, making them a valuable source for summarizing and analyzing the overall evidence.
  • Books and Book Chapters: Academic books and book chapters authored by experts in the fields of medical education, virtual reality, cognitive science, or instructional design can provide in-depth insights and theoretical foundations.
  • Conference Proceedings: Proceedings from relevant conferences and symposiums may present the latest findings and advancements in virtual reality-based training and healthcare education.
  • Educational Institution Websites: Universities and medical schools often publish research papers, case studies, and whitepapers related to medical education and virtual reality training.
  • Industry Reports: Reports and whitepapers from healthcare organizations or technology companies exploring the benefits of virtual reality-based training in the healthcare sector.
  • Government and Non-profit Organisations: Look for reports and publications from government agencies and non-profit organisations that focus on medical education and healthcare training.
  • Online Databases: Utilize academic databases such as PubMed, Google Scholar, IEEE Xplore, and ScienceDirect to search for peer-reviewed research articles and conference papers.
  • Expert Interviews: If possible, conduct interviews with experts in medical education, cognitive science, and virtual reality to gain valuable insights and perspectives.
  • Surveys and Polls: Refer to reputable surveys and polls related to virtual reality-based training in the healthcare industry for additional statistical data.

You can use ChatGPT to paraphrase or summarise text. This can help you to condense sources to their most important points and explore new ways of expressing your ideas.

Alternatively, you can use the more specialised tools featured on Scribbr’s AI writing resources page, which are designed specifically for these purposes and give a smoother user experience.

  • Provide a short summary of the following text: “One of the most profound applications of VR in healthcare is its use in surgical training. Traditionally, surgical training has relied on a combination of didactic lectures, observation of procedures, and hands-on experience in the operating room. However, these methods may not provide the ideal environment for novices to develop the necessary dexterity and spatial understanding essential for complex surgeries. VR overcomes these limitations by offering immersive, true-to-life simulations that allow surgeons to navigate through intricate anatomical structures, practice incisions, and learn to manage unexpected challenges that may arise during surgeries. This hands-on experience in a risk-free environment not only enhances technical skills but also builds confidence and reduces the learning curve for new surgical techniques.”
  • Virtual Reality (VR) has a profound impact on healthcare, particularly in surgical training, as it provides immersive simulations that help surgeons develop dexterity, spatial understanding, and confidence for complex surgeries in a risk-free environment, ultimately enhancing their technical skills and reducing the learning curve.

When you’ve finished writing your research paper, you can use ChatGPT to receive feedback. You can be as specific as you like, selecting particular aspects the output should focus on (e.g., tone, clarity of structure, appropriateness of evidence to support your arguments).

You can also use ChatGPT to check grammar, spelling, and punctuation. However, it’s not designed for this purpose and occasionally misses errors. We recommend using a more specialised tool like Scribbr’s free grammar checker. Or, for more comprehensive feedback, Scribbr’s proofreading and editing service.

  • Feedback the above text with regard to appropriateness of evidence and coherence of structure.
  • The research paper demonstrates a well-structured and coherent presentation of the cognitive and skill development benefits of VR-based surgical training. The introduction effectively sets the context and rationale, while the literature review incorporates relevant historical and empirical evidence. The theoretical framework strengthens the foundation for the analysis. The methodology section could provide more details on the research design and study selection criteria. However, the sections on cognitive and skill development benefits present compelling evidence, supported by appropriate sub-sections. The exploration of improved patient outcomes lacks concrete examples but effectively outlines potential benefits. The section on challenges and future directions raises relevant points, but suggestions for solutions could enhance practicality. Overall, the paper combines theoretical, empirical, and practical evidence, offering a well-rounded approach. It establishes the significance of VR-based surgical training and concludes with a concise summary of key findings.

If you want more tips on using AI tools, understanding plagiarism, and citing sources, make sure to check out some of our other articles with explanations, examples, and formats.

  • Citing ChatGPT
  • Best grammar checker
  • Best paraphrasing tool
  • ChatGPT in your studies
  • Is ChatGPT trustworthy?
  • Types of plagiarism
  • Self-plagiarism
  • Avoiding plagiarism
  • Academic integrity
  • Best plagiarism checker

Citing sources

  • Citation styles
  • In-text citation
  • Citation examples
  • Annotated bibliography

Yes, you can use ChatGPT to summarise text. This can help you understand complex information more easily, summarise the central argument of your own paper, or clarify your research question.

You can also use Scribbr’s free text summariser, which is designed specifically for this purpose.

Yes, you can use ChatGPT to paraphrase text to help you express your ideas more clearly, explore different ways of phrasing your arguments, and avoid repetition.

However, it’s not specifically designed for this purpose. We recommend using a specialised tool like Scribbr’s free paraphrasing tool, which will provide a smoother user experience.

No, having ChatGPT write your college essay can negatively impact your application in numerous ways. ChatGPT outputs are unoriginal and lack personal insight.

Furthermore, passing off AI-generated text as your own work is considered academically dishonest. AI detectors may be used to detect this offense, and it’s highly unlikely that any university will accept you if you are caught submitting an AI-generated admission essay.

However, you can use ChatGPT to help write your college essay during the preparation and revision stages (e.g., for brainstorming ideas and generating feedback).


Driessen, K. (2023, August 17). How to Write a Paper with ChatGPT | Tips & Examples. Scribbr. Retrieved 26 August 2024, from https://www.scribbr.co.uk/using-ai-tools/chatgpt-paper/


The College Essay Is Dead

Nobody is prepared for how AI will transform academia.


Suppose you are a professor of pedagogy, and you assign an essay on learning styles. A student hands in an essay with the following opening paragraph:

The construct of “learning styles” is problematic because it fails to account for the processes through which learning styles are shaped. Some students might develop a particular learning style because they have had particular experiences. Others might develop a particular learning style by trying to accommodate to a learning environment that was not well suited to their learning needs. Ultimately, we need to understand the interactions among learning styles and environmental and personal factors, and how these shape how we learn and the kinds of learning we experience.

Pass or fail? A- or B+? And how would your grade change if you knew a human student hadn’t written it at all? Because Mike Sharples, a professor in the U.K., used GPT-3, a large language model from OpenAI that automatically generates text from a prompt, to write it. (The whole essay, which Sharples considered graduate-level, is available, complete with references, here.) Personally, I lean toward a B+. The passage reads like filler, but so do most student essays.

Sharples’s intent was to urge educators to “rethink teaching and assessment” in light of the technology, which he said “could become a gift for student cheats, or a powerful teaching assistant, or a tool for creativity.” Essay generation is neither theoretical nor futuristic at this point. In May, a student in New Zealand confessed to using AI to write their papers, justifying it as a tool like Grammarly or spell-check: “I have the knowledge, I have the lived experience, I’m a good student, I go to all the tutorials and I go to all the lectures and I read everything we have to read but I kind of felt I was being penalised because I don’t write eloquently and I didn’t feel that was right,” they told a student paper in Christchurch. They don’t feel like they’re cheating, because the student guidelines at their university state only that you’re not allowed to get somebody else to do your work for you. GPT-3 isn’t “somebody else”—it’s a program.

The world of generative AI is progressing furiously. Last week, OpenAI released an advanced chatbot named ChatGPT that has spawned a new wave of marveling and hand-wringing, plus an upgrade to GPT-3 that allows for complex rhyming poetry; Google previewed new applications last month that will allow people to describe concepts in text and see them rendered as images; and the creative-AI firm Jasper received a $1.5 billion valuation in October. It still takes a little initiative for a kid to find a text generator, but not for long.

The essay, in particular the undergraduate essay, has been the center of humanistic pedagogy for generations. It is the way we teach children how to research, think, and write. That entire tradition is about to be disrupted from the ground up. Kevin Bryan, an associate professor at the University of Toronto, tweeted in astonishment about OpenAI’s new chatbot last week: “You can no longer give take-home exams/homework … Even on specific questions that involve combining knowledge across domains, the OpenAI chat is frankly better than the average MBA at this point. It is frankly amazing.” Neither the engineers building the linguistic tech nor the educators who will encounter the resulting language are prepared for the fallout.

A chasm has existed between humanists and technologists for a long time. In the 1950s, C. P. Snow gave his famous lecture, later the essay “The Two Cultures,” describing the humanistic and scientific communities as tribes losing contact with each other. “Literary intellectuals at one pole—at the other scientists,” Snow wrote. “Between the two a gulf of mutual incomprehension—sometimes (particularly among the young) hostility and dislike, but most of all lack of understanding. They have a curious distorted image of each other.” Snow’s argument was a plea for a kind of intellectual cosmopolitanism: Literary people were missing the essential insights of the laws of thermodynamics, and scientific people were ignoring the glories of Shakespeare and Dickens.

The rupture that Snow identified has only deepened. In the modern tech world, the value of a humanistic education shows up in evidence of its absence. Sam Bankman-Fried, the disgraced founder of the crypto exchange FTX who recently lost his $16 billion fortune in a few days, is a famously proud illiterate. “I would never read a book,” he once told an interviewer. “I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that.” Elon Musk and Twitter are another excellent case in point. It’s painful and extraordinary to watch the ham-fisted way a brilliant engineering mind like Musk deals with even relatively simple literary concepts such as parody and satire. He obviously has never thought about them before. He probably didn’t imagine there was much to think about.

The extraordinary ignorance on questions of society and history displayed by the men and women reshaping society and history has been the defining feature of the social-media era. Apparently, Mark Zuckerberg has read a great deal about Caesar Augustus, but I wish he’d read about the regulation of the pamphlet press in 17th-century Europe. It might have spared America the annihilation of social trust.

These failures don’t derive from mean-spiritedness or even greed, but from a willful obliviousness. The engineers do not recognize that humanistic questions—like, say, hermeneutics or the historical contingency of freedom of speech or the genealogy of morality—are real questions with real consequences. Everybody is entitled to their opinion about politics and culture, it’s true, but an opinion is different from a grounded understanding. The most direct path to catastrophe is to treat complex problems as if they’re obvious to everyone. You can lose billions of dollars pretty quickly that way.

As the technologists have ignored humanistic questions to their peril, the humanists have greeted the technological revolutions of the past 50 years by committing soft suicide. As of 2017, the number of English majors had nearly halved since the 1990s. History enrollments have declined by 45 percent since 2007 alone. Needless to say, humanists’ understanding of technology is partial at best. The state of digital humanities is always several categories of obsolescence behind, which is inevitable. (Nobody expects them to teach via Instagram Stories.) But more crucially, the humanities have not fundamentally changed their approach in decades, despite technology altering the entire world around them. They are still exploding meta-narratives like it’s 1979, an exercise in self-defeat.

Read: The humanities are in crisis

Contemporary academia engages, more or less permanently, in self-critique on any and every front it can imagine. In a tech-centered world, language matters, voice and style matter, the study of eloquence matters, history matters, ethical systems matter. But the situation requires humanists to explain why they matter, not constantly undermine their own intellectual foundations. The humanities promise students a journey to an irrelevant, self-consuming future; then they wonder why their enrollments are collapsing. Is it any surprise that nearly half of humanities graduates regret their choice of major?

The case for the value of humanities in a technologically determined world has been made before. Steve Jobs always credited a significant part of Apple’s success to his time as a dropout hanger-on at Reed College, where he fooled around with Shakespeare and modern dance, along with the famous calligraphy class that provided the aesthetic basis for the Mac’s design. “A lot of people in our industry haven’t had very diverse experiences. So they don’t have enough dots to connect, and they end up with very linear solutions without a broad perspective on the problem,” Jobs said. “The broader one’s understanding of the human experience, the better design we will have.” Apple is a humanistic tech company. It’s also the largest company in the world.

Despite the clear value of a humanistic education, its decline continues. Over the past 10 years, STEM has triumphed, and the humanities have collapsed. The number of students enrolled in computer science is now nearly the same as the number of students enrolled in all of the humanities combined.

And now there’s GPT-3. Natural-language processing presents the academic humanities with a whole series of unprecedented problems. Practical matters are at stake: Humanities departments judge their undergraduate students on the basis of their essays. They give Ph.D.s on the basis of a dissertation’s composition. What happens when both processes can be significantly automated? Going by my experience as a former Shakespeare professor, I figure it will take 10 years for academia to face this new reality: two years for the students to figure out the tech, three more years for the professors to recognize that students are using the tech, and then five years for university administrators to decide what, if anything, to do about it. Teachers are already some of the most overworked, underpaid people in the world. They are already dealing with a humanities in crisis. And now this. I feel for them.

And yet, despite the drastic divide of the moment, natural-language processing is going to force engineers and humanists together. They are going to need each other despite everything. Computer scientists will require basic, systematic education in general humanism: The philosophy of language, sociology, history, and ethics are not amusing questions of theoretical speculation anymore. They will be essential in determining the ethical and creative use of chatbots, to take only an obvious example.

The humanists will need to understand natural-language processing because it’s the future of language, but also because there is more than just the possibility of disruption here. Natural-language processing can throw light on a huge number of scholarly problems. It is going to clarify matters of attribution and literary dating with a precision that no system ever devised could approach; the parameters in large language models are far more sophisticated than the current systems used to determine which plays Shakespeare wrote, for example. It may even allow for certain types of restorations, filling the gaps in damaged texts by means of text-prediction models. It will reformulate questions of literary style and philology; if you can teach a machine to write like Samuel Taylor Coleridge, that machine must be able to inform you, in some way, about how Samuel Taylor Coleridge wrote.

The connection between humanism and technology will require people and institutions with a breadth of vision and a commitment to interests that transcend their field. Before that space for collaboration can exist, both sides will have to take the most difficult leaps for highly educated people: Understand that they need the other side, and admit their basic ignorance. But that’s always been the beginning of wisdom, no matter what technological era we happen to inhabit.

Artificial Intelligence: A Guide for Students

Attribution

Unless otherwise stated this guide is based on  Artificial Intelligence: A Guide for Students  created by Brenda Smith at TRU  and licensed under a Creative Commons Attribution Non-Commercial Share Alike (CC BY-NC-SA 4.0) . Brenda Smith's guide was adapted from Artificial Intelligence  by Ulrike Kestler from KPU used under  CC BY-NC-SA 4.0 .

How to acknowledge the use of AI

If your instructor permits you to use generative AI in your assignment, make sure to follow the instructions on how exactly you can use it and to acknowledge its use. We highly recommend that you include the following information when using AI-generated content in your work:

  • Written acknowledgment of the generative AI tool used and for what purpose
  • The prompt(s) used to generate the content 
  • A note about your modifications
  • In-text citation and bibliographic citation  
  • Note: In the past, content created by ChatGPT could not be replicated because an identical prompt would not produce the same response as before. This means that your instructor would not have been able to verify the result your original prompt created. There are now third-party tools such as ShareGPT and AI Archives that allow you to share the original output by making an archive of the content and a custom link. These tools could be useful for citing your use of AI because you can include this link instead of or in addition to a screenshot or printout of the output. ChatGPT has also introduced a link-sharing feature, but please be aware that the link will not work if you delete a conversation from your account.

Examples of written acknowledgments

Acknowledging the use of generative AI tools is important. Keep records of how you used ChatGPT or other tools, and provide this information as part of your assignment submission.

Use the following template to create your acknowledgment of AI use. 

Suggested AI acknowledgement template:

I acknowledge the use of [insert AI system(s) and link] to [specific use of generative artificial intelligence]. I entered the following prompts on [date]: [list of prompts]. The output from these prompts was used to [explain use]. A copy of the original output is attached with my assignment.

Here are two examples of written acknowledgements, but please check with your instructor about their preferences.

I acknowledge the use of ChatGPT ( https://chat.openai.com/ ) to generate materials for background research and self-study for this assignment. I entered the following prompts on 7 April 2024:

  • Write a 250 word summary about the history of the Kamloops Indian Residential School. Add two references.

The original output was adapted and modified for the final response. A copy of the original output is attached with my assignment.

I acknowledge the use of ChatGPT ( https://chat.openai.com/ ) to refine the academic language and accuracy of my own work. I submitted my entire essay and entered the following prompts on 7 April 2024:

  • Improve the academic tone and accuracy of language, including grammatical structures, punctuation and vocabulary

The original output was adapted and modified for the final response. A copy of my original essay and a copy of the original output are attached with my assignment.

Citing AI by Citation Style

Some citation styles have begun to develop guidelines for citing ChatGPT and other generative AI. For example, APA, Chicago, and MLA have posted guidance about citing generative AI. Guidelines may change and new citation styles may be added, so check for updates, and consult resources specific to the citation style you are using.

  • How to Cite ChatGPT - APA Style
  • How do I cite generative AI in MLA style? - MLA Style Center
  • How do you recommend citing content developed or generated by artificial intelligence, such as ChatGPT? - Chicago Manual of Style

AIs and Authorship

Publishers are taking various approaches related to the use of generative AI. If you are writing for publication, check the publisher's information for authors. As of August 2023, several publishers have provided guidance on how AI tools should be considered in their publications:

  • Cambridge University Press: AI must be declared and clearly explained, and does not meet the requirements for authorship in terms of accountability.
  • Elsevier: AI and AI-assisted tools do not qualify for authorship. Authors who use AI or AI-assisted tools during the manuscript writing process are asked to disclose their use in a separate section of the manuscript. Check out the FAQ on the use of AI and AI-assisted writing technologies in scientific writing for more information.
  • Nature and other Springer journals: Large language models (LLMs) like ChatGPT will not be accepted as credited authors on research papers. However, researchers using LLM tools should document their use in the methods or acknowledgements section.
  • Oxford University Press: Processing tools driven by artificial intelligence (AI) such as ChatGPT do not qualify as authors, and OUP will screen for them in author lists. The use of AI (i.e., to help generate content, write code, or analyze data) must be disclosed both in cover letters to editors and in the Methods or Acknowledgements section of manuscripts.
  • Science: The journal's Artificial Intelligence (AI) policy states: “Text generated from AI, machine learning, or similar algorithmic tools cannot be used in papers published in Science journals, nor can the accompanying figures, images, or graphics be the products of such tools, without explicit permission from the editors. In addition, an AI program cannot be an author of a Science journal paper. A violation of this policy constitutes scientific misconduct.”
  • Springer Nature: Its AI Authorship policy states: "Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs."
  • Taylor & Francis: AI tools cannot be listed as an author, and their use must be acknowledged and documented appropriately.
  • Wiley: AI tools cannot be listed as an author of an article. If an author has used AI to develop any portion of a manuscript, its use must be described, transparently and in detail, in the Methods or Acknowledgements section.
  • Last Updated: Aug 28, 2024 3:14 PM
  • URL: https://camosun.libguides.com/artificialintelligence
