
ESL (English as a Second Language) Research Guide


Books Providing Research & Writing Guidelines


Research Paper Fundamentals

1.  Choose a Topic

2.  Find Information on the Topic

3.  Take Detailed Notes on Information and Sources of Information

4.  Organize the Notes

5.  Create an Outline

6.  Write a First Draft

7.  Include Footnotes or End Notes and a Bibliography

8.  Revise the First Draft

9.  Write and Proofread the Final Draft

Research & Writing Guidelines (Examples from Other Universities)

  • Purdue Online Writing Lab
  • Review article
  • Open access
  • Published: 19 January 2018

A review of previous studies on ESL/EFL learners’ interactional feedback exchanges in face-to-face and computer-assisted peer review of writing

  • Murad Abdu Saeed 1 ,
  • Kamila Ghazali 1 &
  • Musheer Abdulwahid Aljaberi 2  

International Journal of Educational Technology in Higher Education, volume 15, Article number: 6 (2018)


Abstract

This paper is a review of previous studies on learners’ interactional feedback exchanges in face-to-face peer review (FFPR) and computer-assisted peer review (CAPR) of English as a Second/Foreign Language (ESL/EFL) writing. The review attempted to (1) identify the patterns of interactional feedback, (2) search for empirical evidence of learners’ incorporation of peer interactional feedback in their text revisions and (3) identify the factors affecting learners’ interactional feedback as reported in these previous studies. To achieve this, a search of previous studies on peer review in writing from 1990 to 2016 was conducted. Of the 58 peer-reviewed studies retrieved, only 37 were extensively reviewed and systematically analyzed by two independent coders. The findings showed that, in terms of language functions, learners’ interactional feedback exchanges are categorized as (1) exploratory (showing learners’ reflection on and interpretation of the task), (2) procedural (showing how learners handle the task of revising their texts) and (3) social (showing how learners maintain good relationships). In relation to the nature and focus areas, learners’ interactional feedback exchanges are revision-oriented (targeting problems or errors in written texts) and non-revision-oriented (not targeting any problems). Results of some of the reviewed studies also provided evidence of learners’ integration of peer feedback into their text revisions. However, peer interactional feedback is affected by several factors: training learners on feedback, mode of peer review, type of written tasks, learners’ roles in peer review activities, learners’ proficiency in English and other factors, including learners’ gender differences, configuration of peer review dyads and context of peer review.
Synthesizing the findings of the reviewed studies, we proposed a dual space-interactional feedback model that comprises the learning space and the social space of interactional feedback in peer review. Several pedagogical, research and technological implications were also drawn from the major findings. Future researchers should pay attention to both spaces of interactional feedback and identify further factors affecting interactional feedback in peer review.

Introduction

Peer/group review in writing, also known as peer response, peer revision (McGroarty & Zhu, 1997), peer feedback (Zhu, 2001; Hyland & Hyland, 2006) or peer evaluation (Stanley, 1992), has attracted the attention of many second language (L2) and foreign language (FL) writing practitioners and researchers (Hedge, 2001; Hyland & Hyland, 2006; Hu & Lam, 2010). This is because peer review fits well within the process approach to writing instruction in English as Second/Foreign Language (ESL/EFL) contexts. Its pedagogical value is attributed to its role in motivating learners, rather than the instructor, to become sources of corrective feedback (Hu, 2005; Hyland & Hyland, 2006). Peer review also helps learners to evaluate their texts, detect various problems and solve them through text modifications (Min, 2005, 2006; Hanjani & Li, 2014).

From the socio-cultural perspective (Vygotsky, 1978), particularly the notions of scaffolding and regulation, peer review is, as some researchers have stated (e.g., Levi Altstaedter, 2016; Yang, 2011), a constructive or collaborative activity in which students negotiate the intended ideas and meaning and mutually scaffold each other. Within this theory, peer review provides learners with opportunities to exchange corrective feedback and articulate their knowledge of the L2 (Hyland & Hyland, 2006). Research grounded in Vygotsky’s (1978) sociocultural theory and the interactionist theory (Swain & Lapkin, 1998, 2002) indicates that peer review engages learners in scaffolding or assisting each other within the Zone of Proximal Development (ZPD) to solve problems in writing and in using the language for negotiation, thus modifying their output (Villamil & De Guerrero, 1996; De Guerrero & Villamil, 2000; Hyland & Hyland, 2006; Hanjani & Li, 2014).

Recently, studies into the applications of asynchronous and synchronous technologies to ESL/EFL group writing in general and peer review in particular (Darhower, 2002 ; Jones, Garralda, Li, & Lock, 2006 ; Liang, 2010 ; Razak & Saeed, 2014 ) have focused on learners’ interactional/feedback exchanges based on the belief that understanding learners’ interaction is a way to understand their cognitive engagement processes such as thinking.

Therefore, the review reported in this paper aimed to provide a synthesis of 37 previous empirical studies on ESL/EFL learners’ interactional feedback exchanges in peer review in writing published from 1990 to 2016. However, since the interests of individual researchers vary, and reporting each study in detail is not necessary, the focus of the review is on the patterns of peer interactional feedback, the role of interactional feedback in improving learners’ text revisions and major factors affecting learners’ interactional feedback in peer review. It attempted to answer the following specific research questions:

(1) What are the patterns of ESL/EFL learners’ interactional feedback exchanges in peer review in writing as identified in previous studies from 1990 to 2016?

(2) Is there empirical evidence of learners’ incorporation of peer interactional feedback exchanges in their text revisions?

(3) What are the major factors that affect ESL/EFL learners’ interactional feedback exchanges in peer review?

In the next sections, we discuss the main theories used as the basis for investigating learners’ feedback in peer review and explain our methodology. Then, we provide a detailed discussion of the synthesized findings of the reviewed studies according to the three research questions. The paper is concluded by useful implications for ESL/EFL pedagogy of writing and technology use in peer review and recommendations for future research.

Literature review

Investigation of peer review in ESL/EFL writing reported in previous research is grounded in several theories from different disciplines, the most important of which are the process writing theory (Hayes & Flower, 1980), the sociocultural theory (Vygotsky, 1978) and the interactionist theory (Swain & Lapkin, 1998, 2002). First, the process writing theory, which marked a significant shift in the views and practices of ESL writing from a finished text/product to a dynamic process, is one of the theoretical bases that inform the application of peer review to writing courses (Hayes & Flower, 1980). Within this theoretical perspective, peer review plays an important role in assisting learners to produce multiple revisions of their writing. Proponents of the process writing approach also argue that peer review is not only the last stage of the writing process, following the pre-writing and writing stages, but also a dynamic and recursive process in which learners discover and negotiate intended meanings and ideas in their writing through feedback and revise their texts accordingly (Saeed & Ghazali, 2016).

Within the sociocultural theory (Vygotsky, 1978 ), the concept of ZPD has been associated with another notion that is central to this theory, scaffolding. It is a term that refers to the various forms of supportive behaviours by which one learner can assist another peer to achieve higher levels of regulation (De Guerrero & Villamil, 2000 ). Researchers investigating peer review from this theoretical perspective have reported that peer review engages learners in exchanging reciprocal scaffolding that assists them in targeting issues in their writing and solving them through revisions (e.g., Villamil & De Guerrero, 1996 ; De Guerrero & Villamil, 2000 ; Hanjani & Li, 2014 ).

From the interactionist theory (Long, 1983 , 1985 ; Swain & Lapkin, 1998 , 2002 ; Swain, 2006 ), the application of peer review to writing is based on the assumption that peer review activities encourage learners to exchange peer feedback. Such feedback exchanges facilitate learners’ language development in general and in particular, writing skill. This is because peer feedback provides learners with sufficient input and opportunities to use the language for negotiation and thus, modify their output and enhance their writing (Hyland & Hyland, 2006 ).

ESL/EFL writing instructors and researchers have paid considerable attention to the application of peer review in writing courses. They have shown great interest in applying peer review either in the form of face-to-face peer review (FFPR) (e.g., Mendonca & Johnson, 1994; Lockhart & Ng, 1995; Zhu, 1995; Villamil & De Guerrero, 1996; De Guerrero & Villamil, 2000; Hanjani & Li, 2014) or as computer-assisted peer review (CAPR) integrating various synchronous and asynchronous technological tools (e.g., Liou & Peng, 2009; Chang, 2012; Bradley, 2014; Ho, 2015). One of the most interesting aspects of peer review for investigation has been ESL/EFL learners’ feedback/interactional exchanges. In other words, as learners engage in peer review, they interact either orally, as in FFPR, or synchronously/asynchronously, as in CAPR. Such peer feedback/interactional exchanges are recognized by many of the researchers cited above as a means of negotiating meanings or ideas in writing, increasing reader awareness, exchanging scaffolds or support, developing learners’ communication skills, promoting their self-regulation and developing their learning autonomy.

Previous researchers have also looked at ESL/EFL learners’ feedback/interactional exchanges. Results indicate that peer review engages learners in exchanging feedback that functions as evaluations, suggestions, clarifications and questions (Mendonca & Johnson, 1994 ; Lockhart & Ng, 1995 ; Zhu, 1995 ; Villamil & De Guerrero, 1996 ; De Guerrero & Villamil, 2000 ; Hanjani & Li, 2014 ). Similar patterns of peer feedback/interactional exchanges have been identified in electronic peer review activities, in addition to other patterns, including alterations (Bradley, 2014 ; Chang, 2012 ; Ho, 2015 ; Liou & Peng, 2009 ; Liu & Sadler, 2003 ) as well as agreements and disagreements (Di Giovanni & Nagaswami, 2001 ) and advice exchanges (Tuzi, 2004 ). Most of these studies also highlight the role of peer feedback exchanges in peer review in identifying issues at the global and local levels of written texts.

Methodology

This section provides a systematic description of the methodology used in our review of previous studies on ESL/EFL learners’ peer review of writing. Specifically, it describes the sources and phases of the literature search, the criteria for inclusion and exclusion, and the coding and interpretation of the results, as follows:

Data collection

The current review followed five phases: (1) identification of keywords/themes, (2) searching relevant sources, (3) determination for inclusion, (4) coding and (5) analyzing and interpreting. The first phase was concerned with identifying these keywords: feedback, interaction, commenting patterns, peer feedback, peer review, peer revision, peer correction, peer response, peer evaluation, language functions, focus areas of comments, scope of peer feedback, FFPR, CAPR, synchronous and asynchronous peer review and ESL/EFL writing.

The second phase was an exhaustive search of relevant sources. This included searching online databases such as ERIC, Educational Abstracts, PsychLit and Dissertation Abstracts; searching Google Scholar; searching websites known to reference or contain research related to educational technology, such as ResearchGate and LinkedIn; searching many scholarly e-journals; and using general search engines (e.g., Google) for keyword searches to find further manuscripts that either had not yet been catalogued in ERIC or were currently under refereed journal review (yet posted on the researcher’s own webpage).

For the international refereed journals, we targeted these journals: Journal of Second Language Writing, Computers and Composition, Computer Assisted Language Learning, Language Learning & Technology, Computers & Education, Australasian Journal of Educational Technology, British Journal of Educational Technology, Journal of English for Academic Purposes, Language Learning, Journal of Business and Technical Communication, Journal of Computer Assisted Learning, Learning and Instruction, Computers in Human Behavior, Research in the Teaching of English, Language Teaching Research, Instructional Science, Innovation in Language Learning and Teaching, TESOL Quarterly, Electronic Journal of Foreign Language Teaching, Suranaree Journal of Science and Technology, Multimedia Assisted Language Learning, CALICO Journal, The Modern Language Journal, ELT Journal, System, AsiaCall Online Journal, Journal of Science Ho Chi Minh City Open University, Written Communication, Educational Technology Research and Development, English Language Teaching and Language Teaching.

The third phase was determination for inclusion. Only studies meeting all of the following criteria were included:

(1) Published in the period from 1990 to 2016.

(2) Carried out in ESL/EFL writing courses.

(3) Focused on FFPR and/or CAPR, specifically on interactional feedback.

(4) Used a clear analysis of feedback, whether qualitative, quantitative or a mixture of both. In other words, the analysis of feedback exchanges had to include a clear description and definition of the various patterns of feedback exchanges identified in learners’ peer review of writing, as well as illustrations of those patterns with clear examples extracted from learners’ interaction.

(5) Clearly described the research context, including participants; provided a well-described data collection procedure, particularly the peer review activities; and offered interpretation and discussion of the findings, implications and limitations.
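As a rough sketch, the five criteria above can be expressed as a single screening function. The field names and the example record below are hypothetical illustrations, not data taken from any reviewed study:

```python
# Illustrative sketch of screening candidate studies against the five
# inclusion criteria. All field names and the sample record are hypothetical.

def meets_inclusion_criteria(study: dict) -> bool:
    return bool(
        1990 <= study["year"] <= 2016                    # (1) publication window
        and study["context"] in {"ESL", "EFL"}           # (2) ESL/EFL writing course
        and study["modes"] & {"FFPR", "CAPR"}            # (3) FFPR and/or CAPR focus
        and study["feedback_analysis_described"]         # (4) clear analysis of feedback
        and study["context_fully_reported"]              # (5) context, procedure, findings
    )

candidate = {
    "year": 2014,
    "context": "EFL",
    "modes": {"FFPR"},
    "feedback_analysis_described": True,
    "context_fully_reported": True,
}
print(meets_inclusion_criteria(candidate))  # True
```

A study failing any single criterion (e.g., published in 1985) would be screened out by the same function.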

We selected the period from 1990 to 2016 for several reasons. First, we wanted to trace the development of research on peer feedback in ESL/EFL writing over more than two and a half decades. Secondly, we noticed that the studies of peer feedback interactional exchanges that are published online and accessible date back to the early 1990s (e.g., Stanley, 1992; Beason, 1993). Thirdly, 2016 marks the year in which we finished analyzing the data and writing the first version of this review.

The research papers (N = 52) were evaluated against the above inclusion criteria by the researcher and a second coder, who is also a researcher. Only 37 individual studies met the criteria, and these were used for the analysis in the current review. As shown in Additional file 1, the reviewed studies were conducted between 1990 and 2016. They were carried out on peer review in writing among learners from different countries identified as ESL and EFL contexts; this is discussed in detail in the findings section of this review as one factor, labeled context of peer review.

Moreover, the mode of peer review in each study was identified: FFPR only, CAPR only, or a combination of FFPR and CAPR. For the studies focusing on CAPR, we also identified the mode of peer feedback exchanges based on the technological tools used: synchronous, asynchronous, or a combination of the two. Of the 37 reviewed studies, 15 focused on FFPR only (Stanley, 1992; Beason, 1993; Mendonca & Johnson, 1994; Lockhart & Ng, 1995; Zhu, 1995; Villamil & De Guerrero, 1996; McGroarty & Zhu, 1997; De Guerrero & Villamil, 2000; Zhu, 2001; Min, 2005; Cho & Cho, 2011; Lin & Samuel, 2013; Vorobel & Kim, 2014; Hanjani & Li, 2014). An equal number of studies (N = 15) concentrated on CAPR only (Tuzi, 2004; Hewett, 2006; Guardado & Shi, 2007; Liang, 2008, 2010; Liou & Peng, 2009; Ho & Usaha, 2009; Anderson, Bergman, Bradley, Gustafsson, & Matzke, 2010; Cha & Park, 2010; Ho, 2010; Ho & Usaha, 2013; Bradley, 2014; Razak & Saeed, 2014; Pham & Usaha, 2015; Saeed & Ghazali, 2016). The remaining studies (N = 7) combined FFPR and CAPR (Sullivan & Pratt, 1996; Di Giovanni & Nagaswami, 2001; Liu & Sadler, 2003; Jones et al., 2006; Song & Usaha, 2009; Chang, 2012; Ho, 2015).

For the last two groups of studies (N = 22), those employing CAPR only and those combining FFPR and CAPR, we also identified whether the CAPR was synchronous, asynchronous, or a combination of the two. Of these 22 studies, most (N = 12) used asynchronous CAPR only (Sullivan & Pratt, 1996; Tuzi, 2004; Liou & Peng, 2009; Ho & Usaha, 2009; Song & Usaha, 2009; Ho, 2010; Ho & Usaha, 2013; Bradley, 2014; Razak & Saeed, 2014; Ho, 2015; Pham & Usaha, 2015; Saeed & Ghazali, 2016). Six studies employed synchronous CAPR only (Hewett, 2006; Jones et al., 2006; Anderson et al., 2010; Liang, 2008, 2010; Cha & Park, 2010), and the remaining studies (N = 4) used a combination of synchronous and asynchronous CAPR (Di Giovanni & Nagaswami, 2001; Liu & Sadler, 2003; Guardado & Shi, 2007; Chang, 2012).
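The counts reported in this section can be cross-checked with a short tally; the figures are those stated in the review itself:

```python
from collections import Counter

# Mode of peer review across the 37 reviewed studies, as reported above.
modes = Counter({"FFPR only": 15, "CAPR only": 15, "FFPR and CAPR": 7})
total_studies = sum(modes.values())

# Synchronicity of the tools in the 22 studies that involved CAPR
# (the "CAPR only" group plus the combined group).
capr_involving = modes["CAPR only"] + modes["FFPR and CAPR"]
sync_modes = Counter({"asynchronous only": 12, "synchronous only": 6, "both": 4})

print(total_studies, capr_involving, sum(sync_modes.values()))  # 37 22 22
```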

The research method of each study, particularly its analysis, was also identified, with an emphasis on the type of analysis: qualitative, quantitative or mixed. Finally, the findings and discussion of each study were thoroughly reviewed, coded, analyzed and interpreted as follows:

Coding, analyzing and interpretation

Prior to coding the findings of the selected papers, another researcher was invited as a second coder. He was informed of the purpose and research questions of this review, and the two coders then discussed why these 37 studies had been selected. After this, they started coding, which was performed based on grounded theory (Glaser & Strauss, 1967).

Each coder read the findings of each study. Coding the findings of these previous studies involved both substantive coding and theoretical coding. While substantive coding focuses on conceptualizing the empirical findings of previous studies into substantive codes, theoretical coding focuses on relating or integrating such findings or codes into a research model (Glaser, 1978). For the substantive coding in this study, the coders coded each previous study independently. After the first round of reading the selected studies, the two coders met face-to-face and discussed the initial emerging findings. This open coding allowed them to understand the various aspects of the investigation of peer review in the selected studies, including feedback, text revisions, perceptions and factors. This was followed by several rounds of reading and selective coding, which focused on the patterns of peer feedback reported in these studies, the incorporation of feedback into text revisions and the factors affecting peer feedback. The coders held several study-by-study discussions. They also contacted each other via a WhatsApp group, where they discussed several aspects of coding the previous studies, including the patterns, themes and categories generated. They iteratively coded the findings/results of the previous studies (N = 37). Coding discrepancies were discussed and resolved by consulting the original research study.

For the patterns of interactional feedback exchanges, the iterative coding process resulted in many codes that were grouped into two main categories: the language functions of interactional feedback, and the nature and focus areas of interactional feedback. The language functions of interactional feedback were clustered into three sub-categories: exploratory, procedural and social. The second main category, the nature and focus areas of interactional feedback, has two sub-categories: revision-oriented and non-revision-oriented; the revision-oriented sub-category comprises global and local feedback, all discussed in the "Findings and discussion" section. Clustering the different patterns of feedback exchanges in terms of their language functions and their nature and focus areas allowed us to synthesize the various patterns of learners’ feedback into main categories. Another reason for this clustering was to understand whether the learners, as reported in previous studies, engage mainly in the task of peer review, in the procedure itself or in social aspects, and whether their feedback exchanges target global or local issues in the written texts.

Finally, the findings of the studies reviewed in this paper were coded in terms of the integration of peer feedback into learners’ text revisions and the factors affecting such interactional feedback. Besides the various factors affecting peer review identified and reported by previous researchers as part of their research objectives and analysis, we looked at the context of peer review in each study, including the ESL/EFL context, the socio-cultural backgrounds of the learners participating in peer review, and the institutions and programs in which the writing and peer review took place. Coding continued until an agreement of approximately 96% between the two coders was achieved.
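Percent agreement of the kind reported here (about 96%) is simply the share of coded units to which both coders assigned the same code. The sketch below uses hypothetical code labels; the review does not state whether a chance-corrected statistic such as Cohen's kappa was also computed:

```python
def percent_agreement(coder_a, coder_b):
    """Share of units to which both coders assigned the same code."""
    assert len(coder_a) == len(coder_b), "coders must rate the same units"
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codes assigned by two coders to 25 feedback units;
# they disagree on exactly one unit, giving 24/25 = 0.96 agreement.
a = ["exploratory"] * 12 + ["procedural"] * 5 + ["social"] * 8
b = ["exploratory"] * 12 + ["procedural"] * 4 + ["social"] * 9
print(round(percent_agreement(a, b), 2))  # 0.96
```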

Findings and discussion

This section presents the findings of the current study in an attempt to answer the three research questions stated in the introduction:

What are the patterns of ESL/EFL learners’ interactional feedback exchanges in peer review in writing as identified in previous studies from 1990 to 2016?

This sub-section discusses the types of interactional feedback exchanges as identified in previous studies and as categorized in this paper in terms of the language functions and then, the focus areas of commenting patterns in both modes of peer review: FFPR and CAPR.

The language functions of interactional feedback exchanges in peer review

Research has examined how learners engage in peer review by analyzing the language functions underlying their interactional or feedback exchanges. Of the 37 reviewed studies, 30 identified and reported various patterns of learners’ exploratory interactional feedback exchanges in peer review, while the other seven studies (Guardado & Shi, 2007; Liang, 2008, 2010; Anderson et al., 2010; Cho & Cho, 2011; Vorobel & Kim, 2014; Pham & Usaha, 2015) focused exclusively on the focus areas of learners’ interactional exchanges. Based on our analysis of the findings of these studies, there are three patterns of interactional feedback exchanges: (1) exploratory, (2) procedural and (3) social, which are discussed as follows:

Exploratory interactional feedback exchanges

This first main category of feedback exchanges reflects learners’ interpretation of and reflection on their written texts. Of the 30 studies, the first group (N = 13), summarized in Table 1, was conducted on FFPR. Exploratory patterns of feedback functioned as advising, eliciting and questioning, as well as responses to questions and clarifications of information and intended meaning (Stanley, 1992). Other functions are seeking and providing explanations of particular aspects of written texts (Mendonca & Johnson, 1994; Min, 2005; Lin & Samuel, 2013), requesting and providing clarifications (Zhu, 1995; Villamil & De Guerrero, 1996; McGroarty & Zhu, 1997; Min, 2005; Hanjani & Li, 2014), comprehension checks and responses (Mendonca & Johnson, 1994; Hanjani & Li, 2014) and seeking confirmation and confirming understanding (Mendonca & Johnson, 1994; Lin & Samuel, 2013; Hanjani & Li, 2014).

As learners engaged in FFPR, they sought and provided suggestions or advice on particular points of text revision, elicited and provided opinions or evaluations (Mendonca & Johnson, 1994; Lockhart & Ng, 1995; Zhu, 1995; Villamil & De Guerrero, 1996; De Guerrero & Villamil, 2000; Hanjani & Li, 2014) and exchanged information (McGroarty & Zhu, 1997). They also exchanged restatements and grammar corrections (Mendonca & Johnson, 1994), identified problems in their texts (Beason, 1993; Zhu, 1995; Min, 2005; Lin & Samuel, 2013), summarized points, expressed intentions (Lockhart & Ng, 1995), described various aspects of texts (Beason, 1993) and restated or repeated points (Lin & Samuel, 2013; Hanjani & Li, 2014).

The exploratory functions of peer feedback also underlie how learners engaged in instructing in the form of a mini-lesson (Villamil & De Guerrero, 1996; De Guerrero & Villamil, 2000; Hanjani & Li, 2014), giving subtle hints, providing options/alternatives, defining and using the L1 to assist each other in attending to problems and solving them (De Guerrero & Villamil, 2000; Hanjani & Li, 2014). In FFPR, learners also exchanged comments that functioned as elaborating (Zhu, 2001), referencing, guessing, expressing knowledge or lack of knowledge, expressing inability and persisting (Hanjani & Li, 2014).

The second group of studies (N = 17), in Table 2, applied various technological tools, both synchronous and asynchronous, to peer review in writing. The increasing application of CAPR has motivated many researchers to explore learners’ interactional feedback exchanges. The results of our review indicate that CAPR engaged ESL learners in various exploratory feedback exchanges. The most commonly identified functions of such feedback are questions seeking peers’ clarifications, justifications, confirmation and opinions, and the provision of suggestions or advice on particular aspects of texts (Sullivan & Pratt, 1996; Di Giovanni & Nagaswami, 2001; Liu & Sadler, 2003; Tuzi, 2004; Hewett, 2006; Jones et al., 2006; Liou & Peng, 2009; Song & Usaha, 2009; Cha & Park, 2010; Ho, 2010; Ho & Usaha, 2013; Chang, 2012; Bradley, 2014; Razak & Saeed, 2014; Ho, 2015; Saeed & Ghazali, 2016). Other exploratory patterns of peer feedback exchanges are evaluations of various aspects of written texts, clarifications and alterations (Liu & Sadler, 2003; Liou & Peng, 2009; Ho, 2010; Ho & Usaha, 2013; Chang, 2012; Bradley, 2014; Ho, 2015; Saeed & Ghazali, 2016), agreement and disagreement (Di Giovanni & Nagaswami, 2001; Saeed & Ghazali, 2016) and acceptance and rejection of peers’ revisions (Jones et al., 2006).

It is also interesting that ESL/EFL learners exchange exploratory feedback functioning as statements or explanation (Tuzi, 2004 ; Jones et al., 2006 ; Ho & Usaha, 2009 ; Ho, 2010 ; Cha & Park, 2010 ; Ho & Usaha, 2013 ), restatements (Di Giovanni & Nagaswami, 2001 ), justification (Razak & Saeed, 2014 ; Saeed & Ghazali, 2016 ), criticism (Tuzi, 2004 ; Song & Usaha, 2009 ), complying, acknowledging (Jones et al., 2006 ), informing, directing attention (Hewett, 2006 ), correction (Cha & Park, 2010 ) and scaffolding in the form of definitions and use of L1 (Razak & Saeed, 2014 ).

Some of the studies that identified the various functions of peer exploratory feedback in CAPR also reported the frequency of occurrence of these patterns. The results indicate that the most frequent patterns of peer exploratory feedback exchanges varied from alterations (Liu & Sadler, 2003), opening moves (Cha & Park, 2010) and suggestions alone (Ho, 2010; Bradley, 2014; Ho, 2015) to a combination of suggestions and other patterns such as restatements (Di Giovanni & Nagaswami, 2001), evaluations (Liou & Peng, 2009), clarifications (Ho & Usaha, 2009) and questions (Song & Usaha, 2009).

Procedural interactional exchanges

The second main category of interactional exchanges in terms of language functions comprises procedural comments exchanged by learners as a means of organizing the process of peer review. Examples of procedural comments are pointing at particular parts of texts (Stanley, 1992; Zhu, 2001; Jones et al., 2006) and announcing various aspects of the task (Stanley, 1992; Villamil & De Guerrero, 1996; Zhu, 2001). Other patterns of procedural feedback exchanges, as identified by Hanjani and Li (2014), are providing directives to peers, asking them to read parts of the texts, assigning responsibilities and clarifying instructions. However, the occurrence of such procedural interactional exchanges was low in these previous studies.

Social interactional exchanges

The third main category of interactional exchanges in terms of language functions encompasses social interactional comments that serve as a means of establishing a positive atmosphere in peer review (Beason, 1993; Di Giovanni & Nagaswami, 2001; Tuzi, 2004; Jones et al., 2006; Song & Usaha, 2009; Cha & Park, 2010; Hanjani & Li, 2014). Examples of these comments are thanking, welcoming, praising and even social chatting. Others are greeting, expressing confusion or surprise, blaming, laughing and distraction (Hanjani & Li, 2014).

Thus, the various patterns of learners’ interactional feedback exchanges in terms of language functions are synthesized and illustrated in Fig. 1. The first category of interactional feedback exchanges is exploratory in nature, since it is an indicator of learners’ reflection on the task of peer review; it comprises scaffolding and non-scaffolding sub-categories.

Fig. 1 Patterns of the Language Functions of Learners’ Interactional Feedback Exchanges in Peer Review. This figure illustrates the categories of learners’ interactional feedback exchanges identified in the 37 reviewed studies. It starts with the language functions, which are sub-categorized into exploratory, procedural and social. The exploratory category has two sub-categories, scaffolding and non-scaffolding; the social category also has two, on-task and off-task
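The taxonomy summarized in Fig. 1 can be rendered as a small nested mapping. The category names come from the review; the data structure itself is only our illustration:

```python
# Language-function taxonomy of interactional feedback exchanges (Fig. 1).
# Category names follow the review; the mapping is an illustrative rendering.
LANGUAGE_FUNCTIONS = {
    "exploratory": {"scaffolding", "non-scaffolding"},
    "procedural": set(),               # no sub-categories reported
    "social": {"on-task", "off-task"},
}

def subcategories(category: str) -> set:
    """Return the sub-categories reported for a main language function."""
    return LANGUAGE_FUNCTIONS[category]

print(sorted(subcategories("social")))  # ['off-task', 'on-task']
```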

For the scaffolding sub-category, a few studies identified scaffolding patterns such as seeking and providing advice, exchanging suggestions, instructing in the form of a mini-lesson on a certain aspect of a text, defining and using the L1 (De Guerrero & Villamil, 2000; Hanjani & Li, 2014). Such scaffolding comments are intended to verbally assist peers in solving problems in their writing within the ZPD. Based on their findings, these recent proponents of Vygotsky’s (1978) socio-cultural theory also supported the mutual or reciprocal nature of scaffolding exchanges. In other words, these few studies showed that scaffolding is not necessarily a unidirectional type of assistance from an expert to a novice learner; it can also be bidirectional verbal support provided and received reciprocally by two or more novice learners in collaborative writing, including peer review.

The non-scaffolding sub-category comprises many patterns of language functions of peer feedback, including explanation, clarification, summarizing, elaboration, restatement, interpretation of certain aspects of written tasks, expressing intended meanings and others. Such feedback exchanges may not be scaffolding in the sense that they do not function as verbal assistance that enables peers to identify and solve problems in their written tasks. Yet, they show how learners attempt to express their intended meanings and ideas, elaborate on them, interpret them and even sum up points.

The second category of learners' interactional exchanges is procedural because it shows how learners carry out the process of peer review: how they handle the task of peer revision, organize their interaction and distribute their tasks. In addition, based on our categorization, the third category of feedback exchanges, social interaction, is classified into two sub-types: on-task and off-task interactional exchanges. The on-task social interactional exchanges illustrate how learners praise one another, react to feedback, thank and welcome one another, and even express feelings such as confusion and surprise. The off-task social interactional exchanges, however, function as greetings, joking, distraction, laughing, blaming and rejecting blame. Thus, they are irrelevant to the task because they reflect how learners engage in social matters during peer review.

The focus areas of interactional feedback exchanges in peer review

Studies have gone beyond mere categorization of the patterns of interactional comments by looking at the focus areas/scope of interaction. Out of the 37 reviewed studies, only 23 identified and reported the nature and focus areas of ESL/EFL learners' interactional feedback exchanges in peer review. These are discussed under the following two main categories coded and clustered in the current review:

Revision-related/oriented interactional comments

This first main category, comprising revision-oriented exchanges, shows how ESL/EFL learners remain on task and target various problems in their written texts. In this regard, five studies identified the focus areas of revision-oriented feedback in FFPR, as shown in Table  3 . Four of these studies (Beason, 1993; Lockhart & Ng, 1995; Min, 2005; Vorobel & Kim, 2014) indicated that peer feedback focused on macro-level issues of writing, such as idea development, organization and purpose, more than on micro-level issues, such as mechanics and grammar. On the other hand, Hanjani and Li (2014) reported that most of the EFL learners' interactional exchanges focused on micro-level issues such as grammar, while only a few comments focused on macro-level issues such as content and organization. This was ascribed to the learners' proficiency levels in English, which may not have allowed some of them to target macro-level issues in writing.

Concerning the CAPR mode, 18 studies identified the focus areas of learners' revision-oriented interactional comments. The findings of most previous studies support the role of peer feedback in identifying macro-level or global issues of writing, including the task of criticizing writing (Sullivan & Pratt, 1996), ideas or content (Di Giovanni & Nagaswami, 2001; Hewett, 2006; Jones et al., 2006; Anderson et al., 2010; Saeed & Ghazali, 2016), organization (Di Giovanni & Nagaswami, 2001; Hewett, 2006; Jones et al., 2006), thesis statements (Hewett, 2006; Jones et al., 2006; Guardado & Shi, 2007), topic (Jones et al., 2006; Guardado & Shi, 2007), writing processes (Hewett, 2006), unity and coherence (Guardado & Shi, 2007) as well as essay improvement (Cha & Park, 2010).

Other studies obtained results that corroborate the higher percentage of peer feedback in the revision-related discourse (Ho & Usaha, 2009; Liang, 2008, 2010). In these three studies, revision-oriented discourse refers to peer feedback that focuses on content, meanings and error correction, as in Liang (2008, 2010), and on content, organization, grammar, vocabulary and mechanics, as in Ho and Usaha's (2009) study. While Liang (2008) found that content discussion scored the highest among all focus areas, Liang (2010) found that interaction on task management and content outnumbered meaning negotiation and error correction. According to Ho and Usaha (2009), feedback focusing on content scored the highest percentage, followed by organization, grammar and vocabulary, with the least attention paid to mechanics.

Some previous researchers (Liou & Peng, 2009; Ho, 2010; Ho & Usaha, 2013; Bradley, 2014; Ho, 2015; Pham & Usaha, 2015) used Liu and Sadler's (2003) taxonomy of the nature and focus areas of peer feedback. In terms of nature, peer feedback is classified as revision-oriented and non-revision-oriented. In terms of focus areas, revision-oriented comments address the global aspects of texts (idea development, organization of ideas, purpose and audience) and the local aspects of texts (wording, grammar and punctuation). The results of these studies indicate that learners engaged more in revision-oriented feedback than in non-revision-oriented feedback. More interestingly, the same researchers found that learners made a higher number of global revision-oriented comments than local revision-oriented comments. Such results are similar to those in Liang (2008, 2010) with respect to the higher number of revision-oriented comments, although the comments leading to text revisions in Liang's studies were classified into different focus areas: content, meanings and errors. Nevertheless, the results of both studies by Liang (2008, 2010) still support the role of synchronous feedback interactions in facilitating learners' attention to content more than to meanings and grammar errors. Yet, this contradicts the results of two other studies (Anderson et al., 2010; Chang, 2010) showing that learners produced a higher number of local comments focusing on language issues.

From the above discussion of the findings of studies on CAPR, it is apparent that CAPR assists learners in generating more feedback that is oriented towards revisions, and that more of this feedback focuses on macro-level or global issues, including content, organization and purpose, than on local issues of writing, including language and mechanics. Yet, while some researchers support the role of synchronous CAPR (Liu & Sadler, 2003; Hewett, 2006; Jones et al., 2006; Liang, 2008, 2010; Anderson et al., 2010; Cha & Park, 2010; Ho, 2010; Chang, 2012), other researchers (Sullivan & Pratt, 1996; Liou & Peng, 2009; Ho & Usaha, 2013; Bradley, 2014; Ho, 2015; Pham & Usaha, 2015; Saeed & Ghazali, 2016) support the role of asynchronous CAPR in facilitating learners' feedback towards more global issues in writing.

Non-revision-related/oriented interactional comments

The second main category of feedback exchanges in terms of the focus areas, known as non-revision-oriented interactional comments, encompasses those comments that do not target any problems in learners' written texts. However, most previous studies only reported counts of ESL/EFL learners' comments in the non-revision-oriented discourse or social space without identifying their focus areas (Liu & Sadler, 2003; Liou & Peng, 2009; Ho, 2010; Ho & Usaha, 2013; Ho, 2015; Pham & Usaha, 2015). This neglect of the focus areas of this category of comments could be due to the fact that such comments do not target any problems in writing.

Only a few studies have identified the focus areas of the non-revision-oriented interactional exchanges: establishing positive tones that sugar-coated the criticism in the follow-up negative comments (Guardado & Shi, 2007), thanking and praising to alleviate the critical tone and establish relationships (Hanjani & Li, 2014), and creating an atmosphere that sugar-coats the criticism in the revision-oriented comments (Bradley, 2014). In the CAPR mode (Darhower, 2002; Fitze, 2006; Jones et al., 2006; Cha & Park, 2010; Liang, 2010), such comments reflect the socio-relational aspect of communication, such as talking about irrelevant matters and establishing a sound social context for maintaining friendship, as well as appraisal and encouragement (Anderson et al., 2010).

From the above findings, peer review helps ESL/EFL learners exchange various patterns of feedback: exploratory, procedural and social. Moreover, it engages learners more in revision-oriented feedback that targets issues in their written texts. Especially with the use of synchronous and asynchronous tools, learners become less shy with their peers and more confident in offering revision-oriented feedback that focuses on global issues. Although this line of investigation in previous studies enriches our understanding of the value of peer feedback in writing, peer feedback becomes more valuable still when learners integrate it into improving their writing. Therefore, some previous studies, discussed in the next section of the review, have looked at whether learners incorporate peer feedback in revising their writing.

Is there empirical evidence of learners’ incorporation of peer interactional feedback exchanges in their text revisions?

Some studies have examined whether peer feedback exchanges were efficacious in enhancing learners' written texts. For instance, Beason (1993) reported that only 156 comments out of the overall 233 comments resulted in 117 text revisions. Similarly, Mendonca and Johnson (1994) showed that although students used their peers' comments to revise their essays, they incorporated peer feedback selectively, deciding themselves what to revise in their own texts. According to Tuzi (2004), learners integrated feedback from the asynchronous CAPR into clause-, sentence- and even paragraph-level revisions more than they did the FFPR feedback.

Comparing the initial and revised drafts to identify text revisions based on peer feedback, Guardado and Shi (2007) found that only 13 out of 22 students revised their drafts; of these 13 students, 10 revised their drafts based on their peers' online feedback, while 3 made self-generated revisions. Of the 10 students, 4 made major revisions, whereas 3 made minor revisions. In other words, only 13 student-writers accepted 27 out of the total 60 feedback comments in revising their texts accordingly, whereas the other 9 student-writers largely ignored their peers' feedback. This was attributed to their cultural and educational backgrounds and to uncertainty about the accuracy and reliability of their peers' feedback comments.

According to Liou and Peng (2009), learners' incorporation of the asynchronous interactional comments into their text revisions was not high due to their unwillingness to revise their texts based on peer feedback. Pham and Usaha (2015) found that learners used peer feedback in revising sentences in their texts, whereas the revisions made based on self-decisions were at the word and phrase levels. Yet, these two studies did not provide any evidence showing the feedback interaction-revision connections. According to Song and Usaha (2009), in the FFPR mode, 292 out of the total revision-oriented comments ( n  = 364) were incorporated into subsequent text revisions, whereas in the asynchronous and synchronous CAPR mode, 257 out of the total revision-oriented comments ( n  = 300) were integrated into learners' text revisions.

Cha and Park (2010) used a quantitative measure to find out the extent to which learners incorporated their peers' opinions or ideas in revising their essays. The results indicated that the amount or percentage of interactional exchanges incorporated into text revisions varied among the 14 pairs of learners. In Ho's (2010) study, lower-level revisions, at the word and phrase levels, were made by learners themselves rather than based on peers' feedback exchanges, whereas higher-level revisions, at the sentence and paragraph levels, were made based on peers' feedback. Thus, the quantitative analysis showed that 39% of revisions were triggered by peer feedback and 61% were made by the student writers based on their own decisions.

Finally, only two studies have used textual analyses to determine the extent to which learners incorporate online peer feedback exchanges into revising their texts. The first, by Hewett (2006), indicated that learners incorporated most of the synchronous CAPR feedback exchanges into their text revisions, yet it did not provide any evidence of the connections between the feedback exchanges and the text revisions. The second, by Liang (2010), provided evidence of how EFL learners integrated synchronous interactions into their text revisions, concluding that synchronous interactions facilitated those revisions. However, the rate of synchronous comments integrated into learners' text revisions differed among the groups of learners according to the tasks, which is discussed below as one factor affecting interactions and text revisions in peer review.

What are the major factors affecting ESL/EFL learners’ interactional feedback exchanges in peer review?

The review of previous related research identified various factors affecting learners’ interactional feedback exchanges in peer review. These factors are discussed in the following sections:

Training and instruction on feedback

Several previous studies reported the effect of training learners through explicit instruction on their interactional feedback exchanges in peer review. In these reviewed studies, learners were trained on peer review through explicit instruction provided by instructors, who were often the researchers. Such instruction guides learners on the aspects of writing they should focus on and how to provide feedback on such issues. Ideally, this instruction should be clear and consistent with the purpose of the study and the aims of the writing courses at the university. For instance, Stanley (1992) reported that training affects the intensity of interaction, as the coached or trained groups interacted more than the untrained groups. Moreover, groups who received coaching offered more specific interactional comments to their peers, which helped them revise their texts better. This implies that training enabled the coached groups to assume their roles as evaluators. Evidence of the increased engagement of the coached groups is the higher incidence of interactional exchanges: pointing, advising, collaborating and clarifying.

Similarly, Zhu (1995) reported that the coached learners engaged more actively in peer review than the uncoached group. Moreover, the negotiations of the trained group were more extensive, more in-depth and characterized by extended exchanges on a particular topic, suggesting livelier and richer discussions. Likewise, McGroarty and Zhu (1997) found that the trained group engaged more extensively, as evidenced by a higher number of turns and more extended and livelier interactions than the untrained group. The findings of Min's (2005) study indicated that training through explicit instruction on peer review enabled learners to generate significantly more comments that revolved around clarifying, identifying and expounding a single issue as well as suggesting how to improve their texts. It also sharpened the focus of learners' comments, as they made more comments on global issues. However, not all results are consistent: Liou and Peng (2009) showed that training did not increase the learners' willingness to incorporate peers' comments into revising their texts.

Mode of peer review

The analysis of the findings of some previous studies (Sullivan & Pratt, 1996; Di Giovanni & Nagaswami, 2001; Liu & Sadler, 2003; Jones et al., 2006; Song & Usaha, 2009; Chang, 2012; Ho, 2015) on learners' interactional feedback exchanges showed that the mode of peer review has a significant effect on learners' feedback. All these studies compared FFPR and CAPR in relation to learners' feedback exchanges. For the CAPR, five of these studies used both synchronous and asynchronous tools (Di Giovanni & Nagaswami, 2001; Liu & Sadler, 2003; Song & Usaha, 2009; Chang, 2012; Ho, 2015), whereas two employed only synchronous tools, such as chats (Sullivan & Pratt, 1996; Jones et al., 2006).

These studies obtained interesting findings, showing several differences between FFPR and CAPR in terms of the functions of feedback, focus areas of feedback, incorporation of peer feedback into writing, and other aspects related to learners' roles in interaction. Regarding the functions of peer feedback, first, the number of negotiations in FFPR was higher than in CAPR (Di Giovanni & Nagaswami, 2001) because of the conversational mode of communication in FFPR. Turn-taking in the FFPR mode was also found to be greater than in the asynchronous CAPR, as in the latter mode it took longer to type the comments (Sullivan & Pratt, 1996). Similarly, Song and Usaha (2009) found that the number of language functions of learners' feedback exchanges was higher in FFPR than in the CAPR mode. However, the results of Liu and Sadler (2003) contradict this, as the researchers reported that the number of patterns of language functions of comments was higher in CAPR than in FFPR. Moreover, while suggestion was the most frequent function in the FFPR mode, alteration was the most frequent in CAPR.

Regarding the nature and focus areas of feedback in FFPR vs. CAPR, the results favour CAPR. First, learners' turn-taking tends to be more critical and focused in CAPR, as evidenced by the sequence of comments: positive comments followed by suggestions (Sullivan & Pratt, 1996). Evidence also indicates that learners' negotiations in CAPR allowed students to reflect and remain focused on the task (Di Giovanni & Nagaswami, 2001). This is similar to the results of Liu and Sadler (2003), indicating that the learners exchanged more feedback in the CAPR mode than in FFPR. Moreover, learners' feedback exchanges in the synchronous CAPR mode tended to focus more on global issues in writing, including content, organization and process, whereas in FFPR, feedback was found to be more local, since it focused on grammar, vocabulary and style (Jones et al., 2006). According to Liu and Sadler (2003) and Ho (2015), the amount of learners' feedback focusing on local issues in writing (wording, grammar and punctuation) was higher in the CAPR mode than in the FFPR mode.

The results of the above studies indicate that CAPR is not altogether better than FFPR or vice versa, but it provides learners with opportunities to explore and discuss global issues of texts. In other words, learners have sufficient time to reflect on their writing and think of global issues in CAPR, as opposed to FFPR, where interaction tends to be more simultaneous and learners do not have time to think of such issues (Jones et al., 2006). In CAPR, learners can also detect more local issues in texts and consequently comment more on such issues, whereas in the FFPR mode, the lower number of local comments could be attributed to the lack of such functions and to students' lack of confidence regarding their grammar and spelling (Liu & Sadler, 2003).

On the other hand, some studies reported contradictory results regarding the number of feedback comments in terms of focus areas. For instance, Ho (2015) reported that the number of global feedback comments focusing on content, organization and purpose was higher in the FFPR mode than in the CAPR mode. Chang (2012) also found differences in the number of feedback comments not only between the FFPR and CAPR modes, but also between the synchronous and asynchronous modes. The researcher found that in both the FFPR and synchronous CAPR modes, the learners produced a higher number of global feedback comments than they did in the asynchronous CAPR mode. This was attributed to the delayed time in the asynchronous CAPR mode: when learners engaged in asynchronous CAPR, they could delay their responses, which made their peers shift the focus of their feedback from global issues to local issues.

In relation to the incorporation of peer feedback, one study reported that the mode of peer review affects the percentage of learners' integration of peer feedback into their writing (Song & Usaha, 2009). The higher rate of feedback comments incorporated into learners' text revisions in the CAPR mode implies that, as opposed to the FFPR mode, where comments are oral, comments in CAPR are written, and learners have enough time to read and understand them well and, consequently, to integrate their peers' suggestions into their text revisions. Finally, concerning learners' roles in interaction, Sullivan and Pratt (1996) found that in FFPR the author talked more, thus dominating the discourse, while in CAPR the author talked less, thus equalizing participation among all members.

Type of written tasks in peer review

A few studies show that the type and number of learners' feedback exchanges differed according to the tasks being revised. In this regard, Liou and Peng (2009) found that the number of commenting patterns in the first assignment differed from that in the fourth assignment. The researchers reported an increase in the number of comments, especially suggestions and evaluations, in the fourth assignment. Moreover, the revision-oriented comments increased, as did the global-oriented comments. This was attributed to the difficulty of the writing tasks in the two assignments.

EFL learners' synchronous interaction facilitated their text revisions, though this differed among the groups of learners according to the tasks (Liang, 2010). In the book review, two groups integrated most of their content-related interactions into their text revisions, while for the research paper revision, only a very small proportion of text revisions were linked to learners' interaction, as the learners made most text revisions based on their own decisions and on other interactional processes such as social interactional comments. Ho (2015) found that the mode of peer review was not the only factor contributing to the differences in the number and percentage of commenting patterns; the nature of the task was also an important factor. Whereas in the first task the global and local comments numbered 311 and 175, respectively, in the second task they numbered 479 and 132.

Learners’ roles in peer review

Learners' role, whether as reviewers or writers, has been demonstrated to be an important factor affecting their interactional feedback exchanges in peer review. According to Mendonca and Johnson (1994), most of the negotiations were initiated by student-reviewers. This result indicates that since the learner-reviewers were not familiar with the content of the written texts, they tended to initiate negotiations seeking explanations or clarifications from their peer-writers. Moreover, the functions and numbers of interactional feedback exchanges differed according to the reviewer-writer roles. For instance, reviewers were observed to produce more feedback exchanges in which they explained and gave their opinions on the clarity of ideas in writing.

Finally, Zhu's (2001) results indicated that when ESL learners acted as writers, they produced fewer turns and responded to peer feedback, but they did not clarify their writing for the readers (native speakers). Acting as readers, the ESL students generated a similar amount of oral interactional exchanges, but they encountered difficulties competing for turns and sustaining and regaining them when interrupted by the writers. These results imply that the ESL students were at somewhat of a disadvantage when interacting with the native speakers in oral peer review, and they raise some concerns about ensuring equality in mixed peer response groups.

Learners’ levels of English proficiency

Another important factor influencing the types and frequency of learners' interactional feedback exchanges is their level of proficiency in English. Only two previous studies measured the effect of this factor by comparing the interactional feedback exchanges among learners with different levels of English proficiency. Zhu (2001) reported that the language functions of the ESL learners' interactional exchanges in mixed peer review were restricted compared to those of the native speakers: while the functions of the exchanges produced by the ESL learners were mostly announcing, reacting, questioning, advising and justifying, those of the native speakers were confirming, pointing, hedging, elaborating and eliciting. Thus, it was concluded that while the native speakers tended to provide suggestions more directly through advising, the ESL learners tended to point out and question problematic areas.

The level of English proficiency was also observed by Hanjani and Li (2014) to affect the interaction of learners, especially in asymmetrical pairs. In such pairs, the more competent learner acted as a tutor who seemed confident, initiated the interaction and called the attention of the less competent peer to the problems in the written tasks, whereas the less competent peer seemed more conservative and less confident, mainly receiving advice and feedback from the more competent peer.

Other factors

There are also three other factors that affect learners' interactional exchanges in peer review, two of which, gender and the configuration of peer review dyads, were identified and reported in two different studies, while the third, the context of peer review, emerged from our own analysis. Concerning the gender factor, Hanjani and Li (2014) shed light on the effect of gender on interaction dynamics based on their observations. In mixed-gender pairs of learners, the interaction seemed more polite, reverent and formal, with no disapproval even in cases of disagreement. In single-gender pairs, however, the interaction looked more natural and more dynamic: the learners produced more interaction, interrupted each other more often, used more informal language and challenged each other more frequently. As for the configuration of peer review dyads, Mendonca and Johnson (1994) found that peer review dyads from different fields of study generated more negotiations than dyads from the same field of study.

Regarding the factor of peer review context, several interesting findings emerged from our comparison of the contexts of peer review in the 37 studies. First, most of the studies reviewed in this paper ( N  = 26) were conducted among ESL learners (Stanley, 1992; Beason, 1993; Mendonca & Johnson, 1994; Lockhart & Ng, 1995; Zhu, 1995; Sullivan & Pratt, 1996; Villamil & De Guerrero, 1996; McGroarty & Zhu, 1997; De Guerrero & Villamil, 2000; Di Giovanni & Nagaswami, 2001; Zhu, 2001; Liu & Sadler, 2003; Tuzi, 2004; Hewett, 2006; Jones et al., 2006; Guardado & Shi, 2007; Ho & Usaha, 2009; Anderson et al., 2010; Ho, 2010; Cho & Cho, 2011; Ho & Usaha, 2013; Lina & Samuel, 2013; Vorobel & Kim, 2014; Bradley, 2014; Pham & Usaha, 2015). The remaining studies ( N  = 11) focused on EFL learners (Min, 2005; Liang, 2008, 2010; Liou & Peng, 2009; Song & Usaha, 2009; Cha & Park, 2010; Chang, 2012; Hanjani & Li, 2014; Razak & Saeed, 2014; Ho, 2015; Saeed & Ghazali, 2016). It thus appears that whereas studies on peer review in the ESL context date back to the early 1990s, studies in the EFL context date only to the last few years, indicating that EFL instructors and researchers have only recently begun giving attention to peer review in writing. Such studies in the EFL context have extended peer review research from the ESL context and contributed to previous knowledge by adding further insights into the role of technological tools in facilitating learners' peer review, since all of them integrated technological tools, including blogs, designed systems and social networks such as Facebook, into peer review.

Another interesting finding is that the learners in the ESL peer review contexts documented in all the above-mentioned studies, except for a few (De Guerrero & Villamil, 2000; Ho, 2010; Ho & Usaha, 2009, 2013; Lina & Samuel, 2013; Pham & Usaha, 2015; Sullivan & Pratt, 1996), are socio-culturally heterogeneous, since they come from diverse socio-cultural backgrounds, such as Indian, Indonesian and Singaporean. Moreover, some of these studies mixed ESL learners with native speakers of English in peer review practices (Anderson et al., 2010; Bradley, 2014; Cho & Cho, 2011; Liu & Sadler, 2003; McGroarty & Zhu, 1997; Zhu, 1995, 2001). Unlike learners in the ESL peer review context, learners in the EFL peer review context belong to homogeneous cultural backgrounds, including those of Taiwan, China, Iran, Korea and the Arab world. In other words, all the studies on EFL learners' peer review in writing focused on socio-culturally homogeneous groups of learners. The only exception is the two studies by Razak and Saeed (2014) and Saeed and Ghazali (2016), in which, although the learners are all Arabs, they come from different Arab countries, including Yemen, Tunisia, Egypt, Sudan, Syria and Algeria.

Finally, in terms of the educational context, all the above studies in both ESL and EFL contexts investigated peer review among university learners, mainly undergraduates, while only one study reported a mixture of undergraduates and postgraduates (Anderson et al., 2010). This indicates that more attention has been given to peer review in tertiary settings, where students' proficiency levels are high and time is available for peer review practices.

Implications and conclusion

The findings of the present review have several implications for ESL/EFL writing pedagogy and for future research on peer review, including CAPR. First, peer review is a socio-cognitive approach to writing that engages learners in reflection on and interpretation of written texts, as evidenced by their interactional comments in the task/learning space (Fig.  2 ). Although peer review situations are unique, the findings of the review have implications for all cases of investigating peer review in ESL/EFL contexts. More precisely, the findings provide further supportive evidence to previous research acknowledging how learners, through interaction in the learning space of peer review activities, verbalize what and how they think of the texts and assist one another in detecting and solving problems in their texts. This will eventually help them accomplish their tasks and enhance the quality of their written texts.

A Dual Space-Interactional Feedback Model. This last figure reflects how we conceptualized the various categories of learners’ interactional feedback exchanges as two spaces: learning/task space where learners focus on the task and social space where learners need to build up a friendly atmosphere for peer review. Both spaces are affected by several factors, including type of texts, mode of peer review, training, learners’ roles and other factors

While the above implications concern learners' interactional feedback exchanges in the learning space, interaction in the social space should not be discouraged by instructors, especially in CAPR. In other words, instructors should not urge ESL/EFL learners to steer away from social interaction in peer review. As the findings of the present review indicate, social interaction in peer review is a means of establishing shared understanding, admission of misunderstanding or errors, a friendly atmosphere and mutual respect among learners. This, in turn, motivates learners to accept one another's criticism and to integrate their peers' suggestions when revising and improving their texts. Furthermore, social interaction in peer review provides learners with a space for using English as a means of communication and socialization: learners can find opportunities to use English to communicate on matters beyond the task, especially in CAPR, opportunities which may be limited in ESL/EFL classroom contexts.

As indicated by the findings of the review, approaching peer review from both spaces of interaction will provide researchers with an overall view of group review dynamics that does not devalue one of these spaces at the expense of the other; both spaces contribute to learners' pursuit of peer review. Although learners can establish these socio-relational aspects of learning through interaction in the learning space, especially when comments are offered in a friendly manner, non-task or off-task comments should also be included in investigations of peer review. Off-task interaction tends to be more informal and casual, allowing learners to get acquainted with one another and build social ties.

Among the reviewed studies, few looked at how learners' interactional feedback facilitates their text revisions. It is evident that learners commonly integrate most of their peer feedback into revising their written texts, with some variations as discussed above. Yet there is still a need for future studies on how learners' interactional feedback exchanges facilitate their text revisions, using a textual analysis of learners' virtual interaction comments that links them to subsequent text revisions.

Making learners' interactional feedback meaningful requires careful planning and preparation, including training ESL/EFL learners through explicit instruction prior to peer review. The review of previous studies points to the role of training in fostering learners' interaction in peer review, especially global-oriented comments. Therefore, instructors need to train learners, through explicit instruction, on what to comment on in written texts and how. Training helps learners make well-focused, revision-oriented comments that address both local and global issues in their texts.

Although previous research identified other important factors affecting learners' interactional feedback in peer review, further research should also examine the impact of learners' different roles in peer review and of the configuration of peer review dyads on their interactional feedback. The findings of most studies reviewed in this paper showed that ESL/EFL learners generated more global-oriented than local-oriented comments in peer review. Some of the studies on CAPR attributed this to the delayed time in asynchronous peer review, which enables learners to discuss the global issues of their texts more than the local ones: the delay allows learners to reflect more deeply on their written texts at the global level. However, the delayed time in asynchronous peer review may not be valuable unless peer review discussions are scheduled at specific times, so that all learners are present online and ask and respond to one another within the time allocated for each discussion; this will engage them in discussing their texts more actively. Finally, future research should extend the use of peer review to school students in order to encourage them to review their writing through feedback and make them aware of the value of peer feedback in improving their writing.

Anderson, P., Bergman, B., Bradley, L., Gustafsson, M., & Matzke, A. (2010). Peer reviewing across the Atlantic: Patterns and trends in L1 and L2 comments made in an asynchronous online collaborative learning exchange between technical communication students in Sweden and in the United States. Journal of Business and Technical Communication , 24 (3), 296–322.


Beason, L. (1993). Feedback and revision in writing across the curriculum classes. Research in the Teaching of English , 27 (4), 395–422.


Bradley, L. (2014). Peer-reviewing in an intercultural wiki environment – Student interaction and reflections. Computers and Composition , 34 , 80–95.

Cha, Y., & Park, L. E. (2010). An analysis of synchronous interaction and its influence on EFL writers’ revisions. Multimedia Assisted Language Learning , 13 (2), 9–36.

Cho, Y. H., & Cho, K. (2011). Peer reviewers learn from giving comments. Instructional Science, 39 (5), 629–643.

Chang, C. F. (2012). Peer review via three modes in an EFL writing course. Computers and Composition , 29 , 63–78.

Chang, L. (2010). Group processes and EFL learners' motivation: A study of group dynamics in EFL classrooms. TESOL Quarterly , 44 (1), 129–154.

Darhower, M. (2002). Interactional features of synchronous computer-mediated communication in the intermediate L2 class: A sociocultural case study. CALICO Journal , 19 (2), 249–277.

De Guerrero, M. C. M., & Villamil, O. S. (2000). Activating the ZPD: Mutual scaffolding in L2 peer revision. The Modern Language Journal , 84 , 51–68.

Di Giovanni, E., & Nagaswami, G. (2001). Online peer review: An alternative to face-to-face? ELT Journal , 55 (3), 263–272.

Fitze, M. (2006). Discourse and participation in ESL face-to-face and written electronic conferences. Language Learning & Technology , 10 (1), 67–86.

Glaser, B. G. (1978). Theoretical sensitivity . Mill Valley: Sociology Press.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research . Hawthorne: Aldine de Gruyter.

Guardado, M., & Shi, L. (2007). ESL students’ experiences of online peer feedback. Computers and Composition , 24 (4), 443–461.

Hanjani, A. M., & Li, L. (2014). Exploring L2 writers’ collaborative revision interactions and their writing performance. System , 44 , 101–114.

Hayes, J. R., & Flower, L. S. (1980). The dynamics of composing: Making plans and juggling constraints. In L. W. Gregg, & E. R. Steinberg (Eds.), Cognitive processes in writing , (pp. 31–50). Hillsdale: Lawrence Erlbaum Associates.

Hedge, T. (2001). Teaching and learning in the language classroom , (vol. 106). Oxford: Oxford University Press.

Hewett, B. (2006). Synchronous online conference-based instruction: A study of whiteboard interactions and student writing. Computers and Composition , 23 , 4–31.

Ho, M.-C. (2015). The effects of face-to-face and computer-mediated peer review on EFL writers’ comments and revisions. Australasian Journal of Educational Technology , 31 (1), 1–15.

Ho, P. V. P. (2010). Blog-based peer response for L2 writing revision (PhD thesis). Nakhon Ratchasima: Suranaree University of Technology.

Ho, P. V. P., & Usaha, S. (2009). Blog-based peer response for EFL writing: A case study in Viet Nam. AsiaCall Online Journal , 4 (1), 1–29.

Ho, P. V. P., & Usaha, S. (2013). The effectiveness of the blog-based peer response for L2 writing. Journal of Science Ho Chi Minh City Open University , 3 (3), 27–44.

Hu, G., & Lam, S. T. E. (2010). Issues of cultural appropriateness and pedagogical efficacy: Exploring peer review in a second language writing class. Instructional Science , 38 (4), 371–394.

Hu, G. W. (2005). Using peer review with Chinese ESL student writers. Language Teaching Research , 9 (3), 321–342.

Hyland, K., & Hyland, F. (2006). Feedback in second language writing: Contexts and issues . Cambridge: Cambridge university press.


Jones, R. H., Garralda, A., Li, D. C. S., & Lock, G. (2006). Interactional dynamics in on-line and face-to-face peer-tutoring sessions for second language writers. Journal of Second Language Writing , 15 , 1–23.

Levi Altstaedter, L. (2016). Investigating the impact of peer feedback in foreign language writing. Innovation in Language Learning and Teaching , 1–15.

Liang, M. (2010). Using synchronous online peer response groups in EFL writing: Revision-related discourse. Language Learning & Technology , 14 (1), 45–64.

Liang, M. Y. (2008). SCMC interaction and EFL writing revision: Facilitative or futile? Proceedings of E-learn 2008 , 2886–2892.

Lin, S. S. P., & Samuel, M. (2013). Scaffolding during peer response sessions. Procedia-Social and Behavioral Sciences , 90 , 737–744.

Liou, H. C., & Peng, Z. Y. (2009). Training effects on computer-mediated peer review. System , 37 , 514–525.

Liu, J., & Sadler, R. W. (2003). The effect and affect of peer review in electronic versus traditional modes on L2 writing. Journal of English for Academic Purposes , 2 , 193–227.

Lockhart, C., & Ng, P. (1995). Analyzing talk in ESL peer response groups: Stances, functions, and content. Language Learning , 45 (4), 605–651.

Long, M. H. (1983). Native speaker/non-native speaker conversation and the negotiation of comprehensible input. Applied Linguistics , 4 (2), 126–141.

Long, M. H. (1985). A role for instruction in second language acquisition: Task-based language teaching. Modelling and assessing second language acquisition, 18, 77–99.

McGroarty, M. E., & Zhu, W. (1997). Triangulation in classroom research: A study of peer revision. Language Learning , 47 (1), 1–43.

Mendonca, C. O., & Johnson, K. E. (1994). Peer review negotiations: Revision activities in ESL writing instruction. TESOL Quarterly , 28 (4), 745–769.

Min, H. T. (2005). Training students to become successful peer reviewers. System , 33 (2), 293–308.

Pham, V. P. H., & Usaha, S. (2015). Blog-based peer response for L2 writing revision. Computer Assisted Language Learning , 1 , 1–25.

Razak, N. A., & Saeed, M. A. (2014). Collaborative writing revision process among learners of English as a foreign language (EFL) in an online community of practice (COP). Australasian Journal of Educational Technology , 30 (5), 580–599.

Saeed, M. A., & Ghazali, K. (2016). Modeling peer revision among EFL learners in an online learning community. Electronic Journal of Foreign Language Teaching , 13 (2), 275–292.

Song, W., & Usaha, S. (2009). How EFL university students use electronic peer response into revisions. Suranaree Journal of Science and Technology , 16 (3), 263–275.

Stanley, J. (1992). Coaching student writers to be effective peer evaluators. Journal of Second Language Writing , 1 (3), 217–233.

Sullivan, N., & Pratt, E. (1996). A comparative study of two ESL writing environments: A computer-assisted classroom and a traditional oral classroom. System , 24 (4), 491–501.

Swain, M., & Lapkin, S. (1998). Interaction and second language learning: Two adolescent French immersion students working together. The Modern Language Journal , 82 , 320–337.

Swain, M., & Lapkin, S. (2002). Talking it through: Two French immersion learners’ response to reformulation. International Journal of Educational Research , 37 , 285–304.

Swain, M. (2006). Languaging, agency and collaboration in advanced second language learning. In H. Byrnes (Ed.), Advanced language learning: The contributions of Halliday and Vygotsky (pp. 95–108). London, England: Continuum.

Tuzi, F. (2004). The impact of e-feedback on the revisions of L2 writers in an academic writing course. Computers and Composition , 21 (2), 217–235.

Villamil, O. S., & De Guerrero, M. C. (1996). Peer revision in the L2 classroom: Social-cognitive activities, mediating strategies, and aspects of social behaviour. Journal of Second Language Writing , 5 (1), 51–75.

Vorobel, O., & Kim, D. (2014). Focusing on content: Discourse in L2 peer review groups. TESOL Journal , 5 (4), 698–720.

Vygotsky, L. (1978). Mind in society. The development of higher psychological processes . Cambridge: Harvard University Press.

Yang, Y. F. (2011). A reciprocal peer review system to support college students’ writing. British Journal of Educational Technology , 42 (4), 687–700.

Zhu, W. (1995). Effects of training for peer response on students’ comments and interaction. Written Communication , 12 (4), 492–528.

Zhu, W. (2001). Interaction and feedback in mixed peer response groups. Journal of Second Language Writing , 10 (4), 251–276.


Acknowledgements

The authors acknowledge the assistance provided by other colleagues in the form of ideas and suggestions to accomplish the review.

Author information

Authors and affiliations

Faculty of Languages and Linguistics, University of Malaya, Kuala Lumpur, Malaysia

Murad Abdu Saeed & Kamila Ghazali

Faculty of Medicine and Health Sciences, Universiti Putra Malaysia, Selangor, Malaysia

Musheer Abdulwahid Aljaberi


Contributions

All authors (MS, GK and MA) worked together, collaborated in collecting previous studies for the review, analyzing the data, writing up the review, editing it and completing the format according to the journal. They also read and approved the final draft to be submitted to the journal.

Corresponding author

Correspondence to Murad Abdu Saeed .

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1: The 37 reviewed studies. (DOCX 39 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Saeed, M.A., Ghazali, K. & Aljaberi, M.A. A review of previous studies on ESL/EFL learners’ interactional feedback exchanges in face-to-face and computer-assisted peer review of writing. Int J Educ Technol High Educ 15 , 6 (2018). https://doi.org/10.1186/s41239-017-0084-8


Received : 13 April 2017

Accepted : 13 December 2017

Published : 19 January 2018

DOI : https://doi.org/10.1186/s41239-017-0084-8


  • Feedback exchanges
  • Text revision
  • Face-to-face-peer review
  • Computer-assisted peer review




ESL Topics for a Research Paper


As the English language grows and spreads to more parts of the world, the field of English as a Second Language (ESL) is expanding as well. Students studying ESL education can consider several different topics and angles that would serve as good subjects for a research paper.

English Immersion

Schools abroad promote immersion as the best way to teach a child to become fluent in English. This may involve employing an English-only rule in the classroom, sending the children to overnight English-only camps or sending them to study English in an English-speaking country. A comparison of these methods to more traditional teaching methods would make a strong topic for a research paper.

Cultural Impact

The increase of ESL teachers and classes in other countries has had varying levels of impact on those countries' way of life, cost of living and, in some cases, their native languages. Choose three to five countries, preferably from different continents, and write a research paper comparing the ways in which ESL has affected their culture over the last decade.

Home versus Abroad

Educators interested in entering the field of ESL have the option of teaching or studying in the United States or abroad. The experience of teaching ESL varies greatly from country to country. Write a research paper comparing the salaries, hours, case studies and living conditions of ESL teachers in three to five different locations. Or, write a persuasive essay comparing the details of teaching or studying ESL in the United States to teaching or studying ESL overseas.

ESL Training

ESL teachers can earn a variety of degrees and certificates. Some earn degrees in English, linguistic studies or education, while others pursue their Teachers of English to Speakers of Other Languages (TESOL) or Certificate in English Language Teaching to Adults (CELTA) certificates, as reflected on the TESOL and CELTA websites. Write a research paper comparing and contrasting these different programs and certificates, including costs, materials and international perceptions.

Specialty ESL

A variety of courses aim at teaching English for a specific purpose, rather than aiming for fluency as a whole. Some professional adults, for example, enroll in Business English courses, which focus on learning appropriate phrases for interviews, emails and meetings. Several different angles would be appropriate for a research paper in this area. One option is to write about the effectiveness of these programs by researching how many students who chose this route were hired for a job as a result (using Business English as an example). Another is to compare the benefits of a narrowly focused ESL program to a broad ESL program that promotes total fluency.


Kara Page has been a freelance writer and editor since 2007. She maintains several blogs on travel, music, food and more. She is also a contributing writer for Suite101 and has articles published on eHow and Answerbag. Page holds a Bachelor of Music Education degree from the University of North Texas.

Heliyon, 7(9), September 2021

Evidence-based reading interventions for English language learners: A multilevel meta-analysis

Associated data.

Data included in article/supplementary material/referenced in article.

The number of English Language Learners (ELLs) has been growing worldwide. ELLs are at risk for reading disabilities due to dual difficulties with linguistic and cultural factors. This raises the need for finding practical and efficient reading interventions for ELLs to improve their literacy development and English reading skills. The purpose of this study is to examine the evidence-based reading interventions for English Language Learners to identify the components that create the most effective and efficient interventions. This article reviewed literature published between January 2008 and March 2018 that examined the effectiveness of reading interventions for ELLs. We analyzed the effect sizes of reading intervention programs for ELLs and explored the variables that affect reading interventions using a multilevel meta-analysis. We examined moderator variables such as student-related variables (grades, exceptionality, SES), measurement-related variables (standardization, reliability), intervention-related variables (contents of interventions, intervention types), and implementation-related variables (instructor, group size). The results showed medium effect sizes for interventions targeting basic reading skills for ELLs. Medium-size group interventions and strategy-embedded interventions were more important for ELLs who were at risk for reading disabilities. These findings suggested that we should consider the reading problems of ELLs and apply the Tier 2 approach for ELLs with reading problems.

English language learners, Evidence-based intervention, Meta-analysis, Reading.

1. Introduction

There is a growing body of literature that recognizes the importance of quality education for learners who study in a language other than their native language ( Estrella et al., 2018 ; Ludwig et al., 2019 ). As cultural, racial, ethnic, and linguistic diversification takes place globally, the number of students studying in a second language different from their native language is also increasing worldwide. In the United States, nearly 5 million learners who are not native speakers of English are currently attending public schools, and this figure has increased significantly over the past decade ( NCES, 2016 ). As the number of children whose native language is not English increased, the need for educational support also increased. Furthermore, the implementation of the No Child Left Behind (NCLB) policy emphasizes the need for quality education for all students in all schools. Accordingly, NCLB has emerged as a critical policy for learners studying in their second language: it has not only increased the demand for education for non-native English speakers but also driven the practice of enhanced education for learners whose English is not their native language.

ELLs (English language learners) are learners in English-speaking countries whose native language is not English ( National Center for Education Statistics, 2021 ). The education provided to these ELLs is called ESL (English as a second language), ESOL (English to speakers of other languages), EFL (English as a foreign language), and so on. Each term is adopted differently depending on the policy, purpose, and operating status of the state and/or school district. While a variety of terms have been suggested, this paper uses the term 'ELLs' to refer to learners who are not native speakers of English and uses the terms 'the English education program' and 'the ELL program' to refer to the English education program provided to ELLs.

To ensure quality education, students identified as ELLs can participate in supportive programs to improve their English skills. These ELL programs can be broadly divided into two methods: "pull-out" and "push-in" ( Honigsfeld, 2009 ). In the pull-out program, students are taken to a space outside the classroom during regular class time and taught English separately. In the push-in program, the ELL teacher joins the mainstream ELLs' classroom and assists them during class time. Through these educational supports, ELLs are required to achieve not only the English language improvements addressed in Title III of NCLB but also the language arts achievement appropriate to their grade level addressed in Title I of NCLB. ELLs are thus expected to achieve the same level of academic achievement as students of the same grade level, as well as comparable language skills.

A considerable amount of literature has been published on the achievement and learning status of ELLs ( Ludwig, 2017 ; Soland and Sandilos, 2020 ). These studies revealed that, despite intensive, high-quality educational support, ELLs encounter difficulties in learning and academic achievement. Results from the National Assessment of Educational Progress (NAEP) reading tests show that the achievement gap between non-ELLs and ELLs is steadily widening in both mathematics and reading ( Polat et al., 2016 ). Ultimately, ELLs are reported to have the highest risk of dropping out of school ( Sheng et al., 2011 ). These difficulties are not limited to the early school years. Fry (2007) reported that results from a national standardized test of 8th-grade students showed ELLs performing lower than white students in both reading and math. Callahan and Shifrer (2016) analyzed data from a nationally representative educational longitudinal study in 2002 and found that, even after taking language, socio-demographic and academic factors into account, ELLs still show a large gap in high school academic achievement. Additionally, research has suggested that ELLs are less likely to participate in higher education institutions than their non-ELL counterparts ( Cook, 2015 ; Kanno and Cromley, 2015 ).

Factors influencing the difficulties of ELLs in learning have been explored in several studies ( Dussling, 2018 ; Thompson and von Gillern, 2020 ; Yousefi and Bria, 2018 ). There are two main reasons for these difficulties. First, ELLs face many challenges in learning a new language while keeping up with the academic content required for their school year ( American Youth Policy Forum, 2009 ). Moreover, language is an area influenced by sociocultural factors, and learning academic content such as English language arts and math is likewise shaped by sociocultural elements and different cultural backgrounds, which affects the achievement of ELLs in school ( Chen et al., 2012 ; Orosco, 2010 ). Second, the heterogeneity of ELLs is reported to make it challenging to formulate instructional strategies and provide adequate education for them. Because the ELL group is heterogeneous in its linguistic and cultural traits, these traits are hard to specify and address in instruction, and properly reflecting learners' characteristics is therefore difficult.

The difficulties ELLs face in academic achievement raise the need to search for practical and efficient reading interventions that improve their English language skills and academic achievement, including English language arts achievement. These needs and demands have led to various studies analyzing the difficulties of ELLs. Over the past decade, these studies have provided important information on education for ELLs. The main themes of the studies are difficulties in academic achievement and interventions for ELLs, including reading ( Kirnan et al., 2018 ; Liu and Wang, 2015 ; Roth, 2015 ; Shamir et al., 2018 ; Tam and Heng, 2016 ), writing ( Daugherty, 2015 ; Hong, 2018 ; Lin, 2015 ), or both reading and math ( Dearing et al., 2016 ; Shamir et al., 2016 ). The influences of teachers on children's instruction ( Kim, 2017 ; Daniel and Pray, 2017 ; Téllez and Manthey, 2015 ; Wassell, Hawrylak, and Scantlebury, 2017 ) and the influences of family members ( Johnson and Johnson, 2016 ; Walker, 2017 ) have also been examined.

Reading is known to function as an important predictor of success not only in English language arts itself but also in overall school life ( Guo et al., 2015 ), since reading continues throughout the school years and most of the activities students perform in school are related to it. Furthermore, reading is considered one of the major fundamental skills in modern society because of its strong relationship with academic and vocational success beyond school-based learning ( Lesnick et al., 2010 ). For ELLs in particular, language is an inherent barrier, and reading is therefore one of their most common and prominent difficulties, since it is not done in their native language ( Rawian and Mokhtar, 2017 ; Snyder et al., 2017 ). In this respect, several studies have investigated reading for ELLs, exploring effective interventions and strategies ( Kirnan et al., 2018 ; Mendoza, 2016 ; Meredith, 2017 ; Reid and Heck, 2017 ) and proposing reading development models or predictors of reading success ( Boyer, 2017 ; Liu and Wang, 2015 ; Rubin, 2016 ). For these individual studies to provide appropriate guidance to field practitioners and desirable suggestions for future research, an aggregation of the related studies as a whole, not only of individual studies, and research reflections based on that aggregation are required. Specifically, meta-analysis is an appropriate research method for this purpose: through meta-analysis, we can derive conclusions from previous studies and review them comprehensively, which can ultimately help policymakers and decision-makers make appropriate decisions for rational strategies and policymaking.

Although extensive research has been carried out on the difficulties of ELLs and how to support them, a sufficiently comprehensive meta-analysis of these studies has not been conducted. Some studies have focused on specific interventions, such as morphological interventions ( Goodwin and Ahn, 2013 ), peer-mediated learning ( Cole, 2014 ), and video game-based instruction ( Thompson and von Gillern, 2020 ). Ludwig, Guo, and Georgiou (2019) demonstrated the effectiveness of reading interventions for ELLs. However, they divided reading-related variables into "reading accuracy", "reading fluency", and "reading comprehension" and examined the effectiveness of the reading-related attributes within each of these variables. The study therefore has limitations in exploring the various aspects of reading and the effectiveness of reading interventions across them.

Individual studies have their own characteristics and significance. However, for individual studies to be more widely adopted in the field and to serve as a powerful source for future research, it is necessary to analyze them more comprehensively. Meta-analysis reviews past studies on a topic by 'integrating' previous studies, analyzes and evaluates them through 'critical analysis', provides implications for the field, and gives rise to intellectual stimulation for future studies by 'identifying issues' ( Cooper et al., 2019 ). In this way, meta-analysis can be a useful tool for diagnosing the past, in which relevant research has been conducted, taking appropriate action in the present, and providing intellectual stimulation for future studies.
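As a concrete illustration of the pooling step at the core of such an aggregation, the sketch below computes a random-effects summary using the DerSimonian–Laird estimator, a standard choice in meta-analysis (though not necessarily the estimator used in this article). The effect sizes and variances are invented for illustration only and are not drawn from the reviewed studies.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with a DerSimonian-Laird random-effects model."""
    k = len(effects)
    w = [1.0 / v for v in variances]  # fixed-effect (inverse-variance) weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q statistic measures between-study heterogeneity
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)  # estimated between-study variance
    # Random-effects weights incorporate the between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, tau2, se

# Three hypothetical studies: effect size (e.g., Hedges' g) and its sampling variance
pooled, tau2, se = dersimonian_laird([0.5, 0.3, 0.7], [0.04, 0.05, 0.02])
```

A multilevel meta-analysis, as used in this article, additionally models the nesting of multiple effect sizes within studies; the sketch above shows only the basic random-effects pooling idea.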

Therefore, the purposes of this study are to examine evidence-based reading interventions for ELLs presented in the literature to analyze their effects and to identify the actual and specific components for creating the most effective and efficient intervention for ELLs. The findings of this study make a major contribution to research on ELLs by demonstrating the implications for the field and future study.

2. Method

2.1. Selection of studies

A meta-analysis of peer-reviewed articles on ELL reading interventions published between January 2008 and March 2018 was conducted. Following the general steps of a meta-analysis, data related to reading interventions for English language learners were collected as follows. First, educational and psychological publication databases, such as Google Scholar ( https://scholar.google.co.kr ), ERIC ( https://eric.ed.gov/ ), ELSEVIER ( http://www.elsevier.com ), and Springer ( https://www.springer.com/gp ), were used to find the articles to be analyzed using the search terms “ELLs,” “ESL,” “Reading,” “Second language education,” “Effectiveness,” and “Intervention” separately and in combination with each other. We reviewed the results of the web-based search and placed all relevant articles on a preliminary list. We then selected the final list of articles to be analyzed by applying inclusion and exclusion criteria to the preliminary list. Studies were included in the final list based on four primary criteria. First, each study had to evaluate the effectiveness of a school-based reading intervention using an experimental or quasi-experimental group design; in this process, single-case, qualitative, and/or descriptive studies of ELLs were excluded from the analysis. Second, we included all types of reading-related interventions (i.e., phonological awareness, word recognition, reading fluency, vocabulary, and reading comprehension). Third, each study needed to report data in a statistical format that allowed an effect size to be calculated. Fourth, we only included studies whose subjects were in grades K-12. The preliminary list had 75 articles, but since some of these studies did not meet the inclusion criteria, we excluded them from the final list. In total, this meta-analysis included 28 studies with 234 effect sizes (see Figure 1 ).

Figure 1

PRISMA flow diagram.

2.2. Data analysis

2.2.1. Coding procedure

To identify the relevant components of the evidence-based reading interventions for ELLs, we developed an extensive coding document. Our interest was in synthesizing the effect sizes and finding the variables that affect the effectiveness of reading interventions for ELLs. The code sheet was made based on a code sheet used in Vaughn et al. (2003) and Wanzek et al. (2010) . All studies were coded for the following: (a) study characteristics, including general information about the study, (b) student-related variables, (c) intervention-related variables, (d) implementation-related variables, (e) measurement-related variables, and (f) quantitative data for the calculation of effect sizes.

Within the study characteristics category, we coded the researchers’ names, publication year, and title from each study to identify the general information about each study. For the student-related variables, mean age, grade level(s), number of participants, number of males, number of females, sampling method, exceptionality type (reading ability level), identification criteria in case of learning disabilities, race/ethnicity, and SES were coded. We divided grade level(s) into lower elementary (K-2), upper elementary (3–5), and secondary (6–12). When students with learning disabilities participated in the study, we coded the identification criteria reported in the study. For race/ethnicity, we coded white, Hispanic, black, Asian, and others. Within intervention-related variables, we coded for the title of the intervention, the key instructional components of the intervention, the type of intervention, and the reading components of the intervention. The reading components coded were phonemic awareness, phonics, fluency, vocabulary, reading comprehension, listening comprehension, and others. If an intervention contained multiple reading components, all reading components included in the intervention were coded. Fourth, within implementation-related variables, we coded group size, duration of the intervention (weeks), the total number of sessions, frequency of sessions per week, length of each session (minutes), personnel who provided the intervention (i.e., teacher, researchers, other), and the setting. Fifth, in measurement-related variables, we coded the title of the measurement, reliability coefficient, validity coefficient, type of measurement, type of reliability, and type of validity. We also coded quantitative data such as the pre- and posttest means, the pre- and posttest standard deviations, and the number of participants in the pre- and posttests for both the treatment and control groups. These coding variables are defined in Table 1 . 
The research background and sample information are in Appendix 1 .

Table 1

Coding variables.

| Study component | Code | Details |
|---|---|---|
| General information | Title | |
| | Names of researchers | |
| | Publication year | |
| Participant | Mean age | |
| | Age and grade levels | Preschool, Lower elementary (K-2), Upper elementary (3–5), Secondary (6–12) |
| | Number of participants | Total number of participants, Number of girls, Number of boys |
| | Exceptionality | General, Learning difficulties, Learning disabilities, Others |
| | Race/Ethnicity | European-American, Hispanic, African-American, Asian/Pacific Islander, Others |
| | SES | Lower, Middle, Upper |
| Intervention | Title of intervention | |
| | Key instructional components | |
| | Type of reading intervention | Strategy instruction, Peer tutoring, Computer-based learning, and Others |
| | Reading components | Phonemic awareness, Phonics, Fluency, Vocabulary, Reading comprehension, Listening comprehension, and Others |
| Implementation | Group size | Small group (1 or more and 5 or less), Middle group (6 or more and 15 or less), and Large group or class size (16 or more) |
| | Duration of intervention (weeks) | |
| | Total number of sessions | |
| | Frequency per week | |
| | Length of each session (minutes) | |
| | Instructor | Teachers, Graduate students, Researchers, Others |
| | Setting | Classroom, Resource room, Afternoon school, and Others |
| Measurement | Title of measurement methods | |
| | Type of measurement | Standardized measurement and Researcher-developed measurement |
| | Reliability coefficient | Reported and Unreported |
| | Validity coefficient | Reported and Unreported |
| | Type of reliability | Test-retest reliability, Cronbach α, and Others |
| | Type of validity | Criterion validity, Construct validity, Content validity and Others |

2.2.2. Coding reliability

The included articles were coded according to the coding procedure described above. Two researchers coded each study separately and reached 91% agreement. Afterward, the researchers reviewed and discussed the differences to resolve the initial disagreements.
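The percent-agreement figure reported above can be sketched as follows (an illustrative Python sketch with hypothetical codes, not the authors' actual procedure):

```python
# Illustrative sketch (hypothetical data, not the authors' code):
# inter-coder agreement as the percentage of items coded identically
# by two independent coders.
def percent_agreement(codes_a, codes_b):
    assert len(codes_a) == len(codes_b), "coders must rate the same items"
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100.0 * matches / len(codes_a)

coder1 = ["strategy", "peer", "computer", "strategy", "other"]
coder2 = ["strategy", "peer", "computer", "other", "other"]
print(percent_agreement(coder1, coder2))  # 4 of 5 codes match -> 80.0
```

Disagreements flagged this way would then be resolved through discussion, as described above.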

2.2.3. Data analysis

First, we calculated 234 effect sizes from the interventions included in the 28 studies. The average effect size was calculated using Cohen's d formula. In addition, we conducted a two-level meta-analysis through multilevel hierarchical linear modeling (HLM) using the HLM 6.0 interactive mode statistical program to analyze the computed effect sizes and find the predictors that affect the effect sizes of reading interventions. HLM is appropriate for quantitatively obtaining both overall summary statistics and a quantification of the variability in the effectiveness of interventions across studies as a means of assessing the generalizability of findings. Moreover, HLM easily estimates the overall mean effect size in the unconditional model and is useful for explaining variability in the effectiveness of interventions between studies in the conditional model. The aim of the current study is to provide a broad overview of interventions for ELLs. To achieve this aim, we conducted an unconditional model for the overall mean effect size and a conditional model to identify factors that have an impact on the strength of effect sizes. With regard to variables related to the effectiveness of interventions, we conducted the conditional model with student-related, measurement-related, intervention-related, and implementation-related variables. In quantitative meta-analyses, it is assumed that observations are independent of one another ( Hox and de Leeuw, 2003 ). However, this assumption usually does not hold in social studies when observations are clustered within larger groups ( Bowman, 2003 ), because the effect sizes within a study might not be homogeneous ( Beretvas and Pastor, 2003 ). Thus, a two-level multilevel meta-analysis using a mixed-effect model was employed because multiple effect sizes are provided within a single education study. To calculate effect size (ES) estimates using Cohen's d, we use the following equation [1], where the subscripts T and C denote the treatment and control groups:

d = (M_T − M_C) / SD_pooled [1]

The pooled standard deviation, SD_pooled, is defined as [2]:

SD_pooled = √[ ((n_T − 1)·SD_T² + (n_C − 1)·SD_C²) / (n_T + n_C − 2) ] [2]

In HLM, the unconditional model can be implemented to identify the overall effect size across all estimates and to test for homogeneity. If the assumption of homogeneity is rejected, indicated by a significant chi-square statistic in the unconditional model, this means that there are differences within and/or between studies. In that case, the analysis must proceed to the next step to find moderators that influence effect sizes. This step is called the level two model or the conditional model. A conditional model is conducted to investigate the extent of the influence of the included variables.

The level one model (the unconditional model) was expressed as [3], and the level two model (the conditional model) was expressed as [4]:

d_j = δ_j + e_j [3]

δ_j = γ_0 + u_j [4]

In equation [3], δ_j represents the mean effect size value for study j, and e_j is the within-study error term assumed to be normally distributed with a mean of 0 and a variance of V_j. In the level two model equation [4], γ_0 represents the overall mean effect size for the population, and u_j represents the sampling variability between studies presumed to be normally distributed with a mean of 0 and a variance τ.
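As a sketch of equations [1] and [2], Cohen's d can be computed from group summary statistics as follows (all numbers are hypothetical; the authors' actual calculation was done from the coded pre/posttest statistics):

```python
import math

# Illustrative computation of Cohen's d (equation [1]) using the pooled
# standard deviation (equation [2]); all numbers are hypothetical.
def pooled_sd(sd_t, n_t, sd_c, n_c):
    return math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                     / (n_t + n_c - 2))

def cohens_d(m_t, sd_t, n_t, m_c, sd_c, n_c):
    return (m_t - m_c) / pooled_sd(sd_t, n_t, sd_c, n_c)

# Treatment: mean 55, SD 10, n 30; control: mean 50, SD 10, n 30
d = cohens_d(55.0, 10.0, 30, 50.0, 10.0, 30)
print(d)  # 0.5 -> a "medium" effect by Cohen's benchmarks
```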

Regarding publication bias, we examined the funnel plot with the 'funnel()' command of the metafor R package ( Viechtbauer, 2010 ), and to verify this more statistically, we used the dmetar R package ( Harrer et al., 2019 ). Egger's regression test ( Egger et al., 1997 ) was conducted using the 'eggers.test()' command to check for publication bias. Egger's regression analysis showed that there was significant publication bias (t = 3.977, 95% CI [0.89–2.54], p < .001). To correct for this, the trim-and-fill technique ( Duval and Tweedie, 2000 ) was used, and the total effect size corrected for publication bias was also calculated. The funnel plot is shown in [ Figure 2 ].

Figure 2

Funnel plot.
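The logic of Egger's test used above can be sketched as follows (the authors used R's 'eggers.test()'; this Python sketch on made-up effect sizes regresses the standardized effect on precision, where an intercept far from zero indicates funnel-plot asymmetry):

```python
import numpy as np

# Illustrative sketch of Egger's regression test on hypothetical data:
# regress standardized effects (d / SE) on precision (1 / SE); a nonzero
# intercept suggests funnel-plot asymmetry, i.e., possible publication bias.
d = np.array([0.2, 0.5, 0.8, 1.1, 0.9, 0.4])
se = np.array([0.30, 0.25, 0.20, 0.15, 0.18, 0.28])

y = d / se                                          # standardized effects
X = np.column_stack([np.ones_like(se), 1.0 / se])   # [intercept, precision]
(intercept, slope), *_ = np.linalg.lstsq(X, y, rcond=None)
print(intercept, slope)
```

In the full test, the intercept is compared against zero with a t test; here only the regression itself is sketched.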

3. Results

We analyzed 28 studies to identify influential variables that account for the effectiveness of reading interventions for ELLs. Before performing the multilevel meta-analysis, the effect sizes of the 28 studies were analyzed by traditional meta-analysis. The forest plots for the individual effect sizes of the 28 studies are shown in Appendix 2. We present our findings with our research questions as an organizational framework. First, we report the unconditional model used to find the overall mean effect size. Then, we describe the variables that influenced the effect size of reading interventions for ELLs using the conditional model.

3.1. Unconditional model

An unconditional model of the meta-analysis was tested first. In the analysis, restricted maximum likelihood estimation was used. This analysis was conducted to confirm the overall mean effect size and to examine the variability among all samples. The results are shown in Table 2 .

Table 2

Results of the unconditional model analysis.

| Fixed effect | Coefficient | Standard error | t ratio (df) | 95% CI lower | 95% CI upper |
|---|---|---|---|---|---|
| Intercept | 0.653 | 0.063 | 10.173∗∗ (233) | 0.530 | 0.776 |

| Random effect | Variance component | SD | χ² |
|---|---|---|---|
| Intercept | 0.589 | 0.767 | 1245.90∗∗∗ |

∗∗∗ p < 0.001, df: degrees of freedom.

The intercept coefficient in the fixed model (0.653) is the overall mean effect size across the 234 effect sizes. This means that the effect of reading interventions for English language learners is medium based on Cohen's d, which is generally interpreted as small at d = 0.2, medium at d = 0.5, and large at d = 0.8. The variance component indicates the variability among samples. The estimate was 0.589 and was significant (χ² = 1245.90, p < .001). This statistical significance means that a moderator analysis with relevant predictors is required to explore the source of variability.
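As a simplified, single-level analog of this unconditional model (the actual analysis used HLM 6.0), the overall mean effect size and a homogeneity statistic can be computed by inverse-variance weighting (all numbers hypothetical):

```python
import numpy as np

# Illustrative single-level sketch (hypothetical data): inverse-variance
# weighted mean effect size and Cochran's Q homogeneity statistic.
d = np.array([0.3, 0.6, 0.9, 0.5, 0.7])       # study effect sizes
v = np.array([0.04, 0.05, 0.06, 0.04, 0.05])  # within-study variances V_j

w = 1.0 / v
mean_d = np.sum(w * d) / np.sum(w)   # overall mean effect size
Q = np.sum(w * (d - mean_d) ** 2)    # homogeneity statistic
df = len(d) - 1
# Q well above df (judged against a chi-square distribution) signals
# heterogeneity, i.e., a moderator (conditional) analysis is warranted.
print(round(mean_d, 3), round(Q, 3), df)
```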

3.2. Conditional model

Moderator analysis using the conditional model was expected to identify factors that have an impact on the strength of effect sizes. In this study, the moderator analysis covered nine critical variable categories: students’ grade, exceptionality, SES, reading area, standardized test, test reliability, intervention type, instructor, and group size. The variables in each category were dummy coded. Dummy coding was used to identify differences in the dependent variable between the categories of an independent variable; for example, we used four dummy variables to capture a category with five levels. The parameter estimates capture the differences in effect sizes between the groups coded 1 and a reference group coded 0. From a mathematical perspective, it does not matter which categorical variable is used as the reference group ( Frey, 2018 ). To make the interpretation of the results easier, we labeled one variable in each category as the reference group; in the tables, an asterisk next to a variable indicates that it is the reference group for that category.
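The dummy coding described above can be sketched as follows (an illustrative Python sketch using the grade categories of Table 3; the HLM-specific details are omitted):

```python
# Illustrative sketch of dummy coding a categorical moderator: each
# non-reference level gets a 0/1 indicator, and the reference group
# ("Secondary", marked with an asterisk in the tables) is all zeros.
LEVELS = ["Preschool", "Lower Elementary", "Upper Elementary"]  # non-reference

def dummy_code(grade):
    return {level: int(grade == level) for level in LEVELS}

print(dummy_code("Upper Elementary"))  # {'Preschool': 0, 'Lower Elementary': 0, 'Upper Elementary': 1}
print(dummy_code("Secondary"))         # all zeros: the reference group
```

Each coefficient estimated for an indicator is then the difference in mean effect size between that group and the reference group.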

  • 1) Student-related variables

The results of the conditional meta-analysis for students' grade variables are presented in Table 3 . In Table 3 , a significant coefficient means that the mean effect size for that group differs significantly from that of the reference group. For student grades, upper elementary students showed significantly larger mean effect sizes than secondary students (2.720, p = 0.000), whereas preschool students showed significantly lower mean effect sizes than secondary students (-0.103, p = 0.019). The Q statistic was significant for students’ grades ( Q = 27.20, p < 0.001) (see Table 4 ).

Table 3

Results of the moderator analysis for student grade.

| Fixed effect | k | Coefficient (d) | Standard error | t ratio | df | p-value | Q |
|---|---|---|---|---|---|---|---|
| Secondary∗ | 20 | 0.482 | 0.066 | 7.261 | 230 | 0.000 | 27.70 |
| Preschool | 110 | -0.103 | 0.043 | -2.370 | 230 | 0.019 | |
| Lower Elementary | 87 | 0.068 | 0.084 | 0.810 | 230 | 0.419 | |
| Upper Elementary | 17 | 2.720 | 0.169 | 16.076 | 230 | 0.000 | |

df: degrees of freedom.

Table 4

Results of the moderator analysis for exceptionality.

| Fixed effect | k | Coefficient (d) | Standard error | t ratio | df | p-value | Q |
|---|---|---|---|---|---|---|---|
| Low achievement∗ | 6 | 0.707 | 0.198 | 3.581 | 232 | 0.001 | 0.0278 |
| General | 228 | -0.080 | 0.208 | -0.385 | 232 | 0.700 | |

For the student-related variables, the intercept for students with low achievement (the reference group) was significant (0.707, p = 0.001). However, there was no significant difference in mean effect sizes between students with low achievement and general students (-0.080, p = 0.700). The Q statistic was significant for students’ exceptionality ( Q = 0.0278, p < 0.001).

Table 5 shows that students with low and low-middle SES were not significantly different from students for whom no SES information was reported (0.055, p = 0.666). Moreover, students with middle and upper SES did not have significantly smaller effect sizes than students in the nonresponse group (-0.379, p = 0.444). The Q statistic was significant for students’ SES ( Q = 68.50, p < 0.001).

Table 5

Results of the moderator analysis for SES.

| Fixed effect | k | Coefficient (d) | Standard error | t ratio | df | p-value | Q |
|---|---|---|---|---|---|---|---|
| Nonresponse∗ | 88 | 0.613 | 0.092 | 6.656 | 231 | 0.000 | 68.50 |
| Low-Middle | 124 | 0.055 | 0.127 | 0.432 | 231 | 0.666 | |
| Middle-Upper | 22 | -0.379 | 0.494 | -0.767 | 231 | 0.444 | |
  • 2) Measurement-related variables

Table 6 shows the results of the moderator analysis for measurement types. The coefficient for the standardized measurement-related variable was not significant. The Q statistic was significant for the standardization of measurement tools ( Q = 5.28, p < 0.001).

Table 6

Results of the moderator analysis for standardization of measurement tools.

| Fixed effect | k | Coefficient (d) | Standard error | t ratio | df | p-value | Q |
|---|---|---|---|---|---|---|---|
| Researcher developed∗ | 61 | 0.721 | 0.107 | 6.727 | 232 | 0.000 | 5.28 |
| Standardized | 173 | -0.129 | 0.131 | -0.983 | 232 | 0.327 | |

Table 7 shows the results of the moderator analysis for the reliability of the measurement tools. The coefficient for the measurement reliability-related variable was significant (0.409, p = 0.003), which means that the effect sizes of measurements that reported reliability (ES = 0.770) were significantly larger than the effect sizes of measurements that had no information about reliability (ES = 0.361). The Q statistic was significant for the reliability of the measurement tools ( Q = 5.82, p < 0.001) (see Table 8 ).

Table 7

Results of the moderator analysis for reliability.

| Fixed effect | k | Coefficient (d) | Standard error | t ratio | df | p-value | Q |
|---|---|---|---|---|---|---|---|
| Nonresponse about reliability∗ | 81 | 0.361 | 0.108 | 3.338 | 232 | 0.001 | 5.82 |
| Reliability | 153 | 0.409 | 0.132 | 3.093 | 232 | 0.003 | |

Table 8

Results of the moderator analysis for content of the intervention.

| Fixed effect | k | Coefficient (d) | Standard error | t ratio | df | p-value | Q |
|---|---|---|---|---|---|---|---|
| Other area∗ | 21 | 0.096 | 0.150 | 0.642 | 228 | 0.521 | 24.005 |
| Phonological awareness | 58 | 0.528 | 0.209 | 2.521 | 228 | 0.013 | |
| Reading fluency | 13 | 1.150 | 0.324 | 3.549 | 228 | 0.001 | |
| Vocabulary | 93 | 0.442 | 0.179 | 2.464 | 228 | 0.000 | |
| Reading comprehension | 32 | 0.971 | 0.209 | 4.651 | 228 | 0.000 | |
| Listening comprehension | 17 | 0.834 | 0.257 | 3.244 | 228 | 0.002 | |
  • 3) Intervention-related variables

The content of the intervention was divided into phonological awareness, reading fluency, vocabulary, reading comprehension, listening comprehension, and other areas. Studies that measured other areas functioned as the reference group. All reading areas showed significantly larger effect sizes than the other areas: reading fluency (1.150, p = 0.001), reading comprehension (0.971, p = 0.000), and listening comprehension (0.834, p = 0.002) had the largest coefficients, and phonological awareness (0.528, p = 0.013) and vocabulary (0.442, p = 0.000) were also significantly larger than the other areas, although their coefficients were smaller than those for reading fluency, reading comprehension, and listening comprehension. The Q statistic was significant for the content of the intervention ( Q = 24.005, p < 0.001).

For intervention types, strategy instruction, peer tutoring, and computer-based learning were compared to other methods, which were fixed as the reference group. Table 9 shows that strategy instruction had significantly larger mean effect sizes than other methods (0.523, p = 0.001). In contrast, studies that applied peer tutoring and computer-based learning showed smaller effect sizes than other methods, but these differences were not statistically significant (-0.113, p = 0.736; -0.114, p = 0.743). The Q statistic was significant for intervention types ( Q = 73.343, p < 0.001).

Table 9

Results of the moderator analysis for intervention types.

| Fixed effect | k | Coefficient (d) | Standard error | t ratio | df | p-value | Q |
|---|---|---|---|---|---|---|---|
| Other method∗ | 34 | 0.269 | 0.135 | 1.986 | 230 | 0.048 | 73.343 |
| Strategy instruction | 154 | 0.523 | 0.154 | 3.405 | 230 | 0.001 | |
| Peer tutoring | 18 | -0.113 | 0.337 | -0.337 | 230 | 0.736 | |
| Computer based learning | 28 | -0.114 | 0.348 | -0.328 | 230 | 0.743 | |
  • 4) Implementation-related variables

For the instructor-related variables, instruction delivered by other instructors was assigned as the reference group. Table 10 shows that the teacher and researcher groups produced significantly larger effect sizes than the other instructors (0.909, p = 0.000; 0.894, p = 0.002), with the teacher coefficient slightly larger than the researcher coefficient. The Q statistic was significant for the instructor-related variables ( Q = 14.024, p < 0.001).

Table 10

Results of the moderator analysis for instructor.

| Fixed effect | k | Coefficient (d) | Standard error | t ratio | df | p-value | Q |
|---|---|---|---|---|---|---|---|
| Other instructor∗ | 6 | -0.197 | 0.225 | -0.873 | 230 | 0.384 | 14.024 |
| Teacher | 182 | 0.909 | 0.237 | 3.837 | 230 | 0.000 | |
| Graduate students | 4 | 0.691 | 0.469 | 1.476 | 230 | 0.141 | |
| Researcher | 42 | 0.894 | 0.273 | 3.273 | 230 | 0.002 | |

For group size, mixed groups were fixed as the reference group. The group size variable was divided into a small group (1 or more and 5 or less), a middle group (6 or more and 15 or less), and a large group or class size (16 or more). Table 11 shows that the middle group (0.881, p = 0.000) and the small group (0.451, p = 0.006) showed significantly larger effect sizes than the mixed group. However, the difference between the large group and the mixed group was not significant (0.120, p = 0.434). The Q statistic was significant for the group size variable ( Q = 17.756, p < 0.001).

Table 11

Results of the moderator analysis for group size.

| Fixed effect | k | Coefficient (d) | Standard error | t ratio | df | p-value | Q |
|---|---|---|---|---|---|---|---|
| Mixed group∗ | 62 | 0.391 | 0.111 | 3.528 | 230 | 0.001 | 17.756 |
| Small group | 61 | 0.451 | 0.160 | 2.824 | 230 | 0.006 | |
| Middle group | 18 | 0.881 | 0.231 | 3.808 | 230 | 0.000 | |
| Large group | 93 | 0.120 | 0.153 | 0.783 | 230 | 0.434 | |

4. Discussion

The purpose of this meta-analysis was to explore the effects of reading interventions for ELLs and to identify research-based characteristics of effective reading interventions for enhancing their reading ability. To achieve this goal, this study sought to answer two research questions: (1) What is the estimated mean effect size of reading interventions for ELLs in K-12? (2) To what extent do student-, intervention-, implementation-, and measurement-related variables have effects on improving the reading ability of ELLs in K-12? Accordingly, our study was limited to recent K-12 intervention studies published between January 2008 and March 2018 that included phonological awareness, fluency, vocabulary, reading comprehension, and listening comprehension as intervention components and outcome measures. A total of 28 studies were identified and analyzed. To investigate the two main research questions, a two-level meta-analysis was employed. For the first research question, the unconditional model of HLM was conducted to investigate the mean effect size of reading interventions for ELLs. For the second, the conditional model of HLM was conducted to determine which variables have significant effects on reading interventions for ELLs. Below, we briefly summarize the results of this study and describe the significant factors that seem to influence intervention effectiveness. These findings could provide a better understanding of ELLs and support implications for the development of reading interventions for ELLs.

4.1. Effectiveness of reading interventions for ELLs

The first primary finding from this meta-analysis is that ELLs can improve their reading ability when provided with appropriate reading interventions. Our findings indicated that reading interventions for ELLs yielded an overall mean effect size of 0.653, which indicates a medium level of effect. From this result, we can conclude that appropriate reading interventions generally have an impact on reading outcomes for ELLs in K-12. This is consistent with prior syntheses reporting positive effects of reading interventions for ELLs ( Vaughn et al., 2006 ; Abraham, 2008 ).

Effect size information is important for understanding the real effects of an intervention. This finding therefore indicates that supplementary reading interventions for ELLs should be developed and implemented, and that states are required to develop a set of high-quality reading interventions for ELLs. Language interventions for ELLs have become one of the most important issues in the U.S., as increasing numbers of children in U.S. schools come from homes in which English is not the primary language spoken. NCES (2016) showed that 4.9 million students, or 9.6% of public school students, were identified as ELLs, which was higher than the 3.8 million students, or 8.1%, identified in 2000 ( NCES, 2016 ). While many students of immigrant families succeed in their academic areas, too many do not. Some ELLs lag far behind native English speakers in school because of the strong effect of language factors on instruction and assessment. Although English is not their native language, ELLs must learn educational content in English, which leads to substantial inequity in public schools. Thus, improving the English language and literacy skills of ELLs is a major concern for educational policymakers. This finding can support practitioners’ efforts and investments in developing appropriate language interventions for ELLs.

4.2. The effects of moderating variables

The second primary finding of this meta-analysis relates to four variable categories: student-, intervention-, implementation-, and measurement-related variables. Effective instruction cannot be designed by considering only one factor; the quality of instruction is the product of many factors, including class size, the type of instruction, and other resources. This finding showed which factors affected the effectiveness of reading interventions. Specifically, the variables that proved to have significant effects on the reading outcomes of ELLs were as follows: upper elementary students, reliable measurement tools, reading and listening comprehension-related interventions, strategy instruction, and middle-sized groups of 6 to 15 students. Teachers and practitioners in the field may choose to adopt these findings into their practices; for example, ELL teachers may design their instruction as strategy-embedded instruction in middle-sized groups.

We found that grade accounted for significant variability in an intervention's effectiveness. Specifically, reading interventions were substantially more effective when used with upper elementary students than with secondary students. This means that the magnitude of an intervention's effectiveness changed depending on when ELLs received reading interventions. In particular, the larger effect sizes for upper elementary students than for secondary students show the importance of early interventions for improving ELLs' language abilities. Students who experience early reading difficulty often continue to experience failure in later grades. ELLs, or students whose primary language is other than English and who are learning English as a second language, often experience particular challenges in developing reading skills in the early grades. According to Kieffer (2010) , substantial proportions of ELLs and native English speakers showed reading difficulties that emerged in the upper elementary and middle school grades even though they succeeded in learning to read in the primary grades.

Regarding students’ English proficiency and academic achievement, there was no statistically significant difference between students with low achievement and general students. Given the heterogeneity of the English language learner population, interventions that may be effective for one group of English language learners may not be effective for others ( August and Shanahan, 2006 ). This result is similar to that of Lovett et al. (2008) , who showed that there were no differences between ELLs and their peers who spoke English as a first language in reading intervention outcomes or intervention growth. This finding suggests that systematic and explicit reading interventions are effective for readers regardless of their primary language.

For students' socioeconomic status (SES), there was no significant difference between the low-middle group and the nonresponse group. Thus, we could not confirm that students' SES is critical for implementing reading interventions. Low SES is known to increase the risk of reading difficulties because of limited access to the variety of resources that support reading development and academic achievement ( Kieffer, 2010 ). Many ELLs attend schools with high percentages of students living in poverty ( Vaughn et al., 2009 ). These schools are less likely to have adequate funds and resources and to provide appropriate support for academic achievement ( Donovan and Cross, 2002 ). Snow, Burns and Griffin (1998) highlighted multiple and complex factors that contribute to poor reading outcomes in school, including a lack of qualified teachers and students who come from poverty. Because this study could not determine the relationship between the effectiveness of reading interventions and students' SES, more studies are needed. In addition, these results related to students’ characteristics show that practitioners and teachers can consider for whom to implement a given intervention. Researchers should provide greater specification of their student samples because this information is particularly critical for English language learners.

Although many of the studies measured a variety of outcomes across all areas of reading, interventions that focused on improving reading comprehension and listening comprehension obtained better effects than other reading outcomes. This result is similar to those discussed in previous findings ( Wanzek and Roberts, 2012 ; Carrier, 2003 ).

With regard to effective intervention types, the findings indicated that strategy instruction was statistically significant for improving the reading skills of ELLs. However, computer-based interventions, which have frequently been used for reading instruction for ELLs in recent years, showed lower effect sizes than mixed interventions. Strategy instruction is known as one of the effective reading interventions for ELLs ( Proctor et al., 2007 ; Begeny et al., 2012 ; Olson and Land, 2007 ; Vaughn et al., 2006 ). These strategies included activating background knowledge, clarifying vocabulary meaning, and using visuals and gestures to support understanding after reading. Some studies have shown that computer-based interventions are effective for ELLs ( White and Gillard, 2011 ; Macaruso and Rodman, 2011 ), but this study did not find such an effect. Thus, there is little agreement in the research literature on how to effectively teach reading to ELLs ( Gersten and Baker, 2000 ), and continued research efforts must specify how best to provide intervention for ELLs.

With respect to the implementation of the intervention, teachers and researchers as instructors produced stronger effects than other instructors. In this study, multiple studies showed that various instructors taught ELLs, including teachers, graduate students, and researchers. This suggests that the professional development of instructors is more important than who actually delivered the instruction. This finding is consistent with Richards-Tutor et al. (2016) , who also did not find differences between researcher-delivered interventions and school personnel-delivered interventions. Continuing professional development should build on the preservice education of teachers, strengthen teaching skills, increase teacher knowledge of the reading process, and facilitate the integration of newer research on reading into the teaching practices of classroom teachers ( Snow et al., 1998 ). Overall, professional development is a key factor in strengthening the reading skills of ELLs.

This study showed that medium-sized groups of 6 or more and 15 or less had larger effect sizes than the mixed groups. In addition, the medium-sized group showed a larger effect size than the small group of 5 or less. This finding suggests that a multi-tiered reading system is needed in the general classroom and is linked to the fact that the response to intervention (RTI) approach is more effective for ELLs. Linan-Thompson et al. (2007) pointed out that RTI offers a promising alternative for reducing the disproportionate representation of culturally and linguistically diverse students in special education by identifying students at risk early and providing preventive instruction to accelerate progress. Regarding interventions for ELLs who are struggling with or at risk for reading difficulties, Ross and Begeny (2011) compared the effectiveness of small group interventions with implementing the intervention in a one-on-one (1:1) context for ELLs. They showed that nearly all students benefitted from the 1:1 intervention, and some students benefitted from the small group intervention. This finding is commensurate with previous studies investigating comparative differences between group sizes and provides research-based support for the introduction of the RTI approach.

However, most implementation-related variables, including the duration of the intervention, the total number of sessions, the frequency per week, the length of each session, the setting, and the instructor, had no significant effect on the reading ability of ELLs. That is, ELLs improved their reading regardless of how long the intervention lasted, where they received it, and who taught them. This finding is similar to Snyder et al. (2017), who also synthesized interventions for ELLs and found that the length of intervention did not seem to be directly associated with overall effect sizes for reading outcomes. It is also consistent with recent research on intervention duration with native English speakers ( Wanzek et al., 2013 ): in their meta-analysis, Wanzek and colleagues examined the relationship between student outcomes and hours of intervention and found no significant differences in student outcomes based on the number of intervention hours. Elbaum et al. (2000) argued that the intensity of an intervention matters most for effectiveness. Our results partly support this view, but we cannot be certain that a brief intervention would have the same overall effect on reading outcomes as a year-long one. Thus, intervention intensity, such as student attendance at the sessions, should be considered alongside intervention duration.
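A null moderator result of this kind is typically checked with a meta-regression of effect size on the moderator. The minimal fixed-effect sketch below regresses effect size on intervention hours; a z value inside ±1.96 mirrors the "no duration effect" finding. The study-level numbers here are hypothetical:

```python
import math

# Hypothetical study-level data: intervention hours, Hedges' g, sampling variance
hours = [10.0, 20.0, 30.0, 40.0]
g     = [0.50, 0.45, 0.55, 0.50]
var   = [0.04, 0.04, 0.04, 0.04]

w = [1.0 / v for v in var]  # inverse-variance weights
x_bar = sum(wi * xi for wi, xi in zip(w, hours)) / sum(w)
y_bar = sum(wi * yi for wi, yi in zip(w, g)) / sum(w)

# Fixed-effect meta-regression slope (effect size on hours) and its standard error
s_xy = sum(wi * (xi - x_bar) * (yi - y_bar) for wi, xi, yi in zip(w, hours, g))
s_xx = sum(wi * (xi - x_bar) ** 2 for wi, xi in zip(w, hours))
slope = s_xy / s_xx
se = math.sqrt(1.0 / s_xx)
z = slope / se

print(f"slope per hour = {slope:.4f}, z = {z:.2f}")  # |z| < 1.96: no significant duration effect
```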

4.3. Implications for practice and for research

The most effective and efficient education is education that is designed in the right way, includes appropriate content, and is delivered at the right time so that students benefit the most. Achieving this requires research that identifies a guiding framework by synthesizing research results through meta-analysis, as this study does, together with careful consideration of the resulting implications. In this respect, this study offers the following implications for practitioners, researchers, and policymakers seeking to enhance the reading competence of ELLs.

First, reading interventions for ELLs are expected to be most efficient when conducted with medium-sized groups of 6–15 students. This indicates that implementing reading interventions for ELLs requires a specially designed group configuration rather than simply a class-wide or one-to-one arrangement. Second, reading interventions for ELLs are most effective when conducted with older elementary school students. This contrasts with Morgan and Sideridis (2006) , whose multilevel meta-analysis of students with learning disabilities showed that age group was irrelevant to the effect size of reading interventions for that population. The difference suggests that ELLs, unlike students with learning disabilities, whose reading difficulties stem from their disabilities, are developing typically but struggle with reading because of linguistic differences. Accordingly, the senior years of elementary school, by which time a student has been exposed to the academic environment for a sufficiently long time and language is sufficiently developed, appear to be an appropriate time for ELLs to learn English. Third, effective reading interventions for ELLs should use a strategy-embedded instruction program. This recommendation rests on evidence that strategy instruction is effective for vocabulary and concepts in unfamiliar languages ( Carlo et al., 2005 ; Chaaya and Ghosn, 2010 ).

The above implications call for implementing Tier 2 reading interventions for ELLs in practice. In Tier 2 interventions, students participate in more intensive learning through specially designed interventions based on their individual needs ( Ortiz et al., 2011 ). In policymaking and administrative decision-making, this means that intensive education programs are needed for ELLs who have been exposed to the academic environment for a certain period but still have reading difficulties, including achievement that falls short of the expected level.

Considering further applications, these findings can guide practitioners and policymakers in developing effective evidence-based reading programs and policies. The significant moderators identified in this study can inform the development of new programs for ELLs.

Declarations

Author contribution statement

All authors listed have significantly contributed to the development and the writing of this article.

Funding statement

This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2020S1A3A2A02103411).

Data availability statement

Declaration of interests statement

The authors declare no conflict of interest.

Additional information

No additional information is available for this paper.

Appendix A. Supplementary data

The following is the supplementary data related to this article:

References

  • Abraham L.B. Computer-mediated glosses in second language reading comprehension and vocabulary learning: a meta-analysis. Comput. Assist. Lang. Learn. 2008; 21 (3):199–226.
  • August D., Shanahan T. In: Developing literacy in second-language learners: Report of the National Literacy Panel on Language Minority Children and Youth. August D., Shanahan T., editors. Lawrence Erlbaum Associates; Mahwah, NJ: 2006. Synthesis: instruction and professional development; pp. 351–364.
  • Begeny J.C., Ross S.G., Greene D.J., Mitchell R.C., Whitehouse M.H. Effects of the Helping Early Literacy with Practice Strategies (HELPS) reading fluency program with Latino English language learners: a preliminary evaluation. J. Behav. Educ. 2012; 21 (2):134–149.
  • Beretvas S.N., Pastor D.A. Using mixed-effects models in reliability generalization studies. Educ. Psychol. Meas. 2003; 63 (1):75–95.
  • Bowman N.A. Effect sizes and statistical methods for meta-analysis in higher education. Res. High. Educ. 2012; 53 :375–382.
  • Boyer K. The Relationship Between Vocabulary and Reading Comprehension in Third Grade Students Who Are English Language Learners and Reading Below Grade Level. 2017. (Master's Thesis)
  • Carlo M.S., August D., Snow C.E. Sustained vocabulary-learning strategy instruction for English-language learners. Teach. Learn. Vocabul.: Bring. Res. Pract. 2005:137–153.
  • Carrier K.A. Improving high school English language learners' second language listening through strategy instruction. Biling. Res. J. 2003; 27 (3):383–408.
  • Chaaya D., Ghosn I.K. Supporting young second language learners reading through guided reading and strategy instruction in a second grade classroom in Lebanon. Educ. Res. Rev. 2010; 5 (6):329–337.
  • Chen X., Ramirez G., Luo Y.C., Geva E., Ku Y.M. Comparing vocabulary development in Spanish- and Chinese-speaking ELLs: the effects of metalinguistic and sociocultural factors. Read. Writ. 2012; 25 (8):1991–2020.
  • Cole M.W. Speaking to read: meta-analysis of peer-mediated learning for English language learners. J. Lit. Res. 2014; 46 (3):358–382.
  • Cooper H., Hedges L.V., Valentine J.C., editors. The Handbook of Research Synthesis and Meta-Analysis. Russell Sage Foundation; 2019.
  • Daniel S.M., Pray L. Learning to teach English language learners: a study of elementary school teachers’ sense-making in an ELL endorsement program. Tesol Q. 2017; 51 (4):787–819.
  • Daugherty J.L. 2015. Assessment of ELL Written Language Progress in Designated ESL Noncredit Courses at the Community College Level.
  • Dearing E., Walsh M.E., Sibley E., Lee-St. John T., Foley C., Raczek A.E. Can community and school-based supports improve the achievement of first-generation immigrant children attending high-poverty schools? Child Dev. 2016; 87 (3):883–897.
  • Donovan M.S., Cross C.T. National Academies Press; Washington, DC: 2002. Minority Students in Special and Gifted Education.
  • Dussling T.M. Examining the effectiveness of a supplemental reading intervention on the early literacy skills of English language learners. Liter. Res. Instr. 2018; 57 (3):276–284.
  • Duval S., Tweedie R. Trim and fill: a simple funnel-plot–based method of testing and adjusting for publication bias in meta-analysis. Biometrics. 2000; 56 (2):455–463.
  • Egger M., Smith G.D., Schneider M., Minder C. Bias in meta-analysis detected by a simple, graphical test. BMJ. 1997; 315 (7109):629–634.
  • Elbaum B., Vaughn S., Tejero Hughes M., Watson Moody S. How effective are one-to-one tutoring programs in reading for elementary students at risk for reading failure? A meta-analysis of the intervention research. J. Educ. Psychol. 2000; 92 (4):605–619.
  • Estrella G., Au J., Jaeggi S.M., Collins P. Is inquiry science instruction effective for English language learners? A meta-analytic review. AERA Open. 2018; 4 (2):2332858418767402.
  • Frey B.B. Sage Publications; 2018. The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation.
  • Gersten R., Baker S. What we know about effective instructional practices for English-language learners. Except. Child. 2000; 66 (4):454–470.
  • Guo Y., Sun S., Breit-Smith A., Morrison F.J., Connor C.M. Behavioral engagement and reading achievement in elementary-school-age children: a longitudinal cross-lagged analysis. J. Educ. Psychol. 2015; 107 (2):332–347.
  • Goodwin A.P., Ahn S. A meta-analysis of morphological interventions in English: effects on literacy outcomes for school-age children. Sci. Stud. Read. 2013; 17 (4):257–285.
  • Harrer M., Cuijpers P., Furukawa T.A., Ebert D.D. Doing Meta-Analysis with R: A Hands-On Guide. 2019. https://bookdown.org/MathiasHarrer/Doing_Meta_Analysis_in_R/
  • Hong H. Exploring the role of intertextuality in promoting young ELL children’s poetry writing and learning: a discourse analytic approach. Classr. Discourse. 2018; 9 (2):166–182.
  • Honigsfeld A. ELL programs: not ‘one size fits all’. Kappa Delta Pi Rec. 2009; 45 (4):166–171.
  • Hox J.J., de Leeuw E.D. In: Multilevel Modeling: Methodological Advances, Issues, and Applications. Reise S.P., Duan N., editors. Lawrence Erlbaum; Mahwah, NJ: 2003. Multilevel models for meta-analysis; pp. 90–111.
  • Johnson E.J., Johnson A.B. Enhancing academic investment through home-school connections and building on ELL students' scholastic funds of knowledge. J. Lang. Literacy Educ. 2016; 12 (1):104–121.
  • Kieffer M.J. Socioeconomic status, English proficiency, and late-emerging reading difficulties. Educ. Res. 2010; 39 (6):484–486.
  • Kim K. Global Conference on Education and Research (GLOCER 2017); 2017, May. Influence of ELL instructor’s culturally responsive attitude on newly arrived adolescent ELL students’ academic achievement; pp. 239–242.
  • Kirnan J., Ventresco N.E., Gardner T. The impact of a therapy dog program on children’s reading: follow-up and extension to ELL students. Early Child. Educ. J. 2018; 46 (1):103–116.
  • Lesnick J., Goerge R., Smithgall C., Gwynne J. Chapin Hall at the University of Chicago; Chicago, IL: 2010. Reading on Grade Level in Third Grade: How Is it Related to High School Performance and College Enrollment.
  • Lin S.M. A study of ELL students' writing difficulties: a call for culturally, linguistically, and psychologically responsive teaching. Coll. Student J. 2015; 49 (2):237–250.
  • Linan-Thompson S., Cirino P.T., Vaughn S. Determining English language learners' response to intervention: questions and some answers. Learn. Disabil. Q. 2007; 30 (3):185–195.
  • Liu S., Wang J. Reading cooperatively or independently? Study on ELL student reading development. Read. Matrix: Int. Online J. 2015; 15 (1):102–120.
  • Lovett M.W., De Palma M., Frijters J., Steinbach K., Temple M., Benson N., Lacerenza L. Interventions for reading difficulties: a comparison of response to intervention by ELL and EFL struggling readers. J. Learn. Disabil. 2008; 41 (4):333–352.
  • Ludwig C. University of Alberta; Canada: 2017. The Effects of Reading Interventions on the Word-Reading Performance of English Language Learners: A Meta-Analysis (Unpublished Master’s Thesis).
  • Ludwig C., Guo K., Georgiou G.K. Are reading interventions for English language learners effective? A meta-analysis. J. Learn. Disabil. 2019; 52 (3):220–231.
  • Macaruso P., Rodman A. Benefits of computer-assisted instruction to support reading acquisition in English language learners. Biling. Res. J. 2011; 34 (3):301–315.
  • Mendoza S. Reading strategies to support home-to-school connections used by teachers of English language learners. J. English Lang. Teach. 2016; 6 (4):33–38.
  • Meredith D.C. Trevecca Nazarene University; 2017. Relationships Among Utilization of an Online Differentiated Reading Program, ELL Student Literacy Outcomes, and Teacher Attitudes.
  • Morgan P.L., Sideridis G.D. Contrasting the effectiveness of fluency interventions for students with or at risk for learning disabilities: a multilevel random coefficient modeling meta-analysis. Learn. Disabil. Res. Pract. 2006; 21 (4):191–210.
  • National Center for Education Statistics (NCES) 2021. English Language Learners in Public Schools. Retrieved from https://nces.ed.gov/programs/coe/indicator/cgf
  • Olson C.B., Land R. A cognitive strategies approach to reading and writing instruction for English language learners in secondary school. Res. Teach. Engl. 2007:269–303.
  • Orosco M.J. A sociocultural examination of response to intervention with Latino English language learners. Theory Into Pract. 2010; 49 (4):265–272.
  • Ortiz A.A., Robertson P.M., Wilkinson C.Y., Liu Y.J., McGhee B.D., Kushner M.I. The role of bilingual education teachers in preventing inappropriate referrals of ELLs to special education: implications for response to intervention. Biling. Res. J. 2011; 34 (3):316–333.
  • Pang Y. Empowering ELL students to improve writing skills through inquiry-based learning. N. Engl. Read. Assoc. J. 2016; 51 (2):75–79.
  • Polat N., Zarecky-Hodge A., Schreiber J.B. Academic growth trajectories of ELLs in NAEP data: the case of fourth- and eighth-grade ELLs and non-ELLs on mathematics and reading tests. J. Educ. Res. 2016; 109 (5):541–553.
  • Proctor C.P., Dalton B., Grisham D.L. Scaffolding English language learners and struggling readers in a universal literacy environment with embedded strategy instruction and vocabulary support. J. Literacy Res. 2007; 39 (1):71–93.
  • Rawian R.M., Mokhtar A.A. ESL reading accuracy: the inside story. Soc. Sci. 2017; 12 (7):1242–1249.
  • Reid T., Heck R.H. Exploring reading achievement differences between elementary school students receiving and not receiving English language services. AERA Online Paper Repository. 2017.
  • Richards-Tutor C., Baker D.L., Gersten R., Baker S.K., Smith J.M. The effectiveness of reading interventions for English learners: a research synthesis. Except. Child. 2016; 82 (2):144–169.
  • Ross S.G., Begeny J.C. Improving Latino, English language learners' reading fluency: the effects of small-group and one-on-one intervention. Psychol. Sch. 2011; 48 (6):604–618.
  • Roth A. Northwest Missouri State University; United States: 2015. Does Buddy Reading Improve Students' Oral Reading Fluency? Unpublished Doctoral dissertation.
  • Rubin D.I. Growth in oral reading fluency of Spanish ELL students with learning disabilities. Interv. Sch. Clin. 2016; 52 (1):34–38.
  • Shamir H., Feehan K., Yoder E. EdMedia + Innovate Learning. Association for the Advancement of Computing in Education (AACE); 2016, June. Using technology to improve reading and math scores for the digital native; pp. 1425–1434.
  • Shamir H., Feehan K., Yoder E., Pocklington D. ECONOMICS, MANAGEMENT AND MARKETING (MAC-EMM 2018); 2018. Can CAI Improve Reading Achievement in Young ELL Students? p. 229.
  • Sheng Z., Sheng Y., Anderson C.J. Dropping out of school among ELL students: implications to schools and teacher education. Clear. House. 2011; 84 (3):98–103.
  • Snow C.E., Burns S.M., Griffin P. Predictors of success and failure in reading. Prevent. Read. Difficul. Young Child. 1998:100–134.
  • Snyder E., Witmer S.E., Schmitt H. English language learners and reading instruction: a review of the literature. Prev. Sch. Fail.: Alter. Educ. Child. Youth. 2017; 61 (2):136–145.
  • Soland J., Sandilos L.E. English language learners, self-efficacy, and the achievement gap: understanding the relationship between academic and social-emotional growth. J. Educ. Stud. Placed at Risk. 2020:1–25.
  • Tam K.Y., Heng M.A. Using explicit fluency training to improve the reading comprehension of second grade Chinese immigrant students. Mod. J. Lang. Teach Methods. 2016; 6 (5):17.
  • Téllez K., Manthey G. Teachers’ perceptions of effective school-wide programs and strategies for English language learners. Learn. Environ. Res. 2015; 18 (1):111–127.
  • Thompson C.G., von Gillern S. Video-game based instruction for vocabulary acquisition with English language learners: a Bayesian meta-analysis. Educ. Res. Rev. 2020; 30 :100332.
  • Vaughn S., Kim A., Sloan C.V.M., Hughes M.T., Elbaum B., Sridhar D. Social skills interventions for young children with disabilities: a synthesis of group design studies. Remedial Spec. Educ. 2003; 24 :2–15.
  • Vaughn S., Martinez L.R., Linan-Thompson S., Reutebuch C.K., Carlson C.D., Francis D.J. Enhancing social studies vocabulary and comprehension for seventh-grade English language learners: findings from two experimental studies. J. Res. Educ. Effect. 2009; 2 (4):297–324.
  • Vaughn S., Mathes P., Linan-Thompson S., Cirino P., Carlson C., Pollard-Durodola S., Francis D. Effectiveness of an English intervention for first-grade English language learners at risk for reading problems. Elem. Sch. J. 2006; 107 (2):153–180.
  • Viechtbauer W. Conducting meta-analyses in R with the metafor package. J. Stat. Softw. 2010; 36 (3):1–48.
  • Walker S.A. Chicago State University; 2017. Exploration of ELL Parent Involvement Measured by Epstein's Overlapping Spheres of Influence. Doctoral dissertation.
  • Wanzek J., Roberts G. Reading interventions with varying instructional emphases for fourth graders with reading difficulties. Learn. Disabil. Q. 2012; 35 (2):90–101.
  • Wanzek J., Wexler J., Vaughn S., Ciullo S. Reading interventions for struggling readers in the upper elementary grades: a synthesis of 20 years of research. Read. Writ. 2010; 23 (8):889–912.
  • Wanzek J., Vaughn S., Scammacca N.K., Metz K., Murray C.S., Roberts G., Danielson L. Extensive reading interventions for students with reading difficulties after grade 3. Rev. Educ. Res. 2013; 83 (2):163–195.
  • Wassell B.A., Hawrylak M.F., Scantlebury K. Barriers, resources, frustrations, and empathy: teachers’ expectations for family involvement for Latino/a ELL students in urban STEM classrooms. Urban Educ. 2017; 52 (10):1233–1254.
  • White E.L., Gillard S. Technology-based literacy instruction for English language learners. J. Coll. Teach. Learn. 2011; 8 (6):1–5.
  • Yousefi M.H., Biria R. The effectiveness of L2 vocabulary instruction: a meta-analysis. Asian-Pacific J. Second Fore. Lang. Educ. 2018; 3 (1):1–19.

References for studies included in the meta-analysis

  • Amendum S.J., Bratsch-Hines M., Vernon-Feagans L. Investigating the efficacy of a web-based early reading and professional development intervention for young English learners. Read. Res. Q. 2018; 53 (2):155–174.
  • American Youth Policy Forum. Issue brief: Moving English language learners to college and career readiness. 2009. Retrieved November 22, 2009, from: www.aypf.org/documents/ELLIssueBrief
  • Callahan R.M., Shifrer D. Equitable access for secondary English learner students: course taking as evidence of EL program effectiveness. Educ. Admin. Q. 2016; 52 (3):463–496.
  • Cho H., Brutt-Griffler J. Integrated reading and writing: a case of Korean English language learners. Read. Foreign Lang. 2015; 27 (2):242.
  • Chow B.W.Y., McBride-Chang C., Cheung H. Parent–child reading in English as a second language: effects on language and literacy development of Chinese kindergarteners. J. Res. Read. 2010; 33 (3):284–301.
  • Cruz de Quiros A.M., Lara-Alecio R., Tong F., Irby B.J. The effect of a structured story reading intervention, story retelling and higher order thinking for English language and literacy acquisition. J. Res. Read. 2012; 35 (1):87–113.
  • Cook A.L. Exploring the role of school counselors. J. Sch. Couns. 2015; 13 (9):1–33.
  • Dabarera C., Renandya W.A., Zhang L.J. The impact of metacognitive scaffolding and monitoring on reading comprehension. System. 2014; 42 :462–473.
  • Dockrell J.E., Stuart M., King D. Supporting early oral language skills for English language learners in inner city preschool provision. Br. J. Educ. Psychol. 2010; 80 (4):497–515.
  • Filippini A.L., Gerber M.M., Leafstedt J.M. A vocabulary-added reading intervention for English learners at-risk of reading difficulties. Int. J. Spec. Educ. 2012; 27 (3):14–26.
  • Fry R. How far behind in math and reading are English language learners? Pew Hispanic Center; Washington, DC: 2007.
  • Jamaludin K.A., Alias N., Mohd Khir R.J., DeWitt D., Kenayathula H.B. The effectiveness of synthetic phonics in the development of early reading skills among struggling young ESL readers. Sch. Effect. Sch. Improv. 2016; 27 (3):455–470.
  • Kanno Y., Cromley J. English language learners’ pathways to four-year colleges. Teachers College Record. 2015; 117 (12):1–44.
  • Mancilla-Martinez J. Word meanings matter: cultivating English vocabulary knowledge in fifth-grade Spanish-speaking language minority learners. Tesol Q. 2010; 44 (4):669–699.
  • McMaster K.L., Kung S.H., Han I., Cao M. Peer-assisted learning strategies: a “Tier 1” approach to promoting English learners' response to intervention. Except. Child. 2008; 74 (2):194–214.
  • Olivier A.M., Anthonissen C., Southwood F. Literacy development of English language learners: the outcomes of an intervention programme in grade R. S. Afr. J. Commun. Disord. 2010; 57 (1):58–65.
  • Pollard-Durodola S.D., Gonzalez J.E., Saenz L., Resendez N., Kwok O., Zhu L., Davis H. The effects of content-enriched shared book reading versus vocabulary-only discussions on the vocabulary outcomes of preschool dual language learners. Early Educ. Dev. 2018; 29 (2):245–265.
  • Proctor C.P., Dalton B., Uccelli P., Biancarosa G., Mo E., Snow C., Neugebauer S. Improving comprehension online: effects of deep vocabulary instruction with bilingual and monolingual fifth graders. Read. Writ. 2011; 24 (5):517–544.
  • Rodríguez C.D., Filler J., Higgins K. Using primary language support via computer to improve reading comprehension skills of first-grade English language learners. Comput. Sch. 2012; 29 (3):253–267.
  • Silverman R., Hines S. The effects of multimedia-enhanced instruction on the vocabulary of English-language learners and non-English-language learners in pre-kindergarten through second grade. J. Educ. Psychol. 2009; 101 (2):305–314.
  • Slavin R.E., Madden N., Calderón M., Chamberlain A., Hennessy M. Reading and language outcomes of a multiyear randomized evaluation of transitional bilingual education. Educ. Eval. Pol. Anal. 2011; 33 (1):47–58.
  • Solari E.J., Gerber M.M. Early comprehension instruction for Spanish-speaking English language learners: teaching text-level reading skills while maintaining effects on word-level skills. Learn. Disabil. Res. Pract. 2008; 23 (4):155–168.
  • Spycher P. Learning academic language through science in two linguistically diverse kindergarten classes. Elem. Sch. J. 2009; 109 (4):359–379.
  • Tong F., Irby B.J., Lara-Alecio R., Koch J. Integrating literacy and science for English language learners: from learning-to-read to reading-to-learn. J. Educ. Res. 2014; 107 (5):410–426.
  • Tong F., Irby B.J., Lara-Alecio R., Mathes P.G. English and Spanish acquisition by Hispanic second graders in developmental bilingual programs: a 3-year longitudinal randomized study. Hisp. J. Behav. Sci. 2008; 30 (4):500–529.
  • Tong F., Irby B.J., Lara-Alecio R., Yoon M., Mathes P.G. Hispanic English learners' responses to longitudinal English instructional intervention and the effect of gender: a multilevel analysis. Elem. Sch. J. 2010; 110 (4):542–566.
  • Tong F., Lara-Alecio R., Irby B.J., Mathes P.G. The effects of an instructional intervention on dual language development among first-grade Hispanic English-learning boys and girls: a two-year longitudinal study. J. Educ. Res. 2011; 104 (2):87–99.
  • Townsend D., Collins P. Academic vocabulary and middle school English learners: an intervention study. Read. Writ. 2009; 22 (9):993–1019.
  • Vadasy P.F., Sanders E.A. Efficacy of supplemental phonics-based instruction for low-skilled kindergarteners in the context of language minority status and classroom phonics instruction. J. Educ. Psychol. 2010; 102 (4):786–803.
  • Vadasy P.F., Sanders E.A. Efficacy of supplemental phonics-based instruction for low-skilled first graders: how language minority status and pretest characteristics moderate treatment response. Sci. Stud. Read. 2011; 15 (6):471–497.
  • Van Staden A. Reading in a second language: considering the 'simple view of reading' as a foundation to support ESL readers in Lesotho, Southern Africa. Per Linguam: J. Lang. Learn. 2016; 32 (1):21–40.
  • Yeung S.S., Siegel L.S., Chan C.K. Effects of a phonological awareness program on English reading and spelling among Hong Kong Chinese ESL children. Read. Writ. 2013; 26 (5):681–704.



English as a Second Language (ESL) / English for Non-native Speakers


Research Mindset



Develop a research mindset. Understand research as a process of asking questions and exploring. 

The quality of your research depends largely on the questions you ask. Practice asking a lot of them. Adopt the mindset of an explorer or investigator. What qualities and characteristics do successful explorers and investigators have?  Develop a plan; where will you start?  As you begin to explore, you will discover that research can be messy. Expect and welcome twists and turns, keep an open mind, and keep asking questions throughout the process. Use many different kinds of search tools and resources, and conduct many different kinds of searches. 

Think like a researcher: keep an open mind, be curious, be persistent and patient, maintain high standards, be flexible, and explore.

Research takes time and patience; it can also be fun and has value.

Developing your research skills will enable you to identify a problem, collect informational resources that can help address the problem, evaluate these resources for quality and relevance, and come up with an effective solution. Research skills develop critical thinking and equip you to write better research papers and craft better speeches. You will also improve the problem-solving skills required to tackle issues in your personal life and in the workplace.

Steps in Research

Follow these steps.

Keep an open mind. You may need to refine your topic, ask new questions, and repeat steps as you go along.

Identify and define your topic. Put your research topic into a question such as, "What is the debate surrounding vaccination refusal?" Now you can identify the main concepts and keywords, including alternate terms, for your topic.

Background reading will deepen your understanding and vocabulary around the topic, which will help you identify search terms and develop an effective research question. Subject encyclopedias (in print or in Credo Reference) are excellent resources.

Use ArcherSearch or the library catalog to find books.

Use ArcherSearch or individual databases to find articles from magazines, journals, and newspapers. Choose appropriate databases for your topic.

Search for credible website resources. Try the librarian-recommended websites on this guide.

Always evaluate what you find. Consider timeliness, relevance, authority, accuracy, and purpose.

Cite your sources. Citing gives proper credit to the authors of materials you use and allows your professors to verify your conclusions.

research shown as a squiggly path, not a straight line

Evaluate Information

Evaluating the information you find, whether in print or digital format, is an essential aspect of doing research.

Learn to think critically about the source of information and the information within each source by using the Evaluate Your Sources guide. 

  • Evaluate Your Sources STLCC Libraries

Limits or Filters

Use the filters or limits to see just a subset of your search results. Depending on the tool you are using, search limits may show up in the left margin, at the top of the results, or below the search box.

Date limits are especially useful to filter out older, outdated material. You can usually choose a preset limit such as "current 5 years," or set a custom range of publication dates. 

Scholarly/Peer-Reviewed Journals

This will limit results to journals whose articles have undergone a rigorous peer-review process. These are usually articles that report on a specific study, analysis, experiment, or other piece of research. Some scholarly/peer-reviewed articles are systematic reviews, which survey a wide range of published peer-reviewed articles to give an overview of the current state of knowledge on the topic.

The Subject limit will help you narrow your results by subject terms. These are like tags or labels; they indicate that the book, article, or other source focuses on the subject of interest. Without this limit, you may find items that include your search words but are not about your topic. Keep in mind that different databases may use different subject terms.  

The Full-text limit is already applied for most searches. It is very useful for filtering out articles where you only have access to a citation or a description, not the full article. Unless you are required to find everything published on a given subject, this limit should be applied every time you search. If you do find resources that are not full text but would be useful to you, STLCC Libraries may be able to provide them. See the Borrowing from Other Libraries page for details and the form for requests.

  • Borrowing from Other Libraries
  • Last Updated: Jun 18, 2024 10:18 AM
  • URL: https://guides.stlcc.edu/esl

  • Open access
  • Published: 13 February 2024

Impact of ChatGPT on ESL students’ academic writing skills: a mixed methods intervention study

  • Santosh Mahapatra   ORCID: orcid.org/0000-0002-0077-2882 1  

Smart Learning Environments volume  11 , Article number:  9 ( 2024 ) Cite this article

33k Accesses

6 Citations

8 Altmetric


This paper presents a study on the impact of ChatGPT as a formative feedback tool on the writing skills of undergraduate ESL students. Since artificial intelligence-driven automated writing evaluation tools positively impact students’ writing, ChatGPT, a generative artificial intelligence-propelled tool, can be expected to have a more substantial positive impact. However, very little empirical evidence regarding the impact of ChatGPT on writing is available. The current mixed methods intervention study tried to address this gap. Data were collected from tertiary level ESL students through three tests and as many focus group discussions. The findings indicate a significant positive impact of ChatGPT on students' academic writing skills, and students’ perceptions of the impact were also overwhelmingly positive. The study strengthens and advances theories of feedback as a dialogic tool and ChatGPT as a reliable writing tool, and has practical implications. With proper student training, ChatGPT can be a good feedback tool in large-size writing classes. Future researchers can investigate the impact of ChatGPT on various specific genres and micro aspects of writing.

Introduction

Formative feedback positively impacts students’ writing (Anderson & Ayaawan, 2023; Butterfuss et al., 2022; Huisman et al., 2019). Usually associated with formative assessment strategies such as self- and peer assessment, formative feedback can be directed to address students’ needs for explanation (Bozorgian & Yazdani, 2021; Zhang et al., 2023) concerning various aspects of writing such as content, organization, grammar, vocabulary, and style. Since formative feedback is a continuous process, it offers real-time support to students as they write (Zhu et al., 2020). However, it is time-consuming and challenging to implement in a large classroom (Golzar et al., 2022). Such crowded classrooms are part of everyday reality in many educational institutions in most developing and underdeveloped countries across the Global South. Addressing students’ individual feedback needs in such classrooms is a significant challenge. Computer-mediated feedback has evolved as a potential solution to this problem in writing classrooms (Taskiran & Goksel, 2022; Yamashita, 2021), especially in institutions of higher education in countries where students have access to portable computing devices and the Internet (Mahapatra, 2021). Globally, artificial intelligence (AI)-driven automated feedback is fast becoming a norm (Rad et al., 2023). Based on a large language model, ChatGPT, a relatively new addition to the repertoire of AI tools, can provide meaningful writing samples (Barrot, 2023), adjust the difficulty level of texts to match learners’ proficiency level (Bonner et al., 2023), offer advice regarding various structural aspects of a text and translate it (Imran & Almusharraf, 2023), and facilitate guided writing (Kohnke et al., 2023). These affordances can aid self-reliance and fulfill students’ instant feedback needs.
It can provide human-like expert assistance to students in idea generation, organization, maintaining accuracy and choosing appropriate vocabulary (Tai et al., 2023 ). However, currently, little empirical evidence is available on the impact of ChatGPT on students' writing skills (Su et al., 2023 ). Though there are discussions on its utility as a feedback tool (Bonner et al., 2023 ), few empirical studies make such claims. Thus, the present study looked into the impact of ChatGPT as a feedback tool on the academic writing skills of undergraduate English as a second language (ESL) students in a relatively crowded classroom at a university. To achieve the aim, the study employed a mixed methods intervention design with ChatGPT (as a feedback tool) as the independent variable and writing skills as the dependent variable. It was hypothesized that the employment of ChatGPT as a feedback tool would significantly impact students’ writing skills. Considering that the study was set in relatively crowded classes, the findings can be generalized to similar ESL/English as a foreign language (EFL) settings. Additionally, the attempt to use ChatGPT as a feedback tool can be of significant pedagogic value and lead to further explorations globally.

Literature review

The utility and positive impact of formative feedback in the writing classroom are well-established (Olsen & Huns, 2023 ). When operationalized in the form of self-and peer assessment (SA and PA), formative feedback leads to reflection, self-regulation, self-monitoring, and revision on the parts of students (Lam, 2018 ). SA and PA can be used to augment learning in large-size writing classrooms often encountered in developing and under-developed countries in the Global South (Fathi & Khodabakhsh, 2019 ; Mathur & Mahapatra, 2022 ). However, research on feedback in large writing classes is limited (Rodrigues et al., 2022 ). It has been reported that smarter techniques must replace traditional ways to offer personalized dialogic feedback to students (Kohnke et al., 2023 ). With the proliferation of AI-driven tools such as Grammarly, QuillBot, Copy.ai, WordTune, ChatGPT, and others, it has become easier for students to obtain feedback on their writing (Marzuki et al., 2023 ; Zhao, 2022 ). They have advanced automated writing evaluation (AWE) and feedback in writing (Gayed et al., 2022 ). While the literature on Grammarly as a feedback tool in the writing classroom is well-established (Fitriana & Nurazani, 2022 ; Koltovskaia, 2020 ; Thi & Nikolov, 2022 ) and its positive impact on writing has been investigated empirically (Chang et al., 2021 ), the use of AI chatbots like ChatGPT for leveraging feedback in writing classes is a relatively new area and requires further investigation (Barrot, 2023 ). Since ChatGPT is a large generative language model, its potential to help students with writing is immense. It is more student-friendly and can provide more need-based assistance than other AWE tools, as suggested by Guo et al. ( 2022 ) and Rudolph et al. ( 2023 ). It can support student writing by providing appropriate directions related to content and organization as they write (Allagui, 2023 ). 
Since it can automatically train itself and learn from previous conversations (Chan & Hu, 2023 ), students can receive tailored feedback suitable for individual needs.

Like earlier AI chatbots, ChatGPT can be used for generating ideas and brainstorming (Lingard, 2023). Recently, it has been accepted that ChatGPT can make writing easier and faster (Stokel-Walker, 2022). This potential, when exploited by teachers, can be converted into a dependable feedback tool. Wang and Guo (2023) discuss ChatGPT supporting students with learning grammar and vocabulary. As pointed out by Rudolph et al. (2023), irrespective of students’ ability to use language accurately to ask questions, ChatGPT can provide feedback and information. In another study, Dai et al. (2023) report students receiving corrective feedback from ChatGPT. Mizumoto and Eguchi (2023) highlight similar findings from trying ChatGPT as an AWE tool. In a study conducted in Saudi Arabia, Ali et al. (2023) discuss the positive impact of its use on learners’ motivation. This could be due to its ability to provide reliable explanations (Kohnke et al., 2023) without the student having to go through the anxiety of asking the query in a classroom (Su et al., 2023). Since its release in late 2022 (OpenAI, 2022), ChatGPT has gained immense popularity among language educators. It has been reported as capable of producing high-quality texts (Gao et al., 2022), offering feedback on text organization and language use and recommending corrections (Ohio University, 2023), and organizing content logically with appropriate supporting details and conclusions (Fitria, 2023). While Yan (2023) has reported benefits to students’ writing skills through its use, he has also warned that its use can threaten academic honesty and ethicality in writing.

Theoretically, the utilization of ChatGPT for leveraging formative feedback can be placed within a framework comprising two theories. The first one is the theory of feedback as a dialogic process advocated by Winstone and Carless ( 2020 ). According to them, when feedback involves interactions, it leads to students clarifying their expectations, obtaining desired information and guidance, and making progress in learning. ChatGPT facilitates dialogue by responding to the user’s queries regarding various aspects of writing. It offers suggestions when sought and functions as a support-on-demand tool. Additionally, it admits mistakes and rectifies itself thereby making dialogue meaningful. The second one is Barrot’s ( 2023 ) theory of ChatGPT as a reliable writing tool that can provide immediate, need-based, and tailored feedback to students as they move through different stages of writing. The current study was built on these two theories as its aim was to utilize ChatGPT as a formative feedback tool involving SA and PA, and assess its impact on students’ academic writing skills.

Research questions

The study addressed the following two research questions:

In an intensive academic writing course, when the instructional hours and tasks are held constant, does the employment of ChatGPT as a feedback tool have any significant impact on undergraduate ESL students’ writing skills?

How do the experimental group students perceive the impact of ChatGPT as a feedback tool on their writing skills?

Methodology

Mixed methods intervention design

A mixed methods intervention design (Creswell & Plano-Clark, 2017) was adopted for the study. Since the study aimed to assess the impact of ChatGPT as a feedback tool and involved an intervention, a quasi-experimental design was an automatic choice, as is typical of intervention studies in education (Gopalan et al., 2020). However, considering that qualitative data add to the validity of the claims and the strength of the study by providing in-depth details about how students used ChatGPT for self- and peer feedback during classroom assessments, it was logical to choose a mixed methods design. Figure 1 shows the design of the study.

Figure 1. Design of the study

Participants

An intact class of students randomly assigned to two sections was chosen for the study. This aligns with the sampling principle used for quasi-experimental studies in education and applied linguistics (Perry Jr., 2011). These were first-year science and engineering students at an elite privately run university in India. None of them had used ChatGPT to improve their writing skills before the intervention. However, the participants had experience of using mobile phones, laptops, the Internet, and AI tools such as Grammarly. Most of them were from financially well-off backgrounds and had exposure to television, media, and books on various topics from an early age. They were informed about the nature of participation and what they were expected to do before the start of the intervention. Their participation was voluntary, and they were given the choice to opt out of the study at any point during the intervention. Finally, 78 students in the experimental group (EG) and 56 in the comparison group (CG) consented to participate in the study. Since they were in their first year, most students in the CG did not know students from the EG. All of them had studied in English-medium schools before joining the undergraduate program, had English as their second language, and were aged between 18 and 19 years. Only those who attended all the intervention classes (i.e., six hours) and took the pre-, post-, and delayed post-tests were included in the reported data. Thus, 35 students from the EG and 37 students from the CG were included in the study. Six students volunteered when invited to participate in focus group discussions (FGDs). However, only five students participated in all three FGDs. Table 1 presents the inclusion and exclusion criteria.

Methods of data collection

Data were collected for the study primarily through three tests of writing. Each test comprised three tasks focusing on process, comparison, and cause-effect writing (see “Appendix”). Since all students had studied science before joining the undergraduate program, they were familiar with the scientific contexts used in the tasks. The rationale behind the task choice was that these writing genres were frequently used for academic purposes by the target group of students and were also part of the regular syllabus. Rubrics were created by the researcher and another teacher, who also taught the course to other groups of students, for each writing task type, and were validated by two experts in applied linguistics. The evaluation criteria in the rubrics included content, organization, grammar, and vocabulary. The researcher and the other teacher evaluated students’ writing performance using the rubrics. For all write-ups, Cohen’s kappa was found to be more than 0.8, indicating strong inter-rater reliability. The qualitative data for the study were collected through FGDs with five EG students. The FGDs focused on obtaining views about and experiences of using ChatGPT for self- and peer feedback purposes. The students were also encouraged to share artefacts as screenshots during the FGDs.
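Inter-rater agreement of the kind reported above can be computed directly from two raters' labels. A minimal sketch of Cohen's kappa in plain Python (the rubric-band ratings below are hypothetical, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters assigning categorical labels to the same items."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed agreement: proportion of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement, from each rater's marginal label frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[label] * c2[label] for label in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 1-4 rubric-band scores from two raters for ten scripts.
r1 = [3, 4, 2, 3, 4, 3, 2, 4, 3, 3]
r2 = [3, 4, 2, 3, 3, 3, 2, 4, 3, 3]
print(round(cohens_kappa(r1, r2), 3))  # → 0.833
```

Kappa corrects raw percent agreement for agreement expected by chance, which is why values above 0.8 are conventionally read as strong agreement.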

Procedure of data collection

Data were collected in various phases spanning almost an academic semester. First, students in the CG and EG took a pre-test. In the next phase, the EG was prepared through a short training program on using ChatGPT for SA and PA purposes in the writing classroom. In fact, Mathur and Mahapatra ( 2022 ) have recommended training for learners before using a digital tool in the classroom. After that, the intervention was undertaken in which the EG was taught process, comparison and cause-effect writing for six hours, and ChatGPT was used for SA and PA. Figure  2 presents the details about the intervention.

Figure 2. Intervention plan

During the intervention, which was completed in a month, two FGDs were organized to obtain information about students’ views on the employment of ChatGPT. The following prompts guided these discussions:

We are using ChatGPT to assess our own and our peers’ writing. How would you describe your experience with it?

We are getting feedback on the topic sentence, supporting details, concluding sentence, use of signposts, appropriateness of content, and grammar. How beneficial do you think this exercise is?

Do you have any suggestions for making the employment of ChatGPT more impactful?

In the fourth phase, a post-test was conducted for the CG and EG immediately after the end of the intervention. In the last phase, a delayed post-test was conducted for both groups and another FGD was conducted with five students from the EG. Both were organized almost two months after the post-test.

The quantitative and qualitative data were analyzed separately and then triangulated to obtain answers to the research questions. Statistical analyses were performed for the quantitative data, which comprised pre-test, post-test, and delayed post-test scores for the CG and the EG. In fact, Rose et al. (2020) recommend using delayed post-tests for the robust assessment of the intervention impact. It is also a practice in writing-related interventions (Rezai et al., 2022). The data analysis involved a one-way RM ANOVA run on the EG’s scores across three tests to compare the corresponding mean scores. A post-hoc Bonferroni test (Loewen & Plonsky, 2017) was run to control the overall error rate due to RM ANOVA. Then, a one-tailed t-test was computed to compare the pre-test, post-test and delayed post-test mean scores of the CG and the EG. Before the t-tests were run, Levene’s test for equality of variance was calculated. The f-ratio value was 0.16078, and the p-value was 0.689658. The result was not significant at p < 0.05; thus, the homogeneity requirement was met. Also, a Shapiro–Wilk test was conducted to test normality. For the CG, it did not show a significant departure from normality, W(35) = 0.98, p = 0.675. For the EG too, no significant departure from normality was observed, W(37) = 0.96, p = 0.168. Grubbs’ test, performed to identify outliers, did not find any.

The qualitative analysis involved coding the transcripts of the FGDs. A phronetic iterative approach (Tracy, 2019 ) was adopted in this case. The approach was found suitable because the coding was inductive, and at the same time, it was guided by the themes that emerged through the review of literature and a few known patterns.

Positive impact of ChatGPT as a feedback tool

The impact of ChatGPT as a feedback tool on students’ writing skills was positive and significant. The differences among the EG mean scores for the pre-test, post-test, and delayed post-test (see Table  2 ) indicate the trajectory of improvement in students’ writing skills.

The claim is strengthened by the results from the one-way RM ANOVA computations. The results (F [2, 72] = 330.704, p  = 5.146e−37, η 2  = 0.902) indicate a significant difference among some of the mean scores across the tests. A Bonferroni post-hoc test was performed to trace any significant differences between pairs due to the intervention. The differences between the pre-test and post-test ( p  < 0.001, d  = − 3.300) and pre-test and delayed post-test ( p  < 0.001, d  = − 3.898) scores were found to be statistically highly significant. A significant difference was observed between the post-test and delayed post-test scores ( p  < 0.01, d  = − 0.598) (see Tables 3 , 4  and 5 ).
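The Bonferroni adjustment and Cohen's d reported above follow standard formulas. A hedged sketch with hypothetical samples (not the study's raw scores; the pooled-standard-deviation convention for d is assumed here, one of several in use):

```python
from math import sqrt

def cohens_d(x, y):
    """Cohen's d using a pooled standard deviation (one common convention)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled_sd = sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd

def bonferroni(p_values):
    """Bonferroni-adjusted p-values: multiply by the number of comparisons, cap at 1."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

pre, post = [1, 2, 3, 4], [3, 4, 5, 6]      # hypothetical score lists
print(round(cohens_d(pre, post), 3))         # → -1.549 (negative: post > pre)
print(bonferroni([0.25, 0.125, 0.5]))        # → [0.75, 0.375, 1.0]
```

The negative sign of d simply reflects the order of subtraction (pre minus post), matching the negative d values reported for the pre-test versus post-test comparisons.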

Since the findings from the one-way RM ANOVA offered little information about the comparative performance between the CG and the EG, two one-tailed independent sample t -tests were computed for both groups' post-test and delayed post-test scores. A significant difference was found between the post-test scores of the CG and the EG: t (70) = − 5.643, p  = 3.297e−7, with the EG’s ( M  = 19.216, SD = 2.485) mean score significantly higher than that of the CG ( M  = 16.371, SD = 1.695). The difference continued to be statistically significant for the delayed post-test scores of the CG and the EG: t (70) = − 9.371, p  = 5.544e−14, with EG’s ( M  = 20.419, SD = 2.575) mean score significantly higher than that of the CG ( M  = 15.400, SD = 1.897) (see Tables 6 , 7 and  8 ).
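The independent-samples t values above can be reproduced from the reported summary statistics alone. A sketch assuming the pooled-variance Student's t convention (the function name is illustrative), using the reported post-test means, SDs, and group sizes:

```python
from math import sqrt

def t_from_summary(m1, s1, n1, m2, s2, n2):
    """Student's independent-samples t from group means, SDs, and sizes."""
    pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    se = sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (m1 - m2) / se, n1 + n2 - 2

# Post-test summary statistics reported above: CG (M=16.371, SD=1.695, n=35)
# versus EG (M=19.216, SD=2.485, n=37).
t, df = t_from_summary(16.371, 1.695, 35, 19.216, 2.485, 37)
print(round(t, 3), df)  # → -5.643 70, matching the reported t(70) = -5.643
```

That the reported statistic falls out of the published means and SDs is a useful sanity check on summary tables like these.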

Students’ positive perception of the impact

The FGDs were transcribed and coded. Table 9 presents the analysis.

The codes were classified under three main themes (content, organization, and grammar) and two sub-themes (positive and specific, and negative). When asked about their views on the impact of ChatGPT as a feedback tool, students in FGDs spoke in terms of content, organization, and grammar.

The way it guides us in obtaining the required information, arranging our ideas, and writing correctly is surprising. I was never aware that it could be a writing buddy. (S2, FGD 1)

You see, when we were asking it to help us with organizing the ideas in writing, it gave us some directions. I guess that’s quite helpful. It’s like someone is constantly there to oversee your writing process. (S4, FGD 1)

It’s actually better than Grammarly in the sense that it explains the grammar issues when asked. You have a choice, and you can also learn from it. (S1, FGD 2)

They seemed to be happy about how ChatGPT helped them generate ideas, obtain focused information on the given topic, and work independently.

The best part about ChatGPT is that you ask for information, and you get it. You can go as specific or detailed as you wish. This reduced our thinking time invested to get ideas and information. (S5, FGD 3)

I felt that it was a handy support tool for writing without being dependent on anyone. When we write, we usually have queries regarding several aspects of writing. With ChatGPT, you have a reliable support system with you. I like that freedom. (S3, FGD 2)

They also highlighted how it promoted collaboration among peers, facilitated faster task completion, helped them create strong topic sentences and reduced brainstorming time.

When you use ChatGPT in a classroom with your classroom, you’re doing it with several people. So much talk going on simultaneously! It’s kinda cool. The conversations are so meaningful and without noticing, we are working together and writing. (S2, FGD 3)

I absolutely love how we play with it together and how that fun is so productive. The process took much less time and there was this constant focused chatter which helped complete the tasks. We didn’t miss anything significant, for example, when creating the topic sentence, because one or two people are working with me and giving me feedback on the topic and the strength of the controlling idea. (S4, FGD 2)

However, it was also pointed out on a few occasions that ChatGPT could lead to a lack of motivation to think and more machine dependence.

It might be a concern that my dependence might discourage me to do things on my own when writing. What if I won’t want to write on my own? It can be scary, but I don’t know. (S5, FGD 3)

In their comments on the impact of ChatGPT as a feedback tool on organization in writing, students overwhelmingly claimed that ChatGPT made it easier to stay focused on the topic when writing.

We felt that it keeps us on our toes. You know it’s so easy to get diverted and include details unrelated to the topic. When you ask ChatGPT, it tells you where you skid off the track. (S1, FGD 3)

It’s unreal! You share the topic sentence and the supporting details and it tells you if and how you have adhered to the controlling idea in your details. Isn’t that cool? I don’t think we can do that so accurately on our own. (S3, FGD 3)

Students also mentioned that through feedback, they could add appropriate supporting details related to the main idea, use proper signposts, and write strong conclusions. More collaboration among peers when organizing the content was also highlighted.

When you asked us to write that paragraph on AI use in education, I created a topic sentence and added the required details along with a concluding sentence. When I asked GPT to tell me if my concluding sentence is a good one for the topic sentence, the feedback was surprisingly good. I did the same when sharing feedback on my friends’ writing. (S5, FGD 2)

I’m happy that I have improved my signpost use. In fact, most of my classmates have too. It made me conscious about the choice of signposts. The thread connecting information and ideas suddenly felt more robust. Sometimes, the explicit explanation with examples helped. (S4, FGD 3)

Though a minor pattern, it was nonetheless claimed that ChatGPT imposed a pattern on writing and hindered creativity in content organization.

I’m not sure, but sometimes it feels like I’m under a spell and I’m arranging information as directed. Though it’s my responsibility to choose and accept, I may be getting too lazy to use my own creativity to place things in order. (S2, FGD 3)

When talking about the impact of ChatGPT on grammar, students highlighted ChatGPT as a reliable source of grammar, a tool for improving accuracy in sentence structure and obtaining explanations for language errors.

I’ve been using Grammarly for a while, but it provides explanations for the grammar queries or when it identifies an error. Nothing like knowing the details about the issue. We kinda get sucked into curiosity by asking it questions on sentence structure, tense use and other aspects of grammar. It provides detailed explanations with examples. Good alternative to Grammarly, dictionary and other such stuff. (S1, FGD 3)

On several occasions, students shared artefacts showing how ChatGPT directed them in the use of zero conditionals in scientific writing. Thus, when a student asked it to verify ‘At first, we take a bowl full of water and heat it. When it boils, we will stop the stove. Then, we take it out.’, ChatGPT provided a polished version: ‘Initially, a bowl filled with water was taken and heated. Once the boiling point is reached, the stove was promptly turned off. Subsequently, the bowl was carefully removed.’ On some other occasions, students were encouraged by ChatGPT responses and sought further explanations related to the correct use of punctuation, articles, and sentence structures in formal contexts. In the above-mentioned case, ChatGPT explained that the past tense and passive voice are common in scientific writing, especially in the methods section, and that the passive voice adds to objectivity and clarity in writing.

The minor patterns included the claims related to helping with vocabulary choices, peer discussions and contributing to explicit knowledge of grammar. Two relatively minor patterns indicating negative impacts also emerged from the FGDs: lack of attention among students in terms of maintaining grammatical accuracy in writing and an increase in machine dependence.

Discussion

This mixed methods intervention study assessed the impact of ChatGPT as a formative feedback tool on the academic writing skills of undergraduate ESL students, which comprised the first research question. It also captured students’ views on the impact of ChatGPT on their writing skills, which was the focus of the second research question. The answer to the first question was obtained through the quasi-experimental study, and the FGD data were analyzed to find the answer to the second question. The findings from the experiment corroborated those from the FGDs. Few studies before this (published in mainstream journals) have empirically explored the impact of ChatGPT on academic writing skills; thus, the current study bears significance. The study contributes to several areas of research on academic writing. First, it demonstrates the utilization of ChatGPT for formative feedback purposes in an academic writing classroom. Second, it was conducted in a natural setting without many intrusive measures. Last, the mixed methods intervention design employed for the study is relatively rare in academic writing research.

Positive impact on writing skills

Though the CG and the EG students demonstrated improved writing skills during the period under consideration, the EG’s performance was significantly better than their CG peers across two post-intervention tests. Though there is little research on the impact of ChatGPT on students’ academic writing skills, the findings are consistent with those reported by researchers who focused on the impact of AI-driven AWE tools on tertiary-level students’ writing skills (Marzuki et al., 2023 ; Zhao, 2022 ). Many existing studies utilized these tools for formative purposes (Rudolph et al., 2023 ) and found similar results. Thus, the employment of generative AI only strengthens those claims. The positive impact was evident in students’ achievements in terms of generation of focused ideas, better connection among ideas and sentences, and improved grammatical accuracy, which were also claimed by previous researchers (Allagui, 2023 ; Kohnke et al., 2023 ; Su et al., 2023 ; Wang & Guo, 2023 ). The delayed post-test results in this study add to the generalizability of the positive impact claims (Rose et al., 2020 ). In fact, the literature on delayed post-tests highlights the sustainability of the impact and retention of writing skills (Rezai et al., 2022 ). The continued positive impact could be a result of various factors. First, planned training was organized for students before the intervention was undertaken, as advised by researchers who undertook similar interventions (Mathur & Mahapatra, 2022 ). Second, students were engaged in SA and PA, which have been proven effective strategies in writing classrooms (Mathur & Mahapatra, 2022 ). Third, the metalinguistic explanations as part of the corrective feedback could have made a difference, as claimed by many previous researchers (Kohnke et al., 2023 ). Fourth, the instant and personalized nature of the obtained feedback could have strongly impacted the continued positive impact (Gayed et al., 2022 ). 
Fifth, self-correction was made easy (Dai et al., 2023). Last, a major factor that might have shaped EG students’ performance in the study is their pre-intervention proficiency levels. They had to pass a challenging test of English to join the institution. Thus, future researchers may investigate language proficiency as a variable when determining the impact of ChatGPT. The fear regarding the loss of creativity and the imposition of a pre-decided pattern adds to the fear regarding the loss of ethicality reported by Yan (2023). Several other factors, like the affordances of ChatGPT as a writing tool and its ability to engage students in a dialogic feedback process, backed by the theories of Barrot (2023) and Winstone and Carless (2020), also strengthen the theoretical foundations underpinning the use of ChatGPT. Through empirical evidence on the utility of ChatGPT as a formative feedback tool in the academic writing classroom, this study establishes that the integration of ChatGPT into academic writing instruction can yield positive impacts when the instructor is aware of the affordances of ChatGPT and knows how to guide students in its use. It also shows that the dialogic nature of ChatGPT can be fully put to use when students are adequately prepared.

Students’ positive outlook

The findings from the analysis of FGD data indicate an overall positive attitude towards the impact of ChatGPT. This finding is consistent with the literature on students’ views on the impact of ChatGPT and other AI-driven tools on their writing (Marzuki et al., 2023 ; Yan, 2023 ). The overall positive attitude can be explained by the enthusiasm for using ChatGPT in the classroom. The findings on how ChatGPT aids content generation align with Gayed et al.’s ( 2022 ), Guo et al.’s ( 2022 ) and Marzuki et al.’s ( 2023 ) claims. In fact, this confirms assertions in the education literature on ChatGPT. Staying focused on the topic is an added advantage which could be new to the literature.

On the other hand, the promotion of learner autonomy and peer collaboration are similar to findings in the AWE literature (Dai et al., 2023; Rudolph et al., 2023). The advantages of faster task completion and creating more robust topic and concluding sentences are attractive, as they are relatively new to the AWE and ChatGPT literature. They may need more microscopic investigation.

In terms of organization, the perceived impact confirms the findings of Allagui (2023) and Marzuki et al. (2023), who highlight ChatGPT's help with organizing content. Thi and Nikolov's (2022) study on Grammarly reports negative perceptions related to organization, with which the current study disagrees; the difference may have to do with students' ability to ask appropriate questions. It will be interesting to examine whether students with low proficiency levels can benefit from ChatGPT in organizing their content. Another overwhelming opinion that emerged from the study is that ChatGPT improves grammatical accuracy, in line with the claims of Ohio University (2023) and Wang and Guo (2023). Though this is an expected finding that strengthens results reported in Grammarly studies (Fitriana & Nurazani, 2022), the highlight here is the attention students paid to explicit metalinguistic feedback, which is also found in the AWE literature (Kohnke et al., 2023). However, the lack of attention to accuracy-related details may need focused longitudinal inquiries.

The study was an attempt to investigate the extent to which the employment of ChatGPT as a feedback tool impacts ESL students' academic writing skills. It could be one of the earliest intervention-based empirical inquiries into the impact of ChatGPT on students' academic writing skills. Conducted as a mixed methods intervention study, it produced findings along expected lines. The significant positive impact, coupled with students' positive perception of it, can add a fresh perspective to the literature on ChatGPT and other AWE tools. The findings add to the literature on AWE, especially on the use of generative AI. They strengthen and extend theories of feedback as a dialogic tool and of ChatGPT as a formative feedback tool that can be harmoniously integrated into large writing classes. They demonstrate that the interactions ChatGPT facilitates during the process of writing have a direct positive impact on student learning: they positively influence how students seek feedback, how they engage with it and how they make improvements in their academic writing, and they enable students to overcome the anxiety involved in asking for and receiving the desired kind of feedback. In large classes, where feedback-related dialogue is a thorny issue, ChatGPT can be a potential game-changer, providing tailored feedback that goes beyond the barriers of language, time and place. It may be safely claimed that students who are not very proficient in English can ask for and receive help in their own language on ChatGPT. The study also makes a case for ChatGPT as a formative feedback tool that can drive writing forward through SA and PA. In a way, ChatGPT breaks the barrier between SA and PA and strengthens the role of the teacher as a facilitator, because many time-consuming tasks in large writing classrooms, such as monitoring content, organization, vocabulary use and grammatical accuracy, can easily be performed by ChatGPT.
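The dialogic, multi-area feedback request described above can be made concrete for students. The study does not publish the prompts its participants used; purely as a hypothetical sketch, a reusable prompt covering the four assessed areas (content, organization, vocabulary use, grammatical accuracy) might be assembled like this — the function name and wording are assumptions, not the study's materials:

```python
# Hypothetical illustration: the study does not publish its prompts. This
# sketch only shows how a formative-feedback request to ChatGPT could be
# structured around the four areas the intervention targeted. All names and
# wording here are assumptions, not the study's actual materials.

FOCUS_AREAS = ["content", "organization", "vocabulary use", "grammatical accuracy"]

def build_feedback_prompt(task: str, draft: str) -> str:
    """Assemble a dialogic feedback prompt a student could paste into ChatGPT."""
    criteria = "\n".join(f"- {area}" for area in FOCUS_AREAS)
    return (
        f"Task: {task}\n\n"
        f"My draft:\n{draft}\n\n"
        "Act as a writing tutor. Give formative feedback (do not rewrite) on:\n"
        f"{criteria}\n"
        "End with two questions that push me to revise the draft myself."
    )

prompt = build_feedback_prompt(
    "Write a paragraph comparing renewable and non-renewable resources.",
    "Renewable resources is wind and solar. Non-renewable is coal and oil.",
)
print(prompt)
```

A student would paste the resulting prompt into ChatGPT and continue the dialogue over successive drafts, in line with the formative, dialogic feedback cycle the study advocates.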

The study establishes the potential of ChatGPT as a pedagogic tool for writing classrooms, especially in many Global South countries where students have access to portable computing devices and the Internet. It can be easily integrated into the regular teaching of academic writing skills in institutions of higher education. A major factor that needs mentioning is the teacher's attitude towards ChatGPT and their ability to use it constructively in a large class; the latter includes both teacher self-training and student training. It is true that many webinars and workshops on the use of ChatGPT are being conducted for teachers working in Global South countries like India. However, without proper reflective planning and an analysis of whether traditional feedback strategies need to be set aside, the use of ChatGPT may not be as impactful. Thus, teacher education programs need to orient teachers towards utilizing ChatGPT in their writing classrooms.

Methodologically, the use of a mixed methods intervention design, a potent way of conducting educational experiments, can be a significant addition to the applied linguistics literature. Though this is one of the first attempts to empirically investigate the impact of ChatGPT on students' academic writing skills, the study has a few limitations. First, it focused on only three writing genres. Second, the intervention lasted for only six hours. Third, artefacts from students were not included in the study. Since the study used an intact classroom, which adds to the authenticity and validity of the sampling, it was impossible to go beyond the prescribed syllabus, focus on more components or continue the intervention for more hours. The artefacts could have added to the study, but privacy and copyright issues were difficult to overcome. It is hard to ignore how these limitations may have shaped the findings. The findings may hold only for the three genres studied, and a major genre like argumentative writing was left out; thus, any generalization should be carefully worded. The absence of artefacts also weakens the validity of the claims: the findings are based entirely on the tests and students' opinions, and could have been strengthened by the collection and analysis of classroom artefacts. Future researchers can investigate the impact of ChatGPT using an extended intervention period. More investigation is required to compare its impact across writing genres and across various micro features. Another exciting area could be the impact of corrective metalinguistic written feedback given through ChatGPT on students' writing skills. With ethical issues looming large, the future of writing research will nonetheless be dominated by generative AI writing tools like ChatGPT.
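The quantitative strand of an intervention design like this rests on a paired pre/post comparison of test scores. The study's actual scores and analysis are not reproduced here; the sketch below uses invented numbers only to illustrate how a paired-samples t statistic for writing-score gains is computed:

```python
# Invented numbers, for illustration only: how a paired pre/post comparison
# of writing scores (the core of an intervention design like this study's
# quantitative strand) is computed. These are NOT the study's data.
import math
import statistics

pre = [52, 48, 60, 55, 50]    # hypothetical pre-test writing scores
post = [60, 55, 66, 59, 58]   # hypothetical post-test scores, same students

diffs = [b - a for a, b in zip(pre, post)]          # per-student gain
mean_gain = statistics.mean(diffs)                  # average improvement
sd = statistics.stdev(diffs)                        # sample SD of the gains
t_stat = mean_gain / (sd / math.sqrt(len(diffs)))   # paired-samples t

print(f"mean gain = {mean_gain:.1f}, t = {t_stat:.2f}")
```

A large t value relative to the critical value for n − 1 degrees of freedom indicates that the pre-to-post gain is unlikely to be due to chance, which is the logic behind the significance claims such studies make.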

Availability of data and materials

The data will be made available upon appropriate request to the author.

Abbreviations

AI: Artificial intelligence

ESL: English as a second language

EFL: English as a foreign language

SA: Self-assessment

PA: Peer-assessment

AWE: Automated writing evaluation

Ali, J. K. M., Shamsan, M. A. A., Hezam, T. A., & Mohammed, A. A. Q. (2023). Impact of ChatGPT on learning motivation: Teachers and students’ voices. Journal of English Studies in Arabia Felix, 2 (1), 41–49. https://doi.org/10.56540/jesaf.v2i1.51

Allagui, B. (2023). Chatbot Feedback on students’ writing: Typology of comments and effectiveness. In O. Gervasi, B. Murgante, A. M. A. C. Rocha, C. Garau, F. Scorza, Y. Karaca, & C. M. Torre (Eds.), International conference on computational science and its applications (pp. 377–384). Springer. https://doi.org/10.1007/978-3-031-37129-5_31

Anderson, J. A., & Ayaawan, A. E. (2023). Formative feedback in a writing programme at the University of Ghana. In A. Esimaje, B. van Rooy, D. Jolayemi, D. Nkemleke, & E. Klu (Eds.), African perspectives on the teaching and learning of English in higher education (pp. 197–213). Routledge.

Barrot, J. S. (2023). Using ChatGPT for second language writing: Pitfalls and potentials. Assessing Writing, 57 , 100745. https://doi.org/10.1016/j.asw.2023.100745

Bonner, E., Lege, R., & Frazier, E. (2023). Large language model-based artificial intelligence in the language classroom: Practical ideas for teaching. Teaching English with Technology, 23 (1), 23–41. https://doi.org/10.56297/BKAM1691/WIEO1749

Bozorgian, H., & Yazdani, A. (2021). Direct written corrective feedback with metalinguistic explanation: Investigating language analytic ability. Iranian Journal of Language Teaching Research, 9 (1), 65–85. https://doi.org/10.30466/IJLTR.2021.120976

Butterfuss, R., Roscoe, R. D., Allen, L. K., McCarthy, K. S., & McNamara, D. S. (2022). Strategy uptake in writing Pal: Adaptive feedback and instruction. Journal of Educational Computing Research, 60 (3), 696–721. https://doi.org/10.1177/07356331211045304

Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20 , 43. https://doi.org/10.1186/s41239-023-00411-8

Chang, T. S., Li, Y., Huang, H. W., & Whitfield, B. (2021). Exploring EFL students’ writing performance and their acceptance of AI-based automated writing feedback. In 2021 2nd International conference on education development and studies (pp. 31–35). https://doi.org/10.1145/3459043.3459065

Creswell, J. W., & Plano Clark, V. L. (2017). Designing and conducting mixed methods research (3rd ed.). Sage Publications.

Dai, W., Lin, J., Jin, F., Li, T., Tsai, Y., Gasevic, D., & Chen, G. (2023). Can large language models provide feedback to students? A case study on ChatGPT. https://doi.org/10.35542/osf.io/hcgzj

Fathi, J., & Khodabakhsh, M. R. (2019). The role of self-assessment and peer-assessment in improving writing performance of Iranian EFL students. International Journal of English Language and Translation Studies, 7 (3), 1–10.

Fitria, T. N. (2023). Artificial intelligence (AI) technology in OpenAI ChatGPT application: A review of ChatGPT in writing English essay. ELT Forum: Journal of English Language Teaching, 12 (1), 44–58. https://doi.org/10.15294/elt.v12i1.64069

Fitriana, K., & Nurazni, L. (2022). Exploring English department students’ perceptions on using Grammarly to check the grammar in their writing. Journal of English Teaching, 8 (1), 15–25. https://doi.org/10.33541/jet.v8i1.3044

Gao, C. A., Howard, F. M., Markov, N. S., Dyer, E. C., Ramesh, S., Luo, Y., & Pearson, A. T. (2022). Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers. BioRxiv . https://doi.org/10.1101/2022.12.23.521610

Gayed, J. M., Carlon, M. K. J., Oriola, A. M., & Cross, J. S. (2022). Exploring an AI-based writing assistant’s impact on English language learners. Computers and Education: Artificial Intelligence, 3 , 100055. https://doi.org/10.1016/j.caeai.2022.100055

Golzar, J., Momenzadeh, S. E., & Miri, M. A. (2022). Afghan English teachers’ and students’ perceptions of formative assessment: A comparative analysis. Cogent Education, 9 (1), 2107297. https://doi.org/10.1080/2331186X.2022.2107297

Gopalan, M., Rosinger, K., & Ahn, J. B. (2020). Use of quasi-experimental research designs in education research: Growth, promise, and challenges. Review of Research in Education, 44 (1), 218–243. https://doi.org/10.3102/0091732X20903302

Guo, K., Wang, J., & Chu, S. K. W. (2022). Using chatbots to scaffold EFL students’ argumentative writing. Assessing Writing, 54 , 100666. https://doi.org/10.1016/j.asw.2022.100666

Haristiani, N. (2019). Artificial Intelligence (AI) chatbot as language learning medium: An inquiry. Journal of Physics: Conference Series, 1387 (1), 1–7. https://doi.org/10.1088/1742-6596/1387/1/012020

Huisman, B., Saab, N., van den Broek, P., & van Driel, J. (2019). The impact of formative peer feedback on higher education students’ academic writing: A meta-analysis. Assessment & Evaluation in Higher Education, 44 (6), 863–880. https://doi.org/10.1080/02602938.2018.1545896

Imran, M., & Almusharraf, N. (2023). Analyzing the role of ChatGPT as a writing assistant at higher education level: A systematic review of the literature. Contemporary Educational Technology, 15 (4), ep464. https://doi.org/10.30935/cedtech/13605

Kohnke, L., Moorhouse, B. L., & Zou, D. (2023). ChatGPT for language learning and teaching. RELC Journal . https://doi.org/10.1177/003368822311628

Koltovskaia, S. (2020). Student engagement with automated written corrective feedback (AWCF) provided by Grammarly: A multiple case study. Assessing Writing, 44 , 100450. https://doi.org/10.1016/j.asw.2020.100450

Lam, R. (2018). Feedback in writing portfolio assessment. In R. Lam (Ed.), Portfolio assessment for the teaching and learning of writing (pp. 59–72). Springer.

Lingard, L. (2023). Writing with ChatGPT: An illustration of its capacity, limitations & implications for academic writers. Perspectives on Medical Education, 12 (1), 261–270. https://doi.org/10.5334/pme.1072

Loewen, S., & Plonsky, L. (2017). An A-Z of applied linguistics research methods . Bloomsbury Publishing.

Mahapatra, S. K. (2021). Online formative assessment and feedback practices of ESL teachers in India, Bangladesh and Nepal: A multiple case study. Asia-Pacific Education Researcher, 30 (6), 519–530. https://doi.org/10.1007/s40299-021-00603-8

Marzuki, S., Widiati, U., Rusdin, D., Darwin, R., & Indrawati, I. (2023). The impact of AI writing tools on the content and organization of students’ writing: EFL teachers’ perspective. Cogent Education, 10 (2), 2236469. https://doi.org/10.1080/2331186X.2023.2236469

Mathur, M., & Mahapatra, S. (2022). Impact of ePortfolio assessment as an instructional strategy on students’ academic speaking skills: An experimental study. CALL-EJ, 23 (3), 1–23.

Mizumoto, A., & Eguchi, M. (2023). Exploring the potential of using an AI language model for automated essay scoring. Research Methods in Applied Linguistics . https://doi.org/10.1016/j.rmal.2023.100050

Ohio University. (2023). ChatGPT and teaching and learning . https://www.ohio.edu/center-teaching-learning/resources/chatgpt

Olsen, T., & Hunnes, J. (2023). Improving students’ learning—The role of formative feedback: Experiences from a crash course for business students in academic writing. Assessment & Evaluation in Higher Education . https://doi.org/10.1080/02602938.2023.2187744

OpenAI. (2022). ChatGPT: Optimizing language models for dialogue. OpenAI . https://openai.com/blog/chatgpt/

Perry, F. L., Jr. (2011). Research in applied linguistics: Becoming a discerning consumer . Routledge.

Rad, H. S., Alipour, R., & Jafarpour, A. (2023). Using artificial intelligence to foster students’ writing feedback literacy, engagement, and outcome: A case of Wordtune application. Interactive Learning Environments . https://doi.org/10.1080/10494820.2023.2208170

Rezai, A., Naserpour, A., & Rahimi, S. (2022). Online peer-dynamic assessment: an approach to boosting Iranian high school students’ writing skills: A mixed-methods study. Interactive Learning Environments . https://doi.org/10.1080/10494820.2022.2086575

Rodríguez, M. F., Nussbaum, M., Yunis, L., Reyes, T., Alvares, D., Joublan, J., & Navarrete, P. (2022). Using scaffolded feedforward and peer feedback to improve problem-based learning in large classes. Computers & Education, 182 , 104446. https://doi.org/10.1016/j.compedu.2022.104446

Rose, H., McKinley, J., & Baffoe-Djan, J. B. (2020). Data collection research methods in applied linguistics . Bloomsbury Academic.

Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning & Teaching, 6 (1), 342–363. https://doi.org/10.37074/jalt.2023.6.1.9

Stokel-Walker, C. (2022). AI bot ChatGPT writes smart essays—Should professors worry? Nature . https://doi.org/10.1038/d41586-022-04397-7

Su, Y., Lin, Y., & Lai, C. (2023). Collaborating with ChatGPT in argumentative writing classrooms. Assessing Writing, 57 , 100752. https://doi.org/10.1016/j.asw.2023.100752

Tai, A. M. Y., Meyer, M., Varidel, M., Prodan, A., Vogel, M., Iorfino, F., & Krausz, R. M. (2023). Exploring the potential and limitations of ChatGPT for academic peer-reviewed writing: Addressing linguistic injustice and ethical concerns. Journal of Academic Language and Learning, 17 (1), T16–T30.

Taskiran, A., & Goksel, N. (2022). Automated feedback and teacher feedback: Writing achievement in learning English as a foreign language at a distance. Turkish Online Journal of Distance Education, 23 (2), 120–139. https://doi.org/10.17718/tojde.1096260

Thi, N. K., & Nikolov, M. (2022). How teacher and Grammarly feedback complement one another in Myanmar EFL students’ writing. The Asia-Pacific Education Researcher, 31 (6), 767–779. https://doi.org/10.1007/s40299-021-00625-2

Tracy, S. J. (2019). Qualitative research methods: Collecting evidence, crafting analysis, communicating impact . Wiley.

Wang, M., & Guo, W. (2023). The potential impact of ChatGPT on education: Using history as a rearview mirror. ECNU Review of Education. https://doi.org/10.1177/20965311231189826

Winstone, N., & Carless, D. (2020). Designing effective feedback processes in higher education . Routledge.

Yamashita, T. (2021). Corrective feedback in computer-mediated collaborative writing and revision contributions. Language Learning & Technology, 25 (2), 75–93.

Yan, D. (2023). Impact of ChatGPT on learners in a L2 writing practicum: An exploratory investigation. Education and Information Technologies. https://doi.org/10.1007/s10639-023-11742-4

Zhang, F., Schunn, C., Chen, S., Li, W., & Li, R. (2023). EFL student engagement with giving peer feedback in academic writing: A longitudinal study. Journal of English for Academic Purposes, 64 , 101255. https://doi.org/10.1016/j.jeap.2023.101255

Zhao, X. (2022). Leveraging artificial intelligence (AI) technology for English writing: Introducing wordtune as a digital writing assistant for EFL writers. RELC Journal . https://doi.org/10.1177/00336882221094089

Zhu, M., Liu, O. L., & Lee, H. S. (2020). The effect of automated feedback on revision behavior and learning gains in formative assessment of scientific argument writing. Computers & Education, 143 , 103668. https://doi.org/10.1016/j.compedu.2019.103668

Download references

Acknowledgements

I acknowledge and thank Prof. Punna Rao, my colleague, for his suggestions concerning the paper's quantitative analysis.

Funding

No external funding was used to conduct the study.

Author information

Authors and Affiliations

K 127, BITS Pilani Hyderabad Campus, Hyderabad, 500078, India

Santosh Mahapatra

Contributions

The author has planned, designed, and conducted the study and written the paper.

Corresponding author

Correspondence to Santosh Mahapatra .

Ethics declarations

Ethics approval and consent to participate

Written consent was obtained from all the participants in the study. Their participation was voluntary, and they had the option to leave the study at any point. The institution where they studied did not require ethical committee approval for the study, as the students were adults and free to choose whether to participate.

Consent for publication

Not applicable.

Competing interests

I do not have any financial or non-financial competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Process writing tasks

Write a paragraph in 150 words describing the procedure for conducting an experiment using two spring balances to verify Newton’s third law of motion.

Write a paragraph in 150 words describing the procedure for conducting an experiment in the laboratory to study Archimedes’ principle.

Write a paragraph in 150 words describing the procedure for plotting a cooling curve that shows the relationship between the temperature of a hot object and the time it takes to cool down.

Comparison writing tasks

Write a paragraph in 150 words comparing the features of the iOS and Android operating systems.

Write a paragraph in 150 words comparing the properties of metals and non-metals.

Write a paragraph in 150 words comparing renewable and non-renewable resources.

Cause-effect writing tasks

Write a paragraph in 150 words describing the impact of regular exercise on our mental health.

Write a paragraph in 150 words describing the impact of climate change on the environment.

Write a paragraph in 150 words describing the impact of technology addiction on mental health.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article.

Mahapatra, S. Impact of ChatGPT on ESL students' academic writing skills: a mixed methods intervention study. Smart Learn. Environ. 11, 9 (2024). https://doi.org/10.1186/s40561-024-00295-9

Received : 03 October 2023

Accepted : 07 February 2024

Published : 13 February 2024

DOI : https://doi.org/10.1186/s40561-024-00295-9


Keywords

  • Writing skills
  • Intervention

July 22, 2024

Science, social studies classes can help young English-learning students learn to read and write in English

by Joey Pitchford, North Carolina State University

A new study finds that science and social studies classes may also help young students learn English, even when those classes include difficult and technical vocabulary. The paper is published in the Journal of Educational Psychology .

The study, which observed first- and second-grade students in 30 elementary schools in North Carolina, encouraged teachers to keep their English-learning students in class during science and social studies lessons. Science and social studies textbooks in those grades are often relatively technical and difficult for students, so traditional teaching methods in North Carolina encourage teachers to remove English-learning students from those content classes to focus on their language skills instead.

By creating a 10-week literacy program—known as a Tier 1 intervention—which kept English-learners in science and social studies classes, researchers found that those students saw improvements in their ability to write argumentative essays and use new academic vocabulary. The study highlights the importance of giving English-learners access to academic content, said Jackie Relyea, corresponding author of the study and assistant professor of literacy education at North Carolina State University.

"This study shows how important it is to provide equitable opportunities for English-learners to build knowledge in science and history, and to apply that knowledge through informational texts alongside their peers," Relyea said. "What we found was that when English-learners have access to content-rich literacy instruction, they develop content knowledge as well as language, reading comprehension, and writing skills."

The program used methods like interactive read-alouds, collaborative research and concept mapping to build students' vocabulary and understanding of complex topics. Concept mapping refers to using diagrams or similar visual aids to depict connections between ideas.

Significantly, the study found that English-learners had similar levels of improvement in science and social studies vocabulary and argumentative writing as their English-proficient classmates across the 10-week program. Notably, the intervention did not lead to negative results elsewhere, which supports the idea that English-learning students can attend more complex classes without falling behind. This further suggests that content-rich literacy instruction may help narrow the achievement gap between English-learners and their peers.

The intervention also modified classes to cover individual subjects for longer. That way, Relyea said, English-learners could get comfortable with a subject early on and then continue to get value from that knowledge later.

"One thing we noticed is the importance of using coherent text sets that focus on a single topic. In more traditional literacy instruction, our study found that topics tended to change quickly and there wasn't always a consistent throughline that the students could grab on to," she said.

"By focusing on similar subjects for longer, kids can dig deeper and develop more in-depth knowledge. It may be challenging at first, but when students encounter the same new words day after day, they become familiar with them and expand their vocabulary network. Staying on a thematic unit for longer periods also helps them become experts in the subject matter, which greatly enhances their vocabulary and comprehension skills."

The study challenges widely held assumptions about English-learners' academic capabilities, and highlights their readiness to engage with complex subject material despite their developing English proficiency. Relyea said that further research into incorporating small-group supplemental instruction could be valuable, potentially enhancing the effectiveness of the program even further.

Journal information: Journal of Educational Psychology

Provided by North Carolina State University


No charges filed against County Commissioner Tobia, despite wide-ranging allegations

A state prosecutor has opted against filing charges against Brevard County Commissioner John Tobia, after investigating a series of allegations of wrongdoing brought by a former Tobia employee.

Florida Department of Law Enforcement reports of its staff's interviews with two current employees in Tobia's office appeared to back up some of former employee Christopher Davis' allegations, but did not address others.

The allegations are contained in a 247-page case file obtained by FLORIDA TODAY from the State Attorney's Office for the 7th Judicial Circuit, which handled the case. The investigation was transferred in November by Gov. Ron DeSantis from the 18th Judicial Circuit that includes Brevard to the Daytona Beach-area 7th Circuit, at the request of 18th Circuit State Attorney Phil Archer, who wanted to avoid a conflict of interest or any appearance of impropriety.

Davis, a former administrative aide in Tobia's Palm Bay office, alleges in sworn statements to investigators that Tobia:

  • Used staff in his commission office to help him grade papers of students he taught at Valencia College while employees were supposed to be working on county business. Valencia lists Tobia as a full-time professor, and Tobia teaches courses on American federal government and on state and local government. The college paid Tobia $100,663.63 in 2023, according to Tobia's latest financial disclosure report filed with the state. A current Tobia aide backed up the allegation that staff helped grade papers.
  • Asked Davis to perform personal errands. Davis provided text messages that appeared to indicate that he helped Tobia research airline flights.
  • Asked Davis to surreptitiously obtain information on Brevard County Supervisor of Elections Tim Bobanic's stay in a Washington hotel. Tobia — who cannot seek reelection as a county commissioner this year because of term limits — is challenging Bobanic in an Aug. 20 Republican primary for supervisor of elections, a job that pays $184,356 a year. Davis alleged that Tobia asked him to create an email account that appeared to be Bobanic's in order to obtain those records from the hotel in an effort to determine how the bill was paid and whether taxpayer money was used. The trip was paid for by a federal agency, so Bobanic could discuss election integrity matters.
  • Had his staff research the political party affiliations and other information about people who called Tobia's office, and that calls from Democrats were ignored. Current staff members agreed that they researched callers' voter registration or other information, but said every call was returned regardless.

Tobia reacts to allegations


In response to the allegations and the state attorney's office's decision not to file charges, Tobia told FLORIDA TODAY: "This was a witch-hunt initiated by a disgruntled former employee terminated for cause and perpetuated by a desperate political opponent who believed this was his path to victory, while concealing his past as a registered Democrat."

Davis was hired on March 20, 2023, to the administrative aide position in Tobia's District 3 commission office. Tobia is completing his second four-year term as commissioner for District 3, which covers parts of South Brevard County.

Davis' personnel file shows that Davis submitted a letter of resignation to Tobia, dated June 27, 2023, indicating that his resignation would have been effective on July 12, 2023.

Tobia said he fired Davis because Davis took a list of personal passwords off Tobia's desk at his Palm Bay commission office. A July 3, 2023, letter from Tobia to Davis in Davis' personnel file said: "Due to recent events, your employment is henceforth terminated due to cause, effective immediately."

Tobia also filed a complaint about Davis with the Brevard County Sheriff's Office about the matter. There is no indication of the status of that matter in the state attorney's office file, and a previous FLORIDA TODAY inquiry to a BCSO spokesman was not answered.

According to FDLE investigative reports, Davis has contended that a list of Tobia's work-related and personal passwords and log-ons was part of an "information manual" provided to employees of Tobia's commission office, and that he did not take the passwords illegally. Davis said he maintained possession of the information manual with the belief that it had been given to him.

Responding to Tobia's contention that Bobanic is trying to hide his Democratic past, Bobanic told FLORIDA TODAY that he briefly switched his voter registration from Republican to Democratic in 2012, while he was working at the Hillsborough County Supervisor of Elections Office. Bobanic said that was so he could be eligible to vote in a Democratic primary there for his then-boss, who was running in that primary for supervisor of elections. After that primary was over, Bobanic said he switched his registration back to Republican.

In response to Davis' allegations contained in case file, Tobia said: "I categorically deny these baseless accusations."

Hotel bill controversy

In a document filed June 19 in County Court in Daytona Beach, 7th Judicial Circuit Assistant State Attorney Sarah Thomas said she did not intend to prosecute the case against Tobia on a charge of unlawful possession of personal identification information over Tobia's attempt to get Bobanic's hotel records. No other potential charges were listed in that document.

So far, Thomas has provided no further details in the case file related to her decision.

Bobanic questions the decision not to prosecute.

"It seems to be a clear-cut case of identity theft by Commissioner Tobia and his staff to make up an email impersonating me to try to get information on a hotel bill," Bobanic alleged.

The case file included a series of investigative reports written by staff of the Florida Department of Law Enforcement Office of Executive Investigations.

In one report, the investigator wrote that Davis said Tobia wanted him to obtain the receipts from the Fairmont Hotel in Washington, where Bobanic stayed during a work-related trip, "to determine if Bobanic was lavishly spending taxpayer money." Davis said Tobia asked him to call the hotel to get the information. When the hotel asked for an email request from the customer, Davis said he was instructed by Tobia to create a fake email account, write the email, and make it appear that it was coming from someone named Timothy or Tim.

Bobanic told investigators that his trip to Washington was at the request of the U.S. Election Assistance Committee so he could present on how Brevard County conducts post-election audits, and that it was financed by the U.S. Treasury, so no county taxpayer funds were used.

Two current staff members in Tobia's office also were interviewed by FDLE officials. But Tobia's attorney told an FDLE inspector on Nov. 7 that Tobia would not provide a statement to investigators.

Two FDLE inspectors on Sept. 27 interviewed both Bethany Prasad, chief of staff for Tobia, and Brian Bond, Tobia's legislative aide.

According to FDLE reports, "Prasad acknowledged that Tobia requested Davis to find out how much Bobanic paid" for the hotel room. She described Tobia "as fiscally concerned, saying Tobia believed travel was a frivolous, often-abused item. Prasad admitted she was the person who came up with the email for Davis to send to the hotel."

A Nov. 14 FDLE report indicated that, "based on the evidence gathered, there is probable cause to believe that the email address was created and sent to the Fairmont Hotel to obtain Bobanic's bill. It is also apparent that the author of the email made it appear there was an affiliation to Bobanic."

But investigators said the matter fell short of the requirements for prosecution because there was no use of Bobanic's official state-issued driver's license number, Social Security number or other personal identification information.

Valencia College allegation


According to one FDLE report, Prasad indicated that she assisted Tobia by logging into his Valencia College account, and "has assisted him in grading papers." She said Tobia was "technologically impaired," so she also helped him in setting up his new classes when the college made changes to its online system.

A separate FDLE report said Bond indicated that there were occasions when he assisted Tobia with his Valencia College work, including helping when there was a syllabus change and occasionally helping "cross-reference grades."

When investigators asked Prasad if there was anything illegal, immoral or unethical that she was aware of that had occurred at her office, Prasad "advised that she did not like doing Tobia's personal work, but did not feel that it rose to the level of being unethical," the report said.

Research through webElect

The report on the Bond interview said Bond acknowledged that he used Tobia's account with webElect, a voter registration database, "in a limited capacity for constituent research. According to Bond, he used the program to better understand a voter, to determine their party and to know their affiliations."

Nevertheless, Bond told investigators that "each person who contacts Tobia's office gets a response, regardless," the FDLE report said.

That conflicts with an allegation by Davis that registered Democrats or those not having a high enough "score" were to be ignored.

Prasad also told investigators that she used webElect "to conduct informational research on Tobia's constituents," including "to research/verify if a constituent is in Tobia's district," according to an FDLE report.

Questions about Tobia's home address

The FDLE also investigated whether Tobia actually lived at the Palm Bay address he listed on voting documents as his home address. Davis alleged that Tobia actually lived outside his commission district.

According to an FDLE report, the owner of the Palm Bay home told FDLE inspectors that she lived there with her adult daughter, and that she rented a room to Tobia for $300 a month. She said, while some of Tobia's clothes and other belongings were in the rented room and in a garage at the home, Tobia was there only occasionally.

The FDLE concluded that "probable cause has not been established that a crime has been committed," related to the issue of Tobia's address.

Brevard County Attorney Morris Richardson sat in on the interviews with Bond and Prasad.

According to a report in the case file, Richardson said the county's view is that county officials "typically let commissioners run their offices at their discretion."

"As such, it would be necessary for county administrators to determine if the actions of any of the individuals within the District 3 commissioner's office constitute a violation," including if they are criminal in nature or would be more properly addressed through county disciplinary procedures, the report said.

Dave Berman is business editor at  FLORIDA TODAY.  Contact Berman at  [email protected] , on X at  @bydaveberman  and on Facebook at  www.facebook.com/dave.berman.54


UM Academic presents paper on challenges of language integration at LPP2024

  • In Conferences
  • 08:59, 23 Jul 2024


Dr Phyllisienne Gauci from the Department of Inclusion and Access to Learning within the Faculty of Education presented a paper on the challenges of language integration in Malta at the Multidisciplinary Approaches in Language Policy and Planning Conference (LPP 2024). The conference took place at Carleton University in Ottawa, Canada, between 27 June and 1 July 2024.

The paper delved into the intricate challenges of language integration in Malta, the smallest nation within the European Union, which has witnessed unparalleled migrant population growth over the last decade. The complexity arises from Malta's bilingual status, with Maltese and English serving as official languages, each differing markedly in their functions and domains of usage. In an attempt to address its rapidly changing socio-demographic landscape, Malta initiated the Migrant Integration Strategy & Action Plan in 2017 with the intention of setting up a stronger framework for the integration of migrants who were already working and living in Malta. As part of this initiative, the I Belong program was established, serving as a prerequisite for non-EU nationals aspiring to attain long-term residence status.

The paper presented a scholarly examination, drawing upon data collected from over 350 students who completed the I Belong Stage 2 Maltese language integration program. Through rigorous analysis, this contribution illuminates the efficacy and relevance of the program and its ramifications within the context of Malta's evolving social fabric. By offering empirical insights, this research contributes substantively to the academic understanding of language integration strategies and informs future policy frameworks not only in Malta but also in comparable sociolinguistic contexts. The study was made possible by the University of Malta Research Seed Fund 2023.



Resilience is the process and outcome of successfully adapting to difficult or challenging life experiences, especially through mental, emotional, and behavioral flexibility and adjustment to external and internal demands.

A number of factors contribute to how well people adapt to adversities, including the ways in which individuals view and engage with the world, the availability and quality of social resources, and specific coping strategies.

Psychological research demonstrates that the resources and skills associated with resilience can be cultivated and practiced.

Adapted from the APA Dictionary of Psychology

Resources from APA

  • The Road to Resilience
  • Building your resilience
  • Resilience for teens: 10 tips to build skills on bouncing back from rough times
  • Resilience guide for parents and teachers
  • Motivation Myth Busters
  • Nature Meets Nurture
  • Building Psychological Resilience in Military Personnel
  • The Pain Survival Guide, Rev. Ed.

Magination Press children’s books

  • Rhythm
  • Out of the Fires
  • New Kid, New Scene
  • Lucy's Light
  • Bounce Back

Journal special issues

  • Resilience and Perseverance for Human Flourishing
  • Somos Latinxs
  • Risk and Resilience in Sexual and Gender Minority Relationships
  • Psychological Perspectives on Culture Change
  • Psychiatric Rehabilitation for Veterans

Generative AI Can Harm Learning

59 Pages Posted: 18 Jul 2024

Hamsa Bastani

University of Pennsylvania - The Wharton School

Osbert Bastani

University of Pennsylvania - Department of Computer and Information Science

Özge Kabakcı

Budapest British International School

Rei Mariman

Independent

Date Written: July 15, 2024

Generative artificial intelligence (AI) is poised to revolutionize how humans work, and has already demonstrated promise in significantly improving human productivity. However, a key remaining question is how generative AI affects learning, namely, how humans acquire new skills as they perform tasks. This kind of skill learning is critical to long-term productivity gains, especially in domains where generative AI is fallible and human experts must check its outputs. We study the impact of generative AI, specifically OpenAI's GPT-4, on human learning in the context of math classes at a high school. In a field experiment involving nearly a thousand students, we deployed and evaluated two GPT-based tutors, one that mimics a standard ChatGPT interface (called GPT Base) and one with prompts designed to safeguard learning (called GPT Tutor). These tutors comprise about 15% of the curriculum in each of three grades. Consistent with prior work, our results show that access to GPT-4 significantly improves performance (48% improvement for GPT Base and 127% for GPT Tutor). However, we additionally find that when access is subsequently taken away, students actually perform worse than those who never had access (17% reduction for GPT Base). That is, access to GPT-4 can harm educational outcomes. These negative learning effects are largely mitigated by the safeguards included in GPT Tutor. Our results suggest that students attempt to use GPT-4 as a "crutch" during practice problem sessions, and when successful, perform worse on their own. Thus, to maintain long-term productivity, we must be cautious when deploying generative AI to ensure humans continue to learn critical skills. * HB, OB, and AS contributed equally
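The headline numbers in the abstract are relative changes in mean performance between treatment and control groups. A minimal sketch of that arithmetic follows; the score values below are hypothetical, chosen only so the ratios reproduce two of the reported percentages, and are not data from the paper.

```python
def relative_change(treatment: float, control: float) -> float:
    """Relative change of a treatment group's mean score vs. control."""
    return (treatment - control) / control

# Hypothetical mean scores (not from the paper), picked so the ratios
# mirror two reported effects: +48% with GPT Base during assisted
# practice, -17% on unassisted work after access is removed.
practice = relative_change(74.0, 50.0)   # 0.48
exam = relative_change(49.8, 60.0)       # -0.17

print(f"practice: {practice:+.0%}, exam: {exam:+.0%}")
```

The same two-group comparison underlies each percentage quoted in the abstract; only the outcome measure (assisted practice vs. unassisted evaluation) changes.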

Keywords: Generative AI, Human Capital Development, Education, Human-AI Collaboration, Large Language Models

Suggested Citation: Suggested Citation


Alp Sungu (Contact Author)



Seven graduate students honored with Doctoral Dissertation Fellowships

Photographs of 2024 DDF Recipients

MINNEAPOLIS / ST. PAUL (7/18/2024) – Seven graduate students advised by Department of Chemistry faculty members were recently awarded the University of Minnesota’s Doctoral Dissertation Fellowship. The seven students honored by this prestigious award are Kaylee Barr, Brylon Denman, Madeline Honig, Chris Seong, Sneha Venkatachalapathy, Murphi Williams, and Caini Zheng.

Kaylee Barr , a Chemical Engineering and Materials Science PhD student, is entering her fifth year in the Reineke Group . Before making the move to Minnesota, she received her BS in Chemical Engineering from the University of Kansas. “I came to the University of Minnesota because of the department's developments in polymer science, and because I was interested in the intersection of polymer science and drug delivery in Theresa Reineke's lab,” she says. Here at UMN, Kaylee studies how bottlebrush polymer architecture affects pH-responsive oral drug delivery. This summer, she is excited to grow professionally and as a scientist in an intern position at Genentech.

Brylon Denman is a Chemistry PhD candidate in the Roberts Group . She joined the UMN community in 2020 after completing her BS in Biochemistry at St. Louis University. “My research in the Roberts group seeks to resolve regioselectivity and reactivity issues within aryne methodology via ligand control,” Brylon says. “To accomplish this task, I have taken a mechanistic and hypothesis driven approach to understand how key molecular parameters modify regioselectivity and reactivity. I hope to use the knowledge I have gained from these studies to both improve the synthetic utility of aryne intermediates, and improve the sustainability of aryne reactions.” Brylon is also passionate about sustainable and green chemistry. As a founding member of the Sustainable and Green Chemistry committee, Brylon strives to collaborate with other department teammates to strengthen the culture of green and sustainable chemistry through integration into teaching, research, and community engagement. “In my career I aim to continue this advocacy and use my breadth of knowledge to enact sustainable change at a major pharmaceutical company as emphasizing sustainability on such a large scale can lead to a large impact,” she says. As she works through her internship at AbbVie this summer, Brylon is looking towards the future to outline her next steps after graduation.

Madeline Honig first experienced Chemistry at UMN during a summer REU experience in the Bühlmann Lab . She formally joined the Prof. Bühlmann's team in Fall 2020 after earning her BA in Chemistry from Earlham College. Her research here at UMN  has focused on the development and improved understanding of polymeric membrane-based ion-selective electrodes (ISEs). “One of my projects involves developing a quantitative parameter to better define the upper detection limits of these sensors which can be used to more accurately define sensor performance and predict the working range under different conditions,” Madeline says. “This research led us to investigate the unexplained 'super-Nernstian' responses of some pH-selective electrodes and expand the phase boundary model (the quantitative model that predicts ISE behavior) to include the formation of complexes between protonated ionophores and counter-ions in the sensing membrane. ISEs have been widely used for decades in clinical blood analysis among many other applications so it's exciting that I was still able to add to our fundamental understanding of how these sensors function.” One of Madeline’s goals is to use her research to enable the development of improved sensors that can be used in a wider range of conditions. Over the course of her graduate studies, Madeline has had the opportunity to be a graduate student mentor for two other students: Ariki Haba, a visiting master's student from Japan, and Katie O'Leary, a summer REU student, who both made significant contributions to the project. “Acting as a graduate mentor was really cool and I hope I can also make graduate-level chemistry research more approachable for everyone that I work with,” Madeline says. For her significant research efforts, Madeline was also recently selected in a national competition as one of the four winners of the 2024 Eastern Analytical Symposium Graduate Student Research Award. 
She will accept the award in November in Plainsboro NJ at the Eastern Analytical Symposium.
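For context on the "Nernstian" terminology above: an ideal ion-selective electrode follows the Nernst equation, whose slope at 25 °C is about 59.2 mV per decade of ion activity for a monovalent ion such as H+, and responses steeper than this are called super-Nernstian. A quick sketch of the ideal-slope calculation from standard physical constants (illustrative only, not code from the research):

```python
import math

R = 8.314462618   # gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol
T = 298.15        # 25 degC in kelvin

def nernst_slope_mv(z: int) -> float:
    """Ideal (Nernstian) electrode slope in mV per decade of ion activity."""
    return 1000.0 * math.log(10) * R * T / (z * F)

print(f"monovalent ion: {nernst_slope_mv(1):.1f} mV/decade")  # ~59.2
```

A pH electrode reporting substantially more than this slope per decade of hydrogen-ion activity is exhibiting the "super-Nernstian" behavior the expanded phase boundary model seeks to explain.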

Chris Seong , an international student from New Zealand and PhD candidate in the Roberts Group, came to UMN after completing his BA with Distinction in Chemistry at St. Olaf College in 2020. Chris’ overarching chemistry interests involve the development of methods to utilize naturally abundant carboxylic acids as feedstock to synthesize medicinally relevant products, which are traditionally made with non-renewable starting materials derived from fossil fuels. “My earlier work has been focused on making alkyl-alkyl bonds through decarboxylation, but lately, in true Roberts Group fashion, I have turned my attention to using a similar mechanism to do aryne chemistry,” Chris says. He is currently working to publish a paper on the aryne project that he has been working on with two talented group mates; Sal Kargbo and Felicia Yu. “I am really excited to share this cool chemistry with the world,” he says. Outside of the lab, Chris is working on expanding his network to apply for jobs in the pharmaceutical industry – specifically in the early process space.

Sneha Venkatachalapathy is a member of the Distefano Group and an international student from India. She completed her BS in Chemistry with a minor degree in Biotechnology from Shiv Nadar University, Greater Noida, India in 2020. “Chemistry has always been my passion since high school. I still remember my first successful brown ring test that has left a remarkable fascination and interest towards chemistry,” Sneha says. “This early fascination has driven my academic journey, guided by mentors like Dr. Subhabrata Sen, who encouraged me to pursue a PhD in the United States.” Sneha was drawn towards working in the Chemical Biology research field where she could directly contribute to developing human life. “Joining Dr. Mark Distefano’s lab at UMN provided me with the chance to collaborate with Dr. Mohammad Rashidian from Dana Farber Cancer Institute. Together, we work towards expanding the scope of protein prenylation to construct protein-based cancer diagnostic tools,” she says. Sneha’s goal for her time in the UMN PhD program is to create innovative protein-based tools for cancer detection and treatment, aiming to enhance patient’s quality of life. She says she is looking forward to continuing to develop her leadership skills as she continues her doctorate, and is also exploring future opportunities beyond UMN. “One thing that motivates me daily is the belief that my research contributions to the scientific community would enhance our understanding of cancer diagnostic methods, ultimately leading to improved patient outcomes worldwide,” she says.

Murphi Williams completed her undergraduate studies at the University of Wisconsin-Eau Claire, then joined the Bhagi-Damodaran Group at UMN in 2020. When it comes to research, Murphi is interested in chemical biology, more specifically, looking into proteins involved in important biological problems. "One of my major projects is developing and characterizing a potential inhibitor for Mycobacterium tuberculosis, the bacteria that causes tuberculosis," Murphi says. "Tuberculosis is the leading infectious disease so my projects center on understanding and inhibiting heme proteins important for the bacteria. Specifically, a previous lab member identified a small molecule that I've been characterizing the activity of in cells." Her current research goal is to express and purify the protein targets for her small molecule inhibitor in the lab to further demonstrate the in vitro activity. She is also contemplating a future career in science communication. Outside of the lab, she enjoys working on her garden.

Caini Zheng joined the Department of Chemistry at UMN in 2019 after finishing her undergraduate studies at Shanghai Jiao Tong University. She is currently a sixth-year graduate student co-advised by Profs. Tim Lodge and Ilja Siepmann. Her research focuses on the phase behavior of soft materials, including polymers and oligomers. Her DDF statement is titled "Self-Assembly of Polymers and Amphiphiles into Bicontinuous Phases". Caini is currently working on a project to elucidate the self-assembly of glycolipids through molecular dynamics simulations coupled with machine learning methods. In the future, she wants to work in industry on bridging data science with traditional materials research.


COMMENTS

  1. English as a Second Language (ESL) Learning: Setting the Right

    In English classes, teacher-student interactions in English are essential for fostering the students' intrinsic motivation and understanding of the importance of mastering the English language ...

  2. EFL/ESL Articles from The Internet TESL Journal

    The Development and Maintenance of The Internet TESL Journal. By Charles I. Kelly and Lawrence E. Kelly. Overcoming the Fear of Using Drama in English Language Teaching. By Judith Gray Royka. Virtual Reality for ESL Students. By Hee-Jung Jung. Integrating English into an Elementary School Life Course.

  3. Writing a Research Paper

    Library resources and information for students of English as a Second Language. Skip to Main Content. Library; LibGuides; ESL (English as a Second Language) Research Guide; Writing a Research Paper; ... Research Papers, Reports, Theses by Slade. Call Number: LB2369 .C3 1999. ISBN: 039595827X. Publication Date: 1999. Research Paper Fundamentals ...

  4. PDF A Review of the Literature on English as a Second Language (ESL ...

    A Review of the Literature on ESL Issues / v ©Alberta Education, Alberta, Canada 2008 Preamble The Language Research Centre at the University of Calgary was contracted by Alberta Education to produce an annotated bibliography on diverse aspects of education related to English as a Second Language (ESL) students.

  5. A review of recent research in EFL motivation: Research trends

    1. Introduction. 1.1. Importance of motivation to learn EFL/ESL in the current world. Motivation to learn English as a second or foreign language (ESL/EFL) is an ever-quickening pulse in a world where English is the lingua franca (ELF) and where effective communication in global trading, policy, and diplomacy is crucial (Woodrow, 2016). However, national and international scenarios are increasing ...

  6. (PDF) Language Learning Strategies Used by ESL Students in Enhancing

    PDF | On Dec 4, 2022, Nurul Intan Hipni Helen Kenol and others published Language Learning Strategies Used by ESL Students in Enhancing English Proficiency: A Systematic Review (2013-2022) | Find ...

  7. A review of previous studies on ESL/EFL learners' interactional

    This paper is a review of previous studies on learners' interactional feedback exchanges in face-to-face peer review (FFPR) and computer-assisted peer review (CAPR) of English as Second/Foreign Language (ESL/EFL) writing. The review attempted to (1) identify the patterns of interactional feedback, (2) search an empirical evidence of learners' incorporation of peer interactional feedback in ...

  8. (PDF) READING PRACTICES OF ESL LEARNERS: LITERACY ...

    This paper sheds light on the reading practices of English as a Second Language (ESL) learners. It aimed at identifying some of their basic reading ...

  9. Internet TESL Journal (For ESL/EFL Teachers)

    The Internet TESL Journal is a free online journal for teachers of English as a second language that includes lesson plans, classroom handouts, links of interest to ESL teachers and students, articles, research papers and other things that are of immediate practical use to ESL teachers.

  10. ESL Topics for a Research Paper

    The experience of teaching ESL varies greatly from country to country. Write a research paper comparing the salary, hours, case studies and living aspects of ESL teachers from three to five different locations. Or, write a persuasive essay comparing the details of teaching or studying ESL in the United States to teaching or studying ESL overseas.

  11. Teaching ESL Students to Read and Write Experimental‐Research Papers

    This paper presents both a discussion of experimental-research paper organization and a method for teaching reading and writing of experimental-research articles to ESL students. ESL teachers are advised to teach students to analyze the reading purpose first, and then to select a reading strategy to meet that purpose.

  12. A review of the ESL/EFL learners' gains from online peer feedback on

    Peer feedback is essential in writing English as a Second/Foreign Language (ESL/EFL). Traditionally, offline peer feedback was more widely favored, but nowadays online peer feedback (OPF) has become frequent in ESL/EFL learners' daily writing. This study is undertaken to probe into the gains of using OPF in ESL/EFL writing on the basis of 37 research ...

  13. PDF ESL Student Identity and the Multigenre Research Project

    This article proposes to incorporate Freirean philosophy with a technique used in academic (monolingual) English courses: the multigenre research paper. When applying this technique in the ESL classroom, it is crucial to also use global and international texts of various genres in order to support students' cultural identities and also ...

  15. Evidence-based reading interventions for English language learners: A

    Reid T., Heck R.H. Exploring reading achievement differences between elementary school students receiving and not receiving English language services. AERA Online Paper Repository, 2017. Richards-Tutor C., Baker D.L., Gersten R., Baker S.K., Smith J.M. The effectiveness of reading interventions for English learners: a research ...

  16. Research Skills

    Use this guide to find and evaluate information and resources related to learning English as a Second Language (ESL) or English for non-native speakers. It also includes resources for teachers of ESL. ... Research skills develop critical thinking and equip you to write better research papers and craft better speeches. You will also improve ...

  17. PDF ESL learners' online research and comprehension strategies

    These 74 students were second and third semester ESL students from various courses. The instrument developed for this study is the Online Research And Comprehension Strategies (ORACS) survey. The ORACS survey consisted of 37 items. Sixteen items were based on the five processes involved in online research and comprehension activities suggested ...

  18. Impact of ChatGPT on ESL students' academic writing skills: a mixed

    This paper presents a study on the impact of ChatGPT as a formative feedback tool on the writing skills of undergraduate ESL students. Since artificial intelligence-driven automated writing evaluation tools positively impact students' writing, ChatGPT, a generative artificial intelligence-propelled tool, can be expected to have a more substantial positive impact.

  19. (PDF) Use of I-Ready in Tiered Instructions for ESL Students: A

    Use of I-Ready in Tiered Instructions for ESL Students: A Quantitative Case Study. There was a 5% growth in the PH domain by the end of the year.

  20. Science, social studies classes can help young English-learning

    The paper is published in the Journal of Educational Psychology. A new study finds that science and social studies classes may also help young students learn English, even when those classes ...

  27. PDF Developing Research Paper Writing Programs for EFL/ESL Undergraduate

    This paper is therefore believed to make a great contribution to practical applications for RPW program developers, lecturers, and undergraduate and postgraduate students in EFL/ESL contexts. Keywords: EFL/ESL undergraduate students, Research Paper Writing (RPW) program, Process Genre Approach (PGA).

  29. (PDF) Strategies to Improve English Vocabulary and ...

    Future research should focus on the CCC strategy application to improve vocabulary skills for ESL students who also have LD.
