• Research article
  • Open access
  • Published: 26 October 2021

The impact of artificial intelligence on learner–instructor interaction in online learning

  • Kyoungwon Seo   ORCID: orcid.org/0000-0003-3435-0685 1 ,
  • Joice Tang 2 ,
  • Ido Roll 3 ,
  • Sidney Fels 4 &
  • Dongwook Yoon 2  

International Journal of Educational Technology in Higher Education, volume 18, Article number: 54 (2021)


Abstract

Artificial intelligence (AI) systems offer effective support for online learning and teaching, including personalizing learning for students, automating instructors’ routine tasks, and powering adaptive assessments. However, while the opportunities for AI are promising, its impact on the culture of, norms in, and expectations about interactions between students and instructors is still elusive. In online learning, learner–instructor interaction (inter alia, communication, support, and presence) has a profound impact on students’ satisfaction and learning outcomes. Identifying how students and instructors perceive the impact of AI systems on their interaction is therefore important for revealing any gaps, challenges, or barriers that prevent AI systems from achieving their intended potential or that risk the safety of these interactions. To address this need for forward-looking decisions, we used Speed Dating with storyboards to analyze the authentic voices of 12 students and 11 instructors on diverse use cases of possible AI systems in online learning. Findings show that participants envision that adopting AI systems in online learning can enable personalized learner–instructor interaction at scale, but at the risk of violating social boundaries. Although AI systems were positively recognized for improving the quantity and quality of communication, for providing just-in-time, personalized support in large-scale settings, and for improving the feeling of connection, there were concerns about responsibility, agency, and surveillance issues. These findings have implications for designing AI systems that ensure explainability, human-in-the-loop operation, and careful data collection and presentation. Overall, the contributions of this study include the design of AI system storyboards that are technically feasible and positively support learner–instructor interaction, the capture of students’ and instructors’ concerns about AI systems through Speed Dating, and practical implications for maximizing the positive impact of AI systems while minimizing the negative ones.

Introduction

The opportunities for artificial intelligence (AI) in online learning and teaching are broad (Anderson et al., 1985; Baker, 2016; Roll et al., 2018; Seo et al., 2020b; VanLehn, 2011), ranging from personalized learning for students and automation of instructors’ routine tasks to AI-powered assessments (Popenici & Kerr, 2017). For example, AI tutoring systems can provide personalized guidance, support, or feedback by tailoring learning content to student-specific learning patterns or knowledge levels (Hwang et al., 2020). AI teaching assistants help instructors save time answering students’ simple, repetitive questions in online discussion forums, allowing instructors to dedicate the saved time to higher-value work (Goel & Polepeddi, 2016). AI analytics allows instructors to understand students’ performance, progress, and potential by interpreting their clickstream data (Roll & Winne, 2015; Fong et al., 2019; Seo et al., 2021; Holstein et al., 2018).

While the opportunities for AI are promising, students and instructors may perceive the impact of AI systems negatively. For instance, students may perceive the indiscriminate collection and analysis of their data through AI systems as a privacy breach, as illustrated by the Facebook–Cambridge Analytica data scandal (Chan, 2019; Luckin, 2017). The behavior of AI agents that do not account for the risks of data bias or algorithmic bias can be perceived by students as discriminatory (Crawford & Calo, 2016; Murphy, 2019). Instructors worry that relying too heavily on AI systems might compromise students’ ability to learn independently, solve problems creatively, and think critically (Wogu et al., 2018). It is therefore important to examine how students and instructors perceive the impact of AI systems in online learning environments (Cruz-Benito et al., 2019).

The AI in Education (AIEd) community is increasingly exploring the impact of AI systems in online education. For example, Roll and Wylie (2016) call for more involvement of AI systems in the communication between students and instructors, and in educational applications outside the school context. Zawacki-Richter and colleagues (2019) conducted a systematic review of AIEd publications from 2007 to 2018 and found a lack of critical reflection on the ethical impact and risks of AI systems for learner–instructor interaction. Popenici and Kerr (2017) investigated the impact of AI systems on learning and teaching, and uncovered potential conflicts between students and instructors, such as privacy concerns, changes in power structures, and excessive control. All of these studies called for more research into the impact of AI systems on learner–instructor interaction, to help identify any gaps, issues, or barriers preventing AI systems from achieving their intended potential.

Indeed, learner–instructor interaction plays a crucial role in online learning. Kang and Im (2013) demonstrated that factors of learner–instructor interaction, such as communication, support, and presence, improve students’ satisfaction and learning outcomes. Learner–instructor interaction further affects students’ self-esteem, motivation to learn, and confidence in facing new challenges (Laura & Chapman, 2009). Less is known, however, about how introducing AI systems into online learning will affect learner–instructor interaction. Guilherme (2019, p. 7) predicted that AI systems would have “a deep impact in the classroom, changing the relationship between teacher and student.” More work is needed to understand how and why various forms of AI systems affect learner–instructor interaction in online learning (Felix, 2020).

Considering the findings in the literature and the areas for further research, the present study aimed to identify how students and instructors perceive the impact of AI systems on learner–instructor interaction in online learning. To this end, we used Speed Dating, a design method that allows participants to quickly interact with and experience the concepts and contextual dimensions of multiple AI systems without any technical implementation (Davidoff et al., 2007 ). In Speed Dating, participants are presented with various hypothetical scenarios via storyboards while researchers conduct interviews to understand the participants’ immediate reactions (Zimmerman & Forlizzi, 2017 ). These interviews provided rich opportunities to understand the way students and instructors perceive the impact of AI systems on learner–instructor interaction and the boundaries beyond which AI systems are perceived as “invasive.”

The study offers several unique contributions. First, as part of the method, we designed storyboards that can be used to facilitate further research on AI implications for online learning. Second, the study documents the main promises and concerns of AI in online learning, as perceived by both students and instructors in higher education. Last, we identify practical implications for the design of AI-based systems in online learning. These include emphases on explainability, human-in-the-loop, and careful data collection and presentation.

This paper is organized as follows. The next section provides the theoretical framework and background for this research by describing the main aspects of learner–instructor interaction and of AI systems in education. The “Materials and methods” section presents the methodological approach followed in this research and describes the storyboards used to collect data, the participants, the study procedure, and the qualitative analysis performed. The “Findings” section presents the results obtained and the main findings related to the research question. Finally, the “Discussion and conclusion” section provides an overview of the study’s conclusions, limitations, and future research.

This paper explores the impact of AI systems on learner–instructor interaction in online learning. We first propose a theoretical framework based on studies of learner–instructor interaction in online learning, and then review the AI systems currently in use in online learning environments.

Theoretical framework

Interaction is paramount for successful online learning (Banna et al., 2015; Nguyen et al., 2018). Students exchange information and knowledge through interaction and construct new knowledge from this process (Jou et al., 2016). Moore (1989) classified interactions in online learning into three types: learner–content, learner–learner, and learner–instructor. These interactions help students become active and more engaged in their online courses (Seo et al., 2021; Martin et al., 2018), and in doing so strengthen their sense of community, which is essential for the continued use of online learning platforms (Luo et al., 2017).

Martin and Bolliger ( 2018 ) found that the learner–instructor interaction is the most important among Moore’s three types of interactions. Instructors can improve student engagement and learning by providing a variety of communication channels, support, encouragement, and timely feedback (Martin et al., 2018 ). Instructors can also enhance students’ sense of community by engaging and guiding online discussions (Shackelford & Maxwell, 2012 ; Zhang et al., 2018 ). Collectively, learner–instructor interaction has a significant impact on students’ satisfaction and achievement in online learning (Andersen, 2013 ; Kang & Im, 2013 ; Walker, 2016 ).

The five-factor model of learner–instructor interaction offers a useful lens for interpreting interactions between students and the instructor in online learning (see Table 1; Kang, 2010). Robinson et al. (2017) found that communication and support are key factors of learner–instructor interaction for designing meaningful online collaborative learning. Richardson et al. (2017) added that perceived presence during learner–instructor interaction positively influences student motivation, satisfaction, learning, and retention in online courses. Kang and Im (2013) synthesized these findings by showing that communication, support, and presence are the three factors that contribute most to students’ achievement and satisfaction. Thus, in this study, we focused on communication, support, and presence between students and instructors.

AI systems are likely to affect the way learner–instructor interaction occurs in online learning environments (Guilherme, 2019). If students and instructors have strong concerns about the impact of AI systems on their interactions, they will not use such systems, in spite of the perceived benefits (Felix, 2020). To the best of our knowledge, few empirical studies have examined the impact of AI systems on learner–instructor interaction, and Misiejuk and Wasson (2017) have called for more work in this area.

Artificial intelligence in online learning

There are a variety of AI systems that are expected to affect learner–instructor interaction in online learning. For example, Goel and Polepeddi (2016) developed an AI teaching assistant named Jill Watson to augment the instructor’s communication with students by autonomously responding to student introductions, posting weekly announcements, and answering routine, frequently asked questions. Perin and Lauterbach (2018) developed an AI scoring system that allows faster communication of grades between students and the instructor. Luckin (2017) described AI systems that support both students and instructors by providing constant feedback on how students learn and the progress they are making towards their learning goals. Ross et al. (2018) developed online adaptive quizzes that support students by providing learning content tailored to each student’s individual needs, which improved student motivation and engagement. Heidicker et al. (2017) showed that virtual avatars allow several physically separated users to collaborate in an immersive virtual environment by increasing the sense of presence. Aslan and her colleagues (2019) developed AI facial analytics to improve instructors’ presence as coaches in technology-mediated learning environments. Across these AI systems, in-depth insight into how students and instructors perceive their impact is important (Zawacki-Richter et al., 2019).
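To make the idea of adaptive quizzing concrete, the sketch below shows one minimal way such a system could select the next question from the topic a student has mastered least. This is an illustrative assumption on our part, not the system of Ross et al. (2018); the mastery scores, topic names, item bank, and update rule are all hypothetical.

```python
import random

# Hypothetical per-topic mastery estimates for one student (0 = none, 1 = full).
mastery = {"loops": 0.4, "recursion": 0.2, "sorting": 0.7}

# Hypothetical item bank: each topic maps to a pool of question prompts.
item_bank = {
    "loops": ["Trace this for-loop.", "Rewrite this while-loop as a for-loop."],
    "recursion": ["Identify the base case.", "Unfold this call tree by hand."],
    "sorting": ["Sort this list step by step.", "Compare merge sort and quicksort."],
}

def next_question():
    """Pick a question from the topic with the lowest mastery estimate."""
    weakest = min(mastery, key=mastery.get)
    return weakest, random.choice(item_bank[weakest])

def record_answer(topic, correct, rate=0.1):
    """Nudge the mastery estimate toward 1 (correct) or 0 (incorrect)."""
    target = 1.0 if correct else 0.0
    mastery[topic] += rate * (target - mastery[topic])

topic, question = next_question()      # selects "recursion", the weakest topic
record_answer(topic, correct=False)    # mastery["recursion"] drifts further down
```

In a deployed system the toy update rule would typically be replaced by a calibrated learner model (e.g., Bayesian knowledge tracing), but the select-weakest loop is the core of the tailoring described above.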

The recent introduction of commercial AI systems for online learning has demonstrated the complex impact of AI on learner–instructor interaction. For instance, Proctorio (Proctorio Inc., USA), a system that aims to prevent cheating by monitoring students and their computer screens during an exam, seems like a fool-proof way to monitor students in online learning, but students complain that it increases their test-taking anxiety (McArthur, 2020). The idea of being recorded by Proctorio distracts students and creates an uncomfortable test-taking atmosphere. In a similar vein, although Squirrel AI (Squirrel AI Learning Inc., China) aims to provide adaptive learning by adjusting itself automatically to the best method for an individual student, there is a risk that this might restrict students’ creative learning (Beard, 2020). These environments have one thing in common: unlike educational technologies that merely mediate interactions between instructors and students, AI systems have more autonomy in the way they interpret data, infer learning, and, at times, make instructional decisions.

In what follows, we describe Speed Dating with storyboards, an exploratory research method that allows participants to quickly experience different forms of AI systems possible in the near future, which we used to examine the impact of those systems on learner–instructor interaction (“Materials and methods”). Findings offer new insights into students’ and instructors’ boundaries, such as when AI systems are perceived as “invasive” (“Findings”). Lastly, we discuss how our findings provide implications for future AI systems in online learning (“Discussion and conclusion”).

Materials and methods

The goal of this study is to gain insight into students’ and instructors’ perceptions of the impact of AI systems on learner–instructor interaction (inter alia, communication, support, and presence; Kang & Im, 2013) in online learning. The study was conducted amid the COVID-19 pandemic, when students and instructors had heightened awareness of the importance of online learning and fresh experiences from recent online courses. Our aim was not to evaluate specific AI technologies, but instead to explore areas where AI systems positively contribute to learner–instructor interaction and areas where more attention is required.

We used Speed Dating with storyboards, an exploratory research method in which participants experience a number of possible AI systems in the form of storyboards, to prompt participants to critically reflect on the implications of each AI area (Zimmerman & Forlizzi, 2017). Exposure to multiple potential AI areas that are likely to be available in the future helps participants shape their own perspectives and evaluate the AI systems in a more nuanced way (Luria et al., 2020). We first created a set of eleven four-cut storyboards covering comprehensive and diverse use cases of possible AI systems in online learning (see the “Creating storyboards” section), and then used these storyboards to conduct Speed Dating with student and instructor participants (see the “Speed dating” section). Overall, we address the following research question:

How do students and instructors perceive the impact of AI systems on learner–instructor interaction (inter alia, communication, support, and presence) in online learning?

Creating storyboards

To create AI system storyboards which are technically feasible and positively support learner–instructor interaction, we ran an online brainwriting activity (Linsey & Becker, 2011 ) in which we asked a team of designers to come up with scenarios about possible AI systems in online learning. We recruited six designers from our lab (four faculty members and two PhD candidates) with an average of 15.4 years (SD = 4.7 years) of design experience in human–computer interaction (HCI). Each team member wrote down scenarios using a Google Slides file and then passed it on to another team member. This process was repeated four times until all designers agreed that the scenarios of AI systems were technically feasible and supported learner–instructor interaction in online learning.

Because the initial scenarios were made by HCI designers, we conducted additional interviews with six AI experts, with an average of 10.8 years (SD = 7.8 years) of research experience and 8 years (SD = 6.2 years) of teaching experience, to validate the scenarios’ technical feasibility and positive impact on learner–instructor interaction (see Appendix A, Table 7, for details). The first two authors conducted semi-structured interviews with the AI experts using a video conferencing platform (i.e., Zoom). We showed each scenario to the AI experts and asked the following questions: “Can you improve this scenario to make it technically feasible?” and “Can you improve this scenario to have a positive impact on learner–instructor interaction based on your own online teaching experience?” After showing all the scenarios, we asked: “Do you have any research ideas that can be used as a new scenario?” The scenarios were modified to reflect the opinions of the AI experts and the AIEd literature. The interviews lasted around 41 min on average (SD = 7.3 min). Each AI expert was compensated 100 Canadian dollars for their time. The process was cleared by the Research Ethics Board.

As shown in Table 2, we ended up with 11 scenarios which support learner–instructor interaction (i.e., communication, support, and presence) in online learning. Scenarios were categorized by the first two authors with reference to the learner–instructor interaction factors as defined in Table 1 (see the “Theoretical framework” section). For example, although the AI Teaching or Grading Assistant scenarios could be considered systems of support for the instructor, “support” within the learner–instructor interaction framework refers to support for the student. Therefore, since the scenarios illustrate increased or expedited communication between students and instructors rather than direct support for students, the AI Teaching and Grading Assistant scenarios are categorized as systems for communication. We note that these categories are not definitive, and scenarios may have overlapping aspects of several learner–instructor interaction factors. However, the final categories in Table 2 refer to the factors that best define the respective scenarios.

Seven scenarios (Scenarios 1, 3, 5, 6, 8, 9, and 11) reflect the state-of-the-art AI systems identified in the “Artificial intelligence in online learning” section. The remaining four scenarios were created based on research ideas from the AI experts: AI Companion (Scenario 2), AI Peer Review (Scenario 4), AI Group Project Organizer (Scenario 7), and AI Breakout Room Matching (Scenario 10). These 11 final scenarios were not meant to exhaust all AI systems in online learning or to systematically address all topics, but rather to probe a range of situations that shed light on the realities that present themselves when AI systems are used in online learning.

We generated four-cut storyboards based on the scenarios in Table 2. Figure 1 shows an illustrated example of a storyboard detailing the scenario through captions. We stylized the characters in a single visual style and as flat cartoon shades in order to reduce gender and ethnic cues and enable participants to put themselves in the shoes of the characters in each storyboard (Truong et al., 2006; Zimmerman & Forlizzi, 2017). The full set of storyboards can be viewed at https://osf.io/3aj5v/?view_only=bc5fa97e6f7d46fdb66872588ff1e22e.

Figure 1. A storyboard example of Scenario 8, Adaptive Quiz, in Table 2.

Speed dating

Participants

Next, we conducted a Speed Dating activity with the storyboards. We recruited 12 students (see Table 3) and 11 instructors (see Table 4). For diversity, the students came from 11 different majors and the instructors taught nine different subjects. Students and instructors had a minimum of three months of online learning or teaching experience due to the COVID-19 pandemic. Overall, students had at least one year of university experience and instructors had at least three years of teaching experience. We required students and instructors to have online learning and teaching experience, respectively, so as to control for the expected and experienced norms of learner–instructor interaction within online university classes. Conversely, we did not require participants to have knowledge of AI systems, as we wanted their perspective on the intended human–AI interactions and their potential effects as illustrated. Previous studies showed that Speed Dating works well without any prior knowledge of or experience with AI systems, so no special knowledge or experience was required to participate in this study (Luria et al., 2020; Zimmerman & Forlizzi, 2017). Each participant was compensated with 25 Canadian dollars for their time.

Procedure

We conducted semi-structured interviews with participants using a video conferencing platform (i.e., Zoom). We designed the interview questions to capture how the participants perceive the AI systems illustrated in the storyboards (see Appendix B). Participants read each of the storyboards aloud and then expressed their perceptions of AI in online learning. Specifically, we asked participants to critically reflect on how incorporating the AI system into an online course would affect learner–instructor interaction and whether they would like to experience its effect. We also asked them to choose the AI systems that would work well and those that would not, to capture their holistic point of view regarding perceived affordances and drawbacks. The interviews lasted around 50.9 min on average (SD = 10.7 min), with 3–5 min spent sharing each storyboard and probing participants on its specific implications.

Data analysis

Each interview was audio recorded and transcribed for analysis. We used a Reflexive Thematic Analysis approach (Braun & Clarke, 2006; Nowell et al., 2017). After a period of familiarization with the interview data, the first two authors began by generating inductive codes, with an initial round of semantic codes related to intriguing statements or phrases in the data. The two authors then coded each transcript by highlighting and commenting on data items in Google Docs, independently identifying patterns that arose through extended examination of the dataset. Any conflicts regarding such themes were resolved through discussion between the two authors. Later, through a deductive approach guided by the learner–instructor interaction factors adapted from Kang and Im (2013), data were coded and collated into themes in a separate document. An example of our codes can be viewed at https://osf.io/3aj5v/?view_only=bc5fa97e6f7d46fdb66872588ff1e22e. We then held three iterative discussions with all five authors present, which yielded recurrent topics and themes by organizing the data around significant thematic units; the final six major themes were derived from twelve codes. The themes, which describe the impact of AI systems, were as follows: (1) Quantity and Quality, (2) Responsibility, (3) Just-in-time Support, (4) Agency, (5) Connection, and (6) Surveillance. The findings below are presented according to these themes.

Findings

The central theme of participants’ responses, which stood out repeatedly in our study, was that adopting AI systems in online learning can enable personalized learner–instructor interaction at scale, but at the risk of violating social boundaries. Participants were concerned that AI systems could create responsibility, agency, and surveillance issues in online learning if they violated social boundaries in each factor of learner–instructor interaction (i.e., communication, support, and presence). Table 5 summarizes the perceived benefits and concerns of students and instructors about the impact of AI systems on learner–instructor interaction, noted with (+) and (−) respectively. Each quote indicates whether the response came from a student (“S”) or an instructor (“I”).

Communication

In online learning environments, communication refers to questions and answers between students and the instructor about topics directly related to learning content, such as instructional materials, assignments, discussions, and exams (Kang & Im, 2013). Students and instructors expect that AI systems will positively impact the quantity and quality of communication between them, but that they bear the risk of causing miscommunication and responsibility issues, as described below.

Quantity and quality

Students believe that the anonymity afforded by AI would make them less self-conscious and, as a result, allow them to ask more questions . In online learning environments, students are generally afraid to ask questions to their instructors during class, primarily because they “worry that someone already asked it” (S4) or “don't want to seem dumb by instructors or peers” (S10). Students perceive that the anonymity afforded by both an AI Teaching Assistant (Scenario 1) and an AI Companion (Scenario 2) would make them “less afraid to ask questions” (S10), mean they “wouldn't feel bad about wasting the professor's time” (S11), and be “less distracting to class” (S12). Bluntly put, participant S11 stated: “If it’s a dumb question, I’ve got an AI to handle it for me. The AI won't judge me. The AI is not thinking like, wow, what an idiot.” S5 expanded on this idea, mentioning that asking questions to an AI removes the self-consciousness that typically exists in instructional communications: “… you don’t feel like you’re bothering a person by asking the questions. You can’t really irritate an AI, so you can ask as many as you need to.” As a result, all 12 students answered that AI systems would nudge them to ask more questions in online learning.

Instructors believe that AI could help answer simple, repetitive questions, which would allow them to focus on more meaningful communication with students . Answering repetitive questions from students takes a huge amount of time (I11). Instructors reflected that the time saved from tedious tasks, such as answering administrative questions, could allow course teams to focus on more content-based questions (I10). Because an AI Teaching Assistant (Scenario 1) answers students’ repetitive questions and AI Grading Assistance (Scenario 3) and AI Peer Review (Scenario 4) enable fast feedback loops, instructors can communicate more meaningfully with students by helping to “focus more on new questions” (I6) or “use their time for more comprehensive or more irregular questions” (I4). As well-stated by I10: “I think it allows us time to have conversations that are more meaningful… in some ways you're choosing quality over quantity. The more time I have, the more time, I can do things like answer emails or answer things on Piazza, things that actually will communicate with the student.”

Responsibility

Although students believe AI systems would improve the quantity and quality of instructional communication, they worry that AI could give unreliable answers and negatively impact their grades . For example, S4 worried that “I just want to make sure it’s a really reliable source, because if the AI is answering questions from students, and then they’re going to apply that answer to the way they do their work in the future, and it might be marked wrong. Then it's hard to go to the instructor and say, oh, this answer was what was given to me, but you said it was wrong.” Most students (10 out of 12) felt that the lack of explainability would make the AI hard to blame, despite the fact that it may hold a position of responsibility in some situations, such as answering questions whose answers should be treated as truth. S9 said that “Whereas with AI and just intelligent systems that you don't fully understand the back end to in a sense, it’s harder to decipher the reasoning behind the answer or why they gave that answer.” In particular, students are concerned about how instructors would react if something went wrong because they trusted the AI. S11 expects that “I can see a lot of my fellow engineering students finding more room to argue for their marks. I can see people not being as willing to accept their fate with this kind of system.”

Instructors predicted conflicts between students and the instructor due to AI-based misunderstandings or misleading answers . For example, a conflict could arise from potential discrepancies between answers from the AI, the instructor, and human TAs. As expressed by I4, “Students will argue that, oh AI is wrong. I demand a better assessment right? So, you can say that easily for the AI. But for the authoritative figure like TA and instructor, maybe it's hard to do that.” Similarly, I6 argued that a conflict could stem from the opposite direction: “If an AI gives students a great suggestion, if the instructor and TA decided to regrade, it would just be a lot of trouble.” Several instructors (five out of 11) also worried about conflicts that could arise from the quality of the response. I1 said that “The concern is the quality of the response, given that there can be ambiguity in the way the students post questions. My concern here is that the algorithm may respond incorrectly or obliquely.” I8 also cautioned about AI-based misunderstandings: “If you have a conversation in person, you can always clarify misunderstandings or things like that. I don't think a machine can do that yet. So there's a bit of a potential for misunderstandings so misleading the students.”

Support

In online learning environments, support refers to the instructor’s instructional management for students, such as providing feedback, explanations, or recommendations directly related to what is being taught (Kang & Im, 2013). Students and instructors expect a positive impact from AI systems in terms of enabling just-in-time personalized support for students at scale, but they expect a negative impact in that excessive support could reduce student agency and ownership of learning.

Just-in-time support

Students believe that AI would support personalized learning experiences, particularly with studying and group projects . Ultimately, all 12 students felt that AI could help them work to their strengths, mainly in scenarios regarding instructor-independent activities like studying (Scenarios 5, 6, and 8) and group projects (Scenario 7). Students like S2, S3, and S9 focused on how adaptive technologies could make studying more effective and efficient, as it would “allow [them] to fully understand the concept of what [they’re] learning,” and “allows for them to try and focus on where they might be weaker.” In some cases, the sense of personalization led students to describe the systems as if they could fulfill roles as members of the course team. For example, S1 referred to the Adaptive Quiz system (Scenario 8) as a potential source of guidance: “I think being able to have that quiz to help me, guide me, I’m assuming it would help me.” Likewise, S5 described the presence of an AI Group Project Organizer (Scenario 7) as “having a mentor with you, helping you do it,” which would help students “focus more on maybe just researching things, writing their papers, whatever they need to do for the project.”

Instructors believe AI could be effectively leveraged to help students receive just-in-time personalized support . I1 said that “one of the best learning mechanisms is to be confronted right away with the correct answer or the correct way of finding the right answer” when doing quizzes and assignments. Many instructors (10 out of 11) expressed approval of AI-based Intelligent Suggestions (Scenario 5) and the Adaptive Quiz system (Scenario 8). All 11 instructors appreciated how the immediate feedback afforded by AI could help students study and effectively understand gaps in their knowledge, particularly at times when instructors are unavailable. Similarly, I4 and I11 appreciated that AI could support students who would otherwise be learning asynchronously. For example, AI systems could be supportive of student engagement “because the students are getting real-time answers, particularly in an online world where they may not be in the same time zone, this is a synchronous type [of] learning event for them where they could be doing it when they're studying” (I11).

Agency

Despite the fact that students appreciated the support that they could potentially receive from AI, students perceived that canned and standardized support might have a negative influence on their ability to learn effectively . For example, S11 shared how he felt the usage of systems that collect engagement data would “over standardize” the learning process by prescribing how an engaged student would or should act. He likened some of the AI examples to “helicopter parenting,” expressing that guidance—whether from an AI or parent—can set an arbitrary pace for a student to follow, despite the fact that the learning experience should involve “learning about yourself and going at your own pace.” Several other students (four out of 12) were concerned with the potential effect of a system like the AI Group Project Organizer (Scenario 7), citing concerns that students “wouldn’t put that much effort” into their group projects because “it might just end up AI doing all the work for them” (S2). Similarly, S6 focused on how AI could detract from the fact that experiences with schoolwork can help students later in life: “… I think it’s like giving them a false sense of security in the sense that like, I’m so used to doing projects with this AI helper that when I go into the real world, I’m not going to be ready. I’m just not going to be prepared for it.”

Instructors are similarly wary that too much support from AI could take away students' opportunities for exploration and discovery . Many instructors (nine out of 11) were concerned that students could lose opportunities to learn new skills or learn from their mistakes. Responding to the AI Group Project Organizer (Scenario 7), I7 stressed that she wouldn’t want to standardize inconsistent group projects since part of an instructor’s job is “helping people understand how group work is conducted… [and] if you’re just laying on a simple answer, you miss that opportunity.” Similarly, other instructors (five out of 11)—primarily those in humanities-based fields—were concerned “it may take the creativity away from the students” since students’ projects “can be hugely different from each other, yet equally good,” and suggestions based on historical data could steer students towards certain directions (I6). I4 even expressed that he currently tries “not to share previous work because [he] thinks that restricts their creativity.” After experiencing all the storyboards related to AI-powered support, I11 posed a vital question: “At what stage is it students’ work and what stage is it the AI’s algorithm?”

Presence

In online learning environments, presence refers to a factor that makes students and instructors perceive each other’s existence during the learning process (Kang & Im, 2013). Students and instructors expect the impact of AI systems to be positive in terms of giving them a feeling of improved connectivity, and to be negative in terms of increasing the risk of surveillance problems.

Connection

Students believe that AI can address privacy concerns and support learner–instructor connections by providing social interaction cues without personal camera information . Many students (10 out of 12) stated that they don’t want to turn on their camera in online courses, even though turning off the camera adversely affects their presence in class, because they have concerns like: “looking like a mess” (S1), “just in my pajamas” (S2), and “feeling too invasive” (S4). Specifically, S9 stated that turning on the camera “makes you more anxious and conscious of what you’re doing and as a result, it deters from you engaging with the content.” In this sense, most students (11 out of 12) liked the Virtual Avatar system (Scenario 9), where AI communicates student facial expressions and body language to the instructor via a virtual avatar. Students expect that this will make them “feel more comfortable going to lecture” (S2), “feel less intrusive for at home learning” (S4), and “showcase much more of their expression or confusion or understanding” (S10). Overall, many students (nine out of 12) appreciated the potential of AI systems as “it solves the problem of not needing to show your actual face, but you can still get your emotions across to the instructor” (S10).

Instructors believe that the addition of AI would help them become more aware of students’ needs . Many instructors (10 out of 11), particularly those who taught larger undergraduate courses, stated that students tend to turn off their cameras in online learning spaces, “so something that you really, really miss from teaching online is reading body language” (I10). Instructors generally expressed that AI systems like the Virtual Avatar (Scenario 9) and the AI Facial Analytics (Scenario 11) could be helpful because they would allow students to share their body language and facial expressions without directly sharing their video feed. I4 appreciated that the AI Facial Analytics could automate the process of looking at students’ faces “to see if they got it.” Similarly, I5 liked that a Virtual Avatar could give “any sign that someone is listening,” as “it’s sometimes very tough, especially if [she’s] making a joke.” Furthermore, I4 emphasized that turning on the camera can be helpful not just for the instructor but also for students’ own accountability, since “if students don’t turn on the camera, it’s very likely that they are going to do something else.” Overall, instructors appreciated AI’s ability to provide critical information for understanding how students are doing and how they feel in online courses.

Surveillance

Although AI can strengthen the connection between students and instructors, students are uncomfortable with the measurement of their unconscious behavior, such as eye tracking or facial expression analysis, because it feels like surveillance . All 12 students discussed how they would be anxious about being represented by unconscious eye-tracking data. S1 professed: “I don't really know what my eyes are doing. I think it might just make me a little nervous when it comes to taking quizzes or tests and all that. I might be scared that I might have accidentally cheated.” S12 additionally spoke about how such measurement would make her more anxious when sending emails or asking questions, out of concern that instructors would judge her based on her unconscious behavior before addressing her questions. Notably, most students (10 out of 12) felt uncomfortable with the AI Facial Analytics (Scenario 11). For example, S6 was concerned that facial expression is “something that happens [that] might be outside of your control,” so AI might miss the nuance of authentic human emotion and flatten and simplify it in a way that might cause more confusion. In a similar vein, S11 said that “The nuances of social interaction is something that should be left up to humans and not guided because it’s innately something that, that’s what makes us human is the social interaction portion.” Overall, students stated that they didn’t want AI to measure their unconscious behavior, such as through eye tracking or facial expression analysis, even if there are positive aspects.

Instructors were negative about relying on AI interpretation to understand students’ social interaction cues . All instructors felt uncomfortable with collecting private data, such as students’ eye movements and facial expressions, through AI, because “not all the students feel comfortable sharing their private information with the instructor” (I2, I5). Additionally, I9 was concerned that AI Facial Analytics might force students to smile to get a good engagement score, which could adversely affect online learning itself. In this sense, many instructors (nine out of 11) declined to use AI systems that rely on eye tracking and facial expression analysis in their online courses. Furthermore, I6 would rather “choose to rely on my own kind of sense of the classroom dynamic instead of AI systems” because she believed that the social relationship between students and instructors should be authentic. In addition, other instructors stated they “don’t have time to check all of the interface[s]” or would have trouble “knowing that that data is accurately reflecting, [that] the student is responding to [their] content” rather than extraneous stimulation in their personal environments (I3, I7). Overall, instructors were uncomfortable with AI giving detailed information about how students engage with their online courses, and they wanted to understand these social interaction cues for themselves.

In summary, students and instructors expect that AI systems will benefit learner–instructor interaction in online learning by improving the quantity and quality of communication, enabling just-in-time personalized support for students at scale, and giving them a feeling of improved connectivity. At the same time, however, students and instructors were concerned that AI systems could create responsibility, agency, and surveillance issues if they violated social boundaries. The boundaries beyond which AI is perceived negatively are discussed in the next section.

Discussion and conclusion

Our research question focused on examining how students and instructors perceive the impact of AI systems on learner–instructor interaction (inter alia, communication, support, and presence) in online learning. Although a growing body of AIEd research has investigated the useful functionalities of AI systems (Seo et al., 2020b; Popenici & Kerr, 2017; Zawacki-Richter et al., 2019), little has been done to understand students’ and instructors’ concerns about AI systems. Recent use of AI systems in online learning has shown that careless application can cause surveillance and privacy issues (Lee, 2020), which make students feel uncomfortable (Bajaj & Li, 2020). In this study, we found that students and instructors perceive the impact of AI systems as a double-edged sword. Although AI systems were positively recognized for improving the quantity and quality of communication, for providing just-in-time, personalized support at scale, and for improving the feeling of connection, there were concerns about responsibility, agency, and surveillance issues. In fact, what students and instructors perceived negatively often stemmed from the positive aspects of AI systems. For example, students and instructors appreciated AI’s immediate communication, but at the same time they were concerned about AI-based misunderstandings or misleading answers. Although students and instructors valued the just-in-time, personalized support of AI, they feared that AI would limit students’ ability to learn independently. Students and instructors valued the social interaction cues provided by AI, but they were uncomfortable with the loss of privacy due to AI’s excessive data collection. As shown in Table 6, this study provides rich opportunities to identify the boundaries beyond which AI systems are perceived as “invasive.”

First, although AI systems improve instructional communication through the anonymity they can provide for students, students were concerned about responsibility issues that could arise when an AI’s unreliable and unexplained answers lead to negative consequences. For instance, when communicating with an AI Teaching Assistant, the black-box nature of the AI system leaves students no way to check whether the answers from the AI are right or wrong (Castelvecchi, 2016). Accordingly, students believe they would have a hard time deciphering the reasoning behind an AI’s answer. This can result in serious responsibility issues if students apply an AI’s answers on their tests but instructors mark them as wrong. Students would also find more room to argue for their marks because of the AI’s unreliability.

Acknowledging that AI systems cannot always provide the right answer, a potential solution to this problem is to ensure the system is explainable. Explainability refers to the ability to offer human-understandable justifications for the AI’s output or procedures (Gunning, 2017 ). Explainability gives students the opportunity to check whether an AI’s answer is right or wrong by themselves, and in doing so can make AI more reliable and responsible (Gunning, 2017 ). Explainability should be the boundary that determines students’ trust and acceptance of AI systems. How to ensure the explainability of AI systems in the online learning communication context will be an interesting research topic. For example, instead of providing unreliable answers that may mislead or confuse students, AI systems should connect students to relevant sources of information that students can navigate on their own.
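As a concrete illustration of this implication, the sketch below shows one way an AI teaching assistant could decline to assert low-confidence answers and instead route students to course sources they can check themselves. This is a hypothetical sketch, not a system from the study: the Answer structure, the confidence score, and the 0.8 threshold are all assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Answer:
    text: str
    confidence: float   # assumed to be produced by the underlying QA model
    sources: List[str]  # course materials the answer was drawn from

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff, to be tuned per course

def respond(answer: Answer) -> str:
    """Give a direct answer only when confident; otherwise point the
    student to sources so they can verify the reasoning themselves."""
    citations = "; ".join(answer.sources)
    if answer.confidence >= CONFIDENCE_THRESHOLD:
        return f"{answer.text}\n\nBased on: {citations}"
    return (
        "I'm not confident enough to answer this directly. "
        f"These course materials should help: {citations}. "
        "You can also forward this question to the instructor."
    )

print(respond(Answer("Use induction on n.", 0.55, ["Lecture 4 notes", "Tutorial 2"])))
```

Surfacing the sources in both branches keeps the justification inspectable, which is the property participants asked for.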

Second, while AI systems enable some degree of personalized support, there is a risk of over-standardizing the learning process by prescribing how an engaged student would or should act. Despite the fact that students appreciate the support that they could potentially receive from AI systems, students also worry that canned and standardized support would have a negative influence on their agency over their own learning. Instructors are similarly wary of the fact that too much support from AI systems could take away students’ opportunities for exploration and discovery. Many instructors were concerned that students could lose opportunities to learn new skills or learn from their mistakes.

A solution to mediate this challenge may be to keep instructors involved. The role of AI systems in online education should not be to reduce learning to a set of canned and standardized procedures that diminish student agency, but rather to enhance human thinking and augment the learning process. In practice, adaptive support is often jointly enacted by AI systems and human facilitators, such as instructors or peers (Holstein et al., 2020). In this context, Baker (2016, p. 603) tried to reconcile humans with AI systems by combining “stupid tutoring systems and intelligent humans.” AI systems can process large amounts of information quickly, but do not respond well to complex contexts. Humans cannot process information as fast as AI systems, but they are flexible and intelligent across a variety of contexts. When AI systems bring human beings into the decision-making loop and inform them, humans can learn more efficiently and effectively (Baker, 2016). Keeping a human in the loop is thus key to ensuring students’ perceived agency in online learning. How to balance artificial and human intelligence to promote students’ agency is an important research direction (e.g., goldilocks conditions for human–AI interaction; Seo et al., 2020a).
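To make the human-in-the-loop idea concrete, here is a minimal sketch of AI-drafted feedback that is released only after instructor review. The queue, the stubbed drafting function, and the review step are illustrative assumptions, not a description of any system evaluated in the study.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DraftFeedback:
    student: str
    ai_draft: str
    approved: bool = False
    final_text: str = ""

review_queue: List[DraftFeedback] = []

def ai_propose(student: str, submission: str) -> None:
    """Stub for an AI grader: it drafts feedback but never sends it directly.
    (A real system would analyze the submission; this stub ignores it.)"""
    draft = f"Consider strengthening the evidence in paragraph 2, {student}."
    review_queue.append(DraftFeedback(student, draft))

def instructor_review(item: DraftFeedback, edit: Optional[str] = None) -> None:
    """Nothing reaches the student until the instructor approves or edits."""
    item.final_text = edit if edit is not None else item.ai_draft
    item.approved = True

ai_propose("S4", "essay.txt")
instructor_review(review_queue[0], edit="Good start; add evidence to paragraph 2.")
released = [f.final_text for f in review_queue if f.approved]  # only approved feedback
```

The specific data structures matter less than the invariant they enforce: the AI proposes, but a human decides what reaches the student.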

Third, even though AI strengthens the perceived connection between students and instructors, students are uncomfortable with the measurement of their unconscious behavior, such as facial expression analysis or eye tracking, as it feels like surveillance. While most students liked the Virtual Avatar system (Scenario 9), where AI simply delivers student facial expressions and body language to the instructor via an avatar, students declined to use the AI Facial Analytics (Scenario 11), which might miss the nuance of social interaction by flattening and simplifying it in a way that might cause more confusion. Interpreting social interaction from unconscious behavior could be the boundary beyond which AI systems are perceived as “invasive.” Students felt uncomfortable about being represented by their unconscious behavior because they did not know what their gaze or face was doing. Stark (2019) described facial recognition as the plutonium of AI: “[facial recognition] is dangerous, racializing, and has few legitimate uses; facial recognition needs regulation and control on par with nuclear waste.” Students complained about their presence being represented by the interpretation of the AI system. In a similar vein, instructors felt negatively about the AI system’s involvement in interpreting the meaning of student behavior.

Establishing clear, simple, and transparent data norms and agreements about the nature of the data being collected from students, and about what kinds of data may be presented to instructors, is an important consideration for future research (Ferguson, 2019; Tsai et al., 2020).
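One way to operationalize such norms is to gate and aggregate what reaches the instructor, as in the hedged sketch below; the consent flags, the engagement scores, and the minimum group size are illustrative assumptions only.

```python
from statistics import mean

# Hypothetical per-student engagement signals collected during one session.
signals = [
    {"student": "S1", "consented": True,  "engagement": 0.72},
    {"student": "S2", "consented": False, "engagement": 0.41},
    {"student": "S3", "consented": True,  "engagement": 0.58},
]

def instructor_view(records, min_group=2):
    """Report only a class-level average over consenting students, and only
    when the group is large enough that no individual is identifiable."""
    consenting = [r["engagement"] for r in records if r["consented"]]
    if len(consenting) < min_group:
        return "Not enough consenting students to report."
    return f"Class engagement (n={len(consenting)}): {mean(consenting):.2f}"

print(instructor_view(signals))  # -> Class engagement (n=2): 0.65
```

Reporting only a consented, class-level aggregate addresses both sides of the tension above: instructors still get a presence signal, while no individual student is represented by their unconscious behavior.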

While this study revealed important findings and implications for using AI systems in online learning, we acknowledge some limitations that should be considered when interpreting the results. First, although this study attempted to capture various forms of AI systems in online learning based on ideation by HCI designers and AI experts, other kinds of AI systems may exist, and different AI systems might offer different insights. Further studies can therefore be conducted with different kinds of AI systems. Next, students’ and instructors’ perceptions of AI systems could be affected by their disciplines. In the current study, we recruited students and instructors from diverse majors and subjects. Although this helped us generalize our findings across participants with diverse backgrounds, there is more room to investigate how students and instructors in different disciplines perceive AI systems differently. In our findings, we anecdotally found that instructors in humanities-based fields were more concerned about rapport with students and students’ creativity than those in other disciplines. To fully investigate this, future research should consider the different learner–instructor interaction needs of participants from different majors (e.g., engineering vs. humanities).

Another limitation is that the study was conducted by reading storyboards rather than by direct interaction with AI systems. This might have limited participants’ perceptions of the AI systems. If participants had continuous, direct interactions with the AI systems in the real world, their perceptions might change. As such, future researchers should examine students’ responses to direct exposure to AI systems. This can be accomplished in a variety of ways. For example, one could conduct a lab experiment using virtual reality, the wizard-of-oz method, or the user enactment method to see how students actually respond to AI systems. It would also be meaningful to conduct a longitudinal study to understand whether and/or how student perceptions change over time.

Theoretical implications

This study provides theoretical implications for a learner–instructor interaction framework by highlighting and mapping key challenges in AI-related ethical issues (i.e., responsibility, agency, and surveillance) in online learning environments. Researchers have requested clear ethical guidelines for future research to prevent AI systems from accidentally harming people (Loi et al., 2019). Although several ethical frameworks and professional codes of conduct have been developed to mitigate the potential dangers and risks of AI in education, significant debates continue about their specific impact on students and instructors (Williamson & Eynon, 2020). The results of this study increase our understanding of the boundaries that determine student and instructor trust in and acceptance of AI systems, and provide a theoretical background for designing AI systems that positively support learner–instructor interactions in a variety of learning situations.

Practical implications

This study has practical implications for both students and instructors. Interestingly, most of the negative perceptions of AI systems stemmed from students’ unrealistic expectations and misunderstandings of AI systems. An AI system’s answer is nothing more than the output of an algorithm trained on accumulated data, yet students typically expect the AI system to be accurate. Such misconceptions can be barriers to the effective use of AI systems by students and instructors. To address this, it is important to foster AI literacy in students and instructors without a technical background (Long & Magerko, 2020). For example, recent studies have published guides on how to incorporate AI into K-12 curricula (Touretzky et al., 2019), and researchers are exploring how to engage young learners in creative programming activities involving AI (Zimmermann-Niefield et al., 2019).

Furthermore, in order to minimize the negative impact of AI systems on learner–instructor interaction, it is important to address the tensions where AI systems violate boundaries between students and instructors (e.g., responsibility, agency, and surveillance issues). We propose that future AI systems should ensure explainability, human-in-the-loop operation, and careful data collection and presentation. By doing so, AI systems will be more closely integrated into future online learning. It is important to note that the present study does not argue that AI systems will replace human instructors. Rather, in the online learning of the future, AI systems and humans will work closely together, and for this, it is important to use these systems with consideration of their perceived affordances and drawbacks.

Availability of data and materials

The full set of storyboards and an example of our codes can be viewed at https://osf.io/3aj5v/?view_only=bc5fa97e6f7d46fdb66872588ff1e22e .

Andersen, J. C. (2013). Learner satisfaction in online learning: An analysis of the perceived impact of learner-social media and learner–instructor interaction . Doctoral dissertation. East Tennessee State University, Tennessee.

Anderson, J. R., Boyle, C. F., & Reiser, B. J. (1985). Intelligent tutoring systems. Science, 228 (4698), 456–462.


Aslan, S., Alyuz, N., Tanriover, C., Mete, S. E., Okur, E., D'Mello, S. K., & Arslan Esme, A. (2019). Investigating the impact of a real-time, multimodal student engagement analytics technology in authentic classrooms. In: Proceedings of the 2019 CHI conference on human factors in computing systems (pp. 1–12).

Bajaj, M., & Li, J. (2020). Students, faculty express concerns about online exam invigilation amidst COVID-19 outbreak . Retrieved February 8, 2021, from https://www.ubyssey.ca/news/Students-express-concerns-about-online-exams/

Baker, R. S. (2016). Stupid tutoring systems, intelligent humans. International Journal of Artificial Intelligence in Education, 26 (2), 600–614.

Banna, J., Lin, M. F. G., Stewart, M., & Fialkowski, M. K. (2015). Interaction matters: Strategies to promote engaged learning in an online introductory nutrition course. Journal of Online Learning and Teaching/MERLOT, 11 (2), 249.


Beard, A. (2020). Can computers ever replace the classroom? . Retrieved January 10, 2021, from https://www.theguardian.com/technology/2020/mar/19/can-computers-ever-replace-the-classroom

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3 (2), 77–101.

Castelvecchi, D. (2016). Can we open the black box of AI? Nature News, 538 (7623), 20.

Chan, R. (2019). The Cambridge Analytica whistleblower explains how the firm used Facebook data to sway elections . Business Insider. Retrieved from https://www.businessinsider.com/cambridge-analytica-whistleblower-christopher-wylie-facebook-data-2019-10

Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature, 538 (7625), 311–313.

Cruz-Benito, J., Sánchez-Prieto, J. C., Therón, R., & García-Peñalvo, F. J. (2019). Measuring students’ acceptance to AI-driven assessment in eLearning: Proposing a first TAM-based research model. In: International conference on human–computer interaction (pp. 15–25). Springer, Cham.

Davidoff, S., Lee, M. K., Dey, A. K., & Zimmerman, J. (2007). Rapidly exploring application design through speed dating. In: International conference on ubiquitous computing (pp. 429–446). Springer, Berlin, Heidelberg.

Felix, C. V. (2020). The role of the teacher and AI in education. In: International perspectives on the role of technology in humanizing higher education . Emerald Publishing Limited.

Ferguson, R. (2019). Ethical challenges for learning analytics. Journal of Learning Analytics, 6 (3), 25–30.

Fong, M., Dodson, S., Harandi, N. M., Seo, K., Yoon, D., Roll, I., & Fels, S. (2019). Instructors desire student activity, literacy, and video quality analytics to improve video-based blended courses. In Proceedings of the Sixth (2019) ACM Conference on Learning@ Scale (pp. 1–10).

Goel, A. K., & Polepeddi, L. (2016). Jill Watson: A virtual teaching assistant for online education . Georgia Institute of Technology.

Guilherme, A. (2019). AI and education: The importance of teacher and student relations. AI & Society, 34 (1), 47–54.

Gunning, D. (2017). Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA), 2(2).

Heidicker, P., Langbehn, E., & Steinicke, F. (2017). Influence of avatar appearance on presence in social VR. In: 2017 IEEE symposium on 3D user interfaces (3DUI) (pp. 233–234). IEEE.

Holstein, K., Hong, G., Tegene, M., McLaren, B. M., & Aleven, V. (2018). The classroom as a dashboard: Co-designing wearable cognitive augmentation for K-12 teachers. In: Proceedings of the 8th international conference on learning analytics and knowledge (pp. 79–88).

Holstein, K., Aleven, V., & Rummel, N. (2020). A conceptual framework for human–AI hybrid adaptivity in education. In: International conference on artificial intelligence in education (pp. 240–254). Springer, Cham.

Hwang, G. J., Xie, H., Wah, B. W., & Gašević, D. (2020). Vision, challenges, roles and research issues of Artificial Intelligence in Education. Computers and Education: Artificial Intelligence, 1 , 100001.

Jou, M., Lin, Y. T., & Wu, D. W. (2016). Effect of a blended learning environment on student critical thinking and knowledge transformation. Interactive Learning Environments, 24 (6), 1131–1147.

Kang, M. S. (2010). Development of learners’ perceived interaction model and scale between learner and instructor in e-learning environments . Doctoral dissertation. Korea University, Korea.

Kang, M., & Im, T. (2013). Factors of learner–instructor interaction which predict perceived learning outcomes in online learning environment. Journal of Computer Assisted Learning, 29 (3), 292–301.

Laura, R. S., & Chapman, A. (2009). The technologisation of education: Philosophical reflections on being too plugged in. International Journal of Children’s Spirituality, 14 (3), 289–298.

Lee, S. (2020). Proctorio CEO releases student’s chat logs, sparking renewed privacy concerns . Retrieved February 8, 2021, from https://www.ubyssey.ca/news/proctorio-chat-logs/

Linsey, J. S., & Becker, B. (2011). Effectiveness of brainwriting techniques: comparing nominal groups to real teams. In: Design creativity 2010 (pp. 165–171). Springer.

Loi, D., Wolf, C. T., Blomberg, J. L., Arar, R., & Brereton, M. (2019). Co-designing AI futures: Integrating AI ethics, social computing, and design. In: Companion publication of the 2019 on designing interactive systems conference 2019 companion (pp. 381–384).

Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In: Proceedings of the 2020 CHI conference on human factors in computing systems (pp. 1–16).

Luckin, R. (2017). Towards artificial intelligence-based assessment systems. Nature Human Behaviour, 1 (3), 1–3.

Luo, N., Zhang, M., & Qi, D. (2017). Effects of different interactions on students’ sense of community in e-learning environment. Computers & Education, 115 , 153–160.

Luria, M., Zheng, R., Huffman, B., Huang, S., Zimmerman, J., & Forlizzi, J. (2020). Social boundaries for personal agents in the interpersonal space of the home. In: Proceedings of the 2020 CHI conference on human factors in computing systems (pp. 1–12).

Martin, F., & Bolliger, D. U. (2018). Engagement matters: Student perceptions on the importance of engagement strategies in the online learning environment. Online Learning, 22 (1), 205–222.

Martin, F., Wang, C., & Sadaf, A. (2018). Student perception of helpfulness of facilitation strategies that enhance instructor presence, connectedness, engagement and learning in online courses. The Internet and Higher Education, 37 , 52–65.

McArthur, A. (2020). Students struggle with online test proctoring systems . Retrieved January 10, 2021, from https://universe.byu.edu/2020/12/17/students-struggle-with-online-test-proctoring-systems/

Misiejuk, K., & Wasson, B. (2017). State of the field report on learning analytics . Centre for the Science of Learning & Technology (SLATE), University of Bergen.

Moore, M. G. (1989). Three types of interaction. American Journal of Distance Education, 3 (2), 1–7.

Murphy, R. F. (2019). Artificial intelligence applications to support K–12 teachers and teaching. RAND Corporation . https://doi.org/10.7249/PE315

Nguyen, T. D., Cannata, M., & Miller, J. (2018). Understanding student behavioral engagement: Importance of student interaction with peers and teachers. The Journal of Educational Research, 111 (2), 163–174.

Nowell, L. S., Norris, J. M., White, D. E., & Moules, N. J. (2017). Thematic analysis: Striving to meet the trustworthiness criteria. International Journal of Qualitative Methods, 16 (1), 1609406917733847.

Perin, D., & Lauterbach, M. (2018). Assessing text-based writing of low-skilled college students. International Journal of Artificial Intelligence in Education, 28 (1), 56–78.

Popenici, S. A., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12 (1), 22.

Richardson, J. C., Maeda, Y., Lv, J., & Caskurlu, S. (2017). Social presence in relation to students’ satisfaction and learning in the online environment: A meta-analysis. Computers in Human Behavior, 71 , 402–417.

Robinson, H., Kilgore, W., & Warren, S. (2017). Care, communication, support: Core for designing meaningful online collaborative learning. Online Learning Journal. https://doi.org/10.24059/olj.v21i4.1240

Roll, I., & Winne, P. H. (2015). Understanding, evaluating, and supporting self-regulated learning using learning analytics. Journal of Learning Analytics, 2 (1), 7–12.

Roll, I., & Wylie, R. (2016). Evolution and revolution in artificial intelligence in education. International Journal of Artificial Intelligence in Education, 26 (2), 582–599.

Roll, I., Russell, D. M., & Gašević, D. (2018). Learning at scale. International Journal of Artificial Intelligence in Education, 28 (4), 471–477.

Ross, B., Chase, A. M., Robbie, D., Oates, G., & Absalom, Y. (2018). Adaptive quizzes to increase motivation, engagement and learning outcomes in a first year accounting unit. International Journal of Educational Technology in Higher Education, 15 (1), 30.

Seo, K., Fels, S., Kang, M., Jung, C., & Ryu, H. (2020a). Goldilocks conditions for workplace gamification: How narrative persuasion helps manufacturing workers create self-directed behaviors. Human–Computer Interaction, 1–38.

Seo, K., Fels, S., Yoon, D., Roll, I., Dodson, S., & Fong, M. (2020b). Artificial intelligence for video-based learning at scale. In Proceedings of the Seventh ACM Conference on Learning@ Scale (pp. 215–217).

Seo, K., Dodson, S., Harandi, N. M., Roberson, N., Fels, S., & Roll, I. (2021). Active learning with online video: The impact of learning context on engagement. Computers & Education, 165 , 104132.

Shackelford, J. L., & Maxwell, M. (2012). Contribution of learner–instructor interaction to sense of community in graduate online education. MERLOT Journal of Online Learning and Teaching, 8 (4), 248–260.

Stark, L. (2019). Facial recognition is the plutonium of AI. XRDS: Crossroads, the ACM Magazine for Students, 25 (3), 50–55.

Touretzky, D., Gardner-McCune, C., Martin, F., & Seehorn, D. (2019). Envisioning AI for K-12: What should every child know about AI?. In: Proceedings of the AAAI conference on artificial intelligence (Vol. 33, No. 01, pp. 9795–9799).

Truong, K. N., Hayes, G. R., & Abowd, G. D. (2006). Storyboarding: an empirical determination of best practices and effective guidelines. In: Proceedings of the 6th conference on designing interactive systems (pp. 12–21).

Tsai, Y. S., Whitelock-Wainwright, A., & Gašević, D. (2020). The privacy paradox and its implications for learning analytics. In: Proceedings of the tenth international conference on learning analytics & knowledge (pp. 230–239).

VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46 (4), 197–221.

Walker, C. H. (2016). The correlation between types of instructor-student communication in online graduate courses and student satisfaction levels in the private university setting . Doctoral dissertation. Carson-Newman University, Tennessee.

Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology, 45 (3), 223–235.

Wogu, I. A. P., Misra, S., Olu-Owolabi, E. F., Assibong, P. A., Udoh, O. D., Ogiri, S. O., & Damasevicius, R. (2018). Artificial intelligence, artificial teachers and the fate of learners in the 21st century education sector: Implications for theory and practice. International Journal of Pure and Applied Mathematics, 119 (16), 2245–2259.

Woolf, B. P., Arroyo, I., Muldner, K., Burleson, W., Cooper, D. G., Dolan, R., & Christopherson, R. M. (2010). The effect of motivational learning companions on low achieving students and students with disabilities. In: International conference on intelligent tutoring systems (pp. 327–337). Springer, Berlin, Heidelberg.

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education–where are the educators? International Journal of Educational Technology in Higher Education, 16 (1), 39.

Zhang, C., Chen, H., & Phang, C. W. (2018). Role of instructors’ forum interactions with students in promoting MOOC continuance. Journal of Global Information Management (JGIM), 26 (3), 105–120.

Zimmerman, J., & Forlizzi, J. (2017). Speed dating: Providing a menu of possible futures. She Ji: The Journal of Design, Economics, and Innovation, 3(1), 30–50.

Zimmermann-Niefield, A., Turner, M., Murphy, B., Kane, S. K., & Shapiro, R. B. (2019). Youth learning machine learning through building models of athletic moves. In Proceedings of the 18th ACM international conference on interaction design and children (pp. 121–132).


Acknowledgements

The authors would like to thank all students, instructors, and AI experts for their great support and inspiration.

Funding

This study was financially supported by Seoul National University of Science & Technology.

Author information

Authors and Affiliations

Department of Applied Artificial Intelligence, Seoul National University of Science and Technology, 232 Gongneung-ro, Gongneung-dong, Nowon-gu, Seoul, 01811, Korea

Kyoungwon Seo

Department of Computer Science, The University of British Columbia, Vancouver, Canada

Joice Tang & Dongwook Yoon

Faculty of Education in Science and Technology, Technion-Israel Institute of Technology, Haifa, Israel

Ido Roll

Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, Canada

Sidney Fels


Contributions

KS: conceptualization, methodology, investigation, writing—original draft, visualization, project administration; JT: conceptualization, methodology, investigation, data curation, writing—original draft, project administration; IR: writing—review and editing, conceptualization; SF: writing—review and editing, supervision, project administration, funding acquisition; DY: writing—review and editing, conceptualization, supervision, project administration. KS and JT contributed equally. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Kyoungwon Seo .

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A: Summary of the AI experts’ information

Appendix B: Speed Dating interview script

1. Introduction

Hello, thank you for taking time for this interview today. We’re really looking forward to learning from your experience with online learning.

Today, we’ll be discussing a set of 11 storyboards that are related to AI systems for online courses. When reading the storyboards, try to think about them in the context of your discipline and experiences. Our goal is to reveal your perceptions of AI in online learning.

For your information, the interview will take about 60 min. The interview will be audio recorded but will be confidential and de-identified.

2. For each storyboard

Do you think this AI system supports learner–instructor interaction? Yes, no, or do you feel neutral? Why?

[When the participant is a student] Would the incorporation of this AI system into your courses change your interaction with the instructor?

[When the participant is an instructor] Would incorporating this AI system into the course change how you interact with students?

Do you have any reservations or concerns about this AI system?

3. After examining all storyboards (capturing participants’ holistic point of view)

Of the storyboards shown today, which AI systems do you think would work well in your online classroom? Why? Also, which ones wouldn’t work well?

How do you think the adoption of AI would affect the relationship between students and the instructor?

4. Conclusion

Do you have any final comments?

Thank you for taking the time to interview with us today. We really appreciate that you took time to participate in our study and share your expertise. Your insights were really helpful.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Seo, K., Tang, J., Roll, I. et al. The impact of artificial intelligence on learner–instructor interaction in online learning. Int J Educ Technol High Educ 18 , 54 (2021). https://doi.org/10.1186/s41239-021-00292-9


Received : 20 April 2021

Accepted : 29 July 2021

Published : 26 October 2021

DOI : https://doi.org/10.1186/s41239-021-00292-9


Keywords

  • Artificial intelligence
  • Learner–instructor interaction
  • Online learning


Evaluating Artificial Intelligence in Education for Next Generation

Shubham Joshi 1 , Radha Krishna Rambola 1 and Prathamesh Churi 2

Published under licence by IOP Publishing Ltd in Journal of Physics: Conference Series, Volume 1714, 2nd International Conference on Smart and Intelligent Learning for Information Optimization (CONSILIO) 2020, 24-25 October 2020, Goa, India.

Citation: Shubham Joshi et al 2021 J. Phys.: Conf. Ser. 1714 012039. DOI: 10.1088/1742-6596/1714/1/012039

Article metrics

7678 Total downloads


Author e-mails

[email protected]

Author affiliations

1 School of Technology Management and Engineering, Shirpur Campus, NMIMS University, India

2 School of Technology Management and Engineering, Mumbai Campus, NMIMS University, India


The use of Artificial Intelligence (AI) is now observed in almost all areas of our lives. Artificial intelligence is a thriving technology poised to transform all aspects of our social interaction. In education, AI is now producing new teaching and learning solutions that are being tested in different contexts. New educational technologies can help educational goals be achieved and managed more effectively. First, this paper analyses how AI can be used to improve teaching outcomes, providing examples of how AI technology can help educators use data to enhance the fairness and quality of education in developing countries. The study then examines teachers’ and students’ perceptions of the use and effectiveness of AI in education, both its drawbacks and its perceived benefits for the education system and human knowledge. Teachers and students strongly endorse the constructive use of AI in class, although teachers tend to adapt to new technological changes more readily than students. Further research on generational and geographical diversity in teachers’ and students’ perceptions can contribute to more effective implementation of AI in Education (AIED).


Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence . Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.


Generative Artificial Intelligence in Education and Its Implications for Assessment

  • Original Paper
  • Published: 11 November 2023
  • Volume 68 , pages 58–66, ( 2024 )

Cite this article


  • Jin Mao   ORCID: orcid.org/0000-0001-8498-3523 1 ,
  • Baiyun Chen   ORCID: orcid.org/0000-0002-4010-9890 2 &
  • Juhong Christie Liu   ORCID: orcid.org/0000-0002-3384-4379 3  

3192 Accesses

2 Citations


The abrupt emergence and rapid advancement of generative artificial intelligence (AI) technologies, transitioning from research labs to potentially all aspects of social life, have had a profound impact on education, science, arts, journalism, and every facet of human life and communication. The purpose of this paper is to recapitulate the use of AI in education and examine potential opportunities and challenges of employing generative AI for educational assessment, with systems thinking in mind. Following a review of the opportunities and challenges, we discuss key issues and dilemmas associated with using generative AI for assessment and for education in general. We hope that the opportunities, challenges, and issues discussed in this paper could serve as a foundation for educators to harness the power of AI within the digital learning ecosystem.



Author information

Authors and Affiliations

Wilkes University, Wilkes Barre, PA, USA

Jin Mao

University of Central Florida, Orlando, FL, USA

Baiyun Chen

James Madison University, Harrisonburg, VA, USA

Juhong Christie Liu


Corresponding author

Correspondence to Jin Mao .

Ethics declarations

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Conflict of Interest

The authors declare that they have no conflict of interest. The first author is serving as co-editor of the DELT-STC special issue but will recuse herself from the review process for this manuscript.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Mao, J., Chen, B. & Liu, J.C. Generative Artificial Intelligence in Education and Its Implications for Assessment. TechTrends 68 , 58–66 (2024). https://doi.org/10.1007/s11528-023-00911-4


Accepted : 10 October 2023

Published : 11 November 2023

Issue Date : January 2024

DOI : https://doi.org/10.1007/s11528-023-00911-4


Keywords

  • Generative Artificial Intelligence
  • Systems Thinking

Artificial intelligence in education: Addressing ethical challenges in K-12 settings

Selin Akgun

Michigan State University, East Lansing, MI USA

Christine Greenhow

Associated data

Not applicable.

Artificial intelligence (AI) is a field of study that combines the applications of machine learning, algorithm productions, and natural language processing. Applications of AI transform the tools of education. AI has a variety of educational applications, such as personalized learning platforms to promote students’ learning, automated assessment systems to aid teachers, and facial recognition systems to generate insights about learners’ behaviors. Despite the potential benefits of AI to support students’ learning experiences and teachers’ practices, the ethical and societal drawbacks of these systems are rarely fully considered in K-12 educational contexts. The ethical challenges of AI in education must be identified and introduced to teachers and students. To address these issues, this paper (1) briefly defines AI through the concepts of machine learning and algorithms; (2) introduces applications of AI in educational settings and benefits of AI systems to support students’ learning processes; (3) describes ethical challenges and dilemmas of using AI in education; and (4) addresses the teaching and understanding of AI by providing recommended instructional resources from two providers—i.e., the Massachusetts Institute of Technology’s (MIT) Media Lab and Code.org. The article aims to help practitioners reap the benefits and navigate ethical challenges of integrating AI in K-12 classrooms, while also introducing instructional resources that teachers can use to advance K-12 students’ understanding of AI and ethics.

Introduction

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” — Stephen Hawking.

We may not think about artificial intelligence (AI) on a daily basis, but it is all around us, and we have been using it for years. When we are doing a Google search, reading our emails, getting a doctor’s appointment, asking for driving directions, or getting movie and music recommendations, we are constantly using the applications of AI and its assistance in our lives. This need for assistance and our dependence on AI systems has become even more apparent during the COVID-19 pandemic. The growing impact and dominance of AI systems reveals itself in healthcare, education, communications, transportation, agriculture, and more. It is almost impossible to live in a modern society without encountering applications powered by AI  [ 10 , 32 ].

Artificial intelligence (AI) can be defined briefly as the branch of computer science that deals with the simulation of intelligent behavior in computers and their capacity to mimic, and ideally improve, human behavior [ 43 ]. AI dominates the fields of science, engineering, and technology, but also is present in education through machine-learning systems and algorithm productions [ 43 ]. For instance, AI has a variety of algorithmic applications in education, such as personalized learning systems to promote students’ learning, automated assessment systems to support teachers in evaluating what students know, and facial recognition systems to provide insights about learners’ behaviors [ 49 ]. Besides these platforms, algorithm systems are prominent in education through different social media outlets, such as social network sites, microblogging systems, and mobile applications. Social media are increasingly integrated into K-12 education [ 7 ] and subordinate learners’ activities to intelligent algorithm systems [ 17 ]. Here, we use the American term “K–12 education” to refer to students’ education in kindergarten (K) (ages 5–6) through 12th grade (ages 17–18) in the United States, which is similar to primary and secondary education or pre-college level schooling in other countries. These AI systems can increase the capacity of K-12 educational systems and support the social and cognitive development of students and teachers [ 55 , 8 ]. More specifically, applications of AI can support instruction in mixed-ability classrooms; while personalized learning systems provide students with detailed and timely feedback about their writing products, automated assessment systems support teachers by freeing them from excessive workloads [ 26 , 42 ].

Despite the benefits of AI applications for education, they pose societal and ethical drawbacks. As the famous scientist Stephen Hawking pointed out, weighing these risks is vital for the future of humanity; therefore, it is critical to take action toward addressing them. The biggest risks of integrating these algorithms in K-12 contexts are: (a) perpetuating existing systemic bias and discrimination, (b) perpetuating unfairness for students from mostly disadvantaged and marginalized groups, and (c) amplifying racism, sexism, xenophobia, and other forms of injustice and inequity [40]. These algorithms do not occur in a vacuum; rather, they shape and are shaped by ever-evolving cultural, social, institutional and political forces and structures [33, 34]. As academics, scientists, and citizens, we have a responsibility to educate teachers and students to recognize the ethical challenges and implications of algorithm use. To create a future generation where an inclusive and diverse citizenry can participate in the development of the future of AI, we need to develop opportunities for K-12 students and teachers to learn about AI via AI- and ethics-based curricula and professional development [2, 58].

Toward this end, the existing literature provides little guidance and contains a limited number of studies that focus on supporting K-12 students’ and teachers’ understanding of the social, cultural, and ethical implications of AI [2]. Most studies reflect university students’ engagement with ethical ideas about algorithmic bias, but few address how to promote students’ understanding of AI and ethics in K-12 settings. Therefore, this article: (a) synthesizes ethical issues surrounding AI in education as identified in the educational literature, (b) reflects on different approaches and curriculum materials available for teaching students about AI and ethics (i.e., featuring materials from the MIT Media Lab and Code.org), and (c) articulates future directions for research and recommendations for practitioners seeking to navigate AI and ethics in K-12 settings.

First, we briefly define the notion of artificial intelligence (AI) and its applications through machine-learning and algorithm systems. As educational and educational technology scholars working in the United States, and at the risk of oversimplifying, we provide only a brief definition of AI below, and recognize that definitions of AI are complex, multidimensional, and contested in the literature [9, 16, 38]; an in-depth discussion of these complexities, however, is beyond the scope of this paper. Second, we describe in more detail five applications of AI in education, outlining their potential benefits for educators and students. Third, we describe the ethical challenges they raise by posing the question: “how and in what ways do algorithms manipulate us?” Fourth, we explain how to support students’ learning about AI and ethics through different curriculum materials and teaching practices in K-12 settings. Our goal here is to provide strategies for practitioners to reap the benefits while navigating the ethical challenges. We acknowledge that in centering this work within U.S. education, we highlight certain ethical issues that educators in other parts of the world may see as less prominent. For example, the European Union (EU) has highlighted ethical concerns and implications of AI, emphasized privacy protection, surveillance, and non-discrimination as primary areas of interest, and provided guidelines on how trustworthy AI should be [3, 15, 23]. Finally, we reflect on future directions for educational and other research that could support K-12 teachers and students in reaping the benefits while mitigating the drawbacks of AI in education.

Definition and applications of artificial intelligence

The pursuit of creating intelligent machines that replicate human behavior has accelerated with the realization of artificial intelligence. With the latest advancements in computer science, a proliferation of definitions and explanations of what counts as AI systems has emerged. For instance, AI has been defined as “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings” [ 49 ]. This particular definition highlights the mimicry of human behavior and consciousness. Furthermore, AI has been defined as “the combination of cognitive automation, machine learning, reasoning, hypothesis generation and analysis, natural language processing, and intentional algorithm mutation producing insights and analytics at or above human capability” [ 31 ]. This definition incorporates the different sub-fields of AI together and underlines their function while reaching at or above human capability.

Combining these definitions, artificial intelligence can be described as technology that builds systems to think and act like humans, with the ability to achieve goals. AI is mainly known through different applications and advanced computer programs, such as recommender systems (e.g., YouTube, Netflix), personal assistants (e.g., Apple’s Siri), facial recognition systems (e.g., Facebook’s face detection in photographs), and learning apps (e.g., Duolingo) [32]. Building on these programs, different sub-fields of AI have been used in a diverse range of applications. Evolutionary algorithms and machine learning are most relevant to AI in K-12 education.

Algorithms are the core elements of AI. The history of AI is closely connected to the development of sophisticated and evolutionary algorithms. An algorithm is a set of rules or instructions that a computer follows in problem-solving operations to achieve an intended end goal. In essence, all computer programs are algorithms. They involve thousands of lines of code representing mathematical instructions that the computer follows to solve the intended problems (e.g., computing a numerical calculation, processing an image, or checking the grammar in an essay). AI algorithms are applied to fields that we might think of as essentially human behavior, such as speech and face recognition, visual perception, decision-making, and learning. In that way, algorithms can provide instructions for almost any AI system and application we can conceive [27]; a toy example follows below.
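To ground this definition, the following toy example (ours, not drawn from the cited literature) shows an algorithm in the plain sense used above: a fixed sequence of instructions, here a deliberately simplistic spell check against a small, invented word list.

```python
# A toy algorithm: a fixed sequence of instructions the computer follows
# to reach a defined goal. Here the goal is a (deliberately simplistic)
# spell check of an essay against a small, invented word list.
KNOWN_WORDS = {"the", "students", "learn", "with", "feedback"}

def flag_unknown_words(essay: str) -> list[str]:
    """Return the words in the essay that are not on the known-word list."""
    flagged = []
    for word in essay.lower().split():       # step 1: break text into tokens
        cleaned = word.strip(".,;:!?")       # step 2: normalize each token
        if cleaned and cleaned not in KNOWN_WORDS:
            flagged.append(cleaned)          # step 3: record rule violations
    return flagged

print(flag_unknown_words("Students lern with feedback."))  # ['lern']
```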

Machine learning

Machine learning is derived from statistical learning methods and uses data and algorithms to perform tasks which are typically performed by humans [43]. Machine learning is about making computers act or perform without being given line-by-line instructions [29]. The working mechanism of machine learning is the learning model’s exposure to ample amounts of quality data [41]. Machine-learning algorithms first analyze the data to determine patterns and build a model, and then predict future values through that model. In other words, machine learning can be considered a three-step process: first, it analyzes and gathers the data; then, it builds a model suited to the task; and finally, it takes action and produces the desired results without human intervention [29, 56]. The widely known AI applications such as recommender or facial recognition systems have all been made possible through the working principles of machine learning.
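A minimal sketch of this three-step process, assuming the widely used scikit-learn library and fabricated practice-time data, might look like the following.

```python
# The three-step machine-learning process described above, in miniature:
# (1) gather the data, (2) fit a model to its patterns, (3) predict an
# unseen value without hand-written rules. All data are invented.
from sklearn.linear_model import LinearRegression

# Step 1: data -- hours of practice (feature) and quiz scores (target).
hours = [[1], [2], [3], [4], [5]]
scores = [55, 62, 70, 74, 83]

# Step 2: build a model that captures the pattern in the data.
model = LinearRegression().fit(hours, scores)

# Step 3: predict a future value from the learned pattern.
print(model.predict([[6]]))  # roughly 89 for this toy data
```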

Benefits of AI applications in education

Personalized learning systems, automated assessments, facial recognition systems, chatbots (social media sites), and predictive analytics tools are being deployed increasingly in K-12 educational settings; they are powered by machine-learning systems and algorithms [ 29 ]. These applications of AI have shown promise to support teachers and students in various ways: (a) providing instruction in mixed-ability classrooms, (b) providing students with detailed and timely feedback on their writing products, (c) freeing teachers from the burden of possessing all knowledge and giving them more room to support their students while they are observing, discussing, and gathering information in their collaborative knowledge-building processes [ 26 , 50 ]. Below, we outline benefits of each of these educational applications in the K-12 setting before turning to a synthesis of their ethical challenges and drawbacks.

Personalized learning systems

Personalized learning systems, also known as adaptive learning platforms or intelligent tutoring systems, are one of the most common and valuable applications of AI to support students and teachers. They provide students access to different learning materials based on their individual learning needs and subjects [55]. For example, rather than practicing chemistry on a worksheet or reading a textbook, students may use an adaptive and interactive multimedia version of the course content [39]. Research comparing students’ scores on researcher-developed or standardized tests shows that instruction based on personalized learning systems results in higher test scores than traditional teacher-led instruction [36]. Microsoft’s 2018 report on over 2000 students and teachers from Singapore, the U.S., the UK, and Canada shows that AI supports students’ learning progressions. These platforms promise to identify gaps in students’ prior knowledge by accommodating learning tools and materials to support students’ growth. These systems generate models of learners using their knowledge and cognition; however, the existing platforms do not yet model learners’ social, emotional, and motivational states [28]. Considering the shift to remote K-12 education during the COVID-19 pandemic, personalized learning systems offer a promising form of distance learning that could reshape K-12 instruction for the future [35].
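As a hedged illustration of the core idea, not of any specific commercial platform, the sketch below keeps a simple per-skill mastery estimate for each student and selects the next activity accordingly; the skills, thresholds, and update rule are invented for the example.

```python
# An illustrative learner model: a per-skill mastery estimate drives which
# activity a student sees next, and each answer nudges the estimate.
mastery = {"fractions": 0.35, "decimals": 0.80}  # estimated P(skill mastered)

def next_activity(skill: str) -> str:
    """Pick easier practice for weak skills, enrichment for strong ones."""
    p = mastery[skill]
    if p < 0.5:
        return f"guided practice on {skill}"
    if p < 0.8:
        return f"independent practice on {skill}"
    return f"challenge problems on {skill}"

def update_mastery(skill: str, correct: bool, rate: float = 0.2) -> None:
    """Nudge the estimate toward 1 on a correct answer, toward 0 otherwise."""
    target = 1.0 if correct else 0.0
    mastery[skill] += rate * (target - mastery[skill])

update_mastery("fractions", correct=True)
print(next_activity("fractions"))  # still guided practice (estimate ~0.48)
```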

Automated assessment systems

Automated assessment systems are becoming one of the most prominent and promising applications of machine learning in K-12 education [42]. These scoring algorithm systems are being developed to score students’ writing, exams, and assignments, tasks usually performed by the teacher. Assessment algorithms can provide course support and management tools to lessen teachers’ workload, as well as extend their capacity and productivity. Ideally, these systems can provide levels of support to students, as their essays can be graded quickly [55]. Providers of the biggest open online courses, such as Coursera and EdX, have integrated automated scoring engines into their learning platforms to assess the writing of hundreds of students [42]. Similarly, a tool called “Gradescope” has been used by over 500 universities to develop and streamline scoring and assessment [12]. By flagging wrong answers and marking correct ones, the tool supports instructors by reducing their manual grading time and effort. Automated essay-scoring systems thus handle marking and feedback very differently from numeric assessments, which simply analyze whether test answers are right or wrong. Overall, these scoring systems have the potential to deal with the complexities of the teaching context and support students’ learning process by providing feedback and guidance to improve and revise their writing.
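The following sketch shows, under strong simplifying assumptions, the shape of such a scoring pipeline: essays are converted into numeric features and a regression model is fit to teacher-assigned scores. It uses scikit-learn, and the four training essays and rubric scores are fabricated; production systems train on thousands of graded essays with much richer features.

```python
# A bare-bones automated essay-scoring pipeline: essays become TF-IDF
# features, and a ridge regression maps those features to rubric scores.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

essays = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Plants use sunlight, water, and carbon dioxide to make glucose.",
    "Plants are green and grow outside in the garden.",
    "I like plants.",
]
teacher_scores = [5, 5, 2, 1]  # hypothetical 1-5 rubric scores

scorer = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
scorer.fit(essays, teacher_scores)

new_essay = "Plants turn sunlight and carbon dioxide into chemical energy."
print(scorer.predict([new_essay]))  # an estimated rubric score
```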

Facial recognition systems and predictive analytics

Facial recognition software is used to capture and monitor students’ facial expressions. These systems provide insights about students’ behaviors during learning processes and allow teachers to take action or intervene, which, in turn, helps teachers develop learner-centered practices and increase students’ engagement [55]. Predictive analytics algorithm systems are mainly used to identify and detect patterns about learners based on statistical analysis. For example, these analytics can be used to detect university students who are at risk of failing or not completing a course. Through these identifications, instructors can intervene and get students the help they need [55].
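A minimal sketch of such a predictive-analytics model, assuming scikit-learn and fabricated engagement data, could classify at-risk students from simple features like logins and assignment submission rates.

```python
# A toy predictive-analytics model: logistic regression estimating the
# risk that a student will not complete a course, from two engagement
# features. All numbers are fabricated for illustration.
from sklearn.linear_model import LogisticRegression

# Features per student: [logins per week, % of assignments submitted]
X = [[5, 95], [4, 80], [1, 30], [0, 10], [3, 60], [1, 20]]
y = [0, 0, 1, 1, 0, 1]  # 1 = did not complete the course

clf = LogisticRegression().fit(X, y)

# Estimated risk for a student with 2 logins/week and 40% submissions,
# which an instructor could use as a prompt to reach out.
print(clf.predict_proba([[2, 40]])[0][1])
```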

Social networking sites and chatbots

Social networking sites (SNSs) connect students and teachers through social media outlets. Researchers have emphasized the importance of using SNSs (such as Facebook) to expand learning opportunities beyond the classroom, monitor students’ well-being, and deepen student–teacher relations [5]. Different scholars have examined the role of social media in education, describing its impact on student and teacher learning and scholarly communication [6]. They point out that the integration of social media can foster students’ active learning, collaboration skills, and connections with communities beyond the classroom [6]. Chatbots also appear in social media outlets through different AI systems [21]. They are also known as dialogue systems or conversational agents [26, 52]. Chatbots are helpful in terms of their ability to respond naturally with a conversational tone. For instance, a text-based chatbot system called “Pounce” was used at Georgia State University to help students through the registration and admission process, as well as financial aid and other administrative tasks [7].

In summary, applications of AI can positively impact students’ and teachers’ educational experiences and help them address instructional challenges and concerns. On the other hand, AI cannot be a substitute for human interaction [22, 47]. Students have a wide range of learning styles and needs. Although AI can be a time-saving and cognitive aid for teachers, it is but one tool in the teachers’ toolkit. Therefore, it is critical for teachers and students to understand the limits, potential risks, and ethical drawbacks of AI applications in education if they are to reap the benefits of AI and minimize the costs [11].

Ethical concerns and potential risks of AI applications in education

The ethical challenges and risks posed by AI systems seemingly run counter to marketing efforts that present algorithms to the public as if they are objective and value-neutral tools. In essence, algorithms reflect the values of their builders, who hold positions of power [26]. Whenever people create algorithms, they also create a set of data that represents society’s historical and systemic biases, which ultimately transform into algorithmic bias. Even though the bias is embedded into the algorithmic model with no explicit intention, we can see various gender and racial biases in different AI-based platforms [54].
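One simple way to surface this kind of bias, sketched below with fabricated predictions and group labels, is to compare a model’s error rate across demographic groups; a large gap is a warning sign that the system serves some students worse than others.

```python
# Comparing a model's error rate across demographic groups, a common
# first check for algorithmic bias. The (group, truth, prediction)
# records below are fabricated for illustration.
from collections import defaultdict

records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 0), ("B", 0, 1),
]

errors, totals = defaultdict(int), defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    errors[group] += int(truth != pred)

for group in sorted(totals):
    print(group, errors[group] / totals[group])  # A: 0.0, B: 0.75
```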

Considering the different forms of bias and ethical challenges of AI applications in K-12 settings, we will focus on problems of privacy, surveillance, autonomy, bias, and discrimination (see Fig. 1). However, it is important to acknowledge that educators will have different ethical concerns and challenges depending on their students' grade and age of development. Where strategies and resources are recommended, we indicate the age and/or grade level of the students they target (Fig. 2).

Fig. 1. Potential ethical and societal risks of AI applications in education

Fig. 2. Student work from the "YouTube Redesign" activity (MIT Media Lab, AI and Ethics Curriculum, p. 1, [ 45 ])

One of the biggest ethical issues surrounding the use of AI in K-12 education relates to the privacy concerns of students and teachers [ 47 , 49 , 54 ]. Privacy violations mainly occur as people expose an excessive amount of personal information on online platforms. Although legislation and standards exist to protect sensitive personal data, AI-based tech companies' violations with respect to data access and security increase people's privacy concerns [ 42 , 54 ]. To address these concerns, AI systems ask for users' consent to access their personal data. Although consent requests are designed as protective measures and to help alleviate privacy concerns, many individuals give their consent without knowing or considering the extent of the information (metadata) they are sharing, such as the language spoken, racial identity, biographical data, and location [ 49 ]. Such uninformed sharing in effect undermines human agency and privacy. In other words, people's agency diminishes as AI systems reduce introspective and independent thought [ 55 ]. Relatedly, scholars have raised the ethical issue of forcing students and parents to use these algorithms as part of their education, whether or not they are willing to give up their privacy [ 14 , 48 ]. They really have no choice if these systems are required by public schools.

Another ethical concern surrounding the use of AI in K-12 education involves surveillance or tracking systems, which gather detailed information about the actions and preferences of students and teachers. Through algorithms and machine-learning models, AI tracking systems not only monitor users' activities but also predict their future preferences and actions [ 47 ]. Surveillance mechanisms can be embedded into AI's predictive systems to foresee students' learning performances, strengths, weaknesses, and learning patterns. For instance, research suggests that teachers who use social networking sites (SNSs) for pedagogical purposes encounter a number of problems, such as concerns about boundaries of privacy and friendship authority, as well as responsibility and availability [ 5 ]. While monitoring and patrolling students' actions might be considered part of a teacher's responsibility and a pedagogical tool for intervening in dangerous online situations (such as cyber-bullying or exposure to sexual content), such actions can also be seen as surveillance systems that threaten students' privacy. Monitoring and tracking students' online conversations and actions may also limit their participation in the learning event and make them feel unsafe taking ownership of their ideas. How can students feel secure and safe if they know that AI systems are used for surveilling and policing their thoughts and actions? [ 49 ]

Problems also emerge when surveillance systems trigger issues related to autonomy, more specifically, a person's ability to act on his or her own interests and values. Predictive systems powered by algorithms jeopardize students' and teachers' autonomy and their ability to govern their own lives [ 46 , 47 ]. The use of algorithms to make predictions about individuals' actions based on their information raises questions about fairness and personal freedom [ 19 ]. The risks of predictive analysis therefore also include the perpetuation of existing biases and prejudices of social discrimination and stratification [ 42 ].

Finally, bias and discrimination are critical concerns in debates about AI ethics in K-12 education [ 6 ]. In AI platforms, existing power structures and biases are embedded into machine-learning models [ 6 ]. Gender bias is one of the most apparent forms of this problem; the bias is revealed when students in language learning courses use AI to translate between a gender-specific language and one that is less so. For example, while Google Translate rendered the Turkish equivalent of "She/he is a nurse" in the feminine form, it rendered the Turkish equivalent of "She/he is a doctor" in the masculine form [ 33 ]. This shows how AI models used in language translation carry the societal biases and gender-specific stereotypes present in their data [ 40 ]. Similarly, a number of problematic cases of racial bias are associated with AI's facial recognition systems. Research shows that facial recognition software has misidentified a number of African American and Latino American people as convicted felons [ 42 ].
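One way to make such bias concrete is to audit a system's error rates separately for each demographic group. The sketch below does this with synthetic data; a real audit of a facial recognition system would use its actual predictions and ground-truth labels.

```python
# An illustrative fairness audit on synthetic data: compare how often
# members of each group are wrongly flagged (false-positive rate).
import numpy as np

groups     = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])
true_match = np.array([0, 0, 1, 0, 0, 0, 1, 0])  # 1 = genuinely a match
predicted  = np.array([0, 1, 1, 1, 1, 0, 1, 0])  # system output

for g in ("A", "B"):
    # Restrict to people in group g who are NOT actually matches...
    innocent = (groups == g) & (true_match == 0)
    # ...and measure how often the system flagged them anyway.
    fpr = predicted[innocent].mean()
    print(f"group {g}: false-positive rate = {fpr:.2f}")
```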

Additionally, biased decision-making algorithms reveal themselves throughout AI applications in K-12 education: personalized learning, automated assessment, SNSs, and predictive systems. Although the main promise of machine-learning models is increased accuracy and objectivity, recent incidents have revealed the contrary. For instance, England's A-level and GCSE secondary-level examinations were cancelled due to the pandemic in the summer of 2020 [ 1 , 57 ], and an alternative assessment method was implemented to determine students' qualification grades. The grade standardization algorithm was produced by the regulator Ofqual. Because Ofqual's algorithm based grades on schools' previous examination results, thousands of students were shocked to receive unexpectedly low grades. Although a full discussion of the incident is beyond the scope of this article [ 51 ], it revealed how the score distribution favored students who attended private or independent schools, while students from underrepresented groups were hit hardest. Unfortunately, automated assessment algorithms have the potential to reproduce unfair and inconsistent results, disrupting students' final scores and future careers [ 53 ].
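A deliberately simplified sketch can show why anchoring grades to a school's historical distribution is risky. The function below is an invented toy, not Ofqual's actual algorithm: it hands this year's students last year's grade distribution in rank order, so a strong student at a historically low-performing school cannot outscore the school's past best result.

```python
# A toy sketch of distribution-based grade standardization (NOT
# Ofqual's actual algorithm), illustrating how historical school
# results can override individual achievement.
def standardize(teacher_ranks, historical_grades):
    """Assign last year's grade distribution to this year's students
    in rank order (rank 1 = top of class), ignoring each student's
    own assessed level."""
    pool = sorted(historical_grades)  # letter grades sort best-to-worst
    ranked = sorted(teacher_ranks, key=teacher_ranks.get)
    return {student: pool[i] for i, student in enumerate(ranked)}

# A top student at a school whose best grade last year was a C...
print(standardize({"Amara": 1, "Ben": 2, "Cem": 3}, ["C", "D", "E"]))
# ...is capped at a C, whatever her actual work deserved.
```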

Teaching and understanding AI and ethics in educational settings

These ethical concerns suggest an urgent need to introduce students and teachers to the ethical challenges surrounding AI applications in K-12 education and to ways of navigating them. To meet this need, different research groups and nonprofit organizations offer a number of open-access resources on AI and ethics. They provide instructional materials for students and teachers, such as lesson plans and hands-on activities, and professional learning materials for educators, such as open virtual learning sessions. Below, we describe and evaluate three resources: the "AI and Ethics" curriculum and the "AI and Data Privacy" workshop from the Massachusetts Institute of Technology (MIT) Media Lab, as well as Code.org's "AI for Oceans" activity. Readers who seek additional approaches and resources for K-12 AI and ethics instruction may see: (a) the Chinese University of Hong Kong (CUHK)'s AI for the Future Project (AI4Future) [ 18 ]; (b) IBM's Educator's AI Classroom Kit [ 30 ]; (c) Google's Teachable Machine [ 25 ]; (d) the UK-based nonprofit organization Apps for Good [ 4 ]; and (e) Machine Learning for Kids [ 37 ].

"AI and Ethics Curriulum" for middle school students by MIT Media Lab

The MIT Media Lab team offers an open-access curriculum on AI and ethics for middle school students and teachers. Through a series of lesson plans and hands-on activities, teachers are guided to support students' learning of the technical terminology of AI systems as well as the ethical and societal implications of AI [ 2 ]. The curriculum includes various lessons tied to learning objectives. One of the main learning goals is to introduce students to the basic components of AI (algorithms, datasets, and supervised machine-learning systems) while underlining the problem of algorithmic bias [ 45 ]. For instance, in the activity "AI Bingo", students are given bingo cards featuring various AI systems, such as online search engines, customer service bots, and weather apps. Working collaboratively with partners, students try to identify what prediction each selected AI system makes and what dataset it uses. In this way, they become more familiar with the notions of dataset and prediction in the context of AI systems [ 45 ].

In the second investigation, "Algorithms as Opinions", students think about algorithms as recipes: sets of instructions that modify an input to produce an output [ 45 ]. Initially, students are asked to write an algorithm to make the "best" peanut butter and jelly sandwich. They explore what it means to be "best" and see how their opinions of "best" are reflected in their recipes and thus in their algorithms. In this way, students are able to figure out that algorithms can have various motives and goals. Following this activity, students work on the "Ethical Matrix", building on the idea of algorithms as opinions [ 45 ]. During this investigation, students first refer back to the algorithms they developed for the "best" peanut butter and jelly sandwich. They discuss what counts as the "best" sandwich for themselves (most healthy, practical, delicious, etc.). Then, in an ethical matrix (chart), students identify the different stakeholders (such as their parents, teacher, or doctor) who care about their peanut butter and jelly sandwich algorithm, recognizing that the values and opinions of those stakeholders are also embedded in the algorithm. Students fill out the ethical matrix and look for where those values conflict or overlap with each other. The matrix is a great tool for students to recognize the different stakeholders in a system or society and to see how stakeholders' values are built into an algorithm.
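The "algorithms as opinions" framing can be made literal in a few lines of code, shown below as a hypothetical classroom illustration: two authors write the "same" best-sandwich algorithm, but each author's definition of "best" is hard-coded into it.

```python
# "Algorithms as opinions", made literal: the same task, two authors,
# two baked-in value judgments. The ingredients are invented examples.
def best_sandwich_health_first(ingredients):
    # This author's opinion: "best" means lowest in sugar.
    return sorted(ingredients, key=lambda i: i["sugar_grams"])[:2]

def best_sandwich_taste_first(ingredients):
    # This author's opinion: "best" means sweetest.
    return sorted(ingredients, key=lambda i: -i["sugar_grams"])[:2]

pantry = [
    {"name": "grape jelly", "sugar_grams": 12},
    {"name": "natural peanut butter", "sugar_grams": 1},
    {"name": "honey", "sugar_grams": 17},
]
print([i["name"] for i in best_sandwich_health_first(pantry)])  # health view
print([i["name"] for i in best_sandwich_taste_first(pantry)])   # taste view
```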

The final investigation, which teaches about the biased nature of algorithms, is "Learning and Algorithmic Bias" [ 45 ]. During this investigation, students think further about the concept of classification. Using Google's Teachable Machine tool [ 2 ], students explore supervised machine-learning systems by training a cat–dog classifier on two different datasets. While the first dataset over-represents cats, the second dataset contains an equal and diverse representation of dogs and cats [ 2 ]. Using these datasets, students compare the accuracy of the two classifiers and then discuss which dataset and outcome are fairer. This activity leads students into a discussion about the occurrence of bias in facial recognition algorithms and systems [ 2 ].
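The same point can be reproduced outside Teachable Machine with a few lines of scikit-learn on synthetic data, as in the hedged sketch below: the identical learning algorithm, trained once on an imbalanced dataset and once on a balanced one, treats the under-represented class noticeably differently.

```python
# Sketch of the cat-dog activity on synthetic 2-D "image features":
# the same classifier, trained on imbalanced vs. balanced data,
# differs in how well it recognizes the under-represented class.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n_cats, n_dogs):
    cats = rng.normal(loc=0.0, scale=1.0, size=(n_cats, 2))
    dogs = rng.normal(loc=1.5, scale=1.0, size=(n_dogs, 2))
    X = np.vstack([cats, dogs])
    y = np.array([0] * n_cats + [1] * n_dogs)  # 0 = cat, 1 = dog
    return X, y

test_X, test_y = make_data(200, 200)  # balanced held-out test set

for name, (n_cats, n_dogs) in [("imbalanced", (195, 5)), ("balanced", (100, 100))]:
    X, y = make_data(n_cats, n_dogs)
    clf = LogisticRegression().fit(X, y)
    dog_accuracy = clf.score(test_X[test_y == 1], test_y[test_y == 1])
    print(f"{name} training set: accuracy on dogs = {dog_accuracy:.2f}")
```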

In the rest of the curriculum, similar to the AI Bingo investigation, students work with their partners to identify the various forms of AI systems on the YouTube platform (such as its recommender algorithm and advertisement-matching algorithm). In the "YouTube Redesign" investigation, students redesign YouTube's recommender system. They first identify stakeholders and their values in the system, and then use an ethical matrix to reflect on the goals of their YouTube recommendation algorithm [ 45 ]. Finally, in the "YouTube Socratic Seminar" activity, students read an abridged version of a Wall Street Journal article while participating in a Socratic seminar. The article was edited to shorten the text and to provide more accessible language for middle school students. Students discuss which stakeholders were most influential or significant in proposing changes to the YouTube Kids app and whether or not technologies like autoplay should ever exist. During their discussion, students engage with questions such as: "Which stakeholder is making the most change or has the most power?" and "Have you ever seen an inappropriate piece of content on YouTube? What did you do?" [ 45 ]

Overall, the MIT Media Lab's AI and Ethics curriculum is a high-quality, open-access resource with which teachers can introduce middle school students to the risks and ethical implications of AI systems. The investigations described above involve students in collaborative, critical-thinking activities that push them to wrestle with issues of bias and discrimination in AI, as well as with issues of surveillance and autonomy raised by predictive systems and algorithmic bias.

“AI and Data Privacy” workshop series for K-9 students by MIT Media Lab

Another quality resource, from the MIT Media Lab's Personal Robots Group, is a workshop series designed to teach students (between the ages of 7 and 14) about data privacy and to introduce them to designing and prototyping data privacy features. The group has made the content, materials, worksheets, and activities of the workshop series available as an open-access online document, freely available to teachers [ 44 ].

The first workshop in the series is "Mystery YouTube Viewer: A Lesson on Data Privacy". During the workshop, students engage with the question of what privacy and data mean [ 44 ]. They observe YouTube's home page from the perspective of a mystery user and, using clues from the videos, make predictions about what the characters in the videos might look like or where they might live. In doing so, students imitate how YouTube's algorithms make predictions about viewers. Engaging with these questions and observations, students think further about why privacy and boundaries are important and about how each algorithm will interpret us differently depending on who creates the algorithm itself.

The second workshop in the series is "Designing Ads with Transparency: A Creative Workshop". In this workshop, students think further about the meaning, aim, and impact of advertising and the role of advertisements in our lives [ 44 ]. Students collaboratively create an advertisement for an everyday object, with the objective of making the advertisement as "transparent" as possible. To do that, students learn about the notions of malware and adware, as well as the components of YouTube advertisements (such as sponsored labels, logos, news sections, etc.). By the end of the workshop, students design their ads as posters and share them with their peers.

The final workshop in MIT's AI and data privacy series is "Designing Privacy in Social Media Platforms". This workshop is designed to teach students about YouTube, design, civics, and data privacy [ 44 ]. During the workshop, students create their own designs to address one of the biggest challenges of the digital era: problems associated with online consent. The workshop allows students to learn more about privacy laws and how they impact youth in terms of media consumption. Students consider YouTube through the lens of the Children's Online Privacy Protection Rule (COPPA). In this way, students reflect on one of the components of the legislation: how might students get parental permission (or verifiable consent)?

Such workshop resources seem promising in helping educate students and teachers about the ethical challenges of AI in education. Specifically, social media such as YouTube are widely used as a teaching and learning tool within K-12 classrooms and beyond them, in students’ everyday lives. These workshop resources may facilitate teachers’ and students’ knowledge of data privacy issues and support them in thinking further about how to protect privacy online. Moreover, educators seeking to implement such resources should consider engaging students in the larger question: who should own one’s data? Teaching students the underlying reasons for laws and facilitating debate on the extent to which they are just or not could help get at this question.

Investigation of “AI for Oceans” by Code.org

A third recommended resource for K-12 educators trying to navigate the ethical challenges of AI with their students comes from Code.org, a nonprofit organization focused on expanding students' participation in computer science. Sponsored by Microsoft, Facebook, Amazon, Google, and other tech companies, Code.org aims to provide opportunities for K-12 students to learn about AI and machine-learning systems [ 20 ]. To support students (grades 3–12) in learning about AI, algorithms, machine learning, and bias, the organization offers an activity called "AI for Oceans", in which students train their own machine-learning models.

The activity is provided as an open-access tutorial with which teachers can help their students explore how models are trained to classify data, as well as understand how human bias plays a role in machine-learning systems. During the activity, students first classify objects as either "fish" or "not fish" in an attempt to remove trash from the ocean. Then, they expand their training dataset by including other sea creatures that belong underwater. Throughout the activity, students can also watch and interact with a number of visuals and video tutorials. With the support of their teachers, they discuss machine learning, the steps and influence of training data, and the formation and risks of biased data [ 20 ].

Future directions for research and teaching on AI and ethics

In this paper, we provided an overview of the possibilities and potential ethical and societal risks of AI integration in education. To help address these risks, we highlighted several instructional strategies and resources for practitioners seeking to integrate AI applications in K-12 education and/or instruct students about the ethical issues they pose. These instructional materials have the potential to help students and teachers reap the powerful benefits of AI while navigating ethical challenges especially related to privacy concerns and bias. Existing research on AI in education provides insight on supporting students’ understanding and use of AI [ 2 , 13 ]; however, research on how to develop K-12 teachers’ instructional practices regarding AI and ethics is still in its infancy.

Moreover, current resources, as demonstrated above, mainly address privacy and bias-related ethical and societal concerns of AI. Conducting more exploratory and critical research on teachers’ and students’ surveillance and autonomy concerns will be important to designing future resources. In addition, curriculum developers and workshop designers might consider centering culturally relevant and responsive pedagogies (by focusing on students’ funds of knowledge, family background, and cultural experiences) while creating instructional materials that address surveillance, privacy, autonomy, and bias. In such student-centered learning environments, students voice their own cultural and contextual experiences while trying to critique and disrupt existing power structures and cultivate their social awareness [ 24 , 36 ].

Finally, as scholars in teacher education and educational technology, we believe that educating future generations of diverse citizens to participate in the ethical use and development of AI will require more professional development for K-12 teachers (both pre-service and in-service). For instance, through sustained professional learning sessions, teachers could engage with suggested curriculum resources and teaching strategies as well as build a community of practice where they can share and critically reflect on their experiences with other teachers. Further research on such reflective teaching practices and students’ sense-making processes in relation to AI and ethics lessons will be essential to developing curriculum materials and pedagogies relevant to a broad base of educators and students.

This work was supported by the Graduate School at Michigan State University, College of Education Summer Research Fellowship.

Declarations

The authors declare that they have no conflict of interest.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Selin Akgun, Email: akgunsel@msu.edu

Christine Greenhow, Email: greenhow@msu.edu



Explore insights from the AI in Education Report

April 25, 2024.

By Microsoft Education Team


The swift rise of generative AI is reshaping how schools approach creation, problem-solving, learning, and communication. Your schools are at a pivotal moment, when critical thinking and metacognitive skills are more important than ever as new technology develops.

As we continue to learn, Microsoft believes it is important to share our early findings from our AI in Education Report. In this report, we highlight insights from our research, as well as research from partner organizations.

Key takeaways from the AI in Education Report include:

  • Start AI conversations today. There is an urgent need to communicate clearly and openly about AI, increase AI literacy, and create usage guidelines at educational organizations.
  • Learn how AI can help. There is a clear opportunity for AI to help educators and administrators lighten workloads, boost productivity, and improve efficiency.
  • Explore new ways to learn with AI. Early studies demonstrate the potential of AI to improve educational experiences and learning outcomes.
  • Prepare for the workplace of the future. Students need to build people skills and technical capacity to prepare for a world transformed by AI.

Explore the AI in Education Report for resources and recommendations that represent the opportunities of this unique moment.

Start AI conversations today

When you’re getting started with using AI tools, it’s common to begin with figuring out ways to make everyday tasks easier. In education, AI also brings opportunities to provide actionable insights, improve learning outcomes, and make more time for human connection and collaboration. But there are also challenges to navigate and overcome to realize that potential. To better understand the needs and opportunities around AI in education, Microsoft surveyed educators, academic and IT leaders, and students from K-12 schools and higher education institutions about their perceptions, familiarity, uses, and concerns around AI tools.

Sample findings from the survey include:

  • 47% of education leaders use AI every day
  • 68% of all educators have used AI at least once or twice
  • 62% of all students have used AI at least once or twice


Survey results from the AI in Education Report show a comparison of the familiarity and usage of AI between leaders, all educators, and students in school settings. It highlights the significant difference in daily use of AI among these groups.

Despite generally low familiarity with AI, especially among students, it’s noteworthy that respondents from each group are using AI. This widespread adoption underscores the need for clear guidance and practical frameworks to help navigate the complexities of AI in education. Concerns about cheating are prevalent across all groups, including students, further highlighting the importance of establishing transparent and consistent guidance.

Take these next steps to start AI conversations at your school or institution:

  • Request that your school or district leaders create clear guidelines and policies and provide professional learning opportunities. Consider sharing the TeachAI Toolkit as a resource.
  • Help students learn how to use AI responsibly without compromising their academic integrity by setting clear expectations.

Common ways that generative AI tools are used in schools

AI can enable personalized learning, free up time for educators to focus on what matters most, and help address issues of equity and accessibility. It can also improve operational efficiency, bringing much-needed support to overburdened administrators and IT teams. There is a clear opportunity for AI to help educators and administrators lighten workloads, boost productivity, and improve efficiency.

Among respondents who report using AI, some of the most common tasks they use it for include:

  • Leaders  use AI tools mostly to improve efficiency of operational and administrative processes, improve access to resources, support communication with students, and identify opportunities for student improvement.
  • Educators  use AI tools mainly to create or update lesson plans, brainstorm new ideas, simplify complex topics, free up their time, and differentiate instruction to address students’ needs.
  • Students  use AI tools mostly to summarize information, help them brainstorm, get answers or information quickly, get initial feedback, and improve their writing skills.


Survey results from the AI in Education Report show the widespread use and potential of AI in enhancing learning experiences and outcomes for different roles.

Learn how AI can help your school

Each month, the heaviest Microsoft 365 Education users receive hundreds of emails and chat messages as they work to get things done. AI can enable greater productivity in tasks like lesson planning and curriculum development, which make up 45% of teachers' responsibilities. That frees up time for educators to do the things only humans can do, like connecting with students.

Educational institutions are moving fast when it comes to AI, and they’re seeing significant returns on their investment. However, an IDC study on the opportunity of AI in education found that education leaders feel less prepared for AI-driven change than their peers in other industries.

Education organizations can take these steps to increase preparedness and develop a strategy:

  • Establish a guiding committee that defines and steers AI strategy, responsible use policies, governance models, and priorities.
  • Prepare for change by building a centralized, cross-functional AI team that can connect AI initiatives to the organization’s existing priorities and create training opportunities.
  • Prioritize high-value, low-complexity AI use cases. Start small, collect and respond to feedback, and plan for scalable and impactful solutions.

To hear more IDC insights from a Microsoft sponsored study, explore the following resources:

  • Read Education’s AI Journey Behind the Headlines
  • Watch AI’s Impact in Education Extends Far Beyond the Classroom

Explore new ways to learn with AI

Students and educators alike have already discovered benefits of using generative AI in the classroom, particularly when it is used as a personalized academic coach that encourages learning and engagement rather than simply giving answers.

Explore these key takeaways from early studies about the potential impact of generative AI on learning:

  • In December 2023, Microsoft Research and Harsh Kumar of the University of Toronto discovered that AI-generated explanations enhanced learning compared to solely viewing correct answers. The advantages were most significant for students who first attempted problems independently before receiving assistance.
  • A 2023 study by Harvard University and Yale University professors found that AI chatbots can give students in large classes an experience that approximates an ideal one-to-one relationship between educator and student.
One student shared that it “felt like having a personal tutor...I love how AI bots will answer questions without ego and without judgment, generally entertaining even the stupidest of questions without treating them like they’re stupid.”

Take these next steps to explore how AI can support student learning:

  • Model and encourage a growth mindset that includes learning, iteration, and curiosity.
  • Learn from others and explore educational AI resources.
  • Be intentional in your design of new AI experiences. What is your goal and how might AI help you achieve it?

Prepare for the workplace of the future

Workplaces, like classrooms, have been altered by the rise of generative AI tools. As a result, the skills that students need to learn have changed, too.

Among the important findings about the evolution of workplace skills: 82% of leaders surveyed for Microsoft's 2023 Work Trend Index say employees will need new skills to be prepared for the growth of AI. And learning to work alongside AI won't just be about building technical capacity. It will be necessary to prioritize people skills, along with new analytical, emotional, and critical thinking skills. According to the 2023 LinkedIn Future of Work Report, 92% of U.S. executives agree that people skills are more important than ever.


Survey results from Microsoft’s 2023 Work Trend Index show that skills like analytical judgment, flexibility, emotional intelligence, creative evaluation, intellectual curiosity, bias detection and handling, and AI delegation will be essential.

Take these steps to help prepare your students for future-ready skills:

  • Teach students metacognitive and human-centered skills including the ability to analyze, understand, and control their own thought processes. You can start by asking students why they agree or disagree with AI-generated content.
  • Model using AI tools to spark discussion and explore alternative views instead of only providing answers.

The rapid ascent of generative AI is revolutionizing how schools foster creativity, approach challenges, and enhance learning. Discover insights, resources, and recommendations in our AI in Education Report to seize the potential of this transformative era.


The past, present and future of AI in education

On UCI Podcast, Shayan Doroudi and Nia Nixon share expertise on tech evolutions in teaching and learning

Shayan Doroudi and Nia Nixon, assistant professors of education

ChatGPT, a popular artificial intelligence tool that can generate written materials in response to prompts, reached 100 million monthly active users in January 2023, just two months after its launch. The unprecedented escalation made OpenAI’s creation the fastest-growing consumer application in history, according to a report from UBS based on analytics from Similarweb .

Responses from the world of education to an AI tool that could write essays for students were immediate and mixed, with some teachers concerned about student cheating and plagiarism and others celebrating its potential to generate new ideas for instruction and lesson planning.

Various researchers in UC Irvine's School of Education are studying the wide range of technological innovations available to enhance education, including assistant professors Shayan Doroudi and Nia Nixon. One facet of Doroudi's research focuses on how different technologies improve learning. A segment of Nixon's work centers on developing AI-based interventions to promote inclusivity in team problem-solving environments.

How are artificial intelligence tools currently affecting teaching and learning? What are some of the most promising applications that have been developed so far? How are AI tools being used to personalize learning experiences – and what are the benefits and drawbacks of that approach? What’s next? These are some of the questions Nixon and Doroudi address in this episode of the UCI Podcast.

The music for this episode, titled “Computer Bounce,” was provided by Geographer via the audio library in YouTube Studio.

To get the latest episodes of the UCI Podcast delivered automatically, subscribe at:

Apple Podcasts – Spotify

Cara Capuano / The UCI Podcast:

From the University of California, Irvine, I’m Cara Capuano. Thank you for listening to The UCI Podcast. Today’s episode focuses on artificial intelligence and education, and we have a pair of guests willing to share their wisdom on this ever-changing topic. They’re from UC Irvine’s School of Education – Shayan Doroudi and Nia Nixon – both assistant professors.

Professor Doroudi runs the Mathe lab. Mathe is a Greek word for "learn," but it's also an acronym for Models, Analytics, Technologies, and Histories of Education. The lab has a particular focus on the multifaceted relationship between AI and education, including the historical connections between the two fields, which span more than 50 years.

Nixon heads the Language and Learning Analytics Lab – or LaLa Lab – which explores the intersections of technology with learning and education, with a particular focus on learning analytics, AI and collaborative engagement. Thank you both for joining us today.

Nia Nixon:

Thank you for having us.

Shayan Doroudi:

Yeah, thank you for having us.

Cara Capuano:

Let's start our conversation with what makes you tick. What first drew your attention to AI and education?

Nia Nixon:

So, for me, I was an undergrad at the University of Memphis, and I was exploring different research labs. So, I tried cognitive psychology and clinical psychology and then I got into what was called the Affective Computing Lab. And so, in that lab we did a lot of analysis and assessment of students' emotions while they were learning. So, we would track their pupil movements, postural shifts, language, while they were engaging with intelligent tutoring systems. It was inherently a very AI-focused lab and that sort of birthed my interest in the field and all of its possibilities.

Cara Capuano:

And what about you, Professor Doroudi?

Shayan Doroudi:

Yeah, so, I didn't start as early in my undergraduate career, but while I was an undergraduate student, I took a class online. It was a MOOC – massive open online course. So, there was one course that was about artificial intelligence, and it drew like 160,000 students or something. I was one of those many, many students. I liked the content of the course, but I also liked how the course was being delivered and the educational experience. I think that sort of seeded my interest, in some sense, in both AI and education.

I did an internship at Udacity, which was a company that put out that course. And at some point in that internship, I said, “I think I want to do this for my Ph.D. I want to study how to improve education with tools like AI and machine learning.” And so, that sort of started my experience.

And I didn't know about intelligent tutoring systems – which Nia referred to – but when I actually started my Ph.D. at Carnegie Mellon University, I realized, "Oh, people have been working on this for decades." And then I learned about intelligent tutoring systems and started working on them for my Ph.D. as well.

Cara Capuano:

It's nice for me to hear that you had "discovery moments" with the tools because they are ever-changing and, in the grand scheme of life, they're still fairly new. So, it's good to hear from who I see as seasoned experts in the field that you also had that new "ah ha!" moment and came to AI through kind of a genuine experience.

How are AI tools currently impacting teaching and learning, and what are some of the most promising applications that you’ve seen?

Shayan Doroudi:

It's interesting. If you asked me this like two years ago, I would've talked about certain tools, but I think probably most listeners are aware that things have changed a lot over the past year with ChatGPT and generative artificial intelligence. Now, there are so many new tools that are popping up, so many new ways that people are trying to use it. And one hope I have is that people don't forget that people have been actually working on this before ChatGPT.

There’s lots of things that we mentioned – intelligent tutoring systems. These are platforms to help students learn sort of in an individualized or personalized way to guide them through problem solving. So, there’s more traditional ones of those and then now, with ChatGPT, people are trying to create these chatbots that can help tutor students. And I think we’ll get to this a little bit later – there are pros and cons of the different approaches, and there’s things to watch out for. But yeah, I think there’s a lot of interesting tools being developed currently.

Nia Nixon:

I completely agree with Shayan. If you walk away with anything from this conversation, it's that this isn't a new field. Decades of research have been put into using AI in educational contexts. And a lot of those sort of fall into three super broad categories: assessment – using AI to assess education in different ways. Personalization – so, intelligent tutoring systems is a great example of that. And then educational delivery, content delivery. But that's definitely been incredibly transformed in the last two years by all of the things that he was just discussing.

One of the most promising things? That’s a huge question, and it’s really hard for me to even begin to answer because I also know that this is being recorded. So, I think what I think is the most promising thing in this moment today versus tomorrow will probably be different.

But I will say that I think the conversational aspects of these newer models – and the social aspects in the context of education – are huge. And what we can do with that – the human-like engagement that we can do – it opens the door for a host of different possibilities.

Cara Capuano:

Professor Nixon, you just talked about the personalization aspect, one of the ways that AI tools are being used. How do they personalize learning experiences for students? How can they do that?

Nia Nixon:

Right, great question. Historically, we've been able to sort of map out a student's knowledge of a particular subject and then provide them with – or expose them to – different levels of difficulty in content as they navigate through any educational platform. So that means you – as a novice – I might unfold things in a personalized way for you to not overwhelm you and not have you disengage or become frustrated.

Another way is dealing with emotion. So, as I mentioned earlier, I started out in an affective computing lab and one of the huge things that came out of that is that emotions are important for learning – which is odd that that's a relatively new finding – but when you're confused or frustrated, you're more likely to disengage than when you're in flow and everything disappears and everything is at the right level for you.

So, AI can be used to go, “Hey, I think you look a little confused. Let me give you a hint. Oh, it looks like you might have a misconception. Let me correct that for you.” So, you don’t slip into these unproductive states of learning – affective states of learning. So, those are two examples. There are tons more of how AI can be used to kind of personalize the learning journey for students.

Cara Capuano:

What are the benefits and the potential drawbacks of that kind of personalized approach?

Nia Nixon:

One of the drawbacks is our kind of over-reliance on technology. I struggle with this thought because it feels antiquated in some way. If you look at history, there was pushback on writing things down when we first started writing things down. There was pushback on the printing press. And there's pushback here because we're saying, "Oh, we're over-relying on technology and AI and we're outsourcing so much of our cognitive abilities to AI." But also, we got past all of these other obstacles, and those earlier worries weren't actually very accurate. So, there's a tension there when I call that a drawback.

Shayan Doroudi:

I think one benefit is that teachers can't necessarily give individualized attention to every student. So, if we are able to personalize experiences well for individual students, they might be able to get sort of a unique experience that they wouldn't otherwise be able to get in a large classroom.

At the same time, I don’t want to overemphasize that because I think there’s a lot of hype and a lot of companies will try to sell the products as doing this perfect kind of personalization, but we still haven’t figured it out really. And a good teacher can do certain things – or a good tutor can do certain things – that I don’t think we’ve been able to replicate with technology, with AI.

You know, we can personalize in certain ways, as Nia mentioned, but I think learning is very complex and this is something I’ve realized in my own research. I’ve tried to do some of this work, and I’ve realized it’s easier said than done, right? And so, learning is just very complex. And when you bring in the emotional pieces, the social pieces, like we don’t really know how to model all of that to know what’s the right thing to do.

And the technology’s limited by what it can do, whereas a teacher can say, “Okay, if this isn’t working, you know, let’s all just go outside. Let’s do something totally different.” And a teacher can come up with that on the spot. No AI tool that I know of is doing something like that.

With modern approaches now with these language-based tutors – these chatbots – they can seem like they can personalize very well, but they actually lack some of the rich understanding that Nia talked about earlier, like modeling exactly the kinds of knowledge that we want students to learn and knowing exactly what to do.

The way it’s approaching it is totally different. It’s doing it in a way that we don’t really – can’t really – predict what it’s going to do. And so, as researchers and educators, we don’t really know what it’s going to do. So, sometimes it’ll behave really well, and sometimes it might not – a lot of times it doesn’t actually. So, that’s one of the drawbacks to really be aware of.

Cara Capuano:

You alluded earlier, Professor Doroudi, to some of the ethical considerations that go into integrating AI into education. What do those look like?

Shayan Doroudi:

Yeah, I think there's a number of ethical considerations. One is data from students and data privacy issues. I'm not an expert on that, but I think, "Where's that data going? Who has access to it?" Sometimes these are companies that make these tools. What do they do with that data? Are they selling it to people or to other companies? And so, I think there's lots of considerations there.

And another one that I've been interested in – in my own work – is this issue of equity. AI has a lot of biases: when we fit models to data, that data can be biased in many different ways. And these biases sometimes, you know, it's not that someone's not well-intentioned. Sometimes we have the best of intentions, but now we're sort of ceding some of our authority to this tool that we've developed, and we don't really know what it's going to do in all cases.

So, it might behave differently with different students from different backgrounds. For example, with ChatGPT or these language-based AI tools, they're trained on data. And their data might be more representative of certain kinds of students and not others, right? And then when interacting with students who might speak different dialects or just come from a different cultural background from whatever cultural backgrounds were most represented in that data, the AI might interact differently with those students in ways that we might not even expect ahead of time.

Cara Capuano:

We've talked about some of the concerns that arise when we implement AI in the learning environment. Are there any that haven't been mentioned yet?

Nia Nixon:

When we think about what these systems could look like in a couple of years, they're going to move from just focusing on cognitive development – or primarily cognitive development – to becoming multimodal sensing agents. And by that, I mean we can start to have rooms that can track your facial expressions when you move in and out of them and track different physical shifts as well as your language and discourse, and use all of that for good, in one instance, where we're saying, "Oh, we can track when a student is stressed out, or different social developmental things that would be helpful."

I think another concern there is a different type of privacy that I don’t hear talked about a lot – beyond just the data privacy – but maybe we could call it like emotional privacy of students and sort of what we expect to be our internal states of being and being kind of exposed by these AI systems. And so, I think that that’s an interesting one – one I’m still percolating on. I don’t know how to best discuss it just yet, but I think that it will become, um, a topic of conversation moving forward.

Shayan Doroudi:

Yeah, there's a lot of concern that these tools are going to be used for surveillance, right, and for ill intentions, right? Like, it might sound like, "Oh, this is great. We're able to track all of these things. We have all these sensors in the classroom," and it's like, "Well, what are you doing with it?"

And as we've seen, for example, during the pandemic, a lot of universities and high schools were using proctoring software, and they would use video data to see whether a student was doing something – misbehaving. At the beginning, some of them used facial recognition software, and sometimes the software wouldn't detect students with darker skin tones, so there's issues like that. And then sometimes the student might be doing something – maybe they have a tic or something – and the software would flag them as cheating, right? So, it's surveillance that really has negative repercussions, again, due to biases that I mentioned earlier.

Cara Capuano:

With this increasing reliance on AI in education, how do we ensure equitable access to these technologies for all students, regardless of their socioeconomic background or perhaps their geographical location?

Nia Nixon:

I think that's kind of a task for policymakers, right? Prioritizing projects that are aimed at contributing to that – I think it's a huge one. And – to some of Shayan's concerns as well – we need policies in place to both protect students and ensure access to these things. And that's kind of two sides of the same coin, right? We want you to have it and we want to protect you from it as well.

Cara Capuano:

Looking ahead, what do you envision as the next breakthrough for the use of AI in education?

Nia Nixon:

Forefront in my mind is something that I've been very fascinated by for the last couple of years – and that we actually have a collaboration going on around. I also want to give a shout out to an article called "Machines as Teammates" – it's got like 12 authors. It's a conceptual piece all around "what does it look like when we stop using AI as a tool?"

So, like Alexa or Siri, like, “Hey, do this for me, put this on my shopping list.” And it becomes something akin to you and I speaking right now. We treat it very much like another human. We engage with it, we help it, it helps us, we navigate and solve problems together in teams.

And so, I think – to your question – I think the next kind of big breakthrough, or one of the next big breakthroughs that we are working on, is imagining, or starting to study, AI as teammates. So, AI not as a virtual agent or a virtual tutor, but AI as another peer – a student trying to solve the problem alongside you, with all the same and different, perhaps unique, emotions and cognitions and things. So, I think that that will be interesting to see.

Shayan Doroudi:

I'm always wary to make predictions because it's so hard, you know. I wouldn't have predicted this sort of boom in interest in AI and education that came about when ChatGPT got released. But I think one prediction that I might make is that the future of AI in education is going to be bespoke.

By that, I mean that we’re not going to see like one killer app that everyone’s going to be using and it’s going to be used in all schools. That’s never really happened in the history of educational technology. So many people have talked about the promise of a particular application or a particular software or tool, and for a while there was a lot of interest in that, and then it sort of died out for various reasons.

But I think what we see now happening is that sort of the AI is being put in the hands of people who previously couldn’t create their own tools. Now they can sort of try to create tools with the AI, right? Through things like prompt engineering, you can prompt the AI to behave in a certain way. And as I mentioned – as we’ve discussed earlier – this has a lot of limitations. You can’t always expect it to behave in the way you expect, but now teachers can create a custom application for their classroom that was not really possible before. Or school districts can come up with custom applications with AI that, again, wasn’t really possible before – you know, a company had to develop something that many school districts would adopt.

So, I think we're going to see a lot of these sort of custom tools being built by individual educators, by districts and various stakeholders, right? By students themselves, right? Students can create AI tools – that itself is an educational experience. So, I think we're going to see a sort of proliferation of different tools that we don't even know, you know… as researchers, we won't even know what's being developed. But some of them will be working really well in some cases. Some of them might not. And then, hopefully, they'll move on and try something different.

Cara Capuano:

What steps can educators and policymakers take to kind of prepare for whatever the next wave is? I mean, ChatGPT came in like a tsunami and washed over all of us and was a gigantic "wow!" And not knowing what's next, is there anything that educators and policymakers can do to get ready for that?

Shayan Doroudi:

Yeah, that's a tough one. I think part of getting ready for the next step is really understanding what's going on with what AI really is, how these tools work. And I think that speaks to AI literacy. You know, we talk about this for students, often. This has been a growing area: that students need to be literate about AI tools because they're common in society. So many jobs are now requiring them; otherwise, they might displace people's jobs – you know, a lot of the rhetoric that exists out there.

But I think teachers also need to be AI literate. They have to understand what these tools are, how they work, when they don’t work. And part of that AI literacy, I think, could be the more you have of it – if a new tool comes about, you can more quickly get up to speed with that, right? Rather than going from scratch to like, “Oh, I have to understand what this tool is entirely.”

So, if we work – you know, policymakers, researchers and educators – if we work together to increase efforts in AI literacy, both for students and for teachers and administrators, all the stakeholders, then I think people will have some familiarity with what these tools are and how they work. Just like people, hopefully, teachers already have familiarity with computers, the internet, these tools. So, if new things come about, they can adapt to these things.

But with AI because it’s sort of a little bit more foreign and people don’t have a good sense of what’s happening behind the scenes, I think there needs to be more work developing that. And that’s one thing we’re doing right now in my lab with a Ph.D. student of mine: we’re actually trying to survey undergraduate students to see how much literacy they have about these new generative AI tools and what some common misconceptions might be. So that’s the first step, understanding what people already know and what they don’t know, and then working to address those barriers and challenges.

Nia Nixon:

I couldn't agree more. So, policymakers can support efforts for AI literacy. One of the classes I teach is called "21st Century Literacies." And in that class, we cover collaboration, communication, creativity – all of these things that have become increasingly important in the 21st century – not that they weren't important before – but as we've moved from an industrial, individualized sort of environment to more collectivist, collaborative working environments, I think AI literacy is just as, if not more, important than all of those. And I've started to integrate that into the classroom because it's so critical for students and teachers to have some type of foundation to navigate, because I feel like a lot of the flailing that you might see right now in education and AI comes from a lack of education around AI and/or misinformation around it. And so, addressing some of those is going to be great moving forward.

Cara Capuano:

Is there anything that either of you wanted to share that I didn't ask about that you thought, "This is something I want to make sure I bring to this conversation and share with the audience?"

Nia Nixon:

Maybe a closing point is that there's been a lot of discussion around the pros and cons of AI and education, and some people just trying to shut it down initially, or shut down the most recent wave, completely removing it from the classroom. And I don't think that that is a realistic approach or a helpful approach. I think this ties nicely into the AI literacy point: this is not a switch that we can turn off. We are here, for better or for worse. And I think doing rigorous research around a lot of the topics that we discussed today is how we move forward, combined with educating students and teachers, and learning how to use this to our benefit, instead of being fearful of it.

Shayan Doroudi:

One thing I'd like to add is we've been talking so far about AI tools, right? And AI – for practical purposes – how it's going to be used, for better or for worse, in classrooms. But one focus of my research has been AI not only as sort of this practical tool, but as this lens to understand human intelligence and ourselves as people. And that was actually really the quest for developing AI in the early days – it was really focused on developing tools that could help us understand how the mind works from a cognitive science perspective. And so, I think that's sort of been … I wouldn't say completely forgotten. There are still people thinking about that, but I think it's been largely abandoned because AI has become so powerful as a tool that people just focus on it as like, "What can we do with it?"

And the AIs that we've developed have looked very different from people. So, I think because of that, people have just sort of moved away from that. But I think there's value in thinking about how AI can help us understand ourselves better, and this has a lot of educational implications. A lot of those early researchers were interested in, "Well, how can we understand how people learn and then use that to improve education?" And I think there's a lot of opportunities there. With some of the new tools, for example, a lot of people talk about how, "Oh, these tools are amazing! They seem to show aspects of intelligence, but they also have these weird behaviors that are very not human-like." So, by reflecting on these tools – by reflecting on things like ChatGPT – we can think about, "What does that tell us about ourselves as people?"

And how can students engage in experiences with these AIs to understand what makes us distinctly human? In one project we’re trying to get started on this, I’m collaborating with UCI philosophy professor Duncan Pritchard – who was actually a previous guest on this podcast – and we’re thinking about what AI can tell us about intellectual virtues, and how children, or youth, interacting with AI can learn more about the importance of intellectual virtues – which AI, I would say, does not have.

Yes, there’s a whole “Anteater Virtues” project that Professor Pritchard is in charge of. Thank you both so much for joining us today to share your in-depth knowledge about AI and education.

I’m Cara Capuano. Thank you for listening to our conversation. For the latest UCI News, please visit news.uci.edu. The UCI Podcast is a production of Strategic Communications and Public Affairs at the University of California, Irvine. Please subscribe wherever you listen to podcasts.

Four Ways AI Could Revolutionize Education Technology

Forbes Technology Council

Ashok Manoharan, Founder/CTO, FocusLabs.

Artificial intelligence (AI) has a lot of potential to drive innovation in educational technology and to reshape traditional teaching methods.

Last year, the World Economic Forum (WEF) found that teachers often put in over fifty hours a week, with direct student interaction making up less than half of this work. Students are also facing an engagement gap, with 50% of students saying they are "not engaged in what they are learning in school most of the time," according to Gradient Learning research.

AI could impact instructors and students alike with innovations ranging from personalized learning journeys to predictive analytics. There are already several key solutions available in this space, such as Thinkster Math, Jill Watson, Nuance and Congii, among several others.

However, this is just the start of a far larger paradigm shift in education. Here are four ways AI is beginning to revolutionize educational technology.

Assessments

In the past, assessments were more like memory competitions than measures of actual learning. Judgments were based on recall ability, which reduced exams to simple memorization drills and missed important aspects of learning, like how to navigate real-world situations analytically.

Generative AI (GenAI) can encourage a transition from memorization to interaction. For example, AI can allow students to converse directly with a GenAI solution, which could help assess broader aspects of learning, such as the ability to solve problems, depth of knowledge and domain-specific competence, while maintaining objectivity in the judgment outcomes.
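
To make this concrete, here is a minimal sketch of what such a conversational assessment loop might look like. Everything here is an assumption for illustration: `query_model` is a hypothetical stand-in for whatever GenAI service an institution uses, and the rubric dimensions simply mirror the ones named above.

```python
# Hypothetical sketch of rubric-based scoring of a student-GenAI dialogue.
RUBRIC = ["problem solving", "depth of knowledge", "domain-specific competence"]

def query_model(prompt: str) -> str:
    """Placeholder for a call to a generative AI service (assumption)."""
    raise NotImplementedError("Wire this up to your institution's GenAI provider.")

def assess_dialogue(transcript: list[dict]) -> dict[str, float]:
    """Ask the model to score the student's dialogue on each rubric dimension."""
    dialogue = "\n".join(f"{turn['role']}: {turn['text']}" for turn in transcript)
    scores = {}
    for dimension in RUBRIC:
        prompt = (
            f"On a scale from 0.0 to 1.0, score the student's {dimension} "
            f"in the following dialogue. Reply with only the number.\n\n{dialogue}"
        )
        scores[dimension] = float(query_model(prompt))
    return scores
```

In practice, such numeric scores would be spot-checked by an instructor rather than taken at face value.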

Customized AI Mentorship

By engaging students in individualized academic journeys, AI can encourage critical thinking and provide proactive help.

The availability of AI mentors, for example, can provide unceasing support, liberated from timing restrictions. Teachers, therefore, have access to new ways to enhance student learning through individualized tutoring that blends academic instruction with personalized support.

Furthermore, this type of AI can make learning more captivating and accessible to a wider range of students.

Hybrid Learning

Today, hybrid learning has changed the landscape of education by blurring the line between digital and physical learning environments.

This shifting landscape will require a good deal of support for both educators and students. AI can help streamline aspects of this transition by sorting through learner data to customize educational paths, predict academic success and provide immediate feedback.

This could be especially helpful for students who want to experience a conventional educational environment while also acquiring targeted online skills that will make them more competitive in the job market. By understanding a student's needs, AI can help create learning paths that, for instance, include in-person classes for specific subjects while simultaneously allowing the student to begin a web-based certification in graphic design.
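
As a rough illustration of the "predict academic success" piece, the sketch below fits a simple classifier to made-up learner data. The feature names, sample values and pass/fail labels are illustrative assumptions, not a description of any particular product.

```python
# A minimal sketch of predicting academic success from clickstream features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative features per student:
# [logins_per_week, avg_quiz_score, forum_posts, video_minutes]
X = np.array([
    [5, 0.82, 3, 120],
    [1, 0.41, 0, 15],
    [4, 0.77, 5, 90],
    [2, 0.35, 1, 30],
])
y = np.array([1, 0, 1, 0])  # 1 = passed the course, 0 = did not

model = LogisticRegression().fit(X, y)

new_student = np.array([[3, 0.60, 2, 60]])
risk = 1 - model.predict_proba(new_student)[0, 1]
print(f"Estimated risk of falling behind: {risk:.0%}")
# A system could route high-risk students to in-person support and
# low-risk students to self-paced online modules.
```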

Exams With Remote Supervision

High-tech developments can also support remote supervision during online exams, ensuring a secure testing environment without requiring students to travel to a predetermined location.

Remote proctoring is a growing sector that is completely revamping traditional test methodologies. Companies are incorporating AI into these solutions to monitor and flag suspicious behavior.
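
At their simplest, such flags can be rule-based. The sketch below is an illustrative toy, assuming a hypothetical event log and thresholds; real proctoring systems combine far richer signals (gaze, audio, screen activity) with learned models.

```python
# Toy sketch of rule-based flagging of suspicious exam events (assumptions only).
from collections import Counter

# Hypothetical event log: (seconds_into_exam, event_type)
events = [
    (30, "face_visible"), (95, "window_switch"), (96, "window_switch"),
    (140, "face_missing"), (141, "face_missing"), (300, "face_visible"),
]

THRESHOLDS = {"window_switch": 1, "face_missing": 1}  # illustrative limits

def flag_session(events):
    """Return a human-readable flag for each event type over its threshold."""
    counts = Counter(event for _, event in events)
    return [
        f"{event}: {counts[event]} occurrences (threshold {limit})"
        for event, limit in THRESHOLDS.items()
        if counts[event] > limit
    ]

for flag in flag_session(events):
    print("FLAG:", flag)  # routed to a human proctor for review, not auto-failed
```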

How To Implement AI In Educational Settings

The first step in implementing AI in educational settings is determining the precise needs the technology can fulfill, such as improving individualized instruction or expediting administrative duties.

Educators should begin small, using AI technologies that directly support their learning objectives, and then expand as comfort and familiarity grow. Initiatives such as professional development workshops and collaboration with AI-savvy colleagues can help smooth the shift.

Implementing AI also presents a variety of challenges, from technological difficulties to worries about data privacy and the digital divide. The initial setup and integration of AI systems can be difficult for educators, requiring continuous support and training. To guarantee that AI applications remain transparent and fair, you'll also have to work with providers who can show how they've addressed problems like algorithmic bias.

Also, critics fear a decline in interpersonal communication. It's crucial to strike a balance between the indispensable human touch and the application of AI in education. Open discussions about AI's potential and constraints can allay anxieties and create an atmosphere where technology enhances rather than replaces conventional teaching techniques.

By taking thoughtful, informed steps towards AI adoption, educators can leverage its benefits while navigating potential pitfalls, ensuring that technology serves to enhance—not overshadow—the educational experience.
