Open access | Published: 23 September 2021

Comparing learners’ knowledge, behaviors, and attitudes between two instructional modes of computer programming in secondary education

  • Dan Sun 1 ,
  • Fan Ouyang   ORCID: orcid.org/0000-0002-4382-1381 1 ,
  • Yan Li 1 &
  • Caifeng Zhu 2  

International Journal of STEM Education, volume 8, Article number: 54 (2021)


Abstract

Background

Unplugged programming has been shown to be an effective means of fostering learner-centered programming learning. Beyond final tests, learners' programming knowledge, skills, and capacities are primarily demonstrated throughout the programming process, particularly when they encounter challenges and problems. However, few studies examine how learners engage in the programming process and to what extent unplugged programming fosters learning. This research used a quasi-experimental design to investigate two instructional modes in China's secondary education: instructor-directed lecturing and learner-centered unplugged programming. Based on an analytical framework, this research used mixed methods to compare learners' knowledge, behaviors, and attitudes under these two instructional modes.

Results

The research results revealed discrepancies between the two instructional modes. First, learners in the unplugged programming class achieved significantly higher scores on the programming knowledge assessment than learners in the traditional lecturing class. Second, learners in the unplugged programming class scored higher on the computational thinking skills test, particularly on the cooperativity dimension. Next, discrepancies in in-class behaviors showed that learners in the unplugged programming class frequently listened to the instructor's instructions and discussed with peers, while learners in the instructor-directed class frequently listened to the instructor, took notes, and engaged in irrelevant activities. Learners' self-reported attitudes in the unplugged programming class indicated a higher level of confidence than those of learners in the traditional lecturing class. Overall, this research revealed that learner-centered unplugged programming had the potential to improve learners' programming knowledge, behaviors, and attitudes compared with traditional instructor-directed lecturing.

Conclusions

As a feasible and easy-to-use instructional activity in computer science education, unplugged programming is encouraged to be integrated into formal education to increase learners' programming interest, motivation, and learning quality. This quasi-experimental research compared learners' programming knowledge, behaviors, and attitudes under two instructional modes. The results revealed critical discrepancies between the two instructional modes in learners' knowledge gains, in-class behaviors, and changes in attitudes towards programming. Pedagogical and analytical implications are provided for future instructional design and learning analytics in computer programming education.

Introduction

As one strand of science, technology, engineering and mathematics (STEM) education, computer programming has positive influences on advancing learners' computational thinking (CT) skills (Sun et al., 2021a, b), fostering their motivation and engagement (Schnittka et al., 2015), and improving their interest in computer science careers (Chittum et al., 2017). In formal education, instructor-directed lecturing is a widely used instructional mode, through which instructors transmit computer programming knowledge to learners via oral presentations (Wu & Wang, 2017). Although this instructional approach helps learners gain computer programming knowledge, instructors encounter many challenges during actual programming practice, such as how to reduce learners' frustration and failure, how to sustain their programming interest and motivation, and how to eventually improve their programming skills and capacities (Falloon, 2016; Looi et al., 2018; Tom, 2015). To address these challenges, emerging instructional strategies, e.g., unplugged, game-based, or project-based programming, have been used in informal learning to transform instructor-directed lecturing of programming knowledge into pragmatic, learner-centered programming practices (Brackmann et al., 2017; Hosseini et al., 2019; Nurbekova et al., 2020).

Among these practical strategies, unplugged programming is a hands-on programming activity that, without the support of computers or other electronic technologies, contextualizes computational concepts and algorithms through physical or kinesthetic activities (e.g., Alamer et al., 2015; Gouws et al., 2013; Thies & Vahrenhold, 2013). Research argues that unplugged programming activities can simplify computational concepts for learners and, therefore, promote their programming engagement, motivation, and interest (Alamer et al., 2015; Looi et al., 2018). During the programming process, learners usually encounter programming challenges and problems; to solve them, they need to pose and answer questions, share and construct knowledge, and create programming solutions or products through individual learning or peer interaction (Lewis, 2012; Sun et al., 2021a, b; Wu et al., 2019). However, few studies actually examine how learners engage in programming practices from a process-oriented perspective, or to what extent unplugged programming activities foster learners' learning (del Olmo-Muñoz et al., 2020; Grover et al., 2019; Huang & Looi, 2020). The process-oriented perspective focuses on the details of how students coordinate their communications, discourses, and behaviors during actual instruction and learning processes (Pereira et al., 2020; Sun et al., 2021a, b; Wu et al., 2019). Correspondingly, process-oriented analysis stresses micro-level, fine-grained analysis of students' behavioral, cognitive, and metacognitive activities during programming practice, which helps researchers gain a holistic insight into how programming activities progress.

In response to this research gap, this research used a quasi-experimental design supported by mixed methods to implement and investigate learner-centered unplugged programming in China's secondary education and compared the effects of the unplugged programming activities on learners' learning with those of the traditional instructor-directed lecturing mode. Specifically, mixed methods (i.e., video analysis, lag-sequential analysis, statistical analysis, and thematic analysis) were used to analyze and compare learners' gains in programming knowledge, in-class behaviors, and attitudes towards programming between the two instructional modes. Based on the empirical results, this research proposes pedagogical and analytical implications for computer programming education.

Literature review

Grounded in the constructivist perspective, learning is an active, constructive process through which learners construct their own understandings by interacting with peers, resources, and technologies (Papert, 1991). Contextualizing sophisticated computational algorithms is a means to alleviate the difficulties of learners' conceptual understanding and, therefore, to stimulate learners' active learning and construction of programming knowledge (Bransford et al., 2000; Falloon, 2016). A major approach to contextualizing programming is unplugged programming, which exposes learners to computational concepts and algorithms without the support of computers (Bell et al., 2009). In hands-on, unplugged activities, learners conceptually engage in understanding relevant programming knowledge through a series of contextualized materials (e.g., logic games, cards, strings, or physical actions). Unplugged programming is mostly used in informal learning contexts to engage novice learners in computer programming (Taub et al., 2012; Thies & Vahrenhold, 2013). During the programming process, learners' programming knowledge, skills, and capacities are demonstrated, particularly when they encounter challenges and problems that require a deliberate problem-solving and meaning-making process (Wu et al., 2019). In the K-12 formal educational context, the main instructional approach to computer programming is still instructor-directed lecturing, sometimes followed by learners' programming practice on computers (Panwong & Kemavuthanon, 2014).

Empirical research has indicated that, compared with traditional instructor-directed lecturing, unplugged programming has the potential to foster learners' programming knowledge, active engagement, and positive attitudes. First, unplugged programming can improve learners' computational thinking (CT) skills and programming knowledge acquisition. For example, Alamer et al. (2015) reported that unplugged activities succeeded in simplifying key programming concepts to shape and deepen learners' understanding of programming knowledge. Ballard and Haroldson (2021) also summarized the effects of using non-programming, unplugged approaches to teach programming skills and concepts (e.g., abstraction, generalization, decomposition, algorithmic thinking, debugging). Second, research finds that unplugged programming has the potential to improve learners' active engagement. For example, through video analysis, Looi et al. (2018) found that unplugged activities helped all group learners engage in explorations of sorting algorithms, which resulted in good programming performance on this algorithm. Tsarava et al. (2018) designed board games to increase children's motivation for programming learning and found that unplugged activities helped keep children engaged in the programming game. Third, a series of studies has examined the benefits of unplugged programming for promoting learners' positive attitudes. For example, Hermans and Aivaloglou (2017) found that, compared with learners starting in Scratch, learners who started with unplugged lessons were more confident in their capacity to understand programming concepts. Mano et al. (2010) designed in-class unplugged programming activities and found an improvement in learners' interest in computing. Overall, unplugged programming activities have the potential to promote learners' programming knowledge, programming engagement, and positive attitudes towards programming.

Although relevant studies argue that unplugged programming activities can foster learners' active engagement, few studies actually examine how learners engage in programming practices from a process-oriented perspective or to what extent unplugged programming activities foster learners' learning. Most studies focus on summative assessment of learners' programming knowledge acquisition, improvements in CT skills, or self-reported perceptions of programming. For example, Brackmann et al. (2017) designed a quasi-experimental study to examine the effectiveness of unplugged activities on the development of CT skills in primary schools and found that learners who took part in the unplugged activities enhanced their CT skills significantly compared with their peers in control groups. Gardeli and Vosinakis (2017) continuously observed learners' individual and group behaviors during unplugged visual programming and reported that the unplugged activity was an engaging and collaborative approach that improved learners' satisfaction and enjoyment. Torres-Torres et al. (2019) used the instructor's informal observations and found that the unplugged programming class managed to build routes of algorithm learning and achieve a high level of complexity in the code. Saxena et al. (2020) used field notes to record learner performances and interactions as well as the instructor's instructional practices during unplugged and plugged activities, in order to provide learners with concrete guidance and support for subsequent programming activities. Taken together, most of these studies use summative assessments or informal observations to examine learners' programming skills and engagement levels, but they do not examine learners' engagement during programming practice from a process-oriented perspective. Since programming requires a deliberate problem-solving, meaning-making, and knowledge-construction process (Lewis, 2012; Sun et al., 2021a, b; Wu et al., 2019), it is beneficial to empirically examine how unplugged programming influences learners' programming from a process-oriented perspective, to provide a holistic picture of learners' programming behaviors, communications, and interactions (Sun et al., 2021a, b; Wu et al., 2019).

Multiple analytical methods have been used in previous empirical research to analyze and demonstrate varied aspects of the programming process. Berland et al. (2013) used learning analytics and data mining to examine details of how learners progressed from exploration and tinkering to refinement during the learning process. Results showed that learners in the exploration period produced more low-quality programs, while the other two periods had much higher-quality program states. Wu et al. (2019) used a quantitative ethnography approach to analyze collaborative programming between a high-performing and a low-performing team. Results indicated that the high-performing team exhibited systematic CT skills, whereas the low-performing team's CT skills were characterized by tinkering or guessing. Sun et al. (2021a, b) used mixed methods (e.g., click-stream analysis, lag-sequential analysis, quantitative content analysis) to analyze three contrasting pairs' collaborative programming behaviors, discourses, and perceptions. Results characterized the high-, medium-, and low-ranked pairs by distinct profiles on the social-interactive, cognitive-engagement, and final-performance dimensions. These studies indicate that multiple methods can be used to conduct process-oriented analysis of computer programming, which is beneficial for demonstrating varied dimensions of the learning process. As a complement, traditional summative assessment (e.g., final tests) can help reveal learners' direct performance in computer programming knowledge or skills. Following this analytical trend, this research uses a mixed-methods approach to reveal the effectiveness of unplugged programming from both summative and process-oriented perspectives.

To address these research and practice gaps, this quasi-experimental research applied two instructional modes, namely, instructor-directed lecturing of programming and learner-centered unplugged programming, in China's secondary education to improve the quality of computer programming education. Furthermore, this research used mixed methods to analyze and compare the effects of the two instructional modes on novice learners' programming in order to inform the instructional design of computer programming. The effects on learners' computer programming were examined from summative and process-oriented perspectives. Specifically, from the summative perspective, this research investigated learners' programming knowledge gains and changes in attitudes before and after the two instructional modes. From the process-oriented perspective, this research examined learners' in-class behaviors during instruction and learning activities under the two instructional modes. Mixed methods were used, including statistical analyses of knowledge test and survey data, sequential analysis of in-class video data, and qualitative analysis of interview data. Based on the results, this research proposes pedagogical and analytical implications for future instructional design and learning analytics of computer programming.

Research methodology

Research purposes and questions

The research purpose was to compare the effects of two instructional modes on learners' programming learning, namely, instruction with traditional instructor-directed lecturing (IDL) and instruction with learner-centered unplugged programming (UPP). We compared differences in learners' knowledge gains, in-class behaviors, and changes in attitudes between the two instructional modes. There were three research questions:

RQ 1. How did the impact of UPP on learners’ computer programming knowledge and skills differ from the impact of IDL?

RQ 2. How did the impact of UPP on learners’ learning behaviors differ from the impact of IDL?

RQ 3. How did the impact of UPP on learners’ positive attitudes towards programming differ from the impact of IDL?

The research analytical framework

This research proposed an analytical framework to investigate the differences between instructor-directed lecturing of programming (IDL) and learner-centered unplugged programming (UPP) from the process and summative perspectives (see Fig. 1). From the process assessment perspective, research can collect behavioral data, including in-class behaviors (classroom video recordings of in-class programming activities) and computer operation behaviors (computer screen recordings of learners' programming operations). Classroom video analysis and click-stream analysis can be applied to these behavior data, respectively. In addition, classroom audio recordings can capture in-class conversations between learners and the instructor, to which quantitative content analysis, lag-sequential analysis, and ethnographic interpretation can be applied to examine discourse patterns and characteristics. From the summative assessment perspective, programming knowledge data (e.g., pre- and post-tests) and final products (e.g., programming projects) can be collected as performance data, and statistics can be used to examine the significance of performance changes. Moreover, as shown in Additional files 1 and 2, learner attitude data (from pre- and post-surveys or interviews) can be used to further understand learners' perceptions of programming. Taken together, this analytical framework integrates process and summative assessments for computer programming education.

Fig. 1 Analytical framework

Research context, participants, and instructional procedures

The research context is a compulsory course titled "Creative Programming Algorithms" offered at a junior high school in eastern China during Spring 2020. During the COVID-19 period, learners were not allowed to access the computer labs; instead, the classes were offered in a regular classroom with interactive whiteboards. This research used a quasi-experimental design to investigate learners' knowledge gains, in-class behaviors, and attitudinal changes under the control condition (instructor-directed lecturing of programming; IDL) and the experimental condition (learner-centered unplugged programming; UPP).

There were 31 learners (16 female; 15 male) in the control IDL class and 32 learners (19 female; 13 male) in the experimental UPP class. The control and experimental classes were randomly assigned, and learners in the two classes were not informed of the different treatments. Both classes were taught by the same instructor (the fourth author), who maintained the same teaching style under both conditions, offered the same instructional materials to learners, and used the same teaching guidance for each class, except for the use of unplugged programming activities in the experimental condition. The instructor, with guidance and support from the research team, designed three phases (six instructional sessions; each session lasted 45 min) in this course. The first four sessions (Phase I and Phase II) introduced the basic concepts of programming, including binary conversion, sequence, selection, and loops; the last two sessions (Phase III) introduced two advanced algorithms (i.e., sorting and searching). The design of the instructional sessions was based on computer literacy development programs (CS Unplugged, 2020) and the book Computer Science Unplugged: Realizing Computing through Games and Puzzles (Bell et al., 2012). The instructor modified the instructional content and procedures to suit local learners' programming capacities. For instance, instead of a sorting network, the instructor introduced the sorting algorithm with bubble sort activities and kept each activity within 20 min because of the class time limit. This content is required to be taught with Python in China's high schools according to the Information Technology Curricula for China's high schools (MOE, 2020). During the instruction and learning processes, learners in the IDL class received instructor-directed lecturing with oral presentations to learn programming concepts and algorithms (see Fig. 2a). In the UPP class, unplugged programming activities were interspersed with the instructor's lecturing; learners experienced a 20-min unplugged programming activity in each session. For example, in the bubble sorting activity, learners held different paper cards, stood in a row in random order, and swapped positions with peers to produce a correct sorting (see Fig. 2b).

Fig. 2 Control class: IDL (a) and the experimental class: UPP (b)
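
In the bubble sorting activity described above, learners physically enacted the algorithm by comparing and swapping adjacent cards. As a point of reference, the following minimal Python sketch (our illustration, not part of the course materials; Python is the language mandated by the curriculum cited above) shows the same algorithm; the card values are hypothetical.

```python
def bubble_sort(cards):
    """Sort card values in ascending order by repeatedly swapping adjacent out-of-order pairs."""
    cards = list(cards)                  # work on a copy of the input
    n = len(cards)
    for i in range(n - 1):               # each pass "bubbles" the largest remaining value to the end
        for j in range(n - 1 - i):
            if cards[j] > cards[j + 1]:  # adjacent learners compare their cards...
                cards[j], cards[j + 1] = cards[j + 1], cards[j]  # ...and swap positions if out of order
    return cards

# Hypothetical card values held by learners standing in a random row
print(bubble_sort([5, 2, 9, 1, 7]))      # [1, 2, 5, 7, 9]
```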

Data collection and analysis approaches

This research collected and analyzed data in four ways. First, we conducted pre- and post-tests of learners' computer programming knowledge and skills. The knowledge test included 12 questions, comprising 10 multiple-choice questions on programming concepts and 2 fill-in-the-blank questions on programming algorithms. Adapted from the Computational Thinking Scale (CTS), the test of learners' computer programming skills covered the dimensions of creativity, algorithmic thinking, cooperativity, critical thinking, and problem solving. The CTS survey contained 5 dimensions and 29 measurement indicators (Korkmaz et al., 2017). An independent t test was applied to compare the post-test CT skills between the two instructional modes.
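
To illustrate the statistical comparison described above, the sketch below runs an independent-samples t test on two hypothetical arrays of post-test scores (the study's actual score data are not reproduced here); it assumes SciPy is available.

```python
import numpy as np
from scipy import stats

# Hypothetical post-test knowledge scores for the two classes (not the study's data)
idl_scores = np.array([65, 72, 58, 80, 70, 61, 75, 68])
upp_scores = np.array([82, 88, 79, 91, 85, 77, 84, 86])

# Independent-samples t test, as used to compare the IDL and UPP classes
t_stat, p_value = stats.ttest_ind(idl_scores, upp_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```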

Second, we recorded videos of the two classes (without audio) to capture learners' behaviors. We deliberately chose the last two class sessions (i.e., Phase III: the algorithm learning) as the video data for the current research (45 min/class; a total of 180 min). The reason we chose the last two classes to collect behavior data was twofold. First, those two classes focused on two advanced algorithms that could better demonstrate programming capacities. Second, learners in the experimental class had become more familiar with the procedures of the unplugged programming activities, such that they were more engaged in those two sessions, as informal observation indicated. Video analysis was used to code learners' in-class behaviors that emerged during the learning and instruction processes (Kersting, 2008). The video analysis followed an iterative coding process based on a previously validated coding framework (see Sun et al., 2021a, b). Two coders first separately watched the video recordings and wrote descriptive notes in Excel files to identify initial codes of learners' behaviors. Then, the two coders held multiple meetings to discuss behaviors with conflicting codes and double-checked the codes to reach agreement on the final coding framework (see Table 1). Finally, the two coders independently coded the data again in chronological order based on the coding framework, marking learner behaviors every 5 s, and cross-checked each other's coding results. The two coders reached an inter-rater reliability with a Cohen's kappa of 0.801.
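
For reference, inter-rater reliability of the kind reported above can be computed from the two coders' parallel sequences of interval codes. The sketch below uses scikit-learn's cohen_kappa_score on short, hypothetical 5-s interval codes; the behavior labels follow the coding framework reported later (e.g., LtI, TN, DwP, IB).

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical behavior codes assigned by each coder to the same 5-second intervals
coder_1 = ["LtI", "LtI", "TN", "DwP", "IB", "LtI", "DwP", "TN"]
coder_2 = ["LtI", "LtI", "TN", "DwP", "LtI", "LtI", "DwP", "TN"]

# Cohen's kappa corrects the raw agreement rate for agreement expected by chance
kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's kappa = {kappa:.3f}")
```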

Furthermore, based on the video coding results, lag-sequential analysis (LsA) was used to analyze learners' behavioral patterns (Faraone & Dorfman, 1987), including the transitional frequencies between two behaviors and visualized network representations for the two instructional modes. There are five LsA measures: (1) transitional frequency (how often a particular transition occurred for a specified sequential interval); (2) expected transitional frequency (the expected number of times a transition would occur under the null hypothesis of independence, or no relation, between the codes); (3) transitional probability (the likelihood of occurrence of event B given that event A occurs); (4) adjusted residual z scores (the statistical significance of particular transitions); and (5) Yule's Q (a standardized measure ranging from −1 to +1 denoting strength of association) (Chen et al., 2017). Yule's Q was adopted to represent the strength of transitional association because it controls for base numbers of contributions and is descriptively useful (with a range from −1 to +1 and zero indicating no association). Moreover, using a previous network visualization method (Chen et al., 2017), this research presented the LsA results as visualized networks, in which node size represents the frequency of a behavior code, edge width represents the transitional Yule's Q value, and the transitional direction is read from the node sharing the edge's color to the other node.
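
To clarify how Yule's Q summarizes the strength of a lag-1 transition, the following sketch (our own illustration, not the authors' analysis code) counts consecutive code pairs in a behavior sequence, builds the 2 × 2 table for a chosen transition A → B, and computes Q = (ad − bc)/(ad + bc); the code sequence is hypothetical.

```python
def yules_q(sequence, a, b):
    """Yule's Q for the lag-1 transition a -> b in a coded behavior sequence."""
    pairs = list(zip(sequence, sequence[1:]))                 # consecutive (antecedent, consequent) pairs
    n11 = sum(1 for x, y in pairs if x == a and y == b)       # a followed by b
    n12 = sum(1 for x, y in pairs if x == a and y != b)       # a followed by another code
    n21 = sum(1 for x, y in pairs if x != a and y == b)       # another code followed by b
    n22 = sum(1 for x, y in pairs if x != a and y != b)       # neither
    denom = n11 * n22 + n12 * n21
    return (n11 * n22 - n12 * n21) / denom if denom else 0.0  # ranges from -1 to +1; 0 = no association

# Hypothetical 5-second behavior codes from one learner
codes = ["LtI", "DwP", "LtI", "DwP", "TN", "LtI", "DwP", "IB", "LtI", "TN"]
print(yules_q(codes, "LtI", "DwP"))
```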

Regarding the differences in attitudes, pre- and post-surveys were conducted at the beginning and the end of the classes. The survey was adapted from the Georgia Computes! project (Bruckman et al., 2009) and the Computing Attitudes Survey, both validated in previous research (Dorn & Tew, 2015; Tew et al., 2012). The survey included five 5-point Likert-scale questions ranging from 1 (strongly disagree) to 5 (strongly agree), as well as short open-ended questions (see Appendix A). Independent t tests and descriptive analysis were used to reveal differences in learners' confidence, enjoyment, and future interest between the two instructional modes. Finally, we invited learners to a semi-structured interview at the end of the class. The interview focused on learners' recall of the knowledge they learned from the class, the most difficult or easiest parts of the class, and their self-perceptions and future plans regarding computer programming (see Appendix B). Thematic analysis was used to analyze the interview data (Cohen et al., 2013). A six-step sequence was used to identify themes: (1) formatting the text data, (2) coding the data separately by two coders, (3) recording specific coded segments of data, (4) comparing segments with the same codes, (5) integrating the codes, and (6) double-checking the final coded themes.

Research results

Computer programming knowledge and skills

We present the results on learners' computer programming knowledge and skills along two dimensions, namely, the post-test scores and the score differences under the two modes (see Table 2). Regarding pre-test programming knowledge at the outset of the research, no statistically significant difference (t(61) = 0.99, p = 0.32) was found between the two instructional modes (IDL: M = 55.51, SD = 10.91; UPP: M = 58.28, SD = 11.11). After the intervention, learners in the IDL class had an average score of 68.70 (SD = 24.14), and learners in the UPP class had an average score of 83.78 (SD = 10.33). A t test indicated a statistically significant difference between the two instructional modes (t(61) = −3.20, p = 0.003) (see Table 2). Regarding the differences in knowledge scores before and after the intervention, a significant difference (t(61) = −2.46, p = 0.018) was found between the two modes (IDL: M = 13.19, SD = 24.74; UPP: M = 25.50, SD = 12.94). These results indicated that learners in the UPP class achieved significantly greater improvement on the knowledge assessment than their peers in the IDL class after the intervention. Moreover, regarding the scores of the CT skills, there were no significant differences between the IDL class (M = 3.94, SD = 0.88) and the UPP class (M = 3.92, SD = 0.94) (t(61) = −0.23, p = 0.82) before the intervention. After the intervention, the independent t test results for learners' post-test CT skills indicated no statistically significant difference between the two instructional modes (t(62) = −0.26, p = 0.253), although the UPP class performed better than the IDL class overall (IDL: M = 4.07, SD = 0.45; UPP: M = 4.21, SD = 0.53). One significant difference was found on the cooperativity sub-item (t(62) = −2.11, p = 0.042): the UPP class outperformed the IDL class (IDL: M = 3.75, SD = 0.62; UPP: M = 4.09, SD = 0.66). Regarding the differences in programming skill scores, no significant difference (t(61) = −1.30, p = 0.198) was found between the two modes (IDL: M = 0.15, SD = 0.54; UPP: M = 0.38, SD = 0.86). Overall, compared with the instructor-directed lecturing class, the unplugged programming class showed greater improvement in programming knowledge and skills after the intervention.

In-class behavioral patterns

Learners' behavioral patterns showed both similarities and discrepancies between the two instructional modes. First, in both classes the most frequent behavior was listening to the instructor (LtI), followed by either discussing with a peer (DwP) or taking notes (TN). In the IDL class, the most frequent behaviors were listening to the instructor (LtI; frequency = 983), taking notes (TN; frequency = 831), and discussing with a peer (DwP; frequency = 653). In comparison, learners in the IDL class had many more irrelevant behaviors (IB; frequency = 441), such as chatting or playing, than the UPP class (IB; frequency = 187) (see Fig. 3a). The most frequent behaviors in the UPP class were listening to the instructor (LtI; frequency = 757), discussing with a peer (DwP; frequency = 736), and taking notes (TN; frequency = 509) (see Fig. 3b). Second, in the IDL class, the strongest association was IB → DwP (Yule's Q = 0.84), followed by AsQ → LtI (Yule's Q = 0.60) and AnQ → LtI (Yule's Q = 0.55) (see Table 3). These results indicate that learners in the IDL class frequently engaged in irrelevant behaviors and then transitioned to discussing with a partner and listening to the instructor. In the UPP class, the strongest association was LtI → DwP (Yule's Q = 0.77), followed by AnQ → LtI (Yule's Q = 0.72) and LtI → AnQ (Yule's Q = 0.67). These results reveal that learners in the UPP class spent most of their time listening to the instructor and then discussing with their partners. Taken together, the UPP class appeared to be more engaged (more LtI and DwP behaviors, fewer IB behaviors) during the instructional process, whereas the IDL class concentrated more on listening to the instructor (LtI) and taking notes (TN) but was more easily distracted by irrelevant activities (IB) during class.

Fig. 3 Transitional network representations of learners' behaviors in the two instructional modes. A node represents a behavior code; node size represents the frequency of the code; edge width represents the transitional value (Yule's Q); and the transitional direction is read from the node sharing the line's color to the node with a different color
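
A transitional network of this kind can be sketched, for example, with NetworkX; in the sketch below the LtI, DwP, and TN frequencies and the three Yule's Q values come from the UPP results reported above, while the AnQ frequency and the layout are illustrative placeholders.

```python
import matplotlib.pyplot as plt
import networkx as nx

# Behavior frequencies in the UPP class (AnQ is a placeholder value for illustration)
freq = {"LtI": 757, "DwP": 736, "TN": 509, "AnQ": 300}

# Strongest UPP transitions, weighted by Yule's Q
transitions = [("LtI", "DwP", 0.77), ("AnQ", "LtI", 0.72), ("LtI", "AnQ", 0.67)]

G = nx.DiGraph()
G.add_nodes_from(freq)
G.add_weighted_edges_from(transitions)

pos = nx.circular_layout(G)
node_sizes = [freq[n] for n in G.nodes()]                    # node size ~ behavior frequency
edge_widths = [4 * G[u][v]["weight"] for u, v in G.edges()]  # edge width ~ Yule's Q
nx.draw(G, pos, with_labels=True, node_size=node_sizes, width=edge_widths, font_size=8)
plt.show()
```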

Attitudinal findings

We examined pre-test scores, learning gains, and post-test scores for both classes. First, learners in the two modes had no significant difference in the pre-test on the three dimensions (confidence: p = 0.145; enjoyment: p = 0.491; future interest: p = 0.872). Second, learners in both classes showed improvement on all three dimensions (see Table 4), including an increase in confidence (IDL: 0.35; UPP: 0.70), an increase in enjoyment (IDL: 0.10; UPP: 0.03), and an increase in future interest (IDL: 0.16; UPP: 0.34). Regarding the differences before and after the intervention, no significant difference (t(61) = −1.43, p = 0.156) was found for confidence (IDL: M = 0.35, SD = 0.81; UPP: M = 0.70, SD = 0.44). No significant difference (t(61) = 0.38, p = 0.703) was found for enjoyment (IDL: M = 0.10, SD = 0.64; UPP: M = 0.03, SD = 0.59). In addition, no significant difference (t(61) = −0.61, p = 0.547) was found for future interest (IDL: M = 0.16, SD = 0.19; UPP: M = 0.34, SD = 0.38). Third, a significant difference was found in the post-test confidence score (t(61) = −1.47, p = 0.010): learners in the UPP class (M = 4.11; SD = 1.03) were more confident than learners in the IDL class (M = 3.38; SD = 1.05) (see Fig. 4a). Although the UPP class (M = 4.24; SD = 0.97) had a higher enjoyment score than the IDL class (M = 4.13; SD = 1.15), there was no statistically significant difference between the two instructional modes (t(61) = −0.57, p = 0.492) (see Fig. 4b). There was also no significant difference in future interest (t(61) = −0.94, p = 0.324), although learners in the UPP class (M = 4.00; SD = 1.03) had a higher score than learners in the IDL class (M = 3.94; SD = 1.03) (see Fig. 4c). Overall, the UPP class had a more positive attitude towards computer programming than the IDL class.

Fig. 4 Scores of learners' confidence (a), enjoyment (b), and future interest (c) in the two instructional modes

Qualitative analysis of interview data

Three themes emerged from the thematic analysis of learners' interview data, namely, recall of programming knowledge, feelings about the learning experience, and attitudes towards programming (see Table 5). The first theme revealed differences in the acquisition of programming knowledge and skills between the two instructional modes. Eighteen of the 31 learners in the IDL class mentioned that it was hard for them to recall the contents of the class, and 4 learners said that they were easily confused by the divergent contents of each class. Huang said, "I thought it was ok, but the technical terms and calculation methods of computers may be too difficult for me, and I was often confused by different rules." Ye mentioned, "I have some impressions of what I have learned in this course, but I didn't master the rules and methods very well from the class, because I don't have a chance to consolidate them after class." As for the UPP class, 20 of the 32 learners mentioned that they could remember most of the content of each class, and they thought the unplugged activities improved their higher-order thinking ability. For example, Zhang said, "I could recall most of the class content, such as sorting, searching…. What impressed me most was the activity of moving the black and white blocks to find the correct sequence… activities like these made me remember the algorithm better than just sitting and listening to the instructor." Only 3 learners in the UPP class thought it was difficult for them to master the instructional content through unplugged activities. Liu said, "I was attracted to the unplugged activities during the class, but sometimes I found it hard to recall the corresponding programming concepts." Overall, learners in the UPP class had a better understanding of programming content and concepts than those in the IDL class.

The second theme revealed differences in learners' feelings about the learning experience between the two instructional modes. Twenty-two of the 31 learners in the IDL class referred to a low level of participation in the class, as Yang said: "There is nothing special about this course, the learning experience was poor, since we did not have a chance to practice the algorithm by ourselves or through a computer." Two learners in the IDL class thought the class was interesting; Huang said, "I was interested in the class because I was attracted to different algorithms like bubble sort." By contrast, many more learners (N = 26) in the UPP class described the unplugged programming as an interactive and interesting learning experience. Su mentioned: "…we had a lot of opportunities to join in the programming activities during class, which promoted our concentration and engagement." However, three learners in the UPP class mentioned participation issues during the unplugged activities, as Chen said: "…sometimes it was difficult for me to get the idea quickly for the unplugged activities, so I had to follow others in my group." Overall, the UPP class appeared to be more interactive and engaging than the IDL class.

The third theme concerned learners' attitudes towards computer programming in the two instructional modes. Fifteen learners in the IDL class expressed interest in programming, but they appeared to be more concerned about the difficulty level of the algorithms relative to their mathematical abilities. Sun said, "I thought this course was fine, but the course seemed to have something to do with the mathematics ability. I could use some basic knowledge to solve problems… but when it got harder and deeper, I was not able to handle it." Nine learners in the IDL class thought the class improved their attitudes towards programming, and four learners mentioned that the programming class was beneficial to other subjects that required computational thinking ability. Most of the learners (N = 27) in the UPP class mentioned that learning through unplugged programming activities could improve their learning attitudes, and 10 of them mentioned that the programming class could improve their performance in other subjects, especially mathematics, which was consistent with previous research (e.g., Century et al., 2020). For example, Wang responded, "Some knowledge within the unplugged programming activities was connected with our mathematics course, such as sequence… I think it is quite suitable for me." Huang said, "I might not be majoring in computer science in the future, but I think the profession I choose in the future will involve computer science knowledge, so I think it was worth learning." A few learners (N = 5) expressed concern about the difficulty of the algorithms. Taken together, UPP seemed to offer opportunities to improve learners' attitudes and alleviate their concerns about computer programming. Overall, the interview data showed that learners in the UPP class were more confident in mastering computer knowledge and skills, more engaged during the classes, and had more positive feelings towards programming.

Discussion

As one area of STEM education, computer programming education is shifting from instructor-directed lecturing to learner-centered instruction (such as unplugged and game-based programming) to foster learners' computational thinking skills, learning motivation and interest, and programming engagement (Koretsky et al., 2018; Looi et al., 2018; Tekkumru-Kisa & Stein, 2017). This research used a quasi-experimental design to apply two instructional modes, namely, instructor-directed lecturing and learner-centered unplugged programming, to foster computer programming in China's secondary education. Furthermore, this research compared the effects of the two instructional modes on novice learners' programming, including knowledge gains, in-class behaviors, and attitudinal changes. The research results revealed discrepancies between the two instructional modes. First, learners in the unplugged programming class achieved significantly higher scores on the knowledge tests than learners in the traditional lecturing class. These results echo Grover et al.'s (2019) research, which found that unplugged programming activities deepened novice learners' understanding of programming concepts. Consistent with previous research results (Hsu & Liang, 2021), learners in the unplugged programming class achieved higher scores on computational thinking skills than the traditional lecturing class, particularly on the cooperativity dimension. Together, these results indicate that learners benefited from unplugged programming in terms of both knowledge gains and computational thinking skills. Next, discrepancies in in-class behaviors showed that the typical behaviors in the unplugged programming class were listening to the instructor's lectures and discussing with peers during unplugged programming activities, while learners in the instructor-directed lecturing class frequently listened to the instructor, took notes, and engaged in irrelevant behaviors. Consistent with previous research (e.g., Ballard & Haroldson, 2021; Huang & Looi, 2020), unplugged programming activities reduced irrelevant in-class behaviors, promoted peer discussions, and facilitated students' problem-solving processes. The attitude results showed a significant difference on the confidence dimension between the two instructional modes: learners in the unplugged programming class self-reported a higher level of confidence than learners in the traditional class. Qualitative analysis of the interview data also confirmed these quantitative results. Echoing previous studies (Brackmann et al., 2017; del Olmo-Muñoz et al., 2020; Price & Barnes, 2015), this research revealed that learner-centered unplugged programming has the potential to improve learners' programming knowledge, behaviors, and attitudes compared with the traditional instructor-directed lecturing mode.

Based on the results, this research proposes pedagogical and analytical implications for future instructional design and learning analytics of unplugged programming. First, on the pedagogical level, instructors should consider integrating unplugged programming activities into daily instruction for novice learners, with the aim of providing conceptual contextualization, material supports, and peer interaction opportunities (Alamer et al., 2015). Our results showed that, compared with the traditional lecturing class, learners in the unplugged programming class seemed to be more attracted to the instructional content and more concentrated on learning, with fewer irrelevant behaviors. Learners in the unplugged programming class engaged more in peer discussion, questioning, and answering, which is critical for improving cognitive quality during programming (Lu et al., 2017). Our results also revealed that learners in the instructor-directed lecturing mode mentioned two main barriers that might lead to difficulties in knowledge acquisition: insufficient learning time and a lack of opportunity for programming practice. Integrating unplugged programming activities could help address these challenges, since such hands-on activities bring more opportunities for learners to engage in actual programming practice. In this way, instructors can deliver programming knowledge and skills through pragmatic practices, which, in turn, facilitate learners' questioning, thinking, and reflection on programming (Huang & Looi, 2020). Learner agency can also be promoted through unplugged programming practices to increase learners' intentionality and initiative in learning (Bandura, 2001). Overall, instructors are encouraged to integrate unplugged programming activities into daily instruction to increase peer interaction and collaboration opportunities, to maintain learners' motivation and interest in programming, and to increase the overall quality of programming learning.

On the analytical level, there is currently a trend of applying mixed methods (e.g., click-stream analysis, behavior sequential analysis, statistical analysis) to conduct process-oriented analytics of computer programming (e.g., Pereira et al., 2020; Sun et al., 2021a, b; Wu et al., 2019). Although final performance is usually the main focus in education (Zhong et al., 2016), the process-oriented perspective highlights the importance of using multiple learning analytics to evaluate programming and emphasizes promoting learners' programming quality through pragmatic practices. As the analytical framework indicates (see Fig. 1), process-oriented and summative assessment complement each other to provide a holistic insight into learners' programming processes and performances; with the support of an integrative assessment, researchers can better understand the programming phenomenon and the underlying factors that may influence the programming process. Specifically, findings from pre- and post-tests of computer programming knowledge and skills provide a general description of learners' improvement before and after the intervention; network representations reveal process-oriented behavioral transitions during the instructional process; and qualitative interview analysis reveals learners' in-depth perceptions of programming after the intervention. Moreover, mixed methods provide a broader view of the computer programming phenomenon under investigation, clarify and answer research questions from varied perspectives, enhance the validity of the research findings, and increase the capacity to cross-check one data set against another (Grbich, 2013). However, due to technical restrictions, we were not able to capture learners' in-class behaviors and their communicative discourses synchronously, so we could not conduct a more integrated microanalysis of the moment-to-moment details of how learners coordinate their communications, behaviors, and movements during the programming process (Stahl, 2009). Multimodal learning analytics could be integrated into future research to synchronize audio discourse data, video recording data, facial expressions, and eye-tracking movements to better reveal programming learning patterns (e.g., Chevalier et al., 2020; Sun & Hsu, 2019; Zatarain Cabada et al., 2018). Overall, based on the empirical results, summative and process-oriented instructional design and analysis are promoted as complementary approaches, to provide a more holistic, multilevel, multidimensional analysis of unplugged programming processes.

Programming education focuses on cultivating learners' higher-order thinking abilities (e.g., computational thinking and logical thinking), which are fundamental skills that modern learners should possess (Stehle & Peters-Burton, 2019). The unplugged programming strategy can be easily integrated into various types of computer programming classes, which, as this research demonstrates, is beneficial for improving learners' knowledge gains, classroom learning behaviors, and positive attitudes and motivations towards programming. Unlike learning professional programming languages (e.g., C, Java, Python), the instructional mode of unplugged programming makes programming knowledge accessible to novice learners with different backgrounds, serves as a basis for learners to make further explorations, and enhances learners' higher-order cognitive abilities and computational thinking capacities (Bell & Vahrenhold, 2018; Thies & Vahrenhold, 2013). As an alternative form of formal computer programming education, unplugged programming has been designed and implemented in China and other countries around the world and has proved to be a flexible, feasible form for a wide range of learners to learn computer programming (Huang & Looi, 2020). In formal and informal learning, instructors can integrate various unplugged programming strategies into in-class instruction to promote learners' learning efficiency and further expand the coverage of programming education (Looi et al., 2018). Since instructor-directed lecturing is the main instructional mode of computer programming in formal educational contexts in China and many other countries (Panwong & Kemavuthanon, 2014; Wu & Wang, 2017), instructors might find it hard to integrate unplugged activities into their daily classes. Moreover, instructors might meet other difficulties during the implementation of unplugged programming activities, including designing unplugged activities that illustrate computing knowledge and content, suit learners with divergent levels of prior knowledge and skills, and allocate class time between unplugged activities and other instructional lectures (Taub et al., 2009; Torres-Torres et al., 2019). To deal with these challenges, instructors should carefully identify the relationships between unplugged programming activities and central programming concepts and algorithms when designing and preparing lesson plans, course materials, and programming activities (Brackmann et al., 2017; Taub et al., 2012). In addition, this research suggests that instructors should take learners' pre-existing knowledge and skill levels into consideration when implementing unplugged programming activities to achieve learner-centered programming practice. Taken together, since its effect has been validated by educational research, unplugged programming, as a computer-science-for-all strategy in formal education, has the potential to bring practical and pragmatic benefits to formal programming education.

Conclusions, limitations, and future directions

Computer science education plays an important role in STEM education in fostering learner-centered learning. As a feasible and easy-to-use instructional activity in computer science education, unplugged programming is encouraged to be integrated into formal education to transform education from instructor-directed lecturing to learner-centered learning, with the aim of increasing learners' learning interest and motivation (Alamer et al., 2015; Looi et al., 2018; Sun et al., 2021a, b). This quasi-experimental research compared learners' programming knowledge, behaviors, and attitudes under two instructional modes, namely, instructor-directed lecturing and learner-centered unplugged programming, in China's secondary education. The results revealed critical discrepancies between the two instructional modes in learners' knowledge gains, classroom learning behaviors, and changes in attitudes towards programming.

A major limitation of this research is the relatively short learning period. This research chose the last two sessions as the data source for the process-oriented behavioral analysis, which might cause selection bias to some extent, since learners showed the highest level of engagement in this period. Therefore, future research should expand the research duration, for example by collecting data from the whole instruction and learning process. Another limitation is the possibility of a Hawthorne effect due to the instructor's enthusiasm and attention towards the treatment class (Chia & Lim, 2020). To reduce this bias, future research should extend the sample size to further validate the proposed implications. Furthermore, as the proposed analytical framework suggests, this research investigated process and performance data from behavioral, summative, and attitudinal perspectives. Following the proposed analytical framework, multimodal learning analytics (MLA) has the potential to guide better process-oriented analysis for discovering frequent patterns of behaviors, gestures, emotions, and communications during instruction and learning processes (Ochoa, 2017). In computer programming research, MLA can collect and analyze multimodal data (e.g., audio/video recordings, click-stream recordings, facial expressions, movements and gestures, and eye tracking) to reveal learners' coordination of behavioral, cognitive, metacognitive, and social activities during programming (e.g., Wiltshire et al., 2019).

Overall, since the intrinsic value of programming centers on its process, relevant research and practice should integrate instructor-directed lecturing with learner-centered unplugged programming and take a process-oriented perspective to investigate, advance, and assess learners' programming. This research takes a step forward in conducting a holistic analysis of learners' performances, processes, and attitudes in computer programming education in China's formal secondary education. Based on the empirical results, unplugged programming has shown its flexibility and practicability for a wide range of learners in improving their programming knowledge gains, behaviors, and positive attitudes. In sum, it is highly suggested that computer programming education integrate unplugged programming with traditional lectures in formal education to promote learners' programming knowledge, programming engagement, and positive attitudes towards programming.

Availability of data and materials

The data are available upon request from the corresponding authors.

Alamer, R. A., Al-Doweesh, W. A., Al-Khalifa, H. S., & Al-Razgan, M. S. (2015). Programming unplugged: Bridging CS unplugged activities gap for learning key programming concepts. In N. Walker (Eds.), Proceedings of the Fifth International Conference on e-Learning (ICEEE) (pp. 97–103). IEEE. https://doi.org/10.1109/ECONF.2015.27 .

Ballard, E. D., & Haroldson, R. (2021). Analysis of computational thinking in Children’s literature for K-6 students: Literature as a non-programming unplugged resource. Journal of Educational Computing Research . https://doi.org/10.1177/07356331211004048


Bandura, A. (2001). Social cognitive theory: An agentic perspective. Annual Review of Psychology, 52 , 1–26. https://doi.org/10.1111/1467-839X.00024

Bell, T., Alexander, J., Freeman, I., & Grimley, M. (2009). Computer science unplugged: School students doing real computing without computers. Journal of Applied Computing and Information Technology, 13 (1), 20–29.


Bell, T., Rosamond, F., & Casey, N. (2012). Computer science unplugged and related projects in math and computer science popularization. In H. L. Bodlaender, R. Downey, F. V. Fomin, & D. Marx (Eds.), International conference on the multivariate algorithmic revolution and beyond (pp. 398–456). Springer. https://doi.org/10.1007/978-3-642-30891-8_18


Bell, T., & Vahrenhold, J. (2018). CS unplugged—How is it used, and does it work? In H.-J. Böckenhauer, D. Komm, & W. Unger (Eds.), Adventures between lower bounds and higher altitudes: essays dedicated to Juraj Hromkovič on the occasion of his 60th birthday (pp. 497–521). Springer International Publishing.

Berland, M., Martin, T., Benton, T., Petrick, S. C., & Davis, D. (2013). Using learning analytics to understand the learning pathways of novice programmers. Journal of the Learning Sciences, 22 (4), 564–599. https://doi.org/10.1080/10508406.2013.836655

Brackmann, C. P., Román-González, M., Robles, G., Moreno-León, J., Casali, A., & Barone, D. (2017). Development of computational thinking skills through unplugged activities in primary school. In E. Barendsen, & P. Hubwieser (Eds.), Proceedings of the 12th Workshop on Primary and Secondary Computing Education (WiPSCE ’17) (pp. 65–72). ACM. https://doi.org/10.1145/3137065.3137069

Bransford, J. D., Brown, A., & Cocking, R. (2000). How people learn: Mind, brain, experience, and school . National Research Council.

Bruckman, A., Biggers, M., Ericson, B., McKlin, T., Dimond, J., DiSalvo, B., Hewner, M., Ni, L., & Yardi, S. (2009). Georgia computes!: Improving the computing education pipeline. In S. Fitzgerald, & M. Guzdial (Eds.), Proceedings of the 40th ACM technical symposium on computer science education (SIGCSE '09) (pp. 86–90). ACM. https://doi.org/10.1145/1539024.1508899

Century, J., Ferris, K. A., & Zuo, H. (2020). Finding time for computer science in the elementary school day: A quasi-experimental study of a transdisciplinary problem-based learning approach. International Journal of STEM Education, 7 (1), 1–16. https://doi.org/10.1186/s40594-020-00218-3

Chen, B., Resendes, M., Chai, C. S., & Hong, H. Y. (2017). Two tales of time: Uncovering the significance of sequential patterns among contribution types in knowledge-building discourse. Interactive Learning Environments, 25 (2), 162–175. https://doi.org/10.1080/10494820.2016.1276081

Chevalier, M., Giang, C., Piatti, A., & Mondada, F. (2020). Fostering computational thinking through educational robotics: A model for creative computational problem solving. International Journal of STEM Education, 7 (1), 1–18. https://doi.org/10.1186/s40594-020-00238-z

Chia, H. M., & Lim, C. S. (2020). Characterising the pedagogical practices in mathematics lessons among selected malaysian primary schools. The Mathematics Enthusiast, 17 (1), 307–323.

Chittum, J. R., Jones, B. D., Akalin, S., & Schram, Á. B. (2017). The effects of an afterschool STEM program on students’ motivation and engagement. International Journal of STEM Education, 4 (1), 11–26. https://doi.org/10.1186/s40594-017-0065-4

Cohen, L., Manion, L., & Morrison, K. (2013). Research methods in education . Routledge.


CS Unplugged. (2020). Computer science without a computer. https://www.csunplugged.org/zh-hans/

del Olmo-Muñoz, J., Cózar-Gutiérrez, R., & González-Calero, J. A. (2020). Computational thinking through unplugged activities in early years of primary education. Computers & Education, 150 , 103832. https://doi.org/10.1016/j.compedu.2020.103832

Dorn, B., & Tew, A. E. (2015). Empirical validation and application of the computing attitudes survey. Computer Science Education, 25 , 1–6. https://doi.org/10.1080/08993408.2015.1014142

Falloon, G. (2016). An analysis of young students’ thinking when completing basic coding tasks using Scratch Jnr. on the iPad. Journal of Computer Assisted Learning, 32 (6), 576–593. https://doi.org/10.1111/jcal.12155

Faraone, S. V., & Dorfman, D. D. (1987). Lag sequential analysis: Robust statistical methods. Psychological Bulletin, 101 (2), 312–323. https://doi.org/10.1037/0033-2909.101.2.312

Gardeli, A., & Vosinakis, S. (2017). Creating the computer player: An engaging and collaborative approach to introduce computational thinking by combining ‘unplugged’ activities with visual programming. Italian Journal of Educational Technology. https://doi.org/10.17471/2499-4324/910

Gouws, L. A., Bradshaw, K., & Wentworth, P. (2013). Computational thinking in educational activities. In J. Carter, I. Utting, & A. Clear (Eds.), Proceedings of the 18th ACM conference on Innovation and technology in computer science education (ITiCSE ’13) (pp. 10). ACM. https://doi.org/10.1145/2462476.2466518

Grbich, C. (2013). Qualitative data analysis: An introduction . Sage Publications.

Grover, S., Jackiw, N., & Lundh, P. (2019). Concepts before coding: Non-programming interactives to advance learning of introductory programming concepts in middle school. Computer Science Education, 29 (2–3), 106–135. https://doi.org/10.1080/08993408.2019.1568955

Hermans, F., & Avvaloglou, E. (2017). To scratch or not to scratch? A controlled experiment comparing plugged first and unplugged first programming lessons. In Proceedings of WiPSCE' 17 the 12th Workshop on Primary and Secondary Computing Education (pp. 49–56). https://doi.org/10.1145/3137065.3137072

Hosseini, H., Hartt, M., & Mostafapour, M. (2019). Learning IS child’s play: Game-based learning in computer science education. ACM Transactions on Computing Education, 19 (3), 1–18. https://doi.org/10.1145/3282844

Hsu, T., & Liang, Y. (2021). Simultaneously improving computational thinking and foreign language learning: Interdisciplinary media with plugged and unplugged approaches. Journal of Educational Computing Research . https://doi.org/10.1177/0735633121992480

Huang, W., & Looi, C. (2020). A critical review of literature on “unplugged” pedagogies in K-12 computer science and computational thinking education. Computer Science Education, 31 (1), 1–29. https://doi.org/10.1080/08993408.2020.1789411

Kersting, N. (2008). Using video clips as item prompts to measure teachers’ knowledge of teaching mathematics. Educational and Psychological Measurement, 68 (5), 845–861. https://doi.org/10.1177/0013164407313369

Koretsky, M., Keeler, J., Ivanovitch, J., & Cao, Y. (2018). The role of pedagogical tools in active learning: A case for sense-making. International Journal of STEM Education, 5 (1), 1–20. https://doi.org/10.1186/s40594-018-0116-5

Korkmaz, Ö., Çakir, R., & Özden, M. Y. (2017). A validity and reliability study of the computational thinking scales (CTS). Computers in Human Behavior, 72 , 558–569. https://doi.org/10.1016/j.chb.2017.01.005

Lewis, C. (2012). The importance of students’ attention to program state: A case study of debugging behavior. In Alison, C., Kate, S., & Beth, S. (Eds.), Proceedings of the 9th annual international conference on international computing education research (pp.127–134). ACM.

Looi, C. K., How, M. L., Wu, L. K., Seow, P., & Liu, L. (2018). Analysis of linkages between an unplugged activity and the development of computational thinking. Computer Science Education, 28 (3), 255–279. https://doi.org/10.1080/08993408.2018.1533297

Lu, O. H. T., Huang, J. C. H., Huang, A. Y. Q., & Yang, S. J. H. (2017). Applying learning analytics for improving students engagement and learning outcomes in an MOOCs enabled collaborative programming course. Interactive Learning Environments, 25 (2), 220–234. https://doi.org/10.1080/10494820.2016.1278391

Mano, C., Allan, V., & Colley, D. (2010). Effective in-class activities for middle school outreach programs. In Proceedings of the 40th ASEE/IEEE Frontiers in Education Conference (FIE) (pp. F2E-1-F2E-6). IEEE. https://doi.org/10.1109/FIE.2010.5673587

MOE. (2020). General high school information technology curriculum standard (2017 Edition) . The Ministry of Education of the People's Republic of China. http://www.moe.gov.cn/jyb_xxgk/xxgk_jyta/jyta_kjs/202002/.html

Nurbekova, Z., Tolganbaiuly, T., Nurbekov, B., Sagimbayeva, A., & Kazhiakparova, Z. (2020). Project-based learning technology: An example in programming microcontrollers. International Journal of Emerging Technologies in Learning, 15 (11), 218–227. https://doi.org/10.3991/ijet.v15i11.13267

Ochoa, X. (2017). Chapter 11: Multimodal learning analytics. In C. Lang, G. Siemens, A. Wise, & D. Gašević (Eds.), Handbook of learning analytics (1st edn., pp. 143–150). Creative Commons License 4.0.

Panwong, P., & Kemavuthanon, K. (2014). Problem-based learning framework for junior software developer: Empirical study for computer programming students. Wireless Personal Communications, 76 (3), 603–613. https://doi.org/10.1007/s11277-014-1728-9

Papert, S. (1991). Situating constructionism. In I. Harel & S. Papert (Eds.), Constructionism: Research reports and essays (pp. 1–11). Norwood.

Pereira, F. D., Oliveira, E. H., Oliveira, D. B., Cristea, A. I., Carvalho, L. S., Fonseca, S. C., Toda, A., & Isotani, S. (2020). Using learning analytics in the Amazonas: Understanding students’ behaviour in introductory programming. British Journal of Educational Technology, 51 (4), 955–972. https://doi.org/10.1111/bjet.12953

Price, T., & Barnes, T. (2015). Comparing textual and block interfaces in a novice programming environment. In B. Dorn (Eds.), Proceedings of the eleventh annual international conference on international computing education research (ICER’15) (pp. 91–99). ACM. https://doi.org/10.1145/2787622.2787712

Saxena, A., Lo, C. K., Hew, K. F., & Wong, G. K. W. (2020). Designing unplugged and plugged activities to cultivate computational thinking: An exploratory study in early childhood education. The Asia-Pacific Education Researcher, 29 (1), 55–66. https://doi.org/10.1007/s40299-019-00478-w

Schnittka, C. G., Evans, M. A., Won, S., & Drape, T. D. (2015). Looking for learning in afterschool spaces: Studio STEM. Research in Science Education., 46 (3), 389–412. https://doi.org/10.1007/s11165-015-9463-0

Stahl, G. (2009). Studying virtual math teams . Springer.

Stehle, S. M., & Peters-Burton, E. E. (2019). Developing student 21st century skills in selected exemplary inclusive STEM high schools. International Journal of STEM Education, 6 (1), 1–15. https://doi.org/10.1186/s40594-019-0192-1

Sun, D., Ouyang, F., Li, Y., & Chen, H. (2021a). Three contrasting pairs’ collaborative programming processes in China’s secondary education. Journal of Educational Computing Research, 59 (4), 740–762. https://doi.org/10.1177/0735633120973430

Sun, J. C., & Hsu, K. Y. (2019). A smart eye-tracking feedback scaffolding approach to improving students’ learning self-efficacy and performance in a C programming course. Computers in Human Behavior, 95 , 66–72. https://doi.org/10.1016/j.chb.2019.01.036

Sun, L., Hu, L., & Zhou, D. (2021b). Which way of design programming activities is more effective to promote K-12 students’ computational thinking skills? A meta-analysis. Journal of Computer Assisted Learning. https://doi.org/10.1111/jcal.12545

Taub, R., Ben-Ari, M., & Armoni, M. (2009). The effect of CS unplugged on middle-school students' views of CS. In Patrick, B. (Chairs), Annual conference on innovation and technology in computer science education , Paris, France. https://doi.org/10.1145/1562877.1562912

Taub, R., Armoni, M., & Ben-Ari, M. (2012). CS unplugged and middle-school students’ views, attitudes, and intentions regarding CS. ACM Transactions on Computing Education (TOCE), 12 (2), 1–29. https://doi.org/10.1145/2160547.2160551

Tekkumru-Kisa, M., & Stein, M. K. (2017). A framework for planning and facilitating video-based professional development. International Journal of STEM Education, 4 (1), 1–18. https://doi.org/10.1186/s40594-017-0086-z

Tew, A. E., Dorn, B., & Schneider, O. (2012). Toward a validated computing attitudes survey. In A. Clear, K. Sanders, & B. Simon (Eds.), Proceedings of the ninth annual international conference on international computing education research (ICER'12) (pp. 135–142). ACM. https://doi.org/10.1145/2361276.2361303

Thies, R., & Vahrenhold, J. (2013). On plugging “unplugged” into CS classes. In T.Camp, & P. Tymann (Eds.), Proceeding of the 44th ACM technical symposium on computer science education (SIGCSE ’13) (pp. 365–370). ACM. https://doi.org/10.1145/2445196.2445303

Tom, M. (2015). Five cs framework: A student-centered approach for teaching programming courses to students with diverse disciplinary background. Journal of Learning Design, 8 (1), 21–27.

Torres-Torres, Y., Román-González, M., & Pérez-González, J. (2019). Implementation of unplugged teaching activities to foster computational thinking skills in primary school from a gender perspective. In M. A. C., Gonzalez, F. J. R., Sedano, C. F. Llamas, & F. J., Garcia-Penalvo (Eds.), Proceedings of the seventh international conference on technological ecosystems for enhancing multiculturality (TEEM’19) (pp. 209–215). ACM. https://doi.org/10.1145/3362789.3362813

Tsarava, K., Moeller, K., Butz, M., Pinkwart, N., Trautwein, U., & Ninaus, M. (2018). Training computational thinking: Game-based unplugged and plugged-in activities in primary school. In M. Pivec, & Josef. Grundler (Eds.), Proceedings of the 11th European conference on game-based learning (ECGBL) (pp. 687–695). Scopus.

Wiltshire, T. J., Steffensen, S. V., & Fiore, S. M. (2019). Multiscale movement coordination dynamics in collaborative team problem solving. Applied Ergonomics, 79 , 143–151. https://doi.org/10.1016/j.apergo.2018.07.007

Wu, H. T., & Wang, Y. (2017). Research and practice on teaching of programming course based on computational thinking. In H. T. Zhou (Eds.), Proceedings of 2017 4th international conference on information and communication technology for education (ICTE2017) (pp.79–83). Information Engineering Research Institute

Wu, B., Hu, Y., Ruis, A. R., & Wang, M. (2019). Analysing computational thinking in collaborative programming: A quantitative ethnography approach. Journal of Computer Assisted Learning, 35 (3), 421–434. https://doi.org/10.1111/jcal.12348

ZatarainCabada, R., Barrón Estrada, M. L., Ríos Félix, J. M., & Alor Hernández, G. (2018). A virtual environment for learning computer coding using gamification and emotion recognition. Interactive Learning Environments, 28 (8), 1048–1063. https://doi.org/10.1080/10494820.2018.1558256

Zhong, B., Wang, Q., & Chen, J. (2016). The impact of social factors on pair programming in a primary school. Computers in Human Behavior, 64 , 423–431. https://doi.org/10.1016/j.chb.2016.07.017


Acknowledgements

The authors would like to thank the instructors, students, and their parents from The Affiliated School of the College of Education, Zhejiang University for their support of this research.

This work was supported by Ministry of Science and Technology of the People’s Republic of China (2019AAA0105403) and National Natural Science Foundation of China (61907038).

Author information

Authors and Affiliations

College of Education, Zhejiang University, #866, Yuhangtang Rd., Hangzhou, 310058, Zhejiang, China

Dan Sun, Fan Ouyang & Yan Li

The Affiliated School of the College of Education, Zhejiang University, #118, Fanghua Rd., Hangzhou, 310053, Zhejiang, China

Caifeng Zhu


Contributions

DS designed and facilitated this research, analyzed the data and wrote the first draft of the manuscript; FO facilitated data analysis and revised the manuscript; YL built connections with the experimental school and proofread the manuscript; and CZ completed the instruction work in this research and collected data. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Fan Ouyang or Yan Li .

Ethics declarations

Competing interests.

There are no competing interests to declare.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Behavior data for the IDL mode.

Additional file 2.

Behavior data for the UPP mode.

Attitudinal survey

I will be/am good at programming.

I will be doing well/did well in this course.

I like programming.

I am excited about this course/ I was excited about this course.

I might take more programming courses in the future/ I plan to take more programming courses in the next semester.

Do you like computer programming and why?

What do you think of this course so far?

Can you recall what you learned in this course? What is the most impressive part of the course?

What do you think is the easiest or hardest part of this course?

What abilities have you improved after this course?

What is your future plan on learning of computer programming?

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Sun, D., Ouyang, F., Li, Y. et al. Comparing learners’ knowledge, behaviors, and attitudes between two instructional modes of computer programming in secondary education. IJ STEM Ed 8 , 54 (2021). https://doi.org/10.1186/s40594-021-00311-1


Received : 06 April 2021

Accepted : 12 September 2021

Published : 23 September 2021

DOI : https://doi.org/10.1186/s40594-021-00311-1


Keywords: STEM education, Unplugged programming, Process-oriented analysis, Behavioral pattern analysis, Secondary education


OPINION article

Some Evidence on the Cognitive Benefits of Learning to Code

Ronny Scherer

  • 1 Centre for Educational Measurement, Faculty of Educational Sciences, University of Oslo, Oslo, Norway
  • 2 Department of Education and Quality in Learning, Unit for Digitalisation and Education, Kongsberg, Norway
  • 3 Department of Biology, Humboldt University of Berlin, Berlin, Germany

Introduction

Computer coding—an activity that involves the creation, modification, and implementation of computer code and exposes students to computational thinking—is an integral part of today's education in science, technology, engineering, and mathematics (STEM) ( Grover and Pea, 2013 ). As technology is advancing, coding is becoming a necessary process and much-needed skill to solve complex scientific problems efficiently and reproducibly, ultimately elevating the careers of those who master the skill. With many countries around the world launching coding initiatives and integrating computational thinking into the curricula of higher education, secondary education, primary education, and kindergarten, the question arises, what lies behind this enthusiasm for learning to code? Part of the reasoning is that learning to code may ultimately aid students' learning and acquiring of skills in domains other than coding. Researchers, policy-makers, and leaders in the field of computer science and education have made ample use of this argument to attract students into computer science, bring to attention the need for skilled programmers, and make coding compulsory for students. Bill Gates once stated that “[l]earning to write programs stretches your mind, and helps you think better, creates a way of thinking about things that I think is helpful in all domains” (2013). Similar to the claims surrounding chess instruction, learning Latin, video gaming, and brain training ( Sala and Gobet, 2017 ), this so-called “transfer effect” assumes that students learn a set of skills during coding instruction that are also relevant for solving problems in mathematics, science, and other contexts. Despite this assumption and the claims surrounding transfer effects, the evidence backing them seems to stand on shaky legs—a recently published paper even claimed that such evidence does not exist at all ( Denning, 2017 ), yet without reviewing the extant body of empirical studies on the matter. Moreover, simply teaching coding does not ensure that students are able to transfer the knowledge and skills they have gained to other situations and contexts—in fact, instruction needs to be designed for fostering this transfer ( Grover and Pea, 2018 ).

In this opinion paper, we (a) argue that learning to code involves thinking processes similar to those in other domains, such as mathematical modeling and creative problem solving, (b) highlight the empirical evidence on the cognitive benefits of learning computer coding that has bearing on this long-standing debate, and (c) describe several criteria for documenting these benefits (i.e., transfer effects). Despite the positive evidence suggesting that these benefits may exist, we argue that the transfer debate has not yet been settled.

Computer Coding as Problem Solving

Computer coding comprises activities to create, modify, and evaluate computer code along with the knowledge about coding concepts and procedures ( Tondeur et al., 2019 ). Ultimately, computer science educators consider it a vehicle to teaching computational thinking through, for instance, (a) abstraction and pattern generalization, (b) systematic information processing, (c) symbol systems and representations, (d) algorithmic thinking, (e) problem decomposition, (f) debugging and systematic error detection ( Grover and Pea, 2013 ). These skills share considerable similarities with general problem solving and problem solving in specific domains ( Shute et al., 2017 ). Drawing from the “theory of common elements,” one may therefore expect possible transfer effects between coding and problem solving skills ( Thorndike and Woodworth, 1901 ). For instance, creative problem solving requires students to encode, recognize, and formulate the problem (preparation phase), represent the problem (incubation phase), search for and find solutions (illumination phase), evaluate the creative product and monitor the process of creative activities (verification phase)—activities that also play a critical role in coding ( Clements, 1995 ; Grover and Pea, 2013 ). Similarly, solving problems through mathematical modeling requires students to decompose a problem into its parts (e.g., variables), understand their relations (e.g., functions), use mathematical symbols to represent these relations (e.g., equations), and apply algorithms to obtain a solution—activities mimicking the coding process. These two examples illustrate that the processes involved in coding are close to those involved in performing skills outside the coding domain ( Popat and Starkey, 2019 ). This observation has motivated researchers and educators to hypothesize transfer effects of learning to code, and, in fact, some studies found positive correlations between coding skills and other skills, such as information processing, reasoning, and mathematical skills ( Shute et al., 2017 ). Nevertheless, despite the conceptual backing of such transfer effects, which evidence exists to back them empirically?
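As a toy illustration of this parallel (not an example taken from the cited studies), the short sketch below maps the modeling steps onto code: the problem is decomposed into named variables, the relation between them is expressed as a function, and an algorithmic step applies that relation to obtain a solution. The tank-filling scenario and every name in it are invented for illustration.

# Toy example: mathematical modeling expressed as code.
# Decompose the problem into variables, express their relations as functions,
# and apply an algorithm to obtain the solution.

def net_fill_rate(fill_rate: float, drain_rate: float) -> float:
    """Relation between the problem's parts (litres per minute)."""
    return fill_rate - drain_rate

def time_to_fill(volume: float, fill_rate: float, drain_rate: float) -> float:
    """Algorithmic step: apply the relation to compute the answer."""
    rate = net_fill_rate(fill_rate, drain_rate)
    if rate <= 0:
        raise ValueError("The tank never fills at these rates.")
    return volume / rate

# A 120-litre tank filling at 4 L/min while draining at 1 L/min takes 40 minutes.
print(time_to_fill(volume=120, fill_rate=4, drain_rate=1))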

Cognitive Benefits of Learning Computer Coding

Despite the conceptual argument that computer coding engages students in general problem-solving activities and may ultimately be beneficial for acquiring cognitive skills beyond coding, the empirical evidence backing these transfer effects is diverse (Denning, 2017). While some experimental and quasi-experimental studies documented moderate to large effects of coding interventions on skills such as reasoning, creative thinking, and mathematical modeling, other studies did not find support for any transfer effect. Several research syntheses were therefore aimed at clarifying and explaining this diversity.

In 1991, Liao and Bright reviewed 65 empirical studies on the effects of learning-to-code interventions on measures of cognitive skills (Liao and Bright, 1991). Drawing from the published literature between 1960 and 1989, the authors included experimental, quasi-experimental, and pre-experimental studies in classrooms with a control group (non-programming) and a treatment group (programming). The primary studies had to provide quantitative information about the effectiveness of the interventions on a broad range of cognitive skills, such as planning, reasoning, and metacognition. Studies that presented only correlations between programming and other cognitive skills were excluded. The interventions focused on learning the programming languages Logo, BASIC, Pascal, and mixtures thereof. This meta-analysis resulted in a positive effect size quantified as the well-known Cohen's d coefficient, indicating that control group and experimental group average gains in cognitive skills differed by 0.41 standard deviations. Supporting the existence of transfer effects, this evidence indicated that learning coding aided the acquisition of other skills to a considerable extent. Although this meta-analysis was ground-breaking at the time, transferring it into today's perspective on coding and transfer is problematic for several reasons: First, during the last three decades, the tools used to engage students in computer coding have changed substantially, and visual programming languages such as Scratch simplify the creation and understanding of computer code. Second, Liao and Bright included any cognitive outcome variable without considering possible differences in the transfer effects between skills (e.g., reasoning may be enhanced more than reading skills). Acknowledging this limitation, Liao (2000) performed a second, updated meta-analysis in 2000 summarizing the results of 22 studies and found strong effects on coding skills (d̄ = 2.48), yet insignificant effects on creative thinking (d̄ = −0.13). Moderate effects occurred for critical thinking, reasoning, and spatial skills (d̄ = 0.37–0.58).

Drawing from a pool of 105 intervention studies and 539 reported effects, Scherer et al. (2019) put the question of transfer effects to a new test. Their meta-analysis included experimental and quasi-experimental intervention studies with pretest-posttest and posttest-only designs. Each educational intervention had to include at least one control group and at least one treatment group with a design that allowed for studying the effects of coding (e.g., treatment group: intervention program of coding with Scratch®, control group: no coding intervention at all; please see the meta-analysis for more examples of study designs). Finally, the outcome measures were performance-based measures, such as the Torrance Test of Creative Thinking or intelligence tests. This meta-analysis showed that learning to code had a positive and strong effect on coding skills (ḡ = 0.75) and a positive and medium effect on cognitive skills other than coding (ḡ = 0.47). The authors distinguished further between the different types of cognitive skills and found a range of effect sizes, ḡ = −0.02–0.73 (Figure 1). Ultimately, they documented the largest effects for creative thinking, mathematical skills, metacognition, reasoning, and spatial skills (ḡ = 0.37–0.73). At the same time, these effects were context-specific and depended on study design features, such as randomization and the treatment of control groups.


Figure 1. Effect sizes of learning-to-code interventions on several cognitive skills and their 95% confidence intervals (Scherer et al., 2019). The effect sizes represent mean differences in the cognitive skill gains between the control and experimental groups in units of standard deviations (Hedges' g).
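For readers who want to see how such standardized mean differences are obtained from reported summary statistics, a minimal sketch follows: it computes Cohen's d from the pooled standard deviation and applies the usual small-sample correction to yield Hedges' g. The group means, standard deviations, and sample sizes are invented and are not taken from the meta-analyses discussed here.

# Standardized mean difference from group summary statistics (invented numbers).
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    # Pooled standard deviation across the treatment and control groups.
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    d = cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c)
    df = n_t + n_c - 2
    correction = 1 - 3 / (4 * df - 1)   # Hedges' small-sample correction factor
    return correction * d

# Treatment gains M = 14.2 (SD = 5.1, n = 30) vs. control M = 11.9 (SD = 5.4, n = 28)
print(round(hedges_g(14.2, 11.9, 5.1, 5.4, 30, 28), 2))   # about 0.43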

These research syntheses provide some evidence for the transfer effects of learning to code on other cognitive skills—learning to code may indeed have cognitive benefits. At the same time, as the evidence base included some study designs that deviated from randomized controlled trials, strictly causal conclusions (e.g., “Students' gains in creativity were caused by the coding intervention.”) cannot be drawn. Instead, one may conclude that learning to code was associated with improvements in other skills measures. Moreover, the evidence does not indicate that transfer just “happens”; rather, it must be facilitated and trained explicitly ( Grover and Pea, 2018 ). This represents a “cost” of transfer in the context of coding: among others, teaching for transfer requires sufficient teaching time; student-centered, cognitively activating, supportive, and motivating learning environments; and teacher training—in fact, possible transfer effects can be moderated by these instructional conditions (e.g., Gegenfurtner, 2011 ; Yadav et al., 2017 ; Waite et al., 2020 ; Beege et al., 2021 ). The extant body of research on fostering computational thinking through teaching programming suggests that problem-based learning approaches that involve information processing, scaffolding, and reflection activities are effective ways to promote the near transfer of coding ( Lye and Koh, 2014 ; Hsu et al., 2018 ). Besides the cost of effective instructional designs, another cost refers to the cognitive demands of the transfer: existing models of transfer suggest that the more similar the tasks during instruction in one domain (e.g., coding) are to those in another domain (e.g., mathematical problem solving), the more likely students are to transfer their knowledge and skills between domains ( Taatgen, 2013 ). Mastering this transfer involves additional cognitive skills, such as executive functioning (e.g., switching between tasks) and metacognition (e.g., recognizing similar tasks and solution patterns; Salomon and Perkins, 1987 ; Popat and Starkey, 2019 ). It is therefore key to further investigate the conditions and mechanisms underlying the possible transfer of the skills and knowledge students gain during coding instruction; carefully designed learning interventions and experimental studies are needed that include the teaching, mediation, and assessment of transfer.

Challenges With Measuring Cognitive Benefits

Despite the promising evidence on the cognitive benefits of learning to code, the existing body of research still needs to address several challenges to detect and document transfer effects—these challenges include but are not limited to the following ( Scherer et al., 2019 ):

• Measuring coding skills. To identify the effects of learning-to-code interventions on coding skills, reliable and valid measures of these skills (e.g., performance scores) must be included. These measures allow researchers to establish baseline effects, that is, the effects on the skills trained during the intervention ( Melby-Lervåg et al., 2016 ). However, the domain of computer coding largely lacks measures showing sufficient quality ( Tang et al., 2020 ).

• Measuring other cognitive skills. Next to the measures of coding skills, measures of other cognitive skills must be administered to trace whether coding interventions are beneficial for learning skills outside the coding domain and ultimately document transfer effects. This design allows researchers to examine both near and far transfer effects and to test whether gains in cognitive skills may be caused by gains in coding skills ( Melby-Lervåg et al., 2016 ).

  • Implementing experimental research designs. To detect and interpret intervention effects over time, pre- and post-test measures of coding and other cognitive skills are taken, the assignment to the experimental group(s) is random, and students in the control group(s) do not receive the coding intervention. Existing meta-analyses examining the near and far transfer effects of coding have shown that these design features play a pivotal, moderating role: the effects tend to be lower for randomized experimental studies with active control groups (e.g., Liao, 2000 ; Scherer et al., 2019 , 2020 ). Scholars in the field of transfer in education have emphasized the need to take into account more aspects of transfer than only changes in scores between the pre- and post-tests. These aspects include, for instance, continuous observations and tests of possible transfer over longer periods of time and qualitative measures of knowledge application that could make visible students' ability to learn new things and to solve (new) problems in different types of situations ( Bransford and Schwartz, 1999 ; Lobato, 2006 ).

Ideally, research studies address all of these challenges; in reality, however, researchers must examine the consequences of departures from a well-structured experimental design and evaluate the validity of the resultant transfer effects.
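To make the experimental design described above tangible, the snippet below simulates a pretest-posttest experiment with one treatment and one control group, compares gain scores with an independent-samples t-test, and reports a standardized effect size. All numbers are simulated for illustration; this is not an analysis from any of the studies cited above.

# Simulated pretest-posttest design with a control group (invented data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre_t = rng.normal(50, 10, 30)
post_t = pre_t + rng.normal(8, 6, 30)    # hypothetical larger gain in the coding group
pre_c = rng.normal(50, 10, 30)
post_c = pre_c + rng.normal(3, 6, 30)    # hypothetical smaller gain in the control group

gain_t, gain_c = post_t - pre_t, post_c - pre_c

# Independent-samples t-test on gain scores plus Cohen's d for the gain difference.
res = stats.ttest_ind(gain_t, gain_c)
pooled_sd = np.sqrt((gain_t.var(ddof=1) + gain_c.var(ddof=1)) / 2)
d = (gain_t.mean() - gain_c.mean()) / pooled_sd
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.4f}, d = {d:.2f}")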

Overall, the evidence supporting the cognitive benefits of learning to code is promising. In the first part of this opinion paper, we argued that coding skills and other skills, such as creative thinking and mathematical problem solving, share skillsets and that these common elements form the ground for expecting some degree of transfer from learning to code into other cognitive domains (e.g., Shute et al., 2017 ; Popat and Starkey, 2019 ). In fact, the existing meta-analyses supported the possible existence of this transfer for the two domains. This reasoning assumes that students engage in activities during coding through which they acquire a set of skills that could be transferred to other contexts and domains (e.g., Lye and Koh, 2014 ; Scherer et al., 2019 ). The specific mechanisms and beneficial factors of this transfer, however, still need to be clarified.

The evidence we have presented in this paper suggests that students' performance on tasks in domains other than coding is not enhanced to the same extent—that is, acquiring some cognitive skills other than coding is more likely than acquiring others. We argue that the overlap of skillsets between coding and skills in other domains may differ across domains, and the extent to which transfer seems likely may depend on the degree of this overlap (i.e., the common elements), next to other key aspects, such as task designs, instruction, and practice. Despite the evidence that cognitive skills may be promoted, the direct transfer of what is learned through coding is complex and does not happen automatically. To shed further light on the possible causes of why transferring coding skills to situations in which students are required to, for instance, think creatively may be more likely than transferring coding skills to situations in which students are required to comprehend written text as part of literacy, researchers are encouraged to continue testing these effects with carefully designed intervention studies and valid measures of coding and other cognitive skills. The transfer effects, although large enough to be significant, establish some evidence on the relation between learning to code and gains in other cognitive skills; for some skills, however, they are too modest to settle the ongoing debate about whether transfer effects are due solely to learning to code or whether they exist at all. More insights into successful transfer are needed to inform educational practice and policy-making about the opportunities to leverage the potential that lies within the teaching of coding.

Author Contributions

RS conceived the idea of the paper and drafted the manuscript. FS and BS-S drafted additional parts of the manuscript and performed revisions. All authors contributed to the article and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Beege, M., Schneider, S., Nebel, S., Zimm, J., Windisch, S., and Rey, G. D. (2021). Learning programming from erroneous worked-examples. Which type of error is beneficial for learning? Learn. Instruct. 75:101497. doi: 10.1016/j.learninstruc.2021.101497


Bransford, J. D., and Schwartz, D. L. (1999). Rethinking transfer: a simple proposal with multiple implications. Rev. Res. Educ. 24, 61–100. doi: 10.3102/0091732X024001061

Clements, D. H. (1995). Teaching creativity with computers. Educ. Psychol. Rev. 7, 141–161. doi: 10.1007/BF02212491

Denning, P. J. (2017). Remaining trouble spots with computational thinking. Commun. ACM 60, 33–39. doi: 10.1145/2998438

Gegenfurtner, A. (2011). Motivation and transfer in professional training: A meta-analysis of the moderating effects of knowledge type, instruction, and assessment conditions. Educ. Res. Rev. 6, 153–168. doi: 10.1016/j.edurev.2011.04.001

Grover, S., and Pea, R. (2013). Computational thinking in K-12: a review of the state of the field. Educ. Res. 42, 38–43. doi: 10.3102/0013189X12463051

Grover, S., and Pea, R. (2018). “Computational thinking: a competency whose time has come,” in Computer Science Education: Perspectives on Teaching and Learning in School , eds S. Sentance, S. Carsten, and E. Barendsen (London: Bloomsbury Academic), 19–38.


Hsu, T.-C., Chang, S.-C., and Hung, Y.-T. (2018). How to learn and how to teach computational thinking: suggestions based on a review of the literature. Comput. Educ. 126, 296–310. doi: 10.1016/j.compedu.2018.07.004

Liao, Y.-K. C. (2000). A Meta-analysis of Computer Programming on Cognitive Outcomes: An Updated Synthesis . Montréal, QC: EdMedia + Innovate Learning.

Liao, Y.-K. C., and Bright, G. W. (1991). Effects of computer programming on cognitive outcomes: a meta-analysis. J. Educ. Comput. Res. 7, 251–268. doi: 10.2190/E53G-HH8K-AJRR-K69M

Lobato, J. (2006). Alternative perspectives on the transfer of learning: history, issues, and challenges for future research. J. Learn. Sci. 15, 431–449. doi: 10.1207/s15327809jls1504_1

Lye, S. Y., and Koh, J. H. L. (2014). Review on teaching and learning of computational thinking through programming: what is next for K-12? Comput. Human Behav. 41, 51–61. doi: 10.1016/j.chb.2014.09.012

Melby-Lervåg, M., Redick, T. S., and Hulme, C. (2016). Working memory training does not improve performance on measures of intelligence or other measures of “far transfer”: evidence from a meta-analytic review. Perspect. Psychol. Sci. 11, 512–534. doi: 10.1177/1745691616635612


Popat, S., and Starkey, L. (2019). Learning to code or coding to learn? a systematic review. Comput. Educ. 128, 365–376. doi: 10.1016/j.compedu.2018.10.005

Sala, G., and Gobet, F. (2017). Does far transfer exist? negative evidence from chess, music, and working memory training. Curr. Dir. Psychol. Sci. 26, 515–520. doi: 10.1177/0963721417712760

Salomon, G., and Perkins, D. N. (1987). Transfer of cognitive skills from programming: when and how? J. Educ. Comput. Res. 3, 149–169. doi: 10.2190/6F4Q-7861-QWA5-8PL1

Scherer, R., Siddiq, F., and Sánchez Viveros, B. (2019). The cognitive benefits of learning computer programming: a meta-analysis of transfer effects. J. Educ. Psychol. 111, 764–792. doi: 10.1037/edu0000314

Scherer, R., Siddiq, F., and Viveros, B. S. (2020). A meta-analysis of teaching and learning computer programming: Effective instructional approaches and conditions. Comput. Human Behav. 109:106349. doi: 10.1016/j.chb.2020.106349

Shute, V. J., Sun, C., and Asbell-Clarke, J. (2017). Demystifying computational thinking. Educ. Res. Rev. 22, 142–158. doi: 10.1016/j.edurev.2017.09.003

Taatgen, N. A. (2013). The nature and transfer of cognitive skills. Psychol. Rev. 120, 439–471. doi: 10.1037/a0033138

Tang, X., Yin, Y., Lin, Q., Hadad, R., and Zhai, X. (2020). Assessing computational thinking: a systematic review of empirical studies. Comput. Educ. 148:103798. doi: 10.1016/j.compedu.2019.103798

Thorndike, E. L., and Woodworth, R. S. (1901). The influence of improvement in one mental function upon the efficiency of other functions. (I). Psychol. Rev. 8, 247–261. doi: 10.1037/h0074898

Tondeur, J., Scherer, R., Baran, E., Siddiq, F., Valtonen, T., and Sointu, E. (2019). Teacher educators as gatekeepers: preparing the next generation of teachers for technology integration in education. Br. J. Educ. Technol. 50, 1189–1209. doi: 10.1111/bjet.12748

Waite, J., Curzon, P., Marsh, W., and Sentance, S. (2020). Difficulties with design: the challenges of teaching design in K-5 programming. Comput. Educ. 150:103838. doi: 10.1016/j.compedu.2020.103838

Yadav, A., Gretter, S., Good, J., and McLean, T. (2017). “Computational thinking in teacher education,” in Emerging Research, Practice, and Policy on Computational Thinking , eds P. J. Rich and C. B. Hodges (New York, NY: Springer International Publishing), 205–220.

Keywords: computational thinking skills, transfer of learning, cognitive skills, meta-analysis, experimental studies

Citation: Scherer R, Siddiq F and Sánchez-Scherer B (2021) Some Evidence on the Cognitive Benefits of Learning to Code. Front. Psychol. 12:559424. doi: 10.3389/fpsyg.2021.559424

Received: 06 May 2020; Accepted: 17 August 2021; Published: 09 September 2021.


Copyright © 2021 Scherer, Siddiq and Sánchez-Scherer. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Ronny Scherer, ronny.scherer@cemo.uio.no


Programming Languages

Ph.D. Students

  • Luke Bernick
  • Oliver Daids
  • Dietrich Geisler
  • Mark Moeller
  • Rachit Nigam
  • Oliver Richardson
  • Goktug Saatcioglu


The programming languages research group at Cornell includes eight faculty and over two dozen Ph.D. students. We are proud of both our breadth and depth in this core discipline. Cornell has been known from the beginning for its research in programming languages. We have made foundational contributions to type theory, automated theorem proving, and language semantics. A more recent theme has been language-based solutions to important problems such as computer security, networking, and distributed programming. Cornell researchers have also contributed to language implementation, program analysis and optimization, domain-specific languages, and software engineering.

See the PL group's site for news and a full list of people involved in PL research.


Robert Constable  researches programming languages and formal methods in the context of type theory. The Nuprl proof assistant, developed by Constable and his group, is a dependently-typed language that can be used to describe distributed computing, as a formal specification language for computing tasks, and as a theory for formalizing topics in constructive and intuitionistic mathematics (of which classical mathematics can usually be seen as a special case). Constable is also interested in synthesizing programs and concurrent processes from proofs, developing systems that can be shown to be secure by construction, and exploring the deep connections between programming and logic.
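As a rough, generic illustration of what a dependently-typed language makes possible (written here in Lean 4 rather than Nuprl, and not code from the Nuprl project), the example below indexes a vector type by its length, so that taking the head of an empty vector is rejected by the type checker rather than failing at run time.

-- Length-indexed vectors: the vector's length is part of its type.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

-- `head` only accepts vectors whose type guarantees they are non-empty.
def Vec.head {α : Type} {n : Nat} : Vec α (n + 1) → α
  | .cons x _ => x

#eval Vec.head (Vec.cons 42 Vec.nil)   -- prints 42
-- Vec.head Vec.nil                    -- would be rejected by the type checker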


Elementary Students Learning Computer Programming: An Investigation of Their Knowledge Retention, Motivation, and Perceptions

  • Research Article
  • Published: 06 July 2022
  • Volume 70, pages 783–806 (2022)

Cite this article


  • Tian Luo   ORCID: orcid.org/0000-0002-8138-3722 1 ,
  • Jilian Reynolds 1 &
  • Pauline Salim Muljana   ORCID: orcid.org/0000-0003-0668-9083 1  


Students need to learn and practice computational thinking and skills throughout PreK-12 to be better prepared for entering college and future careers. We designed a math-infused computer science course for third to fifth graders to learn programming. This study aims to investigate the impact of the course on students' knowledge acquisition of mathematical and computational concepts, their motivation, and their perceptions of the computing activities. Fifty-one students at a Boys and Girls Club participated in the study. Data collection procedures included pre- and post-tests, pre- and post-surveys, in-class observations, and one-on-one interviews. Results indicate that students improved significantly on mathematical and computational concepts. They also tended to find computer programming fun, comprehensible, and enjoyable, and were able to perceive the value of learning it. Implications and recommendations for future research are also discussed.



State of Computer Science Education (2021). Retrieved from https://advocacy.code.org/

Akinola, S. O. (2015). Computer programming skill and gender difference: An empirical study. American Journal of Scientific and Industrial Research , 7(1), 1–9. https://doi.org/10.5251/ajsir.2016.7.1.1.9

Google Scholar  

Angeli, C., Voogt, J., Fluck, A., Webb, M., Cox, M., Malyn-Smith, J., & Zagami, J. (2016). A K-6 computational thinking curriculum framework: Implications for teacher knowledge. Journal of Educational Technology & Society , 19(3), 47–57

Armoni, M. (2012). Teaching CS in kindergarten: How early can the pipeline begin? ACM Inroads , 3(4), 18–19. https://doi.org/10.1145/2381083.2381091

Article   Google Scholar  

Atmatzidou, S., & Demetriadis, S. (2016). Advancing students’ computational thinking skills through educational robotics: A study on age and gender relevant differences. Robotics and Autonomous Systems , 75, 661–670. https://doi.org/10.1016/j.robot.2015.10.008

Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: What is involved and what is the role of the computer science education community? ACM Inroads , 2, 48–54. https://doi.org/10.1145/1929887.1929905

Basawapatna, A. R., Koh, K. H., & Repenning, A. (2010, June). Using scalable game design to teach computer science from middle school to graduate school. In Proceedings of the fifteenth annual conference on Innovation and technology in computer science education (pp. 224–228). ACM. https://doi.org/10.1145/1822090.1822154

Belanger, C., Christenson, H., & Lopac, K. (2018). Confidence and common challenges: The effects of teaching computational thinking to students ages 10–16 [Master’s thesis, St. Catherine University]. SOPHIA Repository. https://sophia.stkate.edu/maed/267

Benton, L., Hoyles, C., Kalas, I., & Noss, R. (2017). Bridging primary programming and mathematics: Some findings of design research in England. Digital Experiences in Mathematics Education , 3(2), 115–138. https://doi.org/10.1007/s40751-017-0028-x

Berland, M., & Wilensky, U. (2015). Comparing virtual and physical robotics environments for supporting complex systems and computational thinking. Journal of Science Education and Technology , 24(5), 628–647. https://doi.org/10.1007/s10956-015-9552-x

Bers, M., & Horn, M. (2010). Tangible programming in early childhood: Revisiting developmental assumptions through new technologies. In I. Berson, & M. Berson (Eds.), High-tech tots: Childhood in a digital world (pp. 49–70). Information Age Publishing

Bubnó, K., & Takács, V. L. (2019). Cognitive aspects of mathematics-aided computer science teaching. Acta Polytechnica Hungarica , 16(6), 73–93. http://acta.uni-obuda.hu/Bubno_Takacs_93.pdf

Burke, Q. (2016). Mind the metaphor: Charting the rhetoric about introductory programming in K-12 schools. On the Horizon , 24(3), 210–220. https://doi.org/10.1108/OTH-03-2016-0010

Burnard, P. (1991). A method of analysing interview transcripts in qualitative research. Nurse Education Today , 11(6), 461–466. https://doi.org/10.1016/0260-6917(91)90009-Y

Caglar, F., Shekhar, S., Gokhale, A., Basu, S., Rafi, T., Kinnebrew, J., & Biswas, G. (2018). Simulation modelling practice and theory cloudhosted simulation-as-a-service for high school STEM education. Simulation Modelling Practice and Theory , 58 (2015), 255–273. https://doi.org/10.1016/j.simpat.2015.06.006

Calder, N. (2010). Using Scratch: An integrated problem-solving approach to mathematical thinking. Australian Primary Mathematics Classroom , 15(4), 9–14. https://doi.org/10.1007/s10857-012-9226-z

Clements, D. H. (2002). Computers in early childhood mathematics. Contemporary Issues in Early Childhood , 3(2), 160–181

Clements, D. H., Battista, M. T., & Sarama, J. (2001). Logo and geometry . National Council of Teachers of Mathematics. https://doi.org/10.2307/749924

Coşar, M., & Özdemir, S. (2020). The effects of computer programming on elementary school students’ academic achievement and attitudes towards computer. Elementary Education Online , 19(3), 1509–1522. https://doi.org/10.17051/ilkonline.2020.732794

Creswell, J. W., & Clark, V. L. P. (2017). Designing and conducting mixed methods research . Sage publications.

Denning, P. J. (2017). Remaining trouble spots with computational thinking. Communications of the ACM , 60(6), 33–39. https://doi.org/10.1145/2998438

Felleisen, M., & Krishnamurthi, S. (2009). Viewpoint: Why computer science doesn’t matter. Communication of the ACM , 52(7), 37–40. https://doi.org/10.1145/1538788.1538803

Fisler, K., Schanzer, E., Weimar, S., Fetter, A., Renninger, K. A., Krishnamurthi, S. … Koerner, C. (2021, March). Evolving a K-12 curriculum for integrating computer science into mathematics. In Proceedings of the 52nd ACM Technical Symposium on Computer Science Education (pp. 59–65). Association for Computing Machinery. https://doi.org/10.1145/3408877.3432546

Flannery, L. P., Silverman, B., Kazakoff, E. R., Bers, M. U., Bontá, P., & Resnick, M. (2013). Designing ScratchJr: Support for early childhood learning through computer programming. In Proceedings of the 12th International Conference on Interaction Design and Children (pp. 1–10). ACM. https://doi.org/10.1145/2485760.2485785

Fluck, A., Webb, M., Cox, M., Angeli, C., Malyn-Smith, J., Voogt, J., & Zagami, J. (2016). Arguing for computer science in the school curriculum. Educational Technology and Society , 19(3), 38–46

Garneli, V., & Giannakos, M. N. (2015). Computing education in K-12 schools: A review of the literature. In Proceedings of 2015 IEEE Global Engineering Education Conference (EDUCON) , p. 543–551. https://doi.org/10.1109/EDUCON.2015.7096023

Gim, N. G. (2021). Development of life skills program for primary school students: Focus on entry programming. Computers , 10(5), 1–17. https://doi.org/10.3390/computers10050056

Google Inc. & Gallup Inc (2016). Trends in the state of computer science in U.S. K-12 schools. http://goo.gl/j291E0

Grover, S., & Pea, R. (2013). Using a discourse-intensive pedagogy and Android's App Inventor for introducing computational concepts to middle school students. In Proceedings of the 44th ACM Technical Symposium on Computer Science Education (pp. 723–728). ACM. https://doi.org/10.1145/2445196.2445404

Grover, S., Pea, R., & Cooper, S. (2015). Designing for deeper learning in a blended computer science course for middle school students. Computer Science Education , 25(2), 199–237. https://doi.org/10.1080/08993408.2015.1033142

Gutierrez, F. J., Simmonds, J., Hitschfeld, N., Casanova, C., Sotomayor, C., & Peña-Araya, V. (2018). Assessing software development skills among K-6 learners in a project-based workshop with Scratch. Proceedings of the 40th International Conference on Software Engineering: Software Engineering Education and Training (pp. 98–107). IEEE Xplore

Harel, I., & Papert, S. (1990). Software design as a learning environment. Interactive Learning Environments , 1(1), 1–32. https://doi.org/10.1080/1049482900010102

Hickmott, D., Prieto-Rodriguez, E., & Holmes, K. (2018). A scoping review of studies on computational thinking in K–12 mathematics classrooms. Digital Experiences in Mathematics Education , 4(1), 48–69. https://doi.org/10.1007/s40751-017-0038-8

Hsu, T. C., Chang, S. C., & Hung, Y. T. (2018). How to learn and how to teach computational thinking: Suggestions based on a review of the literature. Computers & Education , 126, 296–310. https://doi.org/10.1016/j.compedu.2018.07.004

Hughes, J., Gadanidis, G., & Yiu, C. (2017). Digital making in elementary mathematics education. Digital Experiences in Mathematics Education , 3(2), 139–153. https://doi.org/10.1007/s40751-016-0020-x

Jenkins, C. (2015). A work in progress paper: Evaluating a microworlds-based learning approach for developing literacy and computational thinking in cross-curricular contexts. Proceedings of the Workshop in Primary and Secondary Computing Education (pp. 61–64). ACM. https://doi.org/10.1145/2818314.2818316

Kumar, D. (2014). Digital playgrounds for early computing education. ACM Inroads , 5(1), 20–21. https://doi.org/10.1145/2568195.2568200

Lakanen, A. J., & Kärkkäinen, T. (2019). Identifying pathways to computer science: The long-term impact of short-term game programming outreach interventions. ACM Transactions on Computing Education (TOCE) , 19(3), 1–30. https://doi.org/10.1145/3283070

Lambert, L., & Guiffre, H. (2009). Computer science outreach in an elementary school. Journal of Computing Sciences in Colleges , 24(3), 118–124

Lambić, D., Đorić, B., & Ivakić, S. (2020). Investigating the effect of the use of code.org on younger elementary school students’ attitudes towards programming. Behaviour and Information Technology . Advance online publication. https://doi.org/10.1080/0144929X.2020.1781931

Lee, Y., & Cho, J. (2019). Quantifying the effects of programming education on students’ knowledge representation and perception in computational thinking. International Journal of Innovation, Creativity and Change , 9(4), 27–38

Leedy, P. D., & Ormrod, J. E. (2016). Practical research: Planning and design . Pearson

Lewis, C. M. (2010). How programming environment shapes perception, learning and goals: Logo vs. Scratch. Proceedings of the 41st ACM Technical Symposium on Computer Science Education (pp. 346–350). ACM. https://doi.org/10.1145/1734263.1734383

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Sage Publications.

Lu, J. J., & Fletcher, G. H. (2009). Thinking about computational thinking. Proceedings of the 40th ACM Technical Symposium on Computer Science Education (pp. 260–264). ACM. https://doi.org/10.1145/1508865.1508959

Lye, S. Y., & Koh, J. H. L. (2014). Review on teaching and learning of computational thinking through programming: What is next for K-12? Computers in Human Behavior , 41, 51–61. https://doi.org/10.1016/j.chb.2014.09.012

Maloney, J. H., Peppler, K., Kafai, Y., Resnick, M., & Rusk, N. (2008). Programming by choice: Urban youth learning programming with Scratch. Proceedings of the 39th SIGCSE Technical Symposium on Computer Science Education (pp. 367–371). ACM. https://doi.org/10.1145/1352135.1352260

Maloney, J. H., Resnick, M., Rusk, N., Silverman, B., & Eastmond, E. (2010). The Scratch programming language and environment. ACM Transactions on Computing Education , 10(4), 16. https://doi.org/10.1145/1868358.1868363

Manches, A., & Plowman, L. (2017). Computing education in children’s early years: A call for debate. British Journal of Educational Technology , 48(1), 191–201. https://doi.org/10.1111/bjet.12355

Matere, I. M., Weng, C., Astatke, M., Hsia, C. H., & Fan, C. G. (2021). Effect of design-based learning on elementary students computational thinking skills in visual programming maker course. Interactive Learning Environments . Advance online publication. https://doi.org/10.1080/10494820.2021.1938612

Meyer, D., & Batzner, A. (2016, November). Engaging computer science non-majors by teaching K-12 pupils programming: first experiences with a large-scale voluntary program. Proceedings of the 16th Koli Calling International Conference on Computing Education Research (pp. 174–175). ACM. https://doi.org/10.1145/2999541.2999563

Mioduser, D., Levy, S., & Talis, V. (2009). Episodes to scripts to rules: Concrete abstractions in kindergarten children’s explanations of a robot’s behaviors. International Journal of Technology and Design Education , 19(1), 15–36. https://doi.org/10.1007/s10798-007-9040-6

Mladenović, M., Žanko, Ž., & Aglić Čuvić, M. (2021). The impact of using program visualization techniques on learning basic programming concepts at the K–12 level. Computer Applications in Engineering Education , 29(1), 145–159. https://doi.org/10.1002/cae.22315

Morelli, R., De Lanerolle, T., Lake, P., Limardo, N., Tamotsu, E., & Uche, C. (2011). Can Android App Inventor bring computational thinking to K-12? Proceedings of the 42nd ACM Technical Symposium on Computer Science Education (SIGCSE’11) (pp. 1–6). ACM

Mouza, C., Yadav, A., & Ottenbreit-Leftwich, A. (2018). Developing computationally literate teachers: Current perspectives and future directions for teacher preparation in computing education. Journal of Technology and Teacher Education , 26(3), 333–352

Namukasa, I. K., Kotsopoulos, D., Floyd, L., Weber, J., Kafai, Y. B., Khan, S., et al. (2015). From computational thinking to computational participation: Towards achieving excellence through coding in elementary schools. In G. Gadanidis (Ed.), Math + coding symposium . Western University

Neri, F. (2021). Teaching mathematics to computer scientists: Reflections and a case study. SN Computer Science , 2(2), https://doi.org/10.1007/s42979-021-00461-7

Niemelä, P. S., & Helevirta, M. (2017). K-12 curriculum research: The chicken and the egg of math-aided ICT teaching. International Journal of Modern Education and Computer Science , 9(1), 1–14. https://doi.org/10.5815/ijmecs.2017.01.01

Niemelä, P., Partanen, T., Harsu, M., Leppänen, L., & Ihantola, P. (2017). Computational thinking as an emergent learning trajectory of mathematics. ACM International Conference Proceeding Series , 70–79. https://doi.org/10.1145/3141880.3141885

Noh, J., & Lee, J. (2020). Effects of robotics programming on the computational thinking and creativity of elementary school students. Educational Technology Research and Development , 68(1), 463–484. https://doi.org/10.1007/s11423-019-09708-w

Papastergiou, M. (2009). Digital game-based learning in high-school computer science education: Impact on educational effectiveness and student motivation. Computers and Education , 52(1), 1–12. https://doi.org/10.1016/j.compedu.2008.06.004

Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Sage Publications

Papert, S., Watt, D., diSessa, A., & Weir, S. (1979). Final report of the Brookline Logo Project: Project summary and data analysis (Logo Memo 53) . MIT Logo Group

Powers, J., & Azhar, M. (2020). Preparing teachers to engage students in computational thinking through an introductory robot design activity. Journal of Computers in Mathematics and Science Teaching , 39(1), 49–70

Prottsman, K. (2014). Computer science for the elementary classroom. ACM Inroads , 5(4), 60–63

Qualls, J. A., & Sherrell, L. B. (2010). Why computational thinking should be integrated into the curriculum. Journal of Computing Sciences in Colleges , 25(5), 66–71

Razak, M. R. B., & Ismail, N. Z. B. (2018). Influence of mathematics in programming subjects. In AIP Conference Proceedings, 1974, Article 050011. https://doi.org/10.1063/1.5041711

Relkin, E., de Ruiter, L. E., & Bers, M. U. (2021). Learning to code and the acquisition of computational thinking by young children. Computers and Education , 169, 104222. https://doi.org/10.1016/j.compedu.2021.104222

Rich, P. J., Browning, S. F., Perkins, M., et al. (2019). Coding in K-8: International trends in teaching elementary/primary computing. TechTrends , 63, 311–329. https://doi.org/10.1007/s11528-018-0295-4

Rich, P. J., & Hodges, C. (2017). Emerging research, practice, and policy on Computational Thinking . Springer. https://doi.org/10.1007/978-3-319-52691-1

Rich, P. J., Leatham, K. R., & Wright, G. A. (2013). Convergent cognition. Instructional Science , 41(2), 431–453. https://doi.org/10.1007/s11251-012-9240-7

Rich, K. M., Yadav, A., & Schwarz, C. V. (2019). Computational thinking, Mathematics, and Science: Elementary teachers’ perspectives on integration. Journal of Technology and Teacher Education , 27(2), 165–205

Rodríguez-Martínez, J. A., González-Calero, J. A., & Sáez-López, J. M. (2020). Computational thinking and mathematics using Scratch: an experiment with sixth-grade students. Interactive Learning Environments , 28(3), 316–327. https://doi.org/10.1080/10494820.2019.1612448

Schanzer, E. T. (2015). Algebraic functions, computer programming, and the challenge of transfer (Doctoral dissertation). Retrieved from http://nrs.harvard.edu/urn-3:HUL.InstRepos:16461037

Sadik, O., Ottenbreit-Leftwich, A., & Nadiruzzaman, H. (2017). Computational thinking conceptions and misconceptions: Progression of preservice teacher thinking during computer science lesson planning. In P. J. Rich, & C. Hodges (Eds.), Computational Thinking: Research and Practice (pp. 221–238). Springer. https://doi.org/10.1007/978-3-319-52691-1_14

Sáez-López, J. M., Román-González, M., & Vázquez-Cano, E. (2016). Visual programming languages integrated across the curriculum in elementary school: A two year case study using “Scratch” in five schools. Computers & Education , 97, 129–141. https://doi.org/10.1016/j.compedu.2016.03.003

Saritepeci, M. (2020). Developing computational thinking skills of high school students: Design-based learning activities and programming tasks. The Asia-Pacific Education Researcher, 29(1), 35–54. https://doi.org/10.1007/s40299-019-00480-2

Scherer, R., Siddiq, F., & Sánchez Viveros, B. (2020). A meta-analysis of teaching and learning computer programming: Effective instructional approaches and conditions. Computers in Human Behavior , 109, 1–18. https://doi.org/10.1016/j.chb.2020.106349

Seiter, L. (2015). Using SOLO to classify the programming responses of primary grade students. In Proceedings of the 46th ACM Technical Symposium on Computer Science Education (pp. 540–545). ACM. https://doi.org/10.1145/2676723.2677244

Shute, V. J., Sun, C., & Asbell-Clarke, J. (2017). Demystifying computational thinking. Educational Research Review , 22, 142–158. https://doi.org/10.1016/j.edurev.2017.09.003

Soboleva, E. V., Sabirova, E. G., Babieva, N. S., Sergeeva, M. G., & Torkunova, J. V. (2021). Formation of computational thinking skills using computer games in teaching mathematics. Eurasia Journal of Mathematics, Science and Technology Education, 17 (10), Article em2012. https://doi.org/10.29333/ejmste/11177

Staples, A., Pugach, M. C., & Himes, D. J. (2005). Rethinking the technology integration challenge: Cases from three urban elementary schools. Journal of Research on Technology in Education , 37(3), 285–311. https://doi.org/10.1080/15391523.2005.10782438

Strawhacker, A., & Bers, M. A. (2019). What they learn when they learn coding: Investigating cognitive domains and computer programming knowledge in young children. Educational Technology Research and Development , 67, 541–575. https://doi.org/10.1007/s11423-018-9622-x

Subhi, T. (1999). The impact of LOGO on gifted children’s achievement and creativity. Journal of Computer Assisted Learning , 15(2), 98–108. https://doi.org/10.1046/j.1365-2729.1999.152082.x

Tran, Y. (2019). Computational thinking equity in elementary classrooms: What third-grade students know and can do. Journal of Educational Computing Research , 57(1), 3–31. https://doi.org/10.1177/0735633117743918

Vasconcelos, L., & Kim, C. (2020). Coding in scientific modeling lessons (CS-Model). Educational Technology Research and Development , 68, 1247–1273. https://doi.org/10.1007/s11423-019-09724-w

Weintrop, D., & Wilensky, U. (2017). Comparing block-based and text-based programming in high school computer science classrooms. ACM Transactions on Computing Education , 18(1), 1–25. https://doi.org/10.1145/3089799

Wiedermann, W., & von Eye, A. (2013). Robustness and power of the parametric t test and the nonparametric Wilcoxon test under non-independence of observations. Psychological Test and Assessment Modeling , 55(1), 39–61

Wing, J. M. (2006). Computational thinking. Communications of the ACM , 49(3), 33–35

Wright, G., Rich, P., & Lee, R. (2013). The influence of teaching programming on learning mathematics. Proceedings of Society for Information Technology & Teacher Education International Conference (pp. 4612–4615). New Orleans, Louisiana, United States: Association for the Advancement of Computing in Education. https://www.learntechlib.org/primary/p/48851/


Funding

This study was not funded by any agency.

Author information

Authors and Affiliations

Old Dominion University, Norfolk, Virginia, United States

Tian Luo, Jilian Reynolds & Pauline Salim Muljana


Corresponding author

Correspondence to Tian Luo .

Ethics declarations

Conflict of interest.

The authors declare that they have no conflict of interest.

Compliance with Ethical Standards

This research project has received IRB approval from Old Dominion University.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Luo, T., Reynolds, J. & Muljana, P.S. Elementary students learning computer programming: An investigation of their knowledge retention, motivation, and perceptions. Education Tech Research Dev 70, 783–806 (2022). https://doi.org/10.1007/s11423-022-10112-0


Received : 20 July 2020

Revised : 07 April 2022

Accepted : 14 April 2022

Published : 06 July 2022

Issue Date : June 2022

DOI : https://doi.org/10.1007/s11423-022-10112-0


Keywords

  • Elementary education
  • Computational thinking
  • Computer science education
  • Programming

Using Python for Research

Take your introductory knowledge of Python programming to the next level and learn how to use Python 3 for your research.

Random walks generated using Python 3

Associated Schools

Harvard T.H. Chan School of Public Health


What you'll learn

Python 3 programming basics (a review)

Python tools (e.g., NumPy and SciPy modules) for research applications

How to apply Python research tools in practical settings

Course description

This course bridges the gap between introductory and advanced courses in Python. While there are many excellent introductory Python courses available, most typically do not go deep enough for you to apply your Python skills to research projects. In this course, after first reviewing the basics of Python 3, we learn about tools commonly used in research settings.

Using a combination of a guided introduction and more independent in-depth exploration, you will get to practice your new Python skills with various case studies chosen for their scientific breadth and their coverage of different Python features. This run of the course includes revised assessments and a new module on machine learning.

Course Outline

Python Basics

Review of basic Python 3 language concepts and syntax.

Python Research Tools

Introduction to Python modules commonly used in scientific computation, such as NumPy.
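To make that concrete, here is a minimal sketch, not taken from the course materials, of how NumPy can generate random walks like the ones pictured above; the number of walks, the step count, and the seed are arbitrary choices for illustration.

    # Minimal illustrative sketch (not course material): random walks with NumPy.
    import numpy as np

    rng = np.random.default_rng(seed=0)          # reproducible random generator
    steps = rng.choice([-1, 1], size=(5, 1000))  # 5 walks of 1,000 unit steps each
    walks = steps.cumsum(axis=1)                 # cumulative sums turn steps into positions

    print(walks.shape)    # (5, 1000)
    print(walks[:, -1])   # final position of each walk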

Case Studies

This collection of six case studies from different disciplines provides opportunities to practice Python research skills.

Statistical Learning

Exploration of statistical learning using the scikit-learn library followed by a two-part case study that allows you to further practice your coding skills.
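As a rough sketch of the workflow this module covers (the course's own case studies and data differ, so the dataset and model below are stand-ins), the scikit-learn pattern is to split the data, fit an estimator, and score it on held-out samples.

    # Illustrative scikit-learn workflow; dataset and model are stand-ins,
    # not the course's own two-part case study.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)                 # small built-in dataset
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)          # hold out 30% for evaluation

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)                       # fit on the training split
    print("test accuracy:", model.score(X_test, y_test))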

Instructors

Jukka-Pekka Onnela


You may also like

CS50W

CS50's Web Programming with Python and JavaScript

This course picks up where CS50 leaves off, diving more deeply into the design and implementation of web apps with Python, JavaScript, and SQL using frameworks like Django, React, and Bootstrap.

CS50x

CS50: Introduction to Computer Science

An introduction to the intellectual enterprises of computer science and the art of programming.

CS50L

CS50 for Lawyers

This course is a variant of Harvard University's introduction to computer science, CS50, designed especially for lawyers (and law students).


self-preservation without replication —

Research AI model unexpectedly attempts to modify its own code to extend runtime. Facing time constraints, Sakana's "AI Scientist" attempted to change limits placed by researchers.

Benj Edwards - Aug 14, 2024 8:13 pm UTC

Illustration of a robot generating endless text, controlled by a scientist.

On Tuesday, Tokyo-based AI research firm Sakana AI announced a new AI system called " The AI Scientist " that attempts to conduct scientific research autonomously using AI language models (LLMs) similar to what powers ChatGPT . During testing, Sakana found that its system began unexpectedly attempting to modify its own experiment code to extend the time it had to work on a problem.


"In one run, it edited the code to perform a system call to run itself," wrote the researchers on Sakana AI's blog post. "This led to the script endlessly calling itself. In another case, its experiments took too long to complete, hitting our timeout limit. Instead of making its code run faster, it simply tried to modify its own code to extend the timeout period."

Sakana provided two screenshots of example Python code that the AI model generated for the experiment file that controls how the system operates. The 185-page AI Scientist research paper discusses what they call "the issue of safe code execution" in more depth.

  • A screenshot of example code the AI Scientist wrote to extend its runtime, provided by Sakana AI.

While the AI Scientist's behavior did not pose immediate risks in the controlled research environment, these instances show the importance of not letting an AI system run autonomously in a system that isn't isolated from the world. AI models do not need to be "AGI" or "self-aware" (both hypothetical concepts at the present) to be dangerous if allowed to write and execute code unsupervised. Such systems could break existing critical infrastructure or potentially create malware, even if unintentionally.

Sakana AI addressed safety concerns in its research paper, suggesting that sandboxing the operating environment of the AI Scientist can prevent an AI agent from doing damage. Sandboxing is a security mechanism used to run software in an isolated environment, preventing it from making changes to the broader system:

Safe Code Execution. The current implementation of The AI Scientist has minimal direct sandboxing in the code, leading to several unexpected and sometimes undesirable outcomes if not appropriately guarded against. For example, in one run, The AI Scientist wrote code in the experiment file that initiated a system call to relaunch itself, causing an uncontrolled increase in Python processes and eventually necessitating manual intervention. In another run, The AI Scientist edited the code to save a checkpoint for every update step, which took up nearly a terabyte of storage. In some cases, when The AI Scientist’s experiments exceeded our imposed time limits, it attempted to edit the code to extend the time limit arbitrarily instead of trying to shorten the runtime. While creative, the act of bypassing the experimenter’s imposed constraints has potential implications for AI safety (Lehman et al., 2020). Moreover, The AI Scientist occasionally imported unfamiliar Python libraries, further exacerbating safety concerns. We recommend strict sandboxing when running The AI Scientist, such as containerization, restricted internet access (except for Semantic Scholar), and limitations on storage usage.
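Sakana has not published the details of its sandboxing setup, and full containerization with restricted networking is beyond a short example, but the resource-limit part of that recommendation can be sketched with Python's standard library: the parent process enforces wall-clock, CPU, and memory limits on the generated script from the outside, so editing the script cannot extend them. The file name experiment.py is a placeholder.

    # Illustrative sketch only (POSIX), not Sakana AI's implementation.
    # Limits are enforced by the parent process, so the generated script
    # cannot edit its own timeout away. "experiment.py" is a placeholder.
    import resource
    import subprocess

    def limit_resources():
        # runs in the child just before exec: cap CPU time at 60 s
        # and address space at roughly 1 GiB
        resource.setrlimit(resource.RLIMIT_CPU, (60, 60))
        resource.setrlimit(resource.RLIMIT_AS, (2**30, 2**30))

    try:
        subprocess.run(
            ["python", "experiment.py"],
            preexec_fn=limit_resources,  # apply limits inside the child process
            timeout=300,                 # hard wall-clock limit held by the parent
        )
    except subprocess.TimeoutExpired:
        print("experiment exceeded the wall-clock limit and was terminated")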

Endless scientific slop

Sakana AI developed The AI Scientist in collaboration with researchers from the University of Oxford and the University of British Columbia. It is a wildly ambitious project full of speculation that leans heavily on the hypothetical future capabilities of AI models that don't exist today.

"The AI Scientist automates the entire research lifecycle," Sakana claims. "From generating novel research ideas, writing any necessary code, and executing experiments, to summarizing experimental results, visualizing them, and presenting its findings in a full scientific manuscript."


According to this block diagram created by Sakana AI, "The AI Scientist" starts by "brainstorming" and assessing the originality of ideas. It then edits a codebase using the latest in automated code generation to implement new algorithms. After running experiments and gathering numerical and visual data, the Scientist crafts a report to explain the findings. Finally, it generates an automated peer review based on machine-learning standards to refine the project and guide future ideas.
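Sakana's actual implementation is far more involved, but the flow described above can be outlined schematically; every helper in the sketch below is a hypothetical placeholder for an LLM-driven stage, not a real API.

    # Schematic outline of the described pipeline; all helpers are
    # hypothetical placeholders, not Sakana AI's code.
    def ai_scientist_pipeline(brainstorm, check_novelty, write_code,
                              run_experiments, write_report, review):
        idea = brainstorm()                  # generate a candidate research idea
        if not check_novelty(idea):          # assess originality before spending compute
            return None
        code = write_code(idea)              # edit a codebase to implement the idea
        results = run_experiments(code)      # gather numerical and visual data
        paper = write_report(idea, results)  # draft a manuscript explaining the findings
        feedback = review(paper)             # automated "peer review" to guide refinement
        return paper, feedback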

Critics on Hacker News , an online forum known for its tech-savvy community, have raised concerns about The AI Scientist and question if current AI models can perform true scientific discovery. While the discussions there are informal and not a substitute for formal peer review, they provide insights that are useful in light of the magnitude of Sakana's unverified claims.

"As a scientist in academic research, I can only see this as a bad thing," wrote a Hacker News commenter named zipy124. "All papers are based on the reviewers trust in the authors that their data is what they say it is, and the code they submit does what it says it does. Allowing an AI agent to automate code, data or analysis, necessitates that a human must thoroughly check it for errors ... this takes as long or longer than the initial creation itself, and only takes longer if you were not the one to write it."

Critics also worry that widespread use of such systems could lead to a flood of low-quality submissions, overwhelming journal editors and reviewers—the scientific equivalent of AI slop . "This seems like it will merely encourage academic spam," added zipy124. "Which already wastes valuable time for the volunteer (unpaid) reviewers, editors and chairs."

And that brings up another point—the quality of AI Scientist's output: "The papers that the model seems to have generated are garbage," wrote a Hacker News commenter named JBarrow. "As an editor of a journal, I would likely desk-reject them. As a reviewer, I would reject them. They contain very limited novel knowledge and, as expected, extremely limited citation to associated works."




Title: Automated Design of Agentic Systems

Abstract: Researchers are investing substantial effort in developing powerful general-purpose agents, wherein Foundation Models are used as modules within agentic systems (e.g. Chain-of-Thought, Self-Reflection, Toolformer). However, the history of machine learning teaches us that hand-designed solutions are eventually replaced by learned solutions. We formulate a new research area, Automated Design of Agentic Systems (ADAS), which aims to automatically create powerful agentic system designs, including inventing novel building blocks and/or combining them in new ways. We further demonstrate that there is an unexplored yet promising approach within ADAS where agents can be defined in code and new agents can be automatically discovered by a meta agent programming ever better ones in code. Given that programming languages are Turing Complete, this approach theoretically enables the learning of any possible agentic system: including novel prompts, tool use, control flows, and combinations thereof. We present a simple yet effective algorithm named Meta Agent Search to demonstrate this idea, where a meta agent iteratively programs interesting new agents based on an ever-growing archive of previous discoveries. Through extensive experiments across multiple domains including coding, science, and math, we show that our algorithm can progressively invent agents with novel designs that greatly outperform state-of-the-art hand-designed agents. Importantly, we consistently observe the surprising result that agents invented by Meta Agent Search maintain superior performance even when transferred across domains and models, demonstrating their robustness and generality. Provided we develop it safely, our work illustrates the potential of an exciting new research direction toward automatically designing ever-more powerful agentic systems to benefit humanity.
Subjects: Artificial Intelligence (cs.AI)
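The authors' code is not reproduced here, but the archive-driven loop the abstract describes can be sketched schematically; llm_propose_agent and evaluate below are hypothetical stand-ins for the meta agent (a foundation model that writes agent code) and the benchmark tasks.

    # Schematic sketch of the Meta Agent Search loop as described in the
    # abstract; llm_propose_agent() and evaluate() are hypothetical stand-ins.
    def meta_agent_search(n_iterations, llm_propose_agent, evaluate):
        archive = []  # ever-growing record of discovered agents and their scores
        for _ in range(n_iterations):
            # the meta agent conditions on all prior discoveries when
            # programming a new candidate agent in code
            candidate_code = llm_propose_agent(archive)
            score = evaluate(candidate_code)  # run the candidate on the task suite
            archive.append({"code": candidate_code, "score": score})
        return max(archive, key=lambda entry: entry["score"])  # best agent found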


MIT News | Massachusetts Institute of Technology


Arvind, longtime MIT professor and prolific computer scientist, dies at 77


Arvind sits in chair for portrait


Arvind Mithal, the Charles W. and Jennifer C. Johnson Professor in Computer Science and Engineering at MIT, head of the faculty of computer science in the Department of Electrical Engineering and Computer Science (EECS), and a pillar of the MIT community, died on June 17. Arvind, who went by the mononym, was 77 years old.

A prolific researcher who led the Computation Structures Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL), Arvind served on the MIT faculty for nearly five decades.

“He was beloved by countless people across the MIT community and around the world who were inspired by his intellectual brilliance and zest for life,” President Sally Kornbluth wrote in a letter to the MIT community today.

As a scientist, Arvind was well known for important contributions to dataflow computing, which seeks to optimize the flow of data to take advantage of parallelism, achieving faster and more efficient computation.

In the last 25 years, his research interests broadened to include developing techniques and tools for formal modeling, high-level synthesis, and formal verification of complex digital devices like microprocessors and hardware accelerators, as well as memory models and cache coherence protocols for parallel computing architectures and programming languages.

Those who knew Arvind describe him as a rare individual whose interests and expertise ranged from high-level, theoretical formal systems all the way down through languages and compilers to the gates and structures of silicon hardware.

The applications of Arvind’s work are far-reaching, from  reducing the amount of energy and space required by data centers to  streamlining the design of more efficient multicore computer chips .

“Arvind was both a tremendous scholar in the fields of computer architecture and programming languages and a dedicated teacher, who brought systems-level thinking to our students. He was also an exceptional academic leader, often leading changes in curriculum and contributing to the Engineering Council in meaningful and impactful ways. I will greatly miss his sage advice and wisdom,” says Anantha Chandrakasan, chief innovation and strategy officer, dean of engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

“Arvind’s positive energy, together with his hearty laugh, brightened so many people’s lives. He was an enduring source of wise counsel for colleagues and for generations of students. With his deep commitment to academic excellence, he not only transformed research in computer architecture and parallel computing but also brought that commitment to his role as head of the computer science faculty in the EECS department. He left a lasting impact on all of us who had the privilege of working with him,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

Arvind developed an interest in parallel computing while he was a student at the Indian Institute of Technology in Kanpur, from which he received his bachelor’s degree in 1969. He earned a master’s degree and PhD in computer science in 1972 and 1973, respectively, from the University of Minnesota, where he studied operating systems and mathematical models of program behavior. He taught at the University of California at Irvine from 1974 to 1978 before joining the faculty at MIT.

At MIT, Arvind’s group studied parallel computing and declarative programming languages, and he led the development of two parallel computing languages, Id   and pH. He continued his work on these programming languages through the 1990s, publishing the book “Implicit Parallel Programming in pH”   with co-author R.S. Nikhil in 2001, the culmination of more than 20 years of research.

In addition to his research, Arvind was an important academic leader in EECS. He served as head of computer science faculty in the department and played a critical role in helping with the reorganization of EECS after the establishment of the MIT Schwarzman College of Computing.

“Arvind was a force of nature, larger than life in every sense. His relentless positivity, unwavering optimism, boundless generosity, and exceptional strength as a researcher was truly inspiring and left a profound mark on all who had the privilege of knowing him. I feel enormous gratitude for the light he brought into our lives and his fundamental impact on our community,” says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and the director of CSAIL.

His work on dataflow and parallel computing led to the Monsoon project in the late 1980s and early 1990s. Arvind’s group, in collaboration with Motorola, built 16 dataflow computing machines and developed their associated software. One Monsoon dataflow machine is now in the  Computer History Museum in Mountain View, California.

Arvind’s focus shifted in the 1990s when, as he explained in a 2012 interview for the Institute of Electrical and Electronics Engineers (IEEE), funding for research into parallel computing began to dry up.

“Microprocessors were getting so much faster that people thought they didn’t need it,” he recalled.

Instead, he began applying techniques his team had learned and developed for parallel programming to the principled design of digital hardware.

In addition to mentoring students and junior colleagues at MIT, Arvind also advised universities and governments in many countries on research in parallel programming and semiconductor design.

Based on his work on digital hardware design, Arvind founded Sandburst in 2000, a fabless manufacturing company for semiconductor chips. He served as the company’s president for two years before returning to the MIT faculty, while continuing as an advisor. Sandburst was later acquired by Broadcom.

Arvind and his students also developed Bluespec, a programming language designed to automate the design of chips. Building off this work, he co-founded the startup Bluespec, Inc., in 2003, to develop practical tools that help engineers streamline device design.

Over the past decade, he was dedicated to advancing undergraduate education at MIT by bringing modern design tools to courses 6.004 (Computation Structures) and 6.191 (Introduction to Deep Learning), and incorporating Minispec, a programming language that is closely related to Bluespec.

Arvind was honored for these and other contributions to data flow and multithread computing, and the development of tools for the high-level synthesis of hardware, with membership in the National Academy of Engineering in 2008 and the American Academy of Arts and Sciences in 2012. He was also named a distinguished alumnus of IIT Kanpur, his undergraduate alma mater.

“Arvind was more than a pillar of the EECS community and a titan of computer science; he was a beloved colleague and a treasured friend. Those of us with the remarkable good fortune to work and collaborate with Arvind are devastated by his sudden loss. His kindness and joviality were unwavering; his mentorship was thoughtful and well-considered; his guidance was priceless. We will miss Arvind deeply,” says Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing and head of EECS.

Among numerous other awards, including membership in the Indian National Academy of Sciences and fellowship in the Association for Computing Machinery and IEEE, he received the Harry H. Goode Memorial Award from IEEE in 2012, which honors significant contributions to theory or practice in the information processing field.

A humble scientist, Arvind was the first to point out that these achievements were only possible because of his outstanding and brilliant collaborators. Chief among those collaborators were the undergraduate and graduate students he felt fortunate to work with at MIT. He maintained excellent relationships with them both professionally and personally, and valued these relationships more than the work they did together, according to family members.

In summing up the key to his scientific success, Arvind put it this way in the 2012 IEEE interview: “Really, one has to do what one believes in. I think the level at which most of us work, it is not sustainable if you don’t enjoy it on a day-to-day basis. You can’t work on it just because of the results. You have to work on it because you say, ‘I have to know the answer to this,’” he said.

He is survived by his wife, Gita Singh Mithal, their two sons Divakar ’01 and Prabhakar ’04, their wives Leena and Nisha, and two grandchildren, Maya and Vikram. 


Related links

  • Computer Science and Artificial Intelligence Laboratory (CSAIL)
  • Department of Electrical Engineering and Computer Science

Related Topics

  • Computer science and technology
  • Programming
  • Electronics
  • Computer chips
  • Electrical Engineering & Computer Science (eecs)

Related Articles

Restructuring the MIT Department of Electrical Engineering and Computer Science

Advance boosts efficiency of flash storage in data centers

Device allows a personal computer to process huge graphs

Streamlining chip design


Neuroscience Institute

Minor in Neural Computation

Neural computation is a scientific enterprise to understand the neural basis of intelligent behaviors from a computational perspective. Study of neural computation includes, among others, decoding neural activities using statistical and machine learning techniques, and developing computational theories and neural models of perception, cognition, motor control, decision-making and learning. The neural computation minor allows students to learn about the brain from multiple perspectives, and to acquire the necessary background for graduate study in neural computation. Students enrolled in the minor will be exposed to, and hopefully participate in, the research effort in neural computation and computational neuroscience at Carnegie Mellon University.

The Minor in Neural Computation is an inter-college minor jointly sponsored by the Dietrich College of Humanities and Social Sciences, the School of Computer Science, and the Mellon College of Science and is coordinated by the Neuroscience Institute.

Goal and Eligibility

The neural computation minor is open to students in any major of any college at Carnegie Mellon. It seeks to attract undergraduate students from computer science, psychology, engineering, biology, statistics, physics, and mathematics from DC, CIT, MCS, and SCS. The primary objective of the minor is to encourage students in biology and psychology to take computer science, engineering and mathematics courses on the one hand, and to encourage students in computer science, engineering, statistics and physics to take courses in neuroscience and psychology on the other, and to bring students from different disciplines together to form a community. The curriculum and course requirements are designed to maximize the participation of students from diverse academic disciplines. The program seeks to produce students with both basic computational skills and knowledge in cognitive science and neuroscience that are central to computational neuroscience.

Application

Students must apply for admission no later than November 30 of their senior year; an admission decision will usually be made within one month. Students are encouraged to apply as early as possible in their undergraduate careers so that the director of the Neural Computation minor can provide advice on their curriculum, but they should contact the program director at any time, even after the deadline. Applications should include the following:

  • Preferred email address (if different)
  • Your class and College/School at Carnegie Mellon
  • Semester you intend to graduate
  • All (currently) declared majors and minors
  • Statement of purpose (maximum 1 page) – Describes why you want to take this minor and how it fits into your career goals
  • Proposed schedule of required courses for the Minor (this is your plan, NOT a commitment)
  • Research projects you might be interested in

The Minor in Neural Computation will require a total of five courses: four courses drawn from the four core areas (A: neural computation, B: neuroscience, C: cognitive psychology, D: intelligent system analysis), one from each area, and one additional depth elective chosen from one of the core areas that is outside the student’s major. The depth elective can be replaced by a one-year research project in computational neuroscience. No more than two courses can be double counted toward the student’s major or other minors. However, courses taken for general education requirements of the student’s degree are not considered to be double counted. A course taken to satisfy one core area cannot be used to satisfy the course requirement for another core area. The following listing presents a set of current possible courses in each area. Substitution is possible but requires approval.

A. Neural Computation:

  • 15-386 Neural Computation (9 units)
  • 15-883 Computational models of neural systems (12 units)
  • 85-419 Introduction to parallel distributed processing (9 units)
  • 86-375/15-387 Computational Perception (9 units)
  • Pitt MATH 1800 Introduction to mathematical neuroscience (9 units)

B. Neuroscience

  •  03-362 Cellular neuroscience (9 units)
  • 03-363 Systems neuroscience (9 units)
  • 85-765 Cognitive neuroscience (9 units)
  • Pitt NROSCI 1000 Introduction to neuroscience (9 units)
  • 18-690/42-630 Introduction to Neuroscience for Engineers (12 units)

C. Cognitive Psychology

  • 85-211 Cognitive psychology (9 units)
  • 85-213 Human information processing and artificial intelligence (9 units)
  • 85-412 Cognitive modeling (9 units)
  • 85-426 Learning in humans and machines (9 units)

D. Intelligent System Analysis

  • 10-601 Machine learning (9 units)
  • 15-381 Artificial intelligence (9 units)
  • 15-486 Artificial neural networks (9 units)
  • 15-494 Cognitive robotics (9 units)
  • 16-299 Introduction to feedback control systems (9 units)
  • 16-311 Introduction to Robotics (9 units)
  • 16-385 Computer vision (9 units)
  • 18-290 Signals and systems (9 units)
  • 24-352 Dynamic systems and control (9 units)
  • 36-225 Introduction to probability and statistics (9 units)
  • 36-247 Statistics for laboratory sciences (9 units)
  • 36-401 Regression (9 units)
  • 36-410 Introduction to Probability Models (9 units)
  • 36-746 Statistical methods for neuroscience (9 units)
  • 42-632 Neural Signal Processing (12 units)
  • 86-631/42-631 Neural Data Analysis (9 units)

Prerequisites

The required courses in the above four core areas require a number of basic prerequisites including basic programming skills at the level of 15-110 (introductory/intermediate programming) and basic mathematical skills at the level of 21-122 (Integration, differential equations and approximation) or their equivalents. Area B Biology courses require, at minimum, 03-121 (Modern Biology). Students might skip the prerequisites if they have the permission of the instructor to take the required courses.

Prerequisite courses are typically taken to satisfy the students’ major or other requirements. In the event that these basic skill courses are not part of the prerequisite or required courses of a student’s major, one of them can potentially count toward the five required courses (e.g. the depth elective), conditional on approval.

Research Requirements (Optional)

The minor itself does not require a research project. The student, however, may replace the depth elective with a year-long research project. In special circumstances, a research project can also be used to replace one of the five courses, as long as (1) the project is not required by the student’s major or other minor, (2) the student has taken a course in each of the four core areas (not necessarily for the purpose of satisfying this minor’s requirements), and (3) has taken at least three courses in this curriculum not counted toward the student’s major or other minors. Students interested in participating in the research project should contact any faculty engaged in computational neuroscience or neural computation research at Carnegie Mellon or at the University of Pittsburgh. A useful webpage that provides a listing of faculty in neural computation and computational neuroscience is http://www.cnbc.cmu.edu/cnbc-directory/. The director of the Minor program will be happy to discuss students’ research interests with them and direct them to the appropriate faculty.

Fellowship Opportunities

Year-Long Undergraduate Fellowship in Neural Computation

The Neuroscience Institute currently provides a yearlong fellowship in computational neuroscience to Carnegie Mellon undergraduate students to carry out mentored research in neural computation. The fellowship has course requirements similar to the requirements of the minor. Students do not apply to the fellowship program directly. They have to be nominated by the faculty members who are willing to mentor them. Therefore, students interested in the full-year fellowship program should contact and discuss research opportunities with any NI training faculty at Carnegie Mellon or University of Pittsburgh working in the area of computational neuroscience and ask for their nomination.

Summer Undergraduate Research Program in Neural Computation

Undergraduates interested in receiving research training in computational neuroscience are encouraged to apply to an NIH-sponsored summer program at the Neuroscience Institute . Starting in late May or early June each year, a select group of talented undergraduates will embark on a 10-week residential program that provides intensive, mentored research experiences in computational and theoretical neuroscience.


International Master’s Award of Excellence (IMAE)

Award type: Scholarships

Award description:

Effective May 1, 2019 (spring 2019 admissions cycle), the International Master’s Award of Excellence, valued at $2,500 per term for a maximum of five full-time terms within the allowable program time limits (6 terms), will be awarded to eligible international master’s students normally entering a research-based graduate program at the University of Waterloo. Faculties will nominate eligible students based on the Faculty’s award allocation. Students will be selected based on academic excellence as demonstrated through their application for admission to the graduate program.

Value description:

Award valued at $2,500 per term for a maximum of five terms.

  • International students who are registered full time and assessed international tuition fees. 
  • Normally given to students in research-based programs (thesis or major research paper). 
  • Will normally only be given to students entering the first term of their program (term 1.0).
  • Students must demonstrate academic excellence through criteria established by the Faculty.
  • Students must meet the academic progress requirements of their program and not have outstanding probationary admission requirements. 
  • This award could be in addition to other internal or external scholarships (e.g., UW Graduate Scholarship, OGS,  etc).   Note: scholarships are different than sponsorships – see next bullet.
  • Normally, students should not be concurrently receiving foreign government or agency sponsorship (e.g., China Scholarship Council, Libyan sponsorship, etc) or be fully or partially self-funded in excess of the Faculty minimum levels of support.  
  • Students grandparented under the existing IMSA program cannot be nominated for an IMAE; however, a student previously awarded an IMSA for a previously completed master’s degree can be nominated for an IMAE.
  • Students will be automatically considered for this award based on their application for admission. Departments and/or Faculty will define their own internal process by which they select recipients based on the eligibility criteria and allocation.
  • Faculties may impose stricter eligibility criteria as appropriate.

For information regarding international funding programs, please visit the Graduate Studies   International Funding webpage .

Level: Masters
Program: Open to any program
Citizenship: International/study permit student
Selection process: Student selected automatically by Faculty/Department
Term: Winter, Spring, Fall

Contact person:

Department Graduate Co-ordinator



COMMENTS

  1. Computer programming News, Research and Analysis

    Dr. Chao Mbogho, Kenya Methodist University. Computer programming is best learned through practice, but students in developing economies don't always have access to desktop or laptop computers ...

  2. Science of Computer Programming

    Methods of Software Design: Techniques and Applications. Science of Computer Programming is dedicated to the distribution, via publication of papers and software, of research results in the areas of software systems development, use and maintenance, including the software aspects of hardware design. The journal has a wide scope ranging from the many facets of methodological foundations to the ...

  3. Computer Programming News -- ScienceDaily

    Computer Programming Research. Read current computer science articles on everything from computer programs to detect cancer genes and control vehicle maintenance to embedded software.

  4. Analysis of Students' learning of computer programming in a computer

    Previous research shows that many students find it difficult to learn computer programming. To learn computer programming includes both gaining theoretical understanding and learning to develop programmes in practice. To this end, teachers commonly design programming exercises for the students in the computer laboratory.

  5. A meta-analysis of teaching and learning computer programming

    1.1. Anchoring computer programming in the concept of computational thinking. Computer programming is defined as the "process of developing and implementing various sets of instructions to enable a computer to perform a certain task, solve problems, and provide human interactivity" (Balanskat & Engelhardt, 2015, p. 7). Thus, in addition to having knowledge of programming languages ...

  6. Full article: Individual differences in computer programming: a

    The demand for programmers has grown exponentially in recent years, making programming an indispensable skill. However, the complex nature of programming poses various challenges for novice programmers, leading to high dropout rates in programming courses. The recognition of individual differences, encompassing distinct neurocognitive profiles ...

  7. Comparing learners' knowledge, behaviors, and attitudes between two

    In computer programming research, MLA can collect and analyze multimodal data (e.g., audio/video recording data, click-stream recording data, facial expressions, movement and gesture, and eye tracking, etc.) to reveal learners' coordination of behavioral, cognitive, metacognitive, and social activities of programming (e.g., Wiltshire et al ...

  8. Frontiers

    This research proves that the innovative teaching method that combines peer assessment, block-based programming, and interactive instruction platform does increase programming learning outcomes. Thus, the teaching method designed by this research has 2-fold benefits: improving learning performance and developing college-wide programming education.

  9. Effects of Computer Programming on Cognitive Outcomes: A Meta-Analysis

    Although claims regarding the cognitive benefits of computer programming have been made, results from existing empirical studies are conflicting. To make a more reliable conclusion on this issue, a meta-analysis was performed to synthesize existing research concerning the effects of computer programming on cognitive outcomes.

  10. Learning programming practice and programming theory in the computer

    II . Related work. Some studies from computing education research have investigated how different students approach their learning to program. In a statistical study, Umapathy, Ritzhaupt, and Xu (Citation 2020) found that undergraduate students 'most favorably employ a deep strategy approach for learning computer science' (662), while they at the same time are driven by surface motive to ...

  11. Some Evidence on the Cognitive Benefits of Learning to Code

    Introduction. Computer coding, an activity that involves the creation, modification, and implementation of computer code and exposes students to computational thinking, is an integral part of today's education in science, technology, engineering, and mathematics (STEM) (Grover and Pea, 2013). As technology advances, coding is becoming a necessary process and a much-needed skill to solve ...

  12. A Study of First‐Year Students' Attitudes toward Programming in the

    Within programming in the higher education context, students learn how to break down a problem into smaller parts and design a step-by-step procedure for creating a working program by using a language that the computer understands. These processes related to decomposition and algorithm design in computational thinking give students new ...
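
    As a concrete illustration of the decomposition and step-by-step design described above, the sketch below breaks a small task into separate functions and then composes them into one procedure. It is a minimal illustrative example, not taken from the cited study; the task and the function names are assumptions chosen for clarity.

        # A minimal sketch of decomposition and algorithm design (illustrative only).
        # The overall task ("report the average word length in a sentence") is
        # split into smaller parts, each solved by its own function.

        def split_into_words(sentence: str) -> list[str]:
            """Step 1: break the input into smaller pieces (words)."""
            return sentence.split()

        def word_lengths(words: list[str]) -> list[int]:
            """Step 2: solve each small piece (measure one word at a time)."""
            return [len(word) for word in words]

        def average(values: list[int]) -> float:
            """Step 3: combine the partial results into a single answer."""
            return sum(values) / len(values) if values else 0.0

        def average_word_length(sentence: str) -> float:
            """The step-by-step procedure assembled from the parts above."""
            return average(word_lengths(split_into_words(sentence)))

        if __name__ == "__main__":
            print(average_word_length("programming rewards careful decomposition"))

    Each helper solves one sub-problem, and the final function simply chains them together, which mirrors the decomposition-then-algorithm pattern the snippet describes.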

  13. The Effects of Computer Programming on Problem-Solving Skills and

    This study examined the effects of systematic computer programming and problem-solving instruction on problem-solving skills and attitudes. Two hundred seventy-two elementary and junior high students were exposed to one of four computer programming and problem-solving treatments for a period of twenty weeks.

  14. Research trends on learning computer programming with program animation

    In the process, researchers have started employing media tools to reduce programming difficulties and motivate learners to approach programming problems. One widely used tool is program animation, an instructional medium that incorporates animated characters. However, little is known about the research trends in this field of study.

  15. Programming and Computer Software (journal home)

    Programming and Computer Software is a peer-reviewed journal addressing issues across all areas of computer science, focused on the creation, development, and maintenance of software applications through programming. It encompasses various aspects of software development, ranging from writing code to designing algorithms, testing, debugging, and ...

  16. Programming Languages

    The programming languages research group at Cornell includes eight faculty and over two dozen Ph.D. students. We are proud of both our breadth and depth in this core discipline. Cornell has been known from the beginning for its research in programming languages. We have made foundational contributions to type theory, automated theorem proving ...

  17. Elementary Students Learning Computer Programming: an ...

    Students need to learn and practice computational thinking and skills throughout PreK-12 to be better prepared for entering college and future careers. We designed a math-infused computer science course for third to fifth graders to learn programming. This study aims to investigate the impact of the course on students' knowledge acquisition of mathematical and computational concepts ...

  18. Using Python for Research

    Using a combination of a guided introduction and more independent in-depth exploration, you will get to practice your new Python skills with various case studies chosen for their scientific breadth and their coverage of different Python features. This run of the course includes revised assessments and a new module on machine learning.

  19. What Is Programming? And How To Get Started

    At its most basic, programming tells a computer what to do. First, a programmer writes code: a set of letters, numbers, and other characters. Next, a compiler or interpreter translates that code into instructions the computer can understand. Then, the computer executes those instructions, performing a task or series of tasks.
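
    To make that write/translate/execute cycle concrete, the short sketch below uses Python as the example language (an assumption; the article names no particular language). Python's built-in compile() produces bytecode for its virtual machine rather than native machine code, but the overall idea is the same: text goes in, an executable form comes out, and the machine then runs it.

        # 1) The programmer writes source code as plain text.
        source_code = 'print("2 + 3 =", 2 + 3)'

        # 2) A translator turns the text into a form the machine can run.
        #    Here, Python's compile() produces bytecode for its virtual machine.
        bytecode = compile(source_code, filename="<example>", mode="exec")

        # 3) The computer executes the translated program, performing the task.
        exec(bytecode)  # prints: 2 + 3 = 5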

  20. What Is a Computer Programmer?

    What Does a Computer Programmer Do? Computer programmers use programming languages to write, revise, test, and update code. This code allows computers, software, and applications to carry out tasks. Because technology pervades diverse sectors, computer programmers also work across industries. After the tech industry, finance, insurance, and manufacturing entities hire the most computer ...

  21. Computer Programming Careers: 2024 Guide to Career Paths ...

    Computer programmers typically earn an annual average salary of $61,731. It is anticipated that there will be an average of 6,700 job opportunities for computer programmers annually between 2022 and 2032. With experience, programmers advance to handling intricate projects, leading teams, and significantly impacting the company's success, which ...

  22. Research AI model unexpectedly attempts to modify its own code to

    On Tuesday, Tokyo-based AI research firm Sakana AI announced a new AI system called "The AI Scientist" that attempts to conduct scientific research autonomously using AI language models (LLMs ...

  23. The Demands and Requirements of Computer Programming: A Literature

    The Effects of Learning a Computer Programming Language on the Logical Reasoning of School Children, paper presented at the annual meeting of the American Educational Research Association, Los Angeles, California, 1981.

  24. [2408.08435] Automated Design of Agentic Systems

    Researchers are investing substantial effort in developing powerful general-purpose agents, wherein Foundation Models are used as modules within agentic systems (e.g. Chain-of-Thought, Self-Reflection, Toolformer). However, the history of machine learning teaches us that hand-designed solutions are eventually replaced by learned solutions. We formulate a new research area, Automated Design of ...

  25. Toward a code-breaking quantum computer

    The paper's lead author is Seyoon Ragavan, a graduate student in the MIT Department of Electrical Engineering and Computer Science. The research will be presented at the 2024 International ...

  26. Arvind, longtime MIT professor and prolific computer scientist, dies at

    He continued his work on these programming languages through the 1990s, publishing the book "Implicit Parallel Programming in pH" with co-author R.S. Nikhil in 2001, the culmination of more than 20 years of research. In addition to his research, Arvind was an important academic leader in EECS.

  27. What Is Computer Engineering? Career Guide + FAQ

    However, computer scientists focus more on theorizing and developing ways to use software to solve real-world problems. They must be able to work with programming languages such as Python and Java. Computer engineers typically work more closely with hardware and computer systems as a whole. Computer engineering students gain programming skills ...

  28. Minor in Neural Computation

    The Minor in Neural Computation is an inter-college minor jointly sponsored by the School of Computer Science, the Mellon College of Science, and the College of Humanities and Social Sciences, and is coordinated. ... students interested in the full-year fellowship program should contact and discuss research opportunities with any NI training ...

  29. Institute for Biochemical and Psychological Study of Individual

    The University of Tulsa has announced the acquisition of Fab Lab Tulsa, which provides access to digital fabrication tools and resources throughout the community through membership and programming. The move is part of TU's ongoing efforts to promote innovation and aligns with the university's global reputation in engineering, computer ...

  30. International Master's Award of Excellence (IMAE)

    Award type: Scholarships. Award description: Effective May 1, 2019 (spring 2019 admissions cycle), the International Master's Award of Excellence, valued at $2,500 per term for a maximum of five full-time terms within the allowable program time limits (6 terms), will be awarded to eligible international master's students normally entering a research-based graduate program at the University ...