Building operational excellence in higher education

When colleges and universities think about building academic enterprises for the 21st century, they often overlook one of the most critical aspects: the back-office structures needed to run complex organizations. By failing to modernize and streamline administrative functions (including HR, finance, and facilities), universities put themselves at a serious disadvantage, making it harder to fulfill their academic missions.

Take faculty recruitment and retention. The perceived administrative burden is often a major factor in the attractiveness of an academic job offer. In 2018, the administrative burden on productive research faculty was measured at 44 percent of their workload, up from 42 percent in 2012 (Sandra Schneider, “Results of the 2018 FDP faculty workload survey: Input for optimizing time on active research,” Federal Demonstration Partnership, January 2019, thefdp.org). For faculty members, the prospect of moving to an institution with a lighter administrative load is a strong selling point, since “institutional procedures and red tape” ranks among the top five sources of stress (Ellen Bara Stolzenberg et al., “Undergraduate teaching faculty: The HERI faculty survey 2016-2017,” Higher Education Research Institute, February 2019, heri.ucla.edu). Something similar happens with students: studies show that the need to jump through administrative hoops is an important driver of “summer melt,” in which students admitted to a school fail to matriculate for the upcoming year (Emily Arnim, “Why summer melt happens—and how to freeze it,” EAB, April 30, 2019, eab.com).

Outdated and ineffective administrative operations can have even more direct effects on an institution’s reputation. Financial fraud, ineffective or unfair personnel practices, and grants lost as a result of poor research administration can all lead to negative press reports—or worse. The 115-year-old College of New Rochelle, in New Rochelle, NY, recently closed its doors after fraud decimated its finances and reputation, leaving no realistic chance of recovery (Dave Zucker, “As College of New Rochelle closes, Mercy steps in to take on displaced students,” Westchester Magazine, March 5, 2019, westchestermagazine.com).

A vast challenge

In our experience, most colleges and universities that set out to improve their administrative operations fail to meet their stated goals and in some cases take a step backward. There are several reasons, many relating to the unique constraints of academic institutions:

  • Starting from the top down. Universities are essentially confederations of departments and functions, each with its own internal organization and power structure. Many new administrative plans are run centrally, without gathering input and alignment from these constituencies, and so fail to gain traction.
  • Putting the answer before the problem. Another common pitfall is starting with a solution and searching for a problem it can solve, rather than doing the work needed to gain a deep understanding of the problem on the ground and building a solution collaboratively with stakeholders.
  • Focusing on dollars rather than sense. Other change programs flounder because they focus primarily on cost savings rather than on improving service levels or the experience of the administrative staff.

A failed program is more than just a loss of time and money. By raising the expectations of faculty and staff and then failing to follow through on them, such failures stoke resentment and make it harder for future programs to gain traction.

While improving administrative operations remains a vast challenge for many universities, a few are taking a new approach—and posting meaningful results. In some cases, institutions that transformed their back offices have managed to halve the time needed to hire new staff or have reduced wasteful procurement transactions by more than 50 percent.

A new approach for a university in gridlock: A case study

A major public research university knew it had reached the breaking point. Its outdated administrative operations were holding it back on several fronts. Slow response times, red tape, and time-consuming administrative tasks had generated resentment and frustration among faculty. Some had already left for other universities, citing a lack of support for research administration, an inability to hire critical lab staff in less than six months, and difficulty keeping labs stocked with supplies.

Part of the problem was that no one seemed to be accountable. The schools and other units blamed the central administration. Central staff, meanwhile, thought the schools and units weren’t doing their part. In this stalemate, nothing got fixed.

Things only got worse when university leaders decided to create a shared-services effort intended to deliver multimillion-dollar savings. When frustrated deans and faculty heard about the effort, they made it clear that any plan conceived without their input would not have their support. With no resolution in sight, and core functions such as hiring and procurement in jeopardy, university leaders realized they needed to find a new approach.


Rethinking administrative operations from the ground up

The leadership realized that instead of once again creating a solution they would then impose on a diverse system, they had to understand the problems from the point of view of the various stakeholders and then design targeted fixes. With that fundamentally different perspective, the change team created a carefully thought out road map and began the hard work of redesigning systems and processes:

  • The first step was a listening tour to hear directly from faculty and staff on the problems they encountered. What were their pain points? Where exactly were the bottlenecks? The team got unvarnished feedback. From the director of a research center: “We had to hire temporary employees just to complete our normal tasks because the hiring time is so slow.” From a dean: “The university felt like it was in gridlock.”
  • Next, the change team convened a group of design teams, made up of members from both the schools and the central staff, to break down the problems, reimagine the processes from a blank sheet of paper, and implement changes.
  • The team started the redesign process with two specific initial goals: reducing the time needed to hire administrative staff from an average of more than 80 days to 45 days and reducing the number of procurement vouchers—tens of thousands of them—that wasted thousands of hours of staff time and failed to capture the right data.
  • As the team worked through each service, it followed a fast, structured process, designing new solutions in about two months, piloting them for two to three months, and then rolling them out to the campus in waves of schools and units over the following six months.

The results were unequivocal: time to hire fell by 46 percent for nonfaculty positions, and improper procurement (measured by the volume of unnecessary vouchers) fell by 57 percent (Exhibit 1).

So far, the improvement in hiring time has had significant downstream effects. For example, 96 percent of hiring managers report acceptances by their first-choice candidates. In the past, many first choices had dropped out of the process to pursue other opportunities as their names sat in the queue during the months-long hiring process. Just as important, the change team created a community of faculty, staff, and academic leaders who fully embraced the new ways of working. Over the course of the redesign effort, the team involved more than 400 staff and faculty, held more than 50 listening sessions, convened more than 30 design workshops, and generated a list of dozens of initiatives to pursue in the future (Exhibit 2).

The team is currently pursuing transformational initiatives in research administration, travel, student-worker support, and academic personnel. Its ambitions are equally transformative. For example, its goal in research administration is to cut the time to set up awards in half. This collaborative, bottom-up process led many staff members to tell the leadership that “this feels different” from previous change efforts.

Understanding the elements of success

A few key elements helped make a big difference.

Involve faculty and staff as true collaborators. Don’t drive the change from the central administration down to schools and units. Instead, raise the quality and adoption rate of operational solutions by converting faculty and staff from sideline observers into true collaborators. Start with listening to end users, understanding the obstacles they face, and jointly identifying where and how the current system fails them. In that way, a university can bypass the tendency to consider overarching organizational solutions and focus on solving the actual problems at hand.

Have central administrators work side by side with employees of schools or units. When it comes to creating solutions, a partnership between the central administration and the faculty and staff of schools or units is even more critical: representatives from schools and units should develop solutions together with central staff. Besides gaining a deeper understanding of the problems by including these stakeholders, leaders can begin to convert possible naysayers among faculty and staff into allies.

Focus on the university’s mission. While efficiencies and cost savings are important, they are notoriously hard to capture and reinvest. In addition, any sense that the real goal is to cut costs is unlikely to build internal allies among faculty and staff who already feel undersupported. Instead, leaders should communicate a message of improved service levels that can help further the university’s academic and community-impact missions.


Show an impact early. There’s a saying that nothing succeeds like success. By starting with one or two services that can be improved quickly and showing an impact within six months, leaders can build belief in the effort. Winning over skeptical constituents will make the rest of the effort move forward more easily.

Invest in a continuous-improvement team. Staff volunteers committing many hours a week on top of their day jobs can’t sustain changes and expand into other areas of the university entirely by themselves. Creating a small team dedicated to executing transformation initiatives across administrative functions can help accelerate and sustain the momentum for change across the university. A high-functioning team will have a catalog of services (such as training, facilitation, and full-on process redesign) that helps it tailor its support to the specific details of a given problem.

Focus on a transformational rather than incremental impact. Redesigning administrative operations across a university is a big effort. Leaders should take full advantage of the opportunity by thinking about a total transformation, not incremental change. Typical efforts aim for a 20 percent improvement. When leaders set their sights on improvements of more than 50 percent, they can free themselves from the status quo. That magnitude of change will force the change team to start with a truly blank slate and to reimagine a dramatically improved future state.

Taking an important step in transforming a university

A final insight: the work this university did enabled leaders of the administrative functions to shift their sights beyond fighting fires to the truly strategic parts of their work. The progress on hiring, for example, helped surface the challenges the university faces in attracting and retaining talent—particularly underrepresented minority faculty. Furthermore, conversations about improving the performance of the administrative functions highlighted the aspirations of leaders and staff to use machine learning, automation, and other advanced techniques in their work.

Although administrative operations are often overlooked, efficient and effective ones can lead to much broader changes. When universities can hire the high-potential candidates they seek, eliminate wasted time of faculty and staff, and unlock the power of data, they can catapult ahead in their ability to meet their educational and research missions.


Suhrid Gajendragadkar is a senior partner in McKinsey’s Washington, DC, office, where Ted Rounsaville is an associate partner and Jason Wright is a partner; Duwain Pinder is a consultant in the New Jersey office.


Operations research and higher education administration

Journal of Educational Administration, Vol. 31 No. 1 (January 1993), ISSN 0957-8234

Presents a selective review of the application of Operations Research (OR) in higher education administration. Identifies eight important types of operational management problems in higher education and discusses the use of OR methods to deal with these problems. Seeks to make administrators in higher education aware of the great potential of OR to assist them in making decisions, and to show the OR community that higher education is an area where plentiful opportunities exist for OR applications.

  • Decision making
  • Educational administration
  • Higher education
  • Operations management
  • Operational research

Cheng, T.C.E. (1993), "Operations Research and Higher Education Administration", Journal of Educational Administration, Vol. 31 No. 1. https://doi.org/10.1108/09578239310024737



Introduction to Operations Research, 11th Edition


For over four decades, Introduction to Operations Research has been the classic text on operations research. While building on the classic strengths of the text, the author continues to find new ways to make it current and relevant to students, in part by incorporating a wealth of state-of-the-art, user-friendly software and more coverage of business applications than ever before. When the first co-author received the prestigious Expository Writing Award from INFORMS for a recent edition, the award citation described the reasons for the book’s great success as follows: “Two features account for this success. First, the editions have been outstanding from students’ points of view due to excellent motivation, clear and intuitive explanations, good examples of professional practice, excellent organization of material, very useful supporting software, and appropriate but not excessive mathematics. Second, the editions have been attractive from instructors’ points of view because they repeatedly infuse state-of-the-art material with remarkable lucidity and plain language.”

1) Introduction

2) Overview of How Operations Research and Analytics Professionals Analyze Problems

3) Introduction to Linear Programming

4) Solving Linear Programming Problems: The Simplex Method

5) The Theory of the Simplex Method

6) Duality Theory

7) Linear Programming under Uncertainty

8) Other Algorithms for Linear Programming

9) The Transportation and Assignment Problems

10) Network Optimization Models

11) Dynamic Programming

12) Integer Programming

13) Nonlinear Programming

14) Metaheuristics

15) Game Theory

16) Decision Analysis

17) Queueing Theory

18) Inventory Theory

19) Markov Decision Processes

20) Simulation

Appendix 1 - Documentation for the OR Courseware

Appendix 2 - Convexity

Appendix 3 - Classical Optimization Methods

Appendix 4 - Matrices and Matrix Operations

Appendix 5 - Table for a Normal Distribution

Artificial intelligence in higher education: the state of the field

  • Helen Crompton (ORCID: orcid.org/0000-0002-1775-8219)
  • Diane Burke

International Journal of Educational Technology in Higher Education, volume 20, article 22 (2023)


This systematic review provides unique findings with an up-to-date examination of artificial intelligence (AI) in higher education (HE) from 2016 to 2022. Using PRISMA principles and protocol, 138 articles were identified for full examination. Using a priori and grounded coding, the data from the 138 articles were extracted, analyzed, and coded. The findings show that in 2021 and 2022, publications rose to nearly two to three times the number of previous years. With this rapid rise in AIEd HE publications, new trends have emerged. Research was conducted in six of the seven continents of the world, and the lead in the number of publications has shifted from the US to China. Another new trend concerns researcher affiliation: prior studies showed a lack of researchers from departments of education, which have now become the most dominant affiliation. Undergraduate students were the most studied population, at 72%. Similar to the findings of other studies, language learning was the most common subject domain, including writing, reading, and vocabulary acquisition. In examining whom the AIEd was intended for, 72% of the studies focused on students, 17% on instructors, and 11% on managers. In answering the overarching question of how AIEd was used in HE, grounded coding was used, and five usage codes emerged from the data: (1) Assessment/Evaluation, (2) Predicting, (3) AI Assistant, (4) Intelligent Tutoring System (ITS), and (5) Managing Student Learning. This systematic review also revealed gaps in the literature to be used as a springboard for future researchers, including new tools such as ChatGPT.

Highlights

  • A systematic review examining AIEd in higher education (HE) up to the end of 2022.
  • A switch from the US to China as the country with the most studies published.
  • A two- to threefold increase in studies published in 2021 and 2022 compared with prior years.
  • AIEd was used for: Assessment/Evaluation, Predicting, AI Assistant, Intelligent Tutoring System, and Managing Student Learning.

Introduction

The use of artificial intelligence (AI) in higher education (HE) has risen quickly in the last 5 years (Chu et al., 2022), with a concomitant proliferation of new AI tools available. Scholars (viz., Chen et al., 2020; Crompton et al., 2020, 2021) report on the affordances of AI to both instructors and students in HE. These benefits include the use of AI in HE to adapt instruction to the needs of different types of learners (Verdú et al., 2017), to provide customized prompt feedback (Dever et al., 2020), to develop assessments (Baykasoğlu et al., 2018), and to predict academic success (Çağataylı & Çelebi, 2022). These studies help to inform educators about how artificial intelligence in education (AIEd) can be used in higher education.

Nonetheless, a gap has been highlighted by scholars (viz., Hrastinski et al., 2019; Zawacki-Richter et al., 2019) regarding an understanding of the collective affordances provided through the use of AI in HE. Therefore, the purpose of this study is to examine extant research from 2016 to 2022 to provide an up-to-date systematic review of how AI is being used in the HE context.

Artificial intelligence has become pervasive in the lives of twenty-first-century citizens and is being proclaimed as a tool that can enhance and advance all sectors of our lives (Górriz et al., 2020). The application of AI has attracted great interest in HE, which is highly influenced by the development of information and communication technologies (Alajmi et al., 2020). AI is a tool used across subject disciplines, including language education (Liang et al., 2021), engineering education (Shukla et al., 2019), mathematics education (Hwang & Tu, 2021), and medical education (Winkler-Schwartz et al., 2019).

Artificial intelligence

The term artificial intelligence is not new. It was coined in 1956 by McCarthy (Cristianini, 2016), who followed up on the work of Turing (e.g., Turing, 1937, 1950). Turing described the existence of intelligent reasoning and thinking that could go into intelligent machines. The definition of AI has grown and changed since 1956, as there have been significant advancements in AI capabilities. A current definition of AI is “computing systems that are able to engage in human-like processes such as learning, adapting, synthesizing, self-correction and the use of data for complex processing tasks” (Popenici et al., 2017, p. 2). The interdisciplinary interest of scholars from linguistics, psychology, education, and neuroscience, who connect AI to the nomenclature, perceptions, and knowledge of their own disciplines, can create a challenge when defining AI. This has created the need for categories of AI within specific disciplinary areas. This paper focuses on the category of AI in Education (AIEd) and how AI is specifically used in higher educational contexts.

As the field of AIEd is growing and changing rapidly, there is a need to increase the academic understanding of AIEd. Scholars (viz., Hrastinski et al., 2019; Zawacki-Richter et al., 2019) have drawn attention to the need to better understand the power of AIEd in educational contexts. The following section provides a summary of the previous research regarding AIEd.

Extant systematic reviews

This growing interest in AIEd has led scholars to investigate the research on the use of artificial intelligence in education. Some scholars have conducted systematic reviews focused on a specific subject domain. For example, Liang et al. (2021) conducted a systematic review and bibliographic analysis of the roles and research foci of AI in language education. Shukla et al. (2019) focused their longitudinal bibliometric analysis on 30 years of using AI in engineering. Hwang and Tu (2021) conducted a bibliometric mapping analysis of the roles and trends in the use of AI in mathematics education, and Winkler-Schwartz et al. (2019) specifically examined the use of AI in medical education, looking for best practices in using machine learning to assess surgical expertise. These studies provide a specific focus on the use of AIEd in HE but do not provide an understanding of AI across HE.

Taking a broader view of AIEd in HE, Ouyang et al. (2022) conducted a systematic review of AIEd in online higher education and investigated the literature regarding the use of AI from 2011 to 2020. The findings show that performance prediction, resource recommendation, automatic assessment, and improvement of learning experiences are the four main functions of AI applications in online higher education. Salas-Pilco and Yang (2022) focused on AI applications in Latin American higher education. The results revealed that the main AI applications in higher education in Latin America are: (1) predictive modeling, (2) intelligent analytics, (3) assistive technology, (4) automatic content analysis, and (5) image analytics. These studies provide valuable information for the online and Latin American contexts but not an overarching examination of AIEd in HE.

Other studies have examined HE as a whole. Hinojo-Lucena et al. (2019) conducted a bibliometric study on the impact of AIEd in HE, analyzing the scientific production of AIEd HE publications indexed in the Web of Science and Scopus databases from 2007 to 2017. This study revealed that most of the published document types were proceedings papers. The United States had the highest number of publications, and the most cited articles were about implementing virtual tutoring to improve learning. Chu et al. (2022) reviewed the top 50 most cited articles on AI in HE from 1996 to 2020, revealing that predictions of students’ learning status were most frequently discussed. AI technology was most frequently applied in engineering courses, and AI technologies most often had a role in profiling and prediction. Finally, Zawacki-Richter et al. (2019) analyzed AIEd in HE from 2007 to 2018 to reveal four primary uses of AIEd: (1) profiling and prediction, (2) assessment and evaluation, (3) adaptive systems and personalization, and (4) intelligent tutoring systems. There do not appear to be any studies examining the last 2 years of AIEd in HE, and these authors describe the rapid speed of both AI development and the use of AIEd in HE and call for further research in this area.

Purpose of the study

This study responds to the appeal from scholars (viz., Chu et al., 2022; Hinojo-Lucena et al., 2019; Zawacki-Richter et al., 2019) for research investigating the benefits and challenges of AIEd within HE settings. As prior academic studies of AIEd in HE examined research only up to 2020, this study provides the most up-to-date analysis, examining research through the end of 2022.

The overarching question for this study is: what are the trends in HE research regarding the use of AIEd? The first two questions provide contextual information, such as where the studies occurred and the disciplines AI was used in. These contextual details are important for presenting the main findings of the third question of how AI is being used in HE.

1. In what geographical location was the AIEd research conducted, and how has the trend in the number of publications evolved across the years?

2. What departments were the first authors affiliated with, and what were the academic levels and subject domains in which AIEd research was being conducted?

3. Who are the intended users of the AI technologies, and what are the applications of AI in higher education?

A PRISMA systematic review methodology was used to answer the three questions guiding this study. PRISMA principles (Page et al., 2021) were applied throughout. The PRISMA extension for protocols (PRISMA-P; Moher et al., 2015) provided an a priori roadmap for conducting a rigorous systematic review. The Preferred Reporting Items for Systematic Reviews and Meta-Analysis (Page et al., 2021) were then used for searching, identifying, and selecting articles, and for reading, extracting, and managing the secondary data gathered from those studies (Moher et al., 2015; PRISMA Statement, 2021). This systematic review approach supports an unbiased, impartial synthesis of the data (Hemingway & Brereton, 2009). Within the systematic review methodology, extracted data were aggregated and presented as whole numbers and percentages. A qualitative deductive and inductive coding methodology was also used to analyze the extant data and generate new theories on the use of AI in HE (Gough et al., 2017).

The research begins with the search for the articles to be included in the study. Based on the research questions, the study parameters are defined, including the search years and the quality and types of publications to be included. Next, databases and journals are selected, and a Boolean search is created and used to search those databases and journals. Once a set of publications is located, it is examined against inclusion and exclusion criteria to determine which studies will be included in the final study. The data relevant to the research questions are then extracted from the final set of studies and coded. This methods section describes each of these steps in full detail to ensure transparency.

Search strategy

Only peer-reviewed journal articles were selected for examination in this systematic review. This ensured a level of confidence in the quality of the studies selected (Gough et al., 2017). The search parameters narrowed the focus to studies published from 2016 to 2022. This timeframe was selected to ensure the research was up to date, which is especially important given the rapid change in technology and AIEd.

The data retrieval protocol employed an electronic and a hand search. The electronic search included educational databases within EBSCOhost, followed by an additional electronic search of Wiley Online Library, JSTOR, Science Direct, and Web of Science. Within each of these databases, a full-text search was conducted. Aligned with the research topic and questions, the Boolean search included terms related to AI, higher education, and learning; it is listed in Table 1. In the initial test search, the terms “machine learning” OR “intelligent support” OR “intelligent virtual reality” OR “chatbot” OR “automated tutor” OR “intelligent agent” OR “expert system” OR “neural network” OR “natural language processing” were used. These were removed, as they were subcategories of terms found in Part 1 of the search. Furthermore, inclusion of these specific AI terms returned a large number of computer science courses focused on learning about AI rather than on the use of AI in learning.

Part 2 of the search ensured that articles involved formal university education. The terms higher education and tertiary were both used to recognize the different terms used in different countries. The final Boolean search was “Artificial intelligence” OR AI OR “smart technologies” OR “intelligent technologies” AND “higher education” OR tertiary OR graduate OR undergraduate. Scholars who conducted a systematic review on AIEd in HE up to 2020 (viz., Ouyang et al., 2022) noted that they had missed relevant articles and that other relevant journals should intentionally be examined. Therefore, a hand search was also conducted to include other journals relevant to AIEd that may not be covered by the databases. This is important, as the field of AIEd is still relatively new, and journals focused on this field may not yet be indexed. The hand search included: The International Journal of Learning Analytics and Artificial Intelligence in Education, the International Journal of Artificial Intelligence in Education, and Computers & Education: Artificial Intelligence.
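A search string of this kind is easy to assemble programmatically, which helps keep it consistent across databases. Here is a minimal sketch in Python; the query terms are quoted from the text above, while the code structure itself is illustrative (how a given database API accepts the string is not modeled):

```python
# Assemble the two-part Boolean query used in this review.
AI_TERMS = ['"Artificial intelligence"', 'AI',
            '"smart technologies"', '"intelligent technologies"']
HE_TERMS = ['"higher education"', 'tertiary', 'graduate', 'undergraduate']

def boolean_query(part1, part2):
    """Join each part with OR, then combine the two parts with AND."""
    return f'({" OR ".join(part1)}) AND ({" OR ".join(part2)})'

print(boolean_query(AI_TERMS, HE_TERMS))
# ("Artificial intelligence" OR AI OR "smart technologies" OR
#  "intelligent technologies") AND ("higher education" OR tertiary
#  OR graduate OR undergraduate)
```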

Electronic and hand searches resulted in 371 articles for possible inclusion. The search parameters within the electronic database search narrowed the results to peer-reviewed journal articles published from 2016 to 2022 and removed duplicates. Further screening was conducted manually, with each remaining article reviewed in full by two researchers against the inclusion and exclusion criteria found in Table 2.

Inter-rater reliability was calculated by percentage agreement (Belur et al., 2018). The researchers reached 95% agreement on the coding, and further discussion of misaligned articles resulted in 100% agreement. This screening against the inclusion and exclusion criteria resulted in the exclusion of 237 articles, including the duplicates (see Fig. 1), leaving 138 articles for inclusion in this systematic review.
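Percentage agreement is the simplest inter-rater statistic: the share of items on which both coders made the same call. A minimal sketch, assuming the two coders' decisions are held in parallel Python lists (the decisions shown are hypothetical):

```python
def percentage_agreement(coder_a, coder_b):
    """Share of items (in %) on which two coders gave the same label."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Both coders must rate the same set of articles")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

# Hypothetical screening decisions for six articles:
coder_a = ["include", "exclude", "include", "include", "exclude", "include"]
coder_b = ["include", "exclude", "include", "exclude", "exclude", "include"]
print(f"{percentage_agreement(coder_a, coder_b):.0f}% agreement")  # 83%
```

Disagreements (like the fourth article above) are then discussed until resolved, which is how the reported 95% rose to 100%.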

Figure 1: PRISMA flow chart of article identification and screening (from Page et al., 2021)

The 138 articles were then coded to answer each of the research questions, using deductive and inductive coding methods. Deductive coding involves examining data using a priori codes, which are predetermined criteria; this process was used to code the countries, years, author affiliations, academic levels, and domains into their respective groups. Author affiliations were coded using the academic department of the first author of each study. First authors were chosen because they are the primary researchers of the studies, following past research practice (e.g., Zawacki-Richter et al., 2019). Who the AI was intended for was also coded, using the a priori codes of Student, Instructor, Manager, or Others. The Manager code was used for those involved in organizational tasks, e.g., tracking enrollment; Others was used for those not fitting the other three categories.

Inductive coding was used for the overarching question of this study: how AI was being used in HE. Researchers of extant systematic reviews on AIEd in HE (viz., Chu et al., 2022; Zawacki-Richter et al., 2019) often used a priori frameworks, matching the use of AI to pre-existing categories. A grounded coding methodology (Strauss & Corbin, 1995) was selected for this study instead, to allow findings on the trends of AIEd in HE to emerge from the data. This is important, as it allows a direct understanding of how AI is actually being used, rather than fitting the data to pre-existing ideas of how researchers think it is being used.

The grounded coding process involved extracting from the articles how the AI was being used in HE. “In vivo” coding (Saldana, 2015) was used alongside grounded coding; in vivo codes use language directly from the article to capture the primary authors’ language and ensure consistency with their findings. The grounded coding design used a constant comparative method: researchers identified important text from articles related to the use of AI, and through an iterative process, initial codes led to axial codes, with constant comparison of uses of AI with uses of AI, then of uses of AI with codes, and of codes with codes. Codes were deemed theoretically saturated when the majority of the data fit one of the codes. For both the a priori and the grounded coding, two researchers coded and reached an inter-rater percentage agreement of 96%. After discussing misaligned articles, 100% agreement was achieved.

Findings and discussion

The findings and discussion section is organized by the three questions guiding this study. The first two questions provide contextual information on the AIEd research, and the final question provides a rigorous investigation into how AI is being used in HE.

RQ1. In what geographical location was the AIEd research conducted, and how has the trend in the number of publications evolved across the years?

The 138 studies took place across 31 countries in six of the seven continents of the world. Nonetheless, that distribution was not equal across continents. Asia had the largest number of AIEd studies in HE, at 41%; of the seven countries represented in Asia, 42 of the 58 studies were conducted in Taiwan and China. Europe, at 30%, was the second-largest continent, with 15 countries contributing from one to eight studies apiece. North America, at 21% of the studies, had the third-largest number, with the USA producing 21 of that continent’s 29 studies. The 21 studies from the USA place it second behind China. Only 1% of studies were conducted in South America and 2% in Africa. See Fig. 2 for a visual representation of study distribution across countries. The continents with high numbers of studies are dominated by high-income countries, while low-income countries show a paucity of publications.
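The shares reported here are straightforward aggregations over the coded studies. A minimal sketch of the computation in Python; the per-continent counts below are illustrative stand-ins, not the review's actual coded records:

```python
from collections import Counter

# One continent label per coded study (illustrative counts only).
studies = (["Asia"] * 58 + ["Europe"] * 41 + ["North America"] * 29 +
           ["Oceania"] * 5 + ["Africa"] * 3 + ["South America"] * 2)

counts = Counter(studies)
total = sum(counts.values())
for continent, n in counts.most_common():
    print(f"{continent}: {n} studies ({100 * n / total:.0f}%)")
```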

Figure 2: Geographical distribution of the AIEd HE studies

Data from Zawacki-Richter et al.’s (2019) systematic review covering 2007–2018 found that the USA conducted the most studies across the globe, at 43 of 146, with China second at eleven of the 146 papers. Researchers have noted a rapid trend of Chinese researchers publishing more papers on AI and securing more patents than their US counterparts in a field that was originally led by the US (viz., Li et al., 2021). The data from this study corroborate that trend, with China now leading in the number of AIEd publications.

With the accelerated use of AI in society, gathering data on the use of AIEd in HE provides the scholarly community with specific information on that growth and on whether it is as prolific as scholars anticipated (e.g., Chu et al., 2022). The analysis of the 138 studies shows that the use of AIEd in HE has greatly increased. There is a drop in 2019, but then a great rise in 2021 and 2022; see Fig. 3.

Figure 3: Chronological trend in AIEd in HE

The data on the rise of AIEd in HE are similar to the findings of Chu et al. (2022), who noted an increase from 1996–2010 to 2011–2020. Nonetheless, Chu’s parameters span decades, and such a rise is to be anticipated for a relatively new technology over a longitudinal review. Data from this study show a dramatic rise since 2020, with a 150% increase over the prior two years (2019–2020). The rise in 2021 and 2022 in HE could have been caused by the vast increase in HE faculty having to teach with technology during the pandemic lockdown. Faculty worldwide were using technologies, including AI, to explore how they could continue teaching and learning that had often been face-to-face prior to lockdown. The disadvantage of this rapid adoption of technology is that there was little time to explore the possibilities of AI to transform learning, and AI may have been used to replicate past teaching practices without considering new strategies, previously inconceivable, that the affordances of AI make possible.

However, a further examination of the research from 2021 to 2022 suggests that new strategies are being considered. For example, Liu et al.’s (2022) study used AIEd to provide information on students’ interactions in an online environment and to examine their cognitive effort, and Yao’s (2022) study examined the use of AI to determine student emotions while learning.

RQ2. What departments were the first authors affiliated with, and what were the academic levels and subject domains in which AIEd research was being conducted?

Department affiliations

Data from the AIEd HE studies show that first authors were most frequently affiliated with colleges of education (28%), followed by computer science (20%). Figure 4 presents the 15 academic affiliations of the authors found in the studies. The wide variety of affiliations demonstrates the many ways AI can be used across educational disciplines, and how faculty in diverse areas, including tourism, music, and public affairs, were interested in how AI can be used for educational purposes.

Figure 4: Research affiliations

In an extant AIEd HE systematic review, Zawacki-Richter et al. (2019) named their study Systematic review of research on artificial intelligence applications in higher education—where are the educators? The authors were keen to highlight that, of the AIEd studies in HE, only six percent were written by researchers directly connected to the field of education (i.e., from a college of education). The researchers found a great lack of attention to the pedagogical and ethical implications of implementing AI in HE and a need for more educational perspectives on AI developments from the educators conducting this work. It appears from our data that educators are now showing greater interest in leading these research endeavors, with the most common affiliation belonging to education. This may again be due to the pandemic: those in the field of education needed to support faculty in other disciplines and/or to explore technologies for their own teaching during the lockdown. It may also be due to education professors becoming familiar with AI tools, driven in part by increased societal attention. As much research by education faculty focuses on teaching and learning, they are well positioned to share their research with faculty in other disciplines regarding the potential affordances of AIEd.

Academic levels

The a priori coding of academic levels shows that the majority of studies involved undergraduate students, with 99 of the 138 (72%) focused on these students, compared with 12 of 138 (9%) for graduate students. Some of the studies used AI for both academic levels; see Fig. 5.

Figure 5: Academic level distribution by number of articles

This high percentage of studies focused on the undergraduate population is congruent with an earlier AIEd HE systematic review (viz., Zawacki-Richter et al., 2019), which also reported student academic levels. The focus on undergraduate students may be due to the variety of affordances offered by AIEd, such as predictive analytics on dropouts and academic performance. These uses of AI may be less necessary for graduate students, who already have a record of performance from their undergraduate years. Another reason for this demographic focus may be convenience sampling, as researchers in HE typically have a much larger and more accessible undergraduate population than a graduate one. This disparity between undergraduate and graduate populations is a concern, as AIEd has the potential to be valuable in both settings.

Subject domains

The studies were coded into 14 areas in HE: 13 subject domains and one category of AIEd used in the management of students; see Fig. 6. There is not a wide difference in the percentages of the top subject domains, with language learning at 17%, computer science at 16%, and engineering at 12%. The management-of-students category appeared third on the list, at 14%. Prior studies have also found AIEd often used for language learning (viz., Crompton et al., 2021; Zawacki-Richter et al., 2019). These results differ, however, from Chu et al.’s (2022) findings, in which engineering dramatically leads with 20 of the 50 studies and other subjects, such as language learning, appear only once or twice. That study appears to be an outlier: while the searches were conducted in similar databases, it included only 50 studies from 1996 to 2020.

Figure 6: Subject domains of AIEd in HE

Previous scholars found AI in language learning used primarily for writing, reading, and vocabulary acquisition, drawing on the affordances of natural language processing and intelligent tutoring systems (e.g., Liang et al., 2021). This is similar to the findings in studies of AI used for automated feedback on writing in a foreign language (Ayse et al., 2022) and for AI translation support (Al-Tuwayrish, 2016). The large managerial use of AI in this systematic review focused on making predictions (12 studies) and on admissions (three studies). It is positive to see AI used to look across multiple databases for trends emerging from data that may not have been anticipated or cross-referenced before (Crompton et al., 2022). For example, to examine dropouts, researchers may consider examining class attendance and may not examine other factors that appear unrelated. AI analysis can examine all factors and may find that dropping out is due to factors beyond class attendance.

RQ3. Who are the intended users of the AI technologies and what are the applications of AI in higher education?

Intended user of AI

Of the 138 articles, the a priori coding shows that 72% of the studies focused on Students, followed by Instructors at 17% and Managers at 11%; see Fig. 7. The studies provided examples of AI being used to support students, such as access to learning materials for inclusive learning (Gupta & Chen, 2022), immediate answers to student questions and self-testing opportunities (Yao, 2022), and instant personalized feedback (Mousavi et al., 2020).

Figure 7: Intended user

The data revealed a large emphasis on students in the use of AIEd in HE. This user focus differs from a recent systematic review of AIEd in K-12, which found that AIEd studies in K-12 settings prioritized teachers (Crompton et al., 2022). It may appear that HE uses AI to focus more on students than K-12 does. However, the large number of student studies in HE may be due to the student population being more easily accessible to HE researchers, who may study their own students; the ethical review process is also typically much shorter in HE than in K-12. The data on the intended focus should therefore be reviewed with these other explanations in mind. It was interesting that Managers were the lowest focus both in K-12 and in this HE study. AI has great potential to collect, cross-reference, and examine data across large datasets, allowing the data to be used for actionable insight. More focus on the use of AI by managers would tap into this potential.

How AI is used in HE

Using grounded coding, the use of AIEd in each of the 138 articles was examined, and five major codes emerged from the data. These codes provide insight into how AI was used in HE: (1) Assessment/Evaluation, (2) Predicting, (3) AI Assistant, (4) Intelligent Tutoring System (ITS), and (5) Managing Student Learning. Each of these codes also has axial codes, which are secondary codes forming subcategories of the main category. Each code is delineated below, with a figure providing further descriptive information and examples.

Assessment/evaluation

Assessment and Evaluation was the most common use of AIEd in HE. Within this code there were six axial codes, broken down into further codes; see Fig. 8. Automatic assessment was most common, seen in 26 of the studies. Interestingly, this involved assessment not only of academic achievement but also of other factors, such as affect.

Figure 8: Codes and axial codes for assessment and evaluation

Automatic assessment was used to support a variety of learners in HE. As well as reducing the time it takes instructors to grade (Rutner & Scott, 2022), automatic grading showed positive use for students with diverse needs. For example, Zhang and Xu (2022) used automatic assessment to improve the academic writing skills of Uyghur ethnic minority students living in China. Writing has a variety of cultural nuances, and in this study the students were shown to engage with the automatic assessment system behaviorally, cognitively, and affectively. This allowed the students to engage in self-regulated learning while improving their writing.

Feedback was often described in the studies, with students given text and/or images as formative evaluation. Mousavi et al. (2020) developed a system that provides first-year biology students with automated personalized feedback tailored to each student’s specific demographics, attributes, and academic status. Drawing on the unique ability of AIEd to analyze multiple data sets involving a variety of different students, AI was also used to assess and provide feedback on students’ group work (viz., Ouatik et al., 2021).

AI also supports instructors in generating questions and creating multiple-question tests (Yang et al., 2021). For example, Lu et al. (2021) used natural language processing to create a system that automatically generated tests. Following a Turing-type test, the researchers found that AI technologies can generate highly realistic short-answer questions. The ability of AI to develop multiple questions is a highly valuable affordance, as tests can take a great deal of time to write. However, it is important for instructors to always review questions provided by the AI to ensure they are correct and match the learning objectives for the class, especially for high-value summative assessments.
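Lu et al.’s system is not reproduced here, but the general pattern of NLP-driven question generation can be sketched with an off-the-shelf sequence-to-sequence model. A minimal illustration in Python, assuming the Hugging Face transformers library; the model choice and prompt are illustrative, not what Lu et al. used:

```python
# Sketch only: generate candidate short-answer questions from a passage.
# flan-t5-small is a small general-purpose seq2seq model used for
# illustration; a model fine-tuned for question generation would do better.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-small")

passage = ("Operations research applies analytical methods such as linear "
           "programming to improve decisions in complex organizations.")
candidates = generator(
    "Write a short-answer exam question about this passage: " + passage,
    num_beams=5, num_return_sequences=3,
)
for c in candidates:
    print(c["generated_text"])  # instructor reviews every question before use
```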

The axial codes within assessment and evaluation also revealed that AI was used to review activities in the online space. This included evaluating students’ reflections, achievement goals, community identity, and higher-order thinking (viz., Huang et al., 2021). Three studies used AIEd to evaluate educational materials, including general resources and textbooks (viz., Koć‑Januchta et al., 2022). It is interesting to see the use of AI for the assessment of educational products, rather than of educational artifacts developed by students. While the process may be very similar in nature, this shows researchers thinking beyond the traditional use of AI for assessment to provide other affordances.

Predicting

Predicting was a common use of AIEd in HE, with 21 studies focused specifically on the use of AI for forecasting trends in data. Ten axial codes emerged on the ways AI was used to predict different topics, with nine focused on predictions regarding students and the tenth on predicting the future of higher education. See Fig. 9.

Figure 9: Predicting axial codes

Extant systematic reviews on HE highlighted the use of AIEd for prediction (viz., Chu et al., 2022; Hinojo-Lucena et al., 2019; Ouyang et al., 2022; Zawacki-Richter et al., 2019). Ten of the articles in this study used AI for predicting academic performance. Many of the axial codes overlapped, such as predicting at-risk students and predicting dropouts; however, each provided distinct affordances. An example is the study by Qian et al. (2021), which examined students taking a MOOC course. MOOCs can be challenging environments in which to determine information on individual students, given the vast number of students taking the course (Krause & Lowe, 2014). However, Qian et al. used AIEd to predict students’ future grades by inputting 17 different learning features, including past grades, into an artificial neural network. The model was able to predict students’ grades and highlight students at risk of dropping out of the course.
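Qian et al.’s exact architecture is not specified here, but the general shape of the approach, learning features in and a predicted grade out, can be sketched with scikit-learn. A minimal sketch, assuming 17 numeric features per student and synthetic data standing in for real MOOC logs:

```python
# Illustrative sketch of grade prediction from 17 learning features,
# in the spirit of Qian et al. (2021); data and architecture are invented.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 17))                      # 17 features per student
y = 70 + 10 * X[:, 0] + 5 * X[:, 1] + rng.normal(scale=5, size=500)  # grades

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("R^2 on held-out students:", round(model.score(X_test, y_test), 2))

# Flag students whose predicted grade falls below a passing threshold.
at_risk = (model.predict(X_test) < 60).sum()
print(f"{at_risk} students flagged as at risk of failing")
```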

In a systematic review of AIEd within the K-12 context (viz., Crompton et al., 2022), prediction was less pronounced in the findings. In the K-12 setting, there was a brief mention of the use of AI in predicting student academic performance. One of the studies mentioned students at risk of dropping out, but this was immediately followed by questions about privacy concerns and a description of the data as “sensitive.” The predictions in this HE systematic review cover a wide range of AI predictive affordances. Sensitivity is still important in an HE setting, but it is positive to see the valuable insight prediction provides, which can be used to keep students from failing in their goals.

AI assistant

The studies evaluated in this review indicated that the AI Assistant used to support learners went by a variety of names, including virtual assistant, virtual agent, intelligent agent, intelligent tutor, and intelligent helper. Crompton et al. (2022) described how the terms differ in the way the AI appears to the user: for example, whether there is an anthropomorphic presence, such as an avatar, or whether the AI supports the user via other means, such as text prompts. The findings of this systematic review align with Crompton et al.’s (2022) descriptive differences of the AI Assistant. Furthermore, this code included studies that provide assistance to students but may not have specifically used the word assistance, including the use of chatbots for student outreach, answering questions, and providing other help. See Fig. 10 for the axial codes for AI Assistant.

Figure 10: AI assistant axial codes

Many of these assistants offered multiple supports to students, such as Alex, the AI described as a virtual change agent in Kim and Bennekin’s (2016) study. Alex interacted with students in a college mathematics course by asking diagnostic questions and giving support depending on student needs. Alex’s support was organized into four stages: (1) goal initiation (“Want it”), (2) goal formation (“Plan for it”), (3) action control (“Do it”), and (4) emotion control (“Finish it”). Alex provided responses depending on which of these four areas students needed help with. These messages supported students with the aim of encouraging persistence in pursuing their studies and degree programs and of improving performance.
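Kim and Bennekin do not publish an implementation, but the four-stage dispatch logic described above can be sketched as a simple rule-based responder. The stage names come from the study; the diagnosis and messages below are invented for illustration:

```python
# Illustrative four-stage support dispatcher in the spirit of Alex
# (Kim & Bennekin, 2016). Stage labels are from the study; messages are not.
STAGE_MESSAGES = {
    "goal_initiation": "Want it: let's pin down why this course matters to you.",
    "goal_formation": "Plan for it: break your goal into weekly study targets.",
    "action_control": "Do it: here is a practice set matched to your last quiz.",
    "emotion_control": "Finish it: setbacks are normal; recall what worked before.",
}

def respond(diagnosed_stage: str) -> str:
    """Return a stage-appropriate support message for the student."""
    return STAGE_MESSAGES.get(
        diagnosed_stage, "Tell me more about where you are stuck."
    )

print(respond("action_control"))
```

In a real system, the diagnosed stage would come from the assistant’s diagnostic questions rather than being passed in directly.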

The role of AI in providing assistance connects back to the seminal work of Vygotsky (1978) and the Zone of Proximal Development (ZPD). The ZPD highlights the degree to which students can rapidly develop when assisted. Vygotsky described this assistance as often coming from a person; with technological advancements, the AI assistants in these studies provide that support instead. The affordances of AI can also ensure that the support is timely, without waiting for a person to be available, and assistance can take into account students’ academic ability, their preferences, and the best strategies for supporting them. These features were evident in Kim and Bennekin’s (2016) study using Alex.

Intelligent tutoring system

The use of intelligent tutoring systems (ITSs) was revealed in the grounded coding. An ITS is an adaptive instructional system that combines AI techniques with educational methods, customizing educational activities and strategies based on students' characteristics and needs (Mousavinasab et al., 2021). While ITSs may be an anticipated finding in AIEd HE systematic reviews, it was interesting that extant reviews similar to this study did not always describe their use in HE. For example, Ouyang et al. (2022) included “intelligent tutoring system” in their search terms, describing it as a common technique, yet ITSs were not mentioned again in the paper. Zawacki-Richter et al. (2019), on the other hand, noted ITSs as one of the four overarching uses of AIEd in HE. Chu et al. (2022) then used Zawacki-Richter's four uses of AIEd for their recent systematic review.

In this systematic review, 18 studies specifically mentioned that they were using an ITS. The ITS code did not necessitate axial codes, as the systems performed the same type of function in HE, namely providing adaptive instruction to students. For example, de Chiusole et al. (2020) developed Stat-Knowlab, an ITS that determines the level of competence and best learning path for each student. Stat-Knowlab thus personalizes students' learning, providing only those educational activities that the student is ready to learn, and monitors the evolution of the learning process as the student interacts with the system. In another study, Khalfallah and Slama (2018) built an ITS called LabTutor for engineering students. LabTutor served as an experienced instructor, enabling students to access and perform experiments on laboratory equipment while adapting to the profile of each student.
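
The core adaptive move of such systems can be sketched as prerequisite-gated activity selection: offer only those activities whose prerequisites the student has already mastered. The sketch below illustrates this general idea with a hypothetical skill graph; it is not Stat-Knowlab's algorithm, which rests on competence-based knowledge space theory.

```python
# Simplified sketch of adaptive activity selection in the spirit of an ITS.
# The skill graph is hypothetical; this is not Stat-Knowlab's algorithm.
PREREQUISITES = {
    "mean": set(),
    "variance": {"mean"},
    "z_score": {"mean", "variance"},
    "confidence_interval": {"z_score"},
}

def ready_activities(mastered: set) -> list:
    """Skills not yet mastered whose prerequisites are all mastered."""
    return [skill for skill, prereqs in PREREQUISITES.items()
            if skill not in mastered and prereqs <= mastered]

print(ready_activities({"mean"}))  # -> ['variance']
```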

The student population in university classes can run into the hundreds, and with the advent of MOOCs, class sizes can even reach the thousands. Even in a small class of 20 students, the instructor cannot physically provide immediate, unique, personalized questions to each student. Instructors need time to read and check answers and then further time to provide feedback before determining what the next question should be. Working with the instructor, AIEd can provide that immediate instruction, guidance, feedback, and follow-up questioning without delay or fatigue. This appears to be an effective use of AIEd, especially within the HE context.

Managing student learning

Another code that emerged in the grounded coding focused on the use of AI for managing student learning. Administrators and instructors use AI to manage student learning by drawing on it for information, organization, and data analysis. The axial codes reveal the trends in the use of AI in managing student learning; see Fig. 11.

Figure 11: Axial codes for managing student learning

Learning analytics was an a priori term often found in the studies; it describes “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs” (Long & Siemens, 2011, p. 34). The studies investigated in this systematic review spanned grades and subject areas and provided administrators and instructors with different types of information to guide their work. One of those studies was conducted by Mavrikis et al. (2019), who described learning analytics as teacher assistance tools. In their study, learning analytics were used in an exploratory learning environment with targeted visualizations supporting classroom orchestration. These visualizations, displayed as screenshots in the study, provided information such as the interactions between students and goal achievement. They appear similar to infographics: brightly colored and drawing the eye quickly to pertinent information. AI is also used for other tasks, such as organizing the sequence of curriculum in pacing guides for future groups of students and designing instruction. Zhang (2022) described how an AI-driven teaching system for talent cultivation, with digital affordances used to establish a quality assurance system for practical teaching, provides new mechanisms for the design of university education systems. In developing such a system, Zhang found that the stability of the instructional design overcame the drawbacks of traditional, subjective manual design.
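
At its simplest, the pipeline beneath such dashboards reduces an interaction log to per-student summaries. The sketch below shows that reduction with pandas; the log, column names, and metrics are hypothetical, not taken from Mavrikis et al.'s tools.

```python
# Minimal learning-analytics sketch: aggregate an interaction log into a
# per-student summary a dashboard might visualize. Data are hypothetical.
import pandas as pd

log = pd.DataFrame({
    "student": ["ana", "ana", "ben", "ben", "ben", "cho"],
    "event":   ["view", "post", "view", "view", "quiz", "quiz"],
    "score":   [None, None, None, None, 0.8, 0.5],
})

summary = log.groupby("student").agg(
    interactions=("event", "size"),
    quizzes=("event", lambda e: (e == "quiz").sum()),
    mean_quiz_score=("score", "mean"),
)
print(summary)
```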

Another trend that emerged from the studies was the use of AI to manage student big data to support learning. Ullah and Hafiz (2022) lament that with traditional methods, including non-AI digital techniques, it is very difficult for an instructor to pay attention to every student's learning progress, and that big data analysis techniques are needed. The ability to look across and within large data sets to inform instruction is a valuable affordance of AIEd in HE. While the use of AIEd to manage student learning emerged from the data, this study uncovered only 19 studies in seven years (2016–2022) that focused on the use of AIEd to manage student data. This paucity of use was also noted in a recent study in the K-12 space (Crompton et al., 2022). In Chu et al.'s (2022) study examining the top 50 most-cited AIEd articles, managing student data did not appear among the top uses of AIEd in HE. It would appear that more research should be conducted in this area to fully explore the possibilities of AI.
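
One common big-data technique here is unsupervised clustering of students by engagement features (clustering of students also appears among the managing-learning uses summarized later in this review). The sketch below groups synthetic engagement data with k-means; the features and cluster count are assumptions, not drawn from any reviewed study.

```python
# Sketch of grouping students by engagement features with k-means.
# Features are synthetic: logins/week, assignments submitted, forum posts.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
features = np.vstack([
    rng.normal([2, 3, 1], 0.5, (40, 3)),   # low-engagement group
    rng.normal([7, 9, 6], 0.5, (40, 3)),   # high-engagement group
])

labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(
    StandardScaler().fit_transform(features))
for k in range(2):
    print(f"cluster {k}: {np.sum(labels == k)} students")
```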

Gaps and future research

From this systematic review, six gaps emerged in the data, providing opportunities for future studies to investigate and provide a fuller understanding of how AIEd can be used in HE:

  • The majority of the research was conducted in high-income countries, revealing a paucity of research in developing countries. More research should be conducted in these countries to expand the level of understanding about how AI can enhance learning in under-resourced communities.
  • Almost 50% of the studies were conducted in the areas of language learning, computer science, and engineering. Research conducted by members of multiple, different academic departments would help advance knowledge of the use of AI in more disciplines.
  • This study revealed that faculty affiliated with schools of education are taking an increasing role in researching the use of AIEd in HE. As this body of knowledge grows, faculty in schools of education should share their research on the pedagogical affordances of AI so that this knowledge can be applied by faculty across disciplines.
  • The vast majority of the research was conducted at the undergraduate level. More research needs to be done at the graduate level, as AI provides many opportunities in this environment.
  • Little study was done on how AIEd can assist instructors and managers in their roles in HE. The power of AI to assist both groups merits further research.
  • Finally, much of the research investigated in this systematic review revealed the use of AIEd in traditional ways that enhance or make current practices more efficient. More research needs to focus on the unexplored affordances of AIEd. As AI becomes more advanced and sophisticated, new opportunities will arise, and researchers need to be at the forefront of these possible innovations.

In addition, empirical exploration is needed of new tools such as ChatGPT, which became available for public use at the end of 2022. Given the time it takes for a peer-reviewed journal article to be published, ChatGPT did not appear in the articles in this study. What is interesting is that it could fit a variety of the use codes found in this study, with students getting support in writing papers and instructors using ChatGPT to assess students' work and to help write emails or descriptions for students. It would be pertinent for researchers to explore ChatGPT.

Limitations

The findings of this study show a rapid increase in the number of AIEd studies published in HE. However, to ensure a level of credibility, this study included only peer-reviewed journal articles. Such articles take months to publish. Therefore, conference proceedings and gray literature such as blogs and summaries may reveal further findings not explored in this study. In addition, the articles in this study were all published in English, which excluded findings from research published in other languages.

Conclusion

In response to the calls by Hinojo-Lucena et al. (2019), Chu et al. (2022), and Zawacki-Richter et al. (2019), this study provides unique findings with an up-to-date examination of the use of AIEd in HE from 2016 to 2022. Past systematic reviews examined the research only up to 2020. The findings of this study show that in 2021 and 2022, publications rose to nearly two to three times the levels of previous years. With this rapid rise in the number of AIEd HE publications, new trends have emerged.

The findings show that of the 138 studies examined, research was conducted on six of the seven continents. Extant systematic reviews showed that the US led by a large margin in the number of studies published; this trend has now shifted to China. Another shift in AIEd HE is that while extant studies lamented the lack of professors of education leading these studies, this systematic review found education to be the most common department affiliation, at 28%, with computer science second at 20%. Undergraduate students were the most studied population, at 72%. Similar to the findings of other studies, language learning was the most common subject domain, including writing, reading, and vocabulary acquisition. In examining whom the AIEd was intended for, 72% of the studies focused on students, 17% on instructors, and 11% on managers.

Grounded coding was used to answer the overarching question of how AIEd is used in HE. Five usage codes emerged from the data: (1) Assessment/Evaluation, (2) Predicting, (3) AI Assistant, (4) Intelligent Tutoring System (ITS), and (5) Managing Student Learning. Assessment and evaluation served a wide variety of purposes, including assessing academic progress and student emotions toward learning, individual and group evaluations, and class-based online community assessments. Predicting emerged as a code with ten axial codes, as AIEd predicted dropouts and at-risk students, innovative ability, and career decisions. AI assistants were specific to supporting students in HE; these included assistants with an anthropomorphic presence, such as virtual agents, and persuasive interventions delivered through digital programs. ITSs were not always noted in extant systematic reviews but were specifically mentioned in 18 of the studies in this review, where they provided strategies and approaches customized to students' characteristics and needs. The final code highlighted the use of AI in managing student learning, including learning analytics, curriculum sequencing, instructional design, and clustering of students.

The findings of this study provide a springboard for future academics, practitioners, computer scientists, policymakers, and funders in understanding the state of the field of AIEd in HE and how AI is used. The study also provides actionable items to address gaps in current understanding. As the use of AIEd continues to grow, this study can serve as a baseline for further research on the use of AIEd in HE.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Alajmi, Q., Al-Sharafi, M. A., & Abuali, A. (2020). Smart learning gateways for Omani HEIs towards educational technology: Benefits, challenges and solutions. International Journal of Information Technology and Language Studies, 4 (1), 12–17.


Al-Tuwayrish, R. K. (2016). An evaluative study of machine translation in the EFL scenario of Saudi Arabia. Advances in Language and Literary Studies, 7 (1), 5–10. https://doi.org/10.7575/aiac.alls.v.7n.1p.5

Ayse, T., & Nil, G. (2022). Automated feedback and teacher feedback: Writing achievement in learning English as a foreign language at a distance. The Turkish Online Journal of Distance Education, 23 (2), 120–139.


Baykasoğlu, A., Özbel, B. K., Dudaklı, N., Subulan, K., & Şenol, M. E. (2018). Process mining based approach to performance evaluation in computer-aided examinations. Computer Applications in Engineering Education, 26 (5), 1841–1861. https://doi.org/10.1002/cae.21971

Belur, J., Tompson, L., Thornton, A., & Simon, M. (2018). Interrater reliability in systematic review methodology: Exploring variation in coder decision-making. Sociological Methods & Research. https://doi.org/10.1177/0049124118799372

Çağataylı, M., & Çelebi, E. (2022). Estimating academic success in higher education using big five personality traits, a machine learning approach. Arab Journal Scientific Engineering, 47 , 1289–1298. https://doi.org/10.1007/s13369-021-05873-4

Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8 , 75264–75278. https://doi.org/10.1109/ACCESS.2020.2988510

Chu, H., Tu, Y., & Yang, K. (2022). Roles and research trends of artificial intelligence in higher education: A systematic review of the top 50 most-cited articles. Australasian Journal of Educational Technology, 38 (3), 22–42. https://doi.org/10.14742/ajet.7526

Cristianini, N. (2016). Intelligence reinvented. New Scientist, 232 (3097), 37–41. https://doi.org/10.1016/S0262-4079(16)31992-3

Crompton, H., Bernacki, M. L., & Greene, J. (2020). Psychological foundations of emerging technologies for teaching and learning in higher education. Current Opinion in Psychology, 36 , 101–105. https://doi.org/10.1016/j.copsyc.2020.04.011

Crompton, H., & Burke, D. (2022). Artificial intelligence in K-12 education. SN Social Sciences, 2 , 113. https://doi.org/10.1007/s43545-022-00425-5

Crompton, H., Jones, M., & Burke, D. (2022). Affordances and challenges of artificial intelligence in K-12 education: A systematic review. Journal of Research on Technology in Education . https://doi.org/10.1080/15391523.2022.2121344

Crompton, H., & Song, D. (2021). The potential of artificial intelligence in higher education. Revista Virtual Universidad Católica Del Norte, 62 , 1–4. https://doi.org/10.35575/rvuen.n62a1

de Chiusole, D., Stefanutti, L., Anselmi, P., & Robusto, E. (2020). Stat-Knowlab. Assessment and learning of statistics with competence-based knowledge space theory. International Journal of Artificial Intelligence in Education, 30 , 668–700. https://doi.org/10.1007/s40593-020-00223-1

Dever, D. A., Azevedo, R., Cloude, E. B., & Wiedbusch, M. (2020). The impact of autonomy and types of informational text presentations in game-based environments on learning: Converging multi-channel processes data and learning outcomes. International Journal of Artificial Intelligence in Education, 30 (4), 581–615. https://doi.org/10.1007/s40593-020-00215-1

Górriz, J. M., Ramírez, J., Ortíz, A., Martínez-Murcia, F. J., Segovia, F., Suckling, J., Leming, M., Zhang, Y. D., Álvarez-Sánchez, J. R., Bologna, G., Bonomini, P., Casado, F. E., Charte, D., Charte, F., Contreras, R., Cuesta-Infante, A., Duro, R. J., Fernández-Caballero, A., Fernández-Jover, E., … Ferrández, J. M. (2020). Artificial intelligence within the interplay between natural and artificial computation: Advances in data science, trends and applications. Neurocomputing, 410 , 237–270. https://doi.org/10.1016/j.neucom.2020.05.078

Gough, D., Oliver, S., & Thomas, J. (2017). An introduction to systematic reviews (2nd ed.). Sage.

Gupta, S., & Chen, Y. (2022). Supporting inclusive learning using chatbots? A chatbot-led interview study. Journal of Information Systems Education, 33 (1), 98–108.

Hemingway, P. & Brereton, N. (2009). In Hayward Medical Group (Ed.). What is a systematic review? Retrieved from http://www.medicine.ox.ac.uk/bandolier/painres/download/whatis/syst-review.pdf

Hinojo-Lucena, F., Arnaz-Diaz, I., Caceres-Reche, M., & Romero-Rodriguez, J. (2019). Artificial intelligence in higher education: A bibliometric study on its impact in the scientific literature. Education Sciences, 9 (1), 51. https://doi.org/10.3390/educsci9010051

Hrastinski, S., Olofsson, A. D., Arkenback, C., Ekström, S., Ericsson, E., Fransson, G., Jaldemark, J., Ryberg, T., Öberg, L.-M., Fuentes, A., Gustafsson, U., Humble, N., Mozelius, P., Sundgren, M., & Utterberg, M. (2019). Critical imaginaries and reflections on artificial intelligence and robots in postdigital K-12 education. Postdigital Science and Education, 1 (2), 427–445. https://doi.org/10.1007/s42438-019-00046-x

Huang, C., Wu, X., Wang, X., He, T., Jiang, F., & Yu, J. (2021). Exploring the relationships between achievement goals, community identification and online collaborative reflection. Educational Technology & Society, 24 (3), 210–223.

Hwang, G. J., & Tu, Y. F. (2021). Roles and research trends of artificial intelligence in mathematics education: A bibliometric mapping analysis and systematic review. Mathematics, 9 (6), 584. https://doi.org/10.3390/math9060584

Khalfallah, J., & Slama, J. B. H. (2018). The effect of emotional analysis on the improvement of experimental e-learning systems. Computer Applications in Engineering Education, 27 (2), 303–318. https://doi.org/10.1002/cae.22075

Kim, C., & Bennekin, K. N. (2016). The effectiveness of volition support (VoS) in promoting students’ effort regulation and performance in an online mathematics course. Instructional Science, 44 , 359–377. https://doi.org/10.1007/s11251-015-9366-5

Koć-Januchta, M. M., Schönborn, K. J., Roehrig, C., Chaudhri, V. K., Tibell, L. A. E., & Heller, C. (2022). “Connecting concepts helps put main ideas together”: Cognitive load and usability in learning biology with an AI-enriched textbook. International Journal of Educational Technology in Higher Education, 19 (11), 11. https://doi.org/10.1186/s41239-021-00317-3

Krause, S. D., & Lowe, C. (2014). Invasion of the MOOCs: The promise and perils of massive open online courses . Parlor Press.

Li, D., Tong, T. W., & Xiao, Y. (2021). Is China emerging as the global leader in AI? Harvard Business Review. https://hbr.org/2021/02/is-china-emerging-as-the-global-leader-in-ai

Liang, J. C., Hwang, G. J., Chen, M. R. A., & Darmawansah, D. (2021). Roles and research foci of artificial intelligence in language education: An integrated bibliographic analysis and systematic review approach. Interactive Learning Environments . https://doi.org/10.1080/10494820.2021.1958348

Liu, S., Hu, T., Chai, H., Su, Z., & Peng, X. (2022). Learners’ interaction patterns in asynchronous online discussions: An integration of the social and cognitive interactions. British Journal of Educational Technology, 53 (1), 23–40. https://doi.org/10.1111/bjet.13147

Long, P., & Siemens, G. (2011). Penetrating the fog: Analytics in learning and education. Educause Review, 46 (5), 31–40.

Lu, O. H. T., Huang, A. Y. Q., Tsai, D. C. L., & Yang, S. J. H. (2021). Expert-authored and machine-generated short-answer questions for assessing students learning performance. Educational Technology & Society, 24 (3), 159–173.

Mavrikis, M., Geraniou, E., Santos, S. G., & Poulovassilis, A. (2019). Intelligent analysis and data visualization for teacher assistance tools: The case of exploratory learning. British Journal of Educational Technology, 50 (6), 2920–2942. https://doi.org/10.1111/bjet.12876

Moher, D., Shamseer, L., Clarke, M., Ghersi, D., Liberati, A., Petticrew, M., Shekelle, P., & Stewart, L. (2015). Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Systematic Reviews, 4 (1), 1–9. https://doi.org/10.1186/2046-4053-4-1

Mousavi, A., Schmidt, M., Squires, V., & Wilson, K. (2020). Assessing the effectiveness of student advice recommender agent (SARA): The case of automated personalized feedback. International Journal of Artificial Intelligence in Education, 31 (2), 603–621. https://doi.org/10.1007/s40593-020-00210-6

Mousavinasab, E., Zarifsanaiey, N., Kalhori, S. R. N., Rakhshan, M., Keikha, L., & Saeedi, M. G. (2021). Intelligent tutoring systems: A systematic review of characteristics, applications, and evaluation methods. Interactive Learning Environments, 29 (1), 142–163. https://doi.org/10.1080/10494820.2018.1558257

Ouatik, F., Ouatik, F., Fadli, H., Elgorari, A., Mohadab, M. E. L., Raoufi, M., et al. (2021). E-Learning & decision making system for automate students assessment using remote laboratory and machine learning. Journal of E-Learning and Knowledge Society, 17 (1), 90–100. https://doi.org/10.20368/1971-8829/1135285

Ouyang, F., Zheng, L., & Jiao, P. (2022). Artificial intelligence in online higher education: A systematic review of empirical research from 2011–2020. Education and Information Technologies, 27 , 7893–7925. https://doi.org/10.1007/s10639-022-10925-9

Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T., Mulrow, C., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. British Medical Journal . https://doi.org/10.1136/bmj.n71

Popenici, S. A. D., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12 (22), 1–13. https://doi.org/10.1186/s41039-017-0062-8

PRISMA Statement. (2021). PRISMA endorsers. PRISMA statement website. http://www.prisma-statement.org/Endorsement/PRISMAEndorsers

Qian, Y., Li, C.-X., Zou, X.-G., Feng, X.-B., Xiao, M.-H., & Ding, Y.-Q. (2022). Research on predicting learning achievement in a flipped classroom based on MOOCs by big data analysis. Computer Applications in Engineering Education, 30 , 222–234. https://doi.org/10.1002/cae.22452

Rutner, S. M., & Scott, R. A. (2022). Use of artificial intelligence to grade student discussion boards: An exploratory study. Information Systems Education Journal, 20 (4), 4–18.

Salas-Pilco, S., & Yang, Y. (2022). Artificial Intelligence application in Latin America higher education: A systematic review. International Journal of Educational Technology in Higher Education, 19 (21), 1–20. https://doi.org/10.1186/S41239-022-00326-w

Saldana, J. (2015). The coding manual for qualitative researchers (3rd ed.). Sage.

Shukla, A. K., Janmaijaya, M., Abraham, A., & Muhuri, P. K. (2019). Engineering applications of artificial intelligence: A bibliometric analysis of 30 years (1988–2018). Engineering Applications of Artificial Intelligence, 85 , 517–532. https://doi.org/10.1016/j.engappai.2019.06.010

Strauss, A., & Corbin, J. (1995). Grounded theory methodology: An overview. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 273–285). Sage.

Turing, A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 2 (1), 230–265.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59 , 443–460.


Ullah, H., & Hafiz, M. A. (2022). Exploring effective classroom management strategies in secondary schools of Punjab. Journal of the Research Society of Pakistan, 59 (1), 76.

Verdú, E., Regueras, L. M., Gal, E., et al. (2017). Integration of an intelligent tutoring system in a course of computer network design. Educational Technology Research and Development, 65 , 653–677. https://doi.org/10.1007/s11423-016-9503-0

Vygotsky, L. S. (1978). Mind and society: The development of higher psychological processes . Harvard University Press.

Winkler-Schwartz, A., Bissonnette, V., Mirchi, N., Ponnudurai, N., Yilmaz, R., Ledwos, N., Siyar, S., Azarnoush, H., Karlik, B., & Del Maestro, R. F. (2019). Artificial intelligence in medical education: Best practices using machine learning to assess surgical expertise in virtual reality simulation. Journal of Surgical Education, 76 (6), 1681–1690. https://doi.org/10.1016/j.jsurg.2019.05.015

Yang, A. C. M., Chen, I. Y. L., Flanagan, B., & Ogata, H. (2021). Automatic generation of cloze items for repeated testing to improve reading comprehension. Educational Technology & Society, 24 (3), 147–158.

Yao, X. (2022). Design and research of artificial intelligence in multimedia intelligent question answering system and self-test system. Advances in Multimedia . https://doi.org/10.1155/2022/2156111

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16 (1), 1–27. https://doi.org/10.1186/s41239-019-0171-0

Zhang, F. (2022). Design and application of artificial intelligence technology-driven education and teaching system in universities. Computational and Mathematical Methods in Medicine . https://doi.org/10.1155/2022/8503239

Zhang, Z., & Xu, L. (2022). Student engagement with automated feedback on academic writing: A study on Uyghur ethnic minority students in China. Journal of Multilingual and Multicultural Development . https://doi.org/10.1080/01434632.2022.2102175


Acknowledgements

The authors would like to thank Mildred Jones, Katherina Nako, Yaser Sendi, and Ricardo Randall for data gathering and organization.

Author information

Authors and affiliations

Department of Teaching and Learning, Old Dominion University, Norfolk, USA

Helen Crompton

ODUGlobal, Norfolk, USA

Diane Burke

RIDIL, ODUGlobal, Norfolk, USA


Contributions

HC: Conceptualization; Data curation; Project administration; Formal analysis; Methodology; original draft; and review & editing. DB: Conceptualization; Data curation; Project administration; Formal analysis; Methodology; original draft; and review & editing. Both authors read and approved this manuscript.

Corresponding author

Correspondence to Helen Crompton.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Crompton, H., Burke, D. Artificial intelligence in higher education: the state of the field. Int J Educ Technol High Educ 20, 22 (2023). https://doi.org/10.1186/s41239-023-00392-8

Received: 30 January 2023

Accepted: 23 March 2023

Published: 24 April 2023

Keywords: Artificial intelligence, Systematic review, Higher education


The Future of OER in Higher Education


  • Open educational resources can be integrated into college courses to make higher education more engaging, accessible, and affordable for students.
  • The advent of generative artificial intelligence provides instructors with new, more efficient ways to develop and update OER so that these materials stay aligned with evolving trends and course objectives.
  • Many microcredentialing courses now use OER, which allows students to develop in-demand skills without paying the high cost of traditional higher education.

The past few years have brought drastic changes to higher education, in both how students learn and how instructors teach. Whether due to  increases in remote and hybrid learning  during the COVID-19 pandemic or the growing prominence of generative artificial intelligence (GenAI), business educators have had to adapt to change quickly, often without much institutional support.

Even through all this upheaval, textbooks have remained the most popular course material in higher education, according to a survey conducted by the research firm Bay View Analytics. However, instructors are increasingly replacing print materials such as hardcopy textbooks and homework handouts with digital options such as online textbooks and homework assistance platforms. Instructors have also embraced open educational resources (OER). Bay View's data show that the percentage of educators using OER as required course materials nearly doubled between the 2019–20 and 2022–23 academic years, from 15 percent to 29 percent.

Creative Commons defines OER as “teaching, learning, and research materials that reside in the public domain or have been released under an open license that permits their free use and re-purposing by others.” Lacking traditional copyright restrictions, these materials permit instructors to engage more easily in the 5Rs: retaining, revising, remixing, reusing, and redistributing the materials.

These materials offer many benefits for instructors and students, addressing both existing and emerging challenges. But today, the versatility of OER is evolving with the advent of GenAI and the increasing popularity of alternative, skills-based educational paths. The latest developments in OER are allowing instructors and business programs to create and use these resources to make the educational experience more engaging, accessible, and relevant to all learners.

The Challenges of Print

Regardless of institution size or course level, business instructors and students collectively face several common challenges when using traditional print-based course materials:

Cost— Most faculty responding to Bay View’s survey view the cost of educational materials as “a serious problem” for their students. Postsecondary students spend, on average, between 628 USD and 1,200 USD annually on textbooks and related resources. In some cases, students have reported that they take fewer classes, withdraw from courses, change their majors, or even drop out of college entirely because they cannot afford required materials. Others have reported that they skip meals or take on additional part-time jobs and shifts at work to afford their textbooks.


Time— It can take several days or weeks for students to purchase the required materials for their classes. Moreover, knowing that many students forgo buying required materials because of cost, instructors often feel that they must deliver core textbook content directly in class. As a result, they must divert class time away from experiential learning, hindering their ability to teach students effectively.

Relevance— Print textbooks often contain outdated information and data, but it’s no longer sufficient for students to view graphs related to economic productivity or income inequality using information that is several years old. Likewise, it is no longer appropriate for students to read case studies from once-prominent sectors and companies whose significance is sharply declining as they are supplanted by new and growing industries. In today’s data-driven age, it’s essential for students to have access to the most recent analytics and quantitative research.

Customization— The rise of remote learning and GenAI has heightened the need for instructors to personalize classroom resources and experiences to meet individual students’ needs. This is very difficult to do in introductory business courses with hundreds of students—and nearly impossible to do with commercially published print textbooks.

The OER option helps instructors address these issues. Available in the public domain or under an open license, OER can be customized to align with course objectives and amended with updated information and data. For instance, instructors can update an existing chart or graph with newer data or replace an older case study with one involving an industry or company more relevant to today’s students.

Because these resources are free, students do not have to decide how or whether they can afford required textbooks. Likewise, instructors can teach with confidence, knowing that all students have required course materials on the first day of class.

When OER Meets GenAI

The rise of GenAI is reshaping OER just as it is  reshaping higher education . Institutions are experimenting with GenAI to streamline time-consuming tasks involved with planning and teaching a course—and this can include finding, creating, and implementing OER.

For example, schools such as the  University of Texas at San Antonio  and  West Virginia University  in Morgantown offer grants to faculty who implement existing or create original OER to address time and resource barriers.  Miami University  in Oxford, Ohio, and the  University of Massachusetts Amherst  now consider their faculty’s OER development in tenure and promotion decisions.

GenAI can help instructors take even more of the legwork out of OER creation and adoption. Artificial intelligence can act as a kind of teaching assistant that instructors can use to quickly complete previously time-consuming tasks, such as:

  • Proofreading and receiving feedback on the OER that they create.
  • Generating descriptive alt text and captions for images (see the sketch after this list).
  • Creating discussion prompts, homework questions, practice problems, and assignments.
  • Finding current examples of industries and companies relevant to today’s students (asking an AI tool, for example, to suggest information related to Instagram instead of MySpace).
  • Finding real-world examples of how certain sectors, organizations, and products apply specific business concepts, such as disruptive innovation or product life cycles.
  • Providing updated information on the business legal environment.
  • Creating assessments and rubrics.
  • Translating existing open-license content into other languages.
  • Updating existing OER with examples that allow increasingly  diverse cohorts  of undergraduate students to see themselves in the course material. For instance, GenAI can help instructors more easily integrate cases that involve organizations with  diverse leadership  or entrepreneurs from  historically underrepresented backgrounds .
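
As one concrete illustration of the alt-text item above, the sketch below asks a chat-completions-style API to draft alt text for a figure. The endpoint URL, model name, and response shape follow the widely used OpenAI chat-completions convention, but all three are assumptions to verify against your provider's current documentation, and the output still needs the human review discussed below.

```python
# Hedged sketch: ask a chat-completions-style API to draft alt text for an
# OER figure. Endpoint, model name, and response shape are assumptions based
# on the common convention; verify against your provider's docs, and always
# review the output before publishing.
import os
import requests

def draft_alt_text(figure_description: str) -> str:
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",   # assumed endpoint
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",                     # assumed model name
            "messages": [{
                "role": "user",
                "content": "Write concise, descriptive alt text for this "
                           f"open-textbook figure: {figure_description}",
            }],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

print(draft_alt_text("Bar chart comparing 2019-20 and 2022-23 OER adoption"))
```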

Created by and for the educational community, OER can offer instructors a great deal of flexibility and versatility in designing their courses in ways that align with learning objectives and meet student needs. That said, GenAI cannot replace the critical role instructors play in OER creation. Given the  murky legality  surrounding AI and copyright, business instructors should be transparent about their use of AI and provide appropriate attributions.

Additionally, instructors should fact-check AI outputs such as case studies, examples, and statistics to ensure the information that AI tools provide is correct. There have already been many incidents in which ChatGPT has provided users with false statistics, made-up examples, and other glaring errors—known as hallucinations. Such errors underscore the importance of responsible GenAI use when creating course materials.

Nonetheless, by using GenAI tools responsibly, instructors can significantly streamline OER creation and customization in ways that improve their students’ learning experiences.

OER’s Role in Microcredentials

Employers increasingly  rely on microcredentials  to fill skills gaps in their workforces and consider microcredentials in place of traditional college degrees. Meanwhile, students are turning to microcredentials to  gain in-demand career skills  without attending—and paying tuition to—four-year universities. OER adoption is evolving to address the growing interest in these alternative, skills-based educational paths.

One initiative using open materials in microcredentials is  OERu , an international network of more than 40 partner universities, including Penn State in State College and  BCcampus , a platform that supports postsecondary education in British Columbia, Canada. With funding from UNESCO, the platform offers accredited, online  microcredentialing courses  based on OER. Each full course consists of a set of micro Open Online Courses (mOOCs), which students worldwide can take for free.


After students complete a set of mOOCs that corresponds to a full course, they can obtain academic credit from one of the partner universities by taking a formal assessment. The assessment fees are significantly lower than the cost of full-time study, which means that students can develop in-demand skills without the burden of high tuition costs.

OERu offers a  certificate  that introduces students to the fundamentals of business and management while providing an overview of business careers across various sectors. This certificate includes 26 mOOCs, whose topics range from an  introduction to microeconomics  to the  foundations of marketing  to the  skills required to plan and manage organizations .

Institutions such as the  University of Pittsburgh  in Pennsylvania offer microcredentials in subjects such as accounting, marketing analytics, and technology management to help MBA students fully prepare for postgraduation employment. For many schools, offering OER-supported microcredentials is a viable, cost-efficient way to attract more prospective students and ensure existing students develop in-demand skills.

Open-Ended Opportunities

Educators who want to introduce OER materials into their classrooms will find that various platforms offer high-quality, timely resources. One option is OpenStax, based at Rice University in Houston. At OpenStax, we are committed to helping educators find, create, and use OER materials. Our textbook library now includes 16 open business textbooks. Another option is the Open Education Network (OEN), a consortium based at the University of Minnesota in Minneapolis. Comprising more than 330 educational institutions, the OEN offers an Open Textbook Library with 1,459 open textbooks, including more than 130 business titles.

Educators also can find relevant OER on other platforms—often within their own institutions. Many universities now have dedicated OER librarians and coordinators to aid instructors in adopting open resources in their classrooms. For example, the University of Nebraska at Lincoln provides a guide on OER directed to campus librarians, and the OEN even offers a certificate in open education librarianship. Unite!, an alliance of European universities, has developed its own course to train educators on using OER effectively.

Open educational resources offer a range of opportunities for educators not only to enhance students’ educational experiences, but also to reach broader audiences. Fortunately, learning how to adopt, create, and improve OER has never been easier. Educators can simply conduct a quick Internet search, meet with their institutions’ OER librarians or program coordinators, chat with a colleague—or even ask ChatGPT—to point themselves in the right direction.

  • artificial intelligence
  • digital transformation
  • higher education
  • learner engagement
  • learner success
  • microcredentials
  • online learning


Efficiency in education

  • Published: 21 March 2017
  • Volume 68, pages 331–338 (2017)


Jill Johnes, Maria Portela & Emmanuel Thanassoulis (ORCID: orcid.org/0000-0002-3769-5374)


Education is important at national, local and individual levels. Its benefits accrue both to society and to individuals, and as such provision of education in many countries is paid for at least in part from the public purse. With competing demands for government funding, it is important for education to be provided as efficiently as possible. Efficiency occurs when outputs from education (such as test results or value added) are produced at the lowest level of resource (be that financial or, for example, the innate ability of students). This special issue is devoted to the topic of efficiency in education, and is well-timed given that governments around the world struggle with public finances in the wake of the global financial crisis of 2008. In this paper, we explore and provide an overview of the themes of the special issue and introduce the papers contained therein.


1 Introduction

Education is important at all levels. At national or state levels, there is increasing evidence that education is positively related to economic growth (Hanushek and Kimko, 2000; Hanushek and Woessmann, 2008, 2010, 2012; Hanushek et al, 2015). Hanushek and Woessmann (2008), for example, report, using a cross-country dataset, that for each additional year of schooling, the long-run growth rate of GDP per capita is 0.58 percentage points higher, and this value is statistically significant. While quantity of education is important, quality of education (usually measured by performance of students in standard international tests) is even more so: Hanushek and Woessmann (2008) conclude from the results of several studies that there is around a one percentage point gain in GDP growth rates for every one country-level standard deviation higher test performance.
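
A stylized version of the cross-country growth regressions behind these estimates (an illustration of the general form, not the authors' exact specification) regresses long-run growth on both the quantity and the quality of schooling:

$$ g_c = \alpha + \beta_1 S_c + \beta_2 T_c + \gamma' X_c + \varepsilon_c, $$

where $g_c$ is the long-run growth rate of GDP per capita in country $c$, $S_c$ is average years of schooling, $T_c$ is a test-score measure of quality, and $X_c$ collects other controls. The 0.58 percentage point figure corresponds to an estimate of $\beta_1$; the studies cited above find that once $T_c$ is included, quality carries much of the explanatory power.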

In addition to these benefits to society, education is also important in determining lifetime returns of individuals (see, for example, Psacharopoulos, 1994 ; Psacharopoulos and Patrinos, 2004 ; Walker and Zhu, 2008 ; Colclough et al, 2010 ; Chevalier, 2011 ; Walker and Zhu, 2011 ). For example, the private rate of return to investment in an additional year of schooling in a developed economy such as the United States is of the order of 10% per year in real terms (Psacharopoulos and Patrinos, 2004 ). This is likely to be higher for less developed countries (Psacharopoulos and Patrinos, 2004 ), and might vary by level of education (Colclough et al, 2010 ).

Some of the effects of education are clearly beneficial to society as a whole (social or external returns), while others are confined solely to the individual (and are therefore private). The existence of substantial social and external benefits from education (McMahon, 2004 ) justifies its public provision. Thus, compulsory education is typically funded from the public purse, while further and higher education, which is traditionally seen to have a greater proportion of private benefits than primary and secondary education, is usually only partially funded by government.

With competing demands for public money, however, it is important that resources for education are used efficiently: there have been few attempts to evaluate the costs of inefficiency in education, but one study suggests that the losses from inefficiency in secondary education are under 1% of potential GDP (Taylor, 1994 ). In addition, the results surrounding the relationship between education and growth suggest that it is important to distinguish between the quantity of education provided and the quality of provision. This has important implications for studies of efficiency in education since measures of quality are traditionally more difficult to derive than measures of quantity.

It is useful to distinguish at the outset between the terms ‘efficiency’ and ‘effectiveness’. Efficiency refers to ‘doing things right’, while effectiveness relates to ‘doing the right things’ (Drucker, 1967 ). Thus, in the context of education, efficient use of resources (be that financial or the innate ability of students) occurs when the observed outputs from education (such as test results or value added) are produced at the lowest level of resource; effective use of resources ensures that the mix of outcomes from education desired by society are achieved. It is efficiency (rather than effectiveness) of education with which this special issue is largely concerned.

Identifying how efficiently education is provided has challenged researchers over the decades. Development of frontier estimation techniques (in the late 1970s) such as data envelopment analysis (DEA) (Charnes et al, 1978 , 1979 ; Banker et al, 1984 ) and stochastic frontier analysis (SFA) (Aigner et al, 1977 ; Battese and Corra, 1977 ; Meeusen and van den Broeck, 1977 ) led to an expanding literature on efficiency in the education context. Education institutions (such as schools or universities) are seen as multi-product organisations producing an array of outputs from various inputs. Frontier estimation methods can be used to estimate cost functions or production frontiers for these institutions from which efficiency estimates can be derived.
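
For reference, the canonical input-oriented, constant-returns-to-scale DEA model of Charnes et al (1978) evaluates institution $0$, with inputs $x_{i0}$ and outputs $y_{r0}$, against the $n$ institutions in the sample via the linear programme

$$ \min_{\theta,\lambda}\ \theta \quad \text{subject to} \quad \sum_{j=1}^{n}\lambda_j x_{ij} \le \theta x_{i0},\ i=1,\dots,m; \qquad \sum_{j=1}^{n}\lambda_j y_{rj} \ge y_{r0},\ r=1,\dots,s; \qquad \lambda_j \ge 0, $$

so that $\theta^{*}=1$ indicates an institution on the efficient frontier and $\theta^{*}<1$ gives the proportion to which its inputs could be scaled down. The variable-returns model of Banker et al (1984) adds the convexity constraint $\sum_j \lambda_j = 1$.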

This special issue represents a timely reflection on efficiency in education as countries struggle to recover from the global financial crisis (which started circa 2008) and its effect on public funding. The special issue grew out of (but was not confined to) a two-day workshop on efficiency in education which took place in London in 2014. This introductory paper is structured in seven sections, of which this is the first. The remaining sections provide an overview of the themes addressed by the special issue and introduce the papers featured within.

2 Frontier estimation methods: a literature review

In line with the overarching theme of the special issue, ‘Efficiency in education: a review of literature and a way forward’ by De Witte and López-Torres focuses exclusively on reviewing the literature on efficiency (rather than effectiveness) in education. The paper, aimed at experienced researchers in the field, provides a comprehensive overview of frontier efficiency measurement techniques and their application in the education context up to 2015. A unique feature of this review compared with previous ones (for example Worthington, 2001; Johnes, 2004; Emrouznejad et al, 2008) is that it bridges the gap between the parametric (generally in the form of regression or SFA) education economics literature and the non-parametric (typically in the form of DEA) efficiency literature. This is a useful contribution, and it draws out hitherto unremarked connections between themes in the two strands of literature.

This paper provides an excellent resource to researchers in the field as it covers studies based on various levels of analysis (individual students, institutions and nations), identifies the datasets and measures of inputs and outputs which have been used in past papers and details the possible non-discretionary or environmental variables which are relevant in education studies. Discussion of methodological concerns revolves around endogeneity and its sources, in particular omitted variable bias, measurement error, selection bias and simultaneous causality issues. This leads to a discussion and comparison of each of these problems in the parametric and non-parametric contexts. The efficiency (non-parametric) literature is criticised for largely ignoring the possible detrimental effects of endogeneity on efficiency while devoting too much energy to minor methodological details.

A particular contribution of the review concerns the links made between parametric and non-parametric approaches in four cases. First of all, matching analysis is compared to conditional efficiency. Second, quantile regression is related to partial frontiers. Third, difference-in-difference analysis is compared to meta-frontier analysis. Fourth, it is noted that value added studies are more prevalent in the economics of education literature than in the efficiency literature, where they are relatively rare. Mutual benefits, it is argued, could be made in each of these four areas if researchers in one field learnt from those in the other.

3 Assessing equity and effectiveness in resource allocation for primary and secondary education

According to the review by De Witte and López-Torres in this special issue, educational studies may focus on several levels (university, school/high school, district, county and country), but only a small number of frontier-based efficiency studies have focused on country or multi-country analysis. There are several reasons why authors may avoid cross-country efficiency analyses. First, comparable data at national level can be difficult to obtain, although the availability of datasets such as TIMSS (the Trends in International Mathematics and Science Study), PIRLS (the Progress in International Reading Literacy Study) and PISA (the Programme for International Student Assessment) has made it possible to compare countries based on pupil attainment. Second, an assumption underlying frontier estimation is that the units of assessment face the same production conditions and technology. This assumption is difficult to maintain in a cross-country framework, especially where the sample of countries is particularly diverse. The heterogeneity of country technologies and education policies may therefore hinder the comparability of the results, but at the same time, cross-country analysis is the only way to compare and benchmark educational policies across countries. Some examples of cross-country analyses include Afonso and St Aubyn (2005, 2006), Giménez et al (2007) and Thieme et al (2012).

In this issue, Cordero, Santin and Simancas-Rodriguez, in their paper ‘Assessing European primary school performance through a conditional nonparametric model’, contribute to the cross-country empirical literature by applying a frontier-based method to assess the efficiency of primary schools in 16 European countries (based on data from PIRLS, 2011). Efficiency is assessed through an order-m non-parametric approach using a single output (average results on the PIRLS reading test) and inputs relating to the prior achievement of students and to school resources such as teachers, computers and instructional hours. The importance of the environment in which schools operate is stressed in this paper and taken into account in a second-stage analysis, where country and school contextual factors are considered to account for the heterogeneity of countries and schools. The findings reveal that country-specific factors have a greater influence on efficiency than school-specific factors, highlighting the importance of benchmarking countries’ educational policies.

Much is being done on cross-country analyses by the OECD, whose report on equity and quality in education we highlight (OECD, 2012 ). Cross-country comparisons focus regularly on funding and educational expenditure issues (Afonso and St Aubyn, 2005 , 2006 ), but the general consensus appears to be that providing more money and resources to schools is not enough to improve their quality and their students’ performance (Hanushek, 2003 ). The way the money (or funding) is allocated, however, is a means by which governments can improve equity between schools facing different environments (typically a harsher environment is one where the percentage of economically and culturally disadvantaged students is higher). These issues are at the heart of the papers in this special issue by Haelermans and Ruggiero entitled ‘Nonparametric estimation of the cost of adequacy in education: the case of Dutch schools’ and by Weber, Grosskopf, Hayes and Taylor (henceforth Weber et al ) entitled ‘Would weighted-student funding enhance intra-district equity in Texas? A simulation using DEA’.

These papers represent a timely contribution to the literature given the current interest in allocation of funding in Europe in response to the 2008 economic crisis (see European Commission, 2014 for the various funding mechanisms of public sector schools). In England, for example, the Government has recently produced a consultation document on the funding of schools (Department for Education, 2016 ). A major part of the proposal is a move away from block funds allocated to schools on the basis of historical costs and towards a funding mechanism which removes inequities by allocating a lump sum to schools and incorporating a national mechanism for dealing with the extra costs faced by low-population areas with small schools. In Portugal, a new formula for the financing of higher education institutions was put forward in July 2015, but public primary and secondary schools are still financed based on approved budgets.

The case of the Netherlands is analysed in this special issue by Haelermans and Ruggiero, who show that schools in harsher environments do indeed receive extra funds; however, the extra funding does not compensate for the excess costs of achieving acceptable standards (the authors derive the cost required for schools to achieve a certain standard of performance deemed acceptable). The minimum cost to achieve these standards is called adequacy by the authors (see also Levačić, 2008). Results further suggest that the minimum costs to reach standards for schools located in favourable environments are about 70% of the costs of schools in harsher environments, which testifies to the importance of taking the environment of schools into account in efficiency and effectiveness studies.

In Weber et al’s paper, the authors also tackle financing issues, this time in the US (schools in Texas districts), linking these with equity issues. The equity the authors are interested in is not equity of school budgets but equity of school outcomes, analysed under two budget scenarios: (1) the current budget and (2) a simulated budget determined by weighted student funding, based on the schools’ number and type of students. The main results show that policies that reduce inefficiency tend to enhance equity as well. The paper also suggests that weighted student funding may be a way to reduce inequalities, but cautions that, for inefficient schools, an enhanced budget may not resolve their inefficiencies and inequalities. That is, there are winners (schools that would see their budgets increase under weighted student funding) and losers (schools that would see their budgets shrink), but extra funds will ultimately benefit only efficient schools, which are better able to use the extra resources. This paper therefore links three important issues in education: funding, efficiency and equity (see also Woessmann, 2008 for links between efficiency and equity of schools in the EU). In addition, Weber et al contribute to and extend the literature on school funding formulae (Levačić, 2008; BenDavid-Hadar and Ziderman, 2011).

4 Assessing aspects of efficiency and productivity in tertiary education

As noted earlier, education (including higher education) contributes to economic growth; higher education also receives public funding in many countries, so it is important to understand productivity growth in universities. The paper by Edvardsen, Førsund and Kittelsen (henceforth Edvardsen et al), entitled 'Productivity development of Norwegian institutions of higher education 2004–2013', provides an excellent example of how a Malmquist productivity index (including the computation of its components) can be used to inform policy makers and managers. The study covers universities in Norway over a 10-year period. With only a small number of exceptions, previous studies of higher education productivity growth (Flegg et al, 2004; Carrington et al, 2005; Johnes, 2008; Worthington and Lee, 2008; Kempkes and Pohl, 2010; Margaritis and Smart, 2011) rely on point estimates of productivity change. This study instead applies a bootstrap procedure (Simar and Wilson, 1998, 1999, 2000) to the Malmquist productivity index (MPI), thereby taking sampling variation into account. It differs from Parteka and Wolszczak-Derlacz (2013), which also applies bootstrap methods in the MPI context, in that it (i) derives and examines the components of the MPI and (ii) visually inspects productivity change in the context of labour input changes.

The production relationship is defined with two inputs and four outputs. The initial analysis of the components of the MPI (catch-up and frontier shift) suggests that the two measures move in parallel until 2009, after which frontier shift grows markedly while the catch-up measure gradually deteriorates. Distributions of productivity change across universities are examined in three time blocks and reveal a general picture in which the group of institutions with a significant productivity decrease is shrinking while the group with a productivity increase is expanding.
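For reference, the output-oriented MPI between periods $t$ and $t+1$ in the standard Färe et al form (Edvardsen et al may use a variant) decomposes as

$$
M \;=\;
\underbrace{\frac{D^{t+1}(x^{t+1},y^{t+1})}{D^{t}(x^{t},y^{t})}}_{\text{catch-up (efficiency change)}}
\times
\underbrace{\left[\frac{D^{t}(x^{t+1},y^{t+1})}{D^{t+1}(x^{t+1},y^{t+1})}\cdot\frac{D^{t}(x^{t},y^{t})}{D^{t+1}(x^{t},y^{t})}\right]^{1/2}}_{\text{frontier shift (technical change)}},
$$

where $D^{s}(x,y)$ is the distance function measured against the period-$s$ frontier. The Simar–Wilson bootstrap re-estimates these distance functions over resampled data to attach confidence intervals to $M$ and its two components.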

The authors note that it would be interesting to extend the study to examine the relationship between size and productivity growth, and in particular the question of whether merging institutions might increase productivity; the effect of merging on both efficiency and productivity is largely unresearched (Johnes, 2014). While there are some mergers in this dataset, their small number precludes a more detailed study at present, though such a study may become possible as the database grows.

5 Using student ratings to assess performance in tertiary education

There are two papers in this special issue that use students' views to assess efficiency in the higher education context: one by Thanassoulis, Dey, Petridis, Georgiou and Goniadis (henceforth Thanassoulis et al), entitled 'Evaluating higher education teaching performance using combined analytic hierarchy process and data envelopment analysis', and another by Sneyers and De Witte, entitled 'The interaction between dropout, graduation rates and quality ratings in universities'. They are distinct, however, in that one (Thanassoulis et al) uses student feedback to assess the performance of individual tutors, while the other (Sneyers and De Witte) uses student satisfaction in a model with both graduation and dropout rates to examine efficiency at programme level. Much of the extant literature on efficiency and frontier estimation in higher education focuses on the university or the department as the unit of assessment (exceptions include Dolton et al, 2003, Johnes, 2006a, b and Barra and Zotti, 2016, whose empirical analyses are at the student level, and Colbert et al, 2000, who examine efficiency in the context of MBA programmes). These two papers therefore offer original contributions by providing approaches for evaluating efficiency at tutor and programme levels which, as established in the review paper by De Witte and López-Torres in this special issue, have not previously been examined.

The paper by Thanassoulis et al deals with the assessment of the teaching efficiency of academic staff. The method it proposes combines the Analytic Hierarchy Process (AHP) and DEA to arrive at an overall assessment of a tutor's teaching performance. Since a tutor normally also carries out research, the method additionally allows the tutor to be assessed given their research performance. A crucial feature is that the teaching dimension reflects the value judgements made by the students at the receiving end of the teaching; this is a key point of departure from previous studies in this area. The basic premise is that students, depending perhaps on gender, career aspirations and type of course (e.g. optional vs compulsory), may attach different weights to the criteria, deeming some more important than others. The different weights are then used in the computation of a mean aggregate teaching score per tutor, operationalized by AHP (Saaty, 1980). The aggregate teaching grade (or grades), along with measures of the tutor's research output, are then used as outputs in a DEA model, set against the tutor's salary and teaching experience.
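A minimal sketch of the AHP step may help (our illustration: the criteria, comparison matrix and scores below are hypothetical, and the paper's actual pipeline aggregates over many student responses). Saaty's principal-eigenvector method turns a reciprocal pairwise-comparison matrix into criterion weights, which then aggregate a tutor's criterion scores into one teaching score:

import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Normalised priority weights = principal eigenvector of the matrix."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = np.abs(eigvecs[:, np.argmax(eigvals.real)].real)
    return principal / principal.sum()

# Hypothetical comparisons over three teaching criteria:
# clarity vs feedback vs engagement (A[i, j] = importance of i over j).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w = ahp_weights(A)
scores = np.array([0.8, 0.6, 0.7])   # tutor's ratings on the criteria (0-1 scale)
teaching_score = float(w @ scores)   # aggregate teaching output, later fed to DEA
print(w.round(3), round(teaching_score, 3))

An aggregate score of this kind, computed per student group and averaged, would then enter the DEA model as a teaching output alongside research outputs, with salary and experience as inputs.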

The authors illustrate their approach using real data (modified for confidentiality) on these variables for teachers at a Greek university. The DEA model is solved to estimate the scope for a teacher to improve performance depending on the relative emphasis given to teaching versus research. Notably, similar estimates of the scope to improve teaching are obtained whether emphasis is placed solely on improving teaching or equally on improving teaching and research. This suggests that teaching and research are largely separable, and that poor teaching performance is not generally compensated for by good research performance. Information of this type can help a teacher set aspiration levels for improvement in teaching, depending on whether the tutor is to focus on teaching alone or on both teaching and research.

The paper by Sneyers and De Witte, in this special issue, addresses the use of first-year student dropout rates (see footnote 3), programme quality ratings and graduation rates (see footnote 4) as indicators of university performance in the distribution of funding. In the Netherlands, for example, 7% of the higher education budget is earmarked for performance mainly on these three indicators, yet there is little work to date on the interaction between them. Is it possible, for example, to perform well along all three dimensions simultaneously? Given that dropout at the end of the first year at university could actually be a means of selecting the best and most motivated students to go forward, it is important to examine graduation rates and quality ratings conditional on the first-year dropout rate. Specifically, the paper compares programmes on graduation rates and quality ratings (conditional on first-year dropout rates) and examines the programme and institutional characteristics which underpin performance.

The paper is original in two ways. First, the level of analysis is the programme (rather than, for example, the institution or department). Second, the paper applies a non-parametric conditional efficiency method with continuous environmental variables (Cazals et al, 2002; Daraio and Simar, 2005) and extends it to include discrete environmental variables (De Witte and Kortelainen, 2013). The significance of the effects of environmental variables on performance at programme level can be derived using this approach.
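To convey the conditioning idea in its simplest form, the sketch below computes an input-oriented FDH efficiency score in which the reference set is restricted to peers with a similar value of one continuous environmental variable. This is a deliberately crude uniform-kernel window standing in for the smoother kernel weighting and robust frontiers of the cited literature, and all names and data are hypothetical:

import numpy as np

def conditional_fdh_input_eff(x0, y0, z0, X, Y, Z, h):
    """Input-oriented FDH efficiency of (x0, y0), conditioning the
    reference set on peers whose environment Z lies within h of z0."""
    in_window = np.abs(Z - z0) <= h                  # environmental neighbourhood
    dominates = in_window & np.all(Y >= y0, axis=1)  # peers producing at least y0
    if not dominates.any():
        return 1.0  # no comparator in this environment: no measured inefficiency
    # FDH input efficiency: theta = min over dominating peers of max_i x_ji / x0_i
    return float(np.max(X[dominates] / x0, axis=1).min())

# Tiny synthetic illustration (values hypothetical).
rng = np.random.default_rng(0)
X = rng.uniform(1, 2, size=(50, 2))   # two inputs per programme
Y = rng.uniform(1, 2, size=(50, 1))   # one output (e.g. graduation rate)
Z = rng.uniform(0, 1, size=50)        # continuous environmental variable
print(conditional_fdh_input_eff(X[0], Y[0], Z[0], X, Y, Z, h=0.2))

De Witte and Kortelainen's extension additionally conditions on discrete variables, which in this crude version would amount to exact matching on the discrete characteristic.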

The study employs a rich dataset for universities in the Netherlands. The authors find considerable variation in the extent to which the first-year dropout rate (and the selectivity it implies) translates into higher graduation rates and programme quality ratings. Some programmes are found to be inefficient in terms of their graduation rates and quality ratings (given the incidence of first-year dropout) and could learn from the practices characterising the efficient programmes. There is clear evidence that particular programme characteristics influence graduation rates and quality ratings. These results therefore have clear policy implications, including, for example, that policies formulated at programme level would have greater impact than those formulated at institution level.

6 Methodological papers with special reference to education

This issue contains two papers with a primary focus on methodology and a secondary focus on an empirical application: one by Mayston, entitled 'Convexity, quality and efficiency in education', and the other by Karagiannis and Paschalidou, entitled 'Assessing research effectiveness: A comparison of alternative parametric models'.

The paper by Mayston addresses the consequences of incorrectly assuming convexity of the production possibility set (PPS) in DEA, as can happen in assessments in the education context. The question is of course not new, and many authors have questioned the assumption of convexity in DEA in general. For example, Farrell (1959) notes that indivisibilities in production or economies of specialisation could lead to a non-convex PPS. He concludes, however, that in the framework of competitive markets, lack of convexity in production, or indeed in indifference curves, does not undermine 'received economic theory' so long as each producer accounts for a negligible part of total output. Within the extant DEA literature, it is well understood that in many contexts the PPS may not be convex. Free Disposal Hull (FDH) technologies, introduced by Deprins et al (1984), can be deployed to measure efficiency, set performance targets, and so on, when convexity of the PPS cannot be assumed. An interesting empirical application in which DEA and FDH are used on the same dataset is that by Cullinane et al (2005). They assess container ports on efficiency, where inputs in the form of indivisible capital items such as berths, gantry cranes and straddle carriers can lead to a non-convex PPS. They conclude that the FDH method in some cases fails to set demanding targets and can make units appear efficient simply for lack of comparators. Its advantage is that when units are not efficient, the benchmarks exist in real life and can serve as role models for less efficient units to emulate. DEA with the assumption of a convex PPS, on the other hand, is more discriminating in terms of efficiency and so better for setting more challenging performance targets. This, however, can come at the expense of using virtual rather than real units as role-model benchmarks for inefficient units.
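In standard notation (a textbook statement, not specific to Mayston's paper), the two technologies constructed from observed units $(x_j, y_j)$, $j = 1, \dots, n$, are

$$
T_{\mathrm{FDH}} = \big\{(x,y) : x \ge x_j,\; y \le y_j \ \text{for some } j \big\},
\qquad
T_{\mathrm{DEA}} = \Big\{(x,y) : x \ge \sum_j \lambda_j x_j,\; y \le \sum_j \lambda_j y_j,\; \sum_j \lambda_j = 1,\; \lambda_j \ge 0 \Big\},
$$

so that $T_{\mathrm{DEA}}$ (here under variable returns to scale) is the convexification of $T_{\mathrm{FDH}}$. Every FDH-efficient unit remains feasible in $T_{\mathrm{DEA}}$, but efficiency scores measured against the larger convex set can only be equal or lower, which is why DEA is the more discriminating of the two.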

The Mayston paper argues that, in the specific context of DEA assessments of comparative efficiency in education, convexity may not hold because outputs have a quality dimension in a way that differs from output quality in other contexts. In addition, lack of convexity can arise because physical capital assets such as lecture theatres and libraries are indivisible, and because intangible assets in the form of knowledge specialisation by academics can also lead to indivisibilities in efficient research output. It is suggested that we cannot simply assume convexity in an educational context: convexity would require that gains from complementarity between research and teaching quality be sufficiently strong to make up for the gains forgone from the 'indivisible' specialised knowledge needed to produce original contributions to research.

The situation is further complicated by two facts. First, in the educational context, assessments of research and teaching are reflected in grades, and each grade covers a range of performance. Second, the rewards for grades are highly non-linear (e.g. in UK university research assessments, the financial benefits from achieving a grade 4 are much higher than those from achieving a grade 3). The paper argues that factors of this type in the educational context both militate against convexity of the PPS and lead to non-linear utilities over outputs.

The effect of assuming convexity in DEA when it does not hold is that the results can understate the true technical efficiency of a unit while at the same time overstating its allocative efficiency. This can happen because the 'convex' technically efficient point can lie outside the non-convex frontier. Caution is therefore needed, in particular, when decomposing overall inefficiency into allocative and technical components.

The paper by Karagiannis and Paschalidou compares the Benefit of the Doubt (BoD) model of Cherchye et al (2007) and the model of Kao and Hung (2003) (K&H) for assessing entities characterised by multiple indices of performance. Further, it addresses the case where there is no traditional set of inputs to set against the indices; the authors refer to this as assessing the 'effectiveness' rather than the 'efficiency' of the entities' use of resources. Each of the two methods is used under three alternative approaches to deriving the weights by which the performance indices on the criteria are aggregated into an overall index. The authors illustrate the six resulting approaches using data on the research outputs of faculty at a Greek university.

The BoD model is essentially equivalent to a DEA model in which the PPS is formed under a constant returns to scale (CRS) technology, the input level is a notional 1 for every entity (academics in this case), and the output levels are the measures of attainment on each criterion (e.g. papers, citations, books). The K&H model is similar to the BoD model in that it also estimates an optimal set of weights to assign to each criterion; however, it does so under the sole restriction that the weights add up to 1, rather than under the traditional DEA restrictions. This is equivalent to computing, for each entity in turn, the best possible weighted average of its criteria values. Such a weighted average makes better sense in practice when attainment indices on the criteria are being combined into a composite index of overall performance. The paper notes that the K&H and BoD models produce related solutions when the measures of attainment on each criterion range between 0 and 1 (e.g. when they are indices).
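In linear programming terms (standard statements of these models; the notation is ours), for entity $0$ with attainment indices $y_{r0}$ on criteria $r = 1, \dots, s$:

$$
\mathrm{BoD}_0 = \max_{u \ge 0} \sum_{r=1}^{s} u_r y_{r0} \quad \text{s.t.} \quad \sum_{r=1}^{s} u_r y_{rj} \le 1 \ \ \forall j,
\qquad
\mathrm{KH}_0 = \max_{w \ge 0} \sum_{r=1}^{s} w_r y_{r0} \quad \text{s.t.} \quad \sum_{r=1}^{s} w_r = 1.
$$

Under full flexibility the K&H programme simply places all weight on entity $0$'s best criterion, which is one motivation for the weight restrictions discussed next.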

The paper proceeds to explain how the two models differ when the weight flexibility described above is restricted. Three degrees of weight flexibility are used with each model, ranging from full flexibility (each entity is free to choose the weights assigned to each performance index) to no flexibility, reflected in a common set of weights (every entity assigns the same weights to each performance index). The paper uses data on the research outputs of academics from the authors' own institution. One key finding is that there is greater variability in results within each of the two methods (BoD vs K&H), depending on how the weights on the criteria are restricted, than between the methods themselves when the same type of restriction is applied. Faculty are found to follow a more or less bi-modal distribution in research effectiveness, with very few achieving well on research output and most achieving poorly. The findings clearly have managerial implications for improving faculty research output.

7 Concluding remarks

In this introductory paper to the special issue, we have presented an overview of the papers that constitute it, highlighting their main contributions and findings. We have also placed these papers in the context of the existing literature on efficiency in education, drawing the reader's attention to some fundamental issues. The issues addressed here include: cross-country analyses and their importance for educational policy benchmarking; the need to understand the impact of funding policies on the quality, efficiency and equity of education; the need to analyse educational issues over time in dynamic settings; the importance of using student feedback in tertiary education efficiency analysis, as well as of assessing efficiency at the level of the individual; and, finally, the importance of understanding methodological assumptions behind efficiency models, such as convexity, and of applying alternative assessment models to the same data and reconciling the findings.

The foregoing list covers current and pertinent issues in education, but many others could have been raised. Further issues include the impact of certain educational practices (such as student repetition or streaming) in primary and secondary education; the trade-off or complementarity between teaching and research outputs in university assessments; funding and financing in universities and their impact on efficiency; and the measurement of the quality of both inputs and outputs at all levels of education.

We hope this summary will enable readers to identify at a glance the papers within this special issue that best fit their research interests.

Notes

1. The figure is lower, at 0.32, once regional differences are taken into account, but is still statistically significant.

2. Thus, in this paper, efficiency is examined subject to a certain level of effectiveness.

3. Defined as the percentage of full-time bachelor's students ceasing their education at the university during the first year of enrolment. Dropout can occur for personal or academic reasons, including non-attainment of the credits necessary to continue.

4. Defined as the share of re-enrolled full-time bachelor's students completing their degree at that institution within the nominal number of study years plus one year.

References

Afonso A and St Aubyn M (2005). Non-parametric approaches to education and health efficiency in OECD countries. Journal of Applied Economics 8 (2):227–246.


Afonso A and St Aubyn M (2006). Cross-country efficiency of secondary education provision: A semi-parametric analysis with non-discretionary inputs. Economic Modelling 23 (3):476–491.


Aigner D, Lovell CAK and Schmidt P (1977). Formulation and estimation of stochastic frontier production models. Journal of Econometrics 6 (1):21–37.

Banker RD, Charnes A and Cooper WW (1984). Some models for estimating technical and scale inefficiencies in data envelopment analysis. Management Science 30 (9):1078–1092.

Barra C and Zotti R (2016). Managerial efficiency in higher education using individual versus aggregate level data: Does the choice of decision making units count? Managerial and Decision Economics 37 (2):106–126.

Battese GE and Corra GS (1977). Estimation of a production frontier model: With application to the pastoral zone of Eastern Australia. Australian Journal of Agricultural Economics 21 (3):169–179.

BenDavid-Hadar I and Ziderman A (2011). A new model for equitable and efficient resource allocation to schools: The Israeli case. Education Economics 19 (4):341–362.

Carrington R, Coelli T and Rao DSP (2005). The performance of Australian universities: Conceptual issues and preliminary results. Economic Papers: A Journal of Applied Economics and Policy 24 (2):145–163.

Cazals C, Florens JP and Simar L (2002). Nonparametric frontier estimation: A robust approach. Journal of Econometrics 106 (1):1–25.

Charnes A, Cooper WW and Rhodes E (1978). Measuring the efficiency of decision making units. European Journal of Operational Research 2 (4):429–444.

Charnes A, Cooper WW and Rhodes E (1979). Measuring the efficiency of decision making units: A short communication. European Journal of Operational Research 3 (4):339.

Cherchye L, Moesen W, Rogge N and Van Puyenbroeck T (2007). An introduction to ‘benefit of the doubt’ composite indicators. Social Indicators Research 82 (1):111–145.

Chevalier A (2011). Subject choice and earnings of UK graduates. Economics of Education Review 30 (6):1187.

Colbert A, Levary RR and Shaner MC (2000). Determining the relative efficiency of MBA programs using DEA. European Journal of Operational Research 125 (3):656–669.

Colclough C, Kingdon G and Patrinos H (2010). The changing pattern of wage returns to education and its implications. Development Policy Review 28 (6):733.

Cullinane K, Song DW and Wang T (2005). The application of mathematical programming approaches to estimating container port production efficiency. Journal of Productivity Analysis 24 (1):73–92.

Daraio C and Simar L (2005). Introducing environmental variables in nonparametric frontier models: A probabilistic approach. Journal of Productivity Analysis 24 (1):93–121.

De Witte K and Kortelainen M (2013). What explains the performance of students in a heterogeneous environment? Conditional efficiency estimation with continuous and discrete environmental variables. Applied Economics 45 (17):2401–2412.

Department for Education (2016). Schools National Funding Formula.

Deprins D, Simar L and Tulkens H (1984). Measuring labor efficiency in post offices. In: Marchand M, Pestieau P and Tulkens H (eds). The Performance of Public Enterprises: Concepts and Measurements . Elsevier: Amsterdam.

Dolton PJ, Marcenaro OD and Navarro L (2003). The effective use of student time: A stochastic frontier production function case study. Economics of Education Review 22 (6):547–560.

Drucker PF (1967). The Effective Executive . Heinemann: London.

Emrouznejad A, Parker BR and Tavares G (2008). Evaluation of research in efficiency and productivity: A survey and analysis of the first 30 years of scholarly literature in DEA. Socio-Economic Planning Sciences 42 (3):151–157.

European Commission (2014). Financing Schools in Europe: Mechanisms, Methods and Criteria in Public Funding . Brussels, Education, Audiovisual and Culture Executive Agency (EACEA)/Eurydice Report.

Farrell MJ (1959). The convexity assumption in the theory of competitive markets. Journal of Political Economy 67 (4):377–391.

Flegg T, Allen D, Field K and Thurlow TW (2004). Measuring the efficiency of British universities: A multi-period data envelopment analysis. Education Economics 12 (3):231–249.

Giménez VM, Prior D and Thieme C (2007). Technical efficiency, managerial efficiency and objective-setting in the educational system: An international comparison. Journal of the Operational Research Society 58 (8):996–1007.

Hanushek EA (2003). The failure of input-based schooling policies. Economic Journal 113 (485):F64–F98.

Hanushek EA and Kimko DD (2000). Schooling, labor-force quality, and the growth of nations. American Economic Review 90 (5):1184–1208.

Hanushek EA, Ruhose J and Woessmann L (2015). Economic gains for U.S. states from educational reform. CESifo Working Paper 5662, Center for Economic Studies and Ifo Institute.

Hanushek EA and Woessmann L (2008). The role of cognitive skills in economic development. Journal of Economic Literature 46 (3):607–668.

Hanushek EA and Woessmann L (2010). The economics of international differences in educational achievement. In: Hanushek EA, Machin SJ and Woesmann L (eds). Handbook of the Economics of Education , Volume 3. Elsevier: Amsterdam, pp 89–200.

Hanushek EA and Woessmann L (2012). The economic benefit of educational reform in the European Union. CESifo Economic Studies 58 (1):73–109.

Johnes J (2004). Efficiency measurement. In: Johnes G and Johnes J (eds). International Handbook on the Economics of Education. Edward Elgar: Cheltenham, pp 613–742.

Johnes J (2006a). Measuring efficiency: A comparison of multilevel modelling and data envelopment analysis in the context of higher education. Bulletin of Economic Research 58 (2):75–104.

Johnes J (2006b). Measuring teaching efficiency in higher education: An application of data envelopment analysis to economics graduates from UK universities 1993. European Journal of Operational Research 174 (1):443–456.

Johnes J (2008). Efficiency and productivity change in the English higher education sector from 1996/97 to 2004/05. The Manchester School 76 (6):653–674.

Johnes J (2014). Efficiency and mergers in English higher education 1996/97 to 2008/9: Parametric and non-parametric estimation of the multi-input multi-output distance function. The Manchester School 82 (4):465–487.

Kao C and Hung HT (2003). Ranking university libraries with a posteriori weights. Libri 53 (4):282–289.

Kempkes G and Pohl C (2010). The efficiency of German universities—some evidence from nonparametric and parametric methods. Applied Economics 42 (16):2063–2079.

Levačić R (2008). Financing schools: Evolving patterns of autonomy and control. Educational Management Administration & Leadership 36 (2):221–234.

Margaritis D and Smart W (2011). Productivity changes in Australian universities 1997-2005: A Malmquist analysis. In: 52nd Annual Conference of the New Zealand Association of Economics, Wellington, New Zealand, 29th June–1st July.

McMahon WW (2004). The social and external benefits of education. In: Johnes G and Johnes J (eds). International Handbook on the Economics of Education . Edward Elgar: Cheltenham, pp 211–259.

Meeusen W and van den Broeck J (1977). Efficiency estimation from Cobb-Douglas production functions with composed error. International Economic Review 18 (2):435–444.

OECD (2012). Equity and Quality in Education: Supporting Disadvantaged Students and Schools . OECD.

Parteka A and Wolszczak-Derlacz J (2013). Dynamics of productivity in higher education: Cross-European evidence based on bootstrapped Malmquist indices. Journal of Productivity Analysis 40 (1):67–82.

Portela MCAS and Thanassoulis E (2001). Decomposing school and school-type efficiency. European Journal of Operational Research 132 (2):357–373.

Psacharopoulos G (1994). Returns to investment in education: A global update. World Development 22 (9):1325–1343.

Psacharopoulos G and Patrinos H (2004). Human capital and rates of return. In: Johnes G and Johnes J (eds). International Handbook on the Economics of Education . Edward Elgar: Cheltenham, pp 1–57.

Ramalho EA, Ramalho JJS and Henriques PD (2010). Fractional regression models for second stage DEA efficiency analyses. Journal of Productivity Analysis 34 (3):239–255.

Saaty TL (1980). The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation . McGraw Hill: New York.

Simar L and Wilson PW (1998). Sensitivity analysis of efficiency scores: How to bootstrap in nonparametric frontier models. Management Science 44 (1):49–61.

Simar L and Wilson PW (1999). Estimating and bootstrapping Malmquist indices. European Journal of Operational Research 115 (3):459–471.

Simar L and Wilson PW (2000). Statistical inference in nonparametric frontier models: The state of the art. Journal of Productivity Analysis 13 (1):49–78.

Taylor LL (1994). An economy at risk? The social costs of school inefficiency. Economic Review 1994 (III):1–13.

Thieme C, Giménez V and Prior D (2012). A comparative analysis of the efficiency of national education systems. Asia Pacific Education Review 13 (1):1–15.

Walker I and Zhu Y (2008). The college wage premium and the expansion of higher education in the UK. Scandinavian Journal of Economics 110 (4):695–709.

Walker I and Zhu Y (2011). Differences by degree: Evidence of the net financial rates of return to undergraduate study for England and Wales. Economics of Education Review 30 (6):1177–1186.

Woessmann L (2008). Efficiency and equity of European education and training policies. International Tax and Public Finance 15 (2):199–230.

Worthington AC (2001). An empirical survey of frontier efficiency measurement techniques in education. Education Economics 9 (3):245–268.

Worthington AC and Lee BL (2008). Efficiency, technology and productivity change in Australian universities 1998-2003. Economics of Education Review 27 (3):285–298.


Acknowledgments

We would like to thank Professor Jonathan Crook and the journal editorial administrator, Ms Sarah Parry, for providing excellent support for this special issue. We are also grateful to all referees for their timely and critical reviews of the manuscripts.

Author information

Authors and Affiliations

The Business School, University of Huddersfield, Queensgate, Huddersfield, HD1 3DH, UK

Jill Johnes

CEGE, Católica Porto Business School, Porto, Portugal

Maria Portela

Aston Business School, Aston University, Birmingham, UK

Emmanuel Thanassoulis


Corresponding author

Correspondence to Jill Johnes.


About this article

Johnes, J., Portela, M. and Thanassoulis, E. (2017). Efficiency in education. Journal of the Operational Research Society 68:331–338. https://doi.org/10.1057/s41274-016-0109-z

Received: 14 June 2016. Accepted: 15 August 2016. Published: 21 March 2017. Issue date: April 2017.


Keywords

  • data envelopment analysis
  • frontier estimation
