How Technology is Changing Academic Research

  • Partner Content
  • Author: Ryan Smith, Qualtrics


Maybe you’ve heard that technology is disrupting education. Even President Obama’s top economic advisor recently went on record saying that the rise of Web-based learning will lower the costs and barriers to a good education and raise living standards around the world. Pretty ambitious stuff, and lots of very smart people agree. 

The digital revolution in education is full steam ahead, and it’s affecting more than just the classroom. Academic research is experiencing a high-tech makeover, as well, and it matters more than most of us know. For the average person, the scholarly pursuits of research faculty might seem harder to digest than storylines about free college, but when you realize that academic research impacts every aspect of our lives — including the economy, medicine and human behavior — then it’s much easier to swallow.

As is typical in every hype cycle, it’s hard to tell where we are in academia’s digital revolution. Some believe we’re just now experiencing a watershed moment, with the rise of online learning and MOOCs, in particular, signaling the arrival of a new age. Others rightly note that our climb to the digital mountaintop has been a gradual one. Regardless of your perspective, one thing is clear: technology has become the new backbone in the classroom and the lab.

On the research side, the movement to streamline the painstaking process of analysis began in the late 60s, when academics started using software like SPSS to do complex computations like linear regression instead of scribbling them out in longhand. Adding technology to the mix reduced the potential for human error and increased the speed of the research process. 
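To see what such software automates, here is a minimal, illustrative sketch (in Python with NumPy rather than SPSS syntax, and with made-up numbers) of an ordinary least-squares fit of the kind researchers once worked out by hand:

```python
# Illustrative only: a tiny ordinary least-squares fit of the kind
# statistical packages such as SPSS automated. The data are made up.
import numpy as np

study_hours = np.array([2.0, 4.0, 6.0, 8.0, 10.0])      # predictor
exam_scores = np.array([65.0, 70.0, 74.0, 82.0, 88.0])  # outcome

# Design matrix with an intercept column, then solve min ||Xb - y||^2.
X = np.column_stack([np.ones_like(study_hours), study_hours])
(intercept, slope), *_ = np.linalg.lstsq(X, exam_scores, rcond=None)

print(f"score is roughly {intercept:.2f} + {slope:.2f} * hours")
```

Packages like SPSS wrap this kind of computation, along with diagnostics and significance tests, behind a menu or a single line of syntax.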

Several decades later, cloud-based software started changing the way researchers collect data, as well. When we launched Qualtrics from my dad’s basement a decade ago, he was teaching MBAs at Brigham Young University and needed a software platform that was sophisticated enough for his research, but simple enough for his students to use.

We built Qualtrics because there wasn’t anything on the market at the time that fit the bill, and we soon saw that there was an enormous market in academia. From those humble academic roots, our tiny startup has grown to serve more than 5,000 organizations, including 1,300 universities worldwide, beginning with the Kellogg School of Management at Northwestern University. Our growth is a reflection of the increasing need for faster access to good data.

In the academic world, faculty understand this better than anyone because their careers hinge on publishing research in scholarly journals. Standing between every researcher and peer-reviewed publication, however, are mountains of messy logistics that must be removed to focus on what really matters: first-rate research design and analysis. 

A prime example is Corinne Bendersky, an associate professor in UCLA’s Anderson School of Management, who recently published a study in the Academy of Management Journal. Her research, which challenges basic assumptions about the performance of extroverts and neurotics at work, was made much easier by using technology to collect her research and experimental data.

In the past, polling a large organization for a study like this required printing thousands of surveys, buying stamped envelopes, and hoping that everyone receiving the questionnaire would complete it and mail it back. It would also require a major time investment from the organization being surveyed. Conducting experimental research would be even more complicated, requiring busy research aides to print and organize countless scenarios. Technology allowed Dr. Bendersky to avoid these costly hassles and give the business world a key insight on who to hire for high-performing teams.

Technology also removes the intimidation factor for students. As a student at the F.W. Olin Graduate School of Business at Babson, Su-Ting Yang used technology to collect data for the applied research required in her MBA program. For students like Yang, who have grown up around computers, it is much more intuitive to navigate a software interface than to design research on paper — and it’s more accurate, to boot. For Yang, the experience also prepared her for a career as a marketing research analyst at Nuance Communications where she uses the same techniques and technology every day.

Our world is being remade by technology at an increasing rate, and that’s exciting. Just as technology shrinks the world and democratizes information, it is also reshaping how we learn. This is important for a rising generation of students — and also for the researchers who drive the innovations that make life better.

Ryan Smith is co-founder and CEO of Qualtrics.


Caltech

Artificial Intelligence

Since the 1950s, scientists and engineers have designed computers to "think" by making decisions and finding patterns like humans do. In recent years, artificial intelligence has become increasingly powerful, propelling discovery across scientific fields and enabling researchers to delve into problems previously too complex to solve. Outside of science, artificial intelligence is built into devices all around us, and billions of people across the globe rely on it every day. Stories of artificial intelligence—from friendly humanoid robots to SkyNet—have been incorporated into some of the most iconic movies and books.

But where is the line between what AI can do and what is make-believe? How is that line blurring, and what is the future of artificial intelligence? At Caltech, scientists and scholars are working at the leading edge of AI research, expanding the boundaries of its capabilities and exploring its impacts on society. Discover what defines artificial intelligence, how it is developed and deployed, and what the field holds for the future.

Artificial Intelligence Terms to Know


What Is AI?

Artificial intelligence is transforming scientific research as well as everyday life, from communications to transportation to health care and more. Explore what defines AI, how it has evolved since the Turing Test, and the future of artificial intelligence.


What Is the Difference Between "Artificial Intelligence" and "Machine Learning"?

The term "artificial intelligence" is older and broader than "machine learning." Learn how the terms relate to each other and to the concepts of "neural networks" and "deep learning."


How Do Computers Learn?

Machine learning applications power many features of modern life, including search engines, social media, and self-driving cars. Discover how computers learn to make decisions and predictions in this illustration of two key machine learning models.
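The Caltech illustration itself is not reproduced here, but the rough sketch below shows the supervised-learning idea it describes: a model is fit to labeled examples and then makes a prediction about data it has not seen. It assumes the scikit-learn library, and the fruit data are invented for illustration.

```python
# A toy supervised-learning example (assumes scikit-learn is installed).
# The model "learns" a decision rule from labeled examples, then predicts.
from sklearn.tree import DecisionTreeClassifier

# Features: [weight in grams, smoothness 0-10]; labels: fruit type.
X_train = [[150, 8], [170, 9], [130, 3], [120, 2]]
y_train = ["apple", "apple", "orange", "orange"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)        # learn patterns from the labeled examples

print(model.predict([[160, 7]]))   # predict the label for an unseen example
```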


How Is AI Applied in Everyday Life?

While scientists and engineers explore AI's potential to advance discovery and technology, smart technologies also directly influence our daily lives. Explore the sometimes surprising examples of AI applications.


What Is Big Data?

The increase in available data has fueled the rise of artificial intelligence. Find out what characterizes big data, where big data comes from, and how it is used.


Will Machines Become More Intelligent Than Humans?

Whether or not artificial intelligence will be able to outperform human intelligence—and how soon that could happen—is a common question fueled by depictions of AI in movies and other forms of popular culture. Learn the definition of "singularity" and see a timeline of advances in AI over the past 75 years.


How Does AI Drive Autonomous Systems?

Learn the difference between automation and autonomy, and hear from Caltech faculty who are pushing the limits of AI to create autonomous technology, from self-driving cars to ambulance drones to prosthetic devices.


Can We Trust AI?

As AI is further incorporated into everyday life, more scholars, industries, and ordinary users are examining its effects on society. The Caltech Science Exchange spoke with AI researchers at Caltech about what it might take to trust current and future technologies.


What Is Generative AI?

Generative AI applications such as ChatGPT, a chatbot that answers questions with detailed written responses, and DALL-E, which creates realistic images and art from text prompts, became widely popular beginning in 2022, when companies released versions of their applications that members of the public, not just experts, could easily use.


Ask a Caltech Expert

Where can you find machine learning in finance? Could AI help nature conservation efforts? How is AI transforming astronomy, biology, and other fields? What does an autonomous underwater vehicle have to do with sustainability? Find answers from Caltech researchers.

Terms to Know

Algorithm

A set of instructions or sequence of steps that tells a computer how to perform a task or calculation. In some AI applications, algorithms tell computers how to adapt and refine processes in response to data, without a human supplying new instructions.

Artificial Intelligence

Artificial intelligence describes an application or machine that mimics human intelligence.

Automation

A system in which machines execute repeated tasks based on a fixed set of human-supplied instructions.

Autonomy

A system in which a machine makes independent, real-time decisions based on human-supplied rules and goals.

Big Data

The massive amounts of data that are coming in quickly and from a variety of sources, such as internet-connected devices, sensors, and social platforms. In some cases, using or learning from big data requires AI methods. Big data also can enhance the ability to create new AI applications.

Chatbot

An AI system that mimics human conversation. While some simple chatbots rely on pre-programmed text, more sophisticated systems, trained on large data sets, are able to convincingly replicate human interaction.

Deep Learning

A subset of machine learning. Deep learning uses machine learning algorithms but structures the algorithms in layers to create "artificial neural networks." These networks are modeled after the human brain and are most likely to provide the experience of interacting with a real human.

Human in the Loop

An approach that includes human feedback and oversight in machine learning systems. Including humans in the loop may improve accuracy and guard against bias and unintended outcomes of AI.

Model (computer model)

A computer-generated simplification of something that exists in the real world, such as climate change, disease spread, or earthquakes. Machine learning systems develop models by analyzing patterns in large data sets. Models can be used to simulate natural processes and make predictions.

Neural Networks

Interconnected sets of processing units, or nodes, modeled on the human brain, that are used in deep learning to identify patterns in data and, on the basis of those patterns, make predictions in response to new data. Neural networks are used in facial recognition systems, digital marketing, and other applications.
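As a rough companion sketch (not drawn from the Caltech material), the snippet below passes one input through a single layer of interconnected nodes using NumPy; real networks stack many such layers and learn their weights from data, whereas these weights are random and purely illustrative.

```python
# Minimal sketch of a single neural-network layer (NumPy only).
# Real networks stack many such layers and learn the weights from data.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.5, -1.2, 3.0])          # one input with 3 features

W = rng.normal(size=(4, 3))             # 4 nodes, each connected to 3 inputs
b = np.zeros(4)                         # one bias value per node

activations = np.maximum(0, W @ x + b)  # weighted sum + ReLU nonlinearity
print(activations)                      # outputs of the 4 nodes
```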

Singularity

A hypothetical scenario in which an AI system develops agency and grows beyond human ability to control it.

Training data

The data used to "teach" a machine learning system to recognize patterns and features. Typically, continual training results in more accurate machine learning systems. Conversely, biased or incomplete datasets can lead to imprecise or unintended outcomes.

Turing Test

An interview-based method proposed by computer pioneer Alan Turing to assess whether a machine can think.


Information technology articles from across Nature Portfolio

Information technology is the design and implementation of computer networks for data processing and communication. This includes designing the hardware for processing information and connecting separate components, and developing software that can efficiently and faultlessly analyse and distribute this data.


The dream of electronic newspapers becomes a reality — in 1974

Efforts to develop an electronic newspaper providing information at the touch of a button took a step forward 50 years ago, and airborne bacteria in the London Underground come under scrutiny, in the weekly dip into Nature's archive.

Latest Research and Reviews

Smart device interest, perceived usefulness, and preferences in rural Alabama seniors

  • Monica Anderson


Social signals predict contemporary art prices better than visual features, particularly in emerging markets

  • Kangsan Lee
  • Jaehyuk Park
  • Yong-Yeol Ahn


The analysis of pedestrian flow in the smart city by improved DWA with robot assistance

  • Huizhen Long


Evolutionary game analysis of data sharing among large and medium-sized enterprises in the perspective of platform empowerment


Using an epidemiological model to explore the interplay between sharing and advertising in viral videos


Faults locating of power distribution systems based on successive PSO-GA algorithm

  • Wenzhang Xu


News and Comment


Autonomous interference-avoiding machine-to-machine communications

An article in IEEE Journal on Selected Areas in Communications proposes algorithmic solutions to dynamically optimize MIMO waveforms to minimize or eliminate interference in autonomous machine-to-machine communications.

Combining quantum and AI for the next superpower

Quantum computing can benefit from advances made in artificial intelligence (AI) holistically across the tech stack; AI may even unlock completely new ways of using quantum computers. Simultaneously, AI can benefit from quantum computing by leveraging its expected future compute and memory power.

  • Martina Gschwendtner
  • Henning Soller
  • Sheila Zingg


How scientists are making the most of Reddit

As X wanes, researchers are turning to Reddit for insights and data, and to better connect with the public.

  • Hannah Docter-Loeb


AI image generators often give racist and sexist results: can they be fixed?

Researchers are tracing sources of racial and gender bias in images generated by artificial intelligence, and making efforts to fix them.


Why scientists trust AI too much — and what to do about it

Some researchers see superhuman qualities in artificial intelligence. All scientists need to be alert to the risks this creates.


MIT Technology Review


Augmenting the realities of work

Immersive AR/VR technologies can add greater value across workplaces and customer interactions, according to Blair MacIntyre, global head of Immersive Technology Research at JPMorgan Chase.


In association with JPMorgan Chase & Co.

Imagine an integrated workplace with 3D visualizations that augment presentations, interactive and accelerated onboarding, and controlled training simulations. This is the future of immersive technology that Blair MacIntyre, global head of Immersive Technology Research at JPMorgan Chase, is working to build. Augmented reality (AR) and virtual reality (VR) technologies can blend physical and digital dimensions together and infuse new innovations and efficiencies into business and customer experiences.

"These technologies can offer newer ways of collaborating over distance both synchronously and asynchronously than we can get with the traditional work technologies that we use right now," says MacIntyre. "It's these new ways to collaborate, ways of using the environment and space in new and interesting ways that will hopefully offer new value and change the way we work."

Many enterprises are integrating VR into business practices like video conference calls. But having some participants in a virtual world and some sidelined creates imbalances in the employee experience. MacIntyre's team is looking for ways to use AR/VR technologies that can be additive, like 3D data visualizations that enhance financial forecasting within a bank, not ones that overhaul entire experiences.

Although the potential of AR/VR is quickly evolving, it's unlikely that customers’ interactions or workplace environments will be entirely moved to the virtual world anytime soon. Rather, MacIntyre's immersive technology research looks to infuse efficiencies into existing practices.

"It's thinking about how the technologies integrate and how we can add value where there is value and not trying to replace everything we do with these technologies," MacIntyre says.

AI can help remove some of the tedium that has made immersive technologies impractical for widespread enterprise use in the past. Using VR technology in the workplace can prevent users from taking notes or accessing traditional input devices and files. AI tools can take and transcribe notes and fill in other gaps, helping to remove that friction and eliminate redundancies.

Connected Internet of Things (IoT) devices are also key to enabling AR/VR technologies. To create a valuable immersive experience, MacIntyre says, it's imperative to know as much as possible about the user's surrounding world, as well as their needs, habits, and preferences.

"If we can figure out more ways of enabling people to work together in a distributed way, we can start enabling more people to participate meaningfully in a wider variety of jobs," says MacIntyre.

This episode of Business Lab is produced in association with JPMorgan Chase.

Full transcript

Laurel: From MIT Technology Review, I'm Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is emerging technologies, specifically, immersive technologies like augmented and virtual reality. Keeping up with technology trends may be a challenge for most enterprises, but it's a critical way to think about future possibilities from product to customer service to employee experience. Augmented and virtual realities aren't necessarily new, but when it comes to applying them beyond gaming, it's a brave new world. Two words for you: emerging realities. My guest is Blair MacIntyre, who is the global head of Immersive Technology Research at JPMorgan Chase. This podcast is produced in association with JPMorgan Chase. Welcome, Blair.

Blair MacIntyre: Thank you. It's great to be here.

Laurel: Well, let's do a little bit of context setting. Your career has been focused on researching and exploring immersive technology, including software and design tools, privacy and ethics, and game and experience design. So what brought you to JPMorgan Chase, and could you describe your current role?

Blair: So before joining the firm, I had spent the last 23 years as a professor at Georgia Tech and Northeastern University. During that time, as you say, I explored a lot of ways that we can both create things with these technologies, immersive technologies and also, what they might be useful for and what the impacts on people in society and how we experience life are. But as these technologies have become more real, moved out of the lab, starting to see real products from real companies, we have this opportunity to actually see how they might be useful in practice and to have, for me, an impact on how these technologies will be deployed and used that goes beyond the traditional impact that professors might have. So beyond writing papers, beyond teaching students. That's what brought me to the firm, and so my current role is, really, to explore that, to understand all the different ways that immersive technology could impact the firm and its customers. Right? So we think about not just customer-facing and not just products, but also employees and their experience as well.

Laurel: That's really interesting. So why does JPMorgan Chase have a dedicated immersive technology focus in its global technology applied research division, and what are the primary goals of your team's research within finance and large enterprises as a whole?

Blair: That's a great question. So JPMorgan Chase has a fairly wide variety of research going on within the company. There's large efforts in AI/ML, in quantum computing, blockchain. So they're interested in looking at all of the range of new technologies and how they might impact the firm and our customers, and immersive technologies represent one of those technologies that could over time have a relatively large impact, I think, especially on the employee experience and how we interact with our customers. So they really want to have a group of people focusing on, really, looking both in the near and long term, and thinking about how we can leverage the technology now and how we might be able to leverage it down the road, and not just how we can, but what we should not do. Right? So we're interested in understanding of these applications that are being proposed or people are imagining could be used. Which ones actually have value to the company, and which ones may not actually have value in practice?

Laurel: So when people think of immersive technologies like augmented reality and virtual reality, AR and VR, many think of headsets or smartphone apps for gaming and retail shopping experiences. Could you give an overview of the state of immersive technology today and what use cases you find to be the most innovative and interesting in your research?

Blair: So, as you say, I think many people think about smartphones, and we've seen, at least in movies and TV shows, head mounts of various kinds. The market, I would divide it right now into the two parts, the handheld phone and tablet experience. So you can do augmented reality now, and that really translates to we take the camera feed, and we can overlay computer graphics on it to do things like see what something you might want to buy looks like in your living room or do, in an enterprise situation, remote maintenance assistance where I can take my phone, point it at a piece of technology, and a remote expert could draw on it or help me do something with it.

There’s the phone-based things, and we carry these things in our pockets all the time, and they're relatively cheap. So there's a lot of opportunities when it's appropriate to use those, but the big downside of those devices is that you have to hold them in your hands, so if you wanted to try to put information all around you, you would have to hold the device up and look around, which is uncomfortable and awkward. So that is where the head mount displays come in.

So either virtual reality displays which, right now, many of us think about computer games and education as use cases in the consumer world or augmented reality displays. These sorts of displays now let us do the same kind of things we might do with our phones, but we can do it without our hands having to hold something so we can be doing whatever work it was we wanted to do, right? Repairing the equipment, taking notes, working with things in the world around us, and we can have information spread all around us, which I think is the big advantage of head mounts.

So many of the things people imagine when they think about augmented reality in particular involve this serendipitous access to information. I'm walking into a conference room, and I see sort of my notes and information about the people I'm meeting there and the materials from our last meeting, whatever it is, or I'm walking down the street, and I see advertising or other kinds of, say, tourism information, but those things only work if the device is out of mind. If I can put it on, and then go about my life, I'm not going to walk into a conference room, and hold up a phone, and look at everybody through it.

So that, I think, is the big difference. You could implement the same sorts of applications on both the handheld devices and the head-worn devices, but the two different form factors are going to make very different applications appropriate for those two sorts of technologies. On the virtual reality side, we're at the point now where the displays we can buy are light enough and comfortable enough that we could wear them for half an hour, a couple hours without discomfort. So a lot of the applications that people imagine there, I think the most popular things that people have done research on and that I see having a near-term impact in the enterprise are immersive training applications where you can get into a situation rather than, say, watching a video or a little click-through presentation as part of your annual training. You could really be in an experience and hopefully learn more from it. So I think those sorts of experiences where we're totally immersed and focused is where virtual reality comes in. The big thing that I think is most exciting about head-worn displays in particular where we can wear them while we're doing work as opposed to just having these ephemeral experiences with a phone is the opportunity to do things together, to collaborate. So I might want to look at a map on a table and see a bunch of data floating above the map, but it would be better if you and our other colleagues were around the table with me, and we can all see the same things, or if we want to take a training experience, I could be in there getting my training experience, but maybe someone else is joining me and being able to both offer feedback or guidance and so on.

Essentially, when I think about these technologies, I think about the parallels to how we do work regularly, right? We generally collaborate with people. We might grab a colleague and have them look at our laptop to show them something. I might send someone something on my phone, and then we can talk about it. So much of what we do involves interactions with other people and with the data that we are doing our job with that anything we do with these immersive technologies is really going to have to mimic that and give us the ability to do our real work in these immersive spaces with the people that we normally work with.

Laurel: Well, speaking of working with people, how can the scale of an institution like JPMorgan Chase help propel this research forward in immersive technology, and what opportunities does it provide that are otherwise limited in a traditional university or startup research environment?

Blair: I think it comes down to a few different things. On one hand, we have access to people who are really doing the things that we want to build technologies to help with. Right? So if I wanted to look at how I could use immersive visualization of data to help people in human resources do planning, or help people who are doing financial modeling look at the data in new and interesting ways, now I could actually do the research in conjunction with the real people who do that work. Right? I've been at the firm for a little over a year, and in many of the conversations we've had, either we've had an idea or somebody has come to us with an idea. Through the course of the conversations, relatively quickly, we hone in on things that are much more sophisticated, much more powerful than what we might have thought of at a university, where we didn't have that sort of direct access to people doing the work. On the other hand, if we actually build something, we can actually test it with those same people, which is an amazing opportunity. Right? Rather than recruiting 20 people at a conference, we can test with people who actually represent the real users of those systems. So, for me, that's where the big opportunity of doing research in an enterprise is: building solutions for the real people of that enterprise and being able to test it with those people.

Laurel: Recent years have actually changed what customers and employees expect from enterprises as well, like omnichannel retail experiences. So immersive technologies can be used to bridge gaps between physical and virtual environments as you were saying earlier. What are the different opportunities that AR and VR can offer enterprises, and how can these technologies be used to improve employee and customer experience?

Blair: So I alluded back to some of that in previous answers. I think the biggest opportunities have to do with how employees within the organization can do new things together, can interact, and also how companies can interact with customers. Now, we're not going to move all of our interactions with our customers into the virtual world, or the metaverse, or whatever you want to call it nowadays anytime soon. Right? But I think there are opportunities for customers who are interested in those technologies, and comfortable with them, and excited by them to get new kinds of experiences and new ways of interacting with our firm or other firms than you could get with webpages and in-person meetings.

The other big opportunity I think is as we move to a more hybrid work environment and a distributed work environment, so a company like JPMorgan Chase is huge and spread around the world. We have over 300,000 employees now in most countries around the world. There might be groups of people, but they're connected together through video right now. These technologies, I think, can offer new ways of collaborating over distance both synchronously and asynchronously than we can get with the traditional work technologies that we use right now. So it's those new ways to collaborate, ways of using the environment and space in new and interesting ways that is going to, hopefully, offer new value and change the way we work.

Laurel: Yeah, and staying on that topic, we can't really have a discussion about technology without talking about AI which is another evolving, increasingly popular technology. So that's being used by many enterprises to reduce redundancies and automate repetitive tasks. In this way, how can immersive technology provide value to people in their everyday work with the help of AI?

Blair: So I think the big opportunity that AI brings to immersive technologies is helping ease a lot of the tedium and burden that may have prevented these technologies from being practical in the past, and this could happen in a variety of ways. When I'm in a virtual reality experience, I don't have access to a keyboard, I don't have access to traditional input devices, I don't have necessarily the same sorts of access to my files, and so on. With a lot of the new AI technologies that are coming around, I can start relying on the computer to take notes. I can have new ways of pulling up information that I otherwise wouldn't have access to. So, I think AI reducing the friction of using these technologies is a huge opportunity, and the research community is actively looking at that because friction has been one of the big problems with these technologies up till now.

Laurel: So, other than AI, what are other emerging technologies that can aid in immersive technology research and development?

Blair: So, aside from AI, if we step back and look at all of the emerging technologies as a whole and how they complement each other, I think we can see new opportunities. So, in our research, we work closely with people doing computer vision and other sort of sensing research to understand the world. We work closely with people looking at internet of things and connected devices because at a 10,000-foot level, all of these technologies are based on the idea of understanding, sensing the world, understanding what people are doing in it, understanding what people's needs might be, and then somehow providing information to them or actuating things in the world, displaying stuff on walls or displays.

From that viewpoint, immersive technologies are primarily one way of displaying things in a new and interesting way and getting input from people, knowing what people want to do, allowing them to interact with data. But in order to do that, they need to know as much about the world around the user as possible, the structure of it, but also, who's there, what we are doing, and so on. So all of these other technologies, especially the Internet of things (IoT) and other forms and ways of sensing what's happening in the world, are very complementary and together can create new sorts of experiences that neither could do alone.

Laurel: So what are some of the challenges, but also, possible opportunities in your research that contrast the future potential of AR and VR to where the technology is today?

Blair: So I think one of the big limitations of technology today is that most of the experiences are very siloed and disconnected from everything else we do. During the pandemic, many of us experimented with how we could have conferences online in various ways, right? A lot of companies, small companies and larger companies, started looking at how you could create immersive meetings and big group experiences using virtual reality technology, but all of those experiences that people created were these closed systems that you couldn't bring things into. So one of the things we're really interested in is how we stop thinking about creating new kinds of experiences and new ways of doing things, and instead think about how do we add these technologies to our existing work practices to enhance them in some way.

So, for example. Right now, we do video meetings. It would be more interesting for some people to be able to join those meetings, say, in VR. Companies have experimented with that, but most of the experiments that people are doing assume that everyone is going to move into virtual reality, or we’re going to bring, say, the people in as a little video wall on the side of a big virtual reality room, making them second class citizens.

I'm really interested and my team is interested in how we can start incorporating technologies like this while keeping everyone a first-class participant in these meetings. As one example, a lot of the systems that large enterprises build, and we're no different, are web-based right now. So if, let's say, I have a system to do financial forecasting, you could imagine there's a bunch of those at a bank, and it's a web-based system, I'm really interested in how do we add the ability for people to go into a virtual reality or augmented reality experience, say, a 3D visualization of some kind of data at the moment they want to do it, do the work that they want to do, invite colleagues in to discuss things, and then go back to the work as it was always done on a desktop web browser. So that idea of thinking of these technologies as a capability, a feature instead of a new whole application and way of doing things permeates all the work we're doing. When I look down the road at where this can go, I see in, say, let's say, two to five years, I see people with displays maybe sitting on their desk. They have their tablet and their phone, and they might also have another display or two sitting there. They're doing their work, and at different times, they might be in a video chat, they might pick up a head mount and put it on to do different things, but it's all integrated. I'm really interested in how we connect these together and reduce friction. Right? If it takes you four or five minutes to move your work into a VR experience, nobody is going to do it because it just is too problematic. So it's that. It's thinking about how the technologies integrate and how we can add value where there is value and not trying to replace everything we do with these technologies.

Laurel: So to stay on that future focus, how do you foresee the immersive technology landscape entirely evolving over the next decade, and how will your research enable those changes?

Blair: So, at some level, it's really hard to answer that question. Right? So if I think back 10 years to where immersive technologies were, it would have been inconceivable for us to imagine the videos that are coming out. So, at some level, I can say, "Well, I have no idea where we're going to be in 10 years." On the other hand, it's pretty safe to imagine the kinds of technologies that we're experimenting with now just getting better, and more comfortable, and more easy to integrate into work. So I think the landscape is going to evolve in the near term to be more amenable to work.

Especially for augmented reality, the threshold that these devices would have to get to such that a lot of people would be willing to wear them all the time while they're walking down the street, playing sports, doing whatever, that's a very high bar because it has to be small, it has to be light, it has to be cheap, it has to have a battery that lasts all day, etcetera, etcetera. On the other hand, in the enterprise, in any business situation, it's easy to imagine the scenario I described. It's sitting on my desk, I pick it up, I put it on, I take it off.

In the medium term after that, I think we will see more consumer applications as people start solving more of the problems that are preventing people from wearing these devices for longer periods of time. Right? It's not just size, and battery power, and comfort, it's also things like optics. Right? A lot of people — not a lot, but say, let's say 10%, 15% of people might experience headaches, or nausea, or other kinds of discomfort when they wear a VR display as they're currently built, and a lot of that has to do with the fact that the optics that you're looking at when you're putting this display are built in a way that makes it hard to comfortably focus at objects at different distances away from you without getting into the nitty-gritty details. For many of us, that's fine. We can deal with the slight problems. But for some people, it's problematic. So as we figure out how to solve problems like that, more people can wear them, and more people can use them. I think that's a really critical issue for not just consumers, but for the enterprise because if we think about a future where more of our business applications and the kind of way we work are done with technologies like this, these technologies have to be accessible to everybody. Right? If that 10% or 15% of people get headaches and feel nauseous wearing this device, you've now disenfranchised a pretty significant portion of your workforce, but I think those can be solved, and so we need to be thinking about how we can enable everybody to use them. On the other hand, technologies like this can enfranchise more people, where right now, working remotely, working in a distributed sense is hard. For many kinds of work, it's difficult to do remotely. If we can figure out more ways of enabling people to work together in a distributed way, we can start enabling more people to participate meaningfully in a wider variety of jobs.

Laurel: Blair, that was fantastic. It's so interesting. I really appreciate your perspective and sharing it here with us on the Business Lab.

Blair: It was great to be here. I enjoyed talking to you.

Laurel: That was Blair MacIntyre, the global head of Immersive Technology Research at JPMorgan Chase, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review. That's it for this episode of Business Lab. I'm your host, Laurel Ruma. I'm the global director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you'll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This podcast is for informational purposes only and it is not intended as legal, tax, financial, investment, accounting or regulatory advice. Opinions expressed herein are the personal views of the individual(s) and do not represent the views of JPMorgan Chase & Co. The accuracy of any statements, linked resources, reported findings or quotations are not the responsibility of JPMorgan Chase & Co.




Atlas of Science


Tools & Methods

The 11 Best Technology Tools for Researchers

Research is a meticulous, painstaking process. But thanks to the help of technology, the pain is lessened. No matter your field — whether it’s biology or computer science — there’s a tool out there to help you organize your notes, cite your sources, find important articles, connect with colleagues, and more.

Here’s a selection of 11 of the most useful ones.

1. Zotero

This free tool wants to be “your personal research assistant.” It’s a free-to-use citation manager that helps you collect, organize, keep track of, cite, and share your research. You can also sync your research across devices, as Zotero offers integrations with browsers and word processors.

2. Scopus

The largest database of abstracts and citations of peer-reviewed research literature in the world, Scopus includes more than 36,000 titles. It covers subjects such as physical, life, social, and health sciences, with numerous publishers from around the world. It’s free to search for author profiles, as well as claim and update your own. Non-subscribers can also view journal rankings and metrics.

3. QuickCalcs

From GraphPad, QuickCalcs allows you to compute statistical analyses for a variety of data: categorical, continuous, statistical distributions, random numbers, and chemical and radiochemical. You’ll simply choose the category and type of calculator, enter the data, and view your results — all within your browser.

4. Zenodo

A Digital Object Identifier (DOI) is a unique code consisting of letters, numbers, and special characters assigned to articles so that others can find them online. With Zenodo, you can receive a free DOI for your research, whether it’s a paper, article, essay, blog post, or nearly anything else you can think of. You can then share your work with a thriving online community of researchers in all kinds of fields.

5. EndNote

EndNote is an all-in-one tool for managing your references and citations. You can share your references with teams and keep track of edits and changes, comb resources to find the right ones for you, and create and format bibliographies. The software is packed with other features, including automatic link and reference updating to keep your citations current.

6. ReadCube

Here’s a web, mobile, and desktop platform that will help you manage your research across your devices. You can find, read, and annotate materials and preserve your notes and lists on your phone, laptop, or whatever device you’re using.

7. ResearchGate

Along with offering free access to research in your field, ResearchGate enables you to connect with others in the scientific community. You can share your work and collaborate with others in the industry, as well as get feedback.

You’re also able to see statistics on the impact of your work and the audience it’s garnering, along with receiving alerts when your connections publish new work. It’s completely free to register, too.

8. Google Scholar

Google Scholar is a free search engine that indexes academic research across a wide array of disciplines and formats, including journals, books, articles, dissertations, and more. It’s free to use for everyone, whether you’re a student or simply a curious person. Some articles are also free to read, while others require a login — although you’ll still generally be able to read the abstract either way.

9. F1000Prime

Find news and recommendations for articles you should read about work in your field. Along with receiving the recommendations, you’ll get a quick summary of why you should read them. You can also follow local experts and get alerts about the articles they recommend, as well as save searches and get notified when works matching your interests and criteria become available.

10. arXiv

Run by Cornell University, arXiv is a free, open-access repository of more than 1.5 million scholarly preprints that are accessible online. It covers fields including computer science, physics, economics, mathematics, statistics, quantitative biology, quantitative finance, and electrical engineering and systems science.
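For readers who want to pull arXiv results into their own workflow, arXiv also provides a public query API that returns an Atom feed. The sketch below is a minimal, unofficial example using only the Python standard library; the search term is just a placeholder.

```python
# Minimal, unofficial sketch: query the public arXiv API and print the
# titles of the first few matching preprints. The query is a placeholder.
import urllib.request
import xml.etree.ElementTree as ET

url = ("http://export.arxiv.org/api/query"
       "?search_query=all:electron&start=0&max_results=5")

with urllib.request.urlopen(url) as response:
    feed = ET.parse(response)

ns = {"atom": "http://www.w3.org/2005/Atom"}
for entry in feed.getroot().findall("atom:entry", ns):
    title = entry.find("atom:title", ns).text
    print(" ".join(title.split()))  # collapse line breaks in long titles
```

The raw feed is usually enough for a quick literature scan or to feed results into a reference manager.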

11. SJ Finder

Not only can you browse more than 30,000 accredited journals with existing research through SJ Finder, but you can also receive recommendations on journals that are best suited to publishing your own articles based on keywords in your paper’s title and abstract. The platform also helps you find a community, including labs, research partners, reviewers, and more.

Instant citations, connections with others in your field, access to peer-reviewed journal articles — what could be better? Research is grueling work, but with the help of these tools, you’ll streamline the process tenfold.


J Rehabil Assist Technol Eng. 2023 Jan-Dec; 10. PMCID: PMC10278405

Impacts of Technology Use on the Workload of Registered Nurses: A Scoping Review

Fatemeh Mohammadnejad

1 School of Health Sciences, University of Northern British Columbia, Prince George, BC, Canada

Shannon Freeman

2 School of Nursing, University of Northern British Columbia, Prince George, BC, Canada

Tammy Klassen-Ross

Dawn Hemingway

3 School of Social Work, University of Northern British Columbia, Prince George, BC, Canada

Davina Banner

Associated Data

Supplemental Material for Impacts of technology use on workload of registered nurses: A scoping review by Fatemeh Mohammadnejad, Shannon Freeman, Tammy Klassen-Ross, Dawn Hemingway and Davina Banner in Journal of Rehabilitation and Assistive Technologies Engineering

Introduction: Technology is an integral part of healthcare. With the rapid development of technological innovations that inform and support nurses, it is important to assess how these technologies may affect their workload, particularly in rural contexts where the workforce and supports may be limited. Methods: This literature review, guided by Arksey and O’Malley’s scoping review framework, describes the breadth of technologies that impact nurses’ workload. Five databases (PubMed, CINAHL, PsycInfo, Web of Science, Business Source Complete) were searched, and thirty-five articles met the inclusion criteria. A data matrix was used to organize the findings. Findings: The technology interventions described in the articles covered diverse topics, including cognitive care technologies, healthcare providers’ technologies, communication technologies, e-learning technologies, and assistive technologies. Based on common features, they were categorized as Digital Information Solutions, Digital Education, Mobile Applications, Virtual Communication, Assistive Devices, and Disease Diagnoses. Conclusion: Technology can play an important role in supporting nurses working in rural areas; however, not all technologies have the same impact. While some technologies showed evidence of positively impacting nursing workload, this was not universal. Technology solutions should be considered on a contextual basis, and thought should be given when selecting technologies to support nursing workload.

Introduction

The use of technology has become interwoven into the daily lives of many persons,1 from the use of smartphones and computers to accessing the Internet and social media platforms.2,3 The World Health Organization estimates that more than one billion persons require support from assistive technologies; however, access to and use of technologies remain fragmented, especially for persons living in rural and northern communities.4

Use of technology can be important for healthcare providers to enhance the health, quality of life, and wellbeing of patients. Not only can the presence of technology in health systems increase the quality of treatments and services provided for patients, but the proper use of technological equipment can also support a safe and highly efficient work environment for health care professionals.5 Yet, the implementation of health information technologies in underserved rural areas has been limited so far, in part due to the lack of funding and trained staff, as well as insufficient health information technology infrastructures.6

When considering the integration of technology, barriers including staff shortages, fear of frequent breakdowns due to vulnerable infrastructures, and a potential increase in workload can be important factors affecting healthcare workers’ abilities to leverage technologies.6-8 Further, people in rural areas typically have higher healthcare demands and more limited access to health services, technologies, and specialists.9

Technology use across healthcare settings has impacted all aspects of nursing practice, including nurses’ workload. There is a need to better understand how the use of technology affects nursing workload, especially in rural areas where chronic shortages of health human resources are known to add additional burden.10-12 Therefore, this literature review sought to answer the following research question: How can nurses use technologies to reduce workload in rural settings? The diversity of technology in the field of health is enormous; therefore, the focus of this review was to describe the breadth of technologies that impact nurses’ workload and to better understand the existing technologies in the field of health specific to rural communities.

This scoping review, guided by Arksey and O’Malley’s1 framework, sought to describe the breadth of evidence on the effects of different technologies on nurses’ workload in rural areas and followed five steps: 1) identifying the research question, 2) identifying relevant studies, 3) study selection, 4) charting the data, and 5) collecting, summarizing, and reporting the results. This systematic approach to the review helped to identify a comprehensive range of data, identify gaps in the existing literature, review the studies conducted in this field, and acquire the knowledge to better understand how nurses use technologies to reduce workload in rural settings.

To identify the research question (Step 1), keywords were determined according to population, intervention, context, and effect/outcome (see Table 1). The guiding research question was: “What is known about how nurses’ use of technology in rural communities affects their workload?”

Table 1. Population, intervention, and effect brainstorming for keywords selected for all databases.

Relevant studies were identified (Step 2) using clearly defined inclusion and exclusion criteria (Table 2). Data from rural settings, defined as places where the population of residents is under 10,000,13 was one of the inclusion criteria. Articles published prior to 2000 were excluded to ensure that only up-to-date sources were included, given the rapid advancement of technology over the last decade.

Table 2. Inclusion and exclusion criteria for article selection.

Study selection (Step 3) was undertaken by systematically searching five databases: PsycInfo, PubMed, CINAHL, Web of Science, and Business Source Complete (see the supplementary file for database-specific search strategies). Database-specific subject headings were chosen for each of the main topics conceptualized, where appropriate. In addition to title/abstract keywords, topic-specific headings were included in the search. The narrative search strategy descriptions for PubMed, CINAHL, PsycInfo, Web of Science, and Business Source Complete are provided in the Supplementary File, which gives a step-by-step description of heading searches and the number of retrieved articles. The number of articles retrieved from each database is shown in Figure 1.


Figure 1. Article selection process across PubMed, CINAHL, PsycInfo, Web of Science, and Business Source Complete (N = 35); 12 January 2021.

The extracted data were organized in an Excel spreadsheet as a data matrix (Step 4). The data matrix included the following headings: authors’ names, publication date, article title, country in which the study was conducted, journal name, research methodology, analysis, measure used to assess workload, study objectives, main findings, population/target, sample size, response rate, sex/gender, intervention, length of intervention, type of technology, strengths/efficiency of the technology, barriers to technology use, nurses’ challenges, rural-area challenges, whether the technology decreased nurses’ workload, how the technology affected nurses, how the technology affected patients, existing knowledge, study limitations and gaps, study strengths, and factors for technology acceptance.
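A simple way to picture this data matrix is as a spreadsheet with one row per included article and one column per heading listed above. The sketch below writes such a header row to a CSV file; the column names are abbreviated paraphrases of the headings in the text, and the file name is arbitrary.

```python
# Sketch of the extraction matrix as a CSV header; column names paraphrase the
# headings listed in the text, and each included article would occupy one row.
import csv

COLUMNS = [
    "authors", "publication_date", "title", "country", "journal",
    "methodology", "analysis", "workload_measure", "objectives", "main_findings",
    "population_target", "sample_size", "response_rate", "sex_gender",
    "intervention", "intervention_length", "technology_type", "technology_strengths",
    "barriers_to_use", "nurse_challenges", "rural_area_challenges",
    "decreases_nurse_workload", "effect_on_nurses", "effect_on_patients",
    "existing_knowledge", "limitations_and_gaps", "strengths",
    "technology_acceptance_factors",
]

with open("data_matrix.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)
    # Each extracted article is appended as one row, e.g.:
    # writer.writerow(["Author et al.", "2020", "Example title", "Example country", "..."])
```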

In total, 113 articles were retrieved, of which 14 were duplicates ( Figure 1 ) (Step 5). Thirty-five articles were included in this literature review, and the results are presented in the following sections ( Supplementary Appendix Table SA ). Data were summarized from the data matrix and organized thematically. Key findings are described below.

The retrieved articles were published between 2000 and 2020, with a general upward trend in publications related to the application of technology in healthcare. The number of studies published from 2011 to 2020 (65.7%) was almost twice the number published from 2001 to 2010 (34.2%), demonstrating an increasing focus on health-related technology. The majority of studies were conducted in the United States ( n = 13), followed by Australia ( n = 4), Canada ( n = 3), and Scotland ( n = 2). Guatemala, England, Russia, Uganda, India, the Democratic Republic of Congo, Thailand, Afghanistan, Bangladesh, and Ghana each contributed one article. Although the research question focused on rural areas, some of the retrieved articles covered both rural and urban areas: 27 articles concentrated specifically on rural areas, seven covered both rural and urban areas, and one did not specify a research area but was deemed applicable to rural areas. 14

The participant populations were highly diverse. Most studies ( n = 25) assessed healthcare provider groups, including: 1) registered nurses ( n = 14), 2) community health nurses ( n = 9), 3) nursing students ( n = 8), 4) general practitioners (e.g., family physicians; n = 5), 5) licensed practical nurses ( n = 3), and 6) residents ( n = 1). The remaining five articles used only the word “nurse” and did not describe the type of nurse in detail. The other articles in this literature review included a variety of participant groups in addition to nurses, such as managers, regional stakeholders, directors, health specialists and staff, midwives, patients, and parents. Sample sizes ranged from ten 15 to 43,430, 16 and 31% of the articles had a sample size between 10 and 20.

Most articles did not distinguish between sex and gender. Many either made no reference to sex or gender when collecting their data or reported only one of the two. Eleven articles reported participants’ sex and two reported participants’ gender. Females outnumbered males in 12 studies, likely because most of the articles studied nurses, a predominantly female profession. None of the studies focused exclusively on a single sex or gender. The articles also demonstrated a wide range of intervention intervals for different technologies, with the length of interventions ranging from 1 week 17 to 5 years. 18 Most articles used qualitative ( n = 15) or quantitative ( n = 12) methodologies. More than 90% of the quantitative articles used self-report measures of nurses’ workload, while some used observation and database metrics. A common feature was the use of Likert or similar scales ( Supplementary Appendix Table SB ).

Interventions

Technology-based interventions described in the articles covered a range of topics and were categorized, based on their similarities and common features, into the following groups: 1) Digital Information Solutions, 2) Digital Education, 3) Mobile Applications, 4) Virtual Communication, 5) Assistive Devices, and 6) Disease Diagnostic Technologies ( Table 3 ). If an intervention had characteristics fitting more than one group, it was placed into the group closest to its effectiveness, as determined by the primary author.

Table 3. Intervention categories studied in the literature review and their types.
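As a rough illustration of this grouping exercise, the mapping below assigns the example technologies named in the sections that follow to the six categories. It is a reading aid only; any intervention spanning two groups would still require the judgment call described above.

```python
# Reading aid: example technologies from this review mapped to the six
# intervention categories described in the text.
CATEGORY_EXAMPLES = {
    "Digital Information Solutions": ["electronic information systems", "EHRs with e-Rx"],
    "Digital Education": ["videoconferencing/video consulting", "telemedicine-based education"],
    "Mobile Applications": ["telephone consultation lines", "Teledoc website"],
    "Virtual Communication": ["teleconsultation", "eCDSSs", "Health Video Library"],
    "Assistive Devices": ["Care Coordination/Home Telehealth", "Personal Digital Assistants",
                          "Teleassistance Service in Wound Care"],
    "Disease Diagnostic Technologies": ["Automated Medical Examination (AME) devices"],
}

def category_of(technology: str) -> str:
    """Return the category whose example list contains the given technology name."""
    for category, examples in CATEGORY_EXAMPLES.items():
        if technology in examples:
            return category
    return "uncategorized"

print(category_of("teleconsultation"))  # Virtual Communication
```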

Digital Information Solutions

Digital Information Solutions included Information and Communication Technologies that supported access to information through telecommunication, as well as any health-system electronic application, such as intra-organizational email, that provided access to patient information technology (e.g., Electronic Health Records with Electronic Prescribing (e-Rx)). 19 Six articles covered Information and Communication Technologies interventions. 5 , 11 , 14 , 20 - 22 Innovative solutions resulting from Digital Information Solutions included more effective use of existing resources and the design of new practical methods for nurses. 11 In low-resource areas, Digital Information Solutions can be an adequate tool to improve health care. 20 In this category, half of the studies described a decrease in nursing workload, 5 , 20 , 21 while 33% described an increase. 5 , 14

Electronic information is one type of Digital Information Solution, reported in three articles. 11 , 20 , 22 Electronic information technologies were fundamental infrastructure for work activities in the healthcare sector and were seen to facilitate clients’/patients’ communication with medical staff, support workload allocation across the healthcare system, offer cost estimation, reduce additional costs, support clinical care and diagnostic tests, and accelerate communication between service providers and financial resource management. 20 , 22 Arakawa et al. 20 used a mixed-methods approach to evaluate the usability of electronic information systems for nurses in rural areas and demonstrated that collecting patient information in electronic systems was more efficient than recording it manually on paper. Finding patients’ information in electronic systems was quicker and easier than searching their records across multiple binders. 20 Electronic information systems were also found to capture more detailed descriptions of patients and their medical problems. 22 Although entering patients’ data into the system was sometimes time-consuming, in most cases this intervention saved nurses time and improved the quality of their communication with patients; therefore, Digital Information Solutions led to a decrease in nurses’ workload. 11 , 20 , 22

Electronic Health Records (EHRs), another type of Digital Information Solution reported in two articles, 5 , 21 were used to enter patients’ information and records into electronic clinical systems. Both articles described EHRs with e-Rx interventions, in which prescriptions were sent directly from the medical center to the pharmacy. 5 , 21 The nurses in rural medical centers, rather than the patients, were responsible for sending the prescriptions to pharmacies. 5 , 21 Higher efficiency, full access to drug lists, access to organized and comprehensive information, drug interaction alerts, fewer transcription errors, communication with service providers, an easier refill process, and more efficient workflow were some of the advantages of using EHRs with e-Rx in rural areas. 5 , 21 EHRs with e-Rx were noted to save nurses time by sending information through the Internet, helping them keep up to date, and streamlining their workflow, consequently reducing nurses’ workload. 21

Digital Education

Digital Education, including digital electronic tools and media, enhanced learning opportunities 23 for healthcare workers and patients (e.g., videoconferencing, video consulting, telemedicine, direct streaming technology). Nurses working in remote or rural areas may not have the same access to education as their urban counterparts due to long distances and a lack of facilities; 24 access to Digital Education therefore improved nurses’ access to educational resources and knowledge. 24 , 25 Implementing Digital Education in remote or rural areas may also increase the retention and recruitment of nurses in these regions. 12 , 24 , 26 Gum 12 demonstrated that access to Digital Education resources could reduce professional isolation and increase retention and recruitment in remote and rural areas. Information transfer and Digital Education can, however, be expensive, 26 and lack of adequate training and the added workload for nurses were found to be barriers to using this technology. 12 , 26 Fifty-seven percent of the technologies in this category were found to decrease nurses’ workload, 18 , 24 , 25 , 27 , 28 while 29% increased it. 12 , 26 , 28 Nurses working in rural areas believed that a lack of training, technology support, and technology resources would increase their workload. 12 , 26 Videoconferencing/video consulting was helpful for providing online training to healthcare workers, including nurses. 26 Implementing e-health technologies had advantages including clinical usefulness (76%), functioning of equipment (74%), and ease of equipment use (74%), as well as disadvantages including lack of suitable training (55%), costly equipment (54%), and an increase in general practitioners’/nurses’ workload (43%). 26 The use of videoconferencing enhanced the efficiency of patient monitoring at home, and in rural areas home healthcare programs can save time and prevent illness from becoming severe. 26 Videoconferencing was therefore viewed as decreasing the workload of nurses in rural areas. 25 Telemedicine platforms may also be used for Digital Education in rural areas to enhance nurses’ access to self-directed learning. 28

Mobile Applications

The World Health Organization (WHO) defined Mobile Applications as “medical and public health practice supported by mobile devices, such as mobile phones, patient monitoring devices, personal digital assistants, and other wireless devices” (World Health Organization, 2023). Mobile Applications interventions include follow-up care provided by doctors, nurses, and medical staff; patients’ follow-up visits to the medical center to check test results; e-prescription of medications; scheduling of checkups; and other tasks done via telecommunication. In this category, which includes computer programs, mobile phones, and medical or health-related websites, 66% of the interventions were found to decrease nurses’ workload, while 33% increased it.

One type within the Mobile Applications category is use of the telephone. Maintaining telephone communication with patients is a key element of primary care; however, there is little information on what prompts patients to call. 29 In this intervention, nurses working in rural areas answered patients’ phone calls and provided appropriate healthcare consultation or referred callers to other healthcare providers or to the emergency department. The results showed an increase in self-care and a decrease in self-referrals to emergency departments in rural areas, which helped reduce nurses’ workload. 30 In contrast, Townsend et al. 29 found that calling patients or answering their calls increased nurses’ workload, as did non-clinical office calls and requests to nurse practitioners to change appointment times. Nonetheless, this technology reduced the number of visits to medical centers and increased the provision of clinical services. 29 Further, mobile phones accelerated the process of providing healthcare services to those who needed help. Using the telephone decreased the number of non-urgent visits to healthcare facilities but increased the number of phone calls to them. Accordingly, some studies on this subject reported an increase in nurses’ workload in rural areas while others noted a decrease. 29 - 31

Another Mobile Application was the Teledoc website, designed to provide customers with medical services through live video or phone connection. 32 Different patients experienced different results after seeing a doctor through Teledoc; due to environmental and technical errors, some diagnoses could not be made solely by using telemedicine or examining the patient’s medical images. 32 Data from the Teledoc website showed that the total number of patient visits increased while in-person visits to health centers decreased, and this reduction in face-to-face visits reduced nurses’ workload in health facilities. Nurses believed that implementing this technology in rural areas may decrease nursing workload by increasing people’s remote access to healthcare services. The technology also showed potential to limit non-urgent visits to healthcare facilities. 32

Virtual Communication

Virtual Communication is defined as “the communication which uses electronic media to transmit the information or message using computers, email, telephone, video calling, FAX machine, etc.”. 33 Types of Virtual Communication include text messages, emails, image sharing, social media messages, Electronic Clinical Decision Support Systems (eCDSSs), teleconsultation, and the Health Video Library (HVL). The interventions identified connected nurses to share information and access medical assistance; this category includes technologies that connect nurses or physicians to other physicians, pharmacists, and third parties. 10 Clinical efficiency, cost-effectiveness, and improved treatments are some of the benefits of Virtual Communication technologies. Nurses working in rural areas may contact experienced urban nurses through Virtual Communication technologies to seek advice, which may help to decrease their workload. 34 Conversely, a lack of Virtual Communication training could increase nursing workload. 10 Half of the Virtual Communication technologies showed a reduction in nurses’ workload 34 , 35 and half showed an increase. 10 , 17

Teleconsultation is another type of Virtual Communication, reported by Opoku et al. 17 In this study, community health nurses (CHNs) were trained to contact a teleconsultation center with their mobile phones whenever they faced a problem. 17 Although this technology led to faster patient recovery through improved quality of care and increased community health nurses’ knowledge through consultation with other care providers, the workload and responsibilities of the teleconsultation team increased. 17 The teleconsultation staff’s workload grew because they had to answer community health nurses’ questions over the phone in addition to their daily responsibilities, and the teleconsultation nurses believed their salary should have increased to reflect the added occupational stress and workload. Despite this increase in workload, several community health nurses mentioned that the frequent guidance and feedback provided by midwives, physicians, and nurses through telephone consultation increased their knowledge and helped them improve the skills required for treating patients. 17 Technologies that required nurses to assume additional responsibilities within the same amount of time could increase their workload. 10 , 17

Assistive devices

This category included technologies that help healthcare staff provide better treatment, including Care Coordination/Home Telehealth, Personal Digital Assistants, and the Teleassistance Service in Wound Care. These assistive devices showed potential to promote patients’ independence, thereby reducing the need for nursing support. In this category, two interventions were found to reduce nursing workload, while one showed an increase. 11 , 36 Care Coordination/Home Telehealth provided home telehealth support for aging veterans suffering from chronic illnesses, to facilitate care coordination and prevent unnecessary entry into long-term care. 16 This technology increased aging veterans’ independence by keeping them at home and reduced nurses’ workload by decreasing hospital admissions. 16 Personal Digital Assistants, created to provide nurses with reliable information instantly, helped nursing students by rapidly supplying required information from multiple sources, such as drug references, practical manuals, and physiology and anatomy references. 36 Nurses were found to use this technology as a clinical reference for laboratory values and routine nursing procedures; Personal Digital Assistants consequently reduced nursing workload by accelerating access to important information. 36 The Teleassistance Service in Wound Care is an audio-visual communication system that uses wireless technology to transmit electronic audio and video synchronously and asynchronously. 37 In this procedure, the nurse records the patient’s wound using a mobile application so that it may be viewed by a specialist nurse in another medical center. Gagnon et al.’s 37 case study found that nurses spent considerable time on ancillary tasks (e.g., preparing digital and written reports, collecting additional patient information) and that the technology therefore increased their workload. 37

Disease diagnostic technologies

Technologies such as Automated Medical Examination (AME) devices support earlier diagnosis and can also enhance the accuracy and timeliness of diagnosis. This leads to faster patient access to care and recovery as well as a reduced error rate, thereby decreasing nursing workload. 38 Evseeva et al. 38 examined the use of an AME for pediatric care and described a range of advantages, including timely disease diagnosis in children and adolescents, standardized monitoring of children’s health, monitoring of healthcare activities in medical institutions, appropriate and rational allocation of healthcare, and proper or efficient development of the healthcare system’s structure. By providing early and accurate diagnosis, this system reduced medical staff’s workload. 38

Barriers and facilitators to technology use by nurses

Each article included in this review identified one or more barriers and/or facilitators to technology use by nurses in rural areas. Teaching or explaining the intended technology to the target population was one of the most important and frequently reported facilitators of technology implementation and use among nursing groups: 40% of the articles found that providing proper education and upgrading the knowledge of nurses and the target population were fundamental facilitators of technology use by nurses. 5 , 11 , 14 , 16 - 18 , 22 , 24 , 26 , 29 , 30 , 34 , 37 , 39 , 40 Mills et al. 22 highlighted proper training as a facilitator of computer use by nurses, especially in rural areas where training opportunities are scarcer than in urban areas. To reduce workload, nurses must receive sufficient training in the technology so as to limit the additional time needed for self-learning. 22

Several factors should be considered prior to technology implementation, including the degree of technology acceptance by the organization, resource availability, technology experience, telecommunications capability, and technology acceptance by patients and nurses. Addressing these barriers is important because it improves health delivery by nurses. Ward et al. 14 found that lack of technology acceptance by organizations and nurses is a barrier to implementation. Some technologies are designed to perform specific nursing tasks, such as IV drip monitoring, and can be highly beneficial and efficient in reducing nursing workload. Mensah et al. 35 identified inadequate technical support as another barrier: if nurses face a problem while using technology and are unable to get help, their workload may increase because they must solve the issue themselves or spend time looking for support, taking time away from other nursing activities. This barrier is especially challenging in rural areas, where information technology services and supports are less available than in urban areas. 35

Rural context

Many articles highlighted challenges due to nursing and healthcare workforce shortages, which were amplified in rural and remote areas. 12 , 15 , 17 , 26 , 34 , 35 , 38 , 40 - 43 Staff shortages in rural areas directly increase nurses' workload because the ratio of patients to nurses is higher, leaving each nurse to care for more patients. 25 , 26 , 35 , 40 A lack of healthcare resources and geographical isolation may prevent patients in rural areas, particularly older adults, from adequately benefiting from home healthcare technologies. As a result, compared with those living in urban centers, patients in rural areas may disproportionately experience inadequate access to health technologies. For example, Evseeva et al. 38 noted that a lack of health information technologies in rural and remote areas can increase nurses’ workload because nurses lack access to resources that could make health delivery easier. Terry et al. 42 highlighted geographical location as a challenge in providing care to rural residents: in some cases, nurses had to travel long distances to deliver healthcare to people living in remote and isolated areas because the residents did not have access to healthcare technologies, and these long trips increased nurses' workload. 42

Resource limitations, including lack of access to current technologies, lack of facilities, and limited infrastructure, can be especially challenging for nurses in rural areas. These limitations are compounded by the nurse shortage in rural, remote, and underserved areas and increase nurses’ workload. Brambel et al. 21 demonstrated that insufficient access to financial resources is a significant challenge in rural and remote areas because it may decrease nurses’ and patients’ access to modern health technologies, and limited allocation of economic resources to these areas negatively affects both nurses’ salaries and available facilities, such as health care centers. 35 For most of the technologies examined in the articles, a lack of high-speed Internet access was a major challenge in underserved areas, 27 and the information technology infrastructure in rural areas is often weak or inadequate. 35 Accordingly, rural areas may have limited access to healthcare services and to electronic communication between patients and healthcare providers. 35 , 38 These factors increase the workload of nurses in rural areas because they cannot communicate and consult with expert nurses and doctors, and limited access to high-speed Internet may impede health delivery and access to information. 35 , 41

This review examined the effects of technologies on nurses’ workload in rural communities. An analysis of the findings from the 35 included articles demonstrated that the technologies most effective at decreasing nurses’ workload are those that reduce unnecessary patient visits to hospitals and medical centers. Common and minor cases can be treated through smartphone applications, live video, and telephone, and this kind of technology increases rural patients’ access to doctors and nurses. 32 Surprisingly, most of the phone calls made to nurses were for non-clinical requests. 29 In addition, nurses believed that technologies that save their time have the potential to decrease their workload. 5 , 11 , 20 , 21 , 22 , 26

Considering these results, several factors need to be taken into account when implementing a technology. Of the factors identified, training and educating healthcare workers play a key role; at the initial stage of implementing any technology, it must be ensured that the population using it is informed and has sufficient knowledge. According to Dowding, 43 using modern, innovative technologies and preparing healthcare workers to benefit from them is one of the basic expectations of employers and consumers in the healthcare industry. Having the required knowledge allows nurses to make the most of technology and reduce their workload. For example, in most cases the use of mobile applications reduced nurses’ workload by meeting patients’ needs. However, due to workload and time limitations in rural areas, nurses may not be able to fully benefit from formal in-person training programs; they may therefore benefit from self-learning modules.

Prior to implementation of a new technology, intervention, or healthcare service in remote and rural areas, the service should have sufficient evidence to support its use. 30 For instance, in the case of mobile technology, Diese et al. 31 found that the motivation for accepting such technology relied upon awareness of sound data collection and analysis, and most participants agreed that, if properly trained, they could use mobile phones to collect the intended data appropriately. 17 , 28 , 31 Not all healthcare services are suitable for all places and conditions. The telephone intervention proposed by Roberts et al. 30 was unsuitable for rural and remote areas in Scotland and therefore failed. The issues that caused the telephone intervention to fail in rural and remote areas were “the rigidity of the nurse triage model, the need to understand variation of health service delivery, and the importance of using local, professional knowledge”. 30 As mentioned earlier, before implementing any technology, it must be ensured that the technology is tailored to and usable in the particular setting. 14 Checking the appropriateness and applicability of a technology before implementation will lead to better results, including the potential to reduce nurses’ workload.

Digital Information Solutions are among the most widely used technologies in the healthcare industry, and the literature revealed varied perspectives on their adoption and use. Entering patients’ data into electronic systems was shown to increase nurses’ workload. 5 One challenge of Digital Information Solutions is that their initiators or designers may have limited knowledge of healthcare or medical treatment. 44 According to Abbott et al., 5 even though most nurses were familiar with using EHRs correctly, they had trouble performing certain tasks, such as sending e-Rx. These problems, which increase nurses’ workload, may stem from insufficient experience or lack of knowledge in applying the technology. Accordingly, Campbell and McDowell 45 stated that EHRs are useful in healthcare settings only if the medical staff and other users know how to use them correctly.

Around 80% of nurses believed that using email would increase their workload, 46 and nurses found documentation in a computerized system time-consuming. In contrast, Moody et al. 47 concluded that 36% of nurses consider EHRs a positive technology and believe they will reduce their workload. Adapting nurses’ practice to this technology and training them to use computers and electronic systems will enhance the adoption of these technologies and ultimately increase the efficiency of the work environment. 20 Database systems help improve the efficiency of information management and the quality of nursing and community health by standardizing the format for recording data in rural areas. 20 Accordingly, implementing this technology can reduce nurses’ workload.

The second most frequently described intervention was Virtual Communication. Virtual Communication technology has several advantages, including saving patients’ time and money, reducing patient referrals, saving nurses’ time, and improving healthcare. 17 , 48 Patients’ frequent use of Virtual Communication technology has been reported in several studies, 49 - 52 and this frequent use reduces non-urgent admissions to healthcare facilities and thereby reduces nurses’ workload and time. 51 , 52 Moreover, several articles have shown that rural patients are more willing to communicate with physicians and clinic staff through Virtual Communication, which can save doctors and healthcare providers time and decrease their workload. 53 , 54 Nonetheless, the disadvantages of this technology include delays in answering phone calls, medical staff’s insufficient knowledge of the technology, and inappropriate or erroneous information transmitted over the phone. 17 Nurses’ limited or inaccurate knowledge of a specific technology can sometimes increase their workload. 17 Another disadvantage is the security of patients’ health information, which has been a cause of great concern to patients and healthcare systems. Healthcare staff also expressed concern about an increase in workload after implementing Virtual Communication technology. 17 Entering patient information into the system can be stressful because any incorrect information may cause unintended consequences, such as inaccurate medical records.

As nursing has historically been a female-dominated profession, 55 males are underrepresented in the current research evidence. No articles focused on sex- or gender-based differences in nurses’ perspectives or experiences with technology. Further, most articles concentrated on the efficiency of various technologies and their positive or negative impacts on healthcare personnel and/or patients without considering the sex and gender of the patient. Future research may therefore examine whether there are sex- or gender-based differences in how technology influences workload. With the rapid development and iteration of technology, this remains an important and pressing issue to assess as technology becomes more integrated into nursing practice across the care continuum and across the globe.

The use of technologies can affect nursing workload by either increasing or reducing it. The effect may depend on multiple factors, such as the institution’s technology acceptance, nurses’ technology acceptance, and nurses’ knowledge of how to use the technology. The workload of nurses working in rural areas differed from that in urban areas, owing to the low number of nurses in remote places and the unequal distribution of work in these regions. While the majority of articles indicated a positive impact of the technologies used by nurses, there remains a lack of consensus across technology types, warranting further research.


The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the AGWELL (3.20-SIP A5).

Supplemental Material: Supplemental material for this article is available online.

ORCID iD: Shannon Freeman https://orcid.org/0000-0002-8129-6696


Research: What Companies Don’t Know About How Workers Use AI

  • Jeremie Brecheisen


Three Gallup studies shed light on when and why AI is being used at work — and how employees and customers really feel about it.

Leaders who are exploring how AI might fit into their business operations must not only navigate a vast and ever-changing landscape of tools, but they must also facilitate a significant cultural shift within their organizations. But research shows that leaders do not fully understand their employees’ use of, and readiness for, AI. In addition, a significant number of Americans do not trust business’ use of AI. This article offers three recommendations for leaders to find the right balance of control and trust around AI, including measuring how their employees currently use AI, cultivating trust by empowering managers, and adopting a purpose-led AI strategy that is driven by the company’s purpose instead of a rules-heavy strategy that is driven by fear.

If you’re a leader who wants to shift your workforce toward using AI, you need to do more than manage the implementation of new technologies. You need to initiate a profound cultural shift. At the heart of this cultural shift is trust. Whether the use case for AI is brief and experimental or sweeping and significant, a level of trust must exist between leaders and employees for the initiative to have any hope of success.

  • Jeremie Brecheisen is a partner and managing director of The Gallup CHRO Roundtable.


Smarter + Stronger Together

Making the world a safer place together.

You can find technology companies everywhere. But purpose like ours is something special.

STR makes the world a safer place by developing technology and applying it to solve emerging national security challenges.

Life at STR

Challenging, high-impact work.


At STR, our work is challenging and high impact.

What if there was a team that thought the real potential of artificial intelligence wasn’t to optimize web advertisement? What if instead we could use AI to allow citizens with oppressive governments to communicate safely and freely?

careers at str

A choose-your-own adventure for technical careers.

Yeah, we’ve got opportunity. Rapid growth means that there is always room for motivated and creative people to blaze their own trail. We also know that creativity and innovation are built on diverse perspectives and listening to unconventional ideas, which is why we’re deeply committed to diversity & inclusion practices at STR.


STR Notes: Track our journey

Snippets from our team’s adventures that capture our culture and our work.

get in touch

Let’s talk! What are you building for the future? We’re ready to engage on what a better, safe, technology-enabled tomorrow looks like.

Cornell Chronicle


Research: Technology is changing how companies do business

By Sarah Mangus-Sharpe

A new study from the Cornell SC Johnson College of Business advances understanding of the evolution of the U.S. production chain amid technological progress in information technology (IT), shedding light on the complex connections between business IT investments and organizational design. Advances in IT have sparked significant changes in how companies design their production processes. In the paper "Production Chain Organization in the Digital Age: Information Technology Use and Vertical Integration in U.S. Manufacturing," which was published April 30 in Management Science, Chris Forman, the Peter and Stephanie Nolan Professor in the Dyson School of Applied Economics and Management, and his co-author delved into what these changes mean for businesses and consumers.

Forman and Kristina McElheran, assistant professor of strategic management at University of Toronto, analyzed U.S. Census Bureau data of over 5,600 manufacturing plants to see how the production chains of businesses were affected by the internet revolution. Their use of census data allowed them to look inside the relationships among production units within and between companies and how transaction flows changed after companies invested in internet-enabled technology that facilitated coordination between them. The production units of many of the companies in their study concurrently sold to internal and external customers, a mix they refer to as plural selling. They found that the reduction in communication costs enabled by the internet shifted the mix toward more sales outside of the firm, or less vertical integration.

The research highlights the importance of staying ahead of the curve in technology. Companies that embrace digital technologies now are likely to be the ones that thrive in the future. And while there are still many unanswered questions about how these changes will play out, one thing is clear: The relationship between technology and business is only going to become more and more intertwined in the future.

Read the full story on the Cornell SC Johnson College of Business news site, BusinessFeed.


Realizing the promise: How can education technology improve learning for all?

Leading up to the 75th anniversary of the UN General Assembly, this “Realizing the promise: How can education technology improve learning for all?” publication kicks off the Center for Universal Education’s first playbook in a series to help improve education around the world.

It is intended as an evidence-based tool for ministries of education, particularly in low- and middle-income countries, to adopt and more successfully invest in education technology.

While there is no single education initiative that will achieve the same results everywhere—as school systems differ in learners and educators, as well as in the availability and quality of materials and technologies—an important first step is understanding how technology is used given specific local contexts and needs.

The surveys in this playbook are designed to be adapted to collect this information from educators, learners, and school leaders and guide decisionmakers in expanding the use of technology.  

Introduction

While technology has disrupted most sectors of the economy and changed how we communicate, access information, work, and even play, its impact on schools, teaching, and learning has been much more limited. We believe that this limited impact is primarily due to technology being used to replace analog tools, without much consideration given to playing to technology’s comparative advantages. These comparative advantages, relative to traditional “chalk-and-talk” classroom instruction, include helping to scale up standardized instruction, facilitate differentiated instruction, expand opportunities for practice, and increase student engagement. When schools use technology to enhance the work of educators and to improve the quality and quantity of educational content, learners will thrive.

Further, COVID-19 has laid bare that, in today’s environment where pandemics and the effects of climate change are likely to occur, schools cannot always provide in-person education—making the case for investing in education technology.

Here we argue for a simple yet surprisingly rare approach to education technology that seeks to:

  • Understand the needs, infrastructure, and capacity of a school system—the diagnosis;
  • Survey the best available evidence on interventions that match those conditions—the evidence; and
  • Closely monitor the results of innovations before they are scaled up—the prognosis.


The framework.

Our approach builds on a simple yet intuitive theoretical framework created two decades ago by two of the most prominent education researchers in the United States, David K. Cohen and Deborah Loewenberg Ball. They argue that what matters most to improve learning is the interactions among educators and learners around educational materials. We believe that the failed school-improvement efforts in the U.S. that motivated Cohen and Ball’s framework resemble the ed-tech reforms in much of the developing world to date in their lack of clarity about improving the interactions among educators, learners, and educational materials. We build on their framework by adding parents as key agents that mediate the relationships between learners, educators, and the material (Figure 1).

Figure 1: The instructional core

Adapted from Cohen and Ball (1999)

As the figure above suggests, ed-tech interventions can affect the instructional core in a myriad of ways. Yet, just because technology can do something, it does not mean it should. School systems in developing countries differ along many dimensions and each system is likely to have different needs for ed-tech interventions, as well as different infrastructure and capacity to enact such interventions.

The diagnosis:

How can school systems assess their needs and preparedness?

A useful first step for any school system to determine whether it should invest in education technology is to diagnose its:

  • Specific needs to improve student learning (e.g., raising the average level of achievement, remediating gaps among low performers, and challenging high performers to develop higher-order skills);
  • Infrastructure to adopt technology-enabled solutions (e.g., electricity connection, availability of space and outlets, stock of computers, and Internet connectivity at school and at learners’ homes); and
  • Capacity to integrate technology in the instructional process (e.g., learners’ and educators’ level of familiarity and comfort with hardware and software, their beliefs about the level of usefulness of technology for learning purposes, and their current uses of such technology).

Before engaging in any new data collection exercise, school systems should take full advantage of existing administrative data that could shed light on these three main questions. This could be in the form of internal evaluations as well as international learner assessments, such as the Program for International Student Assessment (PISA), the Trends in International Mathematics and Science Study (TIMSS), the Progress in International Reading Literacy Study (PIRLS), and the Teaching and Learning International Survey (TALIS). But if school systems lack information on their preparedness for ed-tech reforms, or if they seek to complement existing data with a richer set of indicators, we developed a set of surveys for learners, educators, and school leaders. Download the full report to see how we map out the main aspects covered by these surveys, in hopes of highlighting how they could be used to inform decisions around the adoption of ed-tech interventions.

The evidence:

How can school systems identify promising ed-tech interventions?

There is no single “ed-tech” initiative that will achieve the same results everywhere, simply because school systems differ in learners and educators, as well as in the availability and quality of materials and technologies. Instead, to realize the potential of education technology to accelerate student learning, decisionmakers should focus on four potential uses of technology that play to its comparative advantages and complement the work of educators to accelerate student learning (Figure 2). These comparative advantages include:

  • Scaling up quality instruction, such as through prerecorded quality lessons.
  • Facilitating differentiated instruction, through, for example, computer-adaptive learning and live one-on-one tutoring.
  • Expanding opportunities to practice.
  • Increasing learner engagement through videos and games.

Figure 2: Comparative advantages of technology

Here we review the evidence on ed-tech interventions from 37 studies in 20 countries*, organizing them by comparative advantage. It’s important to note that ours is not the only way to classify these interventions (e.g., video tutorials could be considered as a strategy to scale up instruction or increase learner engagement), but we believe it may be useful to highlight the needs that they could address and why technology is well positioned to do so.

When discussing specific studies, we report the magnitude of the effects of interventions using standard deviations (SDs). SDs are a widely used metric in research to express the effect of a program or policy with respect to a business-as-usual condition (e.g., test scores). There are several ways to make sense of them. One is to categorize the magnitude of the effects based on the results of impact evaluations. In developing countries, effects below 0.1 SDs are considered to be small, effects between 0.1 and 0.2 SDs are medium, and those above 0.2 SDs are large (for reviews that estimate the average effect of groups of interventions, called “meta-analyses,” see e.g., Conn, 2017; Kremer, Brannen, & Glennerster, 2013; McEwan, 2014; Snilstveit et al., 2015; Evans & Yuan, 2020.)
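For readers unfamiliar with the metric, a standardized effect of this kind is typically computed as the difference in mean outcomes between the treatment and comparison groups divided by a standard deviation of the outcome; the text does not specify which convention its cited studies use (a pooled SD and the comparison-group SD are both common), so the formula below is a general sketch, with the magnitude thresholds taken from the paragraph above.

```latex
% Standardized effect size (one common convention; the choice of SD varies by study).
% Requires amsmath.
\[
\text{effect (SDs)} \;=\; \frac{\bar{Y}_{\text{treatment}} - \bar{Y}_{\text{comparison}}}{SD_{\text{outcome}}},
\qquad
\text{magnitude} =
\begin{cases}
\text{small} & \text{if } |\text{effect}| < 0.1,\\
\text{medium} & \text{if } 0.1 \le |\text{effect}| \le 0.2,\\
\text{large} & \text{if } |\text{effect}| > 0.2.
\end{cases}
\]
```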

*In surveying the evidence, we began by compiling studies from prior general and ed-tech specific evidence reviews that some of us have written and from ed-tech reviews conducted by others. Then, we tracked the studies cited by the ones we had previously read and reviewed those, as well. In identifying studies for inclusion, we focused on experimental and quasi-experimental evaluations of education technology interventions from pre-school to secondary school in low- and middle-income countries that were released between 2000 and 2020. We only included interventions that sought to improve student learning directly (i.e., students’ interaction with the material), as opposed to interventions that have impacted achievement indirectly, by reducing teacher absence or increasing parental engagement. This process yielded 37 studies in 20 countries (see the full list of studies in Appendix B).

Scaling up standardized instruction

One of the ways in which technology may improve the quality of education is through its capacity to deliver standardized quality content at scale. This feature of technology may be particularly useful in three types of settings: (a) those in “hard-to-staff” schools (i.e., schools that struggle to recruit educators with the requisite training and experience—typically, in rural and/or remote areas) (see, e.g., Urquiola & Vegas, 2005); (b) those in which many educators are frequently absent from school (e.g., Chaudhury, Hammer, Kremer, Muralidharan, & Rogers, 2006; Muralidharan, Das, Holla, & Mohpal, 2017); and/or (c) those in which educators have low levels of pedagogical and subject matter expertise (e.g., Bietenbeck, Piopiunik, & Wiederhold, 2018; Bold et al., 2017; Metzler & Woessmann, 2012; Santibañez, 2006) and do not have opportunities to observe and receive feedback (e.g., Bruns, Costa, & Cunha, 2018; Cilliers, Fleisch, Prinsloo, & Taylor, 2018). Technology could address this problem by: (a) disseminating lessons delivered by qualified educators to a large number of learners (e.g., through prerecorded or live lessons); (b) enabling distance education (e.g., for learners in remote areas and/or during periods of school closures); and (c) distributing hardware preloaded with educational materials.

Prerecorded lessons

Technology seems to be well placed to amplify the impact of effective educators by disseminating their lessons. Evidence on the impact of prerecorded lessons is encouraging, but not conclusive. Some initiatives that have used short instructional videos to complement regular instruction, in conjunction with other learning materials, have raised student learning on independent assessments. For example, Beg et al. (2020) evaluated an initiative in Punjab, Pakistan, in which grade 8 classrooms received an intervention that included short videos to substitute for live instruction, quizzes for learners to practice the material from every lesson, tablets for educators to learn the material and follow the lesson, and LED screens to project the videos onto a classroom screen. After six months, the intervention improved the performance of learners on independent tests of math and science by 0.19 and 0.24 SDs, respectively, but had no discernible effect on the math and science sections of Punjab’s high-stakes exams.

One study suggests that approaches that are far less technologically sophisticated can also improve learning outcomes—especially, if the business-as-usual instruction is of low quality. For example, Naslund-Hadley, Parker, and Hernandez-Agramonte (2014) evaluated a preschool math program in Cordillera, Paraguay that used audio segments and written materials four days per week for an hour per day during the school day. After five months, the intervention improved math scores by 0.16 SDs, narrowing gaps between low- and high-achieving learners, and between those with and without educators with formal training in early childhood education.

Yet, the integration of prerecorded material into regular instruction has not always been successful. For example, de Barros (2020) evaluated an intervention that combined instructional videos for math and science with infrastructure upgrades (e.g., two “smart” classrooms, two TVs, and two tablets), printed workbooks for students, and in-service training for educators of learners in grades 9 and 10 in Haryana, India (all materials were mapped onto the official curriculum). After 11 months, the intervention negatively impacted math achievement (by 0.08 SDs) and had no effect on science (with respect to business as usual classes). It reduced the share of lesson time that educators devoted to instruction and negatively impacted an index of instructional quality. Likewise, Seo (2017) evaluated several combinations of infrastructure (solar lights and TVs) and prerecorded videos (in English and/or bilingual) for grade 11 students in northern Tanzania and found that none of the variants improved student learning, even when the videos were used. The study reports effects from the infrastructure component across variants, but as others have noted (Muralidharan, Romero, & Wüthrich, 2019), this approach to estimating impact is problematic.

A very similar intervention delivered after school hours, however, had sizeable effects on learners’ basic skills. Chiplunkar, Dhar, and Nagesh (2020) evaluated an initiative in Chennai (the capital city of the state of Tamil Nadu, India) delivered by the same organization as above that combined short videos that explained key concepts in math and science with worksheets, facilitator-led instruction, small groups for peer-to-peer learning, and occasional career counseling and guidance for grade 9 students. These lessons took place after school for one hour, five times a week. After 10 months, it had large effects on learners’ achievement as measured by tests of basic skills in math and reading, but no effect on a standardized high-stakes test in grade 10 or socio-emotional skills (e.g., teamwork, decisionmaking, and communication).

Drawing general lessons from this body of research is challenging for at least two reasons. First, all of the studies above have evaluated the impact of prerecorded lessons combined with several other components (e.g., hardware, print materials, or other activities). Therefore, it is possible that the effects found are due to these additional components, rather than to the recordings themselves, or to the interaction between the two (see Muralidharan, 2017 for a discussion of the challenges of interpreting “bundled” interventions). Second, while these studies evaluate some type of prerecorded lessons, none examines the content of such lessons. Thus, it seems entirely plausible that the direction and magnitude of the effects depends largely on the quality of the recordings (e.g., the expertise of the educator recording it, the amount of preparation that went into planning the recording, and its alignment with best teaching practices).

These studies also raise three important questions worth exploring in future research. One of them is why none of the interventions discussed above had effects on high-stakes exams, even if their materials are typically mapped onto the official curriculum. It is possible that the official curricula are simply too challenging for learners in these settings, who are several grade levels behind expectations and who often need to reinforce basic skills (see Pritchett & Beatty, 2015). Another question is whether these interventions have long-term effects on teaching practices. It seems plausible that, if these interventions are deployed in contexts with low teaching quality, educators may learn something from watching the videos or listening to the recordings with learners. Yet another question is whether these interventions make it easier for schools to deliver instruction to learners whose native language is other than the official medium of instruction.

Distance education

Technology can also allow learners living in remote areas to access education. The evidence on these initiatives is encouraging. For example, Johnston and Ksoll (2017) evaluated a program that broadcasted live instruction via satellite to rural primary school students in the Volta and Greater Accra regions of Ghana. For this purpose, the program also equipped classrooms with the technology needed to connect to a studio in Accra, including solar panels, a satellite modem, a projector, a webcam, microphones, and a computer with interactive software. After two years, the intervention improved the numeracy scores of students in grades 2 through 4, and some foundational literacy tasks, but it had no effect on attendance or classroom time devoted to instruction, as captured by school visits. The authors interpreted these results as suggesting that the gains in achievement may be due to improving the quality of instruction that children received (as opposed to increased instructional time). Naik, Chitre, Bhalla, and Rajan (2019) evaluated a similar program in the Indian state of Karnataka and also found positive effects on learning outcomes, but it is not clear whether those effects are due to the program or due to differences in the groups of students they compared to estimate the impact of the initiative.

In one context (Mexico), this type of distance education had positive long-term effects. Navarro-Sola (2019) took advantage of the staggered rollout of the telesecundarias (i.e., middle schools with lessons broadcasted through satellite TV) in 1968 to estimate its impact. The policy had short-term effects on students’ enrollment in school: For every telesecundaria per 50 children, 10 students enrolled in middle school and two pursued further education. It also had a long-term influence on the educational and employment trajectory of its graduates. Each additional year of education induced by the policy increased average income by nearly 18 percent. This effect was attributable to more graduates entering the labor force and shifting from agriculture and the informal sector. Similarly, Fabregas (2019) leveraged a later expansion of this policy in 1993 and found that each additional telesecundaria per 1,000 adolescents led to an average increase of 0.2 years of education, and a decline in fertility for women, but no conclusive evidence of long-term effects on labor market outcomes.

It is crucial to interpret these results keeping in mind the settings where the interventions were implemented. As we mention above, part of the reason why they have proven effective is that the “counterfactual” conditions for learning (i.e., what would have happened to learners in the absence of such programs) were either to have no access to schooling or to be exposed to low-quality instruction. School systems interested in taking up similar interventions should assess the extent to which their learners (or parts of their learner population) find themselves in similar conditions to the subjects of the studies above. This illustrates the importance of assessing the needs of a system before reviewing the evidence.

Preloaded hardware

Technology also seems well positioned to disseminate educational materials. Specifically, hardware (e.g., desktop computers, laptops, or tablets) could also help deliver educational software (e.g., word processing, reference texts, and/or games). In theory, these materials could not only undergo a quality assurance review (e.g., by curriculum specialists and educators), but also draw on the interactions with learners for adjustments (e.g., identifying areas needing reinforcement) and enable interactions between learners and educators.

In practice, however, most initiatives that have provided learners with free computers, laptops, and netbooks do not leverage any of the opportunities mentioned above. Instead, they install a standard set of educational materials and hope that learners find them helpful enough to take them up on their own. Students rarely do so, and instead use the laptops for recreational purposes—often, to the detriment of their learning (see, e.g., Malamud & Pop-Eleches, 2011). In fact, free netbook initiatives have not only consistently failed to improve academic achievement in math or language (e.g., Cristia et al., 2017), but they have also had no impact on learners’ general computer skills (e.g., Beuermann et al., 2015). Some of these initiatives have had small impacts on cognitive skills, but the mechanisms through which those effects occurred remain unclear.

To our knowledge, the only successful deployment of a free laptop initiative was one in which a team of researchers equipped the computers with remedial software. Mo et al. (2013) evaluated a version of the One Laptop per Child (OLPC) program for grade 3 students in migrant schools in Beijing, China in which the laptops were loaded with remedial software mapped onto the national curriculum for math (similar to the software products that we discuss under “practice exercises” below). After nine months, the program improved math achievement by 0.17 SDs and computer skills by 0.33 SDs. If a school system decides to invest in free laptops, this study suggests that the quality of the software on the laptops is crucial.

To date, however, the evidence suggests that children do not learn more from interacting with laptops than they do from textbooks. For example, Bando, Gallego, Gertler, and Romero (2016) compared the effect of free laptop and textbook provision in 271 elementary schools in disadvantaged areas of Honduras. After seven months, students in grades 3 and 6 who had received the laptops performed on par with those who had received the textbooks in math and language. Further, even if textbooks essentially become obsolete at the end of each school year, whereas laptops can be reloaded with new materials for each year, the costs of laptop provision (not just the hardware, but also the technical assistance, Internet, and training associated with it) are not yet low enough to make them a more cost-effective way of delivering content to learners.

Evidence on the provision of tablets equipped with software is encouraging but limited. For example, de Hoop et al. (2020) evaluated a composite intervention for first grade students in Zambia’s Eastern Province that combined infrastructure (electricity via solar power), hardware (projectors and tablets), and educational materials (lesson plans for educators and interactive lessons for learners, both loaded onto the tablets and mapped onto the official Zambian curriculum). After 14 months, the intervention had improved student early-grade reading by 0.4 SDs, oral vocabulary scores by 0.25 SDs, and early-grade math by 0.22 SDs. It also improved students’ achievement by 0.16 SDs on a locally developed assessment. The multifaceted nature of the program, however, makes it challenging to identify the components that are driving the positive effects. Pitchford (2015) evaluated an intervention that provided tablets equipped with educational “apps,” to be used for 30 minutes per day for two months to develop early math skills among students in grades 1 through 3 in Lilongwe, Malawi. The evaluation found positive impacts on math achievement, but the main limitation of the study is that it was conducted in a single school.

Facilitating differentiated instruction

Another way in which technology may improve educational outcomes is by facilitating the delivery of differentiated or individualized instruction. Most developing countries massively expanded access to schooling in recent decades by building new schools and making education more affordable, both by defraying direct costs, as well as compensating for opportunity costs (Duflo, 2001; World Bank, 2018). These initiatives have not only rapidly increased the number of learners enrolled in school, but have also increased the variability in learners’ preparation for schooling. Consequently, a large number of learners perform well below grade-based curricular expectations (see, e.g., Duflo, Dupas, & Kremer, 2011; Pritchett & Beatty, 2015). These learners are unlikely to get much from “one-size-fits-all” instruction, in which a single educator delivers instruction deemed appropriate for the middle (or top) of the achievement distribution (Banerjee & Duflo, 2011). Technology could potentially help these learners by providing them with: (a) instruction and opportunities for practice that adjust to the level and pace of preparation of each individual (known as “computer-adaptive learning” (CAL)); or (b) live, one-on-one tutoring.

Computer-adaptive learning

One of the main comparative advantages of technology is its ability to diagnose students’ initial learning levels and assign students to instruction and exercises of appropriate difficulty. No individual educator—no matter how talented—can be expected to provide individualized instruction to all learners in his/her class simultaneously. In this respect, technology is uniquely positioned to complement traditional teaching. This use of technology could help learners master basic skills and help them get more out of schooling.

Although many software products evaluated in recent years have been categorized as CAL, most rely on a relatively coarse level of differentiation at an initial stage (e.g., a diagnostic test) without further differentiation. We discuss these initiatives under the category of “increasing opportunities for practice” below. CAL initiatives complement an initial diagnostic with dynamic adaptation (i.e., at each response or set of responses from learners) to adjust both the initial level of difficulty and the rate at which it increases or decreases, depending on whether learners’ responses are correct or incorrect.
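To make the distinction concrete, here is a minimal sketch of the kind of dynamic adaptation described above. It is a hypothetical illustration in Python, not code from any of the products evaluated in these studies; the level range, step sizes, and function names are assumptions chosen for clarity.

```python
# Minimal sketch of computer-adaptive practice: an initial diagnostic sets a
# starting difficulty, then each response nudges the level up or down.
# All names, level ranges, and step sizes are illustrative assumptions.

def initial_level(diagnostic_answers: list[bool], levels: int = 10) -> int:
    """Map the share of correct diagnostic answers to a starting level (1..levels)."""
    share_correct = sum(diagnostic_answers) / max(len(diagnostic_answers), 1)
    return max(1, min(levels, round(share_correct * levels)))

def next_level(current: int, was_correct: bool, levels: int = 10,
               step_up: int = 1, step_down: int = 2) -> int:
    """Raise difficulty after a correct answer; lower it faster after a mistake."""
    step = step_up if was_correct else -step_down
    return max(1, min(levels, current + step))

# Example session: a four-item diagnostic followed by five practice responses.
level = initial_level([True, True, False, True])
for correct in [True, True, False, True, True]:
    level = next_level(level, correct)
    print(f"answer {'correct' if correct else 'wrong'} -> next item at level {level}")
```

The point of the sketch is only the adjustment rule: unlike a one-off diagnostic, the difficulty keeps moving with every response, which is what distinguishes CAL from the simpler practice software discussed below.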

Existing evidence on this specific type of program is highly promising. Most famously, Banerjee et al. (2007) evaluated CAL software in Vadodara, in the Indian state of Gujarat, in which grade 4 students were offered two hours of shared computer time per week before and after school, during which they played games that involved solving math problems. The level of difficulty of such problems adjusted based on students’ answers. This program improved math achievement by 0.35 and 0.47 SDs after one and two years of implementation, respectively. Consistent with the promise of personalized learning, the software improved achievement for all students. In fact, one year after the end of the program, students assigned to the program still performed 0.1 SDs better than those assigned to a business-as-usual condition. More recently, Muralidharan et al. (2019) evaluated a “blended learning” initiative in which students in grades 4 through 9 in Delhi, India received 45 minutes of interaction with CAL software for math and language, and 45 minutes of small group instruction before or after going to school. After only 4.5 months, the program improved achievement by 0.37 SDs in math and 0.23 SDs in Hindi. While all learners benefited from the program in absolute terms, the lowest-performing learners benefited the most in relative terms, since they were learning very little in school.

We see two important limitations from this body of research. First, to our knowledge, none of these initiatives has been evaluated when implemented during the school day. Therefore, it is not possible to distinguish the effect of the adaptive software from that of additional instructional time. Second, given that most of these programs were facilitated by local instructors, attempts to distinguish the effect of the software from that of the instructors have been mostly based on noncausal evidence. A frontier challenge in this body of research is to understand whether CAL software can increase the effectiveness of school-based instruction by substituting part of the regularly scheduled time for math and language instruction.

Live one-on-one tutoring

Recent improvements in the speed and quality of videoconferencing, as well as in the connectivity of remote areas, have enabled yet another way in which technology can help personalization: live (i.e., real-time) one-on-one tutoring. While the evidence on in-person tutoring is scarce in developing countries, existing studies suggest that this approach works best when it is used to personalize instruction (see, e.g., Banerjee et al., 2007; Banerji, Berry, & Shotland, 2015; Cabezas, Cuesta, & Gallego, 2011).

There are almost no studies on the impact of online tutoring—possibly due to the lack of hardware and Internet connectivity in low- and middle-income countries. One exception is a recent evaluation by Chemin and Oledan (2020) of an online tutoring program in Kianyaga, Kenya, in which grade 6 students learned English from volunteers at a Canadian university via Skype (videoconferencing software) for one hour per week after school. After 10 months, program beneficiaries performed 0.22 SDs better in a test of oral comprehension, improved their comfort using technology for learning, and became more willing to engage in cross-cultural communication. Importantly, while the tutoring sessions used the official English textbooks and sought in part to help learners with their homework, tutors were trained on several strategies to teach to each learner’s individual level of preparation, focusing on basic skills if necessary. To our knowledge, similar initiatives within a country have not yet been rigorously evaluated.

Expanding opportunities for practice

A third way in which technology may improve the quality of education is by providing learners with additional opportunities for practice. In many developing countries, lesson time is primarily devoted to lectures, in which the educator explains the topic and the learners passively copy explanations from the blackboard. This setup leaves little time for in-class practice. Consequently, learners who did not understand the explanation of the material during lecture struggle when they have to solve homework assignments on their own. Technology could potentially address this problem by allowing learners to review topics at their own pace.

Practice exercises

Technology can help learners get more out of traditional instruction by providing them with opportunities to implement what they learn in class. This approach could, in theory, allow some learners to anchor their understanding of the material through trial and error (i.e., by realizing what they may not have understood correctly during lecture and by getting better acquainted with special cases not covered in-depth in class).

Existing evidence on practice exercises reflects both the promise and the limitations of this use of technology in developing countries. For example, Lai et al. (2013) evaluated a program in Shaanxi, China where students in grades 3 and 5 were required to attend two 40-minute remedial sessions per week in which they first watched videos that reviewed the material that had been introduced in their math lessons that week and then played games to practice the skills introduced in the video. After four months, the intervention improved math achievement by 0.12 SDs. Many other evaluations of comparable interventions have found similar small-to-moderate results (see, e.g., Lai, Luo, Zhang, Huang, & Rozelle, 2015; Lai et al., 2012; Mo et al., 2015; Pitchford, 2015). These effects, however, have been consistently smaller than those of initiatives that adjust the difficulty of the material based on students’ performance (e.g., Banerjee et al., 2007; Muralidharan, et al., 2019). We hypothesize that these programs do little for learners who perform several grade levels behind curricular expectations, and who would benefit more from a review of foundational concepts from earlier grades.

We see two important limitations from this research. First, most initiatives that have been evaluated thus far combine instructional videos with practice exercises, so it is hard to know whether their effects are driven by the former or the latter. In fact, the program in China described above allowed learners to ask their peers whenever they did not understand a difficult concept, so it potentially also captured the effect of peer-to-peer collaboration. To our knowledge, no studies have addressed this gap in the evidence.

Second, most of these programs are implemented before or after school, so we cannot distinguish the effect of additional instructional time from that of the actual opportunity for practice. The importance of this question was first highlighted by Linden (2008), who compared two delivery mechanisms for game-based remedial math software for students in grades 2 and 3 in a network of schools run by a nonprofit organization in Gujarat, India: one in which students interacted with the software during the school day and another one in which students interacted with the software before or after school (in both cases, for three hours per day). After a year, the first version of the program had reduced students’ math achievement by 0.57 SDs, while the second had a null effect. This study suggested that computer-assisted learning is a poor substitute for regular instruction when that instruction is of high quality, as was the case in this well-functioning private network of schools.

In recent years, several studies have sought to remedy this shortcoming. Mo et al. (2014) were among the first to evaluate practice exercises delivered during the school day. They evaluated an initiative in Shaanxi, China in which students in grades 3 and 5 were required to interact with software similar to that used in Lai et al. (2013) for two 40-minute sessions per week. The main limitation of this study, however, is that the program was delivered during regularly scheduled computer lessons, so it could not determine the impact of substituting regular math instruction. Similarly, Mo et al. (2020) evaluated a self-paced and a teacher-directed version of a similar program for English for grade 5 students in Qinghai, China. Yet, the key shortcoming of this study is that the teacher-directed version added several components that may also influence achievement, such as increased opportunities for teachers to provide students with personalized assistance when they struggled with the material. Ma, Fairlie, Loyalka, and Rozelle (2020) compared the effectiveness of remedial instruction delivered in additional time to students in grades 4 to 6 in Shaanxi, China, through either computer-assisted software or workbooks. This study indicates whether additional instructional time is more effective when it relies on technology, but it does not address the question of whether school systems may improve the productivity of instructional time during the school day by substituting educator-led with computer-assisted instruction.

Increasing learner engagement

Another way in which technology may improve education is by increasing learners’ engagement with the material. In many school systems, regular “chalk and talk” instruction prioritizes time for educators’ exposition over opportunities for learners to ask clarifying questions and/or contribute to class discussions. This, combined with the fact that many developing-country classrooms include a very large number of learners (see, e.g., Angrist & Lavy, 1999; Duflo, Dupas, & Kremer, 2015), may partially explain why the majority of those students are several grade levels behind curricular expectations (e.g., Muralidharan, et al., 2019; Muralidharan & Zieleniak, 2014; Pritchett & Beatty, 2015). Technology could potentially address these challenges by: (a) using video tutorials for self-paced learning and (b) presenting exercises as games and/or gamifying practice.

Video tutorials

Technology can potentially increase learner effort and understanding of the material by finding new and more engaging ways to deliver it. Video tutorials designed for self-paced learning—as opposed to videos for whole class instruction, which we discuss under the category of “prerecorded lessons” above—can increase learner effort in multiple ways, including: allowing learners to focus on topics with which they need more help, letting them correct errors and misconceptions on their own, and making the material appealing through visual aids. They can increase understanding by breaking the material into smaller units and tackling common misconceptions.

In spite of the popularity of instructional videos, there is relatively little evidence on their effectiveness. Yet, two recent evaluations of different versions of the Khan Academy portal, which mainly relies on instructional videos, offer some insight into their impact. First, Ferman, Finamor, and Lima (2019) evaluated an initiative in 157 public primary and middle schools in five cities in Brazil in which teachers took their students in grades 5 and 9 to the computer lab to learn math from the platform for 50 minutes per week. The authors found that, while the intervention slightly improved learners’ attitudes toward math, these changes did not translate into better performance in this subject. The authors hypothesized that this could be due to the reduction of teacher-led math instruction.

More recently, Büchel, Jakob, Kühnhanss, Steffen, and Brunetti (2020) evaluated an after-school, offline delivery of the Khan Academy portal in grades 3 through 6 in 302 primary schools in Morazán, El Salvador. Students in this study received 90 minutes per week of additional math instruction (effectively nearly doubling total math instruction per week) through teacher-led regular lessons, teacher-assisted Khan Academy lessons, or similar lessons assisted by technical supervisors with no content expertise. (Importantly, the first group provided differentiated instruction, which is not the norm in Salvadorian schools). All three groups outperformed both schools that received no additional lessons and classrooms without additional lessons in the schools that received the program. The teacher-assisted Khan Academy lessons performed 0.24 SDs better, the supervisor-led lessons 0.22 SDs better, and the teacher-led regular lessons 0.15 SDs better, but the authors could not determine whether the effects of the three versions differed from one another.

Together, these studies suggest that instructional videos work best when provided as a complement to, rather than as a substitute for, regular instruction. Yet, the main limitation of these studies is the multifaceted nature of the Khan Academy portal, which also includes other components found to improve learner achievement, such as differentiated instruction by students’ learning levels. While the software does not provide the type of personalization discussed above, learners are asked to take a placement test and, based on their score, educators assign them different work. Therefore, it is not clear from these studies whether the effects from Khan Academy are driven by its instructional videos or by the software’s ability to provide differentiated activities when combined with placement tests.

Games and gamification

Technology can also increase learner engagement by presenting exercises as games and/or by encouraging learners to play and compete with others (e.g., using leaderboards and rewards)—an approach known as “gamification.” Both approaches can increase learner motivation and effort by presenting learners with entertaining opportunities for practice and by leveraging peers as commitment devices.
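As a toy illustration of the mechanics just described (points for completed exercises and a class leaderboard), the following sketch is hypothetical: the point values, names, and functions are assumptions for illustration, not a description of the software evaluated in the studies below.

```python
# Toy sketch of gamified practice: award points for attempts and rank
# learners on a class leaderboard. Point values and names are illustrative.
from collections import defaultdict

points = defaultdict(int)

def record_attempt(learner: str, correct: bool, difficulty: int) -> None:
    """Give more points for harder items; a single consolation point keeps effort visible."""
    points[learner] += difficulty * 10 if correct else 1

def leaderboard(top_n: int = 3) -> list[tuple[str, int]]:
    """Return the top scorers, highest first."""
    return sorted(points.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

record_attempt("Ana", True, difficulty=3)
record_attempt("Luis", True, difficulty=2)
record_attempt("Ana", False, difficulty=4)
print(leaderboard())  # [('Ana', 31), ('Luis', 20)]
```

The design question such programs face is visible even in this toy: rankings that motivate some learners can discourage collaboration among others, a tension the evidence below also reflects.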

There are very few studies on the effects of games and gamification in low- and middle-income countries. Recently, Araya, Arias Ortiz, Bottan, and Cristia (2019) evaluated an initiative in which grade 4 students in Santiago, Chile were required to participate in two 90-minute sessions per week during the school day with instructional math software featuring individual and group competitions (e.g., tracking each learner’s standing in his/her class and tournaments between sections). After nine months, the program led to improvements of 0.27 SDs in the national student assessment in math (it had no spillover effects on reading). However, it had mixed effects on non-academic outcomes. Specifically, the program increased learners’ willingness to use computers to learn math, but, at the same time, increased their anxiety toward math and negatively impacted learners’ willingness to collaborate with peers. Finally, given that one of the weekly sessions replaced regular math instruction and the other one represented additional math instructional time, it is not clear whether the academic effects of the program are driven by the software or the additional time devoted to learning math.

The prognosis: How can school systems adopt interventions that match their needs?

Here are five specific and sequential guidelines for decisionmakers to realize the potential of education technology to accelerate student learning.

1. Take stock of how your current schools, educators, and learners are engaging with technology.

Carry out a short in-school survey to understand the current practices and potential barriers to adoption of technology (we have included suggested survey instruments in the Appendices); use this information in your decisionmaking process. For example, we learned from conversations with current and former ministers of education from various developing regions that a common limitation to technology use is regulations that hold school leaders accountable for damages to or losses of devices. Another common barrier is lack of access to electricity and Internet, or even the availability of sufficient outlets for charging devices in classrooms. Understanding basic infrastructure and regulatory limitations to the use of education technology is a first necessary step. But addressing these limitations will not guarantee that introducing or expanding technology use will accelerate learning. The next steps are thus necessary.

“In Africa, the biggest limit is connectivity. Fiber is expensive, and we don’t have it everywhere. The continent is creating a digital divide between cities, where there is fiber, and the rural areas.  The [Ghanaian] administration put in schools offline/online technologies with books, assessment tools, and open source materials. In deploying this, we are finding that again, teachers are unfamiliar with it. And existing policies prohibit students to bring their own tablets or cell phones. The easiest way to do it would have been to let everyone bring their own device. But policies are against it.” H.E. Matthew Prempeh, Minister of Education of Ghana, on the need to understand the local context.

2. Consider how the introduction of technology may affect the interactions among learners, educators, and content.

Our review of the evidence indicates that technology may accelerate student learning when it is used to scale up access to quality content, facilitate differentiated instruction, increase opportunities for practice, or when it increases learner engagement. For example, will adding electronic whiteboards to classrooms facilitate access to more quality content or differentiated instruction? Or will these expensive boards be used in the same way as the old chalkboards? Will providing one device (laptop or tablet) to each learner facilitate access to more and better content, or offer students more opportunities to practice and learn? Solely introducing technology in classrooms without additional changes is unlikely to lead to improved learning and may be quite costly. If you cannot clearly identify how the interactions among the three key components of the instructional core (educators, learners, and content) may change after the introduction of technology, then it is probably not a good idea to make the investment. See Appendix A for guidance on the types of questions to ask.

3. Once decisionmakers have a clear idea of how education technology can help accelerate student learning in a specific context, it is important to define clear objectives and goals and establish ways to regularly assess progress and make course corrections in a timely manner.

For instance, is the education technology expected to ensure that learners in early grades excel in foundational skills—basic literacy and numeracy—by age 10? If so, will the technology provide quality reading and math materials, ample opportunities to practice, and engaging materials such as videos or games? Will educators be empowered to use these materials in new ways? And how will progress be measured and adjusted?

4. How this kind of reform is approached can matter immensely for its success.

It is easy to nod to issues of “implementation,” but that needs to be more than rhetorical. Keep in mind that good use of education technology requires thinking about how it will affect learners, educators, and parents. After all, giving learners digital devices will make no difference if they get broken, are stolen, or go unused. Classroom technologies only matter if educators feel comfortable putting them to work. Since good technology is generally about complementing or amplifying what educators and learners already do, it is almost always a mistake to mandate programs from on high. It is vital that technology be adopted with the input of educators and families and with attention to how it will be used. If technology goes unused or if educators use it ineffectually, the results will disappoint—no matter the virtuosity of the technology. Indeed, unused education technology can be an unnecessary expenditure for cash-strapped education systems. This is why surveying context, listening to voices in the field, examining how technology is used, and planning for course correction are essential.

5. It is essential to communicate with a range of stakeholders, including educators, school leaders, parents, and learners.

Technology can feel alien in schools, confuse parents and (especially) older educators, or become an alluring distraction. Good communication can help address all of these risks. Taking care to listen to educators and families can help ensure that programs are informed by their needs and concerns. At the same time, deliberately and consistently explaining what technology is and is not supposed to do, and how it can be most effectively used, can make it more likely that programs work as intended. For instance, if teachers fear that technology is intended to reduce the need for educators, they will tend to be hostile; if they believe that it is intended to assist them in their work, they will be more receptive. Absent effective communication, it is easy for programs to “fail” not because of the technology but because of how it was used. In short, past experience in rolling out education programs indicates that it is as important to have a strong intervention design as it is to have a solid plan to socialize it among stakeholders.


  • Full Playbook – Realizing the promise: How can education technology improve learning for all?
  • References
  • Appendix A – Instruments to assess availability and use of technology
  • Appendix B – List of reviewed studies
  • Appendix C – How may technology affect interactions among students, teachers, and content?

About the Authors

Alejandro J. Ganimian, Emiliana Vegas, and Frederick M. Hess


3 Questions: Technology roadmapping in teaching and industry


Innovation is rarely accidental. Behind every new invention and product, including the device you are using to read this story, are years of research, investment, and planning. Organizations that want to reach these milestones in the fastest and most efficient way possible use technology roadmaps.

Olivier de Weck, the Apollo Program Professor of Astronautics and professor of engineering systems, taps into his expertise in systems design and engineering to help company leaders develop their own path to progress. His work has led to an MIT graduate course, two MIT Professional Education classes, and the textbook "Technology Roadmapping and Development: A Quantitative Approach to the Management of Technology." Recently, his textbook was honored with the Most Promising New Textbook Award from the Textbook and Academic Authors Association. The textbook serves as a guide not only to students but also to company leaders. Aerospace designer and manufacturer Airbus, defense technology laboratory Draper, and package delivery giant UPS have implemented de Weck's methods. Here, de Weck describes the value of technology roadmapping.

Q: What is technology roadmapping, and why is it important?

A: A technology roadmap is a planning tool. It connects current products, services, and missions to future endeavors, and identifies the specific technologies needed to achieve them. 

Let’s say an organization wants to build a spacecraft to explore an asteroid in the farthest reaches of our solar system. It will need a new kind of electric thruster technology so that it can travel to the asteroid faster and more efficiently than what is currently possible. A technology roadmap details several factors, such as the level of performance needed to meet the goal and how to measure progress. The guide also links various responsibilities within an organization, including strategy, product development, research and development (R&D), and finance, so everyone understands the technologies that are being funded and how they will benefit the company. 

Technology roadmapping has been in use for over five decades. For a long time, it was taught in business schools in a more general and qualitative way, but the practice has evolved over the years. The technology roadmapping I teach and write about uses quantitative engineering analysis and connects it to strategic thinking. From 2017 to 2018, I used and refined this approach for Airbus, which has a $1 billion R&D budget. Together, we developed over 40 technology roadmaps, which included a plan to build ZEROe, a commercial aircraft that will run on hydrogen fuel, by 2035. 
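To make the elements de Weck describes above concrete (a performance target, a measure of progress, and the organizational functions a roadmap links), here is a minimal hypothetical sketch in Python; the field names and the electric-thruster numbers are illustrative assumptions, not figures from the textbook or from Airbus.

```python
# Hypothetical sketch of a technology roadmap entry tying together a target,
# a figure of merit for tracking progress, and the linked organizational functions.
from dataclasses import dataclass, field

@dataclass
class RoadmapEntry:
    technology: str
    figure_of_merit: str                # how progress toward the target is measured
    current_value: float
    target_value: float
    target_year: int
    linked_functions: list[str] = field(default_factory=list)

    def gap(self) -> float:
        """Remaining improvement needed to reach the target."""
        return self.target_value - self.current_value

# Illustrative numbers only; not actual thruster specifications.
thruster = RoadmapEntry(
    technology="Electric thruster",
    figure_of_merit="Specific impulse (seconds)",
    current_value=1800.0,
    target_value=3000.0,
    target_year=2032,
    linked_functions=["strategy", "product development", "R&D", "finance"],
)
print(f"{thruster.technology}: close a gap of {thruster.gap():.0f} s by {thruster.target_year}")
```

A quantitative roadmap in the sense de Weck describes would go further, modeling how the figure of merit evolves over time and what R&D investment each increment requires, but even this stripped-down record shows how a single entry connects strategy, R&D, and finance around one measurable target.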

Q: Are technology roadmaps used widely in industry today, and what gaps in knowledge/processes does your approach address?   

A: Colleagues from the University of Cambridge and the Fraunhofer Institute in Germany and I recently conducted an industry-wide survey about technology roadmapping. Of the 200 companies that participated, 62 percent said they use technology roadmaps to make strategic investment decisions and 32 percent update them yearly. Yet only 11 percent of firms plan technologies 10 years out. This is a bit concerning because technology does not move as fast as many people believe. Using Airbus’s ZEROe aircraft as an example, it is important to think 10 or even 20 years ahead, not just within three to five years. 

My approach to technology roadmapping uses a method I call Advanced Technology Roadmap Architecture (ATRA). It provides a step-by-step methodology to create a technology roadmap that is more rigorous and has a longer time horizon than traditional roadmaps. ATRA asks four essential questions: Where are we today, where could we go, where should we go, and where are we going? Instead of technologies, I want people to think of these questions as a guide to their retirement investing. You could invest in some high-risk mutual funds, low-risk bonds, or an index fund that will follow the market. You would pick investments that reflect your future goals and risk tolerances. ATRA works in the same way. It enables organizations to select the right mix of R&D based on different scenarios and different risk tolerances.

Q: Can you share how you designed your book and the courses, including 16.887/EM.427, to help students understand and apply technology roadmapping?   

A: My time at Airbus allowed me to implement and battle-test technology roadmapping and ATRA. When I returned to MIT in 2019, I had already drafted chapters of the book and MIT students provided great feedback, which allowed me to refine and improve the book to the point where it would be useful and understandable to future MIT engineering and business students, industry practitioners, and C-level executives. 

An important feature of both my textbook and class that may not be obvious is my focus on history. With innovation moving as fast as it is, it is easy to claim a never-been-done-before technology. That is often not the case — for example, one student did a technology roadmap of virtual reality headsets. He realized that people were doing virtual reality in the 1960s and 70s. It was super crude, clunky, and the resolution was poor. Still, there is a 60-year history that needs to be understood and acknowledged. My students and I have created a library of nearly 100 roadmaps on wide-ranging technologies, including superconducting nuclear fusion, lab-grown meat, and bioplastics. Each one traces an innovation’s history.


A new future of work: The race to deploy AI and raise skills in Europe and beyond

At a glance

Amid tightening labor markets and a slowdown in productivity growth, Europe and the United States face shifts in labor demand, spurred by AI and automation. Our updated modeling of the future of work finds that demand for workers in STEM-related, healthcare, and other high-skill professions would rise, while demand for occupations such as office workers, production workers, and customer service representatives would decline. By 2030, in a midpoint adoption scenario, up to 30 percent of current hours worked could be automated, accelerated by generative AI (gen AI). Efforts to achieve net-zero emissions, an aging workforce, and growth in e-commerce, as well as infrastructure and technology spending and overall economic growth, could also shift employment demand.

By 2030, Europe could require up to 12 million occupational transitions, double the prepandemic pace. In the United States, required transitions could reach almost 12 million, in line with the prepandemic norm. Both regions navigated even higher levels of labor market shifts at the height of the COVID-19 period, suggesting that they can handle this scale of future job transitions. The pace of occupational change is broadly similar among countries in Europe, although the specific mix reflects their economic variations.

Businesses will need a major skills upgrade. Demand for technological and social and emotional skills could rise as demand for physical and manual and higher cognitive skills stabilizes. Surveyed executives in Europe and the United States expressed a need not only for advanced IT and data analytics but also for critical thinking, creativity, and teaching and training—skills they report as currently being in short supply. Companies plan to focus on retraining workers, more than hiring or subcontracting, to meet skill needs.

Workers with lower wages face challenges of redeployment as demand reweights toward occupations with higher wages in both Europe and the United States. Occupations with lower wages are likely to see reductions in demand, and workers will need to acquire new skills to transition to better-paying work. If that doesn’t happen, there is a risk of a more polarized labor market, with more higher-wage jobs than workers and too many workers for existing lower-wage jobs.

Choices made today could revive productivity growth while creating better societal outcomes. Embracing the path of accelerated technology adoption with proactive worker redeployment could help Europe achieve an annual productivity growth rate of up to 3 percent through 2030. However, slow adoption would limit that to 0.3 percent, closer to today’s level of productivity growth in Western Europe. Slow worker redeployment would leave millions unable to participate productively in the future of work.


Demand will change for a range of occupations through 2030, including growth in STEM- and healthcare-related occupations, among others

This report focuses on labor markets in nine major economies in the European Union along with the United Kingdom, in comparison with the United States. Technology, including most recently the rise of gen AI, along with other factors, will spur changes in the pattern of labor demand through 2030. Our study, which uses an updated version of the McKinsey Global Institute future of work model, seeks to quantify the occupational transitions that will be required and the changing nature of demand for different types of jobs and skills.

Our methodology

We used methodology consistent with other McKinsey Global Institute reports on the future of work to model trends of job changes at the level of occupations, activities, and skills. For this report, we focused our analysis on the 2022–30 period.

Our model estimates net changes in employment demand by sector and occupation; we also estimate occupational transitions, or the net number of workers who need to move out of each type of occupation, based on which occupations face declining demand by 2030 relative to current employment in 2022. We included ten countries in Europe: nine EU members—the Czech Republic, Denmark, France, Germany, Italy, Netherlands, Poland, Spain, and Sweden—and the United Kingdom. For the United States, we build on estimates published in our 2023 report Generative AI and the future of work in America.

We included multiple drivers in our modeling: automation potential, net-zero transition, e-commerce growth, remote work adoption, increases in income, aging populations, technology investments, and infrastructure investments.

Two scenarios are used to bookend the work-automation model: “late” and “early.” For Europe, we modeled a “faster” scenario and a “slower” one. For the faster scenario, we use the midpoint—the arithmetical average between our late and early scenarios. For the slower scenario, we use a “mid late” trajectory, an arithmetical average between a late adoption scenario and the midpoint scenario. For the United States, we use the midpoint scenario, based on our earlier research.

We also estimate the productivity effects of automation, using GDP per full-time-equivalent (FTE) employee as the measure of productivity. We assumed that workers displaced by automation rejoin the workforce at 2022 productivity levels, net of automation, and in line with the expected 2030 occupational mix.
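To make the scenario construction and the productivity measure described above concrete, here is a small numerical sketch. The automation shares and the GDP and employment figures are placeholders chosen for illustration; they are not estimates from the model.

```python
# Illustrative sketch of the scenario averaging and the productivity measure
# described above. All input figures are placeholders, not model estimates.

def midpoint(early: float, late: float) -> float:
    """'Faster' scenario: the arithmetic average of the early and late adoption scenarios."""
    return (early + late) / 2

def mid_late(early: float, late: float) -> float:
    """'Slower' scenario: the arithmetic average of the late scenario and the midpoint."""
    return (late + midpoint(early, late)) / 2

early_share, late_share = 0.35, 0.15   # hypothetical shares of hours automatable by 2030
print(f"faster scenario: {midpoint(early_share, late_share):.2f}")   # 0.25
print(f"slower scenario: {mid_late(early_share, late_share):.2f}")   # 0.20

def productivity(gdp: float, fte_employees: float) -> float:
    """Productivity measured as GDP per full-time-equivalent (FTE) employee."""
    return gdp / fte_employees

# Placeholder economy: $15 trillion GDP and 200 million FTE employees.
print(f"GDP per FTE: {productivity(15e12, 200e6):,.0f}")   # 75,000
```

The same averaging logic applies regardless of the underlying shares; the report's actual scenario values come from the McKinsey Global Institute model rather than from numbers like these.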

Amid tightening labor markets and a slowdown in productivity growth, Europe and the United States face shifts in labor demand, spurred not only by AI and automation but also by other trends, including efforts to achieve net-zero emissions, an aging population, infrastructure spending, technology investments, and growth in e-commerce, among others (see sidebar, “Our methodology”).

Our analysis finds that demand for occupations such as health professionals and other STEM-related professionals would grow by 17 to 30 percent between 2022 and 2030 (Exhibit 1).

By contrast, demand for workers in food services, production work, customer services, sales, and office support—all of which declined over the 2012–22 period—would continue to decline until 2030. These jobs involve a high share of repetitive tasks, data collection, and elementary data processing—all activities that automated systems can handle efficiently.

Up to 30 percent of hours worked could be automated by 2030, boosted by gen AI, leading to millions of required occupational transitions

By 2030, our analysis finds that about 27 percent of current hours worked in Europe and 30 percent of hours worked in the United States could be automated, accelerated by gen AI. Our model suggests that roughly 20 percent of hours worked could still be automated even without gen AI, implying a significant acceleration.

These trends will play out in labor markets in the form of workers needing to change occupations. By 2030, under the faster adoption scenario we modeled, Europe could require up to 12.0 million occupational transitions, affecting 6.5 percent of current employment. That is double the prepandemic pace (Exhibit 2). Under a slower scenario we modeled for Europe, the number of occupational transitions needed would amount to 8.5 million, affecting 4.6 percent of current employment. In the United States, required transitions could reach almost 12.0 million, affecting 7.5 percent of current employment. Unlike in Europe, this magnitude of transitions is broadly in line with the prepandemic norm.

Both regions navigated even higher levels of labor market shifts at the height of the COVID-19 period. While these were abrupt and painful to many, given the forced nature of the shifts, the experience suggests that both regions have the ability to handle this scale of future job transitions.


Businesses will need a major skills upgrade

The occupational transitions noted above herald substantial shifts in workforce skills in a future in which automation and AI are integrated into the workplace (Exhibit 3). Workers use multiple skills to perform a given task, but for the purposes of our quantification, we identified the predominant skill used.

Demand for technological skills could see substantial growth in Europe and in the United States (increases of 25 percent and 29 percent, respectively, in hours worked by 2030 compared to 2022) under our midpoint scenario of automation adoption (which is the faster scenario for Europe).

Demand for social and emotional skills could rise by 11 percent in Europe and by 14 percent in the United States. Underlying this increase is higher demand for roles requiring interpersonal empathy and leadership skills. These skills are crucial in healthcare and managerial roles in an evolving economy that demands greater adaptability and flexibility.

Conversely, demand for work in which basic cognitive skills predominate is expected to decline by 14 percent. Basic cognitive skills are required primarily in office support or customer service roles, which are highly susceptible to being automated by AI. Work activities characterized by basic cognitive skills that are expected to see significant drops in demand include basic data processing as well as basic literacy, numeracy, and communication.

Demand for work in which higher cognitive skills predominate could also decline slightly, according to our analysis. While creativity is expected to remain highly sought after, with a potential increase of 12 percent by 2030, work activities characterized by other advanced cognitive skills such as advanced literacy and writing, along with quantitative and statistical skills, could decline by 19 percent.

Demand for physical and manual skills, on the other hand, could remain roughly level with the present. These skills remain the largest share of workforce skills, representing about 30 percent of total hours worked in 2022. Growth in demand for these skills between 2022 and 2030 could come from the build-out of infrastructure and higher investment in low-emissions sectors, while declines would be in line with continued automation in production work.

Business executives report skills shortages today and expect them to worsen

A survey we conducted of C-suite executives in five countries shows that companies are already grappling with skills challenges, including a skills mismatch, particularly in technological, higher cognitive, and social and emotional skills: about one-third of the more than 1,100 respondents report a shortfall in these critical areas. At the same time, a notable number of executives say they have enough employees with basic cognitive skills and, to a lesser extent, physical and manual skills.

Within technological skills, companies in our survey reported that their most significant shortages are in advanced IT skills and programming, advanced data analysis, and mathematical skills. Among higher cognitive skills, significant shortfalls are seen in critical thinking and problem structuring and in complex information processing. About 40 percent of the executives surveyed pointed to a shortage of workers with these skills, which are needed for working alongside new technologies (Exhibit 4).


Companies see retraining as key to acquiring needed skills and adapting to the new work landscape

Surveyed executives expect significant changes to their workforce skill levels and worry about not finding the right skills by 2030. More than one in four survey respondents said that failing to capture the needed skills could directly harm financial performance and indirectly impede their efforts to leverage the value from AI.

To acquire the skills they need, companies have three main options: retraining, hiring, and contracting workers. Our survey suggests that executives are looking at all three options, with retraining the most widely reported tactic for addressing the skills mismatch: among companies that mentioned retraining as one of their tactics, executives said they would retrain 32 percent of their workforce on average. The scale of retraining needs varies by industry. For example, respondents in the automotive industry expect 36 percent of their workforce to be retrained, compared with 28 percent in financial services. Among those who mentioned hiring or contracting as tactics to address the skills mismatch, executives said they would hire an average of 23 percent of their workforce and contract an average of 18 percent.

Occupational transitions will affect high-, medium-, and low-wage workers differently

All ten European countries we examined for this report may see increasing demand for top-earning occupations. By contrast, workers in the two lowest-wage-bracket occupations could be three to five times more likely to have to change occupations compared to the top wage earners, our analysis finds. The disparity is much higher in the United States, where workers in the two lowest-wage-bracket occupations are up to 14 times more likely to face occupational shifts than the highest earners. In Europe, the middle-wage population could be twice as affected by occupational transitions as the same population in the United States, with 7.3 percent of the working population potentially facing occupational transitions.

Enhancing human capital at the same time as deploying the technology rapidly could boost annual productivity growth


Organizations and policy makers have choices to make; the way they approach AI and automation, along with human capital augmentation, will affect economic and societal outcomes.

We have attempted to quantify at a high level the potential effects of different stances to AI deployment on productivity in Europe. Our analysis considers two dimensions. The first is the adoption rate of AI and automation technologies. We consider the faster scenario and the late scenario for technology adoption. Faster adoption would unlock greater productivity growth potential but also, potentially, more short-term labor disruption than the late scenario.

The second dimension we consider is the level of automated worker time that is redeployed into the economy. This represents the ability to redeploy the time gained by automation and productivity gains (for example, new tasks and job creation). This could vary depending on the success of worker training programs and strategies to match demand and supply in labor markets.

We based our analysis on two potential scenarios: either all displaced workers are able to fully rejoin the economy at a productivity level similar to that of 2022, or only some 80 percent of the automated workers’ time is redeployed into the economy.

Exhibit 5 illustrates the various outcomes in terms of annual productivity growth rate. The top-right quadrant illustrates the highest economy-wide productivity, with an annual productivity growth rate of up to 3.1 percent. It requires fast adoption of technologies as well as full redeployment of displaced workers. The top-left quadrant also demonstrates technology adoption on a fast trajectory and shows a relatively high productivity growth rate (up to 2.5 percent). However, about 6.0 percent of total hours worked (equivalent to 10.2 million people not working) would not be redeployed in the economy. Finally, the two bottom quadrants depict the failure to adopt AI and automation, leading to limited productivity gains but also limited labor market disruption.


Four priorities for companies

The adoption of automation technologies will be decisive in protecting businesses’ competitive advantage in an automation and AI era. To ensure successful deployment at a company level, business leaders can embrace four priorities.

Understand the potential. Leaders need to understand the potential of these technologies, notably including how AI and gen AI can augment and automate work. This includes estimating both the total capacity that these technologies could free up and their impact on role composition and skills requirements. Understanding this allows business leaders to frame their end-to-end strategy and adoption goals with regard to these technologies.

Plan a strategic workforce shift. Once they understand the potential of automation technologies, leaders need to plan the company’s shift toward readiness for the automation and AI era. This requires sizing the workforce and skill needs, based on strategically identified use cases, to assess the potential future talent gap. From this analysis will flow details about the extent of recruitment of new talent, upskilling, or reskilling of the current workforce that is needed, as well as where to redeploy freed capacity to more value-added tasks.

Prioritize people development. To ensure that the right talent is on hand to sustain the company strategy during all transformation phases, leaders could consider strengthening their capabilities to identify, attract, and recruit future AI and gen AI leaders in a tight market. They will also likely need to accelerate the building of AI and gen AI capabilities in the workforce. Nontechnical talent will also need training to adapt to the changing skills environment. Finally, leaders could deploy an HR strategy and operating model to fit the post–gen AI workforce.

Pursue the executive-education journey on automation technologies. Leaders also need to undertake their own education journey on automation technologies to maximize their contributions to their companies during the coming transformation. This includes empowering senior managers to explore the implications of automation technologies and then to serve as role models for others, as well as bringing all company leaders together to create a dedicated road map to drive business and employee value.

AI and the toolbox of advanced new technologies are evolving at a breathtaking pace. For companies and policy makers, these technologies are highly compelling because they promise a range of benefits, including higher productivity, which could lift growth and prosperity. Yet, as this report has sought to illustrate, making full use of the advantages on offer will also require paying attention to the critical element of human capital. In the best-case scenario, workers’ skills will develop and adapt to new technological challenges. Achieving this goal in our new technological age will be highly challenging—but the benefits will be great.

Eric Hazan is a McKinsey senior partner based in Paris; Anu Madgavkar and Michael Chui are McKinsey Global Institute partners based in New Jersey and San Francisco, respectively; Sven Smit is chair of the McKinsey Global Institute and a McKinsey senior partner based in Amsterdam; Dana Maor is a McKinsey senior partner based in Tel Aviv; Gurneet Singh Dandona is an associate partner and a senior expert based in New York; and Roland Huyghues-Despointes is a consultant based in Paris.

Applications sought for Future of Work Student Summer Research Grant

22 May 2024

From: The Institute for Creativity, Arts, and Technology

The Center for Future Work Places and Practices (CFWPP) is offering the inaugural Future of Work Student Summer Research Grant. Both undergraduate and graduate students at Virginia Tech are eligible for the research grant. 

Interested students should identify and work with an affiliated CFWPP faculty member to develop a research project that can be completed over the summer. The research projects should align with the four research focus areas of CFWPP: (1) Workforce development, (2) Health in the workplace, (3) Responsible technologies at work, and (4) Sustainable work.

Selected students will be affiliated with CFWPP and expected to give a presentation on their research projects at the start of the fall semester. They will receive a $500 award and a certificate of completion at the end of the project. The faculty supervisors will also receive $500 to be applied to their research.

The submission deadline is 5 p.m. May 24. The grant is generously supported by Professors Lucy Qiu and David Wang.

May 17, 2024

Research: Technology is changing how companies do business

by Sarah Mangus-Sharpe, Cornell University

In the fast-paced world of modern business, technology plays a crucial role in shaping how companies operate. One area where this impact is particularly significant is in the organization of production chains—specifically the way goods are made and distributed.

A new study from the Cornell SC Johnson College of Business advances understanding of the U.S. production chain evolution amidst technological progress in information technology (IT), shedding light on the complex connections between business IT investments and organizational design.

Advances in IT have sparked significant changes in how companies design their production processes. In the paper "Production Chain Organization in the Digital Age: Information Technology Use and Vertical Integration in U.S. Manufacturing," published April 30 in Management Science, Chris Forman, the Peter and Stephanie Nolan Professor in the Dyson School of Applied Economics and Management, and his co-author delved into what these changes mean for businesses and consumers.

In running a manufacturing plant, a key decision is how much of the production process is handled in-house and how much is outsourced to other companies. This decision, known as vertical integration, can have big implications for a business. Advances in information and communication technology, such as those brought about by the internet, shifted the network of production flows for many firms.

Forman and Kristina McElheran, assistant professor of strategic management at the University of Toronto, analyzed U.S. Census Bureau data on over 5,600 manufacturing plants to see how the production chains of businesses were affected by the internet revolution. The census data allowed them to look inside the relationships among production units within and between companies, and to see how transaction flows changed after companies invested in internet-enabled technology that facilitated coordination between them.

The production units of many of the companies in their study concurrently sold to internal and external customers, a mix they refer to as plural selling. They found that the reduction in communication costs enabled by the internet shifted the mix toward more sales outside of the firm, or less vertical integration.

"The internet has made it cheaper and faster for companies to communicate and share information with each other. This means they can work together more efficiently without the need for as much vertical integration," said Forman.

While some might worry that relying on external partners could make businesses more vulnerable, the research suggests otherwise. In fact, companies that were already using a plural governance approach before the internet age seem to be the most adaptable to these changes. Production units that were capacity-constrained were also among those that made the most significant changes to transaction flows after new technology investments.

"Technology is continuing to reshape the way companies operate and are organized," Forman said. "More recently, changes in the use of analytics in companies have been accompanied by changes in organizations, and the same is very likely ongoing with newer investments in artificial intelligence."

The research highlights the importance of staying ahead of the curve in technology. Companies that embrace digital technologies now are likely to be the ones that thrive in the future. And while there are still many unanswered questions about how these changes will play out, one thing is clear: The relationship between technology and business is only going to become more and more intertwined in the future.

Journal information: Management Science

Provided by Cornell University

Stanford University

Social Science Research Professional 2

School of Medicine, Stanford, California, United States

The Division of Adolescent Medicine at Stanford University School of Medicine is seeking a Social Science Research Professional 2 to support health services research and health policy projects and programs in child and adolescent health. The ideal candidate is highly motivated, independent, organized, mission-driven, and experienced in health services research, public health and policy work related to preventive services, tobacco/secondhand smoke and nicotine control, and global health. The candidate will have the opportunity to help build an interdisciplinary research team to conduct work with broad scientific and public health impact in improving care delivery to adolescents and young adults. Our Division encompasses numerous aspects of adolescent health research, and provides a strong community of scientific colleagues, post-doctoral trainees, and students. 

Duties include:

  • Assist in designing and independently conduct portions of research project(s). Make recommendations on experimental design and/or research direction to the principal investigator.
  • Develop and implement new, nonstandard procedures and research protocols where appropriate protocols are not described in the literature or where modification or adaptation of standard procedures and protocols is required, with the supervisor providing general guidance and suggestions.
  • Interpret, synthesize, and analyze data and results using scientific or statistical techniques.
  • Solve problems and make decisions that affect the direction of the research and result in independent contributions to the overall project.
  • Participate in multidisciplinary teams across different faculties or schools.
  • Perform ongoing literature review to remain current with related research; apply it to ongoing research and the development of new protocols.
  • Contribute substantively to the preparation of papers for publication, especially the results section, and to publication of findings. Present ongoing work to colleagues and/or at academic conferences.
  • Co-author sections of research publications and regulatory reports as needed.
  • Complete project-related administrative and budgetary reports as needed.
  • Supervise (either formally or informally) staff or students as needed, including oversight and direction on techniques, as well as consultation on project work. Serve as a technical resource for other research staff.

Stanford University provides pay ranges representing its good faith estimate of what the University reasonably expects to pay for a position. The pay offered to a selected candidate will be determined based on factors such as (but not limited to) the scope and responsibilities of the position, the qualifications of the selected candidate, departmental budget availability, internal equity, geographic location, and external market pay for comparable jobs. The pay range for this position, working in the California Bay Area, is $69,000 to $85,000.

*- Other duties may also be assigned

DESIRED QUALIFICATIONS:

  • Doctoral degree in public health or a related social science discipline.
  • Experience in conducting health services and health policy research for maternal, child, and adolescent populations and health systems in US and/or global settings.
  • Familiarity with health care delivery and clinical and community preventive services.
  • Ability to work with diverse populations, including those with limited English proficiency or low literacy levels.

EDUCATION & EXPERIENCE (REQUIRED):

Bachelor of Arts degree in an applicable social science-related field and two years of applicable experience, or a combination of education and experience in an applicable social science.

KNOWLEDGE, SKILLS AND ABILITIES (REQUIRED):

  • Comprehensive understanding of scientific theory and methods.
  • General computer skills and ability to quickly learn and master computer programs.
  • Strong analytical skills and excellent judgment.
  • Ability to work under deadlines with general guidance.
  • Excellent organizational skills and demonstrated ability to complete detailed work accurately.
  • Demonstrated oral and written communication skills.
  • Ability to work with human study participants.
  • Developing supervisory skills.

CERTIFICATIONS & LICENSES: None

PHYSICAL REQUIREMENTS*:

  • Frequently perform desk-based computer tasks, grasp lightly/fine manipulation, and lift/carry/push/pull objects that weigh up to 10 pounds.
  • Occasionally stand/walk, sit, use a telephone, write by hand, and sort/file paperwork or parts.
  • Rarely twist/bend/stoop/squat, kneel/crawl, reach/work above shoulders, or operate foot and/or hand controls.

*- Consistent with its obligations under the law, the University will provide reasonable accommodation to any employee with a disability who requires accommodation to perform the essential functions of his or her job.

WORKING CONDITIONS:

  • May be exposed to blood borne pathogens.
  • May be required to work non-standard, extended or weekend hours in support of research work.

WORK STANDARDS:

  • Interpersonal Skills: Demonstrates the ability to work well with Stanford colleagues and clients and with external organizations.
  • Promote Culture of Safety: Demonstrates commitment to personal responsibility and value for safety; communicates safety concerns; uses and promotes safe behaviors based on training and lessons learned.
  • Subject to and expected to comply with all applicable University policies and procedures, including but not limited to the personnel policies and other policies found in the University’s Administrative Guide, http://adminguide.stanford.edu.

All members of the Department of Pediatrics are engaged in continuous learning and improvement to foster a culture where diversity, equity, inclusion, and justice are central to all aspects of our work. The Department collectively and publicly commits to continuously promoting anti-racism and equity through its policies, programs, and practices at all levels.  

  • Schedule: Full-time
  • Job Code: 4187
  • Employee Status: Fixed-Term
  • Department URL: http://pediatrics.stanford.edu/
  • Requisition ID: 103324
  • Work Arrangement: Remote Eligible

Deployable Expedient Traffic Entry Regulator (DETER) Fact Sheet

The Science and Technology Directorate is evaluating the Deployable Expedient Traffic Entry Regulator (DETER) barrier in realistic environments to identify operational recommendations and hardware adjustments to optimize use for DHS components. DETER is an innovative and versatile vehicle barrier that provides fast protection in multiple scenarios, making it a valuable system to be used in multi-domain operations.
