Feb 13, 2023

200-500 Word Example Essays about Technology

Got an essay assignment about technology? Check out these examples to inspire you.

Technology is a rapidly evolving field that has completely changed the way we live, work, and interact with one another. Technology has profoundly impacted our daily lives, from how we communicate with friends and family to how we access information and complete tasks. As a result, it's no surprise that technology is a popular topic for students writing essays.

But writing a technology essay can be challenging, especially for students short on time or struggling with writer's block. This is where Jenni.ai comes in. Jenni.ai is an innovative AI tool designed specifically for students who need help writing essays. With Jenni.ai, students can quickly and easily generate essays on a variety of topics, including technology.

This blog post aims to provide readers with various example essays on technology, all generated by Jenni.ai. These essays will be a valuable resource for students looking for inspiration or guidance as they work on their essays. By reading through these example essays, students can better understand how technology can be approached and discussed in an essay.

Moreover, by signing up for a free trial with Jenni.ai, students can take advantage of this innovative tool and receive even more support as they work on their essays. Jenni.ai is designed to help students write essays faster and more efficiently, so they can focus on what truly matters – learning and growing as a student. Whether you're a student who is struggling with writer's block or simply looking for a convenient way to generate essays on a wide range of topics, Jenni.ai is the perfect solution.

The Impact of Technology on Society and Culture

Introduction:

Technology has become an integral part of our daily lives and has dramatically impacted how we interact, communicate, and carry out various activities. Technological advancements have brought positive and negative changes to society and culture. In this article, we will explore the impact of technology on society and culture and how it has influenced different aspects of our lives.

Positive impact on communication:

Technology has dramatically improved communication and made it easier for people to connect from anywhere in the world. Social media platforms, instant messaging, and video conferencing have brought people closer, bridging geographical distances and cultural differences. This has made it easier for people to share information, exchange ideas, and collaborate on projects.

Positive impact on education:

Technology's effect on education has given students and instructors access to a wealth of knowledge and resources. Online learning platforms, educational applications, and digital textbooks now allow students to study at their own pace and from any location.

Negative impact on critical thinking and creativity:

Technological advancements have contributed to a decline in critical thinking and creativity. With so much information at our fingertips, individuals have become more passive learners, turning to the internet for ready-made answers rather than reasoning through problems themselves. As a result, independent thinking and problem-solving abilities have suffered.

Positive impact on entertainment:

Technology has transformed how we access and consume entertainment. Thanks to streaming services, gaming platforms, and online content creators, people can now enjoy a wide range of entertainment options from the comfort of their own homes. As a result, the entertainment industry has entered a new era of creativity and invention.

Negative impact on attention span:

However, the constant bombardment of information and technological stimulation has also reduced attention spans and the capacity to focus. People are easily distracted and struggle to concentrate on a single activity for an extended period. This has hampered productivity and the ability to complete tasks.

The Ethics of Artificial Intelligence And Machine Learning

The development of artificial intelligence (AI) and machine learning (ML) technologies has been one of the most significant technological developments of the past several decades. These cutting-edge technologies have the potential to alter several sectors of society, including commerce, industry, healthcare, and entertainment. 

As with any new and quickly advancing technology, the ethics of AI and ML must be carefully studied. The use of these technologies raises significant concerns around privacy, accountability, and control. As the use of AI and ML grows more ubiquitous, we must assess their possible influence on society and investigate the ethical issues that must be taken into account as these technologies continue to develop.

What are Artificial Intelligence and Machine Learning?

Artificial Intelligence is the simulation of human intelligence in machines designed to think and act like humans. Machine learning is a subfield of AI that enables computers to learn from data and improve their performance over time without being explicitly programmed.
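The distinction above can be made concrete with a minimal, self-contained sketch (pure Python, illustrative only): instead of hard-coding the rule relating inputs to outputs, the program estimates it from example data by ordinary least squares.

```python
# A minimal illustration of "learning from data": fitting a line y = w*x + b
# by ordinary least squares, without hard-coding the relationship.

def fit_line(xs, ys):
    """Return slope w and intercept b minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares solution
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

# Training examples generated by the hidden rule y = 2x + 1;
# the program recovers the rule from the examples alone.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit_line(xs, ys)
print(w, b)  # learned parameters
```

This is the simplest case, but the principle is the same one that scales up to modern machine learning: the behavior of the system comes from the data, not from explicit rules written by a programmer.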

The impact of AI and ML on Society

The use of AI and ML in various industries, such as healthcare, finance, and retail, has brought many benefits. For example, AI-powered medical diagnosis systems can identify diseases faster and more accurately than human doctors. However, there are also concerns about job displacement and the potential for AI to perpetuate societal biases.

The Ethical Considerations of AI and ML

A. Bias in AI algorithms

One of the critical ethical concerns about AI and ML is the potential for algorithms to perpetuate existing biases. This can occur if the data used to train these algorithms reflects the biases of the people and processes that produced it. As a result, AI systems can reproduce those biases and discriminate against certain groups of people.
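A toy sketch (hypothetical data, illustrative only) makes the mechanism visible: a model "trained" on skewed historical decisions simply reproduces the skew.

```python
from collections import defaultdict

# Toy historical hiring records: (group, hired). The data is skewed
# by past human decisions, not by any difference in ability.
records = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

def majority_model(data):
    """'Train' by memorizing the majority outcome per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [negatives, positives]
    for group, label in data:
        counts[group][label] += 1
    return {g: int(c[1] > c[0]) for g, c in counts.items()}

model = majority_model(records)
print(model)  # the learned rule reproduces the historical skew per group
```

Real systems are far more sophisticated, but the failure mode is the same: an algorithm optimizing to match biased examples will encode that bias as its decision rule.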

B. Responsibility for AI-generated decisions

Another ethical concern is the responsibility for decisions made by AI systems. For example, who is responsible for the damage if a self-driving car causes an accident? The manufacturer of the vehicle, the software developer, or the AI algorithm itself?

C. The potential for misuse of AI and ML

AI and ML can also be used for malicious purposes, such as cyberattacks and misinformation. The lack of regulation and oversight in the development and use of these technologies makes misuse difficult to prevent.

The developments in AI and ML have given numerous benefits to humanity, but they also present significant ethical concerns that must be addressed. We must assess the repercussions of new technologies on society, implement methods to limit the associated dangers, and guarantee that they are utilized for the greater good. As AI and ML continue to play an ever-increasing role in our daily lives, we must engage in an open and frank discussion regarding their ethics.

The Future of Work And Automation

Rapid technological breakthroughs in recent years have brought about considerable changes in our way of life and work. Concerns regarding the influence of artificial intelligence and machine learning on the future of work and employment have increased alongside the development of these technologies. This article will examine the possible advantages and disadvantages of automation and its influence on the labor market, employees, and the economy.

The Advantages of Automation

Automation in the workplace offers various benefits, including higher efficiency and productivity, fewer mistakes, and enhanced precision. Automated processes can accomplish repetitive jobs quickly and precisely, allowing employees to concentrate on more complex and creative activities. Additionally, automation can save organizations money by reducing labor costs and minimizing the risk of workplace accidents.

The Potential Disadvantages of Automation

However, automation has significant disadvantages, including job loss and income stagnation. As robots and computers replace human labor in certain industries, many workers may lose their jobs, resulting in higher unemployment and greater economic disparity. Moreover, if automation is not adequately regulated and managed, it might lead to stagnant wages and a decline in employees' standard of living.

The Future of Work and Automation

Despite these difficulties, automation will likely influence how labor is done. As a result, firms, employees, and governments must take early measures to solve possible issues and reap the rewards of automation. This might entail funding worker retraining programs, enhancing education and skill development, and implementing regulations that support equality and justice at work.

The Need for Ethical Considerations

We must consider the ethical ramifications of automation and its effects on society as technology develops. The impact on employees and their rights, possible hazards to privacy and security, and the duty of corporations and governments to ensure that automation is utilized responsibly and ethically are all factors to be taken into account.

Conclusion:

To summarize, the future of work and automation will most certainly be defined by a complex interaction of technological advances, economic trends, and cultural values. All stakeholders must work together to address the problems and opportunities presented by automation and ensure that technology is employed to benefit society as a whole.

The Role of Technology in Education

Introduction:

Nearly every part of our lives has been transformed by technology, and education is no different. Today's students have greater access to knowledge, opportunities, and resources than ever before, and technology is becoming a more significant part of their educational experience. Technology is transforming how we think about education and creating new opportunities for learners of all ages, from online courses and virtual classrooms to instructional applications and augmented reality.

Technology's Benefits for Education

The capacity to tailor learning is one of technology's most significant benefits in education. Students may customize their education to meet their unique needs and interests since they can access online information and tools. 

For instance, students can enroll in online classes on topics they are interested in, get tailored feedback on their work, and engage in virtual discussions with peers and subject-matter experts worldwide. As a result, they are better able to develop the skills and knowledge necessary for success.

Challenges and Concerns

Despite the numerous advantages of technology in education, there are also challenges to consider. One issue is the growing reliance on technology and the possibility that students will become overly dependent on it. This might result in a lack of critical thinking and problem-solving abilities, as students may become passive learners who only follow instructions and rely on technology to complete their assignments.

Another obstacle is the digital divide between those who have access to technology and those who do not. This divide can exacerbate the achievement gap between students and create unequal opportunities for educational and professional growth. To reduce these consequences, all students must have access to the technology and resources necessary for success.

In conclusion, technology is rapidly becoming an integral part of the classroom experience and has the potential to alter the way we learn radically. 

Technology can help students flourish and realize their full potential by giving them access to individualized instruction, tools, and opportunities. While the benefits of technology in the classroom are undeniable, it's crucial to be mindful of the risks and take precautions to ensure that all students have access to the tools they need to thrive.

The Influence of Technology On Personal Relationships And Communication 

Technological advancements have profoundly altered how individuals connect and exchange information. It has changed the world in many ways in only a few decades. Because of the rise of the internet and various social media sites, maintaining relationships with people from all walks of life is now simpler than ever. 

However, concerns about how these developments may affect interpersonal connections and dialogue are inevitable in an era of rapid technological growth. In this piece, we'll discuss how the prevalence of digital media has altered our interpersonal connections and the language we use to express ourselves.

Negative Impact on Face-to-Face Interaction:

The disruption of face-to-face communication is a particularly stark example of how technology has affected human connections. The quality of interpersonal relationships has suffered as people increasingly prefer digital over in-person communication. Technology has been shown to reduce the use of nonverbal cues such as facial expressions, tone of voice, and other indicators of emotional investment in a relationship.

Positive Impact on Long-Distance Relationships:

Yet there are positives to be found as well. Long-distance relationships have also benefited from technological advancements. The development of technologies such as video conferencing, instant messaging, and social media has made it possible for individuals to keep in touch with distant loved ones. It has become simpler for individuals to stay in touch and feel connected despite geographical distance.

The Effects of Social Media on Personal Connections:

The widespread use of social media has had far-reaching consequences, especially on the quality of interpersonal interactions. Because it allows people to keep in touch and share life's milestones, social media has had both positive and harmful effects on relationships.

Unfortunately, social media has made it all too easy to compare oneself to others, which may lead to emotions of jealousy and a general decline in confidence. Furthermore, social media might cause people to have inflated expectations of themselves and their relationships.

A Personal Perspective on the Intersection of Technology and Romance

Technological advancements have also altered physical touch and closeness. Virtual reality and other technologies have allowed people to feel physical contact and familiarity in a digital setting. This might be a promising breakthrough, but it has some potential downsides. 

Experts are concerned that people's growing dependence on technology for intimacy may lead to less time spent communicating face-to-face and less emphasis on physical contact, both of which are important for maintaining good relationships.

In conclusion, technological advancements have significantly affected the quality of interpersonal connections and the exchange of information. Even though technology has made it simpler to maintain personal relationships, it has chilled interpersonal interactions between people. 

As we move forward, it is essential to keep track of how technology is changing our lives and make adjustments as necessary. Setting boundaries and prioritizing in-person conversation and physical touch in close relationships may help reduce the harm technology can cause.

The Security and Privacy Implications of Increased Technology Use and Data Collection

The fast development of technology over the past few decades has made its way into every aspect of our lives. Technology has improved many facets of our lives, from communication to commerce. However, significant privacy and security problems have emerged with its broad adoption. In this essay, we'll look at how the widespread use of technological solutions and the resulting explosion in collected data affect our right to privacy and security.

This essay covers the following topics:

  • Data mining and privacy concerns
  • The risk of cyber attacks and data loss
  • The widespread use of encryption and other safety mechanisms
  • The future of privacy and security in a globalized information age

Obtaining and Using Individual Information

The acquisition and use of private information is a significant cause for privacy alarm in the digital age. Data about their customers' online habits, interests, and personal information is a valuable commodity for many internet firms. Besides tailored advertising, this information may be used for other, less desirable things like identity theft or cyber assaults.

Moreover, because of the lack of transparency around the collection of personal information, many individuals are unaware of what data is being gathered from them or how it is being used. Privacy and data security have become increasingly contentious as a result.

The Risk of Cyber Attacks and Data Breaches

The risk of cyberattacks and data breaches is another major concern. More people are using more devices, which means more opportunities for cybercriminals to steal private information like credit card numbers and other identifying data. This can cause monetary losses and harm one's reputation or identity.

Many high-profile data breaches have occurred in recent years, exposing the personal information of millions of individuals and raising serious concerns about the safety of this information. Companies and governments have responded to this problem by adopting new security methods like encryption and multi-factor authentication.

Many businesses now use encryption and other security measures to protect themselves from cybercriminals and data thieves. Encryption keeps sensitive information hidden by encoding it so that only those possessing the corresponding key can decipher it. This prevents private information like bank account numbers or social security numbers from falling into the wrong hands.
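As a toy illustration of that idea (not real cryptography; production systems use vetted algorithms such as AES), a symmetric XOR cipher shows how one shared secret key both scrambles the data and recovers it.

```python
# Illustrative only: a toy symmetric cipher using XOR. It demonstrates the
# principle that the same secret key encrypts and decrypts, but it is NOT
# secure and must never be used to protect real data.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the repeating key; applying it twice restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"account: 1234-5678"
key = b"k3y"                             # hypothetical shared secret
ciphertext = xor_cipher(secret, key)     # unreadable without the key
recovered = xor_cipher(ciphertext, key)  # the same operation decrypts
print(recovered == secret)  # True
```

Only someone holding `key` can turn the ciphertext back into the original bytes, which is exactly the property the paragraph above describes; real encryption achieves it with mathematically vetted algorithms rather than a simple XOR.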

Firewalls, virus scanners, and two-factor authentication are additional security precautions that may be used alongside encryption. While these safeguards do much to stave off cyberattacks, they are not entirely foolproof, and data breaches are still possible.

The Future of Privacy and Security in a Technologically Advanced World

There's little doubt that concerns about privacy and security will persist even as technology improves. Strict safeguards must be in place to secure people's private information as more and more of it is transferred and stored digitally. Achieving this may require novel technologies, heightened levels of protection, and revised rules and regulations governing the collection and storage of private information.

Individuals and businesses are understandably concerned about the security and privacy consequences of widespread technology use and data collection. There are numerous obstacles to overcome in a society where technology plays an increasingly important role, from the acquisition and use of personal data to the risk of cyberattacks and data breaches. If personal data is to remain safe, companies and governments must continue investing in security measures and working to educate people about the importance of privacy and security.

In conclusion, technology has profoundly impacted virtually every aspect of our lives, including society and culture, ethics, work, education, personal relationships, and security and privacy. The rise of artificial intelligence and machine learning has presented new ethical considerations, while automation is transforming the future of work. 

In education, technology has revolutionized the way we learn and access information. At the same time, our dependence on technology has brought new challenges in terms of personal relationships, communication, security, and privacy.

Jenni.ai is an AI tool that can help students write essays easily and quickly. Whether you're looking for example essays on any of these topics or seeking assistance in writing your own, Jenni.ai offers a convenient solution. Sign up for a free trial today and experience the benefits of AI-powered writing assistance for yourself.

Try Jenni for free today

Create your first piece of content with Jenni today and never look back


Technology innovation and sustainability: challenges and research needs

  • Published: 12 July 2021
  • Volume 23 , pages 1663–1664, ( 2021 )

Yinlun Huang

Global environmental problems, such as depletion of natural resources, various types of environmental pollution and health risks, climate change, and loss of biodiversity, have become increasingly evident. Societies are more aware of the challenges than ever, and understand more deeply that pursuing sustainability is essential to environmental protection, economic growth, and social stability. Among solution approaches, technology innovation is key, as it can influence prosperity, consumption patterns, lifestyles, social relations, and cultural development. Technology determines, to a great extent, the demand for raw materials and energy, the ways and efficiency of manufacturing, product performance, waste reduction and waste handling, health and safety, transportation and infrastructure, etc., thereby making significant impacts on the economic, environmental, and social dimensions of industrial development. It is more widely recognized that sustainability is a key driver of innovation, and only those companies that make sustainability a goal will achieve competitive advantage (Nidumolu et al. 2009; Kiron et al. 2012).

In the U.S., a new wave of technology innovations has arisen, largely due to the national endeavor to advance manufacturing in thrust areas of national importance. The accelerated innovations entail rapid transfer of new technologies into the design and manufacturing of high-performance products and services. Although new and emerging technologies have become an engine of change and progress, the net improvement brought to the environment and society could be questionable if sustainability principles are not fully incorporated into the technology development and application phases. For instance, although the introduction of nanomaterials has created new opportunities for high-performance applications and novel product introduction, there exist various concerns about negative impacts on health and the environment. Biofuel, as another example, can be produced directly from renewable sources, but its global emergence has led to debate over its environmental impact, including global warming, due to growing vegetation used for biofuel manufacturing. All these demand thorough examination of economic, environmental, and social aspects. Industries are more seriously conducting comprehensive sustainability assessment, and demand more sustainable technologies (Dornfeld 2014).

The essential component of industrial sustainability is three-pillar-based balanced development. This requires that technology innovations be shaped to incorporate sustainability principles fully throughout their development and application phases. It is imperative, therefore, to conduct a fundamental study on the sustainability dimensions of technology innovation, and develop systematic methodologies and effective tools for technology inventors, decision makers, and organizations to evaluate and maximize potential sustainability benefits of new and emerging technologies. In this endeavor, sustainability assessment of technology innovation, especially in its early development stage, is critical.

Technology assessment (TA) emerged in the 1970s as a research-based policy-advising activity. It constitutes a scientific and societal response to problems at the interface between technology and society. In the last decade or so, early engagement in TA occurred mainly in new and emerging products using, for example, nanotechnology and biotechnology (Grunwald 2009). Today, TA is considered a designation of approaches and methods for investigating the conditions for and consequences of technologies, and for denoting their social evaluation. It is an interactive and communicative process that aims to contribute to the formation of public and political opinion on societal aspects of science and technology. A number of important concepts exist at the uppermost level of TA operationalization, such as participative technology assessment (evaluations participated in by scientific experts, societal groups, and political decision makers), constructive technology assessment (constructive involvement in the technology development process, aiming to analyze its enculturation by society), leitbild assessment (explanation of the course of technology development ex post rather than by giving indications on how to shape technology), and innovation-orientated technology assessment (analysis of completed and current innovation processes with primary interest in factors that are crucial to successful market penetration). The known methods for conducting TA are basically all derived from participants' views, discussions, and group consensus, and applicable to the TA of an individual technology rather than a group of them as a whole. However, there is a lack of a scientific framework for systematic, integrated assessment of technology innovation in different life cycle stages. More critically, there have been no systematic methods for TA in the triple-bottom-line-based sustainability space; this could leave the whole spectrum of sustainability performance of technology innovations unclear.

Sustainability assessment (SA) is a very complex appraisal method. It entails not only multidimensional aspects that may be intertwined, but also cultural and value-based elements. There exist numerous types of sustainability indicators for a variety of systems and applications in different fields, and methods for indicator formulation, scaling, normalization, weighting, and aggregation (Singh et al. 2012). Studies on assessment information aggregation have led to the creation of composite sustainability performance indices. Sikdar et al. (2012) stated that it is deemed desirable to consolidate all the usable indicators into one aggregate metric to make performance comparison easier. A main challenge in SA of technology innovation is how to conduct multiple-life-cycle-stage-based assessment and to compare sustainability performance under different scenarios, especially when the available system information is uncertain, incomplete, and imprecise. In almost every phase of sustainability study, data and information uncertainty issues exist. Examples include the data about material or energy utilization, toxic/hazardous waste generation, and market fluctuation; the multifaceted makeup of inter-entity dynamics, dependencies, and relationships; the prospect of forthcoming environmental policies; and the interrelationship among the triple-bottom-line aspects of sustainability, weighting methods, weights' values, and aggregation methods. In technology innovation, uncertainty can be more severe, as many types of data and information are frequently unavailable and uncertain, and the relevant information from the literature or other sources may not be easily justifiable.
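The composite-index approach described above can be sketched as a weighted sum of normalized indicators. This is a minimal illustration with hypothetical indicator names, scores, and weights, not a method taken from this editorial.

```python
# A minimal sketch of aggregating triple-bottom-line indicators, already
# normalized to [0, 1], into one composite sustainability index.
# Indicator names, scores, and weights below are hypothetical.

def composite_index(indicators, weights):
    """Weighted sum of normalized indicators; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(indicators[k] * weights[k] for k in weights)

# Hypothetical assessment of one technology option
indicators = {"economic": 0.8, "environmental": 0.6, "social": 0.7}
weights = {"economic": 0.4, "environmental": 0.4, "social": 0.2}
print(composite_index(indicators, weights))
```

Even this toy version surfaces the challenges the text names: the result depends entirely on how indicators are normalized and how the weights are chosen, and it gives no account of the uncertainty in either.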

Apparently, an urgent research need is to develop science-driven frameworks for conducting systematic sustainability assessment of emerging technologies in their early development stage and recommending technologies sets after performing multistage sustainability impact evaluation (Huang 2020 ). Such frameworks should be composed of coherent sets of new concepts, propositions, assumptions, principles, and methodologies, as well as tools that could assist researchers, decision makers, and organizations in shaping technology innovations for industrial sustainability. This is certainly a very challenging task, especially when the world experiences major disruptions, such as COVID-19. However, the motivations for achieving industrial sustainable development goals should lead to the development of a new wave of highly sustainable technology innovations in the years to come.

Dornfeld DA (2014) Moving towards green and sustainable manufacturing. Int J Precis Eng Manuf Green Technol 1(1):63–66


Grunwald A (2009) Technology assessment: concepts and methods. In: Meijers A (ed) Handbook of the philosophy of science, vol 9: philosophy of technology and engineering sciences. Elsevier

Huang Y (2020) Reinforcing sustainability assessment and reshaping technology innovation for highly sustainable manufacturing in the post-COVID-19 era. Smart Sustain Manuf Syst 4(3):341–345

Kiron D, Kruschwitz N, Reeves M, Goh E (2012) The benefits of sustainability-driven innovation. MIT Sloan Manag Rev 54:69


Nidumolu R, Prahalad CK, Rangaswami MR (2009) Why sustainability is now the key driver of innovation. Harvard Bus Rev 87:56–64

Sikdar SK, Sengupta D, Harten P (2012) More on aggregating multiple indicators into a single index for sustainability analyses. Clean Technol Environ Policy 14(5):765–773

Singh RK, Murty HR, Gupta SK, Dikshit AK (2012) An overview of sustainability assessment methodologies. Ecol Ind 15(1):281–299


Acknowledgements

This work is supported in part by U.S. National Science Foundation (Award No. 2031385 and 1604756).

Author information

Department of Chemical Engineering and Materials Science, Wayne State University, Detroit, MI 48098, USA

Correspondence to Yinlun Huang.


Yinlun Huang—Associate editor.


Huang, Y. Technology innovation and sustainability: challenges and research needs. Clean Techn Environ Policy 23 , 1663–1664 (2021). https://doi.org/10.1007/s10098-021-02152-6



HBR On Strategy podcast series

Disruptive Innovation in the Era of Big Tech

How does the landmark theory apply to tech start-ups, three decades after its introduction?


In 1995, the late and legendary Harvard Business School professor Clayton Christensen introduced his theory of “disruptive innovation” right here in the pages of the Harvard Business Review. The idea inspired a generation of entrepreneurs and businesses, ranging from small start-ups to global corporations.

Three decades later, debates have emerged around how the theory should be applied — especially within technology start-ups that have driven so much economic growth since 2000.

In this episode, Harvard Business Review editor Amy Bernstein and a panel of expert scholars discuss the legacy of disruptive innovation, and how the common perception of disruption has drifted away from its original meaning.

Expert guests include:

  • Harvard Business School senior lecturer and director of the Forum for Growth and Innovation Derek van Bever
  • Columbia Business School professor Rita McGrath
  • Harvard Business School professor Felix Oberholzer-Gee

Key episode topics include: strategy, competitive strategy, business history, disruptive innovation, Clay Christensen, innovator’s dilemma.

HBR On Strategy curates the best case studies and conversations with the world’s top business and management experts, to help you unlock new ways of doing business. New episodes every week.

  • Listen to the full HBR IdeaCast episode: 4 Business Ideas That Changed the World: Disruptive Innovation (2022)
  • Find more episodes of HBR IdeaCast
  • Discover 100 years of Harvard Business Review articles, case studies, podcasts, and more at HBR.org.

HANNAH BATES: Welcome to HBR On Strategy, case studies and conversations with the world’s top business and management experts – hand-selected to help you unlock new ways of doing business.

In 1995, the late and legendary Harvard Business School professor Clayton Christensen introduced his theory of disruptive innovation right here in the pages of Harvard Business Review. The idea inspired a generation of entrepreneurs and businesses, ranging from small start-ups to global corporations.

Almost three decades later, debates have emerged around how the theory should be applied in the real world, especially within the tech start-ups that have driven so much economic growth.

In this episode, Harvard Business Review editor Amy Bernstein and a panel of expert scholars discuss the legacy of disruptive innovation, including what Christensen got wrong about it, and how the common perception of “disruption” has drifted away from Christensen’s initial idea in recent decades.

This episode will give you a new perspective on what makes a strategy succeed in the long term. It originally aired on HBR IdeaCast in October 2022 as part of a special series called 4 Business Ideas That Changed the World. Here it is.

AMY BERNSTEIN: Welcome to 4 Business Ideas That Changed the World, a special series of the HBR IdeaCast. In the 1980s, Clayton Christensen was in his 30s, the business guy at a startup. The company was making ceramics out of advanced materials, and it was able to take over the market niche from DuPont and Alcoa. That experience left Christensen puzzled. How could a small company with few resources beat rich incumbents? The question led to his theory of disruptive innovation, introduced in the pages of Harvard Business Review in 1995, and popularized two years later in The Innovator’s Dilemma.

The idea has inspired a generation of entrepreneurs. It’s reshaped R&D strategies at countless established firms, seeking to disrupt themselves before somebody else does. It’s changed how investors place billions of dollars and how governments spend billions more, aiming to kickstart new industries and spark economic growth. But the idea has taken on a meaning well beyond what Christensen actually described. Think about how easily we use the word disruption to explain any sort of innovation, business success, or industry shakeup.

It’s also drawn fire. Some critics argue the theory lacks evidence. Others say it glosses over the social costs of bankrupted companies, and debate continues over the best way to put the idea to work. On this special series, we’re exploring 4 business ideas that changed the world. Each week, we talk to scholars and experts on the most influential ideas of HBR’s first 100 years. This week: disruptive innovation. With me to discuss it are Derek van Bever, senior lecturer and director of the Forum for Growth and Innovation at Harvard Business School, Rita McGrath, professor at Columbia Business School, and Felix Oberholzer-Gee, professor at Harvard Business School. I’m Amy Bernstein, editor of Harvard Business Review and your host for this episode. Let’s set some context. Rita, what was our understanding of innovation before Clay gave us disruptive innovation?

RITA MCGRATH: Yeah. I think our common understanding of it was something that came out of R&D groups. It was like big product, big materials, big physical things, innovation. The classic would be like DuPont nylon. They invented this thing, that meant women didn’t have to spend hundreds of thousands of dollars collectively on silk stockings, and they had nylon riots. Literally, people were charging at these trucks with this revolutionary substance.

I think that’s how a lot of people still thought about innovation, is something that was very tech-heavy in the sense of not digital, but just technology that was coming out of R&D labs and so forth. That was one pervasive thought. I think the next pervasive thought was that innovations that were successful added something. They were new and improved, and so you built a better mouse trap. You built a better nylon stocking, you made Kevlar and things became impermeable, and that it was always at the top of the market.

I think that was one of the things that Clay’s work revealed, which was that innovation did not have to be new and improved or better on the existing dimension of merit, but that it could actually be worse on whatever it was we used to judge products by. But it did something else that was different.

AMY BERNSTEIN: You mentioned technology. Was technology always a necessary component of innovation as understood then?

RITA MCGRATH: I think in our theory of innovation it was. The idea of business model innovation, to me, did not become a common topic of conversation really until the ’90s. Prior to that, it was really product-centric innovation, I would say. Peter Drucker and people like that talked a little bit about things like the advent of the knowledge worker and what the network society was going to mean, and that kind of thing, but that was really early days.

AMY BERNSTEIN: Felix, so help us understand Clay and what shaped his thinking. He was a co-founder of a technology company when he started to consider disruptive innovation. What shaped his thinking?

FELIX OBERHOLZER-GEE: We know Clay as a faculty member at Harvard Business School, of course, first and foremost. But actually, by the time he arrived and became a faculty member, he had done many different things already. He was a missionary in Korea, he studied in the US and in the UK. He had earned an MBA from HBS. Then in the 1980s, together with faculty members at MIT, he had started a company called Ceramics Process Systems. The one experience that he had as CEO of the company, was quite dramatic and in part informed his thinking about disruptive innovation.

The basic technology that they had came out of an MIT lab, and it was exactly what Rita had alluded to. It was this idea: is there a way to make what we have today better? To improve on the quality? In their case, they made ceramic substrates that could be used in microelectronics. This is a very, very thin layer of ceramic that has excellent properties when it comes to conducting heat and power. They had better ideas about how to make that. The challenge was that the technology was not so easy to scale up.

They were about 14 months or so later than they had anticipated. By that time, a competitor had essentially duplicated it or had a product that was very similar, and the price premium that they expected to earn had vanished. In retrospect, I think looking back at this particular type of innovation, Clay later found in his dissertation that if you go directly against established incumbents, your chances of being successful are not all that great. He would say, “Well, maybe 5%, 6% of these attempts are successful, but mostly you shouldn’t really get your hopes up.”

AMY BERNSTEIN: Derek, let me ask you about this idea that Felix just described. Had anyone ever noticed this before? Was it all that novel?

DEREK VAN BEVER: It was really remarkably creative, what he did. The question that consumed him was why is it that sometimes a tiny, little upstart can unseat a powerful, industry-leading incumbent? It was the sometimes that really intrigued him. He was looking for the causal driver, not merely correlation, but what was it that caused this phenomenon? There were lots of descriptive explanations that had been advanced in the past. One was that industry leaders would become self-satisfied and complacent, and not see the attacker coming.

Another was that if you got attacked on too many fronts at once, Xerox versus Canon, you couldn’t respond adequately. What bothered Clay was that while these explanations were often true enough, there were also a lot of anomalies, instances where they didn’t hold. Clay used these anomalies as learning opportunities, rather than exceptions. What he realized was if you can approach an incumbent in a way that causes them to ignore you or to flee upmarket, you have the thing you need the most, which is time to build a foundation underneath your business.

Then finally, he gave names to phenomena that were familiar, particularly to businesspeople. The trajectory of innovation that is far and away the most common, he called sustaining innovation. Any company that wants to be in business for any length of time had better be really good at that. The trajectory underneath the existing incumbents, he called disruptive innovation. That’s what’s hard for incumbents to see, because it typically presents as products that aren’t as good, that aren’t interesting to their best customers, and therefore are not something that they can allocate resources toward.

FELIX OBERHOLZER-GEE: Or maybe, if I can add a little twist to it. One of the things that I find most fascinating about the theory of disruption is that it describes the reasons why the incumbent is unlikely to respond. For instance, because you have amazing margins with your best customers, the incentives to serve a segment that doesn’t look very profitable to begin with are just really muted.

Or you might have firm internal processes that make it really difficult to serve a new segment with much different demands in a way that seems both effective and eventually profitable. Even once you know about disruption, in part, it’s such a powerful idea because it speaks to the tendency not to respond. Even though from the outside it looks like you have all the resources, you have all the talent, you have everything that it would take to be responsive.

DEREK VAN BEVER: Felix, you’re reminding me, our colleague, Chet Huber, came into my office one day after I had been teaching in the course for a couple of years. He sat down in front of my desk and he said, “You do realize that this is a psychology course, right?” And boy, was that true.

AMY BERNSTEIN: Rita, Clay brought this idea to a much broader audience through HBR and through his book, The Innovator’s Dilemma. Tell us how that was received.

RITA MCGRATH: Well, I think before we get to Innovator’s Dilemma, let’s talk about “Disruptive Technologies: Catching the Wave,” because that was the HBR article that preceded it. Everybody’s forgotten this now, but he co-wrote that with Joe Bower, Harvard’s own Joe Bower, who had written a whole series of books and articles, and research drafts on how fundamental the resource allocation process is to corporate decision-making of all kinds.

The original idea was to build on what Derek was saying, companies allocate resources according to a logic, and that logic is sometimes not necessarily in their own best interest. When the book came out, The Innovator’s Dilemma, that was in 1997. This is another thing we’ve all forgotten, which is it did not become a runaway best-seller right away. It took a couple of years.

And if memory serves me, it was a picture of Clay with Andy Grove of Intel on the front cover of a business magazine. I think it was Forbes. The two of them are on the front cover, and Grove basically saying, “I am changing the entire direction of my company because of Christensen’s theory.” That’s when it hit the masses.

AMY BERNSTEIN: That’s exactly when I remember becoming familiar with it for the first time. I’d forgotten that. Thank you for that. Felix, why do you think the idea struck a chord? Why did the book finally take off, the idea finally take off? What was happening at that time?

FELIX OBERHOLZER-GEE: When we think about the late 1990s today, of course, what we think of most commonly is the dot-com bust, when the bubble burst. But of course, before the bubble burst, there was a dot-com boom. There was a deep sense that technology would change things in really radical fashion. It’s not a coincidence that Andy Grove and companies like Intel were under the impression that the future could look radically different from the way the past had looked. That past success didn’t really guarantee much when it came to predicting future success.

Part of that, I think, is interlinked with the way the new technologies created network effects. The idea that as my technology scales, as I get lots of customers, as I get broad adoption, the value of the technology increases correspondingly. The personal computer, the early beginnings of the internet: everything spoke to the idea that technology, and network effects in particular, would become dominant features of the business landscape. Now, one thing that is true: if you operate in environments with very strong network effects, on the one hand, they’re a real formidable barrier to entry.

But just as they fuel growth and can make you very successful in a short period of time, if successfully challenged you can then also lose everything in a very short period of time. Andy Grove’s famous management mantra that instructed everyone to be really paranoid had in part to do with how technology changed and how technology gave rise to business network effects that created stability and instability at one and the same time. That was obviously fertile ground for a thinker who came along and said, “Well, it looks like you’re doing really well today, but actually your success today may hide, in some sense, the undoing of your business in the future.”

AMY BERNSTEIN: Derek, was that paranoia that Andy Grove was pushing? Is that what made the idea so relevant to businesspeople or what was it that made it resonate?

DEREK VAN BEVER: Well, first, unlike many academics, Clay was himself a businessperson earlier in his career. He instinctively understood the relevance of his work to business leaders. He understood the angle at which a businessperson would approach a question. In fact, he was answering the question he had had when he left business to come to academia. He was also careful never to pretend that he knew more than his audience about their business.

In that famous encounter he had with Andy Grove, Andy Grove kept asking him, “What does disruption mean for Intel?” Clay said, “I’ll explain the theory of disruption to you, but you know your business better than I do. You’re the one who’s got to figure out what the implication is for Intel.” He famously said, “I would’ve been killed if I had tried to out-Andy-Grove Andy Grove on what the implication of disruption was for Intel’s strategy.”

AMY BERNSTEIN: Rita, who was the first to embrace it? We know about Andy Grove, of course, but what industries, where did the uptake happen?

RITA MCGRATH: I think the uptake happened in industries that were being challenged so automotive, for example. The advent of really inexpensive but super, high-quality, smaller cars in the ’70s and ’80s, had completely freaked that industry out. They glommed onto this theory as, “Oh, they were low-featured, they weren’t as good on the dimensions of merit that we’d previously competed on.” But the disruption theory gave the incumbent Big Three car makers an out.

I think those kinds of industries, steel, automotive, where they felt that there were these things happening at the low ends of the market. I think the other thing that made it popular at the time was, and we’ve forgotten this now, but there was a time in American business where entrepreneurship meant you couldn’t get a real job. It was not the glam, cool thing. The guy you wanted to be was the guy in the gray flannel suit.

I would say beginning in the Reagan Administration mid-‘80s, and then leading up to the dot-com boom, that was really when entrepreneurship, the whole idea of startups, started to be something people took seriously. Before that, if you weren’t Ford or 3M or something, people didn’t really think about you as a force for change in the economy. I think that moved towards entrepreneurship.

I would put it to the rise of companies like Microsoft, where briefly, Bill Gates was the most valuable man in the world. It legitimated that whole field. Then following closely on the heels of that was this idea of corporate entrepreneurship, which is we need to be able to create new businesses from within, and then we need to be doing this continuously. We can’t just have one great idea and live on it for decades, no more.

AMY BERNSTEIN: Did everyone embrace this theory when it finally took off? Or were there some who said, “No, that’s not making sense”? Were there critics?

RITA MCGRATH: Oh, there always are. Oh, there always are. There’s always people that say, “Are you kidding? I’m, insert name of company. Gillette in razor blades, or Pepsi or Coke or these big franchises.” There’s always people that say, “Don’t be ridiculous. There’s no way some little fly-on-the-wall company is going to be able to attack us in any meaningful way.” There was a whole chunk of people who just didn’t buy it. What I would say, and I want to build on what Derek was saying, and to some extent Felix, it gave managers an explanation. It gave them an out.

It said, “You’re not a bad manager, because you’re attending to your best customers and you’re trying to go upmarket, and you’re trying to increase your margins. You’re trying to do all these things that all the business textbooks at the time said was the right thing to do.” It doesn’t mean you’re a bad manager, but you can still find yourself in trouble. I think it was that combination of providing an explanation for a phenomenon that had not gotten a lot of attention up to that point. But also giving people an out saying, “Oh, I was hit by the innovator’s dilemma. Nobody could have seen that coming.” Right?

DEREK VAN BEVER: Right.

AMY BERNSTEIN: But did it explain anything else, Felix? Were there any puzzling business behaviors or phenomena that this theory helped explain, other than the one that Rita just described?

FELIX OBERHOLZER-GEE: I think what Rita described is really the core of what was appealing, and it often came across as a puzzle ex post. Once you see that Netflix has successfully disrupted Blockbuster, then the big question, of course, is, “Oh my God, if Netflix saw this opportunity, why didn’t Blockbuster, in the beginning, have a DVD shipping service? Why didn’t they see the promise of the internet?” In some sense, the most popular version of the theory is that often we couldn’t see it coming because no one knew that it would be so big.

There’s 15 ideas around the corner that go nowhere. How am I to pick the one that I should really pay attention to? That explanation is much more disquieting, I think, and hard to live with because it doesn’t really tell you what you can and what you cannot do. It replaced that with an explanation that said, “Yes. Of course, it’s bad luck someone else had a really promising idea, but your incentives were actually not to respond in the first place.” That’s exactly why disruption is something really powerful.

Because your systems are set up in a way, your incentives are set up in a way, that explains why, in the moment, the company that seems to have all the resources and all the capabilities to do what the disruptor does (typically, not at great quality) wouldn’t really do that successfully.

AMY BERNSTEIN: Derek, let’s get into the criticism that the theory has drawn. There have been a few critics. Jill Lepore, the Harvard historian, most notably, who said that there really wasn’t enough evidence to justify the theory. Well, first of all, what’s your view of that? You worked very closely with Clay. How did he respond to that criticism?

DEREK VAN BEVER: Anyone who knew Clay knows that he had a handmade sign in his office that said, “Anomalies Wanted.” And it’s true. One of the things that made him such a powerful thinker was that he was so humble and so open to criticism. It wasn’t as if you could spot something that the theory doesn’t cover and say the theory, therefore, is discredited. For Clay, that was a building block. Now, we get to dig in and make it better.

That disruption theory was still under construction absolutely fit Clay’s worldview. It wasn’t so much that businesspeople criticized the theory. I think the academy had a really hard time with it, in part for the reason that Felix is mentioning: people would say, “Sure, ex post, you can spot disruption, but can you spot it ex ante? Can you spot the areas where disruption prospectively is going to be operative?”

Work has been done on that, but that was very much out there. Then also, disruption is not built on a quantitative model, which is the coin of the realm today, of course, so it’s really hard to determine the boundary conditions. Anybody who’s done research on growth, you have to define what success and failure are, and there is no objective standard. You’ve got to figure out, “Okay, what’s the structure of the experiment?” And then run it.

I will always remember, I went to Clay once with what I thought was a really smart question. I said, “Clay, how can you tell when a disruptor becomes an incumbent?” He looked at me indulgently, and he said, “Derek, you do realize these are just constructs, right?” He had this revolutionary idea, but he also realized he’d given names to forces, and there was still so much to be discovered.

RITA MCGRATH: Yeah, and I’ll jump in on this. Very famously, he was wrong, by the way, about some of the top-of-the-line innovations. He very famously predicted that the iPhone would fail. One of the most profound critics of the theory of disruption is Safi Bahcall, who wrote a book called Loonshots. He’s a biotech CEO, he’s a trained physicist, da, da, da, da, da. In his work, what he’s looking at are these unloved, crazy ideas that some passionate person is pushing.

So something like mRNA vaccines, all kinds of discoveries. He called them loonshots because it wasn’t obvious that they were economically viable. But his argument would be that very often what turns into a disruptive technology is actually a bunch of people pursuing what they think is a sustaining technology. Through the twists and turns that discovery takes, it ends up actually being completely disruptive.

An example of that would be the invention of the microprocessor. The people that came up with that stuff were actually looking for better vacuum tubes. They thought they were doing sustaining innovation, and it turned out to take them in a completely different direction. I think there is a nuance to this, which is separating out the intent of the people making these discoveries from the actual market consequences.

AMY BERNSTEIN: Felix, any thoughts?

FELIX OBERHOLZER-GEE: I always liked Clay’s distinction in the article that he wrote for Harvard Business Review in 2015, where he explains why Uber is not a disruptor in his view. First, the theory is not really built to explain which of the disruptors is going to be successful. Even if you, ex post, see the patterns and say, “Oh my God, that’s amazing what they did, because they went in at the low end and they had a really great idea. Ultimately, they built an amazing business.”

There’s nothing in the theory that tells you, out of the hundreds of people that try to do this, who’s going to be successful and who’s not going to be successful. Then the second point that he makes in that article, which I’ve always found very important and which is often, I think, poorly understood among the critics, is that there is a sense of when it is going to happen fast and when it is going to take a long time. But ultimately, there’s very little in the theory that would describe end states.

That is if you see a company, a big, large incumbent that gets disrupted, can you say anything about the eventual size of that organization? Can you say anything about the return on investor capital of that company? The answer is, by and large, no. It might be that the segment that they hold onto, perhaps it’s a sliver at the very high end of quality, where you have customers with very high willingness to pay.

You can maintain perhaps a smaller but financially super, super successful business. The idea of being disrupted is not so much that the disrupted company has to, I don’t know, go bankrupt. Or that it’s only really disruption if it looks like Kodak.

AMY BERNSTEIN: Rita, what was it about the way that Clay communicated that helped spread his ideas?

RITA MCGRATH: That is such a good question because I have had so many conversations with my fellow innovation professors over the years, who would say things like, “I came up with the concept of, fill it in, ambidextrous innovation, the attacker’s advantage.” There’s a whole list of things, and they’re very miffed that, “Well, I came up with that and nobody paid any attention. Clay talks about it, and everybody thinks it’s the best thing since the miracle of bandwidth.” I think I’d point to three things, master storyteller, absolutely masterful storyteller.

When Clay illustrated a phenomenon, he used relatable examples. He used an interesting story, he used a twist, and people could see themselves in that story. Second thing he did, was he took ordinary things and made them really interesting. I’ll go back to one of his most famous parables ever, the parable of the milkshake. What’s the job a milkshake has to do for you? People would be listening to it going, “You know, you’re right. At lunchtime, I have a different job I need to be doing, than when I’m picking my kids up from school. Yes, I see that now.”

He had that way of making the ordinary seem really extraordinary. Then I think the third thing was he was genuinely interested in your response to what he had to say. Many professors, I won’t name names, but many professors are much more interested in you hearing what they have to say, than being interested in what you have to say. I think with Clay, it was always the other way around.

AMY BERNSTEIN: Coming up after the break, we’re going to explore how the common perception of disruption has drifted from its original meaning. What lessons are there for us today? Stay with us.

Welcome back to 4 Business Ideas That Changed the World: Disruptive Innovation. I’m Amy Bernstein. Felix, let’s pull the camera back a little bit. How has Clay Christensen’s theory of disruption changed the way we think about strategy and competition?

FELIX OBERHOLZER-GEE: Well, in a way, the idea is almost a victim of its own success; disruption is everywhere. In fact, the way most people use the word disruption these days has very little to do with Clayton’s idea. We come up with a new flavor of yogurt and people say, “Oh my God, the market for yogurt has been disrupted.” Despite that, I think it has done two things. The first is what Rita mentioned earlier: it’s given entrepreneurship a prominence.

It’s gone to a point now, when I tell my MBA students that most of the time, most innovation comes from large, established organizations, they look at me in complete disbelief. They actually don’t really think that large, incumbent organizations do anything that is all that innovative. It’s almost like the flip of what Rita described earlier, where we thought that, “Oh, if you’re an entrepreneur, you must be a loser.”

Now we’re giving, I think generally speaking, not enough credit to large companies and all the pretty amazing things that they do. One of the consequences of using disruption completely indiscriminately is that it’s now become synonymous with success. We look at Uber and they seem successful. Then we say, “Oh, the market for taxi services has been disrupted.” Success described in these very, very general terms I think is actually not very useful for setting strategy.

AMY BERNSTEIN: That’s interesting. If we now equate disruption with success, what about the other side of that, Rita? Can the theory of disruption be blamed for business failure? Can we say it’s brought down some companies, some firms?

RITA MCGRATH: I don’t know that the theory’s done that. It is possible to have badly managed firms in just about any circumstance. I think this builds on what Felix was saying. When the stories get told after the fact, we miss so much of what actually happened. What actually happened at Blockbuster was not the common mythology. The common mythology is that Netflix emerged out of scorched earth and took the world by storm with DVDs that you could mail in a red envelope. That is not true. Netflix, in desperation, went to Blockbuster to try to be acquired.

They wanted to be Blockbuster’s online arm, and Blockbuster laughed at them. Literally laughed at them and said, “Get out of my office. What are you people? You’re a four-person dingbat operation, and we’re supposed to take you seriously?” That’s one of those stories that gets misunderstood. Kodak’s another one. The guy that sank Kodak had been running the printing business at HP. He lost out on the CEO race at HP, and steered Kodak right over the cliff that was printing at home, just at the moment that screens became good enough to show pictures.

A lot of this stuff doesn’t really get remembered when we recall the stories. I don’t think the theory brings companies down. What I think brings companies down is the following: A failure to adequately balance today’s investments versus tomorrow’s. An unwillingness to make the financial and personnel commitments to little, new things. I see this all the time. You got your core business and it’s trundling along like an eight-lane highway. You got something with four people and a passionate advocate in charge of it, and it looks completely insignificant in the early stages.

When you think about why established companies get undone, it’s not because they didn’t make big, courageous moves, it’s because they didn’t allow the flourishing of lots of small, low-cost moves.

DEREK VAN BEVER: I completely agree with Rita. You can’t blame a theory for being explanatory. In fact, there has been research to try to validate the proposition that what disruption actually does through targeting non-consumption is to expand markets.

It may be that the providers of products and services change, revolve over time, but consumers benefit because there are more and more people who are available to consume products that are less expensive, more convenient, et cetera.

AMY BERNSTEIN: How has the theory evolved since it debuted, Felix?

FELIX OBERHOLZER-GEE: One of the really big additions was to distinguish between different types of disruption. We just talked earlier about the low-end entry, the low-end foothold that I think was very much on Clay’s mind when he first wrote about disruption, Toyota’s entry into the car market being one of the prominent examples. There wasn’t all that much in his ideas regarding competing against non-consumption. Do you want to be that lower-quality, lower-priced version of something that we’re familiar with, or are you really competing for a segment that is not in the market at all?

Those differences turn out to be super, super important. In that sense, the theory has become richer. I think there’s also a little more of a sense that it’s not really a recipe. It’s not as though, “Oh, I follow this particular recipe and then I know I’m going to be successful.” We just know that the chances of entrepreneurs being successful are pretty low to begin with. Just like the probability of being disrupted if you’re a large and successful business are probably not all that large.

DEREK VAN BEVER: Could I add one thing to that? I completely agree with Felix that if you go back to [The Innovator’s] Dilemma, Clay was really describing one flavor of disruption at that time. Not new market disruption. But also, I think over time, you could see a shift in his language from talking about a disruptive technology to a disruptive positioning.

That it was really the creation of a new business model in all of its attributes. What’s the value proposition? What’s the profit formula, the capabilities, and priorities in that model? In fact, a technology can be shaped to be sustaining or disruptive. What is the model that’s being brought to market to compete with incumbents?

AMY BERNSTEIN: For the businesses that are trying to avoid being disrupted, Rita, what’s the best advice out there for them?

RITA MCGRATH: Well, you lift the lid off of any corporate portfolio, and it’s horrifying. What you see in there is somebody’s pet bunny from three CEOs ago and nobody said, “Why are we still doing that?” Or you’ve got these mission-critical, absolutely important projects that like half an intern is working on so you have this real disconnect.

DEREK VAN BEVER: These are the scars of a veteran, for sure!

RITA MCGRATH: I have been around the block on this. Anyway, then the last thing is your reward system. What do people believe they’re going to get rewarded for around here? One of the things that companies need to do, if they’re going to avoid getting disrupted: you have to be in the game and you have to be willing to support small initiatives. There’s got to be some slack resource, there’s got to be the willingness to fund it. The number of times I have seen companies say, “Oh, we don’t want, we’re not going to be disrupted. We have this thing going on over here.”

No assumptions tested, no low-cost commitment tests. Big project teams with all the money in the world, on the assumption that they know what they’re doing and they don’t. There’s a real need for organizations that want to behave this way, to be willing to put some money behind what I call options. The idea of making a small investment today that could, not that will, but that could give you the right to create future choices. Companies that are going to be successful are going to get a lot smarter about that.

AMY BERNSTEIN: Well, let’s look at it from the other side, Derek. What’s the best advice for entrepreneurs or upstarts, who want to take advantage of disruptive innovation?

DEREK VAN BEVER: Yeah, pretty simple advice. Keep your cost structure low so that you’re able to exploit opportunities that are uninteresting to incumbents, too small, too remote, and target non-consumption. Don’t go after customers that they value, but rather go after segments that they’ve dismissed. The brass ring is if you can go after a segment that they’ve dismissed and they look at you and they go, “They just don’t understand this business.”

They let you grow a little bit and you get some success, and they look back at you a little bit later. And they go, “Oh, those poor dears. They just are not going to learn, are they?” Then they completely ignore you. That gives you the opportunity then to build from the bottom unmolested.

AMY BERNSTEIN: Felix, where does applying this theory most often go off the rails? Where are the difficulties in applying it?

FELIX OBERHOLZER-GEE: One difficulty for entrepreneurs is that it’s pretty difficult to distinguish non-consumption that actually has promise from situations where there’s just no interest. You’re probably familiar with SimpliSafe, the home security company, which I think is a beautiful example. Eleanor Laurans, one of the co-founders, she sits in Clay’s class. She literally goes out and tries to apply the theory thinking, “Why is there no home security for renters?”

How is it that the leading company back then and now, ADT, is serving homeowners only, when renters are afraid too, and may have a willingness to invest in home security as well? They built the company literally on the principles that she learned in the classroom. That yes, it’s a little less convenient, you don’t have someone who comes by your house and installs the equipment. You have to do that yourself, and so on, and so on. Then it turns out renters were just not really all that interested.

The fact that SimpliSafe is a very successful company today is just because a large fraction of homeowners actually found the value proposition of the company quite attractive. Distinguishing instances when you look at non-customers and what I tend to call near-customers, customers whose willingness to pay is in a useful vicinity, that turns out to be really difficult. Then for incumbent firms, I think one of the main difficulties is even if you’re successful at recognizing potential for disruption. Even if, as Rita suggested, you follow Clay’s advice and you set up a small group.

Typically, you take it out of the regular bureaucratic procedures, and you set it up as a separate entity, and they don’t have to worry about funding for a little while. We have lots and lots of examples where companies have done this successfully, where they build a shadow operation. Think Walmart, its online operations that get established, a million miles away, at least mentally, from Bentonville, in Silicon Valley, of course. Then there’s just no real way to bring that small, agile organization back and attach it to the supertanker.

You build something sort of interesting, sort of successful, but given the scale of the incumbent, it’s pretty meaningless. I think incubating new ideas, that’s what many incumbents are quite good at. But marrying these ideas back to the supertanker that has been on a set course for a long period of time, I think that remains extraordinarily challenging, with not that many examples of companies that have done this successfully.

DEREK VAN BEVER: Felix, you’re reminding me, Clay, when he was in the classroom, he would take that big index finger of his and he would go, “Where do you stick it?”

FELIX OBERHOLZER-GEE: Yeah.

DEREK VAN BEVER: His frustration was that companies would always try to stick it underneath the division that it is effectively disrupting. You know how that story ends, right?

FELIX OBERHOLZER-GEE: Yes.

DEREK VAN BEVER: Where it’s, “Oh, we’ll take care of this. Don’t worry, we’ll make sure that this grows just as fast as it should.” That’s often the last that you hear from it.

FELIX OBERHOLZER-GEE: Yeah. But then his view that simple organizational separation will lead to long-term success, that I think has not really been true for many companies either. I think that’s a really important question. Then the second, if you see disruption, if you think it’s going to happen, how good are you going to be? What are the chances that that’s a game that you can play successfully? Think of the large energy companies right now.

Most of them are making some investments in renewables, and we already see quite interesting dividing lines. Some of them being good at it, and some of them basically wasting money that doesn’t seem to have much of a payoff. Disruption itself implies that it’s almost costless to respond. But in the end, there’s capital, there’s talent, there’s attention that is required, if in fact, you want to be building something successful.

In an environment where entrepreneurship and the opportunity cost of trying new things are typically downplayed or are seen as very low, I tend to remind my students that the opportunity costs of trying to play yet another game, they can be quite sizable.

AMY BERNSTEIN: Let me throw out a question to the whole group here. Where do you all think our understanding of disruptive innovation is headed? What future are we looking at? I’ll go around the horn here. I’ll start with you, Rita.

RITA MCGRATH: Sure. What I’m encouraged by is that when Clay and I were working together in the ’90s, we never actually wrote a paper together, we co-presented a lot of stuff, but not co-authored. But anyway, we were talking about this in the ’90s, and we would be like the only people in the room talking about these phenomena, and people would look at us as though we had two heads—or four heads I guess, between the two of us. Because I was talking about, “Well, you need to plan differently when you don’t have data.”

Clay was talking about, “Well, this little upstart could cause you problems, if the right circumstances prevailed.” I think what’s happened in the intervening decades, is people are now aware. People are now willing to say older models of strategy don’t apply, that newer models really make a difference. That is a far cry from being able to put that awareness into systemic action. I think what we’ve made a lot of progress on is the conversations are different.

There’s a lot more knowledge that there’s more to life than just sustaining innovations. That there are these phenomena we need to pay attention to. I think awareness is where we are. I think the next big chasm to be crossed is how do we now put that in practice in the management structures that we use to run large, complex corporations? There is so much knowledge about how you build innovation capability, how you build disruptive potential, how you actually make these things happen.

And yet, most managers aren’t taught it. If you think about the lifecycle of a competitive advantage, it has to come from somewhere. It has to come from an innovation or an invention, or an idea or something. Then you have to scale it, which is getting it into the business. Then you have this delightful period of exploitation, where you get to enjoy the fruits of your labor. That’s what we teach people. We don’t also teach them about what happens when the shoe has turned, the thing’s gone obsolete. Your 386 microprocessor is no longer the state-of-the-art. How do you now reconfigure your company to take advantage of the next new thing? Those skills are not yet mainstream.

DEREK VAN BEVER: Yeah.

AMY BERNSTEIN: Derek?

DEREK VAN BEVER: Yeah. Going back to an aside I made a while ago, that when Chet said, “You know this is a psychology course, right?” It is interesting that 27 years after the publication of that book, we’re still bound to get caught up in this phenomenon. To pick up on what Rita said, I think we are going to understand more about how to respond to the phenomenon of disruption as incumbent companies. We’ll understand the different rate at which it works its way through industries.

Fifty years in steel, seemingly overnight in education, and we’ll understand more the importance of the performance metrics that we honor. What would’ve happened if US Steel had measured not gross margin, but net profit dollars per ton? Would they have abandoned such a huge swath of the steel market and imagined that they were doing the right thing? I think we’ll get better at continuing to tease out this puzzle of how do we confront our own cognitive weaknesses and blind spots and respond with more alacrity, more quickly and more effectively?
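Derek’s US Steel question can be made concrete with a small worked example (all numbers here are invented for illustration, not US Steel’s actual figures): a low-end product line can look worse on gross-margin percentage while contributing more total profit dollars, so ranking by margin percentage alone invites abandoning the bigger business.

```python
# Hypothetical product lines: (tons sold, revenue per ton, cost per ton).
lines = {
    "rebar (low end)":  (2_000_000, 300, 260),
    "sheet (high end)": (  200_000, 900, 650),
}

for name, (tons, rev, cost) in lines.items():
    margin_pct = (rev - cost) / rev * 100        # gross margin percentage
    profit_dollars = (rev - cost) * tons         # net profit dollars per line
    print(f"{name}: margin {margin_pct:.0f}%, profit ${profit_dollars:,}")

# By margin percentage, sheet (28%) beats rebar (13%); by profit dollars,
# rebar ($80M) beats sheet ($50M). The chosen metric decides which market
# looks "worth defending."
```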

AMY BERNSTEIN: Last word to you, Felix.

FELIX OBERHOLZER-GEE: I think to me, one of the really big changes in technology in the economy today, is the ease with which companies can produce high-quality services and products at incredibly low cost. Remember, part of the dilemma for the incumbent, comes from the fact that you’re serving customers who have very high demands. And the implication was you, as a result, have very high cost. That makes it basically impossible for you to respond. Now today, we see so many companies that have amazing quality and a cost advantage at one and the same time.

This old notion in strategy of being stuck in the middle when you try to be both high quality and low cost, and then you end up being not really high quality because you’re thinking about cost. You end up not being really low-cost because you’re thinking about quality as well. This notion of “stuck in the middle,” to the extent that it doesn’t really apply, frees up incumbents to respond in a much more flexible manner to serious threats of disruptors.

Then it struck me as interesting, even in today’s conversation—I know I’m guilty of it myself—how many of our examples are product related? Well, what about services? In services, it’s almost true by definition that you get fabulous service from engaged employees. And the moment you have highly productive, highly engaged employees, you have this interesting combination of having a potential cost advantage that comes from high productivity. The very same ingredient that produces your cost advantage now produces your ability to satisfy even the most demanding customers.

That, to me, is a change that doesn’t say, “Oh, if I’m an entrepreneur, I shouldn’t use disruptive innovation as my guideposts, where to enter, how to develop my business.” But it says that the balance of who’s going to be successful and how easy it will be to disrupt large organizations, that balance is going to change over time in favor of large incumbents. The very formidable difficulties of disrupting their businesses.

HANNAH BATES: You just heard Derek van Bever, Rita McGrath, and Felix Oberholzer-Gee in conversation with Harvard Business Review editor Amy Bernstein on HBR IdeaCast.

Derek van Bever is senior lecturer and director of the Forum for Growth and Innovation at Harvard Business School, Rita McGrath is a professor at Columbia Business School, and Felix Oberholzer-Gee is a professor at Harvard Business School.

We’ll be back next Wednesday with another hand-picked conversation about business strategy from Harvard Business Review. If you found this episode helpful, share it with your friends and colleagues, and follow our show on Apple Podcasts, Spotify, or wherever you get your podcasts. While you’re there, be sure to leave us a review.

And when you’re ready for more podcasts, articles, case studies, books, and videos with the world’s top business and management experts, find it all at HBR dot org.

This episode was produced by Curt Nickisch, Anne Saini, and me, Hannah Bates. Ian Fox is our editor. And special thanks to Maureen Hoch, Nicole Smith, Erica Truxler, Ramsey Khabbaz, Anne Bartholomew, and you – our listener.

See you next week.


Science and Technology Will Change Our Future Essay

Contents: Introduction; Papers Are Replaced by Computer Interface; Credit Card Type Media; Changes in Travel; Lowering the Cost of Living; Works Cited

Science and technology have continued to play a central role in providing means through which people improve their well-being and health, alleviate poverty, and define themselves as a nation and people. Many societies are built on a firm foundation of science and technology and are irrevocably dependent on them. As such, science and technology will continue to play a major role in shaping our lives and our nation: they will change how people communicate and interact with each other, how people work and travel, and how students learn. Technological innovation in the next 50 years will rival the innovation that took place in the past 400 years.

According to Reuters, businesses and schools will go paperless as paper is replaced by computer interfaces built into furniture and walls. Advances in communication, energy distribution, and storage in consumer products and businesses will support a technology known as “roomware” that will enable this breakthrough. Office desks, walls, and cafeteria tables will double as terminals that allow a person to write down an idea and send it to a personal desk or computer located somewhere else. School and office walls and windows will be able to display maps and directions to help locate particular offices, staff, and classrooms (Reuters, 2009). As offices and schools go paperless, the environment will benefit from reduced dependence on trees for paper production.

After a long period of stability as the main storage choices, DVD and CD media will be replaced by credit card type media by 2015. As the internet becomes more flexible, coupled with the availability of cheap massive storage space and high data transfer rates, people will no longer need physical storage media to store data. File storage and access will be done remotely due to the convenience brought by the internet. Movies will only be available for download from the internet, and users will need an access code to get movies and data (B, 2009).

Innovation in science and technology will also change travel. People will travel in sky cars cruising comfortably at 300 miles per hour on regular fuel. The sky car will be equipped with onboard computers and will be fully automated, which means that one will not need a license to fly it. The sky car will also be equipped with redundant engines for safety purposes, in case the main engine fails (FutureCars.com, 2010). The cost of a new sky car will equal that of a luxury car once mass production begins. A sky car will cost less to maintain and will launch and land on a pad the size of a dining room. Using sky cars, people will be able to avoid traffic and speeding tickets and save travel time.

Other speculations about the future include the availability of cheap, advanced personal equipment for self-diagnosis of illnesses that currently require a costly medical diagnosis. This will reduce the cost of health care and health insurance, hence lowering the cost of living. It will also lead to better health. Robots will also become part of mainstream life, in the form of interactive toys and household items, like carpets and pets that will require no maintenance (Mooneyham, 2005).

The future will be shaped greatly by continued innovation in science and technology. Offices will go paperless, and paper will be replaced by computer interfaces built into office furniture and walls. DVD and CD media will be replaced by credit card type media as people turn to online data storage and access. Technology innovation will also have a great impact on travel with the introduction of sky cars, which will result in reduced travel time and traffic congestion. New health equipment will help people diagnose diseases themselves, hence reducing the cost of health care and leading to better health.

Reuters. (2009). 2018 milestone: “Paperless Offices”. Web.

B, D. (2009). The Future of – Online/Remote Data Storage. Web.

Future Diagnostics Group. (2009). Nuclear Medicine. Web.

FutureCars.com. (2010). Moller Skycar – Long Time Coming. Web.

Mooneyham, J. (2005). Substantial regeneration treatments for various organs. Web.

IvyPanda. (2023, November 2). Science and Technology Will Change Our Future. https://ivypanda.com/essays/science-and-technology-will-change-our-future/



How artificial intelligence is transforming the world

Darrell M. West (Senior Fellow, Center for Technology Innovation; Douglas Dillon Chair in Governmental Studies) and John R. Allen

April 24, 2018

Artificial intelligence (AI) is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision making—and already it is transforming every walk of life. In this report, Darrell West and John Allen discuss AI’s application across a variety of sectors, address issues in its development, and offer recommendations for getting the most out of AI while still protecting important human values.

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion


Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States in 2017 were asked about AI, only 17 percent said they were familiar with it. 1 A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values. 2

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity.

Qualities of artificial intelligence

Although there is no uniformly agreed upon definition, AI generally is thought to refer to “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention.” 3  According to researchers Shubhendu and Vijay, these software systems “make decisions which normally require [a] human level of expertise” and help people anticipate problems or deal with issues as they come up. 4 As such, they operate in an intentional, intelligent, and adaptive manner.

Intentionality

Artificial intelligence algorithms are designed to make decisions, often using real-time data. They are unlike passive machines that are capable only of mechanical or predetermined responses. Using sensors, digital data, or remote inputs, they combine information from a variety of different sources, analyze the material instantly, and act on the insights derived from those data. With massive improvements in storage systems, processing speeds, and analytic techniques, they are capable of tremendous sophistication in analysis and decisionmaking.


Intelligence

AI generally is undertaken in conjunction with machine learning and data analytics. 5 Machine learning takes data and looks for underlying trends. If it spots something that is relevant for a practical problem, software designers can take that knowledge and use it to analyze specific issues. All that is required are data that are sufficiently robust that algorithms can discern useful patterns. Data can come in the form of digital information, satellite imagery, visual information, text, or unstructured data.
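The “looking for underlying trends” step can be sketched in a few lines: ordinary least squares fits a straight line to observed data and exposes the trend’s direction and size. This is a minimal illustration with invented data, not any particular production system.

```python
def fit_trend(points):
    """Return (slope, intercept) of the least-squares line through points."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in points) / \
            sum((x - mean_x) ** 2 for x, _ in points)
    return slope, mean_y - slope * mean_x

# Invented daily measurements trending upward; the fitted slope exposes
# the pattern an analyst (or a larger ML system) would act on.
observations = [(0, 10.0), (1, 12.1), (2, 13.9), (3, 16.2), (4, 18.0)]
slope, intercept = fit_trend(observations)
print(f"trend: +{slope:.2f} per day, baseline {intercept:.2f}")
```

Real machine-learning pipelines add many layers (feature engineering, model selection, validation), but the core idea of extracting a pattern from sufficiently robust data is the same.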

Adaptability

AI systems have the ability to learn and adapt as they make decisions. In the transportation area, for example, semi-autonomous vehicles have tools that let drivers and vehicles know about upcoming congestion, potholes, highway construction, or other possible traffic impediments. Vehicles can take advantage of the experience of other vehicles on the road, without human involvement, and the entire corpus of their achieved “experience” is immediately and fully transferable to other similarly configured vehicles. Their advanced algorithms, sensors, and cameras incorporate experience in current operations, and use dashboards and visual displays to present information in real time so human drivers are able to make sense of ongoing traffic and vehicular conditions. And in the case of fully autonomous vehicles, advanced systems can completely control the car or truck, and make all the navigational decisions.


Applications in diverse sectors

AI is not a futuristic vision, but rather something that is here today and being integrated with and deployed into a variety of sectors. This includes fields such as finance, national security, health care, criminal justice, transportation, and smart cities. There are numerous examples where AI already is making an impact on the world and augmenting human capabilities in significant ways. 6

One of the reasons for the growing role of AI is the tremendous opportunities for economic development that it presents. A project undertaken by PricewaterhouseCoopers estimated that “artificial intelligence technologies could increase global GDP by $15.7 trillion, a full 14%, by 2030.” 7 That includes advances of $7 trillion in China, $3.7 trillion in North America, $1.8 trillion in Northern Europe, $1.2 trillion for Africa and Oceania, $0.9 trillion in the rest of Asia outside of China, $0.7 trillion in Southern Europe, and $0.5 trillion in Latin America. China is making rapid strides because it has set a national goal of investing $150 billion in AI and becoming the global leader in this area by 2030.

Meanwhile, a McKinsey Global Institute study of China found that “AI-led automation can give the Chinese economy a productivity injection that would add 0.8 to 1.4 percentage points to GDP growth annually, depending on the speed of adoption.” 8 Although its authors found that China currently lags the United States and the United Kingdom in AI deployment, the sheer size of its AI market gives that country tremendous opportunities for pilot testing and future development.

Finance

Investments in financial AI in the United States tripled between 2013 and 2014 to a total of $12.2 billion. 9 According to observers in that sector, “Decisions about loans are now being made by software that can take into account a variety of finely parsed data about a borrower, rather than just a credit score and a background check.” 10 In addition, there are so-called robo-advisers that “create personalized investment portfolios, obviating the need for stockbrokers and financial advisers.” 11 These advances are designed to take the emotion out of investing and undertake decisions based on analytical considerations, and make these choices in a matter of minutes.

A prominent example of this is taking place in stock exchanges, where high-frequency trading by machines has replaced much of human decisionmaking. People submit buy and sell orders, and computers match them in the blink of an eye without human intervention. Machines can spot trading inefficiencies or market differentials on a very small scale and execute trades that make money according to investor instructions. 12 Powered in some places by advanced computing, these tools have much greater capacities for storing information because of their emphasis not on a zero or a one, but on “quantum bits” that can store multiple values in each location. 13 That dramatically increases storage capacity and decreases processing times.
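The order matching described here can be sketched as a toy matching engine. This is an illustration only: real exchanges add price-time priority per order, order IDs, cancellation, and much more. Orders cross automatically whenever the best bid meets or exceeds the best ask.

```python
import heapq

class OrderBook:
    """Toy limit-order book: matches crossing buy/sell orders automatically."""

    def __init__(self):
        self._bids = []  # max-heap of (negated price, quantity)
        self._asks = []  # min-heap of (price, quantity)

    def submit(self, side, price, qty):
        """Add an order, then return the trades it triggers (price, qty)."""
        if side == "buy":
            heapq.heappush(self._bids, (-price, qty))
        else:
            heapq.heappush(self._asks, (price, qty))
        return self._match()

    def _match(self):
        trades = []
        # Cross the book while the best bid reaches the best ask.
        while self._bids and self._asks and -self._bids[0][0] >= self._asks[0][0]:
            neg_bid, bid_qty = heapq.heappop(self._bids)
            ask_price, ask_qty = heapq.heappop(self._asks)
            qty = min(bid_qty, ask_qty)
            trades.append((ask_price, qty))  # execute at the resting ask price
            if bid_qty > qty:                # re-queue any unfilled remainder
                heapq.heappush(self._bids, (neg_bid, bid_qty - qty))
            if ask_qty > qty:
                heapq.heappush(self._asks, (ask_price, ask_qty - qty))
        return trades
```

For example, after two resting sell orders (50 @ 101.0 and 30 @ 100.5), a buy of 60 @ 101.0 fills 30 at 100.5 and 30 at 101.0, with 20 left on the book, all without human intervention.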

Fraud detection represents another way AI is helpful in financial systems. It sometimes is difficult to discern fraudulent activities in large organizations, but AI can identify abnormalities, outliers, or deviant cases requiring additional investigation. That helps managers find problems early in the cycle, before they reach dangerous levels. 14
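One simple instance of this outlier-flagging idea is a z-score test against historical behavior. Production fraud systems use far richer models; the threshold of 3 standard deviations is a common rule of thumb, and the payment amounts below are invented for illustration.

```python
import statistics

def is_anomalous(amount, baseline, threshold=3.0):
    """Flag a transaction that deviates strongly from historical payments."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)
    return abs(amount - mean) > threshold * stdev

# Invented history of typical payments for one account.
baseline = [42, 39, 45, 41, 40, 44, 43, 38, 41]

print(is_anomalous(900, baseline))  # a wildly atypical payment
print(is_anomalous(46, baseline))   # within normal variation
```

Flagged cases are not verdicts; they are the “abnormalities, outliers, or deviant cases requiring additional investigation” that a human reviewer then examines.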

National security

AI plays a substantial role in national defense. Through its Project Maven, the American military is deploying AI “to sift through the massive troves of data and video captured by surveillance and then alert human analysts of patterns or when there is abnormal or suspicious activity.” 15 According to Deputy Secretary of Defense Patrick Shanahan, the goal of emerging technologies in this area is “to meet our warfighters’ needs and to increase [the] speed and agility [of] technology development and procurement.” 16


The big data analytics associated with AI will profoundly affect intelligence analysis, as massive amounts of data are sifted in near real time—if not eventually in real time—thereby providing commanders and their staffs a level of intelligence analysis and productivity heretofore unseen. Command and control will similarly be affected as human commanders delegate certain routine, and in special circumstances, key decisions to AI platforms, reducing dramatically the time associated with the decision and subsequent action. In the end, warfare is a time competitive process, where the side able to decide the fastest and move most quickly to execution will generally prevail. Indeed, artificially intelligent intelligence systems, tied to AI-assisted command and control systems, can move decision support and decisionmaking to a speed vastly superior to the speeds of the traditional means of waging war. So fast will be this process, especially if coupled to automatic decisions to launch artificially intelligent autonomous weapons systems capable of lethal outcomes, that a new term has been coined specifically to embrace the speed at which war will be waged: hyperwar.

While the ethical and legal debate is raging over whether America will ever wage war with artificially intelligent autonomous lethal systems, the Chinese and Russians are not nearly so mired in this debate, and we should anticipate our need to defend against these systems operating at hyperwar speeds. The challenge in the West of where to position “humans in the loop” in a hyperwar scenario will ultimately dictate the West’s capacity to be competitive in this new form of conflict. 17

Just as AI will profoundly affect the speed of warfare, the proliferation of zero-day or zero-second cyber threats and polymorphic malware will challenge even the most sophisticated signature-based cyber protection, forcing significant improvement of existing cyber defenses. Increasingly vulnerable systems will need to migrate to a layered approach to cybersecurity built on cloud-based, cognitive AI platforms. This approach moves the community toward a “thinking” defensive capability that can defend networks through constant training on known threats. Such a capability includes DNA-level analysis of heretofore unknown code, with the possibility of recognizing and stopping inbound malicious code by recognizing a string component of the file. This is how certain key U.S.-based systems stopped the debilitating “WannaCry” and “Petya” viruses.
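
Signature-based protection, the baseline the passage says AI must improve on, amounts to searching files for known byte strings. A toy sketch (the signatures and file contents here are invented) shows both how it works and why polymorphic malware, which rewrites its own bytes, slips past it:

```python
# Hypothetical signatures: byte strings taken from known malicious files.
SIGNATURES = {
    "toy_worm_a": b"\xde\xad\xbe\xef",
    "toy_trojan_b": b"EVIL_PAYLOAD",
}

def scan(data: bytes):
    """Return the names of any known signatures found in `data`.
    Polymorphic malware defeats this by mutating its bytes between
    infections, which is why the article argues for behavior-trained
    ('thinking') defenses layered on top of signatures."""
    return [name for name, sig in SIGNATURES.items() if sig in data]

print(scan(b"header EVIL_PAYLOAD trailer"))  # matches toy_trojan_b
print(scan(b"benign content"))               # no matches
```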

Preparing for hyperwar and defending critical cyber networks must become a high priority because China, Russia, North Korea, and other countries are putting substantial resources into AI. In 2017, China’s State Council issued a plan for the country to “build a domestic industry worth almost $150 billion” by 2030. 18 As an example of the possibilities, the Chinese search firm Baidu has pioneered a facial recognition application that finds missing people. In addition, cities such as Shenzhen are providing up to $1 million to support AI labs. That country hopes AI will provide security, combat terrorism, and improve speech recognition programs. 19 The dual-use nature of many AI algorithms will mean AI research focused on one sector of society can be rapidly modified for use in the security sector as well. 20

Health care

AI tools are helping designers improve computational sophistication in health care. For example, Merantix is a German company that applies deep learning to medical issues. It has an application in medical imaging that “detects lymph nodes in the human body in Computer Tomography (CT) images.” 21 According to its developers, the key is labeling the nodes and identifying small lesions or growths that could be problematic. Humans can do this, but radiologists charge $100 per hour and may be able to carefully read only four images an hour. If there were 10,000 images, the cost of this process would be $250,000, which is prohibitively expensive if done by humans.
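
The cost figure follows directly from the stated rates:

```python
rate_per_hour = 100      # radiologist billing rate given in the text, $/hour
images_per_hour = 4      # careful reads per hour, from the text
num_images = 10_000

hours = num_images / images_per_hour   # 2,500 hours of reading
cost = hours * rate_per_hour           # $250,000, matching the article
print(f"{hours:.0f} hours of reading, ${cost:,.0f}")
```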

What deep learning can do in this situation is train computers on data sets to learn what a normal-looking versus an irregular-appearing lymph node is. After doing that through imaging exercises and honing the accuracy of the labeling, radiological imaging specialists can apply this knowledge to actual patients and determine the extent to which someone is at risk of cancerous lymph nodes. Since only a few are likely to test positive, it is a matter of identifying the unhealthy versus healthy node.

AI has been applied to congestive heart failure as well, an illness that afflicts 10 percent of senior citizens and costs $35 billion each year in the United States. AI tools are helpful because they “predict in advance potential challenges ahead and allocate resources to patient education, sensing, and proactive interventions that keep patients out of the hospital.” 22

Criminal justice

AI is being deployed in the criminal justice area. The city of Chicago has developed an AI-driven “Strategic Subject List” that analyzes people who have been arrested for their risk of becoming future perpetrators. It ranks 400,000 people on a scale of 0 to 500, using items such as age, criminal activity, victimization, drug arrest records, and gang affiliation. In looking at the data, analysts found that youth is a strong predictor of violence, being a shooting victim is associated with becoming a future perpetrator, gang affiliation has little predictive value, and drug arrests are not significantly associated with future criminal activity. 23
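
Chicago's actual model is not public, so the following is purely a hypothetical illustration of how a weighted score on the list's 0-to-500 scale might combine the factors the article names, with weights loosely mirroring the analysts' findings:

```python
def risk_score(age, arrests, times_shot, drug_arrests, gang_member):
    """Hypothetical weighted score on a 0-500 scale. The weights are
    illustrative only; Chicago's real model is not public. Youth and
    prior victimization weigh heavily, mirroring the analysts'
    findings; gang affiliation and drug arrests weigh little."""
    score = 0.0
    score += max(0, 30 - age) * 8      # youth is a strong predictor
    score += times_shot * 60           # victimization predicts perpetration
    score += arrests * 15
    score += drug_arrests * 2          # weak predictor per the findings
    score += 10 if gang_member else 0  # little predictive value
    return min(500, round(score))

print(risk_score(age=19, arrests=3, times_shot=1,
                 drug_arrests=0, gang_member=False))
```

Even a toy version makes the critics' point concrete: whoever picks the weights decides who gets flagged.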

Judicial experts claim AI programs reduce human bias in law enforcement and lead to a fairer sentencing system. R Street Institute Associate Caleb Watney writes:

Empirically grounded questions of predictive risk analysis play to the strengths of machine learning, automated reasoning and other forms of AI. One machine-learning policy simulation concluded that such programs could be used to cut crime up to 24.8 percent with no change in jailing rates, or reduce jail populations by up to 42 percent with no increase in crime rates. 24

However, critics worry that AI algorithms represent “a secret system to punish citizens for crimes they haven’t yet committed. The risk scores have been used numerous times to guide large-scale roundups.” 25 The fear is that such tools target people of color unfairly and have not helped Chicago reduce the murder wave that has plagued it in recent years.

Despite these concerns, other countries are moving ahead with rapid deployment in this area. In China, for example, companies already have “considerable resources and access to voices, faces and other biometric data in vast quantities, which would help them develop their technologies.” 26 New technologies make it possible to match images and voices with other types of information, and to use AI on these combined data sets to improve law enforcement and national security. Through its “Sharp Eyes” program, Chinese law enforcement is matching video images, social media activity, online purchases, travel records, and personal identity into a “police cloud.” This integrated database enables authorities to keep track of criminals, potential law-breakers, and terrorists. 27 Put differently, China has become the world’s leading AI-powered surveillance state.

Transportation

Transportation represents an area where AI and machine learning are producing major innovations. Research by Cameron Kerry and Jack Karsten of the Brookings Institution has found that over $80 billion was invested in autonomous vehicle technology between August 2014 and June 2017. Those investments include applications both for autonomous driving and the core technologies vital to that sector. 28

Autonomous vehicles—cars, trucks, buses, and drone delivery systems—use advanced technological capabilities. Those features include automated vehicle guidance and braking, lane-changing systems, the use of cameras and sensors for collision avoidance, the use of AI to analyze information in real time, and the use of high-performance computing and deep learning systems to adapt to new circumstances through detailed maps. 29

Light detection and ranging systems (LIDARs) and AI are key to navigation and collision avoidance. LIDAR systems combine light and radar instruments. Mounted on top of the vehicle, they use light beams and radar to image a 360-degree environment and measure the speed and distance of surrounding objects. Along with sensors placed on the front, sides, and back of the vehicle, these instruments provide information that keeps fast-moving cars and trucks in their own lane, helps them avoid other vehicles, and applies brakes and steering when needed, acting instantly so as to avoid accidents.
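
At the core of LIDAR ranging is a simple time-of-flight calculation: the sensor times a light pulse's round trip to an object and halves the path length. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in a vacuum, meters per second

def distance_m(round_trip_s):
    """Range to an object from a LIDAR pulse's round-trip time.
    The pulse travels out and back, so the one-way distance is
    half the total path."""
    return C * round_trip_s / 2

# A pulse returning after 200 nanoseconds puts the object about 30 m away
print(round(distance_m(200e-9), 2))  # → 29.98
```

Sweeping millions of such pulses per second is what builds the 360-degree point cloud the guidance software reasons over.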

Since these cameras and sensors compile a huge amount of information and need to process it instantly to avoid the car in the next lane, autonomous vehicles require high-performance computing, advanced algorithms, and deep learning systems to adapt to new scenarios. This means that software is the key, not the physical car or truck itself. 30 Advanced software enables cars to learn from the experiences of other vehicles on the road and adjust their guidance systems as weather, driving, or road conditions change. 31

Ride-sharing companies are very interested in autonomous vehicles. They see advantages in terms of customer service and labor productivity. All of the major ride-sharing companies are exploring driverless cars. The surge of car-sharing and taxi services—such as Uber and Lyft in the United States, Daimler’s Mytaxi and Hailo service in Great Britain, and Didi Chuxing in China—demonstrates the opportunities of this transportation option. Uber recently signed an agreement to purchase 24,000 autonomous cars from Volvo for its ride-sharing service. 32

However, the ride-sharing firm suffered a setback in March 2018 when one of its autonomous vehicles in Arizona hit and killed a pedestrian. Uber and several auto manufacturers immediately suspended testing and launched investigations into what went wrong and how the fatality could have occurred. 33 Both industry and consumers want reassurance that the technology is safe and able to deliver on its stated promises. Unless there are persuasive answers, this accident could slow AI advancements in the transportation sector.

Smart cities

Metropolitan governments are using AI to improve urban service delivery. For example, according to Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson:

The Cincinnati Fire Department is using data analytics to optimize medical emergency responses. The new analytics system recommends to the dispatcher an appropriate response to a medical emergency call—whether a patient can be treated on-site or needs to be taken to the hospital—by taking into account several factors, such as the type of call, location, weather, and similar calls. 34

Since it fields 80,000 requests each year, Cincinnati officials are deploying this technology to prioritize responses and determine the best ways to handle emergencies. They see AI as a way to deal with large volumes of data and figure out efficient ways of responding to public requests. Rather than address service issues in an ad hoc manner, authorities are trying to be proactive in how they provide urban services.
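
The dispatch logic described above can be caricatured as a small set of rules; the call types, inputs, and thresholds below are invented purely for illustration (the real Cincinnati system also weighs location and weather):

```python
def recommend_response(call_type, similar_onsite_rate):
    """Toy dispatch recommender. `similar_onsite_rate` stands in for
    historical data on how often similar calls were resolved on-site.
    All categories and cutoffs here are hypothetical."""
    if call_type in {"cardiac arrest", "major trauma"}:
        return "transport"             # always send to the hospital
    if similar_onsite_rate >= 0.8:     # similar calls mostly resolved on-site
        return "treat on-site"
    return "transport"

print(recommend_response("minor laceration", 0.9))  # treat on-site
print(recommend_response("cardiac arrest", 0.9))    # transport
```

At 80,000 calls a year, even a modest improvement in routing frees significant ambulance capacity, which is the efficiency argument the city is making.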

Cincinnati is not alone. A number of metropolitan areas are adopting smart city applications that use AI to improve service delivery, environmental planning, resource management, energy utilization, and crime prevention, among other things. For its smart cities index, the magazine Fast Company ranked American locales and found Seattle, Boston, San Francisco, Washington, D.C., and New York City as the top adopters. Seattle, for example, has embraced sustainability and is using AI to manage energy usage and resource management. Boston has launched a “City Hall To Go” that makes sure underserved communities receive needed public services. It also has deployed “cameras and inductive loops to manage traffic and acoustic sensors to identify gun shots.” San Francisco has certified 203 buildings as meeting LEED sustainability standards. 35

Through these and other means, metropolitan areas are leading the country in the deployment of AI solutions. Indeed, according to a National League of Cities report, 66 percent of American cities are investing in smart city technology. Among the top applications noted in the report are “smart meters for utilities, intelligent traffic signals, e-governance applications, Wi-Fi kiosks, and radio frequency identification sensors in pavement.” 36

Policy, regulatory, and ethical issues

These examples from a variety of sectors demonstrate how AI is transforming many walks of human existence. The increasing penetration of AI and autonomous devices into many aspects of life is altering basic operations and decisionmaking within organizations, and improving efficiency and response times.

At the same time, though, these developments raise important policy, regulatory, and ethical issues. For example, how should we promote data access? How do we guard against biased or unfair data used in algorithms? What types of ethical principles are introduced through software programming, and how transparent should designers be about their choices? What about questions of legal liability in cases where algorithms cause harm? 37

Data access problems

The key to getting the most out of AI is having a “data-friendly ecosystem with unified standards and cross-platform sharing.” AI depends on data that can be analyzed in real time and brought to bear on concrete problems. Having data that are “accessible for exploration” in the research community is a prerequisite for successful AI development. 38

According to a McKinsey Global Institute study, nations that promote open data sources and data sharing are the ones most likely to see AI advances. In this regard, the United States has a substantial advantage over China. Global ratings on data openness show that the U.S. ranks eighth overall in the world, compared to 93rd for China. 39

But right now, the United States does not have a coherent national data strategy. There are few protocols for promoting research access or platforms that make it possible to gain new insights from proprietary data. It is not always clear who owns data or how much belongs in the public sphere. These uncertainties limit the innovation economy and act as a drag on academic research. In the following section, we outline ways to improve data access for researchers.

Biases in data and algorithms

In some instances, certain AI systems are thought to have enabled discriminatory or biased practices. 40 For example, Airbnb has been accused of having homeowners on its platform who discriminate against racial minorities. A research project undertaken by the Harvard Business School found that “Airbnb users with distinctly African American names were roughly 16 percent less likely to be accepted as guests than those with distinctly white names.” 41

Racial issues also come up with facial recognition software. Most such systems operate by comparing a person’s face to a range of faces in a large database. As pointed out by Joy Buolamwini of the Algorithmic Justice League, “If your facial recognition data contains mostly Caucasian faces, that’s what your program will learn to recognize.” 42 Unless the databases have access to diverse data, these programs perform poorly when attempting to recognize African-American or Asian-American features.
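
The mechanism Buolamwini describes is easy to demonstrate with a deliberately crude, one-dimensional caricature of a recognizer: when the training data overrepresents one group, the learned "template" drifts toward that group and fails on others. All numbers here are arbitrary stand-ins for facial features:

```python
from statistics import mean

def train_template(samples):
    """A toy 'recognizer' whose template is just the mean of one
    numeric facial feature over the training set."""
    return mean(samples)

def recognizes(template, face, tolerance=15):
    return abs(face - template) <= tolerance

# Feature values are arbitrary units; group B simply differs from group A.
group_a_train = [20, 22, 21, 19, 23, 20, 21, 22]  # heavily represented
group_b_train = [60]                              # barely represented
template = train_template(group_a_train + group_b_train)

print(recognizes(template, 21))  # a group A face: recognized
print(recognizes(template, 61))  # a group B face: missed
```

Real face-recognition systems are vastly more complex, but the failure mode is the same: the model fits whatever the database contains.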

Many historical data sets reflect traditional values, which may or may not represent the preferences wanted in a current system. As Buolamwini notes, such an approach risks repeating inequities of the past:

The rise of automation and the increased reliance on algorithms for high-stakes decisions such as whether someone get insurance or not, your likelihood to default on a loan or somebody’s risk of recidivism means this is something that needs to be addressed. Even admissions decisions are increasingly automated—what school our children go to and what opportunities they have. We don’t have to bring the structural inequalities of the past into the future we create. 43

AI ethics and transparency

Algorithms embed ethical considerations and value choices into program decisions. As such, these systems raise questions concerning the criteria used in automated decisionmaking. Some people want to have a better understanding of how algorithms function and what choices are being made. 44

In the United States, many urban schools use algorithms for enrollment decisions based on a variety of considerations, such as parent preferences, neighborhood qualities, income level, and demographic background. According to Brookings researcher Jon Valant, the New Orleans–based Bricolage Academy “gives priority to economically disadvantaged applicants for up to 33 percent of available seats. In practice, though, most cities have opted for categories that prioritize siblings of current students, children of school employees, and families that live in school’s broad geographic area.” 45 Enrollment choices can be expected to be very different when considerations of this sort come into play.

Depending on how AI systems are set up, they can facilitate the redlining of mortgage applications, help people discriminate against individuals they don’t like, or help screen or build rosters of individuals based on unfair criteria. The types of considerations that go into programming decisions matter a lot in terms of how the systems operate and how they affect customers. 46

For these reasons, the EU is implementing the General Data Protection Regulation (GDPR) in May 2018. The rules specify that people have “the right to opt out of personally tailored ads” and “can contest ‘legal or similarly significant’ decisions made by algorithms and appeal for human intervention” in the form of an explanation of how the algorithm generated a particular outcome. Each guideline is designed to ensure the protection of personal data and provide individuals with information on how the “black box” operates. 47

Legal liability

There are questions concerning the legal liability of AI systems. If there are harms or infractions (or fatalities in the case of driverless cars), the operators of the algorithm likely will fall under product liability rules. A body of case law has shown that the situation’s facts and circumstances determine liability and influence the kind of penalties that are imposed. Those can range from civil fines to imprisonment for major harms. 48 The Uber-related fatality in Arizona will be an important test case for legal liability. The state actively recruited Uber to test its autonomous vehicles and gave the company considerable latitude in terms of road testing. It remains to be seen if there will be lawsuits in this case and who is sued: the human backup driver, the state of Arizona, the Phoenix suburb where the accident took place, Uber, software developers, or the auto manufacturer. Given the multiple people and organizations involved in the road testing, there are many legal questions to be resolved.

In non-transportation areas, digital platforms often have limited liability for what happens on their sites. For example, in the case of Airbnb, the firm “requires that people agree to waive their right to sue, or to join in any class-action lawsuit or class-action arbitration, to use the service.” By demanding that its users sacrifice basic rights, the company limits consumer protections and therefore curtails the ability of people to fight discrimination arising from unfair algorithms. 49 But whether the principle of neutral networks holds up in many sectors is yet to be determined on a widespread basis.

Recommendations

In order to balance innovation with basic human values, we propose a number of recommendations for moving forward with AI. These include improving data access, increasing government investment in AI, promoting AI workforce development, creating a federal advisory committee, engaging with state and local officials to ensure they enact effective policies, regulating broad objectives as opposed to specific algorithms, taking bias seriously as an AI issue, maintaining mechanisms for human control and oversight, and penalizing malicious behavior and promoting cybersecurity.

Improving data access

The United States should develop a data strategy that promotes innovation and consumer protection. Right now, there are no uniform standards in terms of data access, data sharing, or data protection. Almost all the data are proprietary in nature and not shared very broadly with the research community, and this limits innovation and system design. AI requires data to test and improve its learning capacity. 50 Without structured and unstructured data sets, it will be nearly impossible to gain the full benefits of artificial intelligence.

In general, the research community needs better access to government and business data, although with appropriate safeguards to make sure researchers do not misuse data in the way Cambridge Analytica did with Facebook information. There is a variety of ways researchers could gain data access. One is through voluntary agreements with companies holding proprietary data. Facebook, for example, recently announced a partnership with Stanford economist Raj Chetty to use its social media data to explore inequality. 51 As part of the arrangement, researchers were required to undergo background checks and could only access data from secured sites in order to protect user privacy and security.

Google long has made available search results in aggregated form for researchers and the general public. Through its “Trends” site, scholars can analyze topics such as interest in Trump, views about democracy, and perspectives on the overall economy. 52 That helps people track movements in public interest and identify topics that galvanize the general public.

Twitter makes much of its tweets available to researchers through application programming interfaces, commonly referred to as APIs. These tools help people outside the company build application software and make use of data from its social media platform. They can study patterns of social media communications and see how people are commenting on or reacting to current events.

In some sectors where there is a discernible public benefit, governments can facilitate collaboration by building infrastructure that shares data. For example, the National Cancer Institute has pioneered a data-sharing protocol where certified researchers can query health data it has using de-identified information drawn from clinical data, claims information, and drug therapies. That enables researchers to evaluate efficacy and effectiveness, and make recommendations regarding the best medical approaches, without compromising the privacy of individual patients.

There could be public-private data partnerships that combine government and business data sets to improve system performance. For example, cities could integrate information from ride-sharing services with their own material on social service locations, bus lines, mass transit, and highway congestion to improve transportation. That would help metropolitan areas deal with traffic tie-ups and assist in highway and mass transit planning.

Some combination of these approaches would improve data access for researchers, the government, and the business community, without impinging on personal privacy. As noted by Ian Buck, the vice president of NVIDIA, “Data is the fuel that drives the AI engine. The federal government has access to vast sources of information. Opening access to that data will help us get insights that will transform the U.S. economy.” 53 Through its Data.gov portal, the federal government already has put over 230,000 data sets into the public domain, and this has propelled innovation and aided improvements in AI and data analytic technologies. 54 The private sector also needs to facilitate research data access so that society can achieve the full benefits of artificial intelligence.

Increase government investment in AI

According to Greg Brockman, the co-founder of OpenAI, the U.S. federal government invests only $1.1 billion in non-classified AI technology. 55 That is far lower than the amount being spent by China or other leading nations in this area of research. That shortfall is noteworthy because the economic payoffs of AI are substantial. In order to boost economic development and social innovation, federal officials need to increase investment in artificial intelligence and data analytics. Higher investment is likely to pay for itself many times over in economic and social benefits. 56

Promote digital education and workforce development

As AI applications accelerate across many sectors, it is vital that we reimagine our educational institutions for a world where AI will be ubiquitous and students need a different kind of training than they currently receive. Right now, many students do not receive instruction in the kinds of skills that will be needed in an AI-dominated landscape. For example, there currently are shortages of data scientists, computer scientists, engineers, coders, and platform developers; unless our educational system generates more people with these capabilities, the shortfall will limit AI development.

For these reasons, both state and federal governments have been investing in AI human capital. For example, in 2017, the National Science Foundation funded over 6,500 graduate students in computer-related fields and has launched several new initiatives designed to encourage data and computer science at all levels from pre-K to higher and continuing education. 57 The goal is to build a larger pipeline of AI and data analytic personnel so that the United States can reap the full advantages of the knowledge revolution.

But there also needs to be substantial changes in the process of learning itself. It is not just technical skills that are needed in an AI world but skills of critical reasoning, collaboration, design, visual display of information, and independent thinking, among others. AI will reconfigure how society and the economy operate, and there needs to be “big picture” thinking on what this will mean for ethics, governance, and societal impact. People will need the ability to think broadly about many questions and integrate knowledge from a number of different areas.

One example of new ways to prepare students for a digital future is IBM’s Teacher Advisor program, utilizing Watson’s free online tools to help teachers bring the latest knowledge into the classroom. They enable instructors to develop new lesson plans in STEM and non-STEM fields, find relevant instructional videos, and help students get the most out of the classroom. 58 As such, they are precursors of new educational environments that need to be created.

Create a federal AI advisory committee

Federal officials need to think about how they deal with artificial intelligence. As noted previously, there are many issues ranging from the need for improved data access to addressing issues of bias and discrimination. It is vital that these and other concerns be considered so we gain the full benefits of this emerging technology.

In order to move forward in this area, several members of Congress have introduced the “Future of Artificial Intelligence Act,” a bill designed to establish broad policy and legal principles for AI. It proposes the secretary of commerce create a federal advisory committee on the development and implementation of artificial intelligence. The legislation provides a mechanism for the federal government to get advice on ways to promote a “climate of investment and innovation to ensure the global competitiveness of the United States,” “optimize the development of artificial intelligence to address the potential growth, restructuring, or other changes in the United States workforce,” “support the unbiased development and application of artificial intelligence,” and “protect the privacy rights of individuals.” 59

The specific questions the committee is asked to address include the following: competitiveness, workforce impact, education, ethics training, data sharing, international cooperation, accountability, machine learning bias, rural impact, government efficiency, investment climate, and consumer impact. The committee is directed to submit a report to Congress and the administration 540 days after enactment regarding any legislative or administrative action needed on AI.

This legislation is a step in the right direction, although the field is moving so rapidly that we would recommend shortening the reporting timeline from 540 days to 180 days. Waiting nearly two years for a committee report will certainly result in missed opportunities and a lack of action on important issues. Given rapid advances in the field, having a much quicker turnaround time on the committee analysis would be quite beneficial.

Engage with state and local officials

States and localities also are taking action on AI. For example, the New York City Council unanimously passed a bill that directed the mayor to form a taskforce that would “monitor the fairness and validity of algorithms used by municipal agencies.” 60 The city employs algorithms to “determine if a lower bail will be assigned to an indigent defendant, where firehouses are established, student placement for public schools, assessing teacher performance, identifying Medicaid fraud and determine where crime will happen next.” 61

According to the legislation’s developers, city officials want to know how these algorithms work and make sure there is sufficient AI transparency and accountability. In addition, there is concern regarding the fairness and biases of AI algorithms, so the taskforce has been directed to analyze these issues and make recommendations regarding future usage. It is scheduled to report back to the mayor on a range of AI policy, legal, and regulatory issues by late 2019.

Some observers already are worrying that the taskforce won’t go far enough in holding algorithms accountable. For example, Julia Powles of Cornell Tech and New York University argues that the bill originally required companies to make the AI source code available to the public for inspection, and that there be simulations of its decisionmaking using actual data. After criticism of those provisions, however, former Councilman James Vacca dropped the requirements in favor of a task force studying these issues. He and other city officials were concerned that publication of proprietary information on algorithms would slow innovation and make it difficult to find AI vendors who would work with the city. 62 It remains to be seen how this local task force will balance issues of innovation, privacy, and transparency.

Regulate broad objectives more than specific algorithms

The European Union has taken a restrictive stance on these issues of data collection and analysis. 63 It has rules limiting the ability of companies to collect data on road conditions and map street views. Because many of these countries worry that people’s personal information in unencrypted Wi-Fi networks is swept up in overall data collection, the EU has fined technology firms, demanded copies of data, and placed limits on the material collected. 64 This has made it more difficult for technology companies operating there to develop the high-definition maps required for autonomous vehicles.

The GDPR being implemented in Europe places severe restrictions on the use of artificial intelligence and machine learning. According to published guidelines, “Regulations prohibit any automated decision that ‘significantly affects’ EU citizens. This includes techniques that evaluates a person’s ‘performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.’” 65 In addition, these new rules give citizens the right to review how digital services made specific algorithmic choices affecting them.

If interpreted stringently, these rules will make it difficult for European software designers (and American designers who work with European counterparts) to incorporate artificial intelligence and high-definition mapping in autonomous vehicles. Central to navigation in these cars and trucks is tracking location and movements. Without high-definition maps containing geo-coded data and the deep learning that makes use of this information, fully autonomous driving will stagnate in Europe. Through this and other data protection actions, the European Union is putting its manufacturers and software designers at a significant disadvantage to the rest of the world.

It makes more sense to think about the broad objectives desired in AI and enact policies that advance them, as opposed to governments trying to crack open the “black boxes” and see exactly how specific algorithms operate. Regulating individual algorithms will limit innovation and make it difficult for companies to make use of artificial intelligence.

Take biases seriously

Bias and discrimination are serious issues for AI. There already have been a number of cases of unfair treatment linked to historical data, and steps need to be taken to make sure that does not become prevalent in artificial intelligence. Existing statutes governing discrimination in the physical economy need to be extended to digital platforms. That will help protect consumers and build confidence in these systems as a whole.

For these advances to be widely adopted, more transparency is needed in how AI systems operate. Andrew Burt of Immuta argues, “The key problem confronting predictive analytics is really transparency. We’re in a world where data science operations are taking on increasingly important tasks, and the only thing holding them back is going to be how well the data scientists who train the models can explain what it is their models are doing.” 66

Maintain mechanisms for human oversight and control

Some individuals have argued that there need to be avenues for humans to exercise oversight and control of AI systems. For example, Allen Institute for Artificial Intelligence CEO Oren Etzioni argues there should be rules for regulating these systems. First, he says, AI must be governed by all the laws that already have been developed for human behavior, including regulations concerning “cyberbullying, stock manipulation or terrorist threats,” as well as “entrap[ping] people into committing crimes.” Second, he believes that these systems should disclose they are automated systems and not human beings. Third, he states, “An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.” 67 His rationale is that these tools store so much data that people have to be cognizant of the privacy risks posed by AI.

In the same vein, the IEEE Global Initiative has ethical guidelines for AI and autonomous systems. Its experts suggest that these models be programmed with consideration for widely accepted human norms and rules for behavior. AI algorithms need to take into account the importance of these norms, how norm conflict can be resolved, and ways these systems can be transparent about norm resolution. Software designs should be programmed for “nondeception” and “honesty,” according to ethics experts. When failures occur, there must be mitigation mechanisms to deal with the consequences. In particular, AI must be sensitive to problems such as bias, discrimination, and fairness. 68

A group of machine learning experts claim it is possible to automate ethical decisionmaking. Using the trolley problem as a moral dilemma, they ask the following question: If an autonomous car goes out of control, should it be programmed to kill its own passengers or the pedestrians who are crossing the street? They devised a “voting-based system” that asked 1.3 million people to assess alternative scenarios, summarized the overall choices, and applied the overall perspective of these individuals to a range of vehicular possibilities. That allowed them to automate ethical decisionmaking in AI algorithms, taking public preferences into account. 69 This procedure, of course, does not reduce the tragedy involved in any kind of fatality, such as seen in the Uber case, but it provides a mechanism to help AI developers incorporate ethical considerations in their planning.
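The aggregation step described above can be sketched in a few lines: tally respondents’ votes for each dilemma scenario and adopt the majority choice as the policy. This is a minimal, purely illustrative sketch, not the researchers’ actual system; the scenario names, action labels, and vote data below are invented for the example.

```python
from collections import Counter

# Hypothetical survey data: for each dilemma scenario, the list of
# actions chosen by individual respondents.
votes = {
    "swerve_vs_stay": ["protect_pedestrians", "protect_passengers",
                       "protect_pedestrians", "protect_pedestrians"],
    "child_vs_adult": ["protect_child", "protect_child", "protect_adult"],
}

def aggregate_preferences(votes):
    """Summarize each scenario by the majority choice among respondents."""
    return {scenario: Counter(choices).most_common(1)[0][0]
            for scenario, choices in votes.items()}

# The resulting mapping is the "policy" an AI system would consult.
policy = aggregate_preferences(votes)
```

The real system is considerably more sophisticated (it generalizes from sampled scenarios to unseen ones), but the core idea, summarizing many individual moral judgments into a single actionable preference, is the same.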

Penalize malicious behavior and promote cybersecurity

As with any emerging technology, it is important to discourage malicious treatment designed to trick software or use it for undesirable ends. 70 This is especially important given the dual-use aspects of AI, where the same tool can be used for beneficial or malicious purposes. The malevolent use of AI exposes individuals and organizations to unnecessary risks and undermines the virtues of the emerging technology. This includes behaviors such as hacking, manipulating algorithms, compromising privacy and confidentiality, or stealing identities. Efforts to hijack AI in order to solicit confidential information should be seriously penalized as a way to deter such actions. 71

In a rapidly changing world with many entities having advanced computing capabilities, there needs to be serious attention devoted to cybersecurity. Countries have to be careful to safeguard their own systems and keep other nations from damaging their security. 72 According to the U.S. Department of Homeland Security, a major American bank receives around 11 million calls a week at its service center. In order to protect its telephony from denial of service attacks, it uses a “machine learning-based policy engine [that] blocks more than 120,000 calls per month based on voice firewall policies including harassing callers, robocalls and potential fraudulent calls.” 73 This represents a way in which machine learning can help defend technology systems from malevolent attacks.
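A policy engine of the kind described can be thought of as a set of rules applied to model-assigned scores for each incoming call. The sketch below is a hypothetical illustration only; the field names, thresholds, and phone numbers are invented and do not reflect the bank’s actual system.

```python
# Illustrative voice-firewall policy engine. A call is blocked if it
# matches any policy: a known harassing caller, a high robocall score,
# or a high fraud score. All values here are invented assumptions.

BLOCKLIST = {"+1-555-0100"}   # known harassing callers
ROBOCALL_THRESHOLD = 0.9      # model-assigned robocall probability
FRAUD_THRESHOLD = 0.8         # model-assigned fraud probability

def should_block(call):
    """Apply each firewall policy in turn; block on the first match."""
    if call["caller_id"] in BLOCKLIST:
        return True
    if call["robocall_score"] >= ROBOCALL_THRESHOLD:
        return True
    if call["fraud_score"] >= FRAUD_THRESHOLD:
        return True
    return False

calls = [
    {"caller_id": "+1-555-0100", "robocall_score": 0.10, "fraud_score": 0.0},
    {"caller_id": "+1-555-0199", "robocall_score": 0.95, "fraud_score": 0.2},
    {"caller_id": "+1-555-0123", "robocall_score": 0.20, "fraud_score": 0.1},
]
blocked = [c for c in calls if should_block(c)]
```

In practice the scores would come from trained models rather than being supplied by hand, and the policies themselves would be tuned over time; the sketch only shows how scoring and rule-matching fit together.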

To summarize, the world is on the cusp of revolutionizing many sectors through artificial intelligence and data analytics. There already are significant deployments in finance, national security, health care, criminal justice, transportation, and smart cities that have altered decisionmaking, business models, risk mitigation, and system performance. These developments are generating substantial economic and social benefits.


Yet the manner in which AI systems unfold has major implications for society as a whole. It matters how policy issues are addressed, ethical conflicts are reconciled, legal realities are resolved, and how much transparency is required in AI and data analytic solutions. 74 Human choices about software development affect the way decisions are made and the manner in which they are integrated into organizational routines. Exactly how these processes are executed needs to be better understood because they will have a substantial impact on the general public soon, and for the foreseeable future. AI may well be a revolution in human affairs, and it may become the single most influential human innovation in history.

Note: We appreciate the research assistance of Grace Gilberg, Jack Karsten, Hillary Schaub, and Kristjan Tomasson on this project.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Support for this publication was generously provided by Amazon. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment. 

John R. Allen is a member of the Board of Advisors of Amida Technology and on the Board of Directors of Spark Cognition. Both companies work in fields discussed in this piece.

  • Thomas Davenport, Jeff Loucks, and David Schatsky, “Bullish on the Business Value of Cognitive” (Deloitte, 2017), p. 3 (www2.deloitte.com/us/en/pages/deloitte-analytics/articles/cognitive-technology-adoption-survey.html).
  • Luke Dormehl, Thinking Machines: The Quest for Artificial Intelligence—and Where It’s Taking Us Next (New York: Penguin–TarcherPerigee, 2017).
  • Shubhendu and Vijay, “Applicability of Artificial Intelligence in Different Fields of Life.”
  • Andrew McAfee and Erik Brynjolfsson, Machine Platform Crowd: Harnessing Our Digital Future (New York: Norton, 2017).
  • Portions of this paper draw on Darrell M. West, The Future of Work: Robots, AI, and Automation , Brookings Institution Press, 2018.
  • PriceWaterhouseCoopers, “Sizing the Prize: What’s the Real Value of AI for Your Business and How Can You Capitalise?” 2017.
  • Dominic Barton, Jonathan Woetzel, Jeongmin Seong, and Qinzheng Tian, “Artificial Intelligence: Implications for China” (New York: McKinsey Global Institute, April 2017), p. 1.
  • Nathaniel Popper, “Stocks and Bots,” New York Times Magazine , February 28, 2016.
  • Michael Lewis, Flash Boys: A Wall Street Revolt (New York: Norton, 2015).
  • Cade Metz, “In Quantum Computing Race, Yale Professors Battle Tech Giants,” New York Times , November 14, 2017, p. B3.
  • Executive Office of the President, “Artificial Intelligence, Automation, and the Economy,” December 2016, pp. 27-28.
  • Christian Davenport, “Future Wars May Depend as Much on Algorithms as on Ammunition, Report Says,” Washington Post , December 3, 2017.
  • John R. Allen and Amir Husain, “On Hyperwar,” Naval Institute Proceedings , July 17, 2017, pp. 30-36.
  • Paul Mozur, “China Sets Goal to Lead in Artificial Intelligence,” New York Times , July 21, 2017, p. B1.
  • Paul Mozur and John Markoff, “Is China Outsmarting American Artificial Intelligence?” New York Times , May 28, 2017.
  • Economist , “America v China: The Battle for Digital Supremacy,” March 15, 2018.
  • Rasmus Rothe, “Applying Deep Learning to Real-World Problems,” Medium , May 23, 2017.
  • Eric Horvitz, “Reflections on the Status and Future of Artificial Intelligence,” Testimony before the U.S. Senate Subcommittee on Space, Science, and Competitiveness, November 30, 2016, p. 5.
  • Jeff Asher and Rob Arthur, “Inside the Algorithm That Tries to Predict Gun Violence in Chicago,” New York Times Upshot , June 13, 2017.
  • Caleb Watney, “It’s Time for our Justice System to Embrace Artificial Intelligence,” TechTank (blog), Brookings Institution, July 20, 2017.
  • Asher and Arthur, “Inside the Algorithm That Tries to Predict Gun Violence in Chicago.”
  • Paul Mozur and Keith Bradsher, “China’s A.I. Advances Help Its Tech Industry, and State Security,” New York Times , December 3, 2017.
  • Simon Denyer, “China’s Watchful Eye,” Washington Post , January 7, 2018.
  • Cameron Kerry and Jack Karsten, “Gauging Investment in Self-Driving Cars,” Brookings Institution, October 16, 2017.
  • Portions of this section are drawn from Darrell M. West, “Driverless Cars in China, Europe, Japan, Korea, and the United States,” Brookings Institution, September 2016.
  • Yuming Ge, Xiaoman Liu, Libo Tang, and Darrell M. West, “Smart Transportation in China and the United States,” Center for Technology Innovation, Brookings Institution, December 2017.
  • Peter Holley, “Uber Signs Deal to Buy 24,000 Autonomous Vehicles from Volvo,” Washington Post , November 20, 2017.
  • Daisuke Wakabayashi, “Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam,” New York Times , March 19, 2018.
  • Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson, “Learning from Public Sector Experimentation with Artificial Intelligence,” TechTank (blog), Brookings Institution, June 23, 2017.
  • Boyd Cohen, “The 10 Smartest Cities in North America,” Fast Company , November 14, 2013.
  • Teena Maddox, “66% of US Cities Are Investing in Smart City Technology,” TechRepublic , November 6, 2017.
  • Osonde Osoba and William Welser IV, “The Risks of Artificial Intelligence to Security and the Future of Work” (Santa Monica, Calif.: RAND Corp., December 2017) (www.rand.org/pubs/perspectives/PE237.html).
  • Ibid., p. 7.
  • Dominic Barton, Jonathan Woetzel, Jeongmin Seong, and Qinzheng Tian, “Artificial Intelligence: Implications for China” (New York: McKinsey Global Institute, April 2017), p. 7.
  • Executive Office of the President, “Preparing for the Future of Artificial Intelligence,” October 2016, pp. 30-31.
  • Elaine Glusac, “As Airbnb Grows, So Do Claims of Discrimination,” New York Times , June 21, 2016.
  • “Joy Buolamwini,” Bloomberg Businessweek , July 3, 2017, p. 80.
  • Mark Purdy and Paul Daugherty, “Why Artificial Intelligence is the Future of Growth,” Accenture, 2016.
  • Jon Valant, “Integrating Charter Schools and Choice-Based Education Systems,” Brown Center Chalkboard blog, Brookings Institution, June 23, 2017.
  • Tucker, “‘A White Mask Worked Better.’”
  • Cliff Kuang, “Can A.I. Be Taught to Explain Itself?” New York Times Magazine , November 21, 2017.
  • Yale Law School Information Society Project, “Governing Machine Learning,” September 2017.
  • Katie Benner, “Airbnb Vows to Fight Racism, But Its Users Can’t Sue to Prompt Fairness,” New York Times , June 19, 2016.
  • Executive Office of the President, “Artificial Intelligence, Automation, and the Economy” and “Preparing for the Future of Artificial Intelligence.”
  • Nancy Scolar, “Facebook’s Next Project: American Inequality,” Politico , February 19, 2018.
  • Darrell M. West, “What Internet Search Data Reveals about Donald Trump’s First Year in Office,” Brookings Institution policy report, January 17, 2018.
  • Ian Buck, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” February 14, 2018.
  • Keith Nakasone, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Greg Brockman, “The Dawn of Artificial Intelligence,” Testimony before U.S. Senate Subcommittee on Space, Science, and Competitiveness, November 30, 2016.
  • Amir Khosrowshahi, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” February 14, 2018.
  • James Kurose, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Stephen Noonoo, “Teachers Can Now Use IBM’s Watson to Search for Free Lesson Plans,” EdSurge , September 13, 2017.
  • Congress.gov, “H.R. 4625 FUTURE of Artificial Intelligence Act of 2017,” December 12, 2017.
  • Elizabeth Zima, “Could New York City’s AI Transparency Bill Be a Model for the Country?” Government Technology , January 4, 2018.
  • Julia Powles, “New York City’s Bold, Flawed Attempt to Make Algorithms Accountable,” New Yorker , December 20, 2017.
  • Sheera Frenkel, “Tech Giants Brace for Europe’s New Data Privacy Rules,” New York Times , January 28, 2018.
  • Claire Miller and Kevin O’Brien, “Germany’s Complicated Relationship with Google Street View,” New York Times , April 23, 2013.
  • Cade Metz, “Artificial Intelligence is Setting Up the Internet for a Huge Clash with Europe,” Wired , July 11, 2016.
  • Eric Siegel, “Predictive Analytics Interview Series: Andrew Burt,” Predictive Analytics Times , June 14, 2017.
  • Oren Etzioni, “How to Regulate Artificial Intelligence,” New York Times , September 1, 2017.
  • “Ethical Considerations in Artificial Intelligence and Autonomous Systems,” unpublished paper. IEEE Global Initiative, 2018.
  • Ritesh Noothigattu, Snehalkumar Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar, and Ariel Procaccia, “A Voting-Based System for Ethical Decision Making,” Computers and Society , September 20, 2017 (www.media.mit.edu/publications/a-voting-based-system-for-ethical-decision-making/).
  • Miles Brundage, et al., “The Malicious Use of Artificial Intelligence,” University of Oxford unpublished paper, February 2018.
  • John Markoff, “As Artificial Intelligence Evolves, So Does Its Criminal Potential,” New York Times, October 24, 2016, p. B3.
  • Economist , “The Challenger: Technopolitics,” March 17, 2018.
  • Douglas Maughan, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Levi Tillemann and Colin McCormick, “Roadmapping a U.S.-German Agenda for Artificial Intelligence Policy,” New American Foundation, March 2017.


How Is Technology Changing the World, and How Should the World Change Technology?



Josephine Wolff; How Is Technology Changing the World, and How Should the World Change Technology?. Global Perspectives 1 February 2021; 2 (1): 27353. doi: https://doi.org/10.1525/gp.2021.27353

Technologies are becoming increasingly complicated and increasingly interconnected. Cars, airplanes, medical devices, financial transactions, and electricity systems all rely on more computer software than they ever have before, making them seem both harder to understand and, in some cases, harder to control. Government and corporate surveillance of individuals and information processing relies largely on digital technologies and artificial intelligence, and therefore involves less human-to-human contact than ever before and more opportunities for biases to be embedded and codified in our technological systems in ways we may not even be able to identify or recognize. Bioengineering advances are opening up new terrain for challenging philosophical, political, and economic questions regarding human-natural relations. Additionally, the management of these large and small devices and systems is increasingly done through the cloud, so that control over them is both very remote and removed from direct human or social control. The study of how to make technologies like artificial intelligence or the Internet of Things “explainable” has become its own area of research because it is so difficult to understand how they work or what is at fault when something goes wrong (Gunning and Aha 2019) .

This growing complexity makes it more difficult than ever—and more imperative than ever—for scholars to probe how technological advancements are altering life around the world in both positive and negative ways and what social, political, and legal tools are needed to help shape the development and design of technology in beneficial directions. This can seem like an impossible task in light of the rapid pace of technological change and the sense that its continued advancement is inevitable, but many countries around the world are only just beginning to take significant steps toward regulating computer technologies and are still in the process of radically rethinking the rules governing global data flows and exchange of technology across borders.

These are exciting times not just for technological development but also for technology policy—our technologies may be more advanced and complicated than ever, but so, too, are our understandings of how they can best be leveraged, protected, and even constrained. The structures of technological systems are determined largely by government and institutional policies, and those structures have tremendous implications for social organization and agency, ranging from open-source, open systems that are highly distributed and decentralized to those that are tightly controlled and closed, structured according to stricter and more hierarchical models. And just as our understanding of the governance of technology is developing in new and interesting ways, so, too, is our understanding of the social, cultural, environmental, and political dimensions of emerging technologies. We are realizing both the challenges and the importance of mapping out the full range of ways that technology is changing our society, what we want those changes to look like, and what tools we have to try to influence and guide those shifts.

Technology can be a source of tremendous optimism. It can help overcome some of the greatest challenges our society faces, including climate change, famine, and disease. For those who believe in the power of innovation and the promise of creative destruction to advance economic development and lead to better quality of life, technology is a vital economic driver (Schumpeter 1942) . But it can also be a tool of tremendous fear and oppression, embedding biases in automated decision-making processes and information-processing algorithms, exacerbating economic and social inequalities within and between countries to a staggering degree, or creating new weapons and avenues for attack unlike any we have had to face in the past. Scholars have even contended that the emergence of the term technology in the nineteenth and twentieth centuries marked a shift from viewing individual pieces of machinery as a means to achieving political and social progress to the more dangerous, or hazardous, view that larger-scale, more complex technological systems were a semiautonomous form of progress in and of themselves (Marx 2010) . More recently, technologists have sharply criticized what they view as a wave of new Luddites, people intent on slowing the development of technology and turning back the clock on innovation as a means of mitigating the societal impacts of technological change (Marlowe 1970) .

At the heart of fights over new technologies and their resulting global changes are often two conflicting visions of technology: a fundamentally optimistic one that believes humans use it as a tool to achieve greater goals, and a fundamentally pessimistic one that holds that technological systems have reached a point beyond our control. Technology philosophers have argued that neither of these views is wholly accurate and that a purely optimistic or pessimistic view of technology is insufficient to capture the nuances and complexity of our relationship to technology (Oberdiek and Tiles 1995) . Understanding technology and how we can make better decisions about designing, deploying, and refining it requires capturing that nuance and complexity through in-depth analysis of the impacts of different technological advancements and the ways they have played out in all their complicated and controversial messiness across the world.

These impacts are often unpredictable as technologies are adopted in new contexts and come to be used in ways that sometimes diverge significantly from the use cases envisioned by their designers. The internet, designed to help transmit information between computer networks, became a crucial vehicle for commerce, introducing unexpected avenues for crime and financial fraud. Social media platforms like Facebook and Twitter, designed to connect friends and families through sharing photographs and life updates, became focal points of election controversies and political influence. Cryptocurrencies, originally intended as a means of decentralized digital cash, have become a significant environmental hazard as more and more computing resources are devoted to mining these forms of virtual money. One of the crucial challenges in this area is therefore recognizing, documenting, and even anticipating some of these unexpected consequences and providing mechanisms to technologists for how to think through the impacts of their work, as well as possible other paths to different outcomes (Verbeek 2006) . And just as technological innovations can cause unexpected harm, they can also bring about extraordinary benefits—new vaccines and medicines to address global pandemics and save thousands of lives, new sources of energy that can drastically reduce emissions and help combat climate change, new modes of education that can reach people who would otherwise have no access to schooling. Regulating technology therefore requires a careful balance of mitigating risks without overly restricting potentially beneficial innovations.

Nations around the world have taken very different approaches to governing emerging technologies and have adopted a range of different technologies themselves in pursuit of more modern governance structures and processes (Braman 2009) . In Europe, the precautionary principle has guided much more anticipatory regulation aimed at addressing the risks presented by technologies even before they are fully realized. For instance, the European Union’s General Data Protection Regulation focuses on the responsibilities of data controllers and processors to provide individuals with access to their data and information about how that data is being used not just as a means of addressing existing security and privacy threats, such as data breaches, but also to protect against future developments and uses of that data for artificial intelligence and automated decision-making purposes. In Germany, Technische Überwachungsvereine, or TÜVs, perform regular tests and inspections of technological systems to assess and minimize risks over time, as the tech landscape evolves. In the United States, by contrast, there is much greater reliance on litigation and liability regimes to address safety and security failings after-the-fact. These different approaches reflect not just the different legal and regulatory mechanisms and philosophies of different nations but also the different ways those nations prioritize rapid development of the technology industry versus safety, security, and individual control. Typically, governance innovations move much more slowly than technological innovations, and regulations can lag years, or even decades, behind the technologies they aim to govern.

In addition to this varied set of national regulatory approaches, a variety of international and nongovernmental organizations also contribute to the process of developing standards, rules, and norms for new technologies, including the International Organization for Standardization and the International Telecommunication Union. These multilateral and NGO actors play an especially important role in trying to define appropriate boundaries for the use of new technologies by governments as instruments of control for the state.

At the same time that policymakers are under scrutiny both for their decisions about how to regulate technology as well as their decisions about how and when to adopt technologies like facial recognition themselves, technology firms and designers have also come under increasing criticism. Growing recognition that the design of technologies can have far-reaching social and political implications means that there is more pressure on technologists to take into consideration the consequences of their decisions early on in the design process (Vincenti 1993; Winner 1980) . The question of how technologists should incorporate these social dimensions into their design and development processes is an old one, and debate on these issues dates back to the 1970s, but it remains an urgent and often overlooked part of the puzzle because so many of the supposedly systematic mechanisms for assessing the impacts of new technologies in both the private and public sectors are primarily bureaucratic, symbolic processes rather than carrying any real weight or influence.

Technologists are often ill-equipped or unwilling to respond to the sorts of social problems that their creations have—often unwittingly—exacerbated, and instead point to governments and lawmakers to address those problems (Zuckerberg 2019) . But governments often have few incentives to engage in this area. This is because setting clear standards and rules for an ever-evolving technological landscape can be extremely challenging, because enforcement of those rules can be a significant undertaking requiring considerable expertise, and because the tech sector is a major source of jobs and revenue for many countries that may fear losing those benefits if they constrain companies too much. This indicates not just a need for clearer incentives and better policies for both private- and public-sector entities but also a need for new mechanisms whereby the technology development and design process can be influenced and assessed by people with a wider range of experiences and expertise. If we want technologies to be designed with an eye to their impacts, who is responsible for predicting, measuring, and mitigating those impacts throughout the design process? Involving policymakers in that process in a more meaningful way will also require training them to have the analytic and technical capacity to more fully engage with technologists and understand more fully the implications of their decisions.

At the same time that tech companies seem unwilling or unable to rein in their creations, many also fear they wield too much power, in some cases all but replacing governments and international organizations in their ability to make decisions that affect millions of people worldwide and control access to information, platforms, and audiences (Kilovaty 2020) . Regulators around the world have begun considering whether some of these companies have become so powerful that they violate the tenets of antitrust laws, but it can be difficult for governments to identify exactly what those violations are, especially in the context of an industry where the largest players often provide their customers with free services. And the platforms and services developed by tech companies are often wielded most powerfully and dangerously not directly by their private-sector creators and operators but instead by states themselves for widespread misinformation campaigns that serve political purposes (Nye 2018) .

Since the largest private entities in the tech sector operate in many countries, they are often better poised to implement global changes to the technological ecosystem than individual states or regulatory bodies, creating new challenges to existing governance structures and hierarchies. Just as it can be challenging to provide oversight for government use of technologies, so, too, oversight of the biggest tech companies, which have more resources, reach, and power than many nations, can prove to be a daunting task. The rise of network forms of organization and the growing gig economy have added to these challenges, making it even harder for regulators to fully address the breadth of these companies’ operations (Powell 1990) . The private-public partnerships that have emerged around energy, transportation, medical, and cyber technologies further complicate this picture, blurring the line between the public and private sectors and raising critical questions about the role of each in providing critical infrastructure, health care, and security. How can and should private tech companies operating in these different sectors be governed, and what types of influence do they exert over regulators? How feasible are different policy proposals aimed at technological innovation, and what potential unintended consequences might they have?

Conflict between countries has also spilled over significantly into the private sector in recent years, most notably in the case of tensions between the United States and China over which technologies developed in each country will be permitted by the other and which will be purchased by other customers, outside those two countries. Countries competing to develop the best technology is not a new phenomenon, but the current conflicts have major international ramifications and will influence the infrastructure that is installed and used around the world for years to come. Untangling the different factors that feed into these tussles as well as whom they benefit and whom they leave at a disadvantage is crucial for understanding how governments can most effectively foster technological innovation and invention domestically as well as the global consequences of those efforts. As much of the world is forced to choose between buying technology from the United States or from China, how should we understand the long-term impacts of those choices and the options available to people in countries without robust domestic tech industries? Does the global spread of technologies help fuel further innovation in countries with smaller tech markets, or does it reinforce the dominance of the states that are already most prominent in this sector? How can research universities maintain global collaborations and research communities in light of these national competitions, and what role does government research and development spending play in fostering innovation within its own borders and worldwide? How should intellectual property protections evolve to meet the demands of the technology industry, and how can those protections be enforced globally?

These conflicts between countries sometimes appear to challenge the feasibility of truly global technologies and networks that operate across all countries through standardized protocols and design features. Organizations like the International Organization for Standardization, the World Intellectual Property Organization, the United Nations Industrial Development Organization, and many others have tried to harmonize these policies and protocols across different countries for years, but have met with limited success when it comes to resolving the issues of greatest tension and disagreement among nations. For technology to operate in a global environment, there is a need for a much greater degree of coordination among countries and the development of common standards and norms, but governments continue to struggle to agree not just on those norms themselves but even on the appropriate venue and processes for developing them. Without greater global cooperation, is it possible to maintain a global network like the internet or to promote the spread of new technologies around the world to address challenges of sustainability? What might help incentivize that cooperation moving forward, and what could new structures and processes for governance of global technologies look like? Why has the tech industry’s self-regulation culture persisted? Do the same traditional drivers for public policy, such as politics of harmonization and path dependency in policy-making, still sufficiently explain policy outcomes in this space? As new technologies and their applications spread across the globe in uneven ways, how and when do they create forces of change from unexpected places?

These are some of the questions that we hope to address in the Technology and Global Change section through articles that tackle new dimensions of the global landscape of designing, developing, deploying, and assessing new technologies to address major challenges the world faces. Understanding these processes requires synthesizing knowledge from a range of different fields, including sociology, political science, economics, and history, as well as technical fields such as engineering, climate science, and computer science. A crucial part of understanding how technology has created global change and, in turn, how global changes have influenced the development of new technologies is understanding the technologies themselves in all their richness and complexity—how they work, the limits of what they can do, what they were designed to do, how they are actually used. Just as technologies themselves are becoming more complicated, so are their embeddings and relationships to the larger social, political, and legal contexts in which they exist. Scholars across all disciplines are encouraged to join us in untangling those complexities.

Josephine Wolff is an associate professor of cybersecurity policy at the Fletcher School of Law and Diplomacy at Tufts University. Her book You’ll See This Message When It Is Too Late: The Legal and Economic Aftermath of Cybersecurity Breaches was published by MIT Press in 2018.


27 Technological Innovation Examples (Chronological Order)


Technology is anything newly created from the cutting-edge knowledge of its era.

In today’s information society, when we think of technology, we generally think of machines like computers, smartphones, and cars.

But 500 years ago, the things considered technology had no electrical components – they were things like better-quality bows, arrows, and shovels.

The definition of technology also encompasses the application of scientific principles to achieve specific objectives, such as increasing crop yields or improving communication networks.

Technological Innovation Examples

1. Fire

Invented: 1.5 Million BCE

According to most historians, fire was first harnessed by early humans between 1.8 and 1.5 million years ago.

Prior to the control of fire, human beings were restricted to eating raw food. The discovery of fire changed all of that, allowing our ancestors to cook their food and unlocking a whole new world of flavor and nutrition.

In addition, fire provided warmth and light, making it possible for humans to live in colder climates.

As a result, the invention of fire had a profound impact on human history. It helped humans to gain greater control over their lives, shaped their development as a species, and allowed them to become more civilized.

Related: The 25 Most Famous Innovators of All Time

2. The Wheel

Invented: 4000 BCE

It is unclear exactly when the wheel was invented, but by 4000–3500 BCE there is evidence of wheeled vehicles and of wheels used for the production of pottery.

The invention of the wheel revolutionized transportation and had a profound impact on the development of civilization. Wheels were foundational for future developments, including the steam engine, which operates on an axle, and of course, the car.

Even the wheel has developed significantly over time. For example, the invention of the spoked wheel around 2000 BCE made travel much easier because it was far lighter than a simple wooden disk. In the late 19th century, the invention of the pneumatic tire transformed transportation yet again, allowing vehicles to ride more smoothly over uneven terrain.

3. Money

Invented: 600 BCE

The first known use of money comes from ancient Mesopotamia. From as early as 3000 BCE, Mesopotamian temples and palaces stored assets and issued clay tokens as a proxy for them. These tokens could then be traded on an open market.

The first known coins, however, date to around 600 BCE, from the Kingdom of Lydia (in modern-day Turkey). These coins were made of electrum, a naturally occurring alloy of gold and silver.

Since then, money has evolved to become the primary means of exchange in most societies. Today, there are a wide variety of currencies in use around the world, including paper notes, metal coins, and digital currencies.

The invention of money was monumental because it allowed for the development of trade and commerce. Money made it possible to buy and sell goods and services without the need to barter.

4. Gunpowder

Invented: 808 CE

While the first confirmed recording of gunpowder was in 808 CE, it’s likely that gunpowder existed for several centuries beforehand.

Gunpowder is a mixture of sulfur, charcoal, and potassium nitrate. It is used in firearms and explosives.

While gunpowder has obvious destructive applications, it also has positive ones. As a defensive technology, it helped militaries keep countries safe from invaders; it is also used in fireworks and has historically been used in Chinese medicine.

5. The Compass

Invented: 11th Century CE

The first compass was invented in China in the 11th century.

Prior to the invention of the compass, people navigated using the stars and landmarks. The compass made it possible to hold a course in any weather, even when the sky was overcast or no landmarks were in sight.

As a result, the compass had a profound impact on exploration and trade. It allowed humans to venture into new and unknown territory and to expand their horizons.

6. The Printing Press

Invented: 1450

The printing press is a technological invention that prints text and images onto paper. The first printing press was invented in 1450 by Johannes Gutenberg.

The printing press is one of the most important inventions in human history. It helped to spread information and to print books en masse. As a result, literacy increased dramatically and the accumulation of human knowledge accelerated.

Scholar Benedict Anderson also argued that the printing press and subsequent print media were the impetus for the concept of a nation-state, or ‘imagined community’, in which people who have never met each other could feel a sense of commonality and shared identity.

7. The Microscope

Invented: 1590

The microscope is an instrument used to magnify objects. The first compound microscope was invented in 1590 by Zacharias Janssen.

The microscope has a wide range of applications, including in medicine, biology, and engineering. It is used to examine cells, tissues, and organs; to study the structure of materials; and to inspect objects for defects.

The microscope revolutionized our understanding of biology. It helped us, for example, to learn about the structure of cells and to discover the existence of bacteria and viruses.

8. The Steam Engine

Invented: 1698 CE

The first steam engine was invented by Thomas Savery in 1698. It was an important invention because it helped to usher in the industrial revolution.

Savery’s engine used steam to pump water out of flooded mines. Later steam engines used steam to drive a piston, which could in turn drive a shaft, making it possible to power a wide variety of machines and leading to large gains in productivity and efficiency.

In addition, the invention of the steam engine also helped to create new industries and jobs, as well as helping to fuel the growth of cities and towns. As a result, the steam engine was a key factor in the transformation of society from agrarian to industrial.

For more innovations from the 17th Century, see: Seven Innovations from the Second Agricultural Revolution

9. Electricity

Invented: Late 1700s

While electricity had been studied since the 16th century, and some primitive forms of electrification were developed in the 18th century, it was not until the late 1700s that scientists began to understand electricity systematically.

The English scientist Michael Faraday is credited with the discovery of induction in 1831, which laid the groundwork for modern electrical technology.

Today, electricity is an essential part of our lives, powering everything from our homes and businesses to our cars and electronic devices.

10. The Camera

Invented: 1827

The first camera was invented in 1827 by Joseph Nicéphore Niépce.

Cameras are now used extensively in our everyday lives. They are used not only in our phones to take photos and videos, but also in security systems.

Cameras have had a profound impact on our society. They have allowed us to document and preserve our memories and to share our experiences with others.

11. The Internal Combustion Engine

Invented: 1862

The first internal combustion engine was invented in 1862 by Jean Joseph Étienne Lenoir.

Internal combustion engines are used extensively in our modern world. They power our cars, trucks, and buses. They are also used in construction equipment, generators, and lawnmowers.

Internal combustion engines have had a profound impact on our society. They have allowed us to travel long distances quickly and easily and have made it possible for us to move heavy loads and equipment.

12. The Telephone

Invented: 1876

The telephone was invented by Alexander Graham Bell in 1876 and it revolutionized communication. Prior to the telephone, the only way to communicate with someone at a distance was through written messages, which could be slow and unreliable.

The telephone allowed people to instantly connect with each other no matter where they were in the world. Today, there are over 1 billion telephone landlines in use and billions of mobile phones as well.

13. Computers

The first mechanical computing machines were designed in the early 1800s. However, these early machines were nothing like the computers of today: they were bulky mechanical devices that had to be operated by hand.

In 1837, Charles Babbage designed a machine called the Analytical Engine, which could be programmed to perform calculations. However, the machine was never completed.

In 1937, John Atanasoff and Clifford Berry began developing the first electronic digital computer, the Atanasoff-Berry Computer, which was completed in 1942. Their contribution was not widely recognized until 1973, when a US court ruling credited Atanasoff as the inventor of the electronic digital computer.

By the 1990s, computers had well and truly changed the world. People were using them at work and, increasingly, they were used at home for word processing and accounting.

14. The Airplane

Invented: 1903

The airplane was invented by the Wright brothers in 1903. It was the first heavier-than-air craft to successfully achieve powered flight.

The airplane has had a profound impact on the world, making it possible to travel great distances in relatively short periods of time. It has also made it possible to transport goods and people around the world in a way that was previously unimaginable.

It also changed how wars were fought, as airplanes were used in World War I to drop bombs and conduct reconnaissance missions. Today, there are over 100,000 airplanes in use worldwide.

15. Television

Invented: 1927

The printing press was the first form of mass media, followed by radio. But the postwar decades heralded the era of television, a disruptive technology that displaced radio from its central place in home entertainment.

By 1960, broadcast infrastructure was widespread and television sets were affordable enough that television had entered the mainstream. It became one of the most popular forms of entertainment, broadcasting into tens of millions of homes in the United States.

It also had a profound impact on society, helping to connect people from all over the world and making information more accessible than ever before. The Vietnam War was the first war to be televised into people’s homes, which was a catalyst for the anti-war movement in the United States.

16. Semiconductors

Invented: 1947

The first transistor, the semiconductor device that underpins modern electronics, was invented in 1947 by William Shockley, John Bardeen, and Walter Brattain at Bell Labs.

Semiconductors are used extensively in our modern world. They are the building blocks of computer chips and are used in a wide range of electronic devices.

Semiconductors have had a profound impact on our society. They have allowed us to miniaturize electronic devices and to create fast and powerful computer chips.

17. The Polio Vaccine

Invented: 1955

The polio vaccine is one of the most important technological breakthroughs in history.

Prior to the development of the vaccine, polio was a leading cause of disability and death, particularly in young children. The introduction of the vaccine in the 1950s led to a dramatic reduction in the incidence of polio, and today the disease is considered eliminated in most parts of the world.

While there are still a few cases each year, they are almost exclusively in countries where the vaccine is not widely available.

The success of the polio vaccine has led to the development of other vaccines for other diseases, such as measles and rubella, which have also had a profound impact on public health .

18. Artificial Intelligence

Invented: 1956

Artificial intelligence (AI) is the ability of a computer to perform tasks that would normally require human intelligence, such as understanding natural language and recognizing objects.

The field was founded in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence, a workshop held at Dartmouth College. Around the same time, Arthur Samuel at IBM wrote one of the first game-playing AI programs, which taught itself to play checkers.

While Samuel’s program became a strong player, it could not beat the best humans. Nevertheless, it laid the foundations for further AI research, particularly in machine learning.
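Checkers programs like this work by searching ahead through possible moves and assuming the opponent will reply as well as possible. A minimal sketch of that minimax idea, using the much simpler game of Nim rather than checkers (the game choice and function name are illustrative, not taken from any historical program):

```python
# Minimax sketch for the game of Nim: players alternately take 1-3 sticks,
# and whoever takes the last stick wins.

def best_move(sticks):
    """Return (move, mover_can_force_win) for the player about to move."""
    best = (1, False)                    # default: no winning move found
    for take in (1, 2, 3):
        if take > sticks:
            break
        if take == sticks:               # taking the last stick wins outright
            return take, True
        _, opponent_can_win = best_move(sticks - take)
        if not opponent_can_win:         # leave the opponent a losing position
            best = (take, True)
    return best

move, can_win = best_move(10)
print(move, can_win)  # → 2 True (taking 2 leaves the opponent 8, a losing pile)
```

Real checkers has far too many positions to search exhaustively, which is why Samuel's program combined limited look-ahead with a learned evaluation of board positions.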

Today, AI is becoming more and more common in businesses. For example, many customer service tasks are now handled by AI chatbots. AI is also being used for more complex tasks such as financial analysis and medical diagnosis.

19. Satellites

Invented: 1957

The first artificial satellite, Sputnik 1, was launched by the Soviet Union in 1957. This event ushered in the Space Age and the start of the space race between the two superpowers.

Satellites have had a profound impact on the world, providing us with a way to communicate with people in other parts of the world and to observe the planet from space.

They have also been used for navigation, weather forecasting, and mapping. Today, there are over 2,000 satellites in orbit around the Earth, and their numbers are growing every year.

20. The laser

Invented: 1960

The laser is a device that emits a beam of coherent light. The first working laser was built in 1960 by Theodore Maiman, building on theoretical work by Arthur Schawlow and Charles Townes.

Lasers have a wide range of applications, including cutting and welding, communications, printing, and medicine. They are also used in production processes, such as in the manufacture of semiconductors.

Lasers also have military applications: laser guidance, for example, helps missiles reach their targets more accurately.

21. Virtual reality

Invented: 1960

The first virtual reality headset was invented in 1960 by Morton Heilig.

Virtual reality is a computer-generated environment that allows users to interact with it in a realistic way. It is a quintessential example of a wearable technology.

It has been used extensively in gaming and entertainment, but futurists believe it will also become extremely useful for training and education, in medicine, and in manufacturing.

22. The Internet

Invented: 1969

The first Internet connection was made in 1969, when a computer at the University of California, Los Angeles sent a message over ARPANET to another at the Stanford Research Institute.

The Internet is now a global network of computers that allows people to communicate and share information. It has transformed our lives. It has allowed us to stay connected with friends and family around the world, to access a wealth of information at our fingertips, and to work from anywhere.

It has also had a profound impact on our economy. It has stimulated technological globalization, created new industries, and allowed existing ones to thrive. It has facilitated the rapid transfer of information and money around the world and has been the platform for the growth of countless businesses.

Read Also: Internet Pros and Cons

23. Mobile phones

Invented: 1973

The first mobile phone was invented in 1973 by Motorola. However, it was not until the late 1990s that mobile phones became commonplace in society, and it became one of the central types of communication technology of the 21st Century.

In the early days of mobile phones, there were a number of challenges that needed to be overcome. One of the biggest was developing a way to miniaturize the components so that they could fit into a small handheld device.

This was essential for making the phone portable and convenient to use.

Another challenge was finding a power source that would be able to keep the phone charged for long periods of time. NiCad batteries were initially used, but they had a tendency to lose their charge quickly and were also prone to the “memory effect”, which reduced their capacity over time.

Eventually, lithium-ion batteries were developed, which addressed these issues.

24. GPS

Invented: 1973

The Global Positioning System (GPS) is a satellite-based navigation system that the United States Department of Defense began developing in 1973.

GPS allows users to determine their precise location anywhere on the planet. It has a number of civilian and military applications, including navigation, surveying, mapping, and timing.

GPS is now an essential part of our everyday lives. It is used by millions of people around the world when using maps apps on their phones, for example. It helps us to find our way to a specific location, track our progress while running or cycling, and plan driving routes.
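Under the hood, a GPS receiver measures its distance to several satellites at known positions and solves for the one point consistent with all of those distances. A simplified 2D sketch of that idea (real GPS works in 3D and must also solve for the receiver's clock error; the coordinates below are invented for illustration):

```python
import math

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Solve for the 2D point at distances d1, d2, d3 from points p1, p2, p3."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equation at p1 from those at p2 and p3
    # turns the quadratic system into two linear equations in x and y.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1                 # solve the 2x2 system (Cramer's rule)
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# Three "satellites" at known coordinates; the receiver is really at (3, 4).
x, y = trilaterate((0, 0), (10, 0), (0, 10), 5.0, math.sqrt(65), math.sqrt(45))
print(round(x, 6), round(y, 6))  # → 3.0 4.0
```

In practice the distances are inferred from signal travel times, which is why GPS also depends on the extremely precise atomic clocks carried aboard the satellites.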

25. DNA sequencing

Invented: 1977

DNA sequencing is the process of determining the order of nucleotides in a DNA molecule. The first DNA sequence was determined in 1977 by Frederick Sanger.

Since then, DNA sequencing has become an essential tool in biology and medicine. It is used to study the genetic basis of diseases, to develop new treatments and diagnostic tests, and to trace the evolutionary history of organisms.

26. 3D printing

Invented: 1984

3D printing is the process of making three-dimensional solid objects from a 3D digital file. The first 3D printer was invented in 1984 by Chuck Hull.

Since then, 3D printing has become more and more popular, with a wide range of applications in industry, medicine, and even art.

3D printing has a number of advantages over traditional manufacturing methods. It is quick and easy to set up, and it can be used to create complex shapes that would be difficult or impossible to produce using traditional methods.

It is also relatively low cost, making it an attractive option for small businesses and hobbyists.

27. Bitcoin

Invented: 2009

Bitcoin is a digital currency that was invented in 2009 by an anonymous person or group of people known as Satoshi Nakamoto.

Bitcoin is different from traditional currencies because it is not regulated by any government or financial institution. Instead, it is decentralized and can be bought, sold, or traded on a number of online exchanges.

Bitcoin has become popular because it offers features traditional currencies lack. For example, its total supply is capped, so it cannot be inflated by a central authority, and transactions are pseudonymous rather than tied to real-world identities.
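Bitcoin's decentralization rests on a shared ledger in which each block of transactions includes a cryptographic hash of the previous block, so past records cannot be quietly rewritten. A minimal sketch of that hash-chaining idea (this is not Bitcoin's actual block format; the transactions and field names are invented for illustration):

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """Build a block whose hash commits to its contents and its predecessor."""
    block = {"transactions": transactions, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# A tiny two-block chain: block2 stores the hash of the genesis block.
genesis = make_block(["alice pays bob 1"], prev_hash="0" * 64)
block2 = make_block(["bob pays carol 1"], prev_hash=genesis["hash"])

# Tampering with history changes the recomputed hash, breaking the link.
genesis["transactions"][0] = "alice pays mallory 1"
payload = json.dumps({"transactions": genesis["transactions"],
                      "prev_hash": genesis["prev_hash"]}, sort_keys=True).encode()
recomputed = hashlib.sha256(payload).hexdigest()
print(recomputed == block2["prev_hash"])  # → False: the edit is detectable
```

Bitcoin adds proof-of-work and a peer-to-peer network on top of this linking, so that thousands of independent nodes agree on a single chain without any central authority.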

See Also: Amazing Assistive Technologies for the Disabled

Technology has played a pivotal role in human history, and its impact can be seen in all aspects of our lives. From the development of early tools and agriculture to the rise of modern civilizations, technology has shaped the course of human history. Today, we continue to rely on technology to improve our lives and solve problems. With each new breakthrough, we push the boundaries of human capabilities.


Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.


IMAGES

  1. ⇉Technology: Innovation and Invention Essay Example

    technological innovation essay

  2. ≫ Science and Technology Based Entrepreneurship Free Essay Sample on

    technological innovation essay

  3. ⇉Two Different Ways of Innovating with Information Technology Essay

    technological innovation essay

  4. Information Technological Change and Innovation Essay

    technological innovation essay

  5. Adoption of Technological Innovation Free Essay Example

    technological innovation essay

  6. Impact Of Technological Innovation On Society

    technological innovation essay

VIDEO

  1. Information Technology Essay writing in English..Short Essay on Technology Information in 150 words

  2. English Essay Writing

  3. 10 Topics in 1 Essay| Multi topic Essays| Multi Purpose essay 2024| #essaywriting #exams #essays

  4. 🌟 Applying Design Thinking: A Revolutionary Approach to Innovation

  5. Essay In English : Electronic media vs. print media

  6. Circular economy explained/Upsc essay writing in hindi//SDG

COMMENTS

  1. 544 Innovation Essay Topics & Examples

    Innovation means introducing new products, services, and ideas in any sphere. It takes place in technology, science, business, education, etc. If you're searching for innovation essay examples and topics, this article will be helpful. It contains innovation research titles, paper samples, and ideas for writing assignments and presentations.

  2. 200-500 Word Example Essays about Technology

    From bite-sized 200-word insights to in-depth 500-word analyses, immerse yourself in discussions on the innovations and implications of today's tech landscape. Feb 13, 2023. 200-500 Word Example Essays about Technology. ... But writing a technology essay can be challenging, especially for those needing more time or help with writer's block. ...

  3. How Innovation and Technology Makes Life Easier Essay

    Without technologies, the level of medical services would be much lower. Besides, the adoption of technologies maximizes the independence of older adults and makes their life easier and safer (Adams et al. 1718). The use of technologies results in millions of lives and loads of time saved. The efficacious use of technologies in all spheres of ...

  4. Technological Innovation: Articles, Research, & Case Studies on

    Read Articles about Technological Innovation- HBS Working Knowledge: The latest business management research and ideas from HBS faculty. ... The company had grown quickly, and its technology had been used in tens of thousands of procedures in more than 50 countries and 500 hospitals. It had raised close to $50 million in equity financing and ...

  5. Digital innovation: transforming research and practice

    There is no doubt that digital technologies are spawning ongoing innovation across most if not all sectors of the economy and society. In this essay, we take stock of the characteristics of digital technologies that give rise to this new reality and introduce the papers in this special issue. In addition, we also highlight the unprecedent ...

  6. Technological Innovation Essay

    Technological Innovation Essay. Technological innovation makes daily life more convenient and enjoyable for everyone. However, technological breakthroughs also produce social and ethical consequences. Computers are no exception to this rule. These products of modern technology can store massive amounts of information which help us perform at ...

  7. Technology and the Innovation Economy

    Executive Summary. Innovation and entrepreneurship are crucial for long-term economic development. Over the years, America's well-being has been furthered by science and technology. Fears set ...

  8. Technological innovations: Creating and harnessing tools ...

    Harnessing technology and innovation for a better future in Africa: Policy priorities for enabling the 'Africa we want'. The COVID-19 crisis has changed how the world functions, bringing to ...

  9. Science, technology and innovation in a 21st century context

    Science, technology and innovation in a 21st century context. This editorial essay was prepared by John H. "Jack" Marburger for a workshop on the "science of science and innovation policy" held in 2009 that was the basis for this special issue. It is published posthumously. Linking the words "science," "technology," and ...

  10. Technological innovation research in the last six decades: a

    database for the technological innovation papers from 1961 - 2019 (October). WoS has WoS has published and indexed about 1,520 studies until 2019, within which there are 1,361 articles, 97

  11. Technological Advancement Essay: Breakthrough Technologies

    In 1750, engineer John Smeaton working on the water wheel significantly increased its efficiency hence boosting its productivity. It was during this period that technological advancement, revolution, and innovation in agriculture were at its peak and it led to the emergence of new farm machinery like cultivators, combine harvesters and mowers that were pulled by oxen, mules, and horses.

  12. Essays in technological innovation & financial economics

    Summary. This thesis examines the effects of technological innovation, particularly recent developments in machine learning and artificial intelligence (ML/AI), on firm growth, productivity, investment and competitiveness. It has two parts. The first chapter of my dissertation takes a broad view to ask a more fundamental question: do these ...

  13. Technology innovation and sustainability: challenges and research needs

    A main challenge in SA of technology innovation is how to conduct multiple life-cycle-stage based assessment and to compare sustainability performance under different scenarios, especially when the available system information is uncertain, incomplete and imprecise. In almost every phase of sustainability study, data and information uncertainty ...

  14. Technological Innovation and Economic Growth: A Brief Report on the

    Technological innovation is a fundamental driver of economic growth and human progress. Yet some critics want to deny the vast benefits that innovation has bestowed and continues to bestow on mankind. To inform policy discussions and address the technology critics' concerns, this paper summarizes relevant literature documenting the impact of ...

  15. Disruptive Innovation in the Era of Big Tech

    Transcript. April 17, 2024. In 1995, the late and legendary Harvard Business School professor Clayton Christensen introduced his theory of "disruptive innovation" right here in the pages of ...

  16. (PDF) Technological Innovation

    Technological innovation is an element of the complex system of technology directed to satisfy needs, achieve goals, and solve problems of adopters. The origin and diffusion of technological ...

  17. Science and Technology Will Change Our Future Essay

    Changes in travel. Innovation in science and technology will also change travel. People will travel in a sky car that cruises comfortably at a speed of 300 miles per hour using regular fuel. The sky car will be equipped with onboard computers and will be fully automated, which means that one will not need a license to fly it.

  18. How artificial intelligence is transforming the world

    Although it remains unfamiliar to many, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information ...

  19. How Is Technology Changing the World, and How Should the World Change

    Technologies are becoming increasingly complicated and increasingly interconnected. Cars, airplanes, medical devices, financial transactions, and electricity systems all rely on more computer software than they ever have before, making them seem both harder to understand and, in some cases, harder to control. Government and corporate surveillance of individuals and information processing ...

  20. PDF Science, technology and innovation in a 21st century context

    This editorial essay was prepared by John H. "Jack" Marburger for a workshop on the "science of science and innovation policy" held in 2009 that was the basis for this special issue. It is published posthumously. Linking the words "science," "technology," and "innovation" may suggest that we know ...

  21. 27 Technological Innovation Examples (Chronological Order)

    Technological Innovation Examples. 1. Fire. Invented: 1.5 million BCE. According to most historians, fire was first harnessed by early humans between 1.8 and 1.5 million years ago. Prior to the discovery of fire, human beings were restricted to eating raw foods.

  22. Technological Innovation Essay

    Strategic Management of Technological Innovation. London: McGraw-Hill/Irwin. In this study, innovation is the process of exploiting new ideas that will lead to the creation of new products or services. This does not mean that the invention of a new idea alone is what matters; bringing that idea to the market and putting it into practice is also ...

  23. Technological Innovation Review and Analysis

    Technological innovation is known to have a major impact on today's society, both economically and socially. Innovation has been discussed as a key factor for growth, and technological innovation as a driver of competitive success. Technological change stems from the industrial revolution and has completely changed the approach ...

  24. IELTS Essay: Technological innovations have affected our lives. Do you

    Technological innovations have affected our lives. Do you agree or disagree? Model essay. There is no denying the fact that technological innovations have affected our lives in many ways. While some innovations have made our lives better, others have provoked many people to debate whether technology is essentially good.

  25. IMF Working Papers

    Industrial policies pursued in many developing countries in the 1950s-1970s largely failed while the industrial policies of the Asian Miracles succeeded. We argue that a key factor of success is industrial policy with export orientation in contrast to import substitution. Exporting encouraged competition, economies of scale, innovation, and local integration and provided market signals to ...