How Computers Affect Our Lives Essay

Contents: introduction; history of computers; positive effects of computers on human life; computers replacing man; negative computer influences; conflict with religious beliefs; conclusion; works cited.

Computers are a common feature of modern life. They are vital to people who run businesses, industries, and other organizations, and almost everything people do today involves a computer in some way. Take the transport sector: vehicles, trains, airplanes, and even the traffic lights on our roads are controlled by computers.

In hospitals, most equipment is run by or relies on computers. Space exploration was made possible only with the advent of computer technology. In the job market, many positions require computer skills because the work itself involves computers.

In short, these machines have become so important and so deeply embedded in human life that society would now find it very hard to function without them. This article discusses the influence of computers on everyday human life.

One can only guess what would happen if the world had no computers. Many cures developed with the help of computer technology would not exist, meaning many people would have died from diseases that are now curable. In the entertainment industry, many movies and even songs would not exist, because most of the graphics and animations we see are possible only with the help of a computer (Saimo 1).

In medicine, pharmacies would find it hard to determine the type of medication to give their many patients. Computers have also played a role in the development of democracy: votes are now counted by computer, which has greatly reduced vote rigging and, consequently, the conflicts that would otherwise arise from it.

And as already noted, we would know little about space, because space exploration became possible only with the help of computer technology. However, the use of computers has also generated public debate, with some people supporting it and others criticizing it (Saimo 1).

To better understand how computers influence people's lives, we have to start with their history, from their invention to the present day. Early computers did not involve technologies as complex as those used today, nor did they use the monitors or chips that are now common.

Early computers were much larger than those used today, and they were mainly used to work out complex mathematical calculations that were tedious to do by hand. This is why some called the first machine a calculator and others a computer: it was used for making calculations.

Blaise Pascal is credited with the first digital machine that could add and subtract. Many later calculators and computers borrowed from his ideas, and as needs grew over time, successive modifications produced new and more efficient computers (Edwards 4).

The influence of computers on human life became widely felt during World War II, when they were used to calculate and track troop movements and to plan military attacks (Edwards 4). It is therefore clear that computers and their influence on man have a long history.

Their invention involved hard work, dedication, and determination, and in the end it paid off. The world was, and still is, being changed by computers. Because of them, man has been able to anticipate the future and plan ahead. Life today has been made easier with the help of computers; some people may disagree with this, but I am sure many will agree with me.

Those who disagree say that computers have taken over the role of man, which is not entirely wrong, but we must also acknowledge that what initially seemed impossible became possible because of computers (Turkle 22).

As mentioned in the introduction, computers are useful in running the affairs of many companies today. Companies handle large amounts of data that can be stored securely only with the help of computers, and that data is then used in computer-run operations. Without computers, companies would find it difficult to store the thousands of records they create every day.

Consider, for instance, a customer checking his or her balance, or one who simply wants information on past transactions. Without computers, it would take a long time to go through all the records to find a particular one.

The invention of computers made this easier: bank employees today provide customers with balances, transaction information, and other services with a few keystrokes. This would not be possible without computers (Saimo 1).

In personal life

Today individuals can store all their information, whether personal or business related, on a computer. Better still, they can update and modify that information as often as needed, and retrieve it easily whenever it is required, whether by sending it via email or by printing it.

All this has been made possible by computers. Life is easier and more enjoyable: individuals can entertain themselves at home watching TV with their families, or work from the comfort of their home, thanks to computer technology.

Computers feature in people's everyday lives. One can now use a computer without even being aware of it: people use their credit cards when buying items from stores, a practice so common that few realize the transaction is processed by computer.

It is the computer that processes the customer information read from the credit card, records the transaction, and then settles the bill by deducting the amount from the card. Getting cash has also become easier and faster: an individual simply walks to an ATM to withdraw the cash required, and ATMs themselves run on computer technology (Saimo 1).

I mentioned credit cards as one of the practical benefits of using computers. Today, individuals do not need to physically visit stores to buy items. All one needs is an internet connection and a computer to pay for items with a credit card.

The items can then be delivered to the doorstep. The era of queuing in crowded stores or wasting time in line to buy tickets is over: travelers can now buy tickets and make travel arrangements over the internet at any time, thanks to computer technology (Saimo 1).

In communication

Through the computer, man now has his most effective means of communication. The internet has made the world a global village. People today carry phones, which are essentially small computers, or laptops, and these have made the internet an effective and affordable way to contact friends, family, and business associates from anywhere in the world.

Businesses use computer technology to keep records and to track their accounts and the flow of money (Lee 1). In entertainment, computers have not been left behind either.

Action and science-fiction movies use computers to incorporate visual effects that make them look real. Computer games, a common form of entertainment especially among teenagers, have been made more engaging through advanced computer technology (Frisicaro et al. 1).

In education

The education sector has also been greatly influenced by computer technology. Much schoolwork is done with the aid of a computer. When students are given assignments, they can search for solutions on the internet using Google, and the assignments can then be neatly presented using software made specifically for that purpose.

Most high schools now require students to type their work before submitting it for marking, which computers make possible. Teachers also find computer technology very useful: they use it to track student performance and to deliver instruction.

Computers have also made online learning possible. Teachers and students no longer need to be physically present in a classroom: online teaching allows students to attend class from any place at any time without inconvenience (Computers 1).

In the medical sector

Another crucial sector that computers have greatly influenced, and continue to influence, is health care. As mentioned in the introduction, hospitals and pharmacies use computers in serving people.

In pharmacies, computers help pharmacists determine the type and dose of medication patients should receive. In many hospitals, patient data and health progress are recorded on computers, and the status and location of equipment are tracked by computer as well.

Research by scientists, doctors, and many others searching for cures for diseases and medical complications is facilitated by computer technology. Many diseases once known to be dangerous, such as malaria, are now treatable thanks in part to computer-assisted interventions (Parkin 615).

Many opponents of computer technology argue against the use of computers on the grounds that computers are replacing man in carrying out activities that are naturally human.

However, it should be noted that some situations call for extraordinary interventions. In many industries, machines have replaced human labor, and using machines is usually much cheaper than employing people.

In addition, machines give consistent results in terms of quality. In other instances, the skill required to perform a task is beyond an ordinary person, as in surgeries where human intervention alone is not sufficient; computer-operated machines have made such complex surgeries successful.

There are also cases where the tasks to be performed are too dangerous for a human being, such as disasters in which people are trapped underground in mines. Sending people into such situations is dangerous, and even where people are used, the rescue is usually delayed.

Computer-operated robots have helped in such situations, and lives have been saved. It is also impossible to send people on many space explorations, but computer-controlled machines such as robots have been used effectively to explore beyond our world (Gupta 1).

Despite all the good that computers have done for humans, their opponents raise important points that should not be ignored. There are many things computers do that leave people wondering whether they are really helping society or merely being used to deprive man of his God-given ability to act according to societal ethics.

In the workplace and even at home, computers have permeated every activity an individual does, compromising personal privacy. Computers have exposed people to unauthorized access to personal information, and some personal information, if exposed, can affect a person's life very negatively.

Today the world cares so little about ethics that it is very difficult to distinguish clearly between what is and is not authentic or trustworthy. Computers have taken over every aspect of human life, from household chores to practices in the social sphere.

As a result, people have lost part of their human element to machines. Industries and organizations have replaced human labor with cheaper and more efficient machine labor, which means people have lost jobs because of advances in computer technology. Children who grow up using computers can have difficulty distinguishing between reality and fiction (Subrahmanyam et al. 139).

People depend on computers to do their tasks. Students generate solutions to assignments using computers; teachers, in turn, use computers to mark those assignments. Doctors in hospitals depend on machines to diagnose patients, perform surgeries, and determine medications (Daley 56).

In the entertainment industry, computer technology is used to modify sound so that listeners think a singer is truly great, when in fact it is simply the computer. This has taken away the real function of the musician in the music sector.

In today's world of technology, we live as a worried lot. Hacking is very common, and statistics confirm that huge amounts of money are lost to it every year. So even as people pride themselves on being computer literate, they worry deeply that they may be the next victim of practices such as hacking (Bynum 1).

There is also the problem of trying to imitate God. Some believe that within 20 years man will create another form of life, a man-made being. This would not only affect how man's intelligence is viewed but also challenge the long-held view that God is the sole giver of life.

Computers have made artificial intelligence possible, whereby machines are given intelligence so that they can behave and act like man. Viewed from a religious standpoint, this creates conflicts in human belief.

It has long been held that man was created in the image of God. Creating a machine in the image of man would distort the way people conceive of God, and using artificial methods to produce new forms of life with man-like intelligence would lead man to equate himself with God.

This carries the risk of changing beliefs that mankind has held for ages. If it happens, the same computer technology, through mass media, will help distribute the new ideas and convince people to change their beliefs and conceptions of God (Krasnogor 1).

We have seen that computers have influenced, and will continue to influence, our lives. The advent of the computer has changed man as much as it has changed the world he lives in.

It is true that many things that once seemed impossible have been made possible by computer technology. Medical technologies have led to discoveries that have saved many lives. Communication is now easy and fast, and the world has been transformed into a virtual village.

Computers have made education accessible to all. In entertainment, people are better served, and crime surveillance is more effective. However, we should beware of trying to imitate God. As much as computers have positively influenced our lives, the technology is a live bomb waiting to explode.

We should tread carefully so as not to be overwhelmed by its sophistication (Computers 1). Many technologies have advanced so intensely that they outran their usefulness and destroyed themselves in the process; this seems like one such technology.

Works cited

Bynum, Terrell. Computer and Information Ethics. Plato, 2008. Web.

Computers. Institutional Impacts. Virtual Communities in a Capitalist World, n.d. Web.

Daley, Bill. Computers Are Your Future: Introductory. New York: Prentice, 2007. Print.

Edwards, Paul. From "Impact" to Social Process. Computers in Society and Culture, 1994. Web.

Frisicaro et al. So What's the Problem? The Impact of Computers, 2011. Web.

Gupta, Satyandra. We, Robot: What Real-Life Machines Can and Can't Do. Science News, 2011. Web.

Krasnogor, Ren. Advances in Artificial Life: Impacts on Human Life. n.d. Web.

Lee, Konsbruck. Impacts of Information Technology on Society in the New Century. Zurich. Web.

Parkin, Andrew. Computers in Clinical Practice: Applying Experience from Child Psychiatry. 2004. Web.

Saimo. The Impact of Computer Technology on Human Life. Impact of Computer, 2010. Web.

Subrahmanyam et al. The Impact of Home Computer Use on Children's Activities and Development. Princeton, 2004. Web.

Turkle, Sherry. The Second Self: Computers and the Human Spirit. 2005. Web.


Essay on Computer and its Uses for School Students and Children

500+ Words Essay on Computer

In this essay on the computer, we discuss some useful things about computers. The modern-day computer has become an important part of our daily life, and its use has increased manyfold during the last decade. Nowadays computers are used in every office, whether private or government. Mankind has been using computers for many decades now, in fields such as agriculture, design, machinery making, defense, and many more. Above all, they have revolutionized the whole world.


History of Computers

It is very difficult to pinpoint the exact origin of computers. According to some experts, computers existed by the time of World War II, when they were used for keeping data, though only for government use and not for the public. In the beginning, the computer was a very large and heavy machine.

Working of a Computer 

The computer runs on a three-step cycle: input, process, and output. It follows this cycle in every task it is asked to do. In simple terms, the data we feed into the computer is the input, the work the CPU does is the process, and the result the computer gives is the output.
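In code, the cycle might look like this small sketch (the numbers are made up for illustration):

```python
# A tiny sketch of the input -> process -> output cycle described above.
data = [3, 5, 7]            # input: the data fed to the computer
total = sum(data)           # process: the work the CPU does
print("Sum:", total)        # output: the result the computer gives back
```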

Components and Types of Computer

A basic computer consists of a CPU, monitor, mouse, and keyboard. Hundreds of other parts can be attached to it, including a printer, laser pen, scanner, and so on.

Computers come in many different types, such as supercomputers, mainframes, personal computers (desktops), PDAs, laptops, and so on. The mobile phone is also a type of computer because it fulfills all the criteria of being one.


Uses of Computer in Various Fields

As computer usage increased, it became a necessity for almost every field to use computers in its operations, and they have made working and organizing things easier. Below are some of the important fields that use computers in their daily operations.

Medical Field

Doctors use computers to diagnose diseases, run tests, and search for cures for deadly diseases. Indeed, cures for many diseases have been found because of computers.

Whether it is scientific research, space research, or social research, computers help in all of them. Thanks to them, we are able to monitor the environment, space, and society. Space research has helped us explore the galaxies, while scientific research has helped us locate useful resources on Earth.

For any country, defense is paramount to the safety and security of its people. Computers in this field help a country's security agencies detect threats that could be harmful in the future, and the defense industry uses them to keep its enemies under surveillance.

Threats from a Computer

While computers have become a necessity, they have also become a threat, largely because of hackers who steal private data and leak it on the internet, where anyone can access it. Apart from that, there are other threats such as viruses, spam, bugs, and many other problems.

Computer and Our Future

The computer is a very important machine that has become a useful part of our lives. It has two faces: on one side it is a boon, and on the other it is a bane; how it is used depends entirely on us. A day will come when human civilization will not be able to survive without computers, because we depend on them so much. So far, it remains one of mankind's great inventions, one that has helped save millions of lives.

Frequently Asked Questions on Computer

Q.1  What is a computer?

A.1 A computer is an electronic device or machine that makes our work easier and helps us in many ways.

Q.2 In which fields are computers used?

A.2 Computers are mainly used in defense, medicine, and research.


Science News


Century of Science: Theme

  • The future of computing

Everywhere and invisible

You are likely reading this on a computer. You are also likely taking that fact for granted. That’s even though the device in front of you would have astounded computer scientists just a few decades ago, and seemed like sheer magic much before that. It contains billions of tiny computing elements, running millions of lines of software instructions, collectively written by countless people across the globe. The result: You click or tap or type or speak, and the result seamlessly appears on the screen.


Computers once filled rooms. Now they’re everywhere and invisible, embedded in watches, car engines, cameras, televisions and toys. They manage electrical grids, analyze scientific data and predict the weather. The modern world would be impossible without them, and our dependence on them for health, prosperity and entertainment will only increase.

Scientists hope to make computers faster yet, to make programs more intelligent and to deploy technology in an ethical manner. But before looking at where we go from here, let’s review where we’ve come from.

In 1833, the English mathematician Charles Babbage conceived a programmable machine that presaged today’s computing architecture, featuring a “store” for holding numbers, a “mill” for operating on them, an instruction reader and a printer. This Analytical Engine also had logical functions like branching (if X, then Y). Babbage constructed only a piece of the machine, but based on its description, his acquaintance Ada Lovelace saw that the numbers it might manipulate could represent anything, even music, making it much more general-purpose than a calculator. “A new, a vast, and a powerful language is developed for the future use of analysis,” she wrote. She became an expert in the proposed machine’s operation and is often called the first programmer.


In 1936, the English mathematician Alan Turing introduced the idea of a computer that could rewrite its own instructions , making it endlessly programmable. His mathematical abstraction could, using a small vocabulary of operations, mimic a machine of any complexity, earning it the name “universal Turing machine.”

The first reliable electronic digital computer, Colossus, was completed in 1943, to help England decipher wartime codes. It used vacuum tubes — devices for controlling the flow of electrons — instead of moving mechanical parts like the Analytical Engine’s cogwheels. This made Colossus fast, but engineers had to manually rewire it every time they wanted to perform a new task. Perhaps inspired by Turing’s concept of a more easily reprogrammable computer, the team that created the United States’ first electronic digital computer , ENIAC, drafted a new architecture for its successor, the EDVAC. The mathematician John von Neumann, who penned the EDVAC’s design in 1945, described a system that could store programs in its memory alongside data and alter the programs, a setup now called the von Neumann architecture. Nearly every computer today follows that paradigm.


In 1947, researchers at Bell Telephone Laboratories invented the transistor , a piece of circuitry in which the application of voltage (electrical pressure) or current controls the flow of electrons between two points. It came to replace the slower and less efficient vacuum tubes. In 1958 and 1959, researchers at Texas Instruments and Fairchild Semiconductor independently invented integrated circuits, in which transistors and their supporting circuitry were fabricated on a chip in one process.

For a long time, only experts could program computers. Then in 1957, IBM released FORTRAN, a programming language that was much easier to understand. It’s still in use today. In 1981 the company unveiled the IBM PC and Microsoft released its operating system called MS-DOS, together expanding the reach of computers into homes and offices. Apple further personalized computing with the operating systems for its Lisa, in 1982, and Macintosh, in 1984. Both systems popularized graphical user interfaces, or GUIs, offering users a mouse cursor instead of a command line.


Meanwhile, researchers had been doing work that would end up connecting our newfangled hardware and software. In 1948, the mathematician Claude Shannon published “ A Mathematical Theory of Communication ,” a paper that popularized the word bit (for binary digit) and laid the foundation for information theory . His ideas have shaped computation and in particular the sharing of data over wires and through the air. In 1969, the U.S. Advanced Research Projects Agency created a computer network called ARPANET, which later merged with other networks to form the internet. In 1990, researchers at CERN — a European laboratory near Geneva, Switzerland — developed rules for transmitting data that would become the foundation of the World Wide Web.

Better hardware, better software and better communication have now connected most of the people on the planet. But how much better can the processors get? How smart can algorithms become? And what kinds of benefits and dangers should we expect to see as technology advances? Stuart Russell, a computer scientist at University of California, Berkeley and coauthor of a popular textbook on artificial intelligence, sees great potential for computers in “expanding artistic creativity, accelerating science, serving as diligent personal assistants, driving cars and — I hope — not killing us.” — Matthew Hutson


Chasing speed

Computers, for the most part, speak the language of bits. They store information — whether it’s music, an application or a password — in strings of 1s and 0s. They also process information in a binary fashion, flipping transistors between an “on” and “off” state. The more transistors in a computer, the faster it can process bits, making possible everything from more realistic video games to safer air traffic control.

Combining transistors forms one of the building blocks of a circuit, called a logic gate. An AND logic gate, for example, is on if both inputs are on, while an OR is on if at least one input is on. Together, logic gates compose a complex traffic pattern of electrons, the physical manifestation of computation. A computer chip can contain millions of such logic gates.
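To make the idea concrete, here is a minimal Python sketch (not from the article) showing how two such gates combine into a half-adder, the small circuit that adds two bits:

```python
# Logic gates modeled with Python's bitwise operators.
def AND(a: int, b: int) -> int:
    return a & b

def XOR(a: int, b: int) -> int:
    return a ^ b

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two bits; return (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```

Chaining such adders, bit by bit, is how a chip's arithmetic circuits add whole numbers.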

So the more logic gates, and by extension the more transistors, the more powerful the computer. In 1965, Gordon Moore, a cofounder of Fairchild Semiconductor and later of Intel, published a paper on the future of chips titled “Cramming More Components onto Integrated Circuits.” He graphed the number of components (mostly transistors) on five integrated circuits (chips) that had been built from 1959 to 1965, and extended the line. Transistors per chip had doubled every year, and he expected the trend to continue.


In a 1975 talk, Moore identified three factors behind this exponential growth: smaller transistors, bigger chips and “device and circuit cleverness,” such as less wasted space. He expected the doubling to occur every two years. It did, and continued doing so for decades. That trend is now called Moore’s law.
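A back-of-the-envelope sketch (my own arithmetic, not a figure from Moore's paper) shows how quickly that doubling compounds:

```python
# Growth factor implied by doubling transistor counts every two years.
def transistor_growth(start_year: int, end_year: int, doubling_years: float = 2.0) -> float:
    return 2 ** ((end_year - start_year) / doubling_years)

# 40 years of two-year doublings is 2**20, roughly a million-fold increase.
print(transistor_growth(1975, 2015))
```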

Moore’s law is not a physical law, like Newton’s law of universal gravitation. It was meant as an observation about economics. There will always be incentives to make computers faster and cheaper — but at some point, physics interferes. Chip development can’t keep up with Moore’s law forever, as it becomes more difficult to make transistors tinier. According to what’s jokingly called Moore’s second law, the cost of chip fabrication plants, or “fabs,” doubles every few years. The semiconductor company TSMC has considered building a plant that would cost $25 billion.

Today, Moore’s law no longer holds; doubling is happening at a slower rate. We continue to squeeze more transistors onto chips with each generation, but the generations come less frequently. Researchers are looking into several ways forward: better transistors, more specialized chips, new chip concepts and software hacks.  

Computer performance from 1985 through 2015


Until about 2005, the ability to squeeze more transistors onto each chip meant exponential improvements in computer performance (black and gray show an industry benchmark for computers with one or more “cores,” or processors). Likewise, clock frequency (green) — the number of cycles of operations performed per second — improved exponentially. Since this “Dennard-scaling era,” transistors have continued to shrink but that shrinking hasn’t yielded the same performance benefits.

Transistors

Transistors can get smaller still. Conceptually, a transistor consists of three basic elements. A metal gate (different from the logic gates above) lays across the middle of a semiconductor, one side of which acts as an electron source, and the other side a drain. Current passes from source to drain, and then on down the road, when the gate has a certain voltage. Many transistors are of a design called FinFET, because the channel from source to drain sticks up like a fin or a row of fins. The gate is like a larger, perpendicular wall that the fins pass through. It touches each fin on both sides and the top.

But, according to Sanjay Natarajan, who leads transistor design at Intel, “we’ve squeezed, we believe, everything you can squeeze out of that architecture.” In the next few years, chip manufacturers will start producing gate-all-around transistors, in which the channel resembles vertically stacked wires or ribbons penetrating the gate. These transistors will be faster and require less energy and space.

Transistors revisited


New transistor designs, a shift from the common FinFET (left) to gate-all-around transistors (right), for example, can make transistors that are smaller, faster and require less energy.

As these components have shrunk, the terminology to describe their size has gotten more confusing. You sometimes hear about chips being “14 nanometers” or “10 nanometers” in size; top-of-the-line chips in 2021 are “5 nanometers.” These numbers do not refer to the width or any other dimension of a transistor. They used to refer to the size of particular transistor features, but for several years now they have been nothing more than marketing terms.

Chip design

Even if transistors were to stop shrinking, computers would still have a lot of runway to improve, through Moore’s “device and circuit cleverness.”

A large hindrance to speeding up chips is the amount of heat they produce while moving electrons around. Too much and they’ll melt. For years, Moore’s law was accompanied by Dennard scaling, named after electrical engineer Robert Dennard, who said that as transistors shrank, they would also become faster and more energy efficient. That was true until around 2005, when they became so thin that they leaked too much current, heating up the chip. Since then, computer clock speed — the number of cycles of operations performed per second — hasn’t increased beyond a few gigahertz.



Computers are limited in how much power they can draw and in how much heat they can disperse. Since the mid-2000s, according to Tom Conte, a computer scientist at Georgia Tech in Atlanta who co-leads the IEEE Rebooting Computing Initiative, “power savings has been the name of the game.” So engineers have turned to making chips perform several operations simultaneously, or splitting a chip into multiple parallel “cores,” to eke more operations from the same clock speed. But programming for parallel circuits is tricky.

Another speed bump is that electrons often have to travel long distances between logic gates or between chips — which also produces a lot of heat. One solution to the delays and heat production of data transmission is to move transistors closer together. Some nascent efforts have looked at stacking them vertically. More near-term, others are stacking whole chips vertically. Another solution is to replace electrical wiring with fiber optics, as light transmits information faster and more efficiently than electrical current does.


Increasingly, computers rely on specialized chips or regions of a chip, called accelerators. Arranging transistors differently can put them to better use for specific applications. A cell phone, for instance, may have different circuitry designed for processing graphics, sound, wireless transmission and GPS signals.

“Sanjay [Natarajan] leads the parts of Intel that deliver transistors and transistor technologies,” says Richard Uhlig, managing director of Intel Labs. “We figure out what to do with the transistors,” he says of his team. One type of accelerator they’re developing is for what’s called fully homomorphic encryption, in which a computer processes data while it’s still encrypted — useful for, say, drawing conclusions about a set of medical records without revealing personal information. The project, funded by DARPA, could speed homomorphic encryption by hundreds of times.

More than 200 start-ups are developing accelerators for artificial intelligence , finding faster ways to perform the calculations necessary for software to learn from data.

Some accelerators aim to mimic, in hardware, the brain’s wiring. These “neuromorphic” chips typically embody at least one of three properties. First, memory elements may sit very close to computing elements, or the same elements may perform both functions, the way neurons both store and process information. One type of element that can perform this feat is the memristor . Second, the chips may process information using “spikes.” Like neurons, the elements sit around waiting for something to happen, then send a signal, or spike, when their activation crosses a threshold. Third, the chips may be analog instead of digital, eliminating the need for encoding continuous electrical properties such as voltage into discrete 1s and 0s.

These neuromorphic properties can make processing certain types of information orders of magnitude faster and more energy efficient. The computations are often less precise than in standard chips, but fuzzy logic is acceptable for, say, pattern matching or finding approximate solutions quickly. Uhlig says Intel has used its neuromorphic chip Loihi in tests to process odors, control robots and optimize railway schedules so that many trains can share limited tracks.
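The "spiking" idea above can be sketched with a toy leaky integrate-and-fire neuron, a standard textbook model rather than a description of Loihi or any other real chip:

```python
# Toy leaky integrate-and-fire neuron: accumulate input, leak a little each
# step, and emit a spike (1) when the potential crosses a threshold.
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * leak + x
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0      # reset after spiking
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.3, 0.4, 0.5, 0.1, 0.9, 0.2]))  # [0, 0, 1, 0, 0, 1]
```

Between spikes the element does essentially nothing, which is one reason spiking hardware can be so frugal with energy.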


Some types of accelerators might one day use quantum computing , which capitalizes on two features of the subatomic realm. The first is superposition , in which particles can exist not just in one state or another, but in some combination of states until the state is explicitly measured. So a quantum system represents information not as bits but as qubits , which can preserve the possibility of being either 0 or 1 when measured. The second is entanglement , the interdependence between distant quantum elements. Together, these features mean that a system of qubits can represent and evaluate exponentially more possibilities than there are qubits — all combinations of 1s and 0s simultaneously.
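That exponential scaling can be glimpsed in a plain state-vector sketch (standard textbook bookkeeping in NumPy, not code for any real quantum computer):

```python
# The state of n qubits is a vector of 2**n complex amplitudes.
import numpy as np

def uniform_superposition(n_qubits: int) -> np.ndarray:
    """Equal superposition over all 2**n basis states."""
    dim = 2 ** n_qubits
    return np.ones(dim, dtype=complex) / np.sqrt(dim)

print(uniform_superposition(2))          # four amplitudes, each 0.5
for n in (1, 2, 10, 20):
    print(n, "qubits ->", 2 ** n, "amplitudes")
```

Simulating even a few dozen qubits this way overwhelms a classical machine's memory, which is exactly why the hardware is interesting.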

Qubits can take many forms, but one of the most popular is as current in superconducting wires. These wires must be kept at a fraction of a degree above absolute zero, around –273° Celsius, to prevent hot, jiggling atoms from interfering with the qubits’ delicate superpositions and entanglement. Quantum computers also need many physical qubits to make up one “logical,” or effective, qubit, with the redundancy acting as error correction .

Quantum computers have several potential applications: machine learning, optimization (like train scheduling) and simulating real-world quantum mechanics, as in chemistry. But they will not likely become general-purpose computers. It’s not clear how you’d use one to, say, run a word processor.

New chip concepts

There remain new ways to dramatically speed up not just specialized accelerators but also general-purpose chips. Conte points to two paradigms. The first is superconduction. Below about 4 kelvins, around –269° C, many metals lose almost all electrical resistance, so they won’t convert current into heat. A superconducting circuit might be able to operate at hundreds of gigahertz instead of just a few, using much less electricity. The hard part lies not in keeping the circuits refrigerated (at least in big data centers), but in working with the exotic materials required to build them. 

The second paradigm is reversible computing. In 1961, the physicist Rolf Landauer merged information theory and thermodynamics , the physics of heat. He noted that when a logic gate takes in two bits and outputs one, it destroys a bit, expelling it as entropy, or randomness, in the form of heat. When billions of transistors operate at billions of cycles per second, the wasted heat adds up. Michael Frank, a computer scientist at Sandia National Laboratories in Albuquerque who works on reversible computing, wrote in 2017: “A conventional computer is, essentially, an expensive electric heater that happens to perform a small amount of computation as a side effect.”

But in reversible computing, logic gates have as many outputs as inputs. This means that if you ran the logic gate in reverse, you could use, say, three out-bits to obtain the three in-bits. Some researchers have conceived of reversible logic gates and circuits that could not only save those extra out-bits but also recycle them for other calculations. The physicist Richard Feynman had concluded that, aside from energy loss during data transmission, there’s no theoretical limit to computing efficiency.
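One standard example of such a gate is the Toffoli (controlled-controlled-NOT) gate, sketched below as a generic textbook construction rather than a circuit from the researchers mentioned above. It has three in-bits and three out-bits, and applying it twice recovers the original inputs, so no bit is destroyed:

```python
# Toffoli gate: flip the third bit only when the first two are both 1.
def toffoli(a: int, b: int, c: int) -> tuple[int, int, int]:
    return a, b, c ^ (a & b)

inputs = (1, 1, 0)
once = toffoli(*inputs)
twice = toffoli(*once)
print(inputs, "->", once, "->", twice)   # (1, 1, 0) -> (1, 1, 1) -> (1, 1, 0)
```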

Combine reversible and superconducting computing, Conte says, and “you get a double whammy.” Efficient computing allows you to run more operations on the same chip without worrying about power use or heat generation. Conte says that, eventually, one or both of these methods “probably will be the backbone of a lot of computing.”

Software hacks

Researchers continue to work on a cornucopia of new technologies for transistors, other computing elements, chip designs and hardware paradigms: photonics, spintronics , biomolecules, carbon nanotubes . But much more can still be eked out of current elements and architectures merely by optimizing code.

In a 2020 paper in Science , for instance, researchers studied the simple problem of multiplying two matrices, grids of numbers used in mathematics and machine learning. The calculation ran more than 60,000 times faster when the team picked an efficient programming language and optimized the code for the underlying hardware, compared with a standard piece of code in the Python language, which is considered user-friendly and easy to learn.
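That study's code isn't reproduced here, but the flavor of the gap is easy to see in a small, hedged comparison of a plain Python triple loop against NumPy's optimized routine:

```python
# Naive pure-Python matrix multiply versus NumPy's tuned implementation.
import time
import numpy as np

def naive_matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            aik = A[i][k]
            for j in range(p):
                C[i][j] += aik * B[k][j]
    return C

n = 200
A = np.random.rand(n, n)
B = np.random.rand(n, n)

t0 = time.perf_counter(); naive_matmul(A.tolist(), B.tolist()); t1 = time.perf_counter()
t2 = time.perf_counter(); A @ B; t3 = time.perf_counter()
print(f"naive loops: {t1 - t0:.3f} s, NumPy: {t3 - t2:.5f} s")
```

Even this crude test typically shows a gap of several orders of magnitude; the Science team's full optimizations went much further.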

Computing gains through hardware and algorithm improvement


Hardware isn’t the only way computing speeds up. Advances in the algorithms — the computational procedures for achieving a result — can lend a big boost to performance. The graph above shows the relative number of problems that can be solved in a fixed amount of time for one type of algorithm. The black line shows gains over time from hardware and algorithm advances; the purple line shows gains from hardware improvements alone.

Neil Thompson, a research scientist at MIT who coauthored the Science paper, recently coauthored a paper looking at historical improvements in algorithms , abstract procedures for tasks like sorting data. “For a substantial minority of algorithms,” he says, “their progress has been as fast or faster than Moore’s law.”
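A rough, invented illustration of why that matters: an O(n²) algorithm running on a machine that is 100 times faster still loses to an O(n log n) algorithm on unimproved hardware once the problem gets big.

```python
# Compare operation counts: quadratic algorithm with a 100x hardware speedup
# versus an n*log(n) algorithm with no hardware improvement at all.
import math

for n in (1_000, 1_000_000):
    quadratic_on_fast_hw = n * n / 100
    nlogn_on_slow_hw = n * math.log2(n)
    print(f"n={n:>9,}: {quadratic_on_fast_hw:.2e} vs {nlogn_on_slow_hw:.2e}")
# At n = 1,000 the two are comparable; at n = 1,000,000 the better
# algorithm wins by roughly a factor of 500 despite the slower hardware.
```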

People have predicted the end of Moore’s law for decades. Even Moore has predicted its end several times. Progress may have slowed, at least for the time being, but human innovation, accelerated by economic incentives, has kept technology moving at a fast clip. — Matthew Hutson

Chasing intelligence

From the early days of computer science, researchers have aimed to replicate human thought. Alan Turing opened a 1950 paper titled “ Computing Machinery and Intelligence ” with: “I propose to consider the question, ‘Can machines think?’” He proceeded to outline a test, which he called “the imitation game” ( now called the Turing test ), in which a human communicating with a computer and another human via written questions had to judge which was which. If the judge failed, the computer could presumably think.


The term “artificial intelligence” was coined in a 1955 proposal for a summer institute at Dartmouth College. “An attempt will be made,” the proposal goes, “to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” The organizers expected that over two months, the 10 summit attendees would make a “significant advance.”


More than six decades and untold person-hours later, it’s unclear whether the advances live up to what was in mind at that summer summit. Artificial intelligence surrounds us, in ways invisible (filtering spam), headline-worthy (beating us at chess, driving cars) and in between (letting us chat with our smartphones). But these are all narrow forms of AI, performing one or two tasks well. What Turing and others had in mind is called artificial general intelligence, or AGI. Depending on your definition, it’s a system that can do most of what humans do.

We may never achieve AGI, but the path has led, and will lead, to lots of useful innovations along the way. “I think we’ve made a lot of progress,” says Doina Precup, a computer scientist at McGill University in Montreal and head of AI company DeepMind’s Montreal research team. “But one of the things that, to me, is still missing right now is more of an understanding of the principles that are fundamental in intelligence.”

AI has made great headway in the last decade, much of it due to machine learning. Previously, computers relied more heavily on symbolic AI, which uses algorithms, or sets of instructions, that make decisions according to manually specified rules. Machine-learning programs, on the other hand, process data to find patterns on their own. One form uses artificial neural networks, software with layers of simple computing elements that together mimic certain principles of biological brains. Neural networks with several, or many more, layers are currently popular and make up a type of machine learning called deep learning.
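A minimal sketch of those "layers of simple computing elements" is a forward pass through a tiny two-layer network; the sizes and random weights here are arbitrary placeholders, not any real model:

```python
# Forward pass of a small feedforward network in NumPy.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

# Weights: 4 inputs -> 8 hidden units -> 3 outputs
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def forward(x):
    h = relu(x @ W1 + b1)   # hidden layer: weighted sum plus nonlinearity
    return h @ W2 + b2      # output layer

print(forward(rng.normal(size=4)))
```

Training consists of nudging those weights, over many examples, so the outputs better match the desired answers.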


Deep-learning systems can now play games like chess and Go better than the best human. They can probably identify dog breeds from photos better than you can. They can translate text from one language to another. They can control robots and compose music and predict how proteins will fold.

But they also lack much of what falls under the umbrella term of common sense. They don’t understand fundamental things about how the world works, physically or socially. Slightly changing images in a way that you or I might not notice, for example, can dramatically affect what a computer sees. Researchers found that placing a few innocuous stickers on a stop sign can lead software to interpret the sign as a speed limit sign, an obvious problem for self-driving cars .


Types of learning

How can AI improve? Computer scientists are leveraging multiple forms of machine learning, whether the learning is “deep” or not. One common form is called supervised learning, in which machine-learning systems, or models, are trained by being fed labeled data such as images of dogs and their breed names. But that requires lots of human effort to label them. Another approach is unsupervised or self-supervised learning, in which computers learn without relying on outside labels, the way you or I predict what a chair will look like from different angles as we walk around it.

Models that process billions of words of text, predicting the next word one at a time and changing slightly when they’re wrong, rely on unsupervised learning. They can then generate new strings of text. In 2020, the research lab OpenAI released a trained language model called GPT-3 that’s perhaps the most complex neural network ever. Based on prompts, it can write humanlike news articles, short stories and poems. It can answer trivia questions, write computer code and translate language — all without being specifically trained to do any of these things. It’s further down the path toward AGI than many researchers thought was currently possible. And language models will get bigger and better from here.
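GPT-3 is vastly more complex, but the objective described above, predicting the next word from what came before, can be sketched with a toy bigram counter trained on an invented corpus:

```python
# Count which word follows which, then predict the most frequent continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

next_word_counts = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    next_word_counts[word][nxt] += 1

def predict_next(word: str) -> str:
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("the"))   # 'cat' -- seen twice after 'the' in training
```

Large language models replace the counting table with billions of learned parameters, but the prediction task is the same.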


Another type of machine learning is reinforcement learning , in which a model interacts with an environment, exploring sequences of actions to achieve a goal. Reinforcement learning has allowed AI to become expert at board games like Go and video games like StarCraft II . A recent paper by researchers at DeepMind, including Precup, argues in the title that “ Reward Is Enough .” By merely having a training algorithm reinforce a model’s successful or semi-successful behavior, models will incrementally build up all the components of intelligence needed to succeed at the given task and many others.

For example, according to the paper, a robot rewarded for maximizing kitchen cleanliness would eventually learn “perception (to differentiate clean and dirty utensils), knowledge (to understand utensils), motor control (to manipulate utensils), memory (to recall locations of utensils), language (to predict future mess from dialogue) and social intelligence (to encourage young children to make less mess).” Whether trial and error would lead to such skills within the life span of the solar system — and what kinds of goals, environment and model would be required — is to be determined.
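The core loop of reinforcement learning can be sketched with tabular Q-learning on a toy corridor task, a generic textbook method rather than the approach of the paper above:

```python
# Q-learning on a 5-cell corridor: reward only at the right-hand end.
import random

n_states, actions = 5, (-1, +1)          # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.3

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Explore occasionally; otherwise take the best-known action.
        a = random.choice(actions) if random.random() < epsilon else max(actions, key=lambda x: Q[(s, x)])
        s_next = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Nudge the estimate toward reward plus discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in actions) - Q[(s, a)])
        s = s_next

print([max(actions, key=lambda x: Q[(s, x)]) for s in range(n_states - 1)])  # learned policy: all +1
```

The agent is told nothing about the corridor; the reward signal alone shapes its behavior, which is the intuition behind the "reward is enough" argument.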

Another type of learning involves Bayesian statistics, a way of estimating what conditions are likely given current observations. Bayesian statistics is helping machines identify causal relations, an essential skill for advanced intelligence.
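A minimal sketch of such an update, with invented numbers, looks like this:

```python
# Bayes' rule: combine a prior belief with an observation's likelihood.
def bayes_posterior(prior: float, p_obs_given_h: float, p_obs_given_not_h: float) -> float:
    evidence = p_obs_given_h * prior + p_obs_given_not_h * (1 - prior)
    return p_obs_given_h * prior / evidence

# Prior belief that a cause is present: 10%. The observation is 8x more
# likely if the cause is present than if it is absent.
print(bayes_posterior(prior=0.10, p_obs_given_h=0.8, p_obs_given_not_h=0.1))  # ~0.47
```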

Generalizing

To learn efficiently, machines (and people) need to generalize, to draw abstract principles from experiences. “A huge part of intelligence,” says Melanie Mitchell, a computer scientist at the Santa Fe Institute in New Mexico, “is being able to take one’s knowledge and apply it in different situations.” Much of her work involves analogies, in a most rudimentary form: finding similarities between strings of letters. In 2019, AI researcher François Chollet of Google created a kind of IQ test for machines called the Abstraction and Reasoning Corpus, or ARC, in which computers must complete visual patterns according to principles demonstrated in example patterns. The puzzles are easy for humans but so far challenging for machines. Eventually, AI might understand grander abstractions like love and democracy.

Machine IQ test


In a kind of IQ test for machines, computers are challenged to complete a visual patterning task based on examples provided. In each of these three tasks, computers are given “training examples” (both the problem, left, and the answer, right) and then have to determine the answer for “test examples.” The puzzles are typically much easier for humans than for machines.

Much of our abstract thought, ironically, may be grounded in our physical experiences. We use conceptual metaphors like important = big, and argument = opposing forces. AGI that can do most of what humans can do may require embodiment, such as operating within a physical robot. Researchers have combined language learning and robotics by creating virtual worlds where virtual robots simultaneously learn to follow instructions and to navigate within a house. GPT-3 is evidence that disembodied language may not be enough. In one demo , it wrote: “It takes two rainbows to jump from Hawaii to seventeen.”

“I’ve played around a lot with it,” Mitchell says. “It does incredible things. But it can also make some incredibly dumb mistakes.”

AGI might also require other aspects of our animal nature, like emotions , especially if humans expect to interact with machines in natural ways. Emotions are not mere irrational reactions. We’ve evolved them to guide our drives and behaviors. According to Ilya Sutskever, a cofounder and the chief scientist at OpenAI, they “give us this extra oomph of wisdom.” Even if AI doesn’t have the same conscious feelings we do, it may have code that approximates fear or anger. Already, reinforcement learning includes an exploratory element akin to curiosity .


One function of curiosity is to help learn causality, by encouraging exploration and experimentation, Precup says. However, current exploration methods in AI “are still very far from babies playing purposefully with objects,” she notes.

Humans aren’t blank slates. We’re born with certain predispositions to recognize faces, learn language and play with objects. Machine-learning systems also require the right kind of innate structure to learn certain things quickly. How much structure, and what kind, is a matter of intense debate in the field. Sutskever says building in how we think we think is “intellectually seductive,” and he leans toward blank slates. However, “we want the best blank slate.”

One general neural-network structure Sutskever likes is called the transformer, a method for paying greater attention to important relationships between elements of an input. It’s behind current language models like GPT-3, and has also been applied to analyzing images, audio and video. “It makes everything better,” he says.

Thinking about thinking

AI itself may help us discover new forms of AI. There’s a set of techniques called AutoML, in which algorithms help optimize neural-network architectures or other aspects of AI models. AI also helps chip architects design better integrated circuits. This year, Google researchers reported in Nature that reinforcement learning performed better than their in-house team at laying out some aspects of an accelerator chip they’d designed for AI.

Estimates of AGI’s proximity vary greatly, but most experts think it’s decades away. In a 2016 survey, 352 machine-learning researchers estimated the arrival of “high-level machine intelligence,” defined as “when unaided machines can accomplish every task better and more cheaply than human workers.” On average, they gave even odds of such a feat by around 2060.

But no one has a good basis for judging. “We don’t understand our own intelligence,” Mitchell says, as much of it is unconscious. “And therefore, we don’t know what’s going to be hard or easy for AI.” What seems hard can be easy and vice versa — a phenomenon known as Moravec’s paradox, after the roboticist Hans Moravec. In 1988, Moravec wrote, “it is comparatively easy to make computers exhibit adult-level performance in solving problems on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a 1-year-old when it comes to perception and mobility.” Babies are secretly brilliant. In aiming for AGI, Precup says, “we are also understanding more about human intelligence, and about intelligence in general.”

The gap between organic and synthetic intelligence sometimes seems small because we anthropomorphize machines, spurred by computer science terms like intelligence , learning and vision . Aside from whether we even want humanlike machine intelligence — if they think just like us, won’t they essentially just be people, raising ethical and practical dilemmas? — such a thing may not be possible. Even if AI becomes broad, it may still have unique strengths and weaknesses.

Turing also differentiated between general intelligence and humanlike intelligence. In his 1950 paper on the imitation game, he wrote, “May not machines carry out something which ought to be described as thinking but which is very different from what a man does?” — Matthew Hutson


Ethical issues

In the 1942 short story “Runaround,” one of Isaac Asimov’s characters enumerated “the three fundamental Rules of Robotics — the three rules that are built most deeply into a robot’s positronic brain.” Robots avoided causing or allowing harm to humans, they obeyed orders and they protected themselves, as long as following one rule didn’t conflict with preceding decrees.

We might picture Asimov’s “positronic brains” making autonomous decisions about harm to humans, but that’s not actually how computers affect our well-being every day. Instead of humanoid robots killing people, we have algorithms curating news feeds. As computers further infiltrate our lives, we’ll need to think harder about what kinds of systems to build and how to deploy them, as well as meta-problems like how to decide — and who should decide — these things.

This is the realm of ethics, which may seem distant from the supposed objectivity of math, science and engineering. But deciding what questions to ask about the world and what tools to build has always depended on our ideals and scruples. Studying an abstruse topic like the innards of atoms, for instance, has clear bearing on both energy and weaponry. “There’s the fundamental fact that computer systems are not value neutral,” says Barbara Grosz, a computer scientist at Harvard University, “that when you design them, you bring some set of values into that design.”

One topic that has received a lot of attention from scientists and ethicists is fairness and bias. Algorithms increasingly inform or even dictate decisions about hiring, college admissions, loans and parole. Even if they discriminate less than people do, they can still treat certain groups unfairly, not by design but often because they are trained on biased data. They might predict a person’s future criminal behavior based on prior arrests, for instance, even though different groups are arrested at different rates for a given amount of crime.

Bar charts comparing estimated drug use among Oakland residents with the share of the population that would be targeted by predictive policing show that a predictive policing algorithm tested in Oakland, Calif., would target Black people at roughly twice the rate of white people, even though data from the same period, 2011, indicate that drug use was roughly equivalent across racial groups.

And confusingly, there are multiple definitions of fairness, such as equal false-positive rates between groups or equal false-negative rates between groups. A researcher at one conference listed 21 definitions. And the definitions often conflict. In one paper, researchers showed that in most cases it’s mathematically impossible to satisfy three common definitions simultaneously.
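To make the conflict concrete, here is a toy sketch that measures two of those definitions — false-positive and false-negative rates per group — for a hypothetical classifier. The synthetic labels, predictions and group split are invented; the point is only that the rates can be computed and compared, not that they can all be equalized at once.

```python
import numpy as np

def group_error_rates(y_true, y_pred, group):
    """False-positive and false-negative rates within one group (a toy sketch)."""
    y_true, y_pred = y_true[group], y_pred[group]
    fpr = np.mean(y_pred[y_true == 0] == 1)   # flagged despite a negative label
    fnr = np.mean(y_pred[y_true == 1] == 0)   # missed despite a positive label
    return fpr, fnr

# Synthetic labels and model decisions for two groups (purely illustrative).
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
group_a = rng.random(1000) < 0.5
group_b = ~group_a

print("group A (FPR, FNR):", group_error_rates(y_true, y_pred, group_a))
print("group B (FPR, FNR):", group_error_rates(y_true, y_pred, group_b))
```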

Another concern is privacy and surveillance, given that computers can now gather and sort information about how we use them on a scale that was previously unimaginable. Data on our online behavior can help predict aspects of our private lives, like sexuality. Facial recognition can also follow us around the real world, helping police or authoritarian governments. And the emerging field of neurotechnology is already testing ways to connect the brain directly to computers. Related to privacy is security — hackers can access data that’s locked away, or interfere with pacemakers and autonomous vehicles.

Computers can also enable deception. AI can generate content that looks real. Language models might write masterpieces or be used to fill the internet with fake news and recruiting material for extremist groups. Generative adversarial networks, a type of deep learning that can generate realistic content, can assist artists or create deepfakes, images or videos showing people doing things they never did.


On social media, we also need to worry about polarization in people’s social, political and other views. Generally, recommendation algorithms optimize engagement (and platform profit through advertising), not civil discourse. Algorithms can also manipulate us in other ways. Robo-advisers — chatbots for dispensing financial advice or providing customer support — might learn to know what we really need, or to push our buttons and upsell us on extraneous products.

Multiple countries are developing autonomous weapons that have the potential to reduce civilian casualties as well as escalate conflict faster than their minders can react. Putting guns or missiles in the hands of robots raises the sci-fi specter of Terminators attempting to eliminate humankind. They might even think they’re helping us because eliminating humankind also eliminates human cancer (an example of having no common sense). More near-term, automated systems let loose in the real world have already caused flash crashes in the stock market and Amazon book prices reaching into the millions. If AIs are charged with making life-and-death decisions, they then face the famous trolley problem, deciding whom or what to sacrifice when not everyone can win. Here we’re entering Asimov territory.

That’s a lot to worry about. Russell, of UC Berkeley, suggests where our priorities should lie: “Lethal autonomous weapons are an urgent issue, because people may have already died, and the way things are going, it’s only a matter of time before there’s a mass attack,” he says. “Bias and social media addiction and polarization are both arguably instances of failure of value alignment between algorithms and society, so they are giving us early warnings of how things can easily go wrong.” He adds, “I don’t think trolley problems are urgent at all.”


There are also social, political and legal questions about how to manage technology in society. Who should be held accountable when an AI system causes harm? (For instance, “confused” self-driving cars have killed people.) How can we ensure more equal access to the tools of AI and their benefits, and make sure they don’t harm some groups much more than others? How will automating jobs upend the labor market? Can we manage the environmental impact of data centers, which use a lot of electricity? (Bitcoin mining is responsible for as many tons of carbon dioxide emissions as a small country.) Should we preferentially employ explainable algorithms — rather than the black boxes of many neural networks — for greater trust and debuggability, even if it makes the algorithms poorer at prediction?

What can be done

Michael Kearns, a computer scientist at the University of Pennsylvania and coauthor of The Ethical Algorithm , puts the problems on a spectrum of manageability. At one end is what’s called differential privacy, the ability to add noise to a dataset of, say, medical records so that it can be shared usefully with researchers without revealing much about the individual records. We can now make mathematical guarantees about exactly how private individuals’ data should remain.
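A minimal sketch of the idea, assuming a simple counting query: the Laplace mechanism adds calibrated noise so that any one person’s record barely changes the published answer. The records and the privacy parameter epsilon below are illustrative.

```python
import numpy as np

def laplace_count(values, predicate, epsilon):
    """Differentially private count via the Laplace mechanism.
    A count query has sensitivity 1 (one person changes it by at most 1),
    so noise drawn from Laplace(scale=1/epsilon) gives epsilon-DP."""
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical medical records: 1 = has the condition, 0 = does not.
records = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]
print(laplace_count(records, lambda r: r == 1, epsilon=0.5))  # noisy answer near 4
```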

Somewhere in the middle of the spectrum is fairness in machine learning. Researchers have developed methods to increase fairness by removing or altering biased training data, or to maximize certain types of equality — in loans, for instance — while minimizing reduction in profit. Still, some types of fairness will forever be in mutual conflict, and math can’t tell us which ones we want.

At the far end is explainability. As opposed to fairness, which can be analyzed mathematically in many ways, the quality of an explanation is hard to describe in mathematical terms. “I feel like I haven’t seen a single good definition yet,” Kearns says. “You could say, ‘Here’s an algorithm that will take a trained neural network and try to explain why it rejected you for a loan,’ but [the explanation] doesn’t feel principled.”

Explanation methods include generating a simpler, interpretable model that approximates the original, or highlighting regions of an image a network found salient, but these are just gestures toward how the cryptic software computes. Even worse, systems can provide intentionally deceptive explanations, to make unfair models look fair to auditors. Ultimately, if the audience doesn’t understand it, it’s not a good explanation, and measuring its success — however you define success — requires user studies.
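As a rough illustration of the first method — approximating a black box with a simpler, interpretable model — the sketch below trains a shallow decision tree to mimic a stand-in “black box” classifier. The synthetic data and model choices are assumptions, and as Kearns notes, fidelity to the black box is not the same as a principled explanation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in "black box": a random forest trained on synthetic loan-like data.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global surrogate: a shallow tree trained to mimic the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("fidelity to black box:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate))   # human-readable, if approximate, decision rules
```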

Something like Asimov’s three laws won’t save us from robots that hurt us while trying to help us; a robot that steps on your phone because you told it to hurry up and get you a drink is a plausible example. And even if the list were extended to a million laws, the letter of a law is not identical to its spirit. One possible solution is what’s called inverse reinforcement learning, or IRL. In reinforcement learning, a model learns behaviors to achieve a given goal. In IRL, it infers someone’s goal by observing their behavior. We can’t always articulate our values — the goals we ultimately care about — but AI might figure them out by watching us. If we have coherent goals, that is.
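A toy sketch of the flavor of IRL: given a few candidate goals and an observed trajectory, pick the goal under which the behavior is most probable. The one-dimensional world, the candidate goals and the noisy “softmax” choice rule are all invented for illustration and are far simpler than the methods Russell and others actually use.

```python
import math

# Toy goal inference in the spirit of IRL: which candidate goal on a 1-D line
# best explains an observed rightward walk? Everything here is invented.
CANDIDATE_GOALS = [0, 5, 9]
observed_states = [4, 5, 6, 7, 8, 9]   # the agent walked rightward to 9

def log_likelihood(goal, states, beta=2.0):
    """Assume the agent noisily prefers moves that reduce distance to its goal
    (a Boltzmann/softmax choice over the two actions: step left or step right)."""
    total = 0.0
    for s, s_next in zip(states, states[1:]):
        utilities = {s - 1: -abs((s - 1) - goal), s + 1: -abs((s + 1) - goal)}
        z = sum(math.exp(beta * u) for u in utilities.values())
        total += beta * utilities[s_next] - math.log(z)
    return total

best = max(CANDIDATE_GOALS, key=lambda g: log_likelihood(g, observed_states))
print("inferred goal:", best)   # the rightmost goal explains the behavior best
```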

“Perhaps the most obvious preference is that we prefer to be alive,” says Russell, who has pioneered IRL. “So an AI agent using IRL can avoid courses of action that cause us to be dead. In case this sounds too trivial, remember that not a single one of the prototype self-driving cars knows that we prefer to be alive. The self-driving car may have rules that in most cases prohibit actions that cause death, but in some unusual circumstance — such as filling a garage with carbon monoxide — they might watch the person collapse and die and have no notion that anything was wrong.”

Digital lives


In 2021, Facebook unveiled its vision for a metaverse, a virtual world where people would work and play. “As so many have made clear, this is what technology wants,” says MIT sociologist and clinical psychologist Sherry Turkle about the metaverse. “For me, it would be wiser to ask first, not what technology wants, but what do people want? What do people need to be safer? Less lonely? More connected to each other in communities? More supported in their efforts to live healthier and more fulfilled lives?”

Engineer, heal thyself

In the 1950 short story “The Evitable Conflict,” Asimov articulated what became a “zeroth law,” which would supersede the others: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” It should go without saying that the rule should apply with “roboticist” in place of “robot.” For sure, many computer scientists avoid harming humanity, but many also don’t actively engage with the social implications of their work, effectively allowing humanity to come to harm, says Margaret Mitchell, a computer scientist who co-led Google’s Ethical AI team and now consults with organizations on tech ethics. (She is no relation to computer scientist Melanie Mitchell.)

One hurdle, according to Grosz, is that they’re not properly trained in ethics. But she hopes to change that. Grosz and the philosopher Alison Simmons began a program at Harvard called Embedded EthiCS, in which teaching assistants with training in philosophy are embedded in computer science courses and teach lessons on privacy or discrimination or fake news. The program has spread to MIT, Stanford and the University of Toronto.

“We try to get students to think about values and value trade-offs,” Grosz says. Two things have struck her. The first is the difficulty students have with problems that lack right answers and require arguing for particular choices. The second is, despite their frustration, “how much students care about this set of issues,” Grosz says.

Another way to educate technologists about their influence is to widen collaborations. According to Mitchell, “computer science needs to move from holding math up as the be-all and end-all, to holding up both math and social science, and psychology as well.” Researchers should bring in experts in these topics, she says. Going the other way, Kearns says, they should also share their own technical expertise with regulators, lawyers and policy makers. Otherwise, policies will be so vague as to be useless. Without specific definitions of privacy or fairness written into law, companies can choose whatever’s most convenient or profitable.

When evaluating how a tool will affect a community, the best experts are often community members themselves. Grosz advocates consulting with diverse populations. Diversity helps in both user studies and technology teams. “If you don’t have people in the room who think differently from you,” Grosz says, “the differences are just not in front of you. If somebody says not every patient has a smartphone, boom, you start thinking differently about what you’re designing.”

According to Margaret Mitchell, “the most pressing problem is the diversity and inclusion of who’s at the table from the start. All the other issues fall out from there.” — Matthew Hutson

Editor’s note: This story was published February 24, 2022.

Alan Turing sketches out the theoretical blueprint for a machine able to implement instructions for making any calculation — the principle behind modern computing devices.

The University of Pennsylvania rolls out ENIAC, the first all-electronic general-purpose digital computer. The Colossus electronic computers had been used by British code-breakers during World War II.

Grace Hopper creates the first compiler. It translated instructions into code that a computer could read and execute, making it an important step in the evolution of modern programming languages.

Three computers released in 1977 — the Commodore PET, the Apple II and the TRS-80 — help make personal computing a reality.

Google’s AlphaGo computer program defeats world champion Go player Lee Sedol.

Researchers at Google report a controversial claim that they have achieved quantum supremacy with their Sycamore chip, performing a computation that would be impossible in practice for a classical machine.

From the archive

From now on: computers.

Science News Letter editor Watson Davis predicts how “mechanical brains” will push forward human knowledge.

Maze for Mechanical Mouse

Claude Shannon demonstrates his “electrical mouse,” which can learn to find its way through a maze.

Giant Electronic Brains

Science News Letter covers the introduction of a “giant electronic ‘brain’” to aid weather predictions.

Automation Changes Jobs

A peek into early worries over how technological advances will swallow up jobs.

Machine ‘Thinks’ for Itself

“An automaton that is half-beast, half-machine is able to ‘think’ for itself,” Science News Letter reports.

Predicting Chemical Properties by Computer

A report on how artificial intelligence is helping to predict chemical properties.

From Number Crunchers to Pocket Genies

The first in a series of articles on the computer revolution explores the technological breakthroughs bringing computers to the average person.

Calculators in the Classroom

Science News weighs the pros and cons of “pocket math,” noting that high school and college students are “buying calculators as if they were radios.”

Computing for Art’s Sake

Artists embrace computers as essential partners in the creative process, Science News’ Janet Raloff reports.

PetaCrunchers

Mathematics writer Ivars Peterson reports on the push toward ultrafast supercomputing — and what it might reveal about the cosmos.

A Mind from Math

Alan Turing foresaw the potential of machines to mimic brains, reports Tom Siegfried.

Machines are getting schooled on fairness

Machine-learning programs can introduce biases that may harm job seekers, loan applicants and more, Maria Temming reports.



Essay on Future of Computer

Students are often asked to write an essay on Future of Computer in their schools and colleges. And if you’re also looking for the same, we have created 100-word, 250-word, and 500-word essays on the topic.

Let’s take a look…

100 Words Essay on Future of Computer

The Future of Computers

Computers are becoming smarter every day. They can now do tasks that were once only possible for humans. In the future, they may even start thinking like us!

Artificial Intelligence

Artificial Intelligence (AI) is a big part of the future. It allows computers to learn from their experiences. This means they can improve over time without needing help from humans.

Virtual Reality

Virtual Reality (VR) is another exciting area. It allows us to enter computer-created worlds. This could change how we learn, play, and work.

Quantum Computing

Quantum computing is a new technology that could make computers incredibly fast. This could help solve problems that are currently too hard for regular computers.

250 Words Essay on Future of Computer

The Evolution of Computers

AI is set to revolutionize the future of computers. Machine learning algorithms, a subset of AI, are becoming increasingly adept at pattern recognition and predictive analysis. This will lead to computers that can learn and adapt to their environment, making them more intuitive and user-friendly.

Quantum computing, using quantum bits or ‘qubits’, is another frontier. Unlike traditional bits that hold a value of either 0 or 1, qubits can exist in multiple states simultaneously. This allows quantum computers to perform complex calculations at unprecedented speeds. While still in its infancy, quantum computing could redefine computational boundaries.

Cloud Technology

Cloud technology is poised to further transform computer usage. With most data and applications moving to the cloud, the need for powerful personal computers may diminish. Instead, thin clients or devices with minimal hardware, relying on the cloud for processing and storage, could become the norm.

The future of computers is a fascinating blend of AI, quantum computing, and cloud technology. As these technologies mature, we can expect computers to become even more integral to our lives, reshaping society in profound ways. The only certainty is that the pace of change will continue to accelerate, making the future of computers an exciting realm of endless possibilities.

500 Words Essay on Future of Computer

The Evolution of Computing

Computers have revolutionized the way we live, work, and play. From their early inception as room-sized machines to the sleek, pocket-sized devices we have today, computers have evolved dramatically. However, this is only the tip of the iceberg. The future of computing promises to be even more exciting and transformative.

Artificial Intelligence and Machine Learning

Artificial Intelligence (AI) and Machine Learning (ML) are two other areas poised to shape the future of computing. AI refers to the ability of a machine to mimic human intelligence, while ML is a subset of AI that involves the ability of machines to learn and improve without being explicitly programmed. As these technologies advance, we can expect computers to become more autonomous, capable of complex decision-making and problem-solving.

Neuromorphic Computing

Neuromorphic computing, another promising field, aims to mimic the human brain’s architecture and efficiency. By leveraging the principles of neural networks, neuromorphic chips can process information more efficiently than traditional processors, making them ideal for applications requiring real-time processing and low power consumption.

Edge Computing

Conclusion: The Future Is Now

The future of computing is already unfolding around us. Quantum computers are being developed by tech giants, AI and ML are becoming more sophisticated, neuromorphic chips are on the horizon, and edge computing is becoming a necessity in our increasingly connected world. As we move forward, the boundaries of what computers can achieve will continue to expand, leading to unprecedented advancements in technology and society. The future of computing is not just a concept—it’s a reality that’s taking shape right before our eyes.

That’s it! I hope the essay helped you.

Happy studying!


Where computing might go next

The future of computing depends in part on how we reckon with its past.


By Margaret O’Mara

If the future of computing is anything like its past, then its trajectory will depend on things that have little to do with computing itself. 

Technology does not appear from nowhere. It is rooted in time, place, and opportunity. No lab is an island; machines’ capabilities and constraints are determined not only by the laws of physics and chemistry but by who supports those technologies, who builds them, and where they grow. 

Popular characterizations of computing have long emphasized the quirkiness and brilliance of those in the field, portraying a rule-breaking realm operating off on its own. Silicon Valley’s champions and boosters have perpetuated the mythos of an innovative land of garage startups and capitalist cowboys. The reality is different. Computing’s history is modern history—and especially American history—in miniature.

The United States’ extraordinary push to develop nuclear and other weapons during World War II unleashed a torrent of public spending on science and technology. The efforts thus funded trained a generation of technologists and fostered multiple computing projects, including ENIAC — the first all-digital computer, completed in 1946. Many of those funding streams eventually became permanent, financing basic and applied research at a scale unimaginable before the war.

The strategic priorities of the Cold War drove rapid development of transistorized technologies on both sides of the Iron Curtain. In a grim race for nuclear supremacy amid an optimistic age of scientific aspiration, government became computing’s biggest research sponsor and largest single customer. Colleges and universities churned out engineers and scientists. Electronic data processing defined the American age of the Organization Man, a nation built and sorted on punch cards. 

The space race, especially after the Soviets beat the US into space with the launch of the Sputnik orbiter in late 1957, jump-started a silicon semiconductor industry in a sleepy agricultural region of Northern California, eventually shifting tech’s center of entrepreneurial gravity from East to West. Lanky engineers in white shirts and narrow ties turned giant machines into miniature electronic ones, sending Americans to the moon. (Of course, there were also women playing key, though often unrecognized, roles.) 

In 1965, semiconductor pioneer Gordon Moore, who with colleagues had broken ranks with his boss William Shockley of Shockley Semiconductor to launch a new company, predicted that the number of transistors on an integrated circuit would double every year while costs would stay about the same. Moore’s Law was proved right. As computing power became greater and cheaper, digital innards replaced mechanical ones in nearly everything from cars to coffeemakers.

A new generation of computing innovators arrived in the Valley, beneficiaries of America’s great postwar prosperity but now protesting its wars and chafing against its culture. Their hair grew long; their shirts stayed untucked. Mainframes were seen as tools of the Establishment, and achievement on earth overshadowed shooting for the stars. Small was beautiful. Smiling young men crouched before home-brewed desktop terminals and built motherboards in garages. A beatific newly minted millionaire named Steve Jobs explained how a personal computer was like a bicycle for the mind. Despite their counterculture vibe, they were also ruthlessly competitive businesspeople. Government investment ebbed and private wealth grew. 

The ARPANET became the commercial internet. What had been a walled garden accessible only to government-funded researchers became an extraordinary new platform for communication and business, as the screech of dial-up modems connected millions of home computers to the World Wide Web. Making this strange and exciting world accessible were very young companies with odd names: Netscape, eBay, Amazon.com, Yahoo.

By the turn of the millennium, a president had declared that the era of big government was over and the future lay in the internet’s vast expanse. Wall Street clamored for tech stocks, then didn’t; fortunes were made and lost in months. After the bust, new giants emerged. Computers became smaller: a smartphone in your pocket, a voice assistant in your kitchen. They grew larger, into the vast data banks and sprawling server farms of the cloud. 

Fed with oceans of data, largely unfettered by regulation, computing got smarter. Autonomous vehicles trawled city streets, humanoid robots leaped across laboratories, algorithms tailored social media feeds and matched gig workers to customers. Fueled by the explosion of data and computation power, artificial intelligence became the new new thing. Silicon Valley was no longer a place in California but shorthand for a global industry, although tech wealth and power were consolidated ever more tightly in five US-based companies with a combined market capitalization greater than the GDP of Japan. 

It was a trajectory of progress and wealth creation that some believed inevitable and enviable. Then, starting two years ago, resurgent nationalism and an economy-upending pandemic scrambled supply chains, curtailed the movement of people and capital, and reshuffled the global order. Smartphones recorded death on the streets and insurrection at the US Capitol. AI-enabled drones surveyed the enemy from above and waged war on those below. Tech moguls sat grimly before congressional committees, their talking points ringing hollow to freshly skeptical lawmakers.

Our relationship with computing had suddenly changed.

The past seven decades have produced stunning breakthroughs in science and engineering. The pace and scale of change would have amazed our mid-20th-century forebears. Yet techno-optimistic assurances about the positive social power of a networked computer on every desk have proved tragically naïve. The information age of late has been more effective at fomenting discord than advancing enlightenment, exacerbating social inequities and economic inequalities rather than transcending them. 

The technology industry—produced and made wealthy by these immense advances in computing—has failed to imagine alternative futures both bold and practicable enough to address humanity’s gravest health and climatic challenges. Silicon Valley leaders promise space colonies while building grand corporate headquarters below sea level. They proclaim that the future lies in the metaverse, in the blockchain, in cryptocurrencies whose energy demands exceed those of entire nation-states.

The future of computing feels more tenuous, harder to map in a sea of information and disruption. That is not to say that predictions are futile, or that those who build and use technology have no control over where computing goes next. To the contrary: history abounds with examples of individual and collective action that altered social and political outcomes. But there are limits to the power of technology to overcome earthbound realities of politics, markets, and culture. 

To understand computing’s future, look beyond the machine.

1. The hoodie problem

First, look to who will get to build the future of computing.

The tech industry long celebrated itself as a meritocracy, where anyone could get ahead on the strength of technical know-how and innovative spark. This assertion has been belied in recent years by the persistence of sharp racial and gender imbalances, particularly in the field’s topmost ranks. Men still vastly outnumber women in the C-suites and in key engineering roles at tech companies. Venture capital investors and venture-backed entrepreneurs remain mostly white and male. The number of Black and Latino technologists of any gender remains shamefully tiny. 

Much of today’s computing innovation was born in Silicon Valley. And looking backward, it becomes easier to understand where tech’s meritocratic notions come from, as well as why its diversity problem has been difficult to solve.

Silicon Valley was once indeed a place where people without family money or connections could make a career and possibly a fortune. Those lanky engineers of the Valley’s space-age 1950s and 1960s were often heartland boys from middle-class backgrounds, riding the extraordinary escalator of upward mobility that America delivered to white men like them in the prosperous quarter-century after the end of World War II.  

Many went to college on the GI Bill and won merit scholarships to places like Stanford and MIT, or paid minimal tuition at state universities like the University of California, Berkeley. They had their pick of engineering jobs as defense contracts fueled the growth of the electronics industry. Most had stay-at-home wives whose unpaid labor freed husbands to focus their energy on building new products, companies, markets. Public investments in suburban infrastructure made their cost of living reasonable, the commutes easy, the local schools excellent. Both law and market discrimination kept these suburbs nearly entirely white. 

In the last half-century, political change and market restructuring slowed this escalator of upward mobility to a crawl, right at the time that women and minorities finally had opportunities to climb on. By the early 2000s, the homogeneity among those who built and financed tech products entrenched certain assumptions: that women were not suited for science, that tech talent always came dressed in a hoodie and had attended an elite school—whether or not someone graduated. It limited thinking about what problems to solve, what technologies to build, and what products to ship.

Having so much technology built by a narrow demographic—highly educated, West Coast based, and disproportionately white, male, and young—becomes especially problematic as the industry and its products grow and globalize. It has fueled considerable investment in driverless cars without enough attention to the roads and cities these cars will navigate. It has propelled an embrace of big data without enough attention to the human biases contained in that data. It has produced social media platforms that have fueled political disruption and violence at home and abroad. It has left rich areas of research and potentially vast market opportunities neglected.

Computing’s lack of diversity has always been a problem, but only in the past few years has it become a topic of public conversation and a target for corporate reform. That’s a positive sign. The immense wealth generated within Silicon Valley has also created a new generation of investors, including women and minorities who are deliberately putting their money in companies run by people who look like them. 

But change is painfully slow. The market will not take care of imbalances on its own.

For the future of computing to include more diverse people and ideas, there needs to be a new escalator of upward mobility: inclusive investments in research, human capital, and communities that give a new generation the same assist the first generation of space-age engineers enjoyed. The builders cannot do it alone.

2. Brainpower monopolies

Then, look at who the industry's customers are and how it is regulated.

The military investment that undergirded computing’s first all-digital decades still casts a long shadow. Major tech hubs of today—the Bay Area, Boston, Seattle, Los Angeles—all began as centers of Cold War research and military spending. As the industry further commercialized in the 1970s and 1980s, defense activity faded from public view, but it hardly disappeared. For academic computer science, the Pentagon became an even more significant benefactor starting with Reagan-era programs like the Strategic Defense Initiative, the computer-enabled system of missile defense memorably nicknamed “Star Wars.”

In the past decade, after a brief lull in the early 2000s, the ties between the technology industry and the Pentagon have tightened once more. Some in Silicon Valley protest its engagement in the business of war, but their objections have done little to slow the growing stream of multibillion-dollar contracts for cloud computing and cyberweaponry. It is almost as if Silicon Valley is returning to its roots. 

Defense work is one dimension of the increasingly visible and freshly contentious entanglement between the tech industry and the US government. Another is the growing call for new technology regulation and antitrust enforcement, with potentially significant consequences for how technological research will be funded and whose interests it will serve. 

The extraordinary consolidation of wealth and power in the technology sector and the role the industry has played in spreading disinformation and sparking political ruptures have led to a dramatic change in the way lawmakers approach the industry. The US has had little appetite for reining in the tech business since the Department of Justice took on Microsoft 20 years ago. Yet after decades of bipartisan chumminess and laissez-faire tolerance, antitrust and privacy legislation is now moving through Congress. The Biden administration has appointed some of the industry’s most influential tech critics to key regulatory roles and has pushed for significant increases in regulatory enforcement. 

The five giants—Amazon, Apple, Facebook, Google, and Microsoft—now spend as much or more lobbying in Washington, DC, as banks, pharmaceutical companies, and oil conglomerates, aiming to influence the shape of anticipated regulation. Tech leaders warn that breaking up large companies will open a path for Chinese firms to dominate global markets, and that regulatory intervention will squelch the innovation that made Silicon Valley great in the first place.

Viewed through a longer lens, the political pushback against Big Tech’s power is not surprising. Although sparked by the 2016 American presidential election, the Brexit referendum, and the role social media disinformation campaigns may have played in both, the political mood echoes one seen over a century ago. 

We might be looking at a tech future where companies remain large but regulated, comparable to the technology and communications giants of the middle part of the 20th century. This model did not squelch technological innovation. Today, it could actually aid its growth and promote the sharing of new technologies. 

Take the case of AT&T, a regulated monopoly for seven decades before its ultimate breakup in the early 1980s. In exchange for allowing it to provide universal telephone service, the US government required AT&T to stay out of other communication businesses, first by selling its telegraph subsidiary and later by steering clear of computing. 

Like any for-profit enterprise, AT&T had a hard time sticking to the rules, especially after the computing field took off in the 1940s. One of these violations resulted in a 1956 consent decree under which the US required the telephone giant to license the inventions produced in its industrial research arm, Bell Laboratories, to other companies. One of those products was the transistor. Had AT&T not been forced to share this and related technological breakthroughs with other laboratories and firms, the trajectory of computing would have been dramatically different.

Right now, industrial research and development activities are extraordinarily concentrated once again. Regulators mostly looked the other way over the past two decades as tech firms pursued growth at all costs, and as large companies acquired smaller competitors. Top researchers left academia for high-paying jobs at the tech giants as well, consolidating a huge amount of the field’s brainpower in a few companies. 

More so than at any other time in Silicon Valley’s ferociously entrepreneurial history, it is remarkably difficult for new entrants and their technologies to sustain meaningful market share without being subsumed or squelched by a larger, well-capitalized, market-dominant firm. More of computing’s big ideas are coming from a handful of industrial research labs and, not surprisingly, reflecting the business priorities of a select few large tech companies.

Tech firms may decry government intervention as antithetical to their ability to innovate. But follow the money, and the regulation, and it is clear that the public sector has played a critical role in fueling new computing discoveries—and building new markets around them—from the start. 

3. Location, location, location

Last, think about where the business of computing happens.

The question of where “the next Silicon Valley” might grow has consumed politicians and business strategists around the world for far longer than you might imagine. French president Charles de Gaulle toured the Valley in 1960 to try to unlock its secrets. Many world leaders have followed in the decades since. 

Silicon Somethings have sprung up across many continents, their gleaming research parks and California-style subdivisions designed to lure a globe-trotting workforce and cultivate a new set of tech entrepreneurs. Many have fallen short of their startup dreams, and all have fallen short of the standard set by the original, which has retained an extraordinary ability to generate one blockbuster company after another, through boom and bust. 

While tech startups have begun to appear in a wider variety of places, about three in 10 venture capital firms and close to 60% of available investment dollars remain concentrated in the Bay Area. After more than half a century, it remains the center of computing innovation. 

It does, however, have significant competition. China has been making the kinds of investments in higher education and advanced research that the US government made in the early Cold War, and its technology and internet sectors have produced enormous companies with global reach. 

The specter of Chinese competition has driven bipartisan support for renewed American tech investment, including a potentially massive infusion of public subsidies into the US semiconductor industry. American companies have been losing ground to Asian competitors in the chip market for years. The economy-choking consequences of this became painfully clear when covid-related shutdowns slowed chip imports to a trickle, throttling production of the many consumer goods that rely on semiconductors to function.

As when Japan posed a competitive threat 40 years ago, the American agitation over China runs the risk of slipping into corrosive stereotypes and lightly veiled xenophobia. But it is also true that computing technology reflects the state and society that makes it, whether it be the American military-industrial complex of the late 20th century, the hippie-influenced West Coast culture of the 1970s, or the communist-capitalist China of today.

What’s next

Historians like me dislike making predictions. We know how difficult it is to map the future, especially when it comes to technology, and how often past forecasters have gotten things wrong. 

Intensely forward-thinking and impatient with incrementalism, many modern technologists—especially those at the helm of large for-profit enterprises—are the opposite. They disdain politics, and resist getting dragged down by the realities of past and present as they imagine what lies over the horizon. They dream of a new age of quantum computers and artificial general intelligence, where machines do most of the work and much of the thinking. 

They could use a healthy dose of historical thinking. 

Whatever computing innovations will appear in the future, what matters most is how our culture, businesses, and society choose to use them. And those of us who analyze the past also should take some inspiration and direction from the technologists who have imagined what is not yet possible. Together, looking forward and backward, we may yet be able to get where we need to go. 

""

How to break free of Spotify’s algorithm

By delivering what people seem to want, has Spotify killed the joy of music discovery?

  • Tiffany Ng archive page

Joaquin Phoenix in the film Her, 2013.

AI’s growth needs the right interface

Enough with passive consumption. UX designer Cliff Kuang says it’s way past time we take interfaces back into our own hands.

  • Cliff Kuang archive page

""

Move over, text: Video is the new medium of our lives

We are increasingly learning and communicating by means of the moving image. It will shift our culture in untold ways.

  • Clive Thompson archive page

computer and our future essay

African farmers are using private satellite data to improve crop yields

A number of farmers are turning to space-based monitoring to get a better picture of what their crops need.

  • Orji Sunday archive page

Stay connected

Get the latest updates from mit technology review.

Discover special offers, top stories, upcoming events, and more.

Thank you for submitting your email!

It looks like something went wrong.

We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at [email protected] with a list of newsletters you’d like to receive.

The present and future of AI

Finale Doshi-Velez on how AI is shaping our lives and how we can shape AI.


Finale Doshi-Velez, the John L. Loeb Professor of Engineering and Applied Sciences. (Photo courtesy of Eliza Grinnell/Harvard SEAS)

How has artificial intelligence changed and shaped our world over the last five years? How will AI continue to impact our lives in the coming years? Those were the questions addressed in the most recent report from the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted at Stanford University that will study the status of AI technology and its impacts on the world over the next 100 years.

The 2021 report is the second in a series that will be released every five years until 2116. Titled “Gathering Strength, Gathering Storms,” the report explores the various ways AI is increasingly touching people’s lives in settings that range from movie recommendations and voice assistants to autonomous driving and automated medical diagnoses.

Barbara Grosz, the Higgins Research Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is a member of the standing committee overseeing the AI100 project, and Finale Doshi-Velez, Gordon McKay Professor of Computer Science, is part of the panel of interdisciplinary researchers who wrote this year’s report.

We spoke with Doshi-Velez about the report, what it says about the role AI is currently playing in our lives, and how it will change in the future.  

Q: Let's start with a snapshot: What is the current state of AI and its potential?

Doshi-Velez: Some of the biggest changes in the last five years have been how well AIs now perform in large data regimes on specific types of tasks.  We've seen [DeepMind’s] AlphaZero become the best Go player entirely through self-play, and everyday uses of AI such as grammar checks and autocomplete, automatic personal photo organization and search, and speech recognition become commonplace for large numbers of people.  

In terms of potential, I'm most excited about AIs that might augment and assist people.  They can be used to drive insights in drug discovery, help with decision making such as identifying a menu of likely treatment options for patients, and provide basic assistance, such as lane keeping while driving or text-to-speech based on images from a phone for the visually impaired.  In many situations, people and AIs have complementary strengths. I think we're getting closer to unlocking the potential of people and AI teams.


Q: Over the course of 100 years, these reports will tell the story of AI and its evolving role in society. Even though there have only been two reports, what's the story so far?

There's actually a lot of change even in five years.  The first report is fairly rosy.  For example, it mentions how algorithmic risk assessments may mitigate the human biases of judges.  The second has a much more mixed view.  I think this comes from the fact that as AI tools have come into the mainstream — both in higher stakes and everyday settings — we are appropriately much less willing to tolerate flaws, especially discriminatory ones. There's also been questions of information and disinformation control as people get their news, social media, and entertainment via searches and rankings personalized to them. So, there's a much greater recognition that we should not be waiting for AI tools to become mainstream before making sure they are ethical.

Q: What is the responsibility of institutes of higher education in preparing students and the next generation of computer scientists for the future of AI and its impact on society?

First, I'll say that the need to understand the basics of AI and data science starts much earlier than higher education!  Children are being exposed to AIs as soon as they click on videos on YouTube or browse photo albums. They need to understand aspects of AI such as how their actions affect future recommendations.

But for computer science students in college, I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc.  I'm really excited that Harvard has the Embedded EthiCS program to provide some of this education.  Of course, this is an addition to standard good engineering practices like building robust models, validating them, and so forth, which is all a bit harder with AI.


Q: Your work focuses on machine learning with applications to healthcare, which is also an area of focus of this report. What is the state of AI in healthcare? 

A lot of AI in healthcare has been on the business end, used for optimizing billing, scheduling surgeries, that sort of thing.  When it comes to AI for better patient care, which is what we usually think about, there are few legal, regulatory, and financial incentives to do so, and many disincentives. Still, there's been slow but steady integration of AI-based tools, often in the form of risk scoring and alert systems.

In the near future, two applications that I'm really excited about are triage in low-resource settings — having AIs do initial reads of pathology slides, for example, if there are not enough pathologists, or get an initial check of whether a mole looks suspicious — and ways in which AIs can help identify promising treatment options for discussion with a clinician team and patient.

Q: Any predictions for the next report?

I'll be keen to see where currently nascent AI regulation initiatives have gotten to. Accountability is such a difficult question in AI; it's tricky to nurture both innovation and basic protections. Perhaps the most important innovation will be in approaches for AI accountability.



Essay on the Future of Computer Technology

1. Introduction

The introduction section of "The Future of Computer Technology" serves as a gateway into the exploration of the advancements, challenges, and potential impact of computer technology in our society. As we delve into the future of computer technology, it is essential to understand the rapid evolution and transformation that has taken place over the past few decades. From the advent of the first programmable computers to the emergence of artificial intelligence and quantum computing, the landscape of technology has significantly evolved, reshaping industries, communication, and daily life.

In this section, we will examine the driving forces behind the development of computer technology and the crucial role it plays in shaping the future of various sectors, including healthcare, finance, education, and transportation. Additionally, we will consider the ethical and societal implications of these advancements, as well as the potential risks and challenges that come with the rapid progression of technology. By understanding the roots of computer technology and its current state, we can better comprehend the trajectory it is heading towards, laying the groundwork for the in-depth exploration of its future developments and impacts in the subsequent sections of this essay.

2. Advancements in Hardware

In recent years, significant advancements have been made in the field of computer hardware, particularly in the areas of quantum computing and neuromorphic chips. Quantum computing, a revolutionary approach to data processing, has the potential to solve complex problems at an unprecedented speed by harnessing the power of quantum mechanics. This technology leverages quantum bits, or qubits, which can exist in multiple states simultaneously, allowing for the parallel processing of vast amounts of data. As a result, quantum computers have the capability to outperform traditional binary-based systems in tasks such as cryptography, optimization, and simulation.

On the other hand, neuromorphic chips are designed to mimic the structure and function of the human brain, offering a new paradigm for computing. Inspired by the brain's neural networks, these chips are equipped with artificial synapses and neurons, enabling them to process and interpret information in a manner akin to human cognition. This neuromorphic approach holds promise for applications in artificial intelligence, robotics, and pattern recognition, as it can potentially deliver higher efficiency and lower power consumption compared to conventional computing architectures.

The advancements in hardware, particularly in the realms of quantum computing and neuromorphic chips, signal a transformative shift in the future of computer technology. These innovations have the potential to revolutionize the way we process and analyze data, opening up new possibilities for addressing complex challenges across various domains. As we continue to explore and harness the capabilities of these cutting-edge technologies, the landscape of computing is poised to undergo profound and impactful changes, paving the way for a future that is both exciting and transformative.

2.1. Quantum Computing

Quantum computing is a revolutionary field that has the potential to greatly impact the future of computer technology. Unlike traditional computers that use bits to process information, quantum computers use quantum bits or qubits, which can exist in multiple states at once due to the principles of quantum mechanics. This allows them to perform complex calculations at incredible speeds, making them ideal for solving problems that are currently infeasible for classical computers.

One of the key concepts in quantum computing is superposition, where a qubit can exist in both 0 and 1 states simultaneously, and entanglement, where the state of one qubit is dependent on the state of another, regardless of the distance between them. These properties enable quantum computers to process and analyze vast amounts of data in parallel, leading to significant advancements in fields such as cryptography, drug discovery, and optimization problems.

However, quantum computing also presents unique challenges, such as the need for sophisticated error correction due to the inherent fragility of qubits. Additionally, the technology is still in its infancy and requires further research and development to become commercially viable. Nevertheless, the potential applications of quantum computing are vast, and as the technology continues to advance, it has the power to revolutionize the future of computer technology.
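The two properties described above can be illustrated with an ordinary classical simulation of a state vector — a NumPy sketch, not real quantum hardware — using the standard Hadamard and CNOT gates written out by hand.

```python
import numpy as np

# Minimal (classical!) state-vector simulation of superposition and entanglement.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Superposition: |0> through a Hadamard gives equal amplitude on |0> and |1>.
qubit = H @ np.array([1, 0])
print("single-qubit amplitudes:", qubit)          # [0.707, 0.707]

# Entanglement: Hadamard on the first qubit, then CNOT, yields a Bell state.
two_qubits = np.kron(H @ np.array([1, 0]), np.array([1, 0]))   # |+0>
bell = CNOT @ two_qubits
print("Bell state amplitudes:", bell)             # [0.707, 0, 0, 0.707]
```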

2.2. Neuromorphic Chips

Neuromorphic chips, also known as brain-inspired chips, are a type of hardware that mimics the structure and function of the human brain. These chips are designed to process information in a way that is similar to how neurons in the brain process signals. By using parallel processing and interconnected nodes, neuromorphic chips are capable of performing tasks such as pattern recognition, sensor processing, and data analysis with greater efficiency and flexibility than traditional computer hardware.

One of the key advantages of neuromorphic chips is their ability to adapt and learn from new information, similar to the plasticity of the human brain. This means that they can continuously improve their performance and efficiency over time, making them ideal for applications such as artificial intelligence, robotics, and autonomous systems. Additionally, neuromorphic chips have the potential to significantly reduce power consumption and increase processing speeds, which is essential for the development of future computer technology.

Overall, neuromorphic chips represent a promising advancement in hardware technology, offering new opportunities for the development of intelligent and adaptive computing systems. As researchers and engineers continue to explore the potential of these brain-inspired chips, it is likely that we will see significant advancements in the field of computer technology in the coming years, with implications for a wide range of industries and applications.
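Neuromorphic hardware is typically programmed with spiking-neuron models rather than conventional instructions. The sketch below simulates a single leaky integrate-and-fire neuron in plain Python; all constants and the random input are illustrative, and real neuromorphic chips run many such neurons in parallel in silicon.

```python
import numpy as np

# Toy leaky integrate-and-fire neuron, the kind of spiking model that
# neuromorphic chips are built to run natively. All constants are illustrative.
dt, tau, v_rest, v_thresh, v_reset = 1.0, 20.0, 0.0, 1.0, 0.0
v = v_rest
spikes = []

rng = np.random.default_rng(0)
input_current = rng.uniform(0.0, 0.12, size=200)   # random input drive

for t, i_in in enumerate(input_current):
    v += dt * (-(v - v_rest) / tau + i_in)   # leak toward rest, integrate input
    if v >= v_thresh:                        # threshold crossing emits a spike
        spikes.append(t)
        v = v_reset                          # membrane potential resets

print(f"{len(spikes)} spikes, first few at steps {spikes[:5]}")
```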

3. Artificial Intelligence and Machine Learning

Artificial intelligence (AI) and machine learning (ML) are two of the most rapidly advancing fields in computer technology. AI refers to the ability of a machine to replicate human cognitive functions, such as learning, reasoning, and problem-solving. ML, on the other hand, is a subset of AI that involves algorithms allowing machines to learn from data and improve their performance over time without being explicitly programmed. These technologies have the potential to revolutionize various industries, including healthcare, finance, transportation, and more.

One of the key aspects of AI and ML is their ability to analyze large datasets and extract valuable insights, leading to more informed decision-making. In healthcare, for example, AI and ML can be used to diagnose diseases, personalize treatment plans, and predict patient outcomes. In finance, these technologies can be utilized for fraud detection, risk assessment, and portfolio management. Additionally, AI-powered autonomous vehicles are poised to transform the transportation industry, making roads safer and transportation more efficient.

Furthermore, AI and ML are also driving innovation in natural language processing, computer vision, and robotics. With advancements in natural language processing, machines can understand and respond to human language, leading to the development of virtual assistants and chatbots. Computer vision enables machines to interpret and understand visual information, powering applications such as facial recognition and autonomous drones. Robotics, combined with AI and ML, has the potential to automate tasks in manufacturing, logistics, and other industries, leading to increased efficiency and productivity.

In summary, AI and ML are revolutionizing the future of computer technology by enabling machines to learn, analyze data, and perform tasks that were once the exclusive domain of humans. With their wide-ranging applications and potential to drive innovation across industries, AI and ML are poised to shape the future of technology in profound ways.
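The phrase “learn from data and improve without being explicitly programmed” can be made concrete with a few lines of scikit-learn. The dataset and model below are stand-ins chosen for brevity; no classification rules are hand-written, they are all inferred from labeled examples.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# The rules for classifying tumors are never written by hand; the model
# infers them from labeled examples, which is the core of machine learning.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)                 # "learn" from labeled data
print("held-out accuracy:", model.score(X_test, y_test))
```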

4. Cybersecurity Challenges and Solutions

As technology continues to advance, cybersecurity has become a critical concern for individuals, businesses, and governments alike. The increasing frequency and sophistication of cyber attacks pose significant challenges: data breaches, ransomware, phishing scams, and malware infections can all lead to financial losses, reputational damage, and the compromise of sensitive information.

In response, a range of defenses has been developed, including robust firewalls, encryption technologies, and multi-factor authentication. Artificial intelligence and machine learning algorithms are also being used for proactive threat detection and rapid response to attacks. Just as important are ongoing efforts to educate individuals and organizations in sound security practices. As computer technology continues to evolve, staying vigilant and proactive about cybersecurity is essential to guard against emerging threats and vulnerabilities.
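
As one small, concrete example of the defensive measures mentioned above, the sketch below uses only Python's standard library to store a password as a salted, slow hash rather than in plain text, so that a stolen database does not directly reveal users' passwords. It is an illustration of the idea, not a complete authentication system; the iteration count and the sample passwords are assumed values chosen for the example.

    # Storing passwords as salted, slow hashes (standard library only).
    import hashlib
    import hmac
    import os

    def hash_password(password, salt=None):
        """Return (salt, digest) for the given password."""
        salt = salt or os.urandom(16)   # a fresh random salt per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def verify_password(password, salt, expected):
        """Recompute the hash and compare in constant time."""
        _, digest = hash_password(password, salt)
        return hmac.compare_digest(digest, expected)

    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("wrong guess", salt, stored))                   # False

The salt defeats precomputed lookup tables, and the deliberately slow hash makes brute-force guessing expensive, which is why this pattern is a standard building block alongside firewalls, encryption, and multi-factor authentication.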

5. Ethical and Social Implications of Future Technologies

The future of computer technology brings to the forefront ethical and social questions that need to be addressed. As technology advances, there is growing concern about data privacy, cybersecurity, and the impact of automation on employment.

One key ethical issue is the collection and use of personal data. With technology companies gathering ever more information about individuals, ethical guidelines are needed to ensure that privacy is respected and data is used responsibly. The rise of artificial intelligence and automation also raises questions about the future of work and the displacement of jobs; the benefits of technological progress should be shared equitably, with support in place for those whose jobs are affected.

From a social perspective, future technologies risk deepening existing inequalities. Unequal access to technology and digital skills can separate those who can reach new opportunities from those who cannot, and these disparities must be addressed if the benefits of technology are to be available to all members of society. In conclusion, as we look toward the future of computer technology, it is crucial to consider its ethical and social implications; by addressing them proactively, we can work toward a future in which technology benefits society as a whole.

Envisioning the future of computing

How will advances in computing transform human society?

MIT students contemplated this impending question as part of the Envisioning the Future of Computing Prize — an essay contest in which they were challenged to imagine ways that computing technologies could improve our lives, as well as the pitfalls and dangers associated with them.

Offered for the first time this year, the Institute-wide competition invited MIT undergraduate and graduate students to share their ideas, aspirations, and vision for what they think a future propelled by advancements in computing holds. Nearly 60 students put pen to paper, including those majoring in mathematics, philosophy, electrical engineering and computer science, brain and cognitive sciences, chemical engineering, urban studies and planning, and management, and entered their submissions.

Students dreamed up highly inventive scenarios for how the technologies of today and tomorrow could impact society, for better or worse. Some recurring themes emerged, such as tackling issues in climate change and health care. Others proposed ideas for particular technologies that ranged from digital twins as a tool for navigating the deluge of information online to a cutting-edge platform powered by artificial intelligence, machine learning, and biosensors to create personalized storytelling films that help individuals understand themselves and others.

Conceived of by the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative of the MIT Schwarzman College of Computing in collaboration with the School of Humanities, Arts, and Social Sciences (SHASS), the intent of the competition was “to create a space for students to think in a creative, informed, and rigorous way about the societal benefits and costs of the technologies they are or will be developing,” says Caspar Hare, professor of philosophy, co-associate dean of SERC, and the lead organizer of the Envisioning the Future of Computing Prize. “We also wanted to convey that MIT values such thinking.”

Prize winners

The contest implemented a two-stage evaluation process wherein all essays were reviewed anonymously by a panel of MIT faculty members from the college and SHASS for the initial round. Three qualifiers were then invited to present their entries at an awards ceremony on May 8, followed by a Q&A with a judging panel and live in-person audience for the final round.

The winning entry was awarded to Robert Cunningham '23, a recent graduate in math and physics, for his paper on the implications of a personalized language model that is fine-tuned to predict an individual’s writing based on their past texts and emails. Told from the perspective of three fictional characters: Laura, founder of the tech startup ScribeAI, and Margaret and Vincent, a couple in college who are frequent users of the platform, readers gained insights into the societal shifts that take place and the unforeseen repercussions of the technology.

Cunningham, who took home the grand prize of $10,000, says he came up with the concept for his essay in late January while thinking about the upcoming release of GPT-4 and how it might be applied. Created by the developers of ChatGPT — an AI chatbot that has managed to capture popular imagination for its capacity to imitate human-like text, images, audio, and code — GPT-4, which was unveiled in March, is the newest version of OpenAI’s language model systems.

“GPT-4 is wild in reality, but some rumors before it launched were even wilder, and I had a few long plane rides to think about them! I enjoyed this opportunity to solidify a vague notion into a piece of writing, and since some of my favorite works of science fiction are short stories, I figured I'd take the chance to write one,” Cunningham says.

The other two finalists, awarded $5,000 each, included Gabrielle Kaili-May Liu '23, a recent graduate in mathematics with computer science, and brain and cognitive sciences, for her entry on using the reinforcement learning with human feedback technique as a tool for transforming human interactions with AI; and Abigail Thwaites and Eliot Matthew Watkins, graduate students in the Department of Philosophy and Linguistics, for their joint submission on automatic fact checkers, an AI-driven software that they argue could potentially help mitigate the spread of misinformation and be a profound social good.

“We were so excited to see the amazing response to this contest. It made clear how much students at MIT, contrary to stereotype, really care about the wider implications of technology,” says Daniel Jackson, professor of computer science and one of the final-round judges. “So many of the essays were incredibly thoughtful and creative. Robert’s story was a chilling, but entirely plausible take on our AI future; Abigail and Eliot’s analysis brought new clarity to what harms misinformation actually causes; and Gabrielle’s piece gave a lucid overview of a prominent new technology. I hope we’ll be able to run this contest every year, and that it will encourage all our students to broaden their perspectives even further.”

Fellow judge Graham Jones, professor of anthropology, adds: “The winning entries reflected the incredible breadth of our students’ engagement with socially responsible computing. They challenge us to think differently about how to design computational technologies, conceptualize social impacts, and imagine future scenarios. Working with a cross-disciplinary panel of judges catalyzed lots of new conversations. As a sci-fi fan, I was thrilled that the top prize went to such a stunning piece of speculative fiction!”

Other judges on the panel for the final round included:

  • Dan Huttenlocher, dean of the MIT Schwarzman College of Computing;
  • Aleksander Madry, Cadence Design Systems Professor of Computer Science;
  • Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science;
  • Georgia Perakis, co-associate dean of SERC and the William F. Pounds Professor of Management; and
  • Agustin Rayo, dean of the MIT School of Humanities, Arts, and Social Sciences.

Honorable mentions

In addition to the grand prize winner and runners up, 12 students were recognized with honorable mentions for their entries, with each receiving $500.

The honorees and the title of their essays include:

  • Alexa Reese Canaan, Technology and Policy Program, “A New Way Forward: The Internet & Data Economy”;
  • Fernanda De La Torre Romo, Department of Brain and Cognitive Sciences, “The Empathic Revolution Using AI to Foster Greater Understanding and Connection”;
  • Samuel Florin, Department of Mathematics, "Modeling International Solutions for the Climate Crisis";
  • Claire Gorman, Department of Urban Studies and Planning (DUSP), “Grounding AI — Envisioning Inclusive Computing for Soil Carbon Applications”;
  • Kevin Hansom, MIT Sloan School of Management, “Quantum Powered Personalized Pharmacogenetic Development and Distribution Model”;
  • Sharon Jiang, Department of Electrical Engineering and Computer Science (EECS), “Machine Learning Driven Transformation of Electronic Health Records”;
  • Cassandra Lee, Media Lab, “Considering an Anti-convenience Funding Body”;
  • Martin Nisser, EECS, "Towards Personalized On-Demand Manufacturing";
  • Andi Qu, EECS, "Revolutionizing Online Learning with Digital Twins";
  • David Bradford Ramsay, Media Lab, “The Perils and Promises of Closed Loop Engagement”;
  • Shuvom Sadhuka, EECS, “Overcoming the False Trade-off in Genomics: Privacy and Collaboration”; and
  • Leonard Schrage, DUSP, “Embodied-Carbon-Computing.”

The Envisioning the Future of Computing Prize was supported by MAC3 Impact Philanthropies.

Artificial Intelligence and the Future of Humans

Experts say the rise of artificial intelligence will make most people better off over the next decade, but many have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will

Digital life is augmenting human capacities and disrupting eons-old human activities. Code-driven systems have spread to more than half of the world’s inhabitants in ambient information and connectivity, offering previously unimagined opportunities and unprecedented threats. As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today?

Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018.

The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities. They spoke of the wide-ranging possibilities; that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customized future.

Many focused their optimistic remarks on health care and the many possible applications of AI in diagnosing and treating patients or helping senior citizens live fuller and healthier lives. They were also enthusiastic about AI’s role in contributing to broad public-health programs built around massive amounts of data that may be captured in the coming years about everything from personal genomes to nutrition. Additionally, a number of these experts predicted that AI would abet long-anticipated changes in formal and informal education systems.

Yet, most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.

[Table omitted: main themes experts cited about AI-related threats and potential remedies]

Specifically, participants were asked to consider the following:

“Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties.

Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today?”

Overall, and despite the downsides they fear, 63% of respondents in this canvassing said they are hopeful that most individuals will be mostly better off in 2030, and 37% said people will not be better off.

A number of the thought leaders who participated in this canvassing said humans’ expanding reliance on technological systems will only go well if close attention is paid to how these tools, platforms and networks are engineered, distributed and updated. Some of the powerful, overarching answers included those from:

Sonia Katyal , co-director of the Berkeley Center for Law and Technology and a member of the inaugural U.S. Commerce Department Digital Economy Board of Advisors, predicted, “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”

Erik Brynjolfsson , director of the MIT Initiative on the Digital Economy and author of “Machine, Platform, Crowd: Harnessing Our Digital Future,” said, “AI and related technologies have already achieved superhuman performance in many areas, and there is little doubt that their capabilities will improve, probably very significantly, by 2030. … I think it is more likely than not that we will use this power to make the world a better place. For instance, we can virtually eliminate global poverty, massively reduce disease and provide better education to almost everyone on the planet. That said, AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons. Neither outcome is inevitable, so the right question is not ‘What will happen?’ but ‘What will we choose to do?’ We need to work aggressively to make sure technology matches our values. This can and must be done at all levels, from government, to business, to academia, and to individual choices.”

Bryan Johnson , founder and CEO of Kernel, a leading developer of advanced neural interfaces, and OS Fund, a venture capital firm, said, “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”

Marina Gorbis , executive director of the Institute for the Future, said, “Without significant changes in our political economy and data governance regimes [AI] is likely to create greater economic inequalities, more surveillance and more programmed and non-human-centric interactions. Every time we program our environments, we end up programming ourselves and our interactions. Humans have to become more standardized, removing serendipity and ambiguity from our interactions. And this ambiguity and complexity is what is the essence of being human.”

Judith Donath , author of “The Social Machine, Designs for Living Online” and faculty fellow at Harvard University’s Berkman Klein Center for Internet & Society, commented, “By 2030, most social situations will be facilitated by bots – intelligent-seeming programs that interact with us in human-like ways. At home, parents will engage skilled bots to help kids with homework and catalyze dinner conversations. At work, bots will run meetings. A bot confidant will be considered essential for psychological well-being, and we’ll increasingly turn to such companions for advice ranging from what to wear to whom to marry. We humans care deeply about how others see us – and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: Programs will compose many of our messages and our online/AR appearance will [be] computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals? Artificially intelligent companions will cultivate the impression that social goals similar to our own motivate them – to be held in good regard, whether as a beloved friend, an admired boss, etc. But their real collaboration will be with the humans and institutions that control them. Like their forebears today, these will be sellers of goods who employ them to stimulate consumption and politicians who commission them to sway opinions.”

Andrew McLaughlin , executive director of the Center for Innovative Thinking at Yale University, previously deputy chief technology officer of the United States for President Barack Obama and global public policy lead for Google, wrote, “2030 is not far in the future. My sense is that innovations like the internet and networked AI have massive short-term benefits, along with long-term negatives that can take decades to be recognizable. AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment.”

Michael M. Roberts , first president and CEO of the Internet Corporation for Assigned Names and Numbers (ICANN) and Internet Hall of Fame member, wrote, “The range of opportunities for intelligent agents to augment human intelligence is still virtually unlimited. The major issue is that the more convenient an agent is, the more it needs to know about you – preferences, timing, capacities, etc. – which creates a tradeoff of more help requires more intrusion. This is not a black-and-white issue – the shades of gray and associated remedies will be argued endlessly. The record to date is that convenience overwhelms privacy. I suspect that will continue.”

danah boyd , a principal researcher for Microsoft and founder and president of the Data & Society Research Institute, said, “AI is a tool that will be used by humans for all sorts of purposes, including in the pursuit of power. There will be abuses of power that involve AI, just as there will be advances in science and humanitarian efforts that also involve AI. Unfortunately, there are certain trend lines that are likely to create massive instability. Take, for example, climate change and climate migration. This will further destabilize Europe and the U.S., and I expect that, in panic, we will see AI be used in harmful ways in light of other geopolitical crises.”

Amy Webb , founder of the Future Today Institute and professor of strategic foresight at New York University, commented, “The social safety net structures currently in place in the U.S. and in many other countries around the world weren’t designed for our transition to AI. The transition through AI will last the next 50 years or more. As we move farther into this third era of computing, and as every single industry becomes more deeply entrenched with AI systems, we will need new hybrid-skilled knowledge workers who can operate in jobs that have never needed to exist before. We’ll need farmers who know how to work with big data sets. Oncologists trained as roboticists. Biologists trained as electrical engineers. We won’t need to prepare our workforce just once, with a few changes to the curriculum. As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging. It’s easy to look back on history through the lens of present – and to overlook the social unrest caused by widespread technological unemployment. We need to address a difficult truth that few are willing to utter aloud: AI will eventually cause a large number of people to be permanently out of work. Just as generations before witnessed sweeping changes during and in the aftermath of the Industrial Revolution, the rapid pace of technology will likely mean that Baby Boomers and the oldest members of Gen X – especially those whose jobs can be replicated by robots – won’t be able to retrain for other kinds of work without a significant investment of time and effort.”

Barry Chudakov , founder and principal of Sertain Research, commented, “By 2030 the human-machine/AI collaboration will be a necessary tool to manage and counter the effects of multiple simultaneous accelerations: broad technology advancement, globalization, climate change and attendant global migrations. In the past, human societies managed change through gut and intuition, but as Eric Teller, CEO of Google X, has said, ‘Our societal structures are failing to keep pace with the rate of change.’ To keep pace with that change and to manage a growing list of ‘wicked problems’ by 2030, AI – or using Joi Ito’s phrase, extended intelligence – will value and revalue virtually every area of human behavior and interaction. AI and advancing technologies will change our response framework and time frames (which in turn, changes our sense of time). Where once social interaction happened in places – work, school, church, family environments – social interactions will increasingly happen in continuous, simultaneous time. If we are fortunate, we will follow the 23 Asilomar AI Principles outlined by the Future of Life Institute and will work toward ‘not undirected intelligence but beneficial intelligence.’ Akin to nuclear deterrence stemming from mutually assured destruction, AI and related technology systems constitute a force for a moral renaissance. We must embrace that moral renaissance, or we will face moral conundrums that could bring about human demise. … My greatest hope for human-machine/AI collaboration constitutes a moral and ethical renaissance – we adopt a moonshot mentality and lock arms to prepare for the accelerations coming at us. My greatest fear is that we adopt the logic of our emerging technologies – instant response, isolation behind screens, endless comparison of self-worth, fake self-presentation – without thinking or responding smartly.”

John C. Havens , executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the Council on Extended Intelligence, wrote, “Now, in 2018, a majority of people around the world can’t access their data, so any ‘human-AI augmentation’ discussions ignore the critical context of who actually controls people’s information and identity. Soon it will be extremely difficult to identify any autonomous or intelligent systems whose algorithms don’t interact with human data in one form or another.”

Batya Friedman , a human-computer interaction professor at the University of Washington’s Information School, wrote, “Our scientific and technological capacities have and will continue to far surpass our moral ones – that is our ability to use wisely and humanely the knowledge and tools that we develop. … Automated warfare – when autonomous weapons kill human beings without human engagement – can lead to a lack of responsibility for taking the enemy’s life or even knowledge that an enemy’s life has been taken. At stake is nothing less than what sort of society we want to live in and how we experience our humanity.”

Greg Shannon , chief scientist for the CERT Division at Carnegie Mellon University, said, “Better/worse will appear 4:1 with the long-term ratio 2:1. AI will do well for repetitive work where ‘close’ will be good enough and humans dislike the work. … Life will definitely be better as AI extends lifetimes, from health apps that intelligently ‘nudge’ us to health, to warnings about impending heart/stroke events, to automated health care for the underserved (remote) and those who need extended care (elder care). As to liberty, there are clear risks. AI affects agency by creating entities with meaningful intellectual capabilities for monitoring, enforcing and even punishing individuals. Those who know how to use it will have immense potential power over those who don’t/can’t. Future happiness is really unclear. Some will cede their agency to AI in games, work and community, much like the opioid crisis steals agency today. On the other hand, many will be freed from mundane, unengaging tasks/jobs. If elements of community happiness are part of AI objective functions, then AI could catalyze an explosion of happiness.”

Kostas Alexandridis , author of “Exploring Complex Dynamics in Multi-agent-based Intelligent Systems,” predicted, “Many of our day-to-day decisions will be automated with minimal intervention by the end-user. Autonomy and/or independence will be sacrificed and replaced by convenience. Newer generations of citizens will become more and more dependent on networked AI structures and processes. There are challenges that need to be addressed in terms of critical thinking and heterogeneity. Networked interdependence will, more likely than not, increase our vulnerability to cyberattacks. There is also a real likelihood that there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control.”

Oscar Gandy , emeritus professor of communication at the University of Pennsylvania, responded, “We already face an ungranted assumption when we are asked to imagine human-machine ‘collaboration.’ Interaction is a bit different, but still tainted by the grant of a form of identity – maybe even personhood – to machines that we will use to make our way through all sorts of opportunities and challenges. The problems we will face in the future are quite similar to the problems we currently face when we rely upon ‘others’ (including technological systems, devices and networks) to acquire things we value and avoid those other things (that we might, or might not be aware of).”

James Scofield O’Rourke , a professor of management at the University of Notre Dame, said, “Technology has, throughout recorded history, been a largely neutral concept. The question of its value has always been dependent on its application. For what purpose will AI and other technological advances be used? Everything from gunpowder to internal combustion engines to nuclear fission has been applied in both helpful and destructive ways. Assuming we can contain or control AI (and not the other way around), the answer to whether we’ll be better off depends entirely on us (or our progeny). ‘The fault, dear Brutus, is not in our stars, but in ourselves, that we are underlings.’”

Simon Biggs , a professor of interdisciplinary arts at the University of Edinburgh, said, “AI will function to augment human capabilities. The problem is not with AI but with humans. As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified. Given historical precedent, one would have to assume it will be our worst qualities that are augmented. My expectation is that in 2030 AI will be in routine use to fight wars and kill people, far more effectively than we can currently kill. As societies we will be less affected by this than we currently are, as we will not be doing the fighting and killing ourselves. Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing. We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully. My other primary concern is to do with surveillance and control. The advent of China’s Social Credit System (SCS) is an indicator of what is likely to come. We will exist within an SCS as AI constructs hybrid instances of ourselves that may or may not resemble who we are. But our rights and affordances as individuals will be determined by the SCS. This is the Orwellian nightmare realised.”

Mark Surman , executive director of the Mozilla Foundation, responded, “AI will continue to concentrate power and wealth in the hands of a few big monopolies based on the U.S. and China. Most people – and parts of the world – will be worse off.”

William Uricchio , media scholar and professor of comparative media studies at MIT, commented, “AI and its related applications face three problems: development at the speed of Moore’s Law, development in the hands of a technological and economic elite, and development without benefit of an informed or engaged public. The public is reduced to a collective of consumers awaiting the next technology. Whose notion of ‘progress’ will prevail? We have ample evidence of AI being used to drive profits, regardless of implications for long-held values; to enhance governmental control and even score citizens’ ‘social credit’ without input from citizens themselves. Like technologies before it, AI is agnostic. Its deployment rests in the hands of society. But absent an AI-literate public, the decision of how best to deploy AI will fall to special interests. Will this mean equitable deployment, the amelioration of social injustice and AI in the public service? Because the answer to this question is social rather than technological, I’m pessimistic. The fix? We need to develop an AI-literate public, which means focused attention in the educational sector and in public-facing media. We need to assure diversity in the development of AI technologies. And until the public, its elected representatives and their legal and regulatory regimes can get up to speed with these fast-moving developments we need to exercise caution and oversight in AI’s development.”

The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education. Some responses are lightly edited for style.
