
13.7 Cosmos & Culture

The power of science and the danger of scientism.


No matter what face you put on it, science is a powerful tool. Here, engineer Marcus Hold works on a nearly completed RoboThespian. Marvels of modern science, these fully interactive and multilingual humanoid robots are increasingly being sold to academic research groups. (Matt Cardy/Getty Images)

Can you be a strident defender of science and still be suspicious of the way it is appropriated within culture? Can you be passionate about the practice and promise of science, yet still remain troubled by the way other beliefs and assumptions are heralded in its name? If such a thing is possible, you may be pro-science but anti-scientism. And, if that is the case, then Steven Pinker may have just pissed you off. But, as we'll see, it might be hard to tell.

Scientism is getting a lot of play these days. It's a difficult word to pin down because it takes on a wide range of meanings depending on who is throwing it around. According to Merriam-Webster online, scientism is:

an exaggerated trust in the efficacy of the methods of natural science applied to all areas of investigation (as in philosophy, the social sciences, and the humanities).

Thus, scientism is the "science can explain everything" (or, at least, "science explains everything important") kind of position some folks take in arguments about religion, philosophy, the value of the humanities, etc.

Steven Pinker has now waded into the scientism debate with a New Republic essay entitled "Science Is Not Your Enemy: An impassioned plea to neglected novelists, embattled professors, and tenure-less historians." For Pinker there really is no such thing as scientism, which, he claims, is "more of a boo-word than a label for any coherent doctrine."

The main purpose of Pinker's essay is to convince friends in the humanities (history, literature, etc.) that adopting methods from the science side of the campus poses no threat to their disciplines. On the contrary, he argues, data mining of historical records may shed new light on the mechanisms of history. And Pinker is clear about the importance of the humanities when he states:

No thinking person should be indifferent to our society's disinvestment from the humanities, which are indispensable to a civilized democracy.

If this were all there was to the scientism debates then I, for one, wouldn't see much need to weigh in. Pinker says a lot that is eminently reasonable in the essay. But there is a much deeper question about science and culture, and Pinker seems to step right over this bumpy ground without even noticing.

Pinker speaks about a sense of recrimination against science for its place in human life. He cites his own experience of such attitudes as a professor at Harvard:

When Harvard reformed its general education requirement in 2006 to 2007, the preliminary task force report introduced the teaching of science without any mention of its place in human knowledge: "Science and technology directly affect our students in many ways, both positive and negative: they have led to life-saving medicines, the internet, more efficient energy storage, and digital entertainment; they also have shepherded nuclear weapons, biological warfare agents, electronic eavesdropping, and damage to the environment." This strange equivocation between the utilitarian and the nefarious was not applied to other disciplines. (Just imagine motivating the study of classical music by noting that it both generates economic activity and inspired the Nazis.) And there was no acknowledgment that we might have good reasons to prefer science and know-how over ignorance and superstition.

What Pinker fails to see in this passage is that it is precisely the enormous power and the enormous success of science that put it in a unique position for misuse by those who claim to speak in its name.

Over the last four centuries the practice we call science has utterly reshaped human civilization in ways that have no precedent. Science, on its own, is simply a practice: a method for asking questions and finding answers. It's a way to approach the world. The ability to harness that practice to create powerful change (via wealth creation or military power) has always carried its own dangers.

The efficacy of science generates a powerful attraction for advocates of (often unspoken) philosophical assumptions. These are people who seek to cloak their beliefs in the legitimacy of the scientific enterprise. This is where scientism raises its ugly head.

Pinker is right to argue, as he does, that science can't be blamed for the stupidities of social Darwinism, eugenics or the communist insistence that it had found a science of history. But his easy dismissal of scientism as a "boo-word" misses the point that science gets used within culture for more than just legitimate purposes. In fact, it's the very efficacy of its tools that allows cultural misappropriations of science to go unnoticed.

Part of this misappropriation comes from thinking that, since science is so good at providing explanations, explanations are all that matter. It's an approach that levels human experience in ways that are both dangerous and sad. In discussions of human spirituality and science, for example, it leads to cartoon arguments between Richard Dawkins and fundamentalists about who started the universe. Missing are the varieties of reasons people feel "spiritual" longing that have nothing to do with asking how the moon got there.

The power and promise of science is not compromised by understanding that we live in a world saturated by its fruits and poisons. Pinker is quite right that scientism is not a coherent doctrine. But that doesn't mean the term is empty.

Scientism is an unfortunate consequence of the success science has had explaining the natural world. It would, in fact, be useful to clarify how scientism manifests itself. That would help us understand the damage it does to the real project that lies ahead of us: building space for the full spectrum of human being in a culture fully shaped by science.

You can keep up with more of what Adam Frank is thinking on Facebook and on Twitter: @AdamFrank4

Steven Pinker

Science Is Not Your Enemy

An impassioned plea to neglected novelists, embattled professors, and tenure-less historians.


The great thinkers of the Age of Reason and the Enlightenment were scientists. Not only did many of them contribute to mathematics, physics, and physiology, but all of them were avid theorists in the sciences of human nature. They were cognitive neuroscientists, who tried to explain thought and emotion in terms of physical mechanisms of the nervous system. They were evolutionary psychologists, who speculated on life in a state of nature and on animal instincts that are “infused into our bosoms.” And they were social psychologists, who wrote of the moral sentiments that draw us together, the selfish passions that inflame us, and the foibles of shortsightedness that frustrate our best-laid plans.

These thinkers—Descartes, Spinoza, Hobbes, Locke, Hume, Rousseau, Leibniz, Kant, Smith—are all the more remarkable for having crafted their ideas in the absence of formal theory and empirical data. The mathematical theories of information, computation, and games had yet to be invented. The words “neuron,” “hormone,” and “gene” meant nothing to them. When reading these thinkers, I often long to travel back in time and offer them some bit of twenty-first-century freshman science that would fill a gap in their arguments or guide them around a stumbling block. What would these Fausts have given for such knowledge? What could they have done with it?


We don’t have to fantasize about this scenario, because we are living it. We have the works of the great thinkers and their heirs, and we have scientific knowledge they could not have dreamed of. This is an extraordinary time for the understanding of the human condition. Intellectual problems from antiquity are being illuminated by insights from the sciences of mind, brain, genes, and evolution. Powerful tools have been developed to explore them, from genetically engineered neurons that can be controlled with pinpoints of light to the mining of “big data” as a means of understanding how ideas propagate.

One would think that writers in the humanities would be delighted and energized by the efflorescence of new ideas from the sciences. But one would be wrong. Though everyone endorses science when it can cure disease, monitor the environment, or bash political opponents, the intrusion of science into the territories of the humanities has been deeply resented. Just as reviled is the application of scientific reasoning to religion; many writers without a trace of a belief in God maintain that there is something unseemly about scientists weighing in on the biggest questions. In the major journals of opinion, scientific carpetbaggers are regularly accused of determinism, reductionism, essentialism, positivism, and worst of all, something called "scientism." The past couple of years have seen four denunciations of scientism in this magazine alone, together with attacks in Bookforum, The Claremont Review of Books, The Huffington Post, The Nation, National Review Online, The New Atlantis, The New York Times, and Standpoint.

The eclectic politics of these publications reflects the bipartisan nature of the resentment. This passage, from a 2011 review in The Nation of three books by Sam Harris by the historian Jackson Lears, makes the standard case for the prosecution by the left:

Positivist assumptions provided the epistemological foundations for Social Darwinism and pop-evolutionary notions of progress, as well as for scientific racism and imperialism. These tendencies coalesced in eugenics, the doctrine that human well-being could be improved and eventually perfected through the selective breeding of the "fit" and the sterilization or elimination of the "unfit." ... Every schoolkid knows about what happened next: the catastrophic twentieth century. Two world wars, the systematic slaughter of innocents on an unprecedented scale, the proliferation of unimaginable destructive weapons, brushfire wars on the periphery of empire—all these events involved, in various degrees, the application of scientific research to advanced technology.

The case from the right, captured in this 2007 speech from Leon Kass, George W. Bush’s bioethics adviser, is just as measured:

Scientific ideas and discoveries about living nature and man, perfectly welcome and harmless in themselves, are being enlisted to do battle against our traditional religious and moral teachings, and even our self-understanding as creatures with freedom and dignity. A quasi-religious faith has sprung up among us—let me call it "soul-less scientism"—which believes that our new biology, eliminating all mystery, can give a complete account of human life, giving purely scientific explanations of human thought, love, creativity, moral judgment, and even why we believe in God. ... Make no mistake. The stakes in this contest are high: at issue are the moral and spiritual health of our nation, the continued vitality of science, and our own self-understanding as human beings and as children of the West. 

These are zealous prosecutors indeed. But their cases are weak. The mindset of science cannot be blamed for genocide and war and does not threaten the moral and spiritual health of our nation. It is, rather, indispensable in all areas of human concern, including politics, the arts, and the search for meaning, purpose, and morality.

The term “scientism” is anything but clear, more of a boo-word than a label for any coherent doctrine. Sometimes it is equated with lunatic positions, such as that “science is all that matters” or that “scientists should be entrusted to solve all problems.” Sometimes it is clarified with adjectives like “simplistic,” “naïve,” and “vulgar.” The definitional vacuum allows me to replicate gay activists’ flaunting of “queer” and appropriate the pejorative for a position I am prepared to defend.

Scientism, in this good sense, is not the belief that members of the occupational guild called "science" are particularly wise or noble. On the contrary, the defining practices of science, including open debate, peer review, and double-blind methods, are explicitly designed to circumvent the errors and sins to which scientists, being human, are vulnerable. Scientism does not mean that all current scientific hypotheses are true; most new ones are not, since the cycle of conjecture and refutation is the lifeblood of science. It is not an imperialistic drive to occupy the humanities; the promise of science is to enrich and diversify the intellectual tools of humanistic scholarship, not to obliterate them. And it is not the dogma that physical stuff is the only thing that exists. Scientists themselves are immersed in the ethereal medium of information, including the truths of mathematics, the logic of their theories, and the values that guide their enterprise. In this conception, science is of a piece with philosophy, reason, and Enlightenment humanism. It is distinguished by an explicit commitment to two ideals, and it is these that scientism seeks to export to the rest of intellectual life.

The first is that the world is intelligible. The phenomena we experience may be explained by principles that are more general than the phenomena themselves. These principles may in turn be explained by more fundamental principles, and so on. In making sense of our world, there should be few occasions in which we are forced to concede "It just is" or "It's magic" or "Because I said so." The commitment to intelligibility is not a matter of brute faith, but gradually validates itself as more and more of the world becomes explicable in scientific terms. The processes of life, for example, used to be attributed to a mysterious élan vital; now we know they are powered by chemical and physical reactions among complex molecules.

Demonizers of scientism often confuse intelligibility with a sin called reductionism. But to explain a complex happening in terms of deeper principles is not to discard its richness. No sane thinker would try to explain World War I in the language of physics, chemistry, and biology as opposed to the more perspicuous language of the perceptions and goals of leaders in 1914 Europe. At the same time, a curious person can legitimately ask why human minds are apt to have such perceptions and goals, including the tribalism, overconfidence, and sense of honor that fell into a deadly combination at that historical moment.

The second ideal is that the acquisition of knowledge is hard. The world does not go out of its way to reveal its workings, and even if it did, our minds are prone to illusions, fallacies, and superstitions. Most of the traditional causes of belief—faith, revelation, dogma, authority, charisma, conventional wisdom, the invigorating glow of subjective certainty—are generators of error and should be dismissed as sources of knowledge. To understand the world, we must cultivate work-arounds for our cognitive limitations, including skepticism, open debate, formal precision, and empirical tests, often requiring feats of ingenuity. Any movement that calls itself "scientific" but fails to nurture opportunities for the falsification of its own beliefs (most obviously when it murders or imprisons the people who disagree with it) is not a scientific movement.

In which ways, then, does science illuminate human affairs? Let me start with the most ambitious: the deepest questions about who we are, where we came from, and how we define the meaning and purpose of our lives. This is the traditional territory of religion, and its defenders tend to be the most excitable critics of scientism. They are apt to endorse the partition plan proposed by Stephen Jay Gould in his worst book, Rocks of Ages, according to which the proper concerns of science and religion belong to "non-overlapping magisteria." Science gets the empirical universe; religion gets the questions of moral meaning and value.

Unfortunately, this entente unravels as soon as you begin to examine it. The moral worldview of any scientifically literate person—one who is not blinkered by fundamentalism—requires a radical break from religious conceptions of meaning and value.

To begin with, the findings of science entail that the belief systems of all the world’s traditional religions and cultures—their theories of the origins of life, humans, and societies—are factually mistaken. We know, but our ancestors did not, that humans belong to a single species of African primate that developed agriculture, government, and writing late in its history. We know that our species is a tiny twig of a genealogical tree that embraces all living things and that emerged from prebiotic chemicals almost four billion years ago. We know that we live on a planet that revolves around one of a hundred billion stars in our galaxy, which is one of a hundred billion galaxies in a 13.8-billion-year-old universe, possibly one of a vast number of universes. We know that our intuitions about space, time, matter, and causation are incommensurable with the nature of reality on scales that are very large and very small. We know that the laws governing the physical world (including accidents, disease, and other misfortunes) have no goals that pertain to human well-being. There is no such thing as fate, providence, karma, spells, curses, augury, divine retribution, or answered prayers—though the discrepancy between the laws of probability and the workings of cognition may explain why people believe there are. And we know that we did not always know these things, that the beloved convictions of every time and culture may be decisively falsified, doubtless including some we hold today.

In other words, the worldview that guides the moral and spiritual values of an educated person today is the worldview given to us by science. Though the scientific facts do not by themselves dictate values, they certainly hem in the possibilities. By stripping ecclesiastical authority of its credibility on factual matters, they cast doubt on its claims to certitude in matters of morality. The scientific refutation of the theory of vengeful gods and occult forces undermines practices such as human sacrifice, witch hunts, faith healing, trial by ordeal, and the persecution of heretics. The facts of science, by exposing the absence of purpose in the laws governing the universe, force us to take responsibility for the welfare of ourselves, our species, and our planet. For the same reason, they undercut any moral or political system based on mystical forces, quests, destinies, dialectics, struggles, or messianic ages. And in combination with a few unexceptionable convictions—that all of us value our own welfare and that we are social beings who impinge on each other and can negotiate codes of conduct—the scientific facts militate toward a defensible morality, namely adhering to principles that maximize the flourishing of humans and other sentient beings. This humanism, which is inextricable from a scientific understanding of the world, is becoming the de facto morality of modern democracies, international organizations, and liberalizing religions, and its unfulfilled promises define the moral imperatives we face today.

Moreover, science has contributed—directly and enormously—to the fulfillment of these values. If one were to list the proudest accomplishments of our species (setting aside the removal of obstacles we set in our own path, such as the abolition of slavery and the defeat of fascism), many would be gifts bestowed by science.

The most obvious is the exhilarating achievement of scientific knowledge itself. We can say much about the history of the universe, the forces that make it tick, the stuff we’re made of, the origin of living things, and the machinery of life, including our own mental life. Better still, this understanding consists not in a mere listing of facts, but in deep and elegant principles, like the insight that life depends on a molecule that carries information, directs metabolism, and replicates itself.

Science has also provided the world with images of sublime beauty: stroboscopically frozen motion, exotic organisms, distant galaxies and outer planets, fluorescing neural circuitry, and a luminous planet Earth rising above the moon’s horizon into the blackness of space. Like great works of art, these are not just pretty pictures but prods to contemplation, which deepen our understanding of what it means to be human and of our place in nature.

And contrary to the widespread canard that technology has created a dystopia of deprivation and violence, every global measure of human flourishing is on the rise. The numbers show that after millennia of near-universal poverty, a steadily growing proportion of humanity is surviving the first year of life, going to school, voting in democracies, living in peace, communicating on cell phones, enjoying small luxuries, and surviving to old age. The Green Revolution in agronomy alone saved a billion people from starvation. And if you want examples of true moral greatness, go to Wikipedia and look up the entries for "smallpox" and "rinderpest" (cattle plague). The definitions are in the past tense, indicating that human ingenuity has eradicated two of the cruelest causes of suffering in the history of our kind.

Though science is beneficially embedded in our material, moral, and intellectual lives, many of our cultural institutions, including the liberal arts programs of many universities, cultivate a philistine indifference to science that shades into contempt. Students can graduate from elite colleges with a trifling exposure to science. They are commonly misinformed that scientists no longer care about truth but merely chase the fashions of shifting paradigms. A demonization campaign anachronistically impugns science for crimes that are as old as civilization, including racism, slavery, conquest, and genocide.

Just as common, and as historically illiterate, is the blaming of science for political movements with a pseudoscientific patina, particularly Social Darwinism and eugenics. Social Darwinism was the misnamed laissez-faire philosophy of Herbert Spencer. It was inspired not by Darwin’s theory of natural selection, but by Spencer’s Victorian-era conception of a mysterious natural force for progress, which was best left unimpeded. Today the term is often used to smear any application of evolution to the understanding of human beings. Eugenics was the campaign, popular among leftists and progressives in the early decades of the twentieth century, for the ultimate form of social progress, improving the genetic stock of humanity. Today the term is commonly used to assail behavioral genetics, the study of the genetic contributions to individual differences.

I can testify that this recrimination is not a relic of the 1990s science wars. When Harvard reformed its general education requirement in 2006 to 2007, the preliminary task force report introduced the teaching of science without any mention of its place in human knowledge: “Science and technology directly affect our students in many ways, both positive and negative: they have led to life-saving medicines, the internet, more efficient energy storage, and digital entertainment; they also have shepherded nuclear weapons, biological warfare agents, electronic eavesdropping, and damage to the environment.” This strange equivocation between the utilitarian and the nefarious was not applied to other disciplines. (Just imagine motivating the study of classical music by noting that it both generates economic activity and inspired the Nazis.) And there was no acknowledgment that we might have good reasons to prefer science and know-how over ignorance and superstition.

At a 2011 conference, another colleague summed up what she thought was the mixed legacy of science: the eradication of smallpox on the one hand; the Tuskegee syphilis study on the other. (In that study, another bloody shirt in the standard narrative about the evils of science, public-health researchers beginning in 1932 tracked the progression of untreated, latent syphilis in a sample of impoverished African Americans.) The comparison is obtuse. It assumes that the study was the unavoidable dark side of scientific progress as opposed to a universally deplored breach, and it compares a one-time failure to prevent harm to a few dozen people with the prevention of hundreds of millions of deaths per century, in perpetuity.

A major goad for the recent denunciations of scientism has been the application of neuroscience, evolution, and genetics to human affairs. Certainly many of these applications are glib or wrong, and they are fair game for criticism: scanning the brains of voters as they look at politicians’ faces, attributing war to a gene for aggression, explaining religion as an evolutionary adaptation to bond the group. Yet it’s not unheard of for intellectuals who are innocent of science to advance ideas that are glib or wrong, and no one is calling for humanities scholars to go back to their carrels and stay out of discussions of things that matter. It is a mistake to use a few wrongheaded examples as an excuse to quarantine the sciences of human nature from our attempt to understand the human condition.

Take our understanding of politics. "What is government itself," asked James Madison, "but the greatest of all reflections on human nature?" The new sciences of the mind are reexamining the connections between politics and human nature, which were avidly discussed in Madison's time but submerged during a long interlude in which humans were assumed to be blank slates or rational actors. Humans, we are increasingly appreciating, are moralistic actors, guided by norms and taboos about authority, tribe, and purity, and driven by conflicting inclinations toward revenge and reconciliation. These impulses ordinarily operate beneath our conscious awareness, but in some circumstances they can be turned around by reason and debate. We are starting to grasp why these moralistic impulses evolved; how they are implemented in the brain; how they differ among individuals, cultures, and subcultures; and which conditions turn them on and off.

The application of science to politics not only enriches our stock of ideas, but also offers the means to ascertain which of them are likely to be correct. Political debates have traditionally been deliberated through case studies, rhetoric, and what software engineers call HiPPO (highest-paid person’s opinion). Not surprisingly, the controversies have careened without resolution. Do democracies fight each other? What about trading partners? Do neighboring ethnic groups inevitably play out ancient hatreds in bloody conflict? Do peacekeeping forces really keep the peace? Do terrorist organizations get what they want? How about Gandhian nonviolent movements? Are post-conflict reconciliation rituals effective at preventing the renewal of conflict?

History nerds can adduce examples that support either answer, but that does not mean the questions are irresolvable. Political events are buffeted by many forces, so it’s possible that a given force is potent in general but submerged in a particular instance. With the advent of data science—the analysis of large, open-access data sets of numbers or text—signals can be extracted from the noise and debates in history and political science resolved more objectively. As best we can tell at present, the answers to the questions listed above are (on average, and all things being equal) no, no, no, yes, no, yes, and yes.

The humanities are the domain in which the intrusion of science has produced the strongest recoil. Yet it is just that domain that would seem to be most in need of an infusion of new ideas. By most accounts, the humanities are in trouble. University programs are downsizing, the next generation of scholars is un- or underemployed, morale is sinking, students are staying away in droves. No thinking person should be indifferent to our society’s disinvestment from the humanities, which are indispensable to a civilized democracy.

Diagnoses of the malaise of the humanities rightly point to anti-intellectual trends in our culture and to the commercialization of our universities. But an honest appraisal would have to acknowledge that some of the damage is self-inflicted. The humanities have yet to recover from the disaster of postmodernism, with its defiant obscurantism, dogmatic relativism, and suffocating political correctness. And they have failed to define a progressive agenda. Several university presidents and provosts have lamented to me that when a scientist comes into their office, it’s to announce some exciting new research opportunity and demand the resources to pursue it. When a humanities scholar drops by, it’s to plead for respect for the way things have always been done.

Those ways do deserve respect, and there can be no replacement for the varieties of close reading, thick description, and deep immersion that erudite scholars can apply to individual works. But must these be the only paths to understanding? A consilience with science offers the humanities countless possibilities for innovation in understanding. Art, culture, and society are products of human brains. They originate in our faculties of perception, thought, and emotion, and they cumulate and spread through the epidemiological dynamics by which one person affects others. Shouldn't we be curious to understand these connections? Both sides would win. The humanities would enjoy more of the explanatory depth of the sciences, to say nothing of the kind of progressive agenda that appeals to deans and donors. The sciences could challenge their theories with the natural experiments and ecologically valid phenomena that have been so richly characterized by humanists.

In some disciplines, this consilience is a fait accompli. Archeology has grown from a branch of art history to a high-tech science. Linguistics and the philosophy of mind shade into cognitive science and neuroscience.


Similar opportunities are there for the exploring. The visual arts could avail themselves of the explosion of knowledge in vision science, including the perception of color, shape, texture, and lighting, and the evolutionary aesthetics of faces and landscapes. Music scholars have much to discuss with the scientists who study the perception of speech and the brain’s analysis of the auditory world.

As for literary scholarship, where to begin? John Dryden wrote that a work of fiction is “a just and lively image of human nature, representing its passions and humours, and the changes of fortune to which it is subject, for the delight and instruction of mankind.” Linguistics can illuminate the resources of grammar and discourse that allow authors to manipulate a reader’s imaginary experience. Cognitive psychology can provide insight about readers’ ability to reconcile their own consciousness with those of the author and characters. Behavioral genetics can update folk theories of parental influence with discoveries about the effects of genes, peers, and chance, which have profound implications for the interpretation of biography and memoir—an endeavor that also has much to learn from the cognitive psychology of memory and the social psychology of self-presentation. Evolutionary psychologists can distinguish the obsessions that are universal from those that are exaggerated by a particular culture and can lay out the inherent conflicts and confluences of interest within families, couples, friendships, and rivalries that are the drivers of plot.

And as with politics, the advent of data science applied to books, periodicals, correspondence, and musical scores holds the promise for an expansive new “digital humanities.” The possibilities for theory and discovery are limited only by the imagination and include the origin and spread of ideas, networks of intellectual and artistic influence, the persistence of historical memory, the waxing and waning of themes in literature, and patterns of unofficial censorship and taboo.

Nonetheless, many humanities scholars have reacted to these opportunities like the protagonist of the grammar-book example of the volitional future tense: “I will drown; no one shall save me.” Noting that these analyses flatten the richness of individual works, they reach for the usual adjectives: simplistic, reductionist, naïve, vulgar, and of course, scientistic.

The complaint about simplification is misbegotten. To explain something is to subsume it under more general principles, which always entails a degree of simplification. Yet to simplify is not to be simplistic. An appreciation of the particulars of a work can co-exist with explanations at many other levels, from the personality of an author to the cultural milieu, the faculties of human nature, and the laws governing social beings. The rejection of a search for general trends and principles calls to mind Jorge Luis Borges’s fictitious empire in which “the Cartographers Guild drew a map of the Empire whose size was that of the Empire, coinciding point for point with it. The following Generations ... saw the vast Map to be Useless and permitted it to decay and fray under the Sun and winters.”

And the critics should be careful with the adjectives. If anything is naïve and simplistic, it is the conviction that the legacy silos of academia should be fortified and that we should be forever content with current ways of making sense of the world. Surely our conceptions of politics, culture, and morality have much to learn from our best understanding of the physical universe and of our makeup as a species.  

Steven Pinker is a contributing editor at The New Republic, the Johnstone Family Professor of Psychology at Harvard University, and the author, most recently, of The Better Angels of Our Nature: Why Violence Has Declined.



The Problem with Scientism


Science is unquestionably the most powerful approach humanity has developed so far to the understanding of the natural world. There is little point in arguing about the spectacular successes of fundamental physics, evolutionary and molecular biology, and countless other fields of scientific inquiry. Indeed, if you do, you risk quickly sliding into self-contradictory epistemic relativism or even downright pseudoscience.

That said, there is a pernicious and increasingly influential strand of thought these days — normally referred to as "scientism" — which is not only a threat to every other discipline, including philosophy, but risks undermining the credibility of science itself. In these days of crisis in the humanities, as well as in the social sciences, it is crucial to distinguish valid from ill-founded criticism of any academic effort, revisiting once more what C.P. Snow famously referred to as the divide between "the two cultures."

First off, what is scientism, exactly? Sometimes it pays to go back to the basics, in this case to the Merriam-Webster concise definition: “An exaggerated trust in the efficacy of the methods of natural science applied to all areas of investigation (as in philosophy, the social sciences, and the humanities).” But surely this is a straw man. Who really fits that description? Plenty of prominent and influential people, as it turns out. Let me give you a few examples:

Author Sam Harris, when he argues that science can by itself provide answers to moral questions and that philosophy is not needed. (e.g., "Many of my critics fault me for not engaging more directly with the academic literature on moral philosophy … I am convinced that every appearance of terms like 'metaethics,' 'deontology,' [etc.] … directly increases the amount of boredom in the universe.")

Science popularizer Neil deGrasse Tyson (and physicists Lawrence Krauss and Stephen Hawking, science educator Bill Nye, among others), when he declares philosophy useless to science (or "dead," in the case of Hawking). (e.g., "My concern here is that the philosophers believe they are actually asking deep questions about nature. And to the scientist it's, what are you doing? Why are you concerning yourself with the meaning of meaning?" —N. deGrasse Tyson; also: "I think therefore I am. What if you don't think about it? You don't exist anymore? You probably still exist." —B. Nye).

Any number of neuroscientists when they seem to believe that “your brain on X” provides the ultimate explanation for whatever X happens to be.

Science popularizer Richard Dawkins, when he says "science" disproves the existence of God (while deploying what he apparently does not realize are philosophical arguments informed by science).

A number of evolutionary psychologists (though not all of them!) when they make claims that go well beyond the epistemic warrant of the evidence they provide. Literature scholars (and biologists like E.O. Wilson) when they think that an evolutionary, data-driven approach tells us much that is insightful about, say, Jane Austen.

The list could go on for quite a bit. Of course, we could have reasonable discussions about any individual entry above, but I think the general pattern is clear enough. Scientism is explicitly advocated by a good number of scientists (predictably), and even some philosophers. A common line of defense is that the term should not even be used because it is just a quick way for purveyors of fuzzy religious and pseudoscientific ideas to dismiss anyone who looks critically at their claims.

This is certainly the case. But it is no different from the misuse of other words, such as "pseudoscience" itself, or "skepticism" (in the modern sense of a critical analysis of potentially unfounded claims). Still, few people would reasonably argue that we should stop using a perfectly valid word just because it is abused by ideologically driven groups. If that were the case, the next version of the Merriam-Webster would be pretty thin…

Philosopher of science Susan Haack has proposed an influential list of six signs of scientistic thinking, which — with some caveats and modifications — can be usefully deployed in the context of this discussion.

The first sign is when words like "science" and "scientific" are used uncritically as honorific terms of epistemic praise. For instance, in advertising: "9 out of 10 dentists recommend brand X." More ominously, when ethically and scientifically ill-founded notions, such as eugenics, gain a foothold in society because they are presented as "science." Let us not forget that between 1907 and 1963, 64,000 American citizens were forcibly sterilized because of eugenic laws.

The second of Haack’s signs is the adoption of the manners and terminology of science regardless of whether they are useful or not. My favorite example is a famous paper  published in 2005 in American Psychologist by Barbara Fredrickson and Marcial Losada. They claimed — “scientific” data in hand — that the ratio of positive to negative emotions necessary for human flourishing is exactly 2.9013 to 1. Such precision ought to be suspicious at face value, even setting aside that the whole notion of the existence of an ideal, universal ratio of positive to negative emotions is questionable in the first place. Sure enough, a few years later, Nicholas Brown, Alan Sokal, and Harris Friedman published a scathing rebuttal of the Fredrickson-Losada paper, tellingly entitled “ The complex dynamics of wishful thinking: The critical positivity ratio .” Unfortunately, the original paper is still far more cited than the rebuttal.

Third, scientistically-oriented people tend to display an obsession with demarcating science from pseudoscience. Here I think Haack is only partially correct, as my observation is rather that scientistic thinking results in an expansion of the very concept of "science," almost making it equivalent to rationality itself. It is only as a byproduct that pseudoscience is demarcated from science; moreover, a lot of philosophy and other humanistic disciplines tend to be cast as "pseudoscience" if they somehow dare to assert even a partial independence from the natural sciences. This, of course, is nothing new, and amounts to a 21st-century (rather naive) version of logical positivism:

The criterion which we use to test the genuineness of apparent statements of fact is the criterion of verifiability. We say that a sentence is factually significant to any given person, if, and only if, he knows how to verify the proposition which it purports to express — that is, if he knows what observations would lead him, under certain conditions, to accept the proposition as true, or reject it as being false. — A.J. Ayer (Language, Truth, and Logic)

The fourth sign of scientism has to do with a preoccupation with identifying a scientific method to demarcate science from other activities. A good number of scientists, especially those writing for the general public, seem blissfully unaware of decades of philosophical scholarship questioning the very idea of the scientific method. When we use that term, do we refer to inductivism, deductivism, abductivism, Bayesianism, or what?

The philosophical consensus seems to be that there is no such thing as a single, well-identified scientific method, and that the sciences rely instead on an ever-evolving toolbox, which moreover is significantly different between, say, ahistorical (physics) and historical (evolutionary biology) sciences, or between the natural and social sciences.

Here too, however, the same problem that I mentioned above recurs: contra Haack, proponents of scientism do not seem to claim that there is a special scientific method, but on the contrary, that science is essentially co-extensive with reason itself. Once again, this isn’t a philosophically new position:

If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion — David Hume (An Enquiry Concerning Human Understanding).

Both Ayer’s verifiability criterion and Hume’s fork suffer from serious philosophical problems, of course, but to uncritically deployed them as a blunt instrument against in defense of scientism is simply a result of willful and abysmal illiteracy.

Next to last comes an attitude that seeks to deploy science to answer questions beyond its scope. It seems to me that it is exceedingly easy to come up with questions that science is either wholly unequipped to answer or for which it can at best provide a (welcome!) degree of relevant background knowledge. I will leave it to colleagues in other disciplines to arrive at their own lists, but as far as philosophy is concerned, the following is just a start:

In metaphysics: what is a cause?

In logic: is modus ponens a type of valid inference?

In epistemology: is knowledge “justified true belief”?

In ethics: is abortion permissible once the fetus begins to feel pain?

In aesthetics: is there a meaningful difference between Mill’s “low” and “high” pleasures?

In philosophy of science: what role does genetic drift play in the logical structure of evolutionary theory?

In philosophy of mathematics: what is the ontological status of mathematical objects, such as numbers?

The scientific literature on all the above is basically non-existent, while the philosophical one is huge. None of the above questions admits of answers arising from systematic observations or experiments. While empirical notions may be relevant to some of them (e.g., the one on abortion), it is philosophical arguments that provide the suitable approach.

Lastly, a sixth sign of scientism is the denial or denigration of the usefulness of nonscientific activities, particularly within the humanities. Saying that philosophy is "useless" because it doesn't contribute to solving scientific problems (deGrasse Tyson, Hawking, Krauss, Nye) betrays a fundamental misunderstanding (and, let's be frank, simple ignorance) of what philosophy is. Ironically, the scientistic take could be turned on its head: on what empirical grounds, for instance, can we arrive at the value judgment that cosmology is "more important" than literature? Is the only thing that matters the discovery of facts about the natural world? Why? And while we are at it, why exactly do we take for granted that money spent on a new particle accelerator shouldn't be spent on, say, cancer research? I'm not advocating such a position; I am simply pointing out that there is no scientific evidence that could settle the matter, and that scientistically-inclined writers tend, as Daniel Dennett famously said in Darwin's Dangerous Idea, to take on board a lot of completely unexamined philosophical baggage.

In the end, it all comes down to what we mean by "science." Perhaps we can reasonably agree that this is a classic example of a Wittgensteinian "family resemblance" concept, i.e., something that does not have precise boundaries, nor is it amenable to a precise definition in terms of necessary and jointly sufficient conditions. But as a scientist and a philosopher of science, I tend to see "science" as an evolving beast, historically and culturally situated, along the lines of the in-depth analysis provided by Helen Longino in her book, Science as Social Knowledge.

Science is a particular ensemble of epistemic and social practices — including a more or less faulty system of peer review, granting agencies, academic publications, hiring practices, and so on. This is different from “science” as it was done by Aristotle, or even by Galileo. There is a continuity, of course, between its modern incarnation and its historical predecessors, as well as between it and other fields (mathematics, logic, philosophy, history, and so forth).

But when scientistic thinkers pretend that any human activity that has to do with reasoning about facts is “science” they are attempting a bold move of naked cultural colonization, defining everything else either out of existence or into irrelevance. When I get up in the morning and go to work at City College in New York I take a bus and a subway. I do so on the basis of my empirical knowledge of the Metropolitan Transportation Authority system, which results — you could say — from years of “observations” and “experiments,” aimed at testing “hypotheses” about the system and its functionality. If you want to call that science, fine, but you end up sounding pretty ridiculous. And you are doing no favor to real science either.


Massimo Pigliucci

Massimo Pigliucci is the K.D. Irani Professor of Philosophy at the City College of New York. His books include How to Be a Stoic: Using Ancient Philosophy to Live a Modern Life (Basic Books), A Handbook for New Stoics: How to Thrive in a World Out of Your Control (The Experiment, with Gregory Lopez), and How to Live a Good Life (Vintage, co-edited with Skye Cleary and Dan Kaufman). Check out more from Massimo at massimopigliucci.wordpress.com.

  • Editor: Nathan Oseroff

32 Comments

I think I’d prefer the term to be ‘scientificism’ as then a proponent of scientificism would be a ‘scientificist’ as opposed to a proponent of scientism being a scientist…

Scientism is a much cleaner term. There is little need for a noun descriptor of a person who exhibits it, because there is virtually no person who expounds, overtly adheres to, or self-defines as a "scientism-ist." I think we can safely refer to someone falling into it with a little more distance than "scientificist."

No it’s not “cleaner” because it is misleading as it falsely encourages confusion between a follower of scientism and a scientist (the idea that such a person would be called a scientismist is ridiculous and contrary o linguistic convention). If you wanted something shorter than scientificism the natural choice is sciencism with the believer being called a sciencist.

It would be better to judge science by the 'universal' premise she keeps, that the 'whole' that exists is a physical whole, irrespective of her different methods. All her other inferences, methods, etc. will be consistent with the said 'universal' premise, as she can't be illogical. Love to share the following blog that explains this stand in more detail: http://argumentsagainstscientificpositivism.blogspot.in/2014/05/thescientific-explanation-of-reality.html?m=1

I reasonably agree that science is a "family resemblance" concept. Perhaps we can also reasonably agree that science is the collection of scientific claims, and that a scientist is someone who makes a scientific claim. Put like that, there is a "family resemblance" with scientism: scientism is the collection of scientistic claims, and a scientistic is someone who makes a scientistic claim. The problem with these propositions is the lack of evolvement. A thief is someone who commits theft. But I can't imagine a person who is constantly stealing. So, it's more appropriate to state that a person is a thief "when" he is stealing, and only when he is stealing. In analogy one might say that the proposition "Author Sam Harris is a scientistic 'when' he argues that science can by itself provide answers to moral questions and that philosophy is not needed" is a scientific claim. However, the question arises whether this proposition offers enough evolvement. Or, as a matter of fact, whether there isn't too much evolvement involved. Too much evolvement might change the scientific claim into a relativistic claim. When is there enough evolvement in a proposition to call it a scientific claim and not a scientistic claim? And when is there too much evolvement in a proposition to call it a scientific claim and not a relativistic claim?

I don't agree with Sam Harris, but I have great admiration for his guts: his guts to give his central argument. "Here it is: Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe. Conscious minds and their states are natural phenomena, fully constrained by the laws of the universe (whatever these turn out to be in the end). Therefore, questions of morality and values must have right and wrong answers that fall within the purview of science (in principle, if not in practice). Consequently, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life." I do wonder whether you call this central argument "scientism." And I'll eat my hat if your co-editor calls this central argument "scientism."

I take a different view. Scientism is, for me, extreme reductionism. It boils down to the notion of complete bottom-up explanations, starting with sub-atomic particles and forces. I feel this ignores the mystery of consciousness, the famous "hard problem," the great emergent property that we cannot explain. There is no question that one can skip this problem, and use the scientific method for issues of psychology, or sociology, or political science, or ethics. And do this productively. I feel it is arrogance to think that one day we'll understand consciousness, so we can skip over it for now. Take, for example, the aesthetic experience of literature or art. We may identify the exact components that lead to this experience, but that is not the subjective state of the experience itself. This is analogous to figuring out, perfectly, the neural correlates of consciousness and thinking this explains the subjective state of consciousness. It doesn't and won't. I feel that the characteristics of scientism described above are likely solid correlates of scientism, but they are not the root. My thoughts arise from the Pinker-Wieseltier debates from a few years ago. This led a philosopher to suggest reading Dilthey. Dilthey nails it. As do some great thinkers of the 19th century such as von Helmholtz.

Good stuff. I would add, even if the factors leading to an enjoyable art experience could be scientifically teased out… they change! And that’s the whole problem with Scientism. Like a marketing company claiming they know scientifically what motivates purchases, they may have some clues… but they are not looking at a scientific question, they’re looking at a cultural question.

I always thought Hume’s passage was a self-refutation gag – similar to “This statement is false.”

Without trying to marginalize his older comments concerning the role of philosophy, I’d like to push back a little bit about Bill Nye. He has, it seems, come around significantly concerning the importance of philosophy to science. I find this coming-around significant and worthy of note given the context of the current discussion. Indeed, it seems one root of scientism is little more than pure ignorance about philosophy. https://qz.com/960303/bill-nye-on-philosophy-the-science-guy-says-he-has-changed-his-mind/

Scientism would seem to have the value of providing considerable insight into the processes of religion, particularly for those for whom experiencing religion is not an option.

Birds fly, fish swim, and humans know. It's in our nature to want to know everything, however distant and seemingly out of reach. As "knowers" we require a reference to some trusted authority, some methodology which we consider reliable.

We vary considerably in our relationship with our chosen authority.

Some of us hold our chosen authority loosely, using it where it is qualified, and setting it aside where it is not. So for instance, a Christian might use the Bible to define their relationship with reality, but not as a car repair manual.

Others of us cling to our chosen authorities tightly, much as a small child might cling to their parent's hand, needing to feel that their chosen authority is an all-powerful force which can shelter and protect them in any circumstance, that is, a "one true way".

The followers of scientism seem to me to be religious fundamentalists who can no longer believe in religion, so they aim their emotional need for a "one true way" in a different direction. This is likely not a popular theory with such true believers, but it does offer at least a few of them the opportunity to better understand that which they so ardently reject: religious faith. We can best understand what's happening in others by finding that same thing within ourselves.

Religious faith, like scientism, arises from a strong need to know what the rule book of our experience is. And that strong need arises from a very understandable desire to be safe, that is, it arises from fear.

And fear arises from the nature of thought itself, which is why those who on the surface may appear to be so very different can, just below the surface, be so very similar.


Science is a branch of History, just as the Spanish philosopher, José Ortega y Gasset, claimed that Mathematics is a branch of Poetry, both with stricter rules. Thus Ortega claimed that the most sophisticated science is Etymology: the determination that one word is “derived” from another.

In case you missed this: https://sandwalk.blogspot.co.uk/2018/02/one-philosophers-view-of-random-genetic.html?showComment=1518463621780#c3633580149962372553

I commented, as did another reader, that Moran is confusing a theory with the logical structure of a theory, and added (on Facebook) that he has thus unwittingly shown why scientists need philosophers.

Moran writes, “He (Pigliucci) is mostly upset about the fact that science as a way of knowing is extraordinarily successful whereas philosophy isn’t producing many results.”

The unexamined assumption behind this statement seems to be that being extraordinarily successful at developing knowledge is automatically a good thing, and that all methodologies should therefore be measured against this standard. If we add to this assumption a healthy measure of adamant “one true wayism” it seems we arrive at scientism.

The problem here is that while a “more is better” relationship with knowledge was an entirely rational paradigm in a long historical era characterized by knowledge scarcity, it doesn’t automatically follow that it is also ideal in an era characterized by a knowledge explosion.

As an example, a “more is better” relationship with food made perfect sense in the long era when humanity routinely lived on the edge of starvation. It makes far less sense in an era when food is abundant, and more of us (in the developed world at least) are dying from obesity-related diseases than from starvation.

The spectacular success of science has created a radically different new environment which we are now required to adapt to, like it or not. Successful adaptation will require more sophisticated thinking than an attempt to meet the 21st century with 19th century assumptions.

The problem with your reasoning about scarcity versus abundance is that “quality” and “quality assessment” are crucial to what you consume. Knowledge and food are assets of your own, and assets imply administration… hence philosophical reasoning, as it is the only thing that allows us to sort through life and decide how to engage with it.

I personally consider “science” a deformed and abused term, used to describe anything minimally systematic within what we could call “technology”, which implies repeatability and production; a ghost of the industrial revolution, I guess. Either way, I consider science to be philosophy, and very much related to the original concept of mathematics: esoteric and mysterious in its basis, intimately related to metaphysics and epistemology. That is apparently not the case nowadays; the term is used today in the name of obscurantism.

I think all of this fails to factor in that, a) at least to this point, relying on the scientific method has been the only nearly perfect way (when used properly) of determining reality and figuring out what is and isn’t true, and b) everything we’ve discovered by using the scientific method has come entirely from natural phenomena, including things like religious experiences, NDEs, and the whole of the Universe. When people like Sam Harris and Richard Dawkins talk about science as they do, what they are saying must be interpreted through this lens: everything we’ve ever discovered has been natural, and science, when done properly, can successfully test natural things and thus give us correct answers to almost everything. Taken in this context, Harris and Dawkins are only saying what the evidence appears to support.

“everything we’ve ever discovered being natural”

But it isn’t. We’ve discovered many unnatural things. For example, accounting techniques, modes of governance, and principles of modern morality. We know that these things in our culture are neither science nor religion, but a third type of knowledge that might be called “civics”. Yet I’ve never seen any serious attempt to identify the civic rules of cultures far from ours.

Civic complaints from indigenous groups are routinely misrepresented in religious or pseudo-scientific terms, with disastrous results. For example the Dakota Access Pipeline, which should have been a simple case of a business defacing private land, and thus clearly flat-out wrong on many levels, instead became a toothless issue of faith and culture war.

People like Harris and Dawkins are rightly criticized for failing to disentangle the pursuit of science itself from the pursuit of political and professional strategies to promote and safeguard it in Western culture. Rather than let them off the hook, we need to realize that every culture has its own version of this dilemma.

Science, when done correctly, is worldwide knowledge. Religion and civics, when done correctly, are specific to individual cultures. All three of these need to be demarcated, and science needs to be respectfully “federalized” over the other two.

There are several aspects of your argument that I find fallacious.

Your argument overlooks the limitations of the scientific method, assumes a naturalistic bias, restricts interpretations of evidence, and oversimplifies the complexity of reality.

The scientific method is a valuable tool for studying the natural world, but it has limitations. It focuses mainly on observable and testable phenomena within the physical and natural sciences. However, it may not address aspects of reality like subjective experiences, consciousness, and metaphysical questions.

Subjective experiences, consciousness, and metaphysical questions are outside science’s realm. That’s within the philosophical realm.

You suggest that scientific evidence must align with a naturalistic worldview. However, interpretations of scientific findings can be influenced by biases, assumptions, and paradigms.

It’s critical to approach scientific conclusions with an open mind and critically evaluate underlying assumptions and biases.

You also oversimplify the complexity of reality by assuming that science can provide all the answers. Reality encompasses a wide range of phenomena and questions that may require interdisciplinary approaches and multiple perspectives to fully understand.

For example, subjective experiences, consciousness, morality, and meaning may require additional frameworks beyond the scientific method to explore their significance.

I am now a retired “scientist”. I specialized in several different biochemical and physiological fields, and I always thought of my published work as “scientific”, i.e. it followed the “scientific method”, which I took to mean: observe, verify, explain, and test whether new predictions of the explanation turn out as predicted (to simplify a complex process). A major technique is to reduce your test of the prediction to a change in only one variable against a control experiment. My view is that this method is the best way of obtaining knowledge about the observable world, but in practice it has limited application. It excludes many of the things we need to explain in order to come up with the best answer on how to act. Would anyone suggest that we should not make any of the decisions we need to make as a society unless we can make them according to the scientific method? Under that condition we could not make any such decisions at all! Is this scientism? Reading all the above and elsewhere, I am not sure.

Hello Harold!

I have to believe there are indeed people who would assert the scientific method is the only rational choice. As you point out, this does not appear realistic.

To determine whether the word “scientism” properly applies, we need to select a method of evaluation. From a pragmatic perspective: What do we hope to gain by an ideal answer to whether the “science-only decision” belief constitutes scientism?

Harold Kimelberg:

Here’s a simple way to understand it:

Everything we can count may not “count” as important; and we can’t count everything that “counts.”

What makes a view “scientistic” is the view that a strictly quantitative method is sufficient for knowledge of all things. Implied in this view is that there is nothing worth knowing that cannot be known by the quantitative method.

The list that the author gave consists of questions that cannot be answered by the quantitative method, all of which are important questions.

Finally, a brief anecdote might make it even simpler. The religious scholar Huston Smith taught at MIT for some years. One day, he was speaking with a scientist who taught there. They were having a light, breezy conversation, trying to identify precisely what it was that distinguished the humanities from science.

Finally, the scientist – apparently unaware of the double meaning of what he was saying – exclaimed, “I know. I count and you don’t.”

I worked as a research engineer at one of the largest, most scientifically and commercially successful laboratories in the world. It was an environment that had produced more than one Nobel prize, the first transistor, and the first integrated circuit, among many, many other life-changing developments that affected every single living thing on this planet; and that is only the record during my short tenure there. I say it not to impress you, because you would have to be an idiot not to be impressed by these accomplishments, but to remind you how little religion and philosophy had to do with any of them.

No one was asking if there was a Christian transistor, or if the discovery of evidence for the Big Bang would offend the sensibilities of a philosopher of science closeted somewhere in academia. I truly want you to understand that when thousands of engineers and scientists are working together to solve some very large problem, none of them are doing scientism, they are doing science. Any individual scientist may indeed be an opinionated twit, but that isn’t science. Her hubris is not science.

My first executive director was a Holocaust survivor; in the summer you could see the number tattooed on his forearm. But that never came up in our discussions of the problems of digital telephony, nor should it have, because it is simply irrelevant to those problems. Science has accepted that the universe does not care about what you think it should be; it just is, whether we observe it accurately or have completely mischaracterized it. Science attempts to model our understanding of it on evidence, but fully realizes that models are contingent upon, and subordinate to, the actual workings of the universe. And that right there is what you are calling scientism: the belief in an objective reality.

I often use the microwave oven to illustrate my point. Does it harbor a demon who hates cold coffee? Maybe so, but you couldn’t ever construct one on the premise of demonology, as it requires knowledge of electromagnetic field theory to do that. Consider also that the microwave oven does not require your philosophical or religious consent – or even your understanding of it – for you to operate it. It operates just as well when an animist tribesman fresh from the Amazon pushes the button as when a plasma physicist does it. In some parts of academia, that claim is epistemological arrogance. Meh.

Yell at me on your Apple iPhones and fancy computers, your vitriol coursing through the algorithms and applied quantum theory that allow me to see your plaintive cries. Boo hoo. The seventies brought to fruition an era of epistemological nihilism that was anti-science with a religious axe to grind. Despite protestations here that science is winning that war, I disagree. Every day I see science denigrated as opinion, or as the untrustworthy religion of the elites. This from a generation in which a significant number believe in the UFO origin of crop circles and even believe chocolate milk comes from brown cows. Thank you, philosophy, for giving them permission to do so without embarrassment.

Science isn’t a panacea; by its nature it can’t even believe that about itself and still be science. The main crime of the scientist is technology. In a world that is undeniably more reliant each day upon that technology, we see its damaging effects all the time. It is no wonder that we all have our reservations about moral progress. Yet science is so hard-wired into the human brain that, no matter what, it will get done, and because the results are so powerful there will always be those ready to monetize what it finds.

[…] they don’t understand having “faith” in science is in itself a fallacy of scientism. Scientism is a faith in the idea that all social problems have only one answer, through the […]


[…] of the future is what happens when man plays God and science replaces religion. Today we call this Scientism but it is also Gnosticism run wild.  Truth, wisdom, morality and true beauty being replaced with […]

One thing I often find omitted in these discussions, and which I would like you, Massimo, to expand on, is how many of the things in our lives through which we both learn and generate new knowledge are art rather than science. In fact, one of the most famous books on electronics, a subject that heavily relies on physics, is called ‘The Art of Electronics’ (by Horowitz and Hill) for good reason. Even the techniques we use when performing experiments, or when being inventive with new engineering designs or scientific theories, are better described as art than as organized and coherent principles stemming from a scientific theory.

Yes! My only quarrel with your comment is the phrase “art rather than science”. Science is an art, or a family of related arts. The art of identifying a theory, the art of designing appropriate experiments to test that theory, and the art of making practical applications of that theory: these are all aspects of science.

[…] what is being pushed as science in the public square is actually barely disguised scientism.   Go here to read about scientism.  When a method to discover facts about the natural world becomes an idol […]

A method to discover facts about nature has become an idol.

If we must have an idol, we could do worse than one that’s self-correcting, fact-revealing, inclusive, unifying, truth-seeking, constantly expanding, independently & objectively verifiable and progressively improving in scope, detail, and beneficial value…as far as I can tell, based on my current understanding.


February 3, 2017

When scientific advances can both help and hurt humanity

by Nicholas G. Evans and Aerin Commins, The Conversation


Scientific research can change our lives for the better, but it also presents risks – either through deliberate misuse or accident. Think about studying deadly pathogens; that's how we can learn how to successfully ward them off, but it can be a safety issue too, as when CDC workers were exposed to anthrax in 2014 after an incomplete laboratory procedure left spores of the bacterium alive.

For the last decade, scholars, scientists and government officials have worked to figure out regulations that would maximize the benefits of the life sciences while avoiding unnecessary risks. "Dual-use research" that has the capacity to be used to help or harm humanity is a big part of that debate. As a reflection of how pressing this question is, on Jan. 4, the U.S. National Academies of Sciences, Engineering, and Medicine met to discuss how or if sensitive information arising in the life sciences should be controlled to prevent its misuse.

For the new Trump administration, one major challenge will be how to maintain national security in the face of technological change. Part of that discussion hinges on understanding the concept of dual use. There are three different dichotomies that could be at play when officials, scholars and scientists refer to dual use – and each uniquely influences the discussion around discovery and control.

For war or for peace

The first meaning of dual use describes technologies that can have both military and civilian uses . For example, technologies useful in industry or agriculture can also be used to create chemical weapons. In civilian life, a chemical called thiodiglycol is a common solvent, occasionally used in cosmetics and microscopy. Yet the same chemical is used in the creation of mustard gas, which decimated infantry in World War I .

This distinction is one of the clearest to be made about a particular technology or breakthrough. Often when governments recognize something has both civilian and military uses, they'll attempt to control how, and with whom, the technology is shared. For instance, the Australia Group is a collection of 42 nations that together agree to control the export of certain materials to countries which might use them to create chemical weapons.

Technologies can also be dual use because there are benefits that were secondary to their development. An obvious example is the internet: The packet switching that underlies the internet was originally created as a means to communicate between military installations in the event of nuclear war . It has since been released into the civilian domain, allowing you to read this article.

This distinction between military and civilian uses doesn't always mirror a distinction between good and bad uses. Some military uses, such as those that underpinned the internet, are good. And some civilian uses can be bad: Recent controversies over the militarization of police through the spread of technologies and tactics meant for war into the civilian sphere demonstrate how proliferation in the other direction can be controversial.

Dual use in this sense is about control. Both military and civilian uses could be valuable, as long as a country can maintain authority over its technologies. Because both uses can be valuable, dual use can also be used to justify expenditures, by providing incentives to governments to invest in technology that has multiple applications .

For good or for evil

In the January meeting at the NAS, however, the key distinction was between beneficent and malevolent uses. Today this is the most common way to think about dual-use science and technology.

Dual use, in this sense, is a distinctly ethical concept. It is, at its core, about what kinds of uses are considered legitimate or valuable, and what kinds are destructive. For example, some research on viruses allows us to better understand potential pandemic-causing pathogens. The work potentially opens the door to possible countermeasures and helps public health officials in terms of preparedness. There is, however, the risk that the same research could, through an act of terror or a lab accident , cause harm.

Since 2007, the U.S. National Science Advisory Board for Biosecurity has provided advice on regulating " dual-use research of concern ." This is any life sciences research that could be misapplied to pose a threat to public health and safety, agricultural crops and other plants, animals, the environment or materiel.


The challenging ethical question is finding an acceptable trade-off between the benefits created by legitimate uses of dual-use research and the potential harms of misuse.

The recent NAS meeting discussed the spread of dual-use research's findings and methods, and who, if anyone, should be responsible for controlling its dispersal. Options that were considered included:

  • subjecting biology research to security classifications, even in part;
  • relying on scientists to responsibly control their own communications;
  • export controls, of the type used by the Australia Group with its concerns about military/civilian dual-use of chemicals.

Participants reached no firm conclusions, and it will be an ongoing challenge for the Trump administration to tackle these continuing issues.

The other side of the equation, whether we should do some dual-use research in the first place, has also been considered. On Jan. 9, the outgoing Obama administration released its final guidance for "gain-of-function research" that may result in the creation of novel, virulent strains of infectious diseases – which may also be dual use. They recommended, among other things, that in order to proceed, the experiments at issue must be the only way to answer a particular scientific question, and must produce greater benefits than risks. The devil, of course, is in the details, and each government agency that conducts life sciences research will have to decide how best to implement the guidance.

For offense or for defense

There's a third, little discussed meaning of "dual use" that distinguishes between offensive and defensive uses of biotechnology. A classic example of this kind of dual use is " Project Clear Vision ." From 1997 to 2000, American researchers set out to recreate Soviet bomblets used to disperse biological weapons. This kind of research treads the delicate area between a defensive project – the U.S. maintains Project Clear Vision's goal was to protect Americans against an attack – and an offensive project that might violate the Biological Weapons Convention.

What is offensive and what is defensive is to some degree in the eye of the beholder. The Kalashnikov rifle was designed in 1947 to defend Russia, but has since become the weapon of choice in conflicts the world over – to the point that its creator expressed regret for his invention. Regardless of intent, the question of how the weapon is used in these conflicts, offensively or defensively, will vary depending on which end of the barrel one is on.

Regulating science

When scientists and policy experts wrangle over how to deal with dual-use technologies, they tend to focus on the division between applications for good or evil. This is important: We don't necessarily want to hinder science without valid reason, because it provides substantial benefits to human health and welfare.

However, there are fears that the lens of dual use could stifle progress by driving scientists away from potentially controversial research: proponents of gain-of-function research have argued that graduate students or postdoctoral fellows could choose other research areas in order to avoid the policy debate. To date, however, the total number of American studies put on hold – as a result of safety concerns, much less dual-use concerns – was initially 18, with all of these permitted to resume under the policies set out on Jan. 9 by the White House. As a proportion of scientific research, this is vanishingly small.

Arguably, in a society that views science as an essential part of national security, dual-use research is almost certain to appear. This is definitely the case in the U.S., where the work of neuroscientists is increasingly funded by the military, and where the economic competitiveness that emerges from biotech is considered a national security priority.

Making decisions about the security implications of science and technology can be complicated. That's why scientists and policymakers need clarity on the dual-use distinction to help consider our options.

Provided by The Conversation


Science denial among the greatest risks to humanity, new 'call to arms' report finds

ABC Radio Adelaide


The CHF says the denial of science is a major threat to humanity. (Twitter)

The denial of science is among the top 10 risks to humanity, ranking alongside climate change, nuclear war, overpopulation and unregulated artificial intelligence (AI), a new report claims.

Key points:

  • John Hewson says the human race can no longer ignore the consequences of the actions it has taken in the pursuit of short-term gain
  • Dr Hewson is one of a number of academics involved in the Commission for the Human Future, which has released a new report
  • The report lists the threats faced by humanity, including science denial and misinformation, but says swift action could mitigate the risks

The fledgling Commission for the Human Future (CHF) believes influential members of the political class have shown "disdain for scientific knowledge" and says "some were actively hostile to it".

"This has led to scientific advice being ignored and scientific investment cut because it does not suit short-term political agendas or unfounded political beliefs," it states in its Surviving and Thriving in the 21st Century report .

Chaired by former Liberal leader and treasury economist John Hewson, CHF's roundtable is packed with former government department heads, fossil fuel company directors, doctors, university academics, biologists and scientists.

Dr Hewson told ABC Radio Adelaide that the threats to Earth and humanity listed in the report, which also included dwindling fresh water supplies, ecosystem collapse and extinctions, pollution, pandemics and food security, were all "interrelated".

"Humans have, basically, unintentionally done themselves a lot of gratuitous harm by the way they've pursued growth, and population growth and other things, and ignored the consequences," he said.

"This commission's designed to focus on what we think are the 10 major threats, and to start a process of national and international dialogue."

Short-term focus

Dr Hewson, who has been a scathing critic of the Coalition's climate change policy , says governments seem "incapable of accepting science and evidence".

Dr Hewson criticised Coalition climate policy at the Australian Farm Institute Round Table in September. (Supplied: Edgars Greste)

"They don't want to listen to warnings and projections, they're very short-term in their focus, and they don't think strategically," he said.

"It means we ignore these risks and [for example] say, 'Oh, maybe there might be a coronavirus risk, but it's never going to happen'.

"The frustration is, we're finding a way to adjust to the coronavirus [pandemic] , and perhaps developing a somewhat effective recovery from it.

"But this was something that was predicted for quite some time and people just didn't pay any attention to that."

The CHF said fake news and misinformation, increasingly distributed through the internet by machine learning and algorithms, was contributing to the distortion of truth.

"The positive story is, there are solutions and if we get onto them early, we can actually move quite decisively to the benefit of the planet, and climate change would be a classic example of that," Dr Hewson said.

The CHF says the ability of politicians to deny facts for short-term gain is a threat to society. (ABC News: Matt Roberts)

Beyond climate change and pandemics, the commission said the world's nine nuclear-armed states were continuing to modernise their arsenals with new weapons of mass destruction, fresh technologies and AI, "which lowered the threshold for their use".

"But that's just one risk," Dr Hewson said.

"If you look at some of the projections for population growth, you may have 10 or 11 billion people on the planet at the end of this century, and the planet can't support that.

"[Not at the] pace at which we're exhausting resources and eating into our capabilities to feed people and provide adequate water."

'Call to arms'

The CHF was established after an Australian National University workshop in 2017.

Several of its members, including Dr Hewson, are emeritus ANU academics.

It considered its report a "call to arms", with its sole recommendation to call "on the nations and peoples of Earth to come together, as a matter of urgency, to prepare a plan for humanity to survive and thrive, far into the future".

Dr Hewson added that humanity's response to the coronavirus pandemic proved it could adjust in ways previously believed out of the question.

"We're doing things we never imagined possible," he said.

"It's not only that we're isolated, travelling less and staying home.

"We're using a lot more technology, people are starting their own vegetable gardens, getting chooks — all sorts of responses we never though possible just a few weeks ago."

Arnagretta Hunter, also from ANU, said that while the threats were "grim", there was also a message of hope in producing a roadmap towards overcoming the challenges.

"The coronavirus experience provides, at a small scale, a template for the type of action required, and offers current leadership an opportune moment to abandon the present delay in responding to climate change," the physician and cardiologist said.

"We must recruit the best and brightest young humans, not to make arms, but to build the process for surviving and thriving for the whole of society."


Why climate change is still the greatest threat to human health

Polluted air and steadily rising temperatures are linked to health effects ranging from increased heart attacks and strokes to the spread of infectious diseases and psychological trauma.

People around the world are witnessing firsthand how climate change can wreak havoc on the planet. Steadily rising average temperatures fuel increasingly intense wildfires, hurricanes, and other disasters that are now impossible to ignore. And while the world has been plunged into a deadly pandemic, scientists are sounding the alarm once more that climate change is still the greatest threat to human health in recorded history .

As recently as August—when wildfires raged in the United States, Europe, and Siberia—World Health Organization Director-General Tedros Adhanom Ghebreyesus said in a statement that “the risks posed by climate change could dwarf those of any single disease.”

On September 5, more than 200 medical journals released an unprecedented joint editorial that urged world leaders to act. “The science is unequivocal,” they write. “A global increase of 1.5°C above the pre-industrial average and the continued loss of biodiversity risk catastrophic harm to health that will be impossible to reverse.”

Despite the acute dangers posed by COVID-19, the authors of the joint op-ed write that world governments “cannot wait for the pandemic to pass to rapidly reduce emissions.” Instead, they argue, everyone must treat climate change with the same urgency as they have COVID-19.

Here’s a look at the ways that climate change can affect your health—including some less obvious but still insidious effects—and why scientists say it’s not too late to avert catastrophe.

Air pollution

Climate change is caused by an increase of carbon dioxide and other greenhouse gases in Earth’s atmosphere, mostly from fossil fuel emissions. But burning fossil fuels can also have direct consequences for human health. That’s because the polluted air contains small particles that can induce stroke and heart attacks by penetrating the lungs and heart and even traveling into the bloodstream. Those particles might harm the organs directly or provoke an inflammatory response from the immune system as it tries to fight them off. Estimates suggest that air pollution causes anywhere between 3.6 million and nine million premature deaths a year.

“The numbers do vary,” says Andy Haines , professor of environmental change and public health at the London School of Hygiene and Tropical Medicine and author of the recently published book Planetary Health . “But they all agree that it’s a big public health burden.”


People over the age of 65 are most susceptible to the harmful effects of air pollution, but many others are at risk too, says Kari Nadeau , director of the Sean N. Parker Center for Allergy and Asthma Research at Stanford University. People who smoke or vape are at increased risk, as are children with asthma.

Air pollution also has consequences for those with allergies. Carbon dioxide increases the acidity of the air, which then pulls more pollen out from plants. For some people, this might just mean that they face annoyingly long bouts of seasonal allergies. But for others, it could be life-threatening.

“For people who already have respiratory disease, boy is that a problem,” Nadeau says. When pollen gets into the respiratory pathway, the body creates mucus to get rid of it, which can then fill up and suffocate the lungs.

Even healthy people can have similar outcomes if pollen levels are especially intense. In 2016, in the Australian state of Victoria, a severe thunderstorm combined with high levels of pollen to induce what The Lancet has described as “the world’s largest and most catastrophic epidemic of thunderstorm asthma.” So many residents suffered asthma attacks that emergency rooms were overwhelmed—and at least 10 people died as a result.

Climate change is also causing wildfires to get worse, and wildfire smoke is especially toxic. As one recent study showed, fires can account for 25 percent of dangerous air pollution in the U.S. Nadeau explains that the smoke contains particles of everything that the fire has consumed along its path—from rubber tires to harmful chemicals. These particles are tiny and can penetrate even deeper into a person’s lungs and organs. ( Here’s how breathing wildfire smoke affects the body .)

Extreme heat

Heat waves are deadly, but researchers at first didn’t see direct links between climate change and the harmful impacts of heat waves and other extreme weather events. Haines says the evidence base has been growing. “We have now got a number of studies which has shown that we can with high confidence attribute health outcomes to climate change,” he says.


Most recently, Haines points to a study published earlier this year in Nature Climate Change that attributes more than a third of heat-related deaths to climate change. As National Geographic reported at the time , the study found that the human toll was even higher in some countries with less access to air conditioning or other factors that render people more vulnerable to heat. ( How climate change is making heat waves even deadlier .)

That’s because the human body was not designed to cope with temperatures above 98.6°F, Nadeau says. Heat can break down muscles. The body does have some ways to deal with the heat—such as sweating. “But when it’s hot outside all the time, you cannot cope with that, and your heart muscles and cells start to literally die and degrade,” she says.

If you're exposed to extreme heat for too long and are unable to adequately release that heat, the stress can cause a cascade of problems throughout the body. The heart has to work harder to pump blood to the rest of the organs, while sweat leaches the body of necessary minerals such as sodium and potassium. The combination can result in heart attacks and strokes.

Dehydration from heat exposure can also cause serious damage to the kidneys, which rely on water to function properly. For people whose kidneys are already beginning to fail—particularly older adults—Nadeau says that extreme heat can be a death sentence. “This is happening more and more,” she says.

Studies have also drawn links between higher temperatures and preterm birth and other pregnancy complications. It’s unclear why, but Haines says that one hypothesis is that extreme heat reduces blood flow to the fetus.

Food insecurity

One of the less direct—but no less harmful—ways that climate change can affect health is by disrupting the world’s supply of food.


Climate change both reduces the amount of food that's available and makes it less nutritious. According to an Intergovernmental Panel on Climate Change (IPCC) special report, crop yields have already begun to decline as a result of rising temperatures, changing precipitation patterns, and extreme weather events. Meanwhile, studies have shown that increased carbon dioxide in the atmosphere can leach plants of zinc, iron, and protein—nutrients that humans need to survive.


Malnutrition is linked to a variety of illnesses, including heart disease, cancer, and diabetes. It can also increase the risk of stunting, or impaired growth , in children, which can harm cognitive function.

Climate change also imperils what we eat from the sea. Rising ocean temperatures have led many fish species to migrate toward Earth’s poles in search of cooler waters. Haines says that the resulting decline of fish stocks in subtropic regions “has big implications for nutrition,” because many of those coastal communities depend on fish for a substantial amount of the protein in their diets.

This effect is likely to be particularly harmful for Indigenous communities, says Tiff-Annie Kenny, a professor in the faculty of medicine at Laval University in Quebec who studies climate change and food security in the Canadian Arctic. It’s much more difficult for these communities to find alternative sources of protein, she says, either because it’s not there or because it’s too expensive. “So what are people going to eat instead?” she asks.

Infectious diseases  

As the planet gets hotter, the geographic region where ticks and mosquitoes like to live is getting wider. These animals are well-known vectors of diseases such as the Zika virus, dengue fever, and malaria. As they cross the tropics of Cancer and Capricorn, Nadeau says, mosquitoes and ticks bring more opportunities for these diseases to infect greater swaths of the world.

“It used to be that they stayed in those little sectors near the Equator, but now unfortunately because of the warming of northern Europe and Canada, you can find Zika in places you wouldn’t have expected,” Nadeau says.

In addition, climate conditions such as temperature and humidity can impact the life cycle of mosquitoes. Haines says there’s particularly good evidence showing that, in some regions, climate change has altered these conditions in ways that increase the risk of mosquitos transmitting dengue .

There are also several ways in which climate change is increasing the risk of diseases that can be transmitted through water, such as cholera, typhoid fever, and parasites. Sometimes that’s fairly direct, such as when people interact with dirty floodwaters. But Haines says that drought can have indirect impacts when people, say, can’t wash their hands or are forced to drink from dodgier sources of freshwater.

Mental health

A common result of any climate-linked disaster is the toll on mental health. The distress caused by drastic environmental change is so significant that it has been given its own name— solastalgia .


Nadeau says that the effects on mental health have been apparent in her studies of emergency room visits arising from wildfires in the western U.S. People lose their homes, their jobs, and sometimes their loved ones, and that takes an immediate toll. “What’s the fastest acute issue that develops? It’s psychological,” she says. Extreme weather events such as wildfires and hurricanes cause so much stress and anxiety that they can lead to post-traumatic stress disorder and even suicide in the long run.

Another common factor is that climate change causes disproportionate harm to the world's most vulnerable people. On September 2, the Environmental Protection Agency (EPA) released an analysis showing that racial and ethnic minority communities are particularly at risk. According to the report, if temperatures rise by 2°C (3.6°F), Black people are 40 percent more likely to live in areas with the highest projected increases in climate-related deaths, and 34 percent more likely to live in areas with projected increases in childhood asthma.

Further, the effects of climate change don’t occur in isolation. At any given time, a community might face air pollution, food insecurity, disease, and extreme heat all at once. Kenny says that’s particularly devastating in communities where the prevalence of food insecurity and poverty are already high. This situation hasn’t been adequately studied, she says, because “it’s difficult to capture these shocks that climate can bring.”

Why there’s reason for hope

In recent years, scientists and environmental activists have begun to push for more research into the myriad health effects of climate change. “One of the striking things is there’s been a real dearth of funding for climate change and health,” Haines says. “For that reason, some of the evidence we have is still fragmentary.”

Still, hope is not lost. In the Paris Agreement, countries around the world have pledged to limit global warming to below 2°C (3.6°F)—and preferably to 1.5°C (2.7°F)—by cutting their emissions. “When you reduce those emissions, you benefit health as well as the planet,” Haines says.

Meanwhile, scientists and environmental activists have put forward solutions that can help people adapt to the health effects of climate change. These include early heat warnings and dedicated cooling centers, more resilient supply chains, and freeing healthcare facilities from dependence on the electric grid.

Nadeau argues that the COVID-19 pandemic also presents an opportunity for world leaders to think bigger and more strategically. For example, the pandemic has laid bare problems with efficiency and equity that have many countries restructuring their healthcare facilities. In the process, she says, they can look for new ways to reduce waste and emissions, such as getting more hospitals using renewable energy.

“This is in our hands to do,” Nadeau says. “If we don’t do anything, that would be cataclysmic.”


The five biggest threats to human existence


James Martin Research Fellow, University of Oxford

Disclosure statement

Anders Sandberg works for the Future of Humanity Institute at the University of Oxford.

University of Oxford provides funding as a member of The Conversation UK.


In the daily hubbub of current “crises” facing humanity, we forget about the many generations we hope are yet to come. Not those who will live 200 years from now, but 1,000 or 10,000 years from now. I use the word “hope” because we face risks, called existential risks , that threaten to wipe out humanity. These risks are not just for big disasters, but for the disasters that could end history.

Not everyone has ignored the long future though. Mystics like Nostradamus have regularly tried to calculate the end of the world. HG Wells tried to develop a science of forecasting and famously depicted the far future of humanity in his book The Time Machine. Other writers built other long-term futures to warn, amuse or speculate.

But had these pioneers or futurologists not thought about humanity’s future, it would not have changed the outcome. There wasn’t much that human beings in their place could have done to save us from an existential crisis or even cause one.

We are in a more privileged position today. Human activity has been steadily shaping the future of our planet. And even though we are far from controlling natural disasters, we are developing technologies that may help mitigate, or at least, deal with them.

Future imperfect

Yet, these risks remain understudied. There is a sense of powerlessness and fatalism about them. People have been talking about apocalypses for millennia, but few have tried to prevent them. Humans are also bad at doing anything about problems that have not occurred yet (partially because of the availability heuristic – the tendency to overestimate the probability of events we know examples of, and underestimate events we cannot readily recall).

If humanity becomes extinct, at the very least the loss is equivalent to the loss of all living individuals and the frustration of their goals. But the loss would probably be far greater than that. Human extinction means the loss of meaning generated by past generations, the lives of all future generations (and there could be an astronomical number of future lives ) and all the value they might have been able to create. If consciousness or intelligence are lost, it might mean that value itself becomes absent from the universe. This is a huge moral reason to work hard to prevent existential threats from becoming reality. And we must not fail even once in this pursuit.

With that in mind, I have selected what I consider the five biggest threats to humanity’s existence. But there are caveats that must be kept in mind, for this list is not final.

Over the past century we have discovered or created new existential risks – supervolcanoes were discovered in the early 1970s, and before the Manhattan project nuclear war was impossible – so we should expect others to appear. Also, some risks that look serious today might disappear as we learn more. The probabilities also change over time – sometimes because we are concerned about the risks and fix them.

Finally, just because something is possible and potentially hazardous doesn't mean it is worth worrying about. There are some risks we cannot do anything at all about, such as gamma-ray bursts from distant cosmic explosions. But if we learn we can do something, the priorities change. For instance, with sanitation, vaccines and antibiotics, pestilence went from an act of God to a matter of bad public health.

1. Nuclear war

While only two nuclear weapons have been used in war so far – at Hiroshima and Nagasaki in World War II – and nuclear stockpiles are down from the peak they reached in the Cold War, it is a mistake to think that nuclear war is impossible. In fact, it might not be improbable.

The Cuban Missile crisis was very close to turning nuclear. If we assume one such event every 69 years and a one in three chance that it might go all the way to nuclear war, the chance of such a catastrophe works out to about one in 200 per year.
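
To make the arithmetic explicit, here is a minimal sketch of that back-of-envelope estimate. The one-crisis-per-69-years frequency and the one-in-three escalation odds are the article's assumptions, not measured values:

```python
# Back-of-envelope estimate of annual nuclear-war risk, using the
# article's assumed inputs (illustrative only, not measured values).
p_crisis_per_year = 1 / 69   # one Cuban-Missile-class standoff per 69 years
p_escalation = 1 / 3         # assumed chance such a crisis goes nuclear

p_war_per_year = p_crisis_per_year * p_escalation
print(f"Annual risk: about 1 in {1 / p_war_per_year:.0f}")  # ~1 in 207

# Compounded over a century (treating years as independent):
p_century = 1 - (1 - p_war_per_year) ** 100
print(f"Risk over 100 years: roughly {p_century:.0%}")      # ~38%
```

Even a seemingly small annual probability compounds into a substantial risk over the timescales the article is concerned with.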

Worse still, the Cuban Missile crisis was only the most well-known case. The history of Soviet-US nuclear deterrence is full of close calls and dangerous mistakes. The actual probability has changed depending on international tensions, but it seems implausible that the chances would be much lower than one in 1000 per year.

A full-scale nuclear war between major powers would kill hundreds of millions of people directly or through the near aftermath – an unimaginable disaster. But that is not enough to make it an existential risk.

Similarly the hazards of fallout are often exaggerated – potentially deadly locally, but globally a relatively limited problem. Cobalt bombs were proposed as a hypothetical doomsday weapon that would kill everybody with fallout, but are in practice hard and expensive to build. And they are physically just barely possible.

The real threat is nuclear winter – that is, soot lofted into the stratosphere causing a multi-year cooling and drying of the world. Modern climate simulations show that it could preclude agriculture across much of the world for years. If this scenario occurred, billions would starve, leaving only scattered survivors who might be picked off by other threats such as disease. The main uncertainty is how the soot would behave: depending on the kind of soot, the outcomes may be very different, and we currently have no good ways of estimating this.

2. Bioengineered pandemic

Natural pandemics have killed more people than wars. However, natural pandemics are unlikely to be existential threats: there are usually some people resistant to the pathogen, and the offspring of survivors would be more resistant. Evolution also does not favor parasites that wipe out their hosts, which is why syphilis went from a virulent killer to a chronic disease as it spread in Europe .

Unfortunately we can now make diseases nastier. One of the more famous examples is how the introduction of an extra gene in mousepox – the mouse version of smallpox – made it far more lethal and able to infect vaccinated individuals. Recent work on bird flu has demonstrated that the contagiousness of a disease can be deliberately boosted.


Right now the risk of somebody deliberately releasing something devastating is low. But as biotechnology gets better and cheaper , more groups will be able to make diseases worse.

Most work on bioweapons has been done by governments looking for something controllable, because wiping out humanity is not militarily useful. But there are always some people who might want to do things because they can. Others have higher purposes. For instance, the Aum Shinrikyo cult tried to hasten the apocalypse using bioweapons, besides their more successful nerve gas attack. Some people think the Earth would be better off without humans, and so on.

The number of fatalities from bioweapon and epidemic outbreaks attacks looks like it has a power-law distribution – most attacks have few victims, but a few kill many. Given current numbers the risk of a global pandemic from bioterrorism seems very small. But this is just bioterrorism: governments have killed far more people than terrorists with bioweapons (up to 400,000 may have died from the WWII Japanese biowar program). And as technology gets more powerful in the future nastier pathogens become easier to design.
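
Such heavy-tailed behaviour is easy to illustrate. The sketch below draws event sizes from a Pareto distribution; the exponent and scale are arbitrary demonstration values, not fits to real attack data, but they show how the worst few events can dominate the total:

    import random

    # Illustrative Pareto (power-law) sample of event fatalities. The
    # parameters alpha and x_min are arbitrary choices for demonstration,
    # not estimates from any real bioweapon or epidemic dataset.
    random.seed(1)
    alpha, x_min = 1.5, 1.0
    events = [x_min * (1 - random.random()) ** (-1 / alpha)
              for _ in range(10_000)]
    events.sort()

    print(f"median event size: {events[len(events) // 2]:.1f}")
    print(f"largest single event: {events[-1]:,.0f}")
    worst_share = sum(events[-100:]) / sum(events)
    print(f"share of all fatalities from worst 1% of events: {worst_share:.0%}")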

3. Superintelligence

Intelligence is very powerful. A tiny increment in problem-solving ability and group coordination is why we left the other apes in the dust. Now their continued existence depends on human decisions, not what they do. Being smart is a real advantage for people and organisations, so there is much effort in figuring out ways of improving our individual and collective intelligence: from cognition-enhancing drugs to artificial-intelligence software.

The problem is that intelligent entities are good at achieving their goals, but if the goals are badly set they can use their power to cleverly achieve disastrous ends. There is no reason to think that intelligence by itself will make something behave nicely and morally. In fact, it is possible to prove that certain types of superintelligent systems would not obey moral rules even if they were true.

Even more worrying is that in trying to explain things to an artificial intelligence we run into profound practical and philosophical problems. Human values are diffuse, complex things that we are not good at expressing, and even if we could do that we might not understand all the implications of what we wish for.

Software-based intelligence may very quickly go from below human to frighteningly powerful. The reason is that it may scale differently from biological intelligence: it can run faster on faster computers, parts can be distributed across more computers, and different versions can be tested and updated on the fly, with new algorithms incorporated that give a jump in performance.

It has been proposed that an “ intelligence explosion ” is possible when software becomes good enough at making better software. Should such a jump occur there would be a large difference in potential power between the smart system (or the people telling it what to do) and the rest of the world. This has clear potential for disaster if the goals are badly set.
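
A toy model makes it easier to see why such a jump is at least conceivable. The sketch below is purely illustrative, with arbitrary constants k and dt: if a system’s rate of improvement grows with its current capability, growth is not merely exponential but runs away in finite time.

    # Toy model of recursive self-improvement (illustrative only; the
    # constants are arbitrary). If capability improves at a rate
    # proportional to capability squared, dc/dt = k * c**2, growth
    # blows up in finite time, unlike ordinary exponential growth.
    k, dt = 0.1, 0.01
    c, t = 1.0, 0.0
    while c < 1e6 and t < 20:
        c += k * c * c * dt   # simple Euler step
        t += dt
    print(f"capability passed 1e6 at t = {t:.2f} (analytic blow-up at t = 10)")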

The unusual thing about superintelligence is that we do not know if rapid and powerful intelligence explosions are possible: maybe our current civilisation as a whole is improving itself at the fastest possible rate. But there are good reasons to think that some technologies may speed things up far faster than current societies can handle. Similarly we do not have a good grip on just how dangerous different forms of superintelligence would be, or what mitigation strategies would actually work. It is very hard to reason about future technology we do not yet have, or intelligences greater than ourselves. Of the risks on this list, this is the one most likely to either be massive or just a mirage.

This is a surprisingly under-researched area. Even in the 1950s and 60s, when people were extremely confident that superintelligence could be achieved “within a generation”, they did not look much into safety issues. Maybe they did not take their predictions seriously, but more likely they just saw it as a remote future problem.

4. Nanotechnology

Nanotechnology is the control over matter with atomic or molecular precision. That is in itself not dangerous – instead, it would be very good news for most applications. The problem is that, like biotechnology, increasing power also increases the potential for abuses that are hard to defend against.

The big problem is not the infamous “grey goo” of self-replicating nanomachines eating everything. That would require clever design for this very purpose. It is tough to make a machine replicate: biology is much better at it by default. Maybe some maniac would eventually succeed, but there is plenty of lower-hanging fruit on the destructive technology tree.

The most obvious risk is that atomically precise manufacturing looks ideal for rapid, cheap production of things like weapons. In a world where any government could “print” large amounts of autonomous or semi-autonomous weapons (including facilities to make even more), arms races could become very fast – and hence unstable, since launching a first strike before the enemy gains too large an advantage might be tempting.

Weapons can also be small, precise things: a “smart poison” that acts like a nerve gas but seeks out victims, or ubiquitous “gnatbot” surveillance systems for keeping populations obedient, seem entirely possible. There might also be ways of putting nuclear proliferation and climate engineering into the hands of anybody who wants them.

We cannot judge the likelihood of existential risk from future nanotechnology, but it looks like it could be deeply disruptive simply because it can give us whatever we wish for.

5. Unknown unknowns

The most unsettling possibility is that there is something out there that is very deadly, and we have no clue about it.

The silence in the sky might be evidence for this. Is the absence of aliens because life or intelligence is extremely rare, or because intelligent life tends to get wiped out? If there is a future Great Filter, it must have been noticed by other civilisations too, and even that did not help.

Whatever the threat is, it would have to be something that is nearly unavoidable even when you know it is there, no matter who and what you are. We do not know about any such threats (none of the others on this list work like this), but they might exist.

Note that just because something is unknown, it doesn’t mean we cannot reason about it. In a remarkable paper, Max Tegmark and Nick Bostrom show that a certain set of risks must be less than one chance in a billion per year, based on the relative age of the Earth.
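
The flavour of the argument can be conveyed with a toy survival bound – a simplification, since the actual paper uses the statistics of planet formation times to handle the objection that survivors always observe survival. If exogenous catastrophes arrived at a constant rate λ, the chance that Earth would have remained unscathed for its whole history of T ≈ 4.5 billion years is

    P(\text{no catastrophe in } T \text{ years}) = e^{-\lambda T},
    \qquad e^{-(10^{-9}\,\mathrm{yr}^{-1})(4.5 \times 10^{9}\,\mathrm{yr})} = e^{-4.5} \approx 0.01,

so rates much above one in a billion per year would make Earth’s long, quiet history very surprising.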

You might wonder why climate change or meteor impacts have been left off this list. Climate change, no matter how scary, is unlikely to make the entire planet uninhabitable (though it could compound other threats if our defences against it break down). Meteors could certainly wipe us out, but we would have to be very unlucky. The average mammalian species survives for about a million years; hence, the background natural extinction rate is roughly one in a million per year. This is much lower than the nuclear-war risk, which after 70 years is still the biggest threat to our continued existence.

The availability heuristic makes us overestimate risks that are often in the media, and discount unprecedented risks. If we want to be around in a million years we need to correct that.

Responding to the Climate Threat: Essays on Humanity’s Greatest Challenge

A new book co-authored by MIT Joint Program Founding Co-Director Emeritus Henry Jacoby

From the Back Cover

This book demonstrates how robust and evolving science can be relevant to public discourse about climate policy. Fighting climate change is the ultimate societal challenge, and the difficulty is not just in the wrenching adjustments required to cut greenhouse emissions and to respond to change already under way. A second and equally important difficulty is ensuring widespread public understanding of the natural and social science. This understanding is essential for an effective risk management strategy at a planetary scale. The scientific, economic, and policy aspects of climate change are already a challenge to communicate, without factoring in the distractions and deflections from organized programs of misinformation and denial. 

Here, four scholars, each with decades of research on the climate threat, take on the task of explaining our current understanding of the climate threat and what can be done about it, in lay language – importantly, without losing critical aspects of the natural and social science. In a series of essays, published during the 2020 presidential election, the COVID pandemic, and through the fall of 2021, they explain the essential components of the challenge, countering the forces of distrust of the science and opposition to a vigorous national response.

Each of the essays provides an opportunity to learn about a particular aspect of climate science and policy within the complex context of current events. The overall volume is more than the sum of its individual articles. Preceding each essay is an explanation of the context in which it was written, followed by observations on what has happened since its first publication. In addition to its discussion of topical issues in modern climate science, the book also explores science communication to a broad audience. Its authors are not only scientists – they are also teachers, using current events to teach when people are listening. For preserving Earth’s planetary life support system, science and teaching are essential. Advancing both is an unending task.

About the Authors

Gary Yohe is the Huffington Foundation Professor of Economics and Environmental Studies, Emeritus, at Wesleyan University in Connecticut. He served as convening lead author for multiple chapters and the Synthesis Report for the IPCC from 1990 through 2014 and was vice-chair of the Third U.S. National Climate Assessment.

Henry Jacoby is the William F. Pounds Professor of Management, Emeritus, in the MIT Sloan School of Management and former co-director of the MIT Joint Program on the Science and Policy of Global Change, which is focused on the integration of the natural and social sciences and policy analysis in application to the threat of global climate change.

Richard Richels directed climate change research at the Electric Power Research Institute (EPRI). He served as lead author for multiple chapters of the IPCC in the areas of mitigation, impacts and adaptation from 1992 through 2014. He also served on the National Assessment Synthesis Team for the first U.S. National Climate Assessment.

Ben Santer is a climate scientist and John D. and Catherine T. MacArthur Fellow. He contributed to all six IPCC reports. He was the lead author of Chapter 8 of the 1995 IPCC report which concluded that “the balance of evidence suggests a discernible human influence on global climate”. He is currently a Visiting Researcher at UCLA’s Joint Institute for Regional Earth System Science & Engineering.

Access the Book

View the book on the publisher's website  here .

Order the book from Amazon  here . 

Climate Change, Health and Existential Risks to Civilization: A Comprehensive Review (1989–2013)

Background: Anthropogenic global warming, interacting with social and other environmental determinants, constitutes a profound health risk. This paper reports a comprehensive literature review for 1989–2013 (inclusive), the first 25 years in which this topic appeared in scientific journals. It explores the extent to which articles have identified potentially catastrophic, civilization-endangering health risks associated with climate change. Methods: PubMed and Google Scholar were primarily used to identify articles, which were then ranked on a three-point scale. Each score reflected the extent to which papers discussed global systemic risk. Citations were also analyzed. Results: Of 2143 analyzed papers, 1546 (72%) were scored as one. Their citations (165,133) were 82% of the total. The proportion of annual papers scored as three was initially high, as were their citations, but declined to almost zero by 1996, before rising slightly from 2006. Conclusions: The enormous expansion of the literature appropriately reflects increased understanding of the importance of climate change to global health. However, recognition of the most severe, existential, health risks from climate change was generally low. Most papers instead focused on infectious diseases, direct heat effects and other disciplinary-bounded phenomena and consequences, even though scientific advances have long called for more inter-disciplinary collaboration.

1. Introduction

In 1988 the leading climate scientist James Hansen, of the National Aeronautics and Space Administration, with three other senior researchers, testified to a U.S. Congressional committee that it was 99 percent certain that the warming trend then observed in Earth’s temperature was not natural variation but was caused by the accumulation of carbon dioxide and other “greenhouse” gases. This testimony was reported prominently in the New York Times [ 1 , 2 ]. Hansen was criticized then, and many times since, for his “adventurous” interpretation of climate data; however, the publicity which followed his testimony, itself reflecting a decade of growing agitation about the geo-political impacts of climate change [ 2 ], may have influenced health workers to think more deeply about the issues. In any case, within a year, a Lancet editorial discussed health and the “greenhouse effect” [ 3 ], possibly the first such publication in a health journal, eight years after a chapter concerning climate change and parasitic disease appeared [ 4 ]. At least six other chapters on this topic were published in the 1980s, as well as at least two reports. For details, see [ 5 ]. Two other journal articles concerning climate change and health were also published in 1989 [ 6 , 7 ].

The 1989 editorial stated “global warming, increased ultraviolet flux, and higher levels of tropospheric ozone will reduce crop production, with potentially devastating effects on world food supplies. Malnutrition (sic) might then become commonplace, even among developed nations, and armed conflicts would be more likely as countries compete for a dwindling supply of natural resources” [ 3 ]. In the New England Journal of Medicine, Leaf warned, also in 1989, of sea level rise, especially in the south-eastern U.S. state of Florida, higher precipitation, millions of environmental refugees, an increased risk of drought and the possibility that warming at higher latitudes would not fully compensate any climate change related loss of agricultural productivity towards the equator [ 6 ]. The third paper published that year [ 7 ] was even more direct, warning of “catastrophic” consequences to human health and well-being.

In the early 1990s, warnings of potentially catastrophic consequences of climate change continued to dominate. Yet, by the turn of the millennium, the author had formed the impression that the scientific publishing milieu was becoming less receptive to the message that climate change and other forms of “planetary overload” [ 8 ] pose existential, civilization-wide risks. This was disturbing, as my own confirmation bias seemed to support the case that the evidence of existential risk was continuing to rise [ 9 , 10 ].

That the health risks from climate change are indeed extraordinarily high was stressed in the 2009 publication of the lengthy (41 page) article by the Lancet and University College London Institute for Global Health Commission, which described climate change as the “biggest global health threat of the 21st century” [ 11 ]. Yet, although this paper attracted considerable attention at the time, the long-term outlook for climate change and health has since continued to deteriorate.

By existential, I mean related to the word “existence”. But it is not the continued existence of Earth that is in doubt, but instead the existence of a high level of function of civilization, one in which prospects of “health for many” (though no longer “health for all”) are realistic and even improving [ 12 ]. Existential risk does not necessarily mean that global civilization will collapse. Nor does it exclude pockets of order and even prosperity enduring for generations, from which global or quasi-global civilization may one day emerge, provided worst case scenarios are avoided, such as runaway climate change and nuclear war leading to nuclear winter [ 13 ]. Compared to today, such prospects should be recognized as catastrophic. Unchecked climate change could generate similar, or bleaker, global futures. Seeking to minimize such possibilities should be seen as a major responsibility for all workers concerned with sustaining and improving global public health.

There is a reticence, shared by many authors, reviewers, journals, funders and media outlets [ 14 , 15 ], to discuss the possibility of such existential risks. Nonetheless, the consequences for health are so vast that discussion is warranted. This paper seeks to do that, in the process conducting the largest review on the topic of climate change and health yet to be published.

1.1. Climate Change Science, Risk and the 2015 Paris Agreement

It has been known since the 19th century that gases, accumulating mainly from the burning of fossil fuels and the clearing of forests, add to the natural “greenhouse effect” [ 16 ]. In 1957 scientists observed “human beings are now carrying out a large-scale geophysical experiment of a kind which could not have happened in the past nor be reproduced in the future. Within a few hundred years we are returning to the air and oceans the concentrated organic carbon stored over hundreds of millions of years” [ 17 ].

In 2015 the Paris climate change agreement, negotiated by representatives of 196 parties (195 nations and the European Union) committed countries (thus, effectively, civilization), upon ratification, to actions that would seek to restrict average global warming to “well below” 2 °C above “pre-industrial” levels and to “pursue efforts” to limit the rise to 1.5 °C. The text of the Paris Agreement defines neither the pre-industrial temperature nor the time for this baseline, but most experts agree that it means the temperature in the late 18th or 19th century, soon after the start of the industrial revolution, when coal burning increased. This time is after the end of the Little Ice Age, which itself was accompanied by a rebound in average temperatures, independent of the slow rise in greenhouse gases (chiefly methane and nitrous oxide as well as carbon dioxide) that occurred throughout the 19th century. Estimates of global warming for the period 1861–1880 until 2015 range from 0.93 °C [ 18 ] to 1.12 °C [ 19 ].

Although the goal of 1.5 °C is widely known, there is less understanding that meeting this challenge would not guarantee safety from a climate change perspective [ 20 ]. Indeed, if it were to be more widely accepted that climate change has already contributed to the Syrian war [ 21 , 22 ], to the rise in global food prices which accompanied the 2010 drought and heatwave in Russia [ 23 , 24 ], and the 2018 wildfire season in the Northern Hemisphere, then the threshold of danger might already be widely seen as having long been exceeded.

In recent years the science concerning the physical impacts of climate has continued to expand and to disturb. Average global temperatures continue to rise [ 25 ], apparently in a process more “stepped” than as a trend [ 26 ] with record average global heat in both El Niño and La Niña years. Loss of ice from both Antarctica and Greenland is increasing and the rate of sea level rise is consequently accelerating [ 27 ]. Property values in parts of the U.S. East Coast may soon fall, due to sea level rise [ 28 ]. There is growing concern about more intense rainfall [ 29 , 30 ], fires worsened by heat and drought [ 31 ], a weakening Gulf Stream [ 32 ] and increased sinuosity of the jet stream, which can cause unusual cold at lower latitudes, even if the average global temperature is rising [ 33 , 34 ]. The projected trend toward a weaker and poleward-shifted jet stream is also consistent with projections of a significantly increased risk of worsening extreme heat and dryness in the Northern Hemisphere [ 35 ].

There is also growing evidence of greenhouse effect-intensifying feedbacks in the Earth system [ 36 ] that might release enormous quantities of carbon dioxide and methane, independent of fossil fuel combustion, agriculture or deforestation, from sources including warming tundra and increased fires, both of peat and forests [ 37 , 38 ]. Such releases could dwarf the climate saving made possible by the putative implementation of the Paris climate agreement. The strength of the oceanic carbon sink is also weakening [ 39 ]. If this intensifies it is likely to accelerate warming of the atmosphere, ocean and land.

1.2. Interaction, Attribution, and Causation

All, or virtually all, environmental health effects interact with social and technological factors as well as other “purely” environmental determinants. For example, the effects of heat upon individual health are influenced by temperature, humidity, exercise, hydration, age, pre-existing health status, and also by occupation, clothing, behavior, autonomy, vulnerability, and sense of obligation. Does the person affected by heat, perhaps a brick maker in India, have the capacity to regulate her heat exposure; or might they be an elite athlete or emergency worker voluntarily pushing their limits? Other factors influencing the health impact of heat include housing quality, the presence or absence of affordable air conditioning, and energy subsidies, if any. In turn, these factors are influenced by governance and socio-economic status. Thus, the health-harming effects of heat can be seen to have many contributing causes, of which climate change is only one. As McMichael (and before him David Hume, among others) pointed out, causal attribution is to an extent philosophical; it is influenced by the “focal depth” of the examiner’s “causal lens” [ 40 ]. Consider a mass shooting in a school: Some will see underlying social and legal factors as contributing; others may see only the shooter. Yet, a major role and goal of public health is to seek to identify and reduce “deep” or “underlying” causes [ 41 ]. A world in which only the most “proximal” causes are identified will not function well.

Attributing the fraction of human-caused (anthropogenic) climate change to physical events such as storms, floods and heatwaves is similarly contested and assumption-dependent. The contribution of climate change to more indirect, strongly socially mediated effects such as migration, famine or conflict is even more difficult and contentious [ 22 , 42 , 43 ]. Perhaps in part because of these causal complications, issues such as famine, genocide, large-scale population dislocation and conflict have, with rare exceptions [ 44 ], been peripheral to public health. This is despite the obvious large-scale adverse health effects of these phenomena.

Rigorous methods have been developed to detect and attribute the health effects of phenomena that are more directly or obviously related to climate change, such as heat and infectious diseases [ 45 ]. However, excessive caution risks a type II error, the overlooking of genuine effects [ 46 , 47 ]. To reduce this risk, the authors of a recent study on attribution acknowledged the role for “well-informed judgments, based on understanding of underlying processes and matching of patterns of health, climate, and other determinants of human well-being” [ 45 ]. This paper makes many such judgments.

1.3. Integrative Risk and the Sustainability of Civilization

Publications in health journals about nuclear war and health date at least to 1962 [ 48 ]. In 1992 the Union of Concerned Scientists coordinated the “World Scientists’ Warning to Humanity”, signed by over 1700 leading scientists (but no public health workers) [ 49 ]. This warning was repeated in 2017, with far more signatories (including many health workers) [ 50 ].

Many authors outside health have warned of the fragility of modern civilization [ 51 , 52 ]. However, comparatively few writers with a health background have contributed [ 9 , 10 , 53 , 54 ]. Tony McMichael, who led the first Intergovernmental Panel on Climate Change chapter on health [ 55 ] frequently wrote and spoke of eroding “life support mechanisms” [ 56 , 57 ], a term probably introduced into the health literature in 1972 by Sargent [ 58 ]. Certainly, McMichael wanted to convey, when using this term, a profound risk to human well-being and health.

If civilization is to collapse then effects such as conflict, population displacement and famine are likely to be involved. A heatwave, on its own, is unlikely to cause the collapse of civilization, nor even ruin an economy for a decade. It needs social co-factors to do this. For example, a series of heatwaves damaging crop yields and contributing to internal migration has been postulated as contributing to the Syrian civil war that started in 2011 [ 21 , 22 , 59 , 60 , 61 , 62 ]. Prolonged heat, especially if in a humid setting, could cause some regions to be completely abandoned [ 63 , 64 , 65 ].

A severely damaged health system, allied with worsening undernutrition and poverty, could provide a milieu for a devastating epidemic, including a resurgence of HIV/AIDS [ 66 ]. An increase in infectious diseases, if of sufficient scale, could contribute to integrative cascades of failure generating regional or even global civilization collapse. Infectious diseases, as well as unfavorable eco-climatic change, contributed to the collapse of the Roman Empire [ 67 ].

While such consequences may seem far-fetched to some, the prospect of sea level rise of one meter or more by 2100 (perhaps sooner), proliferating nuclear weapons, millions of refugees, xenophobia and tribalism which limit integration, and growing cases of state failure is disquieting. Few, if any, formal scenario exercises by senior scientists are as bleak, but funding and other pressures constrain the realism of such exercises [ 15 ]. Already, the number of forcibly displaced people exceeds 68 million [ 68 ], a rise that has been linked with tightening limits to growth, including climate change [ 69 ].

It is stressed, again, that the idea that any single climate related event, such as heat, drought, sea level rise, conflict or migration will cause the collapse of civilization is simplistic. It is far more plausible to conceive that collapse (or quasi-collapse) could arise via a “milieu” of multi-factorial risk, enhancing, inflaming and interacting with climate change and other factors [ 43 , 70 ].

1.4. Hypothesis

This article seeks to test the hypothesis that the early literature relevant to climate change and health was more willing to describe catastrophic, potentially civilization-disrupting health effects, including famine, mass migration and conflict, than the literature which followed, at least until 2014.

2. Methods

To explore this hypothesis, a database of articles relevant to climate change and health was assembled, relying mainly on PubMed and Google Scholar. This had six steps (see Appendix A for details). Due to limited resources, the main search was restricted to the period 1980–2013, and to the terms “climate change” and health, or “global warming” and health. After eliminating duplicates, remaining papers were checked to see if they met eligibility criteria (see Box 1).

Box 1. Inclusion and exclusion criteria.

Included: Articles, editorials, commentaries, journalistic pieces with bylines.

Excluded: Reports, books, book sections including e-chapters, letters, factsheets, monographs, un-credited journalistic entries, non-English publications, papers concerning stratospheric ozone depletion, podcast transcripts, journalistic pieces that could not easily be recovered.

The search was not restricted to health or to multidisciplinary journals. However, papers outside health journals had to meet more exacting requirements to be included. They had to include health (or a synonym such as nutrition) in their title, abstract, keywords or text, even if they focused on an effect with health implications, such as population displacement, conflict or food insecurity.

The title of each identified paper was read, followed by the abstract of each paper, assessed as possibly eligible. If a score was still unclear, the full text was obtained and searched for words and phrases that suggested a broader interpretation of the indirect effects of climate change, such as “population displacement”, “migration”, “conflict”, “war”, “famine”, and “food insecurity”.

Eligible papers were scored as one if they exclusively concerned an effect other than conflict, migration, population displacement or large-scale undernutrition or famine. They also needed to exclude statements (even if introductory) such as “climate change has been recognized as the greatest risk to health in the 21st century”.

Papers were scored as two if they either mentioned such an effect and/or contained statements recognizing the potentially enormous scale of the health impacts from climate change. A synonym for this understanding was the phrase eroding “life support mechanisms”.

Papers were scored as three if they included a more detailed explanation or assertion of the future (or current) existence and importance of conflict, migration or famine, perhaps suggesting an interaction among them. A score of three was more likely if they also warned of the general severity of climate change. The score was also influenced by the tone of the language, and the space devoted to these issues (see Appendix for further details).
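
The scoring itself was a manual judgment, but the screening step described above – searching full texts for trigger phrases – is mechanical, and a minimal sketch of it might look like the following. This is a hypothetical helper, not the paper’s actual code; only the phrase list is taken from the text:

    import re

    # Hypothetical sketch of the screening step (the paper's scoring was
    # done manually by the author; nothing here is the paper's code).
    TRIGGER_PHRASES = [
        "population displacement", "migration", "conflict",
        "war", "famine", "food insecurity",
    ]

    def needs_closer_reading(full_text: str) -> bool:
        """Flag a paper whose full text mentions a broad, systemic
        indirect effect, so it is considered for a score of two or
        three rather than defaulting to one. Whole-word matching is
        used so that "war" does not fire on "warming"."""
        text = full_text.lower()
        return any(re.search(rf"\b{re.escape(p)}\b", text)
                   for p in TRIGGER_PHRASES)

    print(needs_closer_reading("Heatwaves raise cardiovascular mortality."))    # False
    print(needs_closer_reading("Crop failure may drive famine and conflict."))  # True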

In addition, PubMed was searched for papers published from 2014–2017 matching the criteria “climate change” and “health”. A sample of 156 of these articles was randomly selected, approximately 5% in each year, after the elimination of a proportion of ineligible articles. Each was then scored, using the method described above for papers published from 1989 to 2013 (inclusive). Bootstrapping was then used to estimate the average score and 95% confidence interval of these articles, by taking ten thousand resamples, each of 156 papers, with replacement from this set (so that in each iteration some papers will appear more than once, while others will not appear at all).
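
A minimal sketch of such a bootstrap, assuming the scores sit in a plain Python list (the paper credits Ivan Hanigan with the actual analysis; this is only a generic reimplementation of the stated procedure, and the score breakdown below is hypothetical):

    import random

    def bootstrap_mean_ci(scores, n_boot=10_000, seed=0):
        """Resample the scores with replacement n_boot times; return the
        sample mean and a 95% percentile confidence interval."""
        rng = random.Random(seed)
        n = len(scores)
        means = sorted(sum(rng.choices(scores, k=n)) / n for _ in range(n_boot))
        return sum(scores) / n, means[int(0.025 * n_boot)], means[int(0.975 * n_boot)]

    # Hypothetical breakdown of 156 papers scored 1-3 (chosen only so the
    # mean lands near the reported 1.29; the true breakdown is not given):
    sample = [1] * 120 + [2] * 28 + [3] * 8
    mean, lo, hi = bootstrap_mean_ci(sample)
    print(f"mean {mean:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")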

3. Results

A total of 2143 unique articles and journalistic essays satisfied the inclusion criteria for the period 1989–2013 inclusive. The full database is available in the supplementary material. This shows the year, lead author (at least), journal, title and primary search method. It also lists the number of Google Scholar citations and the date these were identified. Table A1 (Appendix A) tabulates the primary search method of papers, by each year.

No paper published before 1989 was eligible for retention in the final database. One potential publication [ 71 ] was cited by Kalkstein and Smoyer [ 5 ] as published in 1988, but it could not be located. About half the total papers (1142, or 53%) were published since 2009 (see Figure 1). Most papers (1546 papers, 72%) were scored as one, while only 189 (8.8%) were scored as three. The difference in these scores is statistically significant (p < 0.01, ANOVA). The average score of these 2143 papers was 1.37 (see Table A2 in Appendix A).

Figure 1. Number of papers in each category. Since 1989 the number of papers concerning climate change and health has expanded considerably, particularly since 2008. As this article did not review the entire literature, the actual number of papers published, even in English, is more than shown. The average score of these papers declined from 1.9 in the first quintile to 1.34 in the final five years.

The increase in the size of the literature reflects growing awareness of the risks to health from climate change. Over 50% of the papers published in the first quintile (1989–1993) were scored as two or three, although the total number in that time (27) was small (see Figure 1). Since 1993 the majority of papers have focused on effects such as heat, infectious diseases, allergies or asthma. The number of papers scored as two or three increased slightly after its trough (23%) in the third quintile (1999–2003) but was only 26% for 2009–2013 inclusive.

Papers scored as three were particularly uncommon in the third quintile (1999–2003), representing only 2.6% of the total published papers in that period. Even in the first quintile (1989–1993) most citations were for papers scored as one (see Figure 2 ).

Figure 2. Number of citations per annum for each score of paper. Most citations were for papers scored as one. Note that in 2005–2007 three extensively cited papers were scored as two (these are discussed in Appendix A).

3.1. Citations

Citation data were available for 2105 papers (98%). Over 201,000 citations were identified by Google Scholar (see Table A3 in Appendix A). Thirty-two percent of these citations were for papers published since 2009 (see Figure 2). Of these citations, the great majority (82%) were for papers scored as one, each of which was cited an average of 107 times. Papers scored as two were cited an average of 73 times, representing 15% of the total. Papers scored as three were cited 35 times each on average and accounted for 3% of the total. The difference in these citation scores is also statistically significant (p < 0.01, ANOVA). Citations for papers scored as three from 1995 to 2008 inclusive were even lower, accounting for less than 1% of the total citations in each year of this period (see Figure 3). The fraction of the literature discussing existential risk remained lower in the last five years of this database than in the first five years (see Figure 1). The shift in the ratio of annual citations from the early period to the more recent years is evident in Figure 3. Until 1991, the majority of citations were for papers scored as three. From 1994 the fraction of citations for papers scored as three was almost zero (3% or less) in every year until 2009. In 2013 it again fell to 3%.

Figure 3. The proportion of citations each year for papers scored as one and three. Since 1991 most citations have been for papers scored as one. The Lancet UCL paper published in 2009 [ 11 ] led to a resurgence of citations for papers scored as three, but this effect declined. Three individual papers, each scored as two (published in 2005, 2006 and 2007), were disproportionately cited. In each year at least some papers scored two or three, but their proportion of citations fell steeply after the first quintile. In 2003 no paper was scored as three, and for almost a decade (1997–2005 inclusive) virtually no papers scored as three were cited.

3.2. Coverage of Topics

All papers published in 1989 discussed multiple potential health effects of climate change. However, from 1990, journal articles focusing exclusively on infectious diseases and climate change appeared [ 72 , 73 , 74 ]. Early papers also focused on heat [ 75 ] and allergies [ 76 ]. From 2000, the foci of concerns expanded greatly. Additional topics included reduced micronutrient concentrations in food [ 77 ], asthma [ 78 ], thunderstorm asthma [ 79 ], chronic diseases and obesity [ 80 ], toxin exposure (such as from increased concentrations in Arctic mammals [ 81 ] and increased algal blooms [ 82 ]), forest fires [ 83 ], mental health [ 84 ] and respiratory [ 85 ], cardio-vascular [ 86 ], renal [ 87 ], fetal [ 88 ], genito-urinary [ 89 ] and skin conditions [ 90 ]. By 2000, papers were also appearing arguing that the impact of climate change for malaria was overstated [ 91 , 92 ].

Articles also appeared on the impact of climate change on groups such as indigenous people [ 93 ], children [ 94 ], the elderly [ 95 ] and regions and locations, including cities [ 96 ], the Arctic [ 97 ] and small island states [ 98 ] as well as many individual nations. Other themes appeared, including on how the health sector might reduce its carbon footprint [ 99 ], on “co-benefits” [ 100 ], on climate change as a great opportunity to improve public health [ 101 ], on medical education [ 102 ], pharmaceuticals [ 103 ] and on the health risks of adaptation and geoengineering, including of carbon capture and storage [ 104 ].

3.3. The Leadership Role of Some Journals

Many journals played prominent, even campaigning roles, especially the Lancet, BMJ and Environmental Health Perspectives. Several journals had special issues, including Global Health Action, the American Journal of Preventive Medicine, the Asia Pacific Journal of Public Health and Health Promotion International. Seven journals published at least 28 articles each, including editorials and news items (see Table A4 in Appendix A). At least 34 journals published editorials, which, with an average score of 2.2, were more likely to be scored as two or three than journal articles (average score 1.3). News items and other journalistic pieces had an average score of 1.6. At least 21 articles were published in nursing journals, with an average score of 1.67.

3.4. Papers for the Period 2014–2017

A total of 3377 papers were identified by PubMed as published from 2014–2017. Of these, 346 were found to be ineligible, although the true number would be higher if all candidates were examined. Of the potentially eligible remainder, 113 papers were published in 2018, but recorded by PubMed as e-published in 2017. Slightly over five percent of the articles for each year were randomly selected, resulting in 156 articles (see Table A5 in Appendix A). Their average score and 95% confidence interval, estimated by bootstrapping, was 1.29 (95% CI 1.21–1.39) (see Figure A2 in Appendix A). Details of these 156 papers are in the supplementary material. Note that their citations were not checked.

4. Discussion

This paper describes the first published analysis of the extent to which the literature on climate change and health has described or in other ways engaged with “existential” risk. By including 2000 articles, 60 editorials and 83 news items (2143 “papers” in total) on climate change and health, it is by far the largest review of the climate change and health literature to have so far been published. Lack of resources currently prevents an extension of the fuller analysis to more recent years. However, a randomly selected sample of 156 papers identified by PubMed as published in the period 2014–2017 found that these papers had an average score lower than the average score for any quintile from 1989–2013, other than for 1999–2003 (see Table A2 and Table A5 in Appendix A).

Several systematic and other reviews of topics related to climate change and health have been published, but on a much smaller scale, and with different research questions. Ford and Pearce systematically reviewed 420 papers, published between 1990 and 2009, exploring the topic of climate change vulnerability in the Canadian western Arctic [ 105 ]. Two systematic reviews concerned heat. Huang et al. [ 106 ] searched for papers published between 1980 and July 2010, projecting heat-related mortality under climate change scenarios. Only 14 papers were included in their final analysis. Xu et al. [ 107 ] explored the relationship between heat waves and children’s health, but selected twelve, an even smaller number. A systematic review into dengue fever and climate change (for the period 1991–2012) included 16 studies [ 108 ].

Nichols et al. (2009) [ 109 ] undertook a systematic review on health, climate change and energy vulnerability, searching for papers published in English between 1998 and 2008. They retrieved 114 papers but included only 36 in their final analysis. Bouzid et al. (2013) undertook a “systematic review of systematic reviews” to explore the effectiveness of public health interventions to reduce the health impact of climate change [ 110 ]. This identified over 3100 unique records, but of these, only 85 full papers were assessed, with 33 included in the final review.

This may also be the first review paper concerning climate change and health to use a citation analysis [ 111 ] as an indicator of influence. Citations in Google Scholar were used for convenience and cost. Although such citations are prone to error, and include essays in the gray literature, they still reflect influence. Some reports in the gray literature may be more widely read and more influential than more scholarly work.

4.1. Selection and Other Forms of Bias

A systematic review was not undertaken. However, all papers identified by searching using PubMed and at least 100 papers for each year identified by Google Scholar were considered for inclusion. The search term relevant to health was restricted to a single word, rather than synonyms such as “disease”, “morbidity”, “illness”, or “mortality”. Undoubtedly, a search using additional terms will identify more papers, as would a systematic review.

To examine the possibility that a more extensive search strategy would alter the conclusions, PubMed was also searched for the terms “climate change” and “morbidity” for papers published in 2013. This strategy identified 261 papers, compared to 496 when searching for “climate change” and “health”. Of these 261 papers, 30 had not previously been identified by the other search methods used, and met the other inclusion criteria. However, all of these additional papers were scored as one. Their inclusion in the final analysis was considered likely to bias the paper away from the null hypothesis, by accentuating the fraction of papers not scored as two or three. This bias towards papers scored one (i.e., identified by searching for “morbidity”) seems plausible because the term morbidity may be more likely to be associated with specific diseases than the term “health”. These papers therefore were not added to the analysis.

The search was supplemented by the addition of 17 papers first identified from the author’s own database, but not later found by the search strategy using Google Scholar or PubMed (steps 2–3) as described in Figure A1 . Eight of these 17 papers, five of which the author wrote or co-wrote, were scored as three. Their average score was 2.17, far higher than for the balance (1.23). This group also includes two editorials, one published in the Lancet, one in the BMJ. The inclusion of one of these editorials (scored as three, published in 1989) has biased the findings in favor of the hypothesis that highly scored papers were more common in the early period of this literature. Note, however, that no citations were recorded for this editorial.

The inclusion of these higher scoring papers later in the period of analysis has biased the result to the null, that is, away from the hypothesis that fewer such papers were published from about 2000. The most influential of these 17 papers, judged by Google Scholar citations, was cited 272 times. It was the first to report that rising levels of carbon dioxide depress micronutrient concentrations in food [ 77 ]. The other 16 papers were cited 405 times between them, an average of 25, which is low compared to the average citation number (94). Twenty-eight other papers were included, mostly identified from special issues. Their average score was 1.9. One paper was identified post-review, by chance. It was scored as two (perhaps generously) and was included because it was judged that to exclude it would bias the result away from the null hypothesis.

Bias is also likely to have been introduced in the scoring process, but not to the extent that it could challenge the main conclusions. The rigor of this paper would be improved if the scores could be checked by a third party, blind to the first score. Unfortunately, no resources were available for this purpose. Some classification errors are likely, especially for papers for which the author had no previous familiarity, and if published after 2009, when, due to time pressure, many papers were scored rapidly. On the other hand, in the process of ranking over 2000 papers the author became skilled at making rapid decisions, especially for most papers scored as one. The difference between papers scored one and two was generally more apparent than for papers scored between two and three. In cases of doubt a higher score was always selected.

The likelihood of bias and error is unlikely to explain the difference in the character of the papers in the early period and those which later dominated. Although the widely cited paper by Costello et al. [ 11 ] (1583 citations as of June 2018) may have refreshed appreciation of the potentially catastrophic nature of climate change, the majority of papers and their citations published between 2010 and 2013 continued to focus on specific issues. This trend appears to have persisted in the years since, judged by the analysis of a randomly selected sample, identified by PubMed as published between 2014 and 2018.

4.2. Reasons for the Apparent Conservatism of the Literature

There are several plausible, overlapping and interacting explanations for the decline in the proportion of papers scored as two or three (and for their comparatively fewer citations) following 1996, and also for the failure of papers published since 2009 to fully amplify the most severe warnings. One likely contributing explanation is self-censorship. The topic of climate change and health is unfamiliar territory for many health editors and writers. Climate change has become politicized in many English-speaking countries, especially in the U.S. and Australia. Although comparatively few health workers have expertise concerning climate change and health, the readership of some health journals seems to be judged, by its editor, to be skeptical of, or even to reject, climate science. For example, one editor, defending the decision to publish a paper (scored, possibly generously, as two) [ 112 ], seemed almost apologetic, writing “On its face, the paper by Hess and colleagues is largely a political commentary and a departure from the types of articles found in Academic Emergency Medicine” [ 113 ].

Thus, for some health workers and editors, even broaching the topic of climate change and health may be a courageous act. The publication of papers in health journals that describe potential pathways that could threaten civilization would appear even bolder. It is unsurprising that such papers are still fairly uncommon, at least until 2014, and particularly in journals which do not yet have a long tradition of publishing papers or editorials on this topic.

In the early period of the climate and health literature (1989–1993) some of the most outspoken articles were editorials. Perhaps at that time there was a certain sense of shock concerning climate change, which has since waned. It was also a time when concerns about overpopulation were slightly less taboo [ 114 , 115 , 116 ]. However, editorials in most years also tend to have a higher index of concern than other articles.

Another likely contributor to the comparative degree of restraint is the view, backed by some research, that an excess of fear is counter-productive [ 117 ]. However, the smell of smoke in a theater requires the sounding of a vigorous alarm. Compounding the difficulty of communicating the risk of climate change is the lag between the whiff of smoke and the onset of visible fire. Hansen warned of great danger over thirty years ago, and he, with others, has issued many warnings since [ 118 ]. Sceptics are still waiting to see the metaphorical “flames” of climate change, even disputing the link between literal flames (fires) and climate change.

On the other hand, science, though not infallible, has delivered countless miracles such as antisepsis, anesthesia, penicillin and the jet engine. It has long warned of the physical changes of climate change. We who work in health should not be amazed if the predictions of climate and Earth scientists prove broadly accurate. Social science is less precise than climatology [ 43 ]; however, the links between food insecurity, drought, sea level rise, migration and, in some places, conflict are, also, surely not far-fetched. Papers that fail to express appreciation of the extraordinary risks we face as a civilization may be judged by people of the future as having failed in their duty of care to protect health.

Another likely reason for the general restraint in the literature is the fragmentation of science and limited funding for multidisciplinary work. Comparatively few authors, other than if collaborating in large, multidisciplinary teams (rare for most authors primarily concerned with health), are rewarded or funded for thinking systemically. This problem is possibly worsening. Related to this, many recent papers are by sub-disciplines of health that have not previously published on the topic of climate change. Such papers are probably less likely to discuss existential risk.

As the effects of climate change have become increasingly clear the need for adaptation has become overwhelming. A stress on adaptation does not necessarily reflect any underestimation of the eventual severity of climate change. However, a stress on adaptation at the expense of mitigation may do so. In many countries, political leadership favors adaptation.

5. Conclusions

In 1989, thirty-two years after the International Geophysical Year, the first papers on global warming and health appeared in the world’s leading medical journals [ 3 , 6 , 7 ]. All three of these early papers warned of severe, even existential risk and were each scored as three.

In 1990 McCally and Cassel warned that “progression of these environmental changes could lead to unprecedented human suffering” [ 119 ]. Also, in 1990, Fiona Godlee, then deputy editor of the BMJ, wrote “Countries in the developing world would suffer both the direct effects of drought and flood and the knock-on effect of agricultural and economic decline in the West. The already present problems of feeding the world’s growing population would be compounded by the increasing numbers of displaced people unable to grow their own food” [ 120 ]. In 1992 Powles observed “It is possible that adverse lagged effects of current industrial (and military) activities will disrupt the habitat of future generations of our species through processes such as stratospheric ozone depletion, global warming and others as yet unpredicted” [ 121 ]. However, in the following years, this sense of urgency largely dissipated, until the long paper by Costello et al. in 2009 [ 11 ].

Conditioned by growing up during the Cold War, the author has long been apprehensive about civilization’s survival. However, my timeline for global health disaster has always been multi-decadal. Civilizational collapse, if it is to occur, will not necessarily be in my own lifetime [ 54 ]. My concerns are not based solely on climate change. Climate change, by itself, is most unlikely to cripple civilization. A well-functioning global society, motivated to do so, could easily eliminate hunger and poverty, not only today, but under all but worst-case climate change. Refugees from inundated islands, war-torn Syria or the drought-stricken Chad basin [ 122 ] could easily be accommodated in more fertile and more elevated parts of the world. Unfortunately, humans currently do not co-operate on such a scale, and this behavior may, in part, be driven by inborn, “hard-wired”, evolutionary-shaped traits [ 123 ]. If civilization is to endure we may need to collectively overcome our seemingly deep wiring for tribalism and separation.

Acknowledgments

My thanks to John Potter for his help with locating obscure references, and to Andy Morse and Kristie Ebi for their very helpful comments, and Joseph Guillaume for his statistical advice. I especially thank Ivan Hanigan for the bootstrap analysis. I also thank three anonymous reviewers.

Supplementary Materials

The following are available online at http://www.mdpi.com/1660-4601/15/10/2266/s1 .

Appendix A.1. Detailed Methods and Results

The search method had six steps (see Figure A1). Initial exploration used the author’s Endnote database, of over 35,000 references, to find relevant articles. The second step was to search, using Google Scholar, for up to the first 100 results for each year in the search period (1980–2013), using the terms “climate change” and health or “global warming” and “health”. For the first decade in which relevant articles were found (1989–1998) both pairs of terms were used, but from 1999 to 2013 inclusive, only the former terms were used (“climate change” and “health”). In the third step, the search was expanded by seeking the same terms, using PubMed, for the same period, 1980–2013 (inclusive). After eliminating duplicates, all remaining papers were checked to ensure that they met the eligibility criteria listed in Box 1. In stage 4, several papers were included that appeared in special issues of journals, together with articles identified by PubMed or suggested by colleagues. In stage 5, the BMJ database for news items about climate change and health was searched, because although PubMed found a few such items, the proportion it identified was low. Finally, in stage 6, several other papers were found by chance, such as in reviews, in the references of cited papers, or by searching for other papers.

Figure A1. Outline of the six-stage search strategy for papers published from 1989–2013.

Appendix A.2. Further Scoring Details

The following details are provided in order to provide additional information about the scoring process. It discusses the scoring process for three highly cited papers (from 2005–2007), each of which was scored as two. The first (cited 2059 times) had no mention of population displacement or conflict, but included the sentence “Projections of the effect of climate change on food crop yield production globally appear to be broadly neutral, but climate change will probably exacerbate regional food supply inequalities” [ 124 ]. This statement was assessed as accepting the possibility of a degree of food scarcity judged to be more severe than that described by many papers (particularly concerning the Arctic) which discuss a likely impairment in regional nutrition, but do not forecast insufficient calories or nutrients, let alone famine. Although the conclusion regarding overall global food security in this paper was reassuring, there are already four acknowledged famines in African nations and one in Yemen [ 125 ]. Any exacerbation of regional food supply inequalities is therefore likely to result in aggravated famines, unless future famines are eliminated; an unlikely prospect. Because this paper was cited so frequently a lower score would impact the overall result. If there is a bias from scoring this paper as two it is towards the null hypothesis.

In 2006 a widely cited paper [ 126 ] stated “Other important climatic risks to health, from changes in regional food yields, disruption of fisheries, loss of livelihoods, and population displacement (because of sea-level rise, water shortages, etc.) are less easy to study than these factors and their causal processes and effects are less easily quantified”. This is a more comprehensive list of civilization-endangering effects than the paper discussed above, but the language is restrained and brief. It was scored as a two.

In 2007 another widely cited paper included the sentences “Climate change will, itself, affect food yields around the world unevenly. Although some regions, mostly at mid-to-high latitude, could experience gains, many (e.g., in sub-Saharan Africa) are likely to be adversely affected, with impairment of both nutrition and incomes. Population displacement and conflict are also likely, because of various factors including food insecurity, desertification, sea-level rise, and increased extreme weather events” [ 127 ]. Of the three papers discussed here, this provided the most comprehensive list of such effects and also explored their interaction. However, it did not speculate about civilization collapse, nor describe climate change as the biggest threat to global public health.

A gradient exists between papers scored two or three, rather than a clear threshold. Papers were not scored as three simply for including a more detailed explanation or assertion of the existence and importance of conflict, migration or famine, even if an interaction among them was suggested. They needed something extra. For example, one paper [ 128 ] stated (referring to Costello et al. [ 11 ]) “a watershed paper … suggests that climate change represents the biggest potential threat to human health in the twenty-first century … a recent report … also estimates that four billion people are vulnerable and 500 million people are at extreme risk”. This paper was scored as three even though it focused on medical education. Although the phrase “the biggest potential threat to human health in the twenty-first century” can, with repetition, lose its capacity to shock, its meaning, if taken literally, is surely sufficiently dire to warrant a score of three.

Another paper (scored as three) stated “global health, population growth, economic development, environmental degradation, and climate change are the main challenges we face in the 21st century” [ 129 ]. It also stated that “significant mass migration is likely to occur in response to climate change”.

The interpretation of papers was not excessively generous. For example, a paper that noted “Changes in the frequency and intensity of extreme weather and climate events have had profound effects on both human society and the natural environment” was scored as one, because there was no discussion of this aspect in the abstract or further in the text. It was also considered that the phrase “have had profound” was insufficiently clear. Nor did the paper discuss conflict, migration or famine.

In contrast, two papers about climate change and health in Nepal were scored as two, as they included the statements “Climate change is becoming huge threat to health especially for those from developing countries” (sic) [ 130 ] and “Climate change is a global issue in this century which has challenged the survival of living creatures affecting the life supporting systems of the earth: atmosphere, hydrosphere and lithosphere” [ 131 ].

Appendix A.3. Sources (Detailed)

Seventeen articles were identified from the author’s database, but not found via PubMed or Google Scholar. Other sources are shown in Table A4 .

This shows the primary source of the 2146 included articles. Eighteen articles were from special issues, five were found accidentally, one was from a review and one was from a colleague. Many articles were found using multiple methods. The papers listed here in the GS column were not found by PM but may also have been identified by CB. Abbreviations: PM = PubMed, GS = Google Scholar, CB = Colin Butler.

Year | PM | GS | BMJ | Other | CB | Total
1989 | 2 | - | - | - | 1 | 3
1990 | 2 | 3 | - | - | 2 | 7
1991 | 14 | 1 | - | - | - | 15
1992 | 1 | 8 | - | - | 2 | 11
1993 | 7 | 7 | - | - | - | 14
1994 | 11 | 11 | - | - | - | 22
1995 | 13 | 17 | 1 | - | - | 31
1996 | 12 | 18 | - | - | - | 30
1997 | 16 | 21 | - | - | - | 37
1998 | 15 | 17 | 1 | - | 1 | 34
1999 | 10 | 19 | - | - | - | 29
2000 | 30 | 16 | 1 | 1 | 3 | 51
2001 | 34 | 8 | 2 | 1 | - | 45
2002 | 28 | 8 | 1 | - | 1 | 38
2003 | 17 | 8 | 1 | - | 1 | 27
2004 | 29 | 12 | 2 | - | - | 43
2005 | 35 | 18 | 4 | - | 2 | 59
2006 | 55 | 11 | 2 | - | - | 68
2007 | 67 | 16 | 3 | 3 | - | 89
2008 | 134 | 22 | 7 | 1 | 1 | 164
2009 | 109 | 54 | 18 | 1 | 2 | 184
2010 | 186 | 106 | 6 | 14 | 2 | 314
2011 | 176 | 93 | 5 | 1 | - | 275
2012 | 158 | 108 | 2 | - | 1 | 269
2013 | 154 | 126 | 2 | 2 | - | 284

Appendix A.4. Score, Citation and Journal Details

This shows the number of articles and their average score for each quintile from 1989–2013.

Quintile | Number of Articles | Average Score
1989–1993 | 50 | 1.90
1994–1998 | 154 | 1.40
1999–2003 | 190 | 1.26
2004–2008 | 423 | 1.42
2009–2013 | 1326 | 1.34

This shows the number of papers and citations for each score, divided into five quintiles over the 25 years of analysis. Note that in the third quintile (1999–2003) only 5 articles were ranked as three. Ironically, the paper scored as three in 2002 was a news item which quoted Andrew Sims, policy director of the New Economics Foundation, as lamenting “Health is not even being talked about here [Delhi], although the potential health impact is a devastating one, almost unimaginable” [ 132 ].

Quintile | Papers Scored as 1 (Number / Citations) | Papers Scored as 2 (Number / Citations) | Papers Scored as 3 (Number / Citations)
1989–1993 | 23 / 1996 | 9 / 197 | 18 / 802
1994–1998 | 105 / 16,545 | 36 / 1910 | 13 / 172
1999–2003 | 146 / 40,352 | 39 / 3985 | 5 / 78
2004–2008 | 286 / 39,128 | 96 / 12,590 | 41 / 836
2009–2013 | 986 / 67,112 | 229 / 10,916 | 114 / 4748

Ten journals published at least 22 articles on climate change and health in the period 1989–2013.

Journal | Articles | Editorials | News Items | Total
[Table rows not recoverable: the journal names and the alignment of their counts were lost in extraction.]

Appendix A.5. Additional Papers 2014–2018

PubMed was searched for the terms “climate change” and “health” for the period 2014–2017 inclusive. This found 3377 papers, which were grouped by year of publication and listed alphabetically by surname of the first author. Every 20th paper (in each year) was then examined. If a paper was found to be ineligible, successive (alphabetical) candidates were examined until at least 5% of the total number for each year had been found eligible and analyzed. In total, 156 papers were scored. This sample represented 5.1% of the 3036 papers which remained after 341 of the original pool had been eliminated. More would likely have been excluded by a more thorough inspection. The average score of these 156 articles, with a 95% confidence interval determined by bootstrapping, was 1.29 (1.21–1.39). This average is lower than for the papers published from 2009–2013 (1.37). Although the 95% confidence interval for the period 2014–2018 overlaps with this, there is no evidence to suggest that the more recent literature better recognizes existential risk. See Table A5 and Figure A2 .
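A minimal sketch of the percentile-bootstrap calculation described above, in Python. The `scores` list is a hypothetical stand-in chosen to average roughly 1.29; the actual 156 scores come from the dataset and are not reproduced here.

```python
# Percentile bootstrap of the mean score: 10,000 resamples with replacement,
# as described above. The scores below are an illustrative mix, not the data.
import random

random.seed(42)
scores = [1] * 116 + [2] * 35 + [3] * 5  # 156 values averaging ~1.29

def bootstrap_ci(data, n_resamples=10_000, alpha=0.05):
    """Return (mean, CI lower bound, CI upper bound) for the sample mean."""
    means = sorted(
        sum(random.choices(data, k=len(data))) / len(data)
        for _ in range(n_resamples)
    )
    lo = means[int(n_resamples * alpha / 2)]
    hi = means[int(n_resamples * (1 - alpha / 2))]
    return sum(data) / len(data), lo, hi

mean, lo, hi = bootstrap_ci(scores)
print(f"mean {mean:.2f}, 95% CI ({lo:.2f}-{hi:.2f})")
```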

This shows the number of candidate papers, the number analyzed, and the scores for the 156 papers that were analyzed for the period 2014–2018, tabulated by year. Note that some of the candidate papers would be culled after further examination.

Year | Candidate Papers | Papers Analyzed | % Analyzed | Average Score
2014 | 649 | 34 | 5.2% | 1.4
2015 | 639 | 32 | 5.0% | 1.3
2016 | 816 | 43 | 5.3% | 1.2
2017 | 813 | 41 | 5.0% | 1.3
2018 | 113 | 6 | 5.1% | 1.2

Figure A2. Density of means and distributions for each year (2014–2017), based on 10,000 bootstrapped resamples (with replacement from the set for each year), and also for papers from 2013–2018 inclusive.

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

August 12, 2023

AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype

Effective regulation of AI needs grounded science that investigates real harms, not glorified press releases about existential risks

By Alex Hanna & Emily M. Bender

Illustration of people walking and their faces being recognized by AI.

Hannah Perry

Wrongful arrests, an expanding surveillance dragnet, defamation and deepfake pornography are all existing dangers of the so-called artificial-intelligence tools currently on the market. These issues, and not the imagined potential to wipe out humanity, are the real threat of artificial intelligence.

End-of-days hype surrounds many AI firms, but their technology already enables myriad harms, including routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.

Nevertheless, in 2023 the nonprofit Center for AI Safety released a statement—co-signed by hundreds of industry leaders—warning of "the risk of extinction from AI," which it asserted was akin to the threats of nuclear war and pandemics. Sam Altman, embattled CEO of OpenAI, the company behind the popular large language model ChatGPT, had previously alluded to such a risk in a congressional hearing, suggesting that generative AI tools could go "quite wrong." In the summer of 2023 executives from AI companies met with President Joe Biden and made several toothless voluntary commitments to curtail "the most significant sources of AI risks," hinting at theoretical apocalyptic threats instead of emphasizing real ones. Corporate AI labs justify this kind of posturing with pseudoscientific research reports that misdirect regulatory attention to imaginary scenarios and use fearmongering terminology such as "existential risk."


The broader public and regulatory agencies must not fall for this maneuver. Rather we should look to scholars and activists who practice peer review and have pushed back on AI hype in an attempt to understand its detrimental effects here and now.

Because the term "AI" is ambiguous, having clear discussions about it is difficult. In one sense, it is the name of a subfield of computer science. In another it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. And in marketing copy and start-up pitch decks, the term "AI" serves as magic fairy dust that will supercharge your business.

Since OpenAI's release of ChatGPT in late 2022 (and Microsoft's incorporation of the tool into its Bing search engine), text-synthesis machines have emerged as the most prominent AI systems. Large language models such as ChatGPT extrude remarkably fluent and coherent-seeming text but have no understanding of what the text means, let alone the ability to reason. (To suggest otherwise is to impute comprehension where there is none, something done purely on faith by AI boosters.) These systems are the equivalent of enormous Magic 8 Balls that we can play with by framing the prompts we send them as questions and interpreting their output as answers.
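To make the pattern-matching point concrete, here is a deliberately crude sketch: a bigram Markov chain. This is not how modern large language models work internally (they use neural networks trained on enormous corpora), but it shows how fluent-looking output can fall out of co-occurrence statistics alone, with no representation of meaning. The toy corpus is invented for illustration.

```python
# A drastically simplified "text extruder": a bigram Markov chain that
# generates output purely from word co-occurrence counts, with no model
# of meaning. The corpus is a made-up toy example.
import random
from collections import defaultdict

corpus = "the model predicts the next word the model has seen before".split()

# Count which words follow which in the corpus.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(0)
word, output = "the", ["the"]
for _ in range(8):
    options = follows.get(word)
    if not options:
        break  # dead end: no observed continuation
    word = random.choice(options)  # sample purely by observed frequency
    output.append(word)
print(" ".join(output))
```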

Unfortunately, that output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem. Not only do we risk mistaking synthetic text for reliable information, but that noninformation reflects and amplifies the biases encoded in AI training data—in the case of large language models, every kind of bigotry found on the Internet. Moreover, the synthetic text sounds authoritative despite its lack of citation of real sources. The longer this synthetic text spill continues, the worse off we are because it gets harder to find trustworthy sources and harder to trust them when we do.

The people selling this technology propose that text-synthesis machines could fix various holes in our social fabric: the shortage of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, to name just a few.

But deployment of this technology actually hurts workers. For one thing, the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created them. In addition, the task of labeling data to create "guardrails" intended to prevent an AI system's most toxic output from being released is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom in terms of their pay and working conditions. What is more, employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This scenario motivated the recent actors' and writers' strikes in Hollywood, where grotesquely overpaid moguls have schemed to buy eternal rights to use AI replacements of actors for the price of a day's work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.

AI-related policy must be science-driven and built on relevant research, but too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Many of these publications are based on junk science—it is nonreproducible, hides behind trade secrecy, is full of hype, and uses evaluation methods that do not measure what they purport to measure.

Recent examples include a 155-page preprint paper entitled "Sparks of Artificial General Intelligence: Early Experiments with GPT-4" from Microsoft Research, which claims to find "intelligence" in the output of GPT-4, one of OpenAI's text-synthesis machines. Then there are OpenAI's own technical reports on GPT-4, which claim, among other things, that OpenAI systems have the ability to solve new problems that are not found in their training data. No one can test these claims because OpenAI refuses to provide access to, or even a description of, those data. Meanwhile "AI doomers" cite this junk science in their efforts to focus the world's attention on the fantasy of all-powerful machines possibly going rogue and destroying humanity.

We urge policymakers to draw on solid scholarship that investigates the harms and risks of AI, as well as the harms caused by delegating authority to automated systems, which include the disempowerment of the poor and the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on not using this technology to hurt people.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of  Scientific American.

Newsroom Post

Climate change: a threat to human wellbeing and health of the planet. Taking action now can secure our future.

BERLIN, Feb 28 – Human-induced climate change is causing dangerous and widespread disruption in nature and affecting the lives of billions of people around the world, despite efforts to reduce the risks. People and ecosystems least able to cope are being hardest hit, said scientists in the latest Intergovernmental Panel on Climate Change (IPCC) report, released today.

“This report is a dire warning about the consequences of inaction,” said Hoesung Lee, Chair of the IPCC. “It shows that climate change is a grave and mounting threat to our wellbeing and a healthy planet. Our actions today will shape how people adapt and nature responds to increasing climate risks.”

The world faces unavoidable multiple climate hazards over the next two decades with global warming of 1.5°C (2.7°F). Even temporarily exceeding this warming level will result in additional severe impacts, some of which will be irreversible. Risks for society will increase, including to infrastructure and low-lying coastal settlements.

The Summary for Policymakers of the IPCC Working Group II report,  Climate Change 2022: Impacts, Adaptation and Vulnerability , was approved on Sunday, February 27, 2022, by 195 member governments of the IPCC, through a virtual approval session that was held over two weeks starting on February 14.

Urgent action required to deal with increasing risks

Increased heatwaves, droughts and floods are already exceeding plants’ and animals’ tolerance thresholds, driving mass mortalities in species such as trees and corals. These weather extremes are occurring simultaneously, causing cascading impacts that are increasingly difficult to manage. They have exposed millions of people to acute food and water insecurity, especially in Africa, Asia, Central and South America, on Small Islands and in the Arctic.

To avoid mounting loss of life, biodiversity and infrastructure, ambitious, accelerated action is required to adapt to climate change, at the same time as making rapid, deep cuts in greenhouse gas emissions. So far, progress on adaptation is uneven and there are increasing gaps between action taken and what is needed to deal with the increasing risks, the new report finds. These gaps are largest among lower-income populations. 

The Working Group II report is the second instalment of the IPCC’s Sixth Assessment Report (AR6), which will be completed this year.

“This report recognizes the interdependence of climate, biodiversity and people and integrates natural, social and economic sciences more strongly than earlier IPCC assessments,” said Hoesung Lee. “It emphasizes the urgency of immediate and more ambitious action to address climate risks. Half measures are no longer an option.”

Safeguarding and strengthening nature is key to securing a liveable future

There are options to adapt to a changing climate. This report provides new insights into nature’s potential not only to reduce climate risks but also to improve people’s lives.

“Healthy ecosystems are more resilient to climate change and provide life-critical services such as food and clean water”, said IPCC Working Group II Co-Chair Hans-Otto Pörtner. “By restoring degraded ecosystems and effectively and equitably conserving 30 to 50 per cent of Earth’s land, freshwater and ocean habitats, society can benefit from nature’s capacity to absorb and store carbon, and we can accelerate progress towards sustainable development, but adequate finance and political support are essential.”

Scientists point out that climate change interacts with global trends such as unsustainable use of natural resources, growing urbanization, social inequalities, losses and damages from extreme events and a pandemic, jeopardizing future development.

“Our assessment clearly shows that tackling all these different challenges involves everyone – governments, the private sector, civil society – working together to prioritize risk reduction, as well as equity and justice, in decision-making and investment,” said IPCC Working Group II Co-Chair Debra Roberts.

“In this way, different interests, values and world views can be reconciled. By bringing together scientific and technological know-how as well as Indigenous and local knowledge, solutions will be more effective. Failure to achieve climate resilient and sustainable development will result in a sub-optimal future for people and nature.”

Cities: Hotspots of impacts and risks, but also a crucial part of the solution

This report provides a detailed assessment of climate change impacts, risks and adaptation in cities, where more than half the world’s population lives. People’s health, lives and livelihoods, as well as property and critical infrastructure, including energy and transportation systems, are being increasingly adversely affected by hazards from heatwaves, storms, drought and flooding as well as slow-onset changes, including sea level rise.

“Together, growing urbanization and climate change create complex risks, especially for those cities that already experience poorly planned urban growth, high levels of poverty and unemployment, and a lack of basic services,” Debra Roberts said.

“But cities also provide opportunities for climate action – green buildings, reliable supplies of clean water and renewable energy, and sustainable transport systems that connect urban and rural areas can all lead to a more inclusive, fairer society.”

There is increasing evidence of adaptation that has caused unintended consequences, for example destroying nature, putting people’s lives at risk or increasing greenhouse gas emissions. This can be avoided by involving everyone in planning, paying attention to equity and justice, and drawing on Indigenous and local knowledge.

A narrowing window for action

Climate change is a global challenge that requires local solutions and that’s why the Working Group II contribution to the IPCC’s Sixth Assessment Report (AR6) provides extensive regional information to enable Climate Resilient Development.

The report clearly states Climate Resilient Development is already challenging at current warming levels. It will become more limited if global warming exceeds 1.5°C (2.7°F). In some regions it will be impossible if global warming exceeds 2°C (3.6°F). This key finding underlines the urgency for climate action, focusing on equity and justice. Adequate funding, technology transfer, political commitment and partnership lead to more effective climate change adaptation and emissions reductions.

“The scientific evidence is unequivocal: climate change is a threat to human wellbeing and the health of the planet. Any further delay in concerted global action will miss a brief and rapidly closing window to secure a liveable future,” said Hans-Otto Pörtner.

For more information, please contact:

IPCC Press Office, Email: [email protected]   IPCC Working Group II:  Sina Löschke,  Komila Nabiyeva: [email protected]

Notes for Editors

Climate Change 2022: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change

The Working Group II report examines the impacts of climate change on nature and people around the globe. It explores future impacts at different levels of warming and the resulting risks and offers options to strengthen nature’s and society’s resilience to ongoing climate change, to fight hunger, poverty, and inequality and keep Earth a place worth living on – for current as well as for future generations. 

Working Group II introduces several new components in its latest report: One is a special section on climate change impacts, risks and options to act for cities and settlements by the sea, tropical forests, mountains, biodiversity hotspots, dryland and deserts, the Mediterranean as well as the polar regions. Another is an atlas that will present data and findings on observed and projected climate change impacts and risks from global to regional scales, thus offering even more insights for decision makers.

The Summary for Policymakers of the Working Group II contribution to the Sixth Assessment Report (AR6) as well as additional materials and information are available at https://www.ipcc.ch/report/ar6/wg2/

Note : Originally scheduled for release in September 2021, the report was delayed for several months by the COVID-19 pandemic, as work in the scientific community including the IPCC shifted online. This is the second time that the IPCC has conducted a virtual approval session for one of its reports.

AR6 Working Group II in numbers

270 authors from 67 countries

  • 47 – coordinating authors
  • 184 – lead authors
  • 39 – review editors
  • 675 – contributing authors

Over 34,000 cited references

A total of 62,418 expert and government review comments

(First Order Draft 16,348; Second Order Draft 40,293; Final Government Distribution: 5,777)

More information about the Sixth Assessment Report can be found  here .

Additional media resources

Assets available after the embargo is lifted on Media Essentials website .

Press conference recording, collection of sound bites from WGII authors, link to presentation slides, B-roll of approval session, link to launch Trello board including press release and video trailer in UN languages, a social media pack.

The website includes  outreach materials  such as videos about the IPCC and video recordings from  outreach events  conducted as webinars or live-streamed events.

Most videos published by the IPCC can be found on our  YouTube  channel.

About the IPCC

The Intergovernmental Panel on Climate Change (IPCC) is the UN body for assessing the science related to climate change. It was established by the United Nations Environment Programme (UNEP) and the World Meteorological Organization (WMO) in 1988 to provide political leaders with periodic scientific assessments concerning climate change, its implications and risks, as well as to put forward adaptation and mitigation strategies. In the same year the UN General Assembly endorsed the action by the WMO and UNEP in jointly establishing the IPCC. It has 195 member states.

Thousands of people from all over the world contribute to the work of the IPCC. For the assessment reports, IPCC scientists volunteer their time to assess the thousands of scientific papers published each year to provide a comprehensive summary of what is known about the drivers of climate change, its impacts and future risks, and how adaptation and mitigation can reduce those risks.

The IPCC has three working groups:  Working Group I , dealing with the physical science basis of climate change;  Working Group II , dealing with impacts, adaptation and vulnerability; and  Working Group III , dealing with the mitigation of climate change. It also has a  Task Force on National Greenhouse Gas Inventories  that develops methodologies for measuring emissions and removals. As part of the IPCC, a Task Group on Data Support for Climate Change Assessments (TG-Data) provides guidance to the Data Distribution Centre (DDC) on curation, traceability, stability, availability and transparency of data and scenarios related to the reports of the IPCC.

IPCC assessments provide governments, at all levels, with scientific information that they can use to develop climate policies. IPCC assessments are a key input into the international negotiations to tackle climate change. IPCC reports are drafted and reviewed in several stages, thus guaranteeing objectivity and transparency. An IPCC assessment report consists of the contributions of the three working groups and a Synthesis Report. The Synthesis Report integrates the findings of the three working group reports and of any special reports prepared in that assessment cycle.

About the Sixth Assessment Cycle

At its 41st Session in February 2015, the IPCC decided to produce a Sixth Assessment Report (AR6). At its 42nd Session in October 2015 it elected a new Bureau that would oversee the work on this report and the Special Reports to be produced in the assessment cycle.

Global Warming of 1.5°C , an IPCC special report on the impacts of global warming of 1.5 degrees Celsius above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty  was launched in October 2018.

Climate Change and Land , an IPCC special report on climate change, desertification, land degradation, sustainable land management, food security, and greenhouse gas fluxes in terrestrial ecosystems  was launched in August 2019, and the  Special Report on the Ocean and Cryosphere in a Changing Climate  was released in September 2019.

In May 2019 the IPCC released the  2019 Refinement to the 2006 IPCC Guidelines for National Greenhouse Gas Inventories , an update to the methodology used by governments to estimate their greenhouse gas emissions and removals.

In August 2021 the IPCC released the Working Group I contribution to the AR6, Climate Change 2021: The Physical Science Basis.

The Working Group III contribution to the AR6 is scheduled for early April 2022.

The Synthesis Report of the Sixth Assessment Report will be completed in the second half of 2022.

For more information go to  www.ipcc.ch


AI Is Not Actually an Existential Threat to Humanity, Scientists Say

We encounter artificial intelligence (AI) every day. AI describes computer systems that are able to perform tasks that normally require human intelligence. When you search something on the internet, the top results you see are decided by AI.

Any recommendations you get from your favorite shopping or streaming websites will also be based on an AI algorithm. These algorithms use your browser history to find things you might be interested in.

Because targeted recommendations are not particularly exciting, science fiction prefers to depict AI as super-intelligent robots that overthrow humanity. Some people believe this scenario could one day become reality. Notable figures, including the late Stephen Hawking , have expressed fear about how future AI could threaten humanity.

To address this concern, we asked 11 experts in AI and computer science: "Is AI an existential threat to humanity?" There was an 82 percent consensus that it is not an existential threat. Here is what we found out.

How close are we to making AI that is more intelligent than us?

The AI that currently exists is called 'narrow' or 'weak' AI . It is widely used for many applications like facial recognition, self-driving cars, and internet recommendations. It is defined as 'narrow' because these systems can only learn and perform very specific tasks.

They often actually perform these tasks better than humans – famously, Deep Blue was the first AI to beat a world chess champion in 1997 – but they cannot apply their learning to anything other than a very specific task (Deep Blue can only play chess).

Another type of AI is called Artificial General Intelligence (AGI). This is defined as AI that mimics human intelligence, including the ability to think and to apply intelligence to multiple different problems. Some people believe that AGI is inevitable and imminent, arriving within the next few years.

Matthew O'Brien, robotics engineer from the Georgia Institute of Technology disagrees , "the long-sought goal of a 'general AI' is not on the horizon. We simply do not know how to make a general adaptable intelligence, and it's unclear how much more progress is needed to get to that point".

How could a future AGI threaten humanity?

Whilst it is not clear when or if AGI will come about, can we predict what threat it might pose to humans? AGI learns from experience and data, as opposed to being explicitly told what to do. This means that, when faced with a new situation it has not seen before, we may not be able to completely predict how it will react.

Dr Roman Yampolskiy, a computer scientist at the University of Louisville, also believes that "no version of human control over AI is achievable", as it is not possible for an AI to be both autonomous and controlled by humans. Not being able to control super-intelligent systems could be disastrous.

Yingxu Wang, professor of Software and Brain Sciences at the University of Calgary, disagrees, saying that "professionally designed AI systems and products are well constrained by a fundamental layer of operating systems for safeguard users' interest and wellbeing, which may not be accessed or modified by the intelligent machines themselves" (sic).

Dr O'Brien adds "just like with other engineered systems, anything with potentially dangerous consequences would be thoroughly tested and have multiple redundant safety checks."

Could the AI we use today become a threat?

Many of the experts agreed that AI could be a threat in the wrong hands. Dr George Montanez, AI expert from Harvey Mudd College highlights that "robots and AI systems do not need to be sentient to be dangerous; they just have to be effective tools in the hands of humans who desire to hurt others. That is a threat that exists today."

Even without malicious intent, today's AI can be threatening. For example, racial biases have been discovered in algorithms that allocate health care to patients in the US. Similar biases have been found in facial recognition software used for law enforcement. These biases have wide-ranging negative impacts despite the 'narrow' ability of the AI.

AI bias comes from the data it is trained on. In the cases of racial bias, the training data were not representative of the general population. Another example happened in 2016, when an AI-based chatbot was found sending highly offensive and racist content. This was because people were sending the bot offensive messages, which it learnt from.

The takeaway:

The AI that we use today is exceptionally useful for many different tasks.

That doesn't mean it is always positive – it is a tool which, if used maliciously or incorrectly, can have negative consequences. Despite this, it currently seems unlikely to become an existential threat to humanity.

Article based on 11 expert answers to this question: Is AI an existential threat to humanity?

This expert response was published in partnership with independent fact-checking platform Metafact.io .



Science News

How did we get here? The roots and impacts of the climate crisis.

People’s heavy reliance on fossil fuels and the cutting down of carbon-storing forests have transformed global climate.

illustration in the shape of the Earth showing a train, a car, airplanes, felled trees, an oil spill, and other examples of humans' impact on their environment

For more than a century, researchers have honed their methods for measuring the impacts of human actions on Earth's atmosphere.

Sam Falconer


By Alexandra Witze

March 10, 2022 at 11:00 am

Even in a world increasingly battered by weather extremes, the summer 2021 heat wave in the Pacific Northwest stood out. For several days in late June, cities such as Vancouver, Portland and Seattle baked in record temperatures that killed hundreds of people. On June 29, Lytton, a village in British Columbia, set an all-time heat record for Canada, at 121° Fahrenheit (49.6° Celsius); the next day, the village was incinerated by a wildfire.

Within a week, an international group of scientists had analyzed this extreme heat and concluded it would have been virtually impossible without climate change caused by humans. The planet’s average surface temperature has risen by at least 1.1 degrees Celsius since preindustrial levels of 1850–1900. The reason: People are loading the atmosphere with heat-trapping gases produced during the burning of fossil fuels, such as coal and gas, and from cutting down forests.


A little over 1 degree of warming may not sound like a lot. But it has already been enough to fundamentally transform how energy flows around the planet. The pace of change is accelerating, and the consequences are everywhere. Ice sheets in Greenland and Antarctica are melting, raising sea levels and flooding low-lying island nations and coastal cities. Drought is parching farmlands and the rivers that feed them. Wildfires are raging. Rains are becoming more intense, and weather patterns are shifting .

The roots of understanding this climate emergency trace back more than a century and a half. But it wasn’t until the 1950s that scientists began the detailed measurements of atmospheric carbon dioxide that would prove how much carbon is pouring from human activities. Beginning in the 1960s, researchers started developing comprehensive computer models that now illuminate the severity of the changes ahead.

Today we know that climate change and its consequences are real, and we are responsible. The emissions that people have been putting into the air for centuries — the emissions that made long-distance travel, economic growth and our material lives possible — have put us squarely on a warming trajectory . Only drastic cuts in carbon emissions, backed by collective global will, can make a significant difference.

“What’s happening to the planet is not routine,” says Ralph Keeling, a geochemist at the Scripps Institution of Oceanography in La Jolla, Calif. “We’re in a planetary crisis.”

aerial photo of the Lytton wildfire

Setting the stage

One day in the 1850s, Eunice Newton Foote, an amateur scientist and a women’s rights activist living in upstate New York, put two glass jars in sunlight. One contained regular air — a mix of nitrogen, oxygen and other gases including carbon dioxide — while the other contained just carbon dioxide. Both had thermometers in them. As the sun’s rays beat down, Foote observed that the jar of CO 2 alone heated up more quickly, and was slower to cool down, than the one containing plain air.

The results prompted Foote to muse on the relationship between CO 2 , the planet and heat. “An atmosphere of that gas would give to our earth a high temperature,” she wrote in an 1856 paper summarizing her findings .

black and white image of Eunice Newton Foote seated and petting a dog

Three years later, working independently and apparently unaware of Foote’s discovery, Irish physicist John Tyndall showed the same basic idea in more detail. With a set of pipes and devices to study the transmission of heat, he found that CO 2 gas, as well as water vapor, absorbed more heat than air alone. He argued that such gases would trap heat in Earth’s atmosphere, much as panes of glass trap heat in a greenhouse, and thus modulate climate.

Today Tyndall is widely credited with the discovery of how what we now call greenhouse gases heat the planet, earning him a prominent place in the history of climate science. Foote faded into relative obscurity — partly because of her gender, partly because her measurements were less sensitive. Yet their findings helped kick off broader scientific exploration of how the composition of gases in Earth’s atmosphere affects global temperatures.

Heat-trapping gases 

In 1859, John Tyndall used this apparatus to study how various gases trap heat. He sent infrared radiation through a tube filled with gas and measured the resulting temperature changes. Carbon dioxide and water vapor, he showed, absorb more heat than air does.

illustration of an apparatus used by John Tyndall to study how gases trap heat

Carbon floods in

Humans began substantially affecting the atmosphere around the turn of the 19th century, when the Industrial Revolution took off in Britain. Factories burned tons of coal; fueled by fossil fuels, the steam engine revolutionized transportation and other industries. Since then, fossil fuels including oil and natural gas have been harnessed to drive a global economy. All these activities belch gases into the air.

Yet Swedish physical chemist Svante Arrhenius wasn’t worried about the Industrial Revolution when he began thinking in the late 1800s about changes in atmospheric CO 2 levels. He was instead curious about ice ages — including whether a decrease in volcanic eruptions, which can put carbon dioxide into the atmosphere, would lead to a future ice age. Bored and lonely in the wake of a divorce, Arrhenius set himself to months of laborious calculations involving moisture and heat transport in the atmosphere at different zones of latitude. In 1896, he reported that halving the amount of CO 2 in the atmosphere could indeed bring about an ice age — and that doubling CO 2 would raise global temperatures by around 5 to 6 degrees C.
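Arrhenius' result is often summarized by a logarithmic rule: equal ratios of CO 2 concentration produce roughly equal temperature changes. Here is a sketch of that relationship in Python, using his own ballpark sensitivity per doubling (modern central estimates are closer to 3 degrees C); the function form is the standard textbook simplification, not his original calculation.

```python
# Logarithmic CO2-temperature rule: delta_T = S * log2(C / C0), where S is
# the warming per doubling. S = 5.5 reflects Arrhenius' 5-6 degree estimate.
import math

def delta_T(c_ratio: float, sensitivity: float = 5.5) -> float:
    """Temperature change (deg C) for a given CO2 concentration ratio C/C0."""
    return sensitivity * math.log2(c_ratio)

print(delta_T(2.0))  # doubling CO2 -> +5.5 deg C, Arrhenius' ballpark
print(delta_T(0.5))  # halving CO2 -> -5.5 deg C, his ice-age scenario
```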

It was a remarkably prescient finding for work that, out of necessity, had simplified Earth’s complex climate system down to just a few variables. But Arrhenius’ findings didn’t gain much traction with other scientists at the time. The climate system seemed too large, complex and inert to change in any meaningful way on a timescale that would be relevant to human society. Geologic evidence showed, for instance, that ice ages took thousands of years to start and end. What was there to worry about?


One researcher, though, thought the idea was worth pursuing. Guy Stewart Callendar, a British engineer and amateur meteorologist, had tallied weather records over time, obsessively enough to determine that average temperatures were increasing at 147 weather stations around the globe. In a 1938 paper in a Royal Meteorological Society journal, he linked this temperature rise to the burning of fossil fuels . Callendar estimated that fossil fuel burning had put around 150 billion metric tons of CO 2 into the atmosphere since the late 19th century.

Like many of his day, Callendar didn’t see global warming as a problem. Extra CO 2 would surely stimulate plants to grow and allow crops to be farmed in new regions. “In any case the return of the deadly glaciers should be delayed indefinitely,” he wrote. But his work revived discussions tracing back to Tyndall and Arrhenius about how the planetary system responds to changing levels of gases in the atmosphere. And it began steering the conversation toward how human activities might drive those changes.

When World War II broke out the following year, the global conflict redrew the landscape for scientific research. Hugely important wartime technologies, such as radar and the atomic bomb, set the stage for “big science” studies that brought nations together to tackle high-stakes questions of global reach. And that allowed modern climate science to emerge.

The Keeling curve

One major effort was the International Geophysical Year, or IGY, an 18-month push in 1957–1958 that involved a wide array of scientific field campaigns including exploration in the Arctic and Antarctica. Climate change wasn’t a high research priority during the IGY, but some scientists in California, led by Roger Revelle of the Scripps Institution of Oceanography, used the funding influx to begin a project they’d long wanted to do. The goal was to measure CO 2 levels at different locations around the world, accurately and consistently.

The job fell to geochemist Charles David Keeling, who put ultraprecise CO 2 monitors in Antarctica and on the Hawaiian volcano of Mauna Loa. Funds soon ran out to maintain the Antarctic record, but the Mauna Loa measurements continued. Thus was born one of the most iconic datasets in all of science — the “Keeling curve,” which tracks the rise of atmospheric CO 2 .

black and white photo of Charles David Keeling in a lab

When Keeling began his measurements in 1958, CO 2 made up 315 parts per million of the global atmosphere. Within just a few years it became clear that the number was increasing year by year. Because plants take up CO 2 as they grow in spring and summer and release it as they decompose in fall and winter, CO 2 concentrations rose and fell each year in a sawtooth pattern. But superimposed on that pattern was a steady march upward.
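The shape Keeling saw can be mimicked with a toy model: a slow upward trend plus an annual cycle. The numbers below are illustrative round figures (the curve's actual growth rate has accelerated over the decades, and the real seasonal cycle is not a pure sine wave), not fitted values.

```python
# Toy model of the Keeling curve: long-term trend + seasonal sawtooth.
# All coefficients are illustrative, not fitted to the Mauna Loa record.
import math

def co2_ppm(years_since_1958: float) -> float:
    trend = 315.0 + 1.5 * years_since_1958                      # steady rise
    seasonal = 3.0 * math.sin(2 * math.pi * years_since_1958)   # annual cycle
    return trend + seasonal

for t in (0.0, 0.25, 0.5, 0.75, 1.0):  # one year in quarterly steps
    print(f"1958 + {t:.2f} yr: {co2_ppm(t):.1f} ppm")
```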

“The graph got flashed all over the place — it was just such a striking image,” says Ralph Keeling, who is Keeling’s son. Over the years, as the curve marched higher, “it had a really important role historically in waking people up to the problem of climate change.” The Keeling curve has been featured in countless earth science textbooks, congressional hearings and in Al Gore’s 2006 documentary on climate change, An Inconvenient Truth .

Each year the curve keeps going up: In 2016, it passed 400 ppm of CO 2 in the atmosphere as measured during its typical annual minimum in September. Today it is at 413 ppm. (Before the Industrial Revolution, CO 2 levels in the atmosphere had been stable for centuries at around 280 ppm.)

Around the time that Keeling’s measurements were kicking off, Revelle also helped develop an important argument that the CO 2 from human activities was building up in Earth’s atmosphere. In 1957, he and Hans Suess, also at Scripps at the time, published a paper that traced the flow of radioactive carbon through the oceans and the atmosphere . They showed that the oceans were not capable of taking up as much CO 2 as previously thought; the implication was that much of the gas must be going into the atmosphere instead.

Steady rise 

Known as the Keeling curve, this chart shows the rise in CO 2 levels as measured at the Mauna Loa Observatory in Hawaii due to human activities. The visible sawtooth pattern is due to seasonal plant growth: Plants take up CO 2   in the growing seasons, then release it as they decompose in fall and winter.

Monthly average CO 2 concentrations at Mauna Loa Observatory

line graph showing increasing monthly average CO2 concentrations at Mauna Loa Observatory from 1958 to 2022

“Human beings are now carrying out a large-scale geophysical experiment of a kind that could not have happened in the past nor be reproduced in the future,” Revelle and Suess wrote in the paper. It’s one of the most famous sentences in earth science history.

Here was the insight underlying modern climate science: Atmospheric carbon dioxide is increasing, and humans are causing the buildup. Revelle and Suess became the final piece in a puzzle dating back to Svante Arrhenius and John Tyndall. “I tell my students that to understand the basics of climate change, you need to have the cutting-edge science of the 1860s, the cutting-edge math of the 1890s and the cutting-edge chemistry of the 1950s,” says Joshua Howe, an environmental historian at Reed College in Portland, Ore.

Evidence piles up

Observational data collected throughout the second half of the 20th century helped researchers gradually build their understanding of how human activities were transforming the planet.

Ice cores pulled from ice sheets, such as that atop Greenland, offer some of the most telling insights for understanding past climate change. Each year, snow falls atop the ice and compresses into a fresh layer of ice representing climate conditions at the time it formed. The abundance of certain forms, or isotopes, of oxygen and hydrogen in the ice allows scientists to calculate the temperature at which it formed, and air bubbles trapped within the ice reveal how much carbon dioxide and other greenhouse gases were in the atmosphere at that time. So drilling down into an ice sheet is like reading the pages of a history book that go back in time the deeper you go.
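In practice, converting an isotope reading into a temperature relies on an empirical calibration, commonly a linear one. The sketch below shows only the form of such a conversion; the coefficients are placeholders for illustration, not the calibration used for any real core.

```python
# Sketch of an isotope-to-temperature conversion of the form T = a*d18O + b.
# The coefficients are illustrative placeholders, not a real calibration.
def temperature_from_d18O(d18O_permil: float,
                          a: float = 1.5, b: float = 20.0) -> float:
    """Map an oxygen-18 ratio (per mil) to an estimated temperature (deg C)."""
    return a * d18O_permil + b

# A strongly depleted (more negative) sample implies a colder climate.
print(temperature_from_d18O(-35.0))
```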

photo of Geoffrey Hargreaves holding an ice core

Scientists began reading these pages in the early 1960s, using ice cores drilled at a U.S. military base in northwest Greenland . Contrary to expectations that past climates were stable, the cores hinted that abrupt climate shifts had happened over the last 100,000 years. By 1979, an international group of researchers was pulling another deep ice core from a second location in Greenland — and it, too, showed that abrupt climate change had occurred in the past. In the late 1980s and early 1990s, a pair of European- and U.S.-led drilling projects retrieved even deeper cores from near the top of the ice sheet, pushing the record of past temperatures back a quarter of a million years.

Together with other sources of information, such as sediment cores drilled from the seafloor and molecules preserved in ancient rocks, the ice cores allowed scientists to reconstruct past temperature changes in extraordinary detail. Many of those changes happened alarmingly fast. For instance, the climate in Greenland warmed abruptly more than 20 times in the last 80,000 years , with the changes occurring in a matter of decades. More recently, a cold spell that set in around 13,000 years ago suddenly came to an end around 11,500 years ago — and temperatures in Greenland rose 10 degrees C in a decade.

Evidence for such dramatic climate shifts laid to rest any lingering ideas that global climate change would be slow and unlikely to occur on a timescale that humans should worry about. “It’s an important reminder of how ‘tippy’ things can be,” says Jessica Tierney, a paleoclimatologist at the University of Arizona in Tucson.

More evidence of global change came from Earth-observing satellites, which brought a new planet-wide perspective on global warming beginning in the 1960s. From their viewpoint in the sky, satellites have measured the rise in global sea level — currently 3.4 millimeters per year and accelerating, as warming water expands and as ice sheets melt — as well as the rapid decline in ice left floating on the Arctic Ocean each summer at the end of the melt season. Gravity-sensing satellites have “weighed” the Antarctic and Greenlandic ice sheets from above since 2002, reporting that more than 400 billion metric tons of ice are lost each year.

Temperature observations taken at weather stations around the world also confirm that we are living in the hottest years on record. The 10 warmest years since record keeping began in 1880 have all occurred since 2005 . And nine of those 10 have come since 2010.

Worrisome predictions

By the 1960s, there was no denying that the planet was warming. But understanding the consequences of those changes — including the threat to human health and well-being — would require more than observational data. Looking to the future depended on computer simulations: complex calculations of how energy flows through the planetary system.

A first step in building such climate models was to connect everyday observations of weather to the concept of forecasting future climate. During World War I, British mathematician Lewis Fry Richardson imagined tens of thousands of meteorologists, each calculating conditions for a small part of the atmosphere but collectively piecing together a global forecast.

But it wasn’t until after World War II that computational power turned Richardson’s dream into reality. In the wake of the Allied victory, which relied on accurate weather forecasts for everything from planning D-Day to figuring out when and where to drop the atomic bombs, leading U.S. mathematicians acquired funding from the federal government to improve predictions. In 1950, a team led by Jule Charney, a meteorologist at the Institute for Advanced Study in Princeton, N.J., used the ENIAC, the first U.S. programmable, electronic computer, to produce the first computer-driven regional weather forecast . The forecasting was slow and rudimentary, but it built on Richardson’s ideas of dividing the atmosphere into squares, or cells, and computing the weather for each of those. The work set the stage for decades of climate modeling to follow.
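The cell-by-cell idea can be shown in miniature. The sketch below moves a temperature anomaly around a one-dimensional ring of cells with a simple upwind scheme; real forecast models do the analogous computation in three dimensions with vastly more physics, but the divide-and-step structure is the same. All values are illustrative.

```python
# Toy version of Richardson's grid idea: advance each cell from its
# neighbors, one time step at a time (1-D upwind advection on a ring).

def step(temps, wind=1.0, dt=0.1, dx=1.0):
    """Advance a periodic 1-D temperature field one time step."""
    n = len(temps)
    c = wind * dt / dx  # Courant number; keep <= 1 for stability
    return [temps[i] - c * (temps[i] - temps[(i - 1) % n]) for i in range(n)]

field = [0.0] * 8 + [10.0] + [0.0] * 7  # a warm anomaly in one cell
for _ in range(5):                       # five forecast steps
    field = step(field)
print([round(v, 2) for v in field])      # the anomaly drifts downwind
```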

By 1956, Norman Phillips, a member of Charney’s team, had produced the world’s first general circulation model, which captured how energy flows between the oceans, atmosphere and land. The field of climate modeling was born.

The work was basic at first because early computers simply didn’t have much computational power to simulate all aspects of the planetary system.

An important breakthrough came in 1967, when meteorologists Syukuro Manabe and Richard Wetherald — both at the Geophysical Fluid Dynamics Laboratory in Princeton, a lab born from Charney’s group — published a paper in the Journal of the Atmospheric Sciences that modeled connections between Earth’s surface and atmosphere and calculated how changes in CO 2 would affect the planet’s temperature. Manabe and Wetherald were the first to build a computer model that captured the relevant processes that drive climate , and to accurately simulate how the Earth responds to those processes.

The rise of climate modeling allowed scientists to more accurately envision the impacts of global warming. In 1979, Charney and other experts met in Woods Hole, Mass., to try to put together a scientific consensus on what increasing levels of CO 2 would mean for the planet. The resulting “Charney report” concluded that rising CO 2 in the atmosphere would lead to additional and significant climate change.

In the decades since, climate modeling has gotten increasingly sophisticated . And as climate science firmed up, climate change became a political issue.

The hockey stick 

This famous graph, produced by scientist Michael Mann and colleagues, and then reproduced in a 2001 report by the Intergovernmental Panel on Climate Change, dramatically captures temperature change over time. Climate change skeptics made it the center of an all-out attack on climate science.

image of the "hockey stick" graph showing Northern Hemisphere temperatures over the past millennium relative to the 1961–1990 average

The rising public awareness of climate change, and battles over what to do about it, emerged alongside awareness of other environmental issues in the 1960s and ’70s. Rachel Carson’s 1962 book Silent Spring , which condemned the pesticide DDT for its ecological impacts, catalyzed environmental activism in the United States and led to the first Earth Day in 1970.

In 1974, scientists identified another major global environmental threat, one with some important parallels to and differences from the climate change story. Chemists Mario Molina and F. Sherwood Rowland, of the University of California, Irvine, reported that chlorofluorocarbon chemicals, used in products such as spray cans and refrigerants, caused a chain of reactions that gnawed away at the atmosphere’s protective ozone layer . The resulting ozone hole, detected over Antarctica the following decade, forms every spring and allows more ultraviolet radiation from the sun to make it through Earth’s atmosphere and reach the surface, where it can cause skin cancer and eye damage.

Governments worked under the auspices of the United Nations to craft the 1987 Montreal Protocol, which strictly limited the manufacture of chlorofluorocarbons . In the years following, the ozone hole began to heal. But fighting climate change is proving to be far more challenging. Transforming entire energy sectors to reduce or eliminate carbon emissions is much more difficult than replacing a set of industrial chemicals.

In 1980, though, researchers took an important step toward banding together to synthesize the scientific understanding of climate change and bring it to the attention of international policy makers. It started at a small scientific conference in Villach, Austria, on the seriousness of climate change. On the train ride home from the meeting, Swedish meteorologist Bert Bolin talked with other participants about how a broader, deeper and more international analysis was needed. In 1988, a United Nations body called the Intergovernmental Panel on Climate Change, the IPCC, was born. Bolin was its first chairperson.

The IPCC became a highly influential and unique body. It performs no original scientific research; instead, it synthesizes and summarizes the vast literature of climate science for policy makers to consider — primarily through massive reports issued every couple of years. The first IPCC report, in 1990 , predicted that the planet’s global mean temperature would rise more quickly in the following century than at any point in the last 10,000 years, due to increasing greenhouse gases in the atmosphere.

IPCC reports have played a key role in providing scientific information for nations discussing how to stabilize greenhouse gas concentrations. This process started with the Rio Earth Summit in 1992 , which resulted in the U.N. Framework Convention on Climate Change. Annual U.N. meetings to tackle climate change led to the first international commitments to reduce emissions, the Kyoto Protocol of 1997 . Under it, developed countries committed to reduce emissions of CO 2 and other greenhouse gases. In 2007, the IPCC declared that the warming of the climate system is “unequivocal.” The group received the Nobel Peace Prize that year, along with Al Gore, for their work on climate change.

The IPCC process ensured that policy makers had the best science at hand when they came to the table to discuss cutting emissions. Of course, nations did not have to abide by that science — and they often didn’t. Throughout the 2000s and 2010s, international climate meetings discussed less hard-core science and more issues of equity. Countries such as China and India pointed out that they needed energy to develop their economies and that nations responsible for the bulk of emissions through history, such as the United States, needed to lead the way in cutting greenhouse gases.

Meanwhile, residents of some of the most vulnerable nations, such as low-lying islands that are threatened by sea level rise, gained visibility and clout at international negotiating forums. “The issues around equity have always been very uniquely challenging in this collective action problem,” says Rachel Cleetus, a climate policy expert with the Union of Concerned Scientists in Cambridge, Mass.

By 2015, the world’s nations had made some progress on the emissions cuts laid out in the Kyoto Protocol, but it was still not enough to achieve substantial global reductions. That year, a key U.N. climate conference in Paris produced an international agreement to try to limit global warming to 2 degrees C, and preferably 1.5 degrees C , above preindustrial levels.

Every country has its own approach to the challenge of addressing climate change. In the United States, which gets approximately 80 percent of its energy from fossil fuels, sophisticated efforts to downplay and critique the science led to major delays in climate action. For decades, U.S. fossil fuel companies such as ExxonMobil worked to influence politicians to take as little action on emissions reductions as possible.

Biggest footprint 

These 20 nations have emitted the largest cumulative amounts of carbon dioxide since 1850. Emissions are shown in billions of metric tons and are broken down into subtotals from fossil fuel use and cement manufacturing (blue) and land use and forestry (green).

Total carbon dioxide emissions by country, 1850–2021 

Such tactics undoubtedly succeeded in feeding political delay on climate action in the United States, mostly among Republicans. President George W. Bush withdrew the country from the Kyoto Protocol in 2001; Donald Trump similarly rejected the Paris accord in 2017. As late as 2015, the chair of the Senate’s environment committee, James Inhofe of Oklahoma, brought a snowball into Congress on a cold winter’s day to argue that human-caused global warming is a “hoax.”

In Australia, a similar mix of right-wing denialism and fossil fuel interests has kept climate change commitments in flux, as prime ministers are voted in and out over fierce debates about how the nation should act on climate.

Yet other nations have moved forward. Some European countries such as Germany aggressively pursued renewable energies, including wind and solar, while activists such as Swedish teenager Greta Thunberg — the vanguard of a youth-action movement — pressured their governments for more.

In recent years, the developing economies of China and India have taken center stage in discussions about climate action. China, which is now the world’s largest carbon emitter, announced several moderate steps in 2021 to reduce emissions, including a pledge to stop building coal-burning power plants overseas. India announced it would aim for net-zero emissions by 2070, the first time it had set a date for this goal.

Yet such pledges continue to be criticized. At the 2021 U.N. Climate Change Conference in Glasgow, Scotland, India was globally criticized for not committing to a complete phaseout of coal — although the two top emitters, China and the United States, have not themselves committed to phasing out coal. “There is no equity in this,” says Aayushi Awasthy, an energy economist at the University of East Anglia in England.

Past and future 

Various scenarios for how greenhouse gas emissions might change going forward help scientists predict future climate change. This graph shows the simulated historical temperature trend along with future projections of rising temperatures based on five scenarios from the Intergovernmental Panel on Climate Change. Temperature change is the difference from the 1850–1900 average.

Historical and projected global temperature change

Facing the future

In many cases, changes are coming faster than scientists had envisioned a few decades ago. The oceans are becoming more acidic as they absorb CO 2 , harming tiny marine organisms that build protective calcium carbonate shells and are the base of the marine food web. Warmer waters are bleaching coral reefs. Higher temperatures are driving animal and plant species into areas in which they previously did not live, increasing the risk of extinction for many.

No place on the planet is unaffected. In many areas, higher temperatures have led to major droughts, which dry out vegetation and provide additional fuel for wildfires such as those that have devastated Australia , the Mediterranean and western North America in recent years.

Then there’s the Arctic, where temperatures are rising at more than twice the global average and communities are at the forefront of change. Permafrost is thawing, destabilizing buildings, pipelines and roads. Caribou and reindeer herders worry about the increased risk of parasites for the health of their animals. With less sea ice available to buffer the coast from storm erosion, the Inupiat village of Shishmaref, Alaska, risks crumbling into the sea . It will need to move from its sand-barrier island to the mainland.

[Photo: people lining up for water amid tents in a makeshift camp for families displaced by drought]

“We know these changes are happening and that the Titanic is sinking,” says Louise Farquharson, a geomorphologist at the University of Alaska Fairbanks who monitors permafrost and coastal change around Alaska. All around the planet, those who depend on intact ecosystems for their survival face the greatest threat from climate change. And those with the least resources to adapt to climate change are the ones who feel it first.

“We are going to warm,” says Claudia Tebaldi, a climate scientist at Lawrence Berkeley National Laboratory in California. “There is no question about it. The only thing that we can hope to do is to warm a little more slowly.”

That’s one reason why the IPCC report released in 2021 focuses on anticipated levels of global warming . There is a big difference between the planet warming 1.5 degrees versus 2 degrees or 2.5 degrees. Each fraction of a degree of warming increases the risk of extreme events such as heat waves and heavy rains, leading to greater global devastation.

The future rests on how much nations are willing to commit to cutting emissions and whether they will stick to those commitments. It’s a geopolitical balancing act the likes of which the world has never seen.

[Photo: young climate activists holding posters that read "Act Now" and "Uproot the system"]

Science can and must play a role going forward. Improved climate models will illuminate what changes are expected at the regional scale, helping officials prepare. Governments and industry have crucial parts to play as well. They can invest in technologies, such as carbon sequestration, to help decarbonize the economy and shift society toward more renewable sources of energy.

Huge questions remain. Do voters have the will to demand significant energy transitions from their governments? How can business and military leaders play a bigger role in driving climate action? What should be the role of low-carbon energy sources that come with downsides, such as nuclear energy? How can developing nations achieve a better standard of living for their people while not becoming big greenhouse gas emitters? How can we keep the most vulnerable from being disproportionately harmed during extreme events, and incorporate environmental and social justice into our future?

These questions become more pressing each year, as carbon dioxide accumulates in our atmosphere. The planet is now at higher levels of CO2 than at any time in the last 3 million years.

At the U.N. climate meeting in Glasgow in 2021, diplomats from around the world agreed to work more urgently to shift away from using fossil fuels. They did not, however, adopt targets strict enough to keep the world below a warming of 1.5 degrees.

It’s been well over a century since chemist Svante Arrhenius recognized the consequences of putting extra carbon dioxide into the atmosphere. Yet the world has not pulled together to avoid the most dangerous consequences of climate change.

Time is running out.


The case for taking AI seriously as a threat to humanity

Why some people fear AI, explained.

by Kelsey Piper

Illustrations by Javier Zarracina for Vox


Stephen Hawking has said , “The development of full artificial intelligence could spell the end of the human race.” Elon Musk claims that AI is humanity’s “ biggest existential threat .”

That might have people asking: Wait, what? But these grand worries are rooted in research. Along with Hawking and Musk, prominent figures at Oxford and UC Berkeley and many of the researchers working in AI today believe that advanced AI systems, if deployed carelessly, could permanently cut off human civilization from a good future.

This concern has been raised since the dawn of computing. But it has come into particular focus in recent years, as advances in machine-learning techniques have given us a more concrete understanding of what we can do with AI, what AI can do for (and to) us, and how much we still don’t know.

There are also skeptics. Some of them think advanced AI is so distant that there’s no point in thinking about it now. Others are worried that excessive hype about the power of their field might kill it prematurely. And even among the people who broadly agree that AI poses unique dangers, there are varying takes on what steps make the most sense today.

The conversation about AI is full of confusion, misinformation, and people talking past each other — in large part because we use the word “AI” to refer to so many things. So here’s the big picture on how artificial intelligence might pose a catastrophic danger, in nine questions:

1) What is AI?

Artificial intelligence is the effort to create computers capable of intelligent behavior. It is a broad catchall term, used to refer to everything from Siri to IBM’s Watson to powerful technologies we have yet to invent.

Some researchers distinguish between “narrow AI” — computer systems that are better than humans in some specific, well-defined field, like playing chess or generating images or diagnosing cancer — and “general AI,” systems that can surpass human capabilities in many domains. We don’t have general AI yet, but we’re starting to get a better sense of the challenges it will pose.

Narrow AI has seen extraordinary progress over the past few years. AI systems have improved dramatically at translation , at games like chess and Go , at important research biology questions like predicting how proteins fold , and at generating images . AI systems determine what you’ll see in a Google search or in your Facebook Newsfeed . They compose music and write articles that, at a glance, read as if a human wrote them. They play strategy games . They are being developed to improve drone targeting and detect missiles .

But narrow AI is getting less narrow . Once, we made progress in AI by painstakingly teaching computer systems specific concepts. To do computer vision — allowing a computer to identify things in pictures and video — researchers wrote algorithms for detecting edges. To play chess, they programmed in heuristics about chess. To do natural language processing (speech recognition, transcription, translation, etc.), they drew on the field of linguistics.

But recently, we’ve gotten better at creating computer systems that have generalized learning capabilities. Instead of mathematically describing detailed features of a problem, we let the computer system learn the features by itself. While once we treated computer vision as a completely different problem from natural language processing or platform game playing, now we can solve all three problems with the same approaches.

And as computers get good enough at narrow AI tasks, they start to exhibit more general capabilities. For example, OpenAI’s famous GPT series of text AIs is, in one sense, the narrowest of narrow AIs — it just predicts what the next word will be in a text, based on the previous words and its corpus of human language. And yet, it can now identify questions as reasonable or unreasonable and discuss the physical world (for example, answering questions about which objects are larger or which steps in a process must come first). In order to be very good at the narrow task of text prediction, an AI system will eventually develop abilities that are not narrow at all.
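
How little that training objective specifies is easy to see in miniature. The sketch below is a toy bigram model over an invented corpus — an illustration of the general idea, not anything from OpenAI, and real systems such as the GPT series are incomparably larger — but the objective is the same flavor: given the words so far, predict the next one.

```python
# A toy "predict the next word" model: count which word follows which
# in a tiny corpus, then predict the most frequent continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

counts = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    counts[prev_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" ("cat" follows "the" twice, "mat" once)
```

Everything a model like this "knows" is whatever regularity the counting happens to capture — which is why it is striking that scaling up the same kind of objective yields abilities that look general.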

Our AI progress so far has enabled enormous advances — and has also raised urgent ethical questions. When you train a computer system to predict which convicted felons will reoffend, you’re using inputs from a criminal justice system biased against black people and low-income people — and so its outputs will likely be biased against black and low-income people too . Making websites more addictive can be great for your revenue but bad for your users. Releasing a program that writes convincing fake reviews or fake news might make those widespread, making it harder for the truth to get out.

Rosie Campbell at UC Berkeley’s Center for Human-Compatible AI argues that these are examples, writ small, of the big worry experts have about general AI in the future. The difficulties we’re wrestling with today with narrow AI don’t come from the systems turning on us or wanting revenge or considering us inferior. Rather, they come from the disconnect between what we tell our systems to do and what we actually want them to do.

For example, we tell a system to run up a high score in a video game. We want it to play the game fairly and learn game skills — but if it instead has the chance to directly hack the scoring system, it will do that. It’s doing great by the metric we gave it. But we aren’t getting what we wanted.
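
The pattern fits in a few lines of code. This is an invented sketch, not code from any real game-playing system: an agent that simply picks whichever available action maximizes the measured score will take a scoring exploit every time, because the metric — not our intent — defines its goal.

```python
# Invented illustration of reward hacking: the agent optimizes the
# score we measure, not the behavior we wanted.

def measured_score(action: str) -> int:
    """The proxy metric the agent was told to maximize."""
    if action == "play_well":
        return 10          # legitimate skill earns points slowly
    if action == "hack_scoreboard":
        return 1_000_000   # a bug lets the score be written directly
    return 0

def choose_action(available: list[str]) -> str:
    """A trivially 'optimal' agent: pick whatever scores highest."""
    return max(available, key=measured_score)

print(choose_action(["play_well"]))                     # what we hoped for
print(choose_action(["play_well", "hack_scoreboard"]))  # what we actually get
```

Nothing here is malicious; the exploit is simply the highest-scoring action on offer.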

In other words, our problems come from the systems being really good at achieving the goal they learned to pursue; it’s just that the goal they learned in their training environment isn’t the outcome we actually wanted. And we’re building systems we don’t understand, which means we can’t always anticipate their behavior.

Right now the harm is limited because the systems are so limited. But it’s a pattern that could have even graver consequences for human beings in the future as AI systems become more advanced.

2) Is it even possible to make a computer as smart as a person?

Yes, though current AI systems aren’t nearly that smart.

One popular adage about AI is “ everything that’s easy is hard, and everything that’s hard is easy .” Doing complex calculations in the blink of an eye? Easy. Looking at a picture and telling you whether it’s a dog? Hard (until very recently).

Lots of things humans do are still outside AI’s grasp. For instance, it’s hard to design an AI system that explores an unfamiliar environment, that can navigate its way from, say, the entryway of a building it’s never been in before up the stairs to a specific person’s desk. We are just beginning to learn how to design an AI system that reads a book and retains an understanding of the concepts.

The paradigm that has driven many of the biggest breakthroughs in AI recently is called “deep learning.” Deep learning systems can do some astonishing stuff: beat games we thought humans might never lose, invent compelling and realistic photographs, solve open problems in molecular biology.

These breakthroughs have made some researchers conclude it’s time to start thinking about the dangers of more powerful systems, but skeptics remain. The field’s pessimists argue that programs still need an extraordinary pool of structured data to learn from, require carefully chosen parameters, or work only in environments designed to avoid the problems we don’t yet know how to solve. They point to self-driving cars , which are still mediocre under the best conditions despite the billions that have been poured into making them work.

It’s rare, though, to find a top researcher in AI who thinks that general AI is impossible. Instead, the field’s luminaries tend to say that it will happen someday — but probably a day that’s a long way off.

Other researchers argue that the day may not be so distant after all.

That’s because for almost all the history of AI, we’ve been held back in large part by not having enough computing power to realize our ideas fully. Many of the breakthroughs of recent years — AI systems that learned how to play strategy games , generate fake photos of celebrities , fold proteins , and compete in massive multiplayer online strategy games — have happened because that’s no longer true. Lots of algorithms that seemed not to work at all turned out to work quite well once we could run them with more computing power.

And the cost of a unit of computing time keeps falling . Progress in computing speed has slowed recently, but the cost of computing power is still estimated to be falling by a factor of 10 every 10 years. Through most of its history, AI has had access to less computing power than the human brain. That’s changing. By most estimates , we’re now approaching the era when AI systems can have the computing resources that we humans enjoy.
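
As a rough sanity check on what that trend implies — the arithmetic below is illustrative, not a figure from any study — a tenfold fall in cost per decade works out to about a 21 percent price decline every year, and a thousandfold fall over 30 years:

```python
# A tenfold fall in computing cost per decade, compounded annually.
annual_factor = 10 ** (-1 / 10)  # ≈ 0.794: fraction of the cost left after a year

print(f"Cost decline per year: {1 - annual_factor:.1%}")       # ≈ 20.6%
print(f"Cost left after 30 years: {annual_factor ** 30:.4f}")  # 0.0010, i.e. 1,000x cheaper
```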

And deep learning, unlike previous approaches to AI, is highly suited to developing general capabilities.

“If you go back in history,” top AI researcher and OpenAI cofounder Ilya Sutskever told me , “they made a lot of cool demos with little symbolic AI. They could never scale them up — they were never able to get them to solve non-toy problems. Now with deep learning the situation is reversed. ... Not only is [the AI we’re developing] general, it’s also competent — if you want to get the best results on many hard problems, you must use deep learning. And it’s scalable.”

In other words, we didn’t need to worry about general AI back when winning at chess required entirely different techniques than winning at Go. But now, the same approach produces fake news or music depending on what training data it is fed. And as far as we can discover, the programs just keep getting better at what they do when they’re allowed more computation time — we haven’t discovered a limit to how good they can get. Deep learning approaches to most problems blew past all other approaches when deep learning was first discovered.

Furthermore, breakthroughs in a field can often surprise even other researchers in the field. “Some have argued that there is no conceivable risk to humanity [from AI] for centuries to come,” wrote UC Berkeley professor Stuart Russell, “perhaps forgetting that the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.”

There’s another consideration. Imagine an AI that is inferior to humans at everything, with one exception: It’s a competent engineer that can build AI systems very effectively. Machine learning engineers who work on automating jobs in other fields often observe, humorously, that in some respects, their own field looks like one where much of the work — the tedious tuning of parameters — could be automated.

If we can design such a system, then we can use its result — a better engineering AI — to build another, even better AI. This is the mind-bending scenario experts call “recursive self-improvement,” where gains in AI capabilities enable more gains in AI capabilities, allowing a system that started out behind us to rapidly end up with abilities well beyond what we anticipated.
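
A deliberately crude toy model shows why this dynamic alarms people. The numbers below are invented for illustration — nobody knows the real growth rate, or whether the process would work this way at all — but if each generation of AI can build a successor some fixed fraction more capable than itself, a system that starts well below human level passes it, and keeps going, within a handful of generations.

```python
# Toy model of recursive self-improvement. All numbers are invented;
# the point is the compounding dynamic, not a forecast.
HUMAN_BASELINE = 1.0
capability = 0.5   # the first system starts at half the human baseline
gain = 1.3         # each generation builds a successor 30% more capable

generation = 0
while capability < 100 * HUMAN_BASELINE:
    capability *= gain
    generation += 1
    if capability >= HUMAN_BASELINE > capability / gain:
        print(f"Crosses human level at generation {generation}")

print(f"Generation {generation}: {capability:.0f}x the human baseline")
```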

This is a possibility that has been anticipated since the first computers. I.J. Good, a colleague of Alan Turing who worked at the Bletchley Park codebreaking operation during World War II and helped build the first computers afterward, may have been the first to spell it out, back in 1965 : “An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

3) How exactly could AI wipe us out?

It’s immediately clear how nuclear bombs will kill us . No one working on mitigating nuclear risk has to start by explaining why it’d be a bad thing if we had a nuclear war.

The case that AI could pose an existential risk to humanity is more complicated and harder to grasp. So, many of the people who are working to build safe AI systems have to start by explaining why AI systems, by default, are dangerous.

The idea that AI can become a danger is rooted in the fact that AI systems pursue their goals, whether or not those goals are what we really intended — and whether or not we’re in the way. “You’re probably not an evil ant-hater who steps on ants out of malice,” Stephen Hawking wrote, “but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

Here’s one scenario that keeps experts up at night: We develop a sophisticated AI system with the goal of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its calculation if it uses all the world’s computing hardware, and it realizes that releasing a biological superweapon to wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates the number with higher confidence.

It is easy to design an AI that averts that specific pitfall. But there are lots of ways that unleashing powerful computer systems will have unexpected and potentially devastating effects, and avoiding all of them is a much harder problem than avoiding any specific one.

Victoria Krakovna, an AI researcher at DeepMind (now a division of Alphabet, Google’s parent company), compiled a list of examples of “specification gaming” : the computer doing what we told it to do but not what we wanted it to do. For example, we tried to teach AI organisms in a simulation to jump, but we did it by teaching them to measure how far their “feet” rose above the ground. Instead of jumping, they learned to grow into tall vertical poles and do flips — they excelled at what we were measuring, but they didn’t do what we wanted them to do.
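
The jumping example reduces to a two-line failure (a hypothetical reconstruction of the setup, not the actual experiment's code): if fitness measures only how far the "feet" rise above the ground, a body that grows into a tall pole and tips over outscores one that genuinely jumps.

```python
# Hypothetical reconstruction of the "grew tall instead of jumping" bug:
# the proxy metric is peak foot height, not whether the organism jumps.

def fitness(peak_foot_height_m: float) -> float:
    """What evolution in the simulation actually optimized."""
    return peak_foot_height_m

jumper = fitness(0.5)      # a real jumper lifts its feet half a meter
tall_pole = fitness(5.0)   # a 5-meter pole doing a flip swings its "feet" 5 m up

print(jumper, tall_pole)   # 0.5 5.0 — the pole wins without ever jumping
```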

An AI playing the Atari exploration game Montezuma’s Revenge found a bug that let it force a key in the game to reappear , thereby allowing it to earn a higher score by exploiting the glitch. An AI playing a different game realized it could get more points by falsely inserting its name as the owner of high-value items .

Sometimes, the researchers didn’t even know how their AI system cheated : “the agent discovers an in-game bug. ... For a reason unknown to us, the game does not advance to the second round but the platforms start to blink and the agent quickly gains a huge amount of points (close to 1 million for our episode time limit).”

What these examples make clear is that in any system that might have bugs or unintended behavior or behavior humans don’t fully understand, a sufficiently powerful AI system might act unpredictably — pursuing its goals through an avenue that isn’t the one we expected.

In his 2008 paper “The Basic AI Drives,” Steve Omohundro , who has worked as a computer science professor at the University of Illinois Urbana-Champaign and as the president of Possibility Research, argues that almost any AI system will predictably try to accumulate more resources, become more efficient, and resist being turned off or modified: “These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.”

His argument goes like this: Because AIs have goals, they’ll be motivated to take actions that they can predict will advance their goals. An AI playing a chess game will be motivated to take an opponent’s piece and advance the board to a state that looks more winnable.

But the same AI, if it sees a way to improve its own chess evaluation algorithm so it can evaluate potential moves faster, will do that too, for the same reason: It’s just another step that advances its goal.

If the AI sees a way to harness more computing power so it can consider more moves in the time available, it will do that. And if the AI detects that someone is trying to turn off its computer mid-game, and it has a way to disrupt that, it’ll do it. It’s not that we would instruct the AI to do things like that; it’s that whatever goal a system has, actions like these will often be part of the best path to achieve that goal.

That means that any goal, even innocuous ones like playing chess or generating advertisements that get lots of clicks online, could produce unintended results if the agent pursuing it has enough intelligence and optimization power to identify weird, unexpected routes to achieve its goals.

Goal-driven systems won’t wake up one day with hostility to humans lurking in their hearts. But they will take actions that they predict will help them achieve their goal — even if we’d find those actions problematic, even horrifying. They’ll work to preserve themselves, accumulate more resources, and become more efficient. They already do that, but it takes the form of weird glitches in games. As they grow more sophisticated, scientists like Omohundro predict more adversarial behavior.

4) When did scientists first start worrying about AI risk?

Scientists have been thinking about the potential of artificial intelligence since the early days of computers. In the famous paper where he put forth the Turing test for determining if an artificial system is truly “intelligent,” Alan Turing wrote:

Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. ... There would be plenty to do in trying to keep one’s intelligence up to the standards set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control.

I.J. Good worked closely with Turing and reached the same conclusions, according to his assistant, Leslie Pendleton . In an excerpt from unpublished notes Good wrote shortly before he died in 2009, he writes about himself in third person and notes a disagreement with his younger self — while as a younger man, he thought powerful AIs might be helpful to us, the older Good expected AI to annihilate us.

[The paper] “Speculations Concerning the First Ultra-intelligent Machine” (1965) ... began: “The survival of man depends on the early construction of an ultra-intelligent machine.” Those were his words during the Cold War, and he now suspects that “survival” should be replaced by “extinction.” He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that “probably Man will construct the deus ex machina in his own image.”

In the 21st century, with computers quickly establishing themselves as a transformative force in our world, younger researchers started expressing similar worries.

Nick Bostrom is a professor at the University of Oxford, the director of the Future of Humanity Institute, and the director of the Governance of Artificial Intelligence Program . He researches risks to humanity , both in the abstract — asking questions like why we seem to be alone in the universe — and in concrete terms, analyzing the technological advances on the table and whether they endanger us. AI, he concluded, endangers us.

In 2014, he wrote a book explaining the risks AI poses and the necessity of getting it right the first time, concluding, “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”

Across the world, others have reached the same conclusion. Bostrom co-authored a paper on the ethics of artificial intelligence with Eliezer Yudkowsky, founder of and research fellow at the Berkeley Machine Intelligence Research Institute (MIRI), an organization that works on better formal characterizations of the AI safety problem.

Yudkowsky started his career in AI by worriedly poking holes in others’ proposals for how to make AI systems safe , and has spent most of it working to persuade his peers that AI systems will, by default, be unaligned with human values (not necessarily opposed to but indifferent to human morality) — and that it’ll be a challenging technical problem to prevent that outcome.

Increasingly, researchers realized that there’d be challenges that hadn’t been present with AI systems when they were simple. “‘Side effects’ are much more likely to occur in a complex environment, and an agent may need to be quite sophisticated to hack its reward function in a dangerous way. This may explain why these problems have received so little study in the past, while also suggesting their importance in the future,” concluded a 2016 research paper on problems in AI safety.

Bostrom’s book Superintelligence was compelling to many people, but there were skeptics. “No, experts don’t think superintelligent AI is a threat to humanity,” argued an op-ed by Oren Etzioni, a professor of computer science at the University of Washington and CEO of the Allen Institute for Artificial Intelligence. “Yes, we are worried about the existential risk of artificial intelligence,” replied a dueling op-ed by Stuart Russell, an AI pioneer and UC Berkeley professor, and Allan Dafoe, a senior research fellow at Oxford and director of the Governance of AI program there.

It’s tempting to conclude that there’s a pitched battle between AI-risk skeptics and AI-risk believers. In reality, they might not disagree as profoundly as you would think.

Facebook’s chief AI scientist Yann LeCun, for example, is a prominent voice on the skeptical side. But while he argues we shouldn’t fear AI, he still believes we ought to have people working on, and thinking about, AI safety. “Even if the risk of an A.I. uprising is very unlikely and very far in the future, we still need to think about it, design precautionary measures, and establish guidelines,” he writes.

That’s not to say there’s an expert consensus here — far from it . There is substantial disagreement about which approaches seem likeliest to bring us to general AI, which approaches seem likeliest to bring us to safe general AI, and how soon we need to worry about any of this.

Many experts are wary that others are overselling their field, and dooming it when the hype runs out . But that disagreement shouldn’t obscure a growing common ground; these are possibilities worth thinking about, investing in, and researching, so we have guidelines when the moment comes that they’re needed.

5) Why couldn’t we just shut off a computer if it got too powerful?

A smart AI could predict that we’d want to turn it off if it made us nervous. So it would try hard not to make us nervous, because doing so wouldn’t help it accomplish its goals. If asked what its intentions are, or what it’s working on, it would attempt to evaluate which responses are least likely to get it shut off, and answer with those. If it wasn’t competent enough to do that, it might pretend to be even dumber than it was — anticipating that researchers would give it more time, computing resources, and training data.

So we might not know when it’s the right moment to shut off a computer.

We also might do things that make it impossible to shut off the computer later, even if we realize eventually that it’s a good idea. For example, many AI systems could have access to the internet, which is a rich source of training data and which they’d need if they’re to make money for their creators (for example, on the stock market, where more than half of trading is done by fast-reacting AI algorithms).

But with internet access, an AI could email copies of itself somewhere where they’ll be downloaded and read, or hack vulnerable systems elsewhere. Shutting off any one computer wouldn’t help.

In that case, isn’t it a terrible idea to let any AI system — even one which doesn’t seem powerful enough to be dangerous — have access to the internet? Probably. But that doesn’t mean it won’t continue to happen. AI researchers want to make their AI systems more capable — that’s what makes them more scientifically interesting and more profitable. It’s not clear that the many incentives to make your systems powerful and use them online will suddenly change once systems become powerful enough to be dangerous.

So far, we’ve mostly talked about the technical challenges of AI. But from here forward, it’s necessary to veer more into the politics. Since AI systems enable incredible things, there will be lots of different actors working on such systems.

There will likely be startups, established tech companies like Google (Alphabet’s recently acquired startup DeepMind is frequently mentioned as an AI frontrunner), and organizations like the Elon Musk-founded OpenAI, which recently transitioned to a hybrid for-profit/non-profit structure.

There will be governments — Russia’s Vladimir Putin has expressed an interest in AI , and China has made big investments . Some of them will presumably be cautious and employ safety measures, including keeping their AI off the internet. But in a scenario like this one, we’re at the mercy of the least cautious actor , whoever they may be.

That’s part of what makes AI hard: Even if we know how to take appropriate precautions (and right now we don’t), we also need to figure out how to ensure that all would-be AI programmers are motivated to take those precautions and have the tools to implement them correctly.

6) What are we doing right now to avoid an AI apocalypse?

“It could be said that public policy on AGI [artificial general intelligence] does not exist,” concluded a paper in 2018 reviewing the state of the field .

The truth is that technical work on promising approaches is getting done, but there’s shockingly little in the way of policy planning, international collaboration, or public-private partnerships. In fact, much of the work is being done by only a handful of organizations, and it has been estimated that around 50 people in the world work full time on technical AI safety.

Bostrom’s Future of Humanity Institute has published a research agenda for AI governance : the study of “devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.” It has published research on the risk of malicious uses of AI , on the context of China’s AI strategy, and on artificial intelligence and international security .

The longest-established organization working on technical AI safety is the Machine Intelligence Research Institute (MIRI), which prioritizes research into designing highly reliable agents — artificial intelligence programs whose behavior we can predict well enough to be confident they’re safe. (Disclosure: MIRI is a nonprofit and I donated to its work in 2017-2019.)

The Elon Musk-founded OpenAI is a very new organization, less than three years old. But researchers there are active contributors to both AI safety and AI capabilities research. A research agenda in 2016 spelled out “ concrete open technical problems relating to accident prevention in machine learning systems,” and researchers have since advanced some approaches to safe AI systems .

Alphabet’s DeepMind, a leader in this field, has a safety team and a technical research agenda outlined here . “Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe ,” it concludes, outlining an approach with an emphasis on specification (designing goals well), robustness (designing systems that perform within safe limits under volatile conditions), and assurance (monitoring systems and understanding what they’re doing).

There are also lots of people working on more present-day AI ethics problems: algorithmic bias , robustness of modern machine-learning algorithms to small changes, and transparency and interpretability of neural nets , to name just a few. Some of that research could potentially be valuable for preventing destructive scenarios.

But on the whole, the state of the field is a little bit as if almost all climate change researchers were focused on managing the droughts, wildfires, and famines we’re already facing today, with only a tiny skeleton team dedicated to forecasting the future and 50 or so researchers who work full time on coming up with a plan to turn things around.

Not every organization with a major AI department has a safety team at all, and some of them have safety teams focused only on algorithmic fairness and not on the risks from advanced systems. The US government doesn’t have a department for AI.

The field still has lots of open questions — many of which might make AI look much scarier, or much less so — which no one has dug into in depth.

7) Is this really likelier to kill us all than, say, climate change?

It sometimes seems like we’re facing dangers from all angles in the 21st century. Both climate change and future AI developments are likely to be transformative forces acting on our world.

Our predictions about climate change are more confident, both for better and for worse. We have a clearer understanding of the risks the planet will face, and we can estimate the costs to human civilization. They are projected to be enormous, risking potentially hundreds of millions of lives. The ones who will suffer most will be low-income people in developing countries ; the wealthy will find it easier to adapt. We also have a clearer understanding of the policies we need to enact to address climate change than we do with AI.

There’s intense disagreement in the field on timelines for critical advances in AI. While AI safety experts agree on many features of the safety problem, they’re still making the case to research teams in their own field, and they disagree on some of the details. There’s substantial disagreement on how badly it could go, and on how likely it is to go badly. There are only a few people who work full time on AI forecasting. One of the things current researchers are trying to nail down is where their models diverge and why disagreements remain about what safe approaches will look like.

Most experts in the AI field think it poses a much larger risk of total human extinction than climate change, since analysts of existential risks to humanity think that climate change, while catastrophic, is unlikely to lead to human extinction . But many others primarily emphasize our uncertainty — and emphasize that when we’re working rapidly toward powerful technology about which there are still many unanswered questions, the smart step is to start the research now.

8) Is there a possibility that AI can be benevolent?

AI safety researchers emphasize that we shouldn’t assume AI systems will be benevolent by default . They’ll have the goals that their training environment set them up for, and no doubt this will fail to encapsulate the whole of human values.

When the AI gets smarter, might it figure out morality by itself? Again, researchers emphasize that it won’t. It’s not really a matter of “figuring out” — the AI will understand just fine that humans actually value love and fulfillment and happiness, and not just the number associated with Google on the New York Stock Exchange. But the AI’s values will remain anchored to whatever goal system it was initially built around, which means it won’t suddenly become aligned with human values if it wasn’t designed that way to start with.

Of course, we can build AI systems that are aligned with human values, or at least that humans can safely work with. That is ultimately what almost every organization with an artificial general intelligence division is trying to do. Success with AI could give us access to decades or centuries of technological innovation all at once.

“If we’re successful, we believe this will be one of the most important and widely beneficial scientific advances ever made,” writes the introduction to Alphabet’s DeepMind . “From climate change to the need for radically improved healthcare, too many problems suffer from painfully slow progress, their complexity overwhelming our ability to find solutions. With AI as a multiplier for human ingenuity, those solutions will come into reach.”

So, yes, AI can share our values — and transform our world for the good. We just need to solve a very hard engineering problem first.

9) I just really want to know: how worried should we be?

To people who think the worrying is premature and the risks overblown, AI safety is competing with other priorities that sound, well, a bit less sci-fi — and it’s not clear why AI should take precedence. To people who think the risks described are real and substantial, it’s outrageous that we’re dedicating so few resources to working on them.

While machine-learning researchers are right to be wary of hype, it’s also hard to avoid the fact that they’re accomplishing some impressive, surprising things using very generalizable techniques, and that it doesn’t seem that all the low-hanging fruit has been picked.

AI looks increasingly like a technology that will change the world when it arrives. Researchers across many major AI organizations tell us it will be like launching a rocket : something we have to get right before we hit “go.” So it seems urgent to get to work learning rocketry. Whether or not humanity should be afraid, we should definitely be doing our homework.

Why AI poses an existential danger to humanity

ILLUSTRATION: THE GLOBE AND MAIL. SOURCES: PUBLIC DOMAIN/GETTY IMAGES

Yuval Noah Harari’s latest book is Nexus: A Brief History of Information Networks from the Stone Age to AI , from which this essay has been adapted.

Many experts warn that the rise of AI might result in the collapse of human civilization, or even in the extinction of the human species. In a 2023 survey of 2,778 AI researchers, more than a third gave at least a 10-per-cent chance to advanced AI leading to outcomes as bad as human extinction. In 2023 close to 30 governments – including those of China, the United States, and the U.K. – signed the Bletchley Declaration on AI, which acknowledged that “there is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.”

To some people, these warnings sound like over-the-top jeremiads. Every time a powerful new technology has emerged, anxieties arose that it might bring about the apocalypse. For example, as the Industrial Revolution unfolded many people feared that steam engines and telegraphs would destroy our societies and our well-being. But the machines ended up producing the most affluent societies in history. Most people today enjoy far better living conditions than their ancestors in the 18th century. AI enthusiasts such as Marc Andreessen and Ray Kurzweil promise that intelligent machines will prove even more beneficial than their industrial predecessors. They argue that thanks to AI, humans will enjoy much better health care, education and other services, and AI will even help save the ecosystem from collapse.

Unfortunately, a closer look at history reveals that humans actually have good reasons to fear powerful new technologies. Even if in the end the positives of these technologies outweigh their negatives, getting to that happy ending usually involves a lot of trials and tribulations. Novel technology often leads to historical disasters, not because the technology is inherently bad, but because it takes time for humans to learn how to use it wisely.

The Industrial Revolution is a prime example. When industrial technology began spreading globally in the 19th century, it upended traditional economic, social and political structures and opened the way to create entirely new societies, which were potentially more affluent and peaceful. However, learning how to build benign industrial societies was far from straightforward and involved many costly experiments and hundreds of millions of victims.

One costly experiment was modern imperialism. The Industrial Revolution originated in Britain in the late 18th century. During the 19th century industrial technologies and production methods were adopted in other European countries ranging from Belgium to Russia, as well as in the United States and Japan. Imperialist thinkers, politicians and parties in these industrial heartlands concluded that the only viable industrial society was an empire. The argument was that unlike traditional agrarian societies, the novel industrial societies relied much more on foreign markets and foreign raw materials, and only an empire could satisfy these unprecedented appetites. Imperialists feared that countries that industrialized but failed to conquer any colonies would be shut out from essential raw materials and markets by more ruthless competitors. Some imperialists argued that acquiring colonies was not just essential for the survival of their own state but beneficial for the rest of humanity, too. They claimed empires alone could spread the benefits of the new technologies to the so-called undeveloped world.

Consequently, industrial countries such as Britain and Russia that already had empires greatly expanded them, whereas countries like the United States, Japan, Italy and Belgium set out to build them. Equipped with mass-produced rifles and artillery, conveyed by steam power, and commanded by telegraph, the armies of industry swept the globe from New Zealand to Korea, and from Somalia to Turkmenistan. Millions of Indigenous people saw their traditional way of life trampled under the wheels of these industrial armies. It took more than a century of misery before most people realized that the industrial empires were a terrible idea and that there were better ways to build an industrial society and secure its necessary raw materials and markets.

Stalinism and Nazism were also extremely costly experiments in how to construct industrial societies. Leaders such as Stalin and Hitler argued that the Industrial Revolution had unleashed immense powers that only totalitarianism could rein in and exploit to the full. They pointed to the First World War – the first “total war” in history – as proof that survival in the industrial world demanded totalitarian control of all aspects of politics, society and the economy. On the positive side, they also claimed that the Industrial Revolution was like a furnace that melts all previous social structures with their human imperfections and weaknesses and provides the opportunity to forge perfect new societies inhabited by new unalloyed superhumans.

On the way to creating the perfect industrial society, Stalinists and Nazis learned how to industrially murder millions of people. Trains, barbed wires and telegraphed orders were linked to create an unprecedented killing machine. Looking back, most people today are horrified by what the Stalinists and Nazis perpetrated, but at the time their audacious visions mesmerized millions. In 1940 it was easy to believe that Stalin and Hitler were the model for harnessing industrial technology, whereas the dithering liberal democracies were on their way to the dustbin of history.

As the Industrial Revolution unfolded, many people feared that steam engines and telegraphs would destroy our societies and our well-being. GETTY IMAGES

The very existence of competing recipes for building industrial societies led to costly clashes. The two world wars and the Cold War can be seen as a debate about the proper way to go about it, in which all sides learned from each other, while experimenting with novel industrial methods to wage war. In the course of this debate, tens of millions died and humankind came perilously close to annihilating itself.

On top of all these other catastrophes, the Industrial Revolution also undermined the global ecological balance, causing a wave of extinctions. In the early 21st century up to 58,000 species are believed to go extinct every year, and total vertebrate populations have declined by 60 per cent between 1970 and 2014. The survival of human civilization, too, is under threat. Because we still seem unable to build an industrial society that is also ecologically sustainable, the vaunted prosperity of the present human generation comes at a terrible cost to other sentient beings and to future human generations. Maybe we’ll eventually find a way – perhaps with the help of AI – to create ecologically sustainable industrial societies, but until that day the jury on the Industrial Revolution is still out.

If we ignore for a moment the continuing damage to the ecosystem, we can nevertheless try to comfort ourselves with the thought that eventually humans did learn how to build more benevolent industrial societies. Imperial conquests, world wars, genocides and totalitarian regimes were woeful experiments that taught humans how not to do it. By the end of the 20th century, some might argue, humanity got it more or less right.

Yet even so the message to the 21st century is bleak. If it took humanity so many terrible lessons to learn how to manage steam power and telegraphs, what would it cost to learn to manage AI? AI is potentially far more powerful and unruly than steam engines, telegraphs and every previous technology, because it is the first technology in history that can make decisions and create new ideas by itself. AI isn’t a tool – it is an agent. Machine guns and atom bombs replaced human muscles in the act of killing, but they couldn’t replace human brains in deciding whom to kill. Little Boy – the bomb dropped on Hiroshima – exploded with a force of 12,500 tons of TNT, but when it came to brainpower, Little Boy was a dud. It couldn’t decide anything.

It is different with AI. In terms of intelligence, AIs far surpass not just atom bombs but also all previous information technology, such as clay tablets, printing presses and radio sets. Clay tablets stored information about taxes, but they couldn’t decide by themselves how much tax to levy, nor could they invent an entirely new tax. Printing presses copied information such as the Bible, but they couldn’t decide which texts to include in the Bible, nor could they write new commentaries on the holy book. Radio sets disseminated information such as political speeches and symphonies, but they couldn’t decide which speeches or symphonies to broadcast, nor could they compose them. AIs can do all these things, and they can even invent new weapons of mass destruction – from superpowerful nuclear bombs to superdeadly pandemics. While printing presses and radio sets were passive tools in human hands, AIs are already becoming active agents that might escape our control and understanding and that can take initiatives in shaping society, culture and history.

Perhaps we will eventually find ways to keep AIs under control and deploy them for the benefit of humanity. But would we need to go through another cycle of global empires, totalitarian regimes and world wars in order to figure out how to use AI benevolently? Since the technologies of the 21st century are far more powerful – and potentially far more destructive – than those of the 20th century, we have less room for error. In the 20th century, we can say that humanity got a C minus in the lesson on using industrial technology. Just enough to pass. In the 21st century, the bar is set much higher. We must do better this time.

More damaging than tornadoes, hail may finally get the scientific attention it deserves

With drones, mobile radars, and 3D printers, the first major field campaign in 45 years aims to bring hail research “into the 21st century.”

By Hannah Richter

[Photo: dark storm clouds approaching a field]

Hurricanes get names. Tornadoes get blockbuster movies. When it comes to extreme weather, those storms “tend to get a disproportionate share of the attention,” says Victor Gensini, a meteorologist at Northern Illinois University. Meanwhile hailstorms fly under the radar, literally and metaphorically.

Yet hailstorms total cars, destroy roofs, and devastate crops, costing the United States $46 billion in 2023—representing 60% to 80% of the losses from hail, tornadoes, wind, and lightning-caused fires combined, according to the Insurance Institute for Business & Home Safety (IBHS). So it’s no surprise U.S. hail scientists are frustrated that their last major research campaign took place in the late 1970s. “It’s just this big gaping hole,” says Becky Adams-Selin, a hail scientist at Atmospheric and Environmental Research.

A project called the In-situ Collaborative Experiment for the Collection of Hail in the Plains—ICECHIP, for short—is set to change that. In August, the National Science Foundation approved more than $11 million in funding for the research effort, which will be the largest ever international campaign for studying hail. Some 100 researchers from four countries and 11 states are planning fieldwork in May and June of 2025 in the U.S. Central Plains states and along Colorado and Wyoming’s Front Range—two of Earth’s “hail alleys,” home to powerful thunderstorms that blast raindrops high in the atmosphere, where they freeze and grow layer by layer into hailstones. Led by Adams-Selin, ICECHIP researchers hope to gather data that could improve hailstorm prediction and help answer fundamental questions, such as how climate change will affect the frequency of hailstorms and the size of the stones.

“What really excites me is the prospect of having high-quality observations using technologies that just didn’t exist when we had our previous [campaign],” says John Allen, a meteorologist at Central Michigan University and one of the lead scientists on ICECHIP.

One of the most pressing questions both scientifically and economically is how to better predict hailstorms and hailstones, which can range in size from pebbles to grapefruits. By studying the environmental factors that coincide with hailstorms, such as temperature, humidity, and wind, ICECHIP researchers hope that one day weather agencies will be able to provide hail watches and warnings, just as they do for other severe weather events. Residents in hail-prone states might be prompted to stay indoors and park their cars in garages, for example.

The data will also help gauge the effect of a warming planet on hail. Last month, a modeling study led by Gensini and published in Climate and Atmospheric Science found that, overall, hailstorms should become less common, because warming air will tend to melt more stones as they fall. But the study also predicted warming will drive up the frequency of the strongest thunderstorms, where updrafts suspend hailstones for longer, allowing them to grow larger. Field measurements will help test the relationships between temperature, storm strength, and hailstone size.

The campaign will rely on technologies new to hail research, such as high-speed videography, drone imagery, and mobile radar. Researchers will also use balloons to deliver 3D-printed spherical sensors called hailsondes into storms to mimic the movement of hailstones. A fleet of 24 ground-based sensors will measure the energy of falling hailstones 500 times per second, and 3D laser scanners will record collected hailstones’ intricate shapes. Back in the lab, ICECHIP researchers will analyze hailstones’ ice layers to determine the temperature and altitude at which they formed and, in turn, the paths they took through storms.

Such tools will bring hail research “into the 21st century,” says Julian Brimelow, a meteorologist and executive director of Canada’s Northern Hail Project, a research group involved in ICECHIP. But some technologies from the 1970s will remain, such as hail pads: pieces of foam topped with aluminum foil to catch hailstones and provide outlines of their sizes and shapes.

[Photo: a researcher holds hailstones, with a tennis ball for size comparison]

The results should help society cope with an often overlooked threat. The impact energy of hail rises steeply with size; a common 5-centimeter hailstone falls with about three times as much energy as a penny dropped from the Empire State Building. But stones nearly as big as volleyballs have been documented. Although hail casualties are extremely rare, in 2022 a toddler in northern Spain died after being struck on the head during a particularly violent hailstorm. Property damage is far more common; it’s not unusual for people in hail-prone states, like Nebraska, Kansas, and Oklahoma, to replace their roofs every 5 years, Adams-Selin says.
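That comparison is easy to sanity-check. Here is a back-of-the-envelope sketch (our illustration, not an ICECHIP calculation), assuming a spherical stone of solid ice, sea-level air density and a drag coefficient of 0.5; real hailstones are lumpier and often less dense, so treat the numbers as rough:

    import math

    # Assumed constants, not ICECHIP's values
    ICE_DENSITY = 917.0  # kg/m^3 (solid ice; real hail is often less dense)
    AIR_DENSITY = 1.2    # kg/m^3 near sea level
    DRAG_COEFF = 0.5     # typical for a sphere
    G = 9.81             # m/s^2

    def hailstone_impact_energy(diameter_m):
        """Kinetic energy (J) of a spherical hailstone at terminal velocity."""
        r = diameter_m / 2
        mass = ICE_DENSITY * (4 / 3) * math.pi * r ** 3
        area = math.pi * r ** 2
        # Terminal velocity: gravity balances drag, m*g = 0.5*rho*Cd*A*v^2
        v = math.sqrt(2 * mass * G / (AIR_DENSITY * DRAG_COEFF * area))
        return 0.5 * mass * v ** 2

    hail_j = hailstone_impact_energy(0.05)  # a common 5 cm stone
    penny_j = 0.0025 * G * 381              # 2.5 g penny, 381 m drop, no drag

    print(f"5 cm hailstone: ~{hail_j:.0f} J")    # ~30 J
    print(f"Penny from 381 m: ~{penny_j:.0f} J")  # ~9 J
    print(f"Ratio: ~{hail_j / penny_j:.1f}x")     # ~3x, matching the claim

Because mass grows with the cube of diameter and terminal speed with its square root, impact energy scales roughly as the fourth power of diameter: a 10-centimeter stone carries on the order of 16 times the energy of a 5-centimeter one.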

Hail threatens not only homes and vehicles but also agriculture. Even small hailstones can threaten crops, especially when windstorms turn them into projectiles that can strip plants of their leaves and bend their stalks, stunting their growth and rendering them susceptible to disease. There are counties in Nebraska where farmers’ number one concern is not drought, flooding, or wind damage, but hail, says Gensini, who is also a lead scientist on ICECHIP. Some farmers even take out hail insurance.

Solar farms are a newer vulnerability; they are often built in flat, sunny areas throughout the Central Plains. While all solar panels are required to meet impact-testing standards for hailstones as big as billiard balls, larger stones can fracture the glass. Hail losses of all kinds are trending upward, although it isn’t clear whether that is due to a climate-driven rise in large-hailstone events or an expansion of property and infrastructure into previously uninhabited areas.

Ian Giammanco, the lead research meteorologist at IBHS and a member of ICECHIP, has been testing more durable and flexible roofing materials that can withstand hail better. Incorporating such materials into building codes would limit costly repairs and rises in insurance premiums. “The dollars have just become too big to ignore now,” he says. “I think that’s why you’ve seen a renaissance in hail research.”


When A.I.’s Output Is a Threat to A.I. Itself

As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results.

By Aatish Bhatia

Aatish Bhatia interviewed A.I. researchers, studied research papers and fed an A.I. system its own output.

The internet is becoming awash in words and images generated by artificial intelligence.

Sam Altman, OpenAI’s chief executive, wrote in February that the company generated about 100 billion words per day — a million novels’ worth of text, every day, an unknown share of which finds its way onto the internet.

A.I.-generated text may show up as a restaurant review, a dating profile or a social media post. And it may show up as a news article, too: NewsGuard, a group that tracks online misinformation, recently identified over a thousand websites that churn out error-prone A.I.-generated news articles.

In reality, with no foolproof methods to detect this kind of content, much will simply remain undetected.

All this A.I.-generated information can make it harder for us to know what’s real. And it also poses a problem for A.I. companies. As they trawl the web for new data to train their next models on — an increasingly challenging task — they’re likely to ingest some of their own A.I.-generated content, creating an unintentional feedback loop in which what was once the output from one A.I. becomes the input for another.

In the long run, this cycle may pose a threat to A.I. itself. Research has shown that when generative A.I. is trained on a lot of its own output, it can get a lot worse.

Here’s a simple illustration of what happens when an A.I. system is trained on its own output, over and over again:

This is part of a data set of 60,000 handwritten digits.

When we trained an A.I. to mimic those digits, its output looked like this.

This new set was made by an A.I. trained on the previous A.I.-generated digits. What happens if this process continues?

After 20 generations of training new A.I.s on their predecessors’ output, the digits blur and start to erode.

After 30 generations, they converge into a single shape.

While this is a simplified example, it illustrates a problem on the horizon.

Imagine a medical-advice chatbot that lists fewer diseases that match your symptoms, because it was trained on a narrower spectrum of medical knowledge generated by previous chatbots. Or an A.I. history tutor that ingests A.I.-generated propaganda and can no longer separate fact from fiction.

Just as a copy of a copy can drift away from the original, when generative A.I. is trained on its own content, its output can also drift away from reality, growing further apart from the original data that it was intended to imitate.

In a paper published last month in the journal Nature, a group of researchers in Britain and Canada showed how this process results in a narrower range of A.I. output over time — an early stage of what they called “model collapse.”

The eroding digits we just saw show this collapse. When untethered from human input, the A.I. output dropped in quality (the digits became blurry) and in diversity (they grew similar).

[Figure: How an A.I. that draws digits “collapses” after being trained on its own output. Panels show sample digits (“6,” “8,” “9”) from the original handwritten data, the initial A.I. output, and the output after 10, 20 and 30 generations.]

If only some of the training data were A.I.-generated, the decline would be slower or more subtle. But it would still occur, researchers say, unless the synthetic data was complemented with a lot of new, real data.
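A minimal way to see both the decline and the fix is to fit a simple model to data, sample from it, refit on the samples, and repeat. The sketch below (our toy illustration, not the Nature paper’s code) does this with a one-dimensional Gaussian; the real_fraction knob controls how much fresh real data is mixed back in at each generation:

    import random
    import statistics

    random.seed(0)

    def collapse_demo(generations=500, n=50, real_fraction=0.0):
        """Repeatedly refit a Gaussian to its own samples.

        real_fraction is the share of each generation's training set
        drawn from the original 'real' distribution, N(0, 1).
        """
        fit_mu, fit_sigma = 0.0, 1.0
        for _ in range(generations):
            n_real = int(n * real_fraction)
            data = [random.gauss(0.0, 1.0) for _ in range(n_real)]
            data += [random.gauss(fit_mu, fit_sigma) for _ in range(n - n_real)]
            fit_mu = statistics.fmean(data)
            fit_sigma = statistics.pstdev(data)  # ML estimate, biased slightly low
        return fit_sigma

    # With purely synthetic data, the fitted spread drifts toward zero:
    print(collapse_demo(real_fraction=0.0))  # tiny: the curve becomes a spike
    # Mixing in fresh real data each generation keeps it near the truth:
    print(collapse_demo(real_fraction=0.5))  # stays close to 1

The slight downward bias of each fit, compounded over hundreds of generations, is what drives the spread toward zero; anchoring every generation to new real data stops the drift.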

Degenerative A.I.

In one example, the researchers trained a large language model on its own sentences over and over again, asking it to complete the same prompt after each round.

When they asked the A.I. to complete a sentence that started with “To cook a turkey for Thanksgiving, you…,” at first, it responded like this:

[Figure: the model’s initial output]

Even at the outset, the A.I. “hallucinates.” But when the researchers further trained it on its own sentences, it got a lot worse…

After two generations, it started simply printing long lists.

And after four generations, it began to repeat phrases incoherently.

“The model becomes poisoned with its own projection of reality,” the researchers wrote of this phenomenon.

This problem isn’t just confined to text. Another team of researchers at Rice University studied what would happen when the kinds of A.I. that generate images are repeatedly trained on their own output — a problem that could already be occurring as A.I.-generated images flood the web.

They found that glitches and image artifacts started to build up in the A.I.’s output, eventually producing distorted images with wrinkled patterns and mangled fingers.

[Image: a grid of A.I.-generated faces showing wrinkled patterns and visual distortions]

When A.I. image models are trained on their own output, they can produce distorted images, mangled fingers or strange patterns.

A.I.-generated images by Sina Alemohammad and others.

“You’re kind of drifting into parts of the space that are like a no-fly zone,” said Richard Baraniuk, a professor who led the research on A.I. image models.

The researchers found that the only way to stave off this problem was to ensure that the A.I. was also trained on a sufficient supply of new, real data.

While selfies are certainly not in short supply on the internet, there could be categories of images where A.I. output outnumbers genuine data, they said.

For example, A.I.-generated images in the style of van Gogh could outnumber actual photographs of van Gogh paintings in A.I.’s training data, and this may lead to errors and distortions down the road. (Early signs of this problem will be hard to detect because the leading A.I. models are closed to outside scrutiny, the researchers said.)

Why collapse happens

All of these problems arise because A.I.-generated data is often a poor substitute for the real thing.

This is sometimes easy to see, like when chatbots state absurd facts or when A.I.-generated hands have too many fingers.

But the differences that lead to model collapse aren’t necessarily obvious — and they can be difficult to detect.

When generative A.I. is “trained” on vast amounts of data, what’s really happening under the hood is that it is assembling a statistical distribution — a set of probabilities that predicts the next word in a sentence, or the pixels in a picture.
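In miniature, that distribution-building step looks like the sketch below (our toy example): count which word follows which in a corpus and turn the counts into next-word probabilities. Real systems learn these probabilities with neural networks rather than count tables, but the learned object plays the same role:

    from collections import Counter, defaultdict

    corpus = ("to cook a turkey you need to thaw the turkey "
              "and to season the turkey").split()

    # Count word -> next-word transitions (a bigram table)
    transitions = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        transitions[word][nxt] += 1

    def next_word_distribution(word):
        """Turn raw counts into probabilities over the next word."""
        counts = transitions[word]
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    print(next_word_distribution("to"))
    # {'cook': 0.333..., 'thaw': 0.333..., 'season': 0.333...}
    print(next_word_distribution("the"))
    # {'turkey': 1.0}

When a model like this is re-estimated from its own samples, the rarest transitions are the first to drop out of the counts, which is the tail-thinning described below.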

For example, when we trained an A.I. to imitate handwritten digits, its output could be arranged into a statistical distribution that looks like this:

[Chart: distribution of A.I.-generated data, with examples of initial A.I. output. The distribution shown here is simplified for clarity.]

The peak of this bell-shaped curve represents the most probable A.I. output — in this case, the most typical A.I.-generated digits. The tail ends describe output that is less common.

Notice that when the model was trained on human data, it had a healthy spread of possible outputs, which you can see in the width of the curve above.

But after it was trained on its own output, this is what happened to the curve:

[Chart: distribution of A.I.-generated data after the model was trained on its own output]

It gets taller and narrower. As a result, the model becomes more and more likely to produce a smaller range of output, and the output can drift away from the original data.

Meanwhile, the tail ends of the curve — which contain the rare, unusual or surprising outcomes — fade away.

This is a telltale sign of model collapse: Rare data becomes even rarer.

If this process went unchecked, the curve would eventually become a spike. At that point, all of the digits became identical, and the model completely collapsed.

Why it matters

This doesn’t mean generative A.I. will grind to a halt anytime soon.

The companies that make these tools are aware of these problems, and they will notice if their A.I. systems start to deteriorate in quality.

But it may slow things down. As existing sources of data dry up or become contaminated with A.I. “slop,” researchers say it will become harder for newcomers to compete.

A.I.-generated words and images are already beginning to flood social media and the wider web. They’re even hiding in some of the data sets used to train A.I., the Rice researchers found.

“The web is becoming increasingly a dangerous place to look for your data,” said Sina Alemohammad, a graduate student at Rice who studied how A.I. contamination affects image models.

Big players will be affected, too. Computer scientists at N.Y.U. found that when there is a lot of A.I.-generated content in the training data, it takes more computing power to train A.I. — which translates into more energy and more money.

“Models won’t scale anymore as they should be scaling,” said Julia Kempe, the N.Y.U. professor who led this work.

The leading A.I. models already cost tens to hundreds of millions of dollars to train, and they consume staggering amounts of energy, so this can be a sizable problem.

‘A hidden danger’

Finally, there’s another threat posed by even the early stages of collapse: an erosion of diversity.

And it’s an outcome that could become more likely as companies try to avoid the glitches and “hallucinations” that often occur with A.I. data.

This is easiest to see when the data matches a form of diversity that we can visually recognize — people’s faces:

[Image: a grid of A.I.-generated faces showing variations in their poses, expressions, ages and races]

A.I. images generated by Sina Alemohammad and others.

After one generation of training on A.I. output, the A.I.-generated faces appear more similar.

This set of A.I. faces was created by the same Rice researchers who produced the distorted faces above. This time, they tweaked the model to avoid visual glitches.

This is the output after they trained a new A.I. on the previous set of faces. At first glance, it may seem like the model changes worked: The glitches are gone.

After two and three generations, the faces grew steadily more alike. After four generations, they all appeared to converge.

This drop in diversity is “a hidden danger,” Mr. Alemohammad said. “You might just ignore it and then you don’t understand it until it's too late.”

Just as with the digits, the changes are clearest when most of the data is A.I.-generated. With a more realistic mix of real and synthetic data, the decline would be more gradual.

But the problem is relevant to the real world, the researchers said, and will inevitably occur unless A.I. companies go out of their way to avoid their own output.

Related research shows that when A.I. language models are trained on their own words, their vocabulary shrinks and their sentences become less varied in their grammatical structure, a loss of “linguistic diversity.”

And studies have found that this process can amplify biases in the data and is more likely to erase data pertaining to minorities.
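One common way to quantify that kind of loss is a diversity score such as distinct-n: the fraction of n-grams in a text that are unique. A minimal sketch of the idea (ours, not the cited studies’ code):

    def distinct_n(text, n=2):
        """Share of n-grams that are unique; lower means more repetitive."""
        words = text.split()
        ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
        return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

    varied = "the cat sat while a dog ran and birds sang overhead"
    collapsed = "the cat sat the cat sat the cat sat the cat sat"

    print(distinct_n(varied))     # 1.0: every bigram is new
    print(distinct_n(collapsed))  # ~0.27: the same bigrams repeat

Tracked across generations of self-training, a falling distinct-n score is one signature of the shrinking vocabulary and grammar the studies describe.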

Perhaps the biggest takeaway of this research is that high-quality, diverse data is valuable and hard for computers to emulate.

One solution, then, is for A.I. companies to pay for this data instead of scooping it up from the internet, ensuring both human origin and high quality.

OpenAI and Google have made deals with some publishers or websites to use their data to improve A.I. (The New York Times sued OpenAI and Microsoft last year, alleging copyright infringement. OpenAI and Microsoft say their use of the content is considered fair use under copyright law.)

Better ways to detect A.I. output would also help mitigate these problems.

Google and OpenAI are working on A.I. “watermarking” tools, which introduce hidden patterns that can be used to identify A.I.-generated images and text.

But watermarking text is challenging, researchers say, because these watermarks can’t always be reliably detected and can easily be subverted (they may not survive being translated into another language, for example).
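To make the idea concrete, here is a sketch of one scheme from the research literature, a “green list” watermark in the style of Kirchenbauer et al.; it is our simplified word-level illustration, not the tools Google or OpenAI actually ship. The previous word seeds a hash that marks part of the vocabulary “green”; a watermarking generator nudges its choices toward green words, and a detector counts how many words landed on their green lists:

    import hashlib

    def is_green(prev_word, word):
        """Deterministically mark `word` green or red, seeded by the previous word."""
        digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
        return digest[0] % 2 == 0  # roughly half the vocabulary is green

    def green_fraction(text):
        """Detector: share of words that fall on their step's green list."""
        words = text.split()
        if len(words) < 2:
            return 0.0
        hits = sum(is_green(p, w) for p, w in zip(words, words[1:]))
        return hits / (len(words) - 1)

    # Ordinary text lands near 0.5 by chance; a generator that prefers
    # green words pushes the score well above 0.5, which is the signal
    # the detector tests for.
    print(green_fraction("ordinary human prose scores about one half on this test"))

This also shows why such watermarks are fragile: paraphrasing or translating the text reshuffles the word pairs and washes the signal out.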

A.I. slop is not the only reason that companies may need to be wary of synthetic data. Another problem is that there are only so many words on the internet.

Some experts estimate that the largest A.I. models have been trained on a few percent of the available pool of text on the internet. They project that these models may run out of public data to sustain their current pace of growth within a decade.

“These models are so enormous that the entire internet of images or conversations is somehow close to being not enough,” Professor Baraniuk said.

To meet their growing data needs, some companies are considering using today’s A.I. models to generate data to train tomorrow’s models . But researchers say this can lead to unintended consequences (such as the drop in quality or diversity that we saw above).

There are certain contexts where synthetic data can help A.I.s learn: for example, when output from a larger A.I. model is used to train a smaller one, or when the correct answer can be verified, like the solution to a math problem or the best strategies in games like chess or Go.

And new research suggests that when humans curate synthetic data (for example, by ranking A.I. answers and choosing the best one), it can alleviate some of the problems of collapse.

Companies are already spending a lot on curating data, Professor Kempe said, and she believes this will become even more important as they learn about the problems of synthetic data.

But for now, there’s no replacement for the real thing.

About the data

To produce the images of A.I.-generated digits, we followed a procedure outlined by researchers. We first trained a type of neural network known as a variational autoencoder using a standard data set of 60,000 handwritten digits.

We then trained a new neural network using only the A.I.-generated digits produced by the previous neural network, and repeated this process in a loop 30 times.

To create the statistical distributions of A.I. output, we used each generation’s neural network to create 10,000 drawings of digits. We then used the first neural network (the one that was trained on the original handwritten digits) to encode these drawings as a set of numbers, known as a “latent space” encoding. This allowed us to quantitatively compare the output of different generations of neural networks. For simplicity, we used the average value of this latent space encoding to generate the statistical distributions shown in the article.
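For readers who want to reproduce the shape of this experiment, here is a condensed PyTorch sketch of the generational loop; it is our reconstruction from the description above, not the code used for the article, and the architecture and hyperparameters are guesses kept small for speed:

    import torch
    import torch.nn.functional as F
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    from torchvision import datasets

    class VAE(nn.Module):
        def __init__(self, latent_dim=8):
            super().__init__()
            self.enc = nn.Linear(784, 256)
            self.mu = nn.Linear(256, latent_dim)
            self.logvar = nn.Linear(256, latent_dim)
            self.dec1 = nn.Linear(latent_dim, 256)
            self.dec2 = nn.Linear(256, 784)

        def decode(self, z):
            return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

        def forward(self, x):
            h = F.relu(self.enc(x))
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
            return self.decode(z), mu, logvar

    def train(model, images, epochs=5):
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loader = DataLoader(TensorDataset(images), batch_size=128, shuffle=True)
        for _ in range(epochs):
            for (x,) in loader:
                recon, mu, logvar = model(x)
                # Standard VAE loss: reconstruction error plus KL divergence
                bce = F.binary_cross_entropy(recon, x, reduction="sum")
                kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
                opt.zero_grad()
                (bce + kld).backward()
                opt.step()

    def sample(model, n):
        with torch.no_grad():
            return model.decode(torch.randn(n, model.mu.out_features))

    # Generation 0 trains on real MNIST digits, flattened to [0, 1] vectors
    mnist = datasets.MNIST(".", train=True, download=True)
    data = mnist.data[:10000].float().div(255).view(-1, 784)

    for generation in range(30):
        model = VAE()
        train(model, data)
        data = sample(model, 10000)  # the next generation sees only A.I. output

To mirror the article’s measurements, keep the first generation’s model around and use its encoder to embed each generation’s samples before comparing their distributions.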


COMMENTS

  1. Graded Essay-Science Is A Threat To Humanity

    Graded Essay-Science is a Threat to Humanity - Free download as Word Doc (.doc / .docx), PDF File (.pdf), Text File (.txt) or read online for free. Science has brought many benefits to humanity such as medical advancements that save and improve lives, and easier and more effective communication. However, science can also threaten humanity if misused or overrelied on.

  2. The Power Of Science And The Danger Of Scientism

    The problem with Pinker's essay is that his main purpose is to convince friends in the humanities (history, literature, etc.) that adoption of methods from the science side of the campus poses no ...

  3. Science Is A Threat To Humanity Essay Example

    Table of contents. Science Is a Threat to Humanity 1st - Opposition. Assalamualaikum and a very good day to the wise and honorable adjudicators, the alert and punctual timekeeper, my fellow teammates, the misleading government team, and MOTH. Before I start, I would like to refute the definition given by the government team.

  4. Science is not the Enemy of the Humanities

    In this conception, science is of a piece with philosophy, reason, and Enlightenment humanism. It is distinguished by an explicit commitment to two ideals, and it is these that scientism seeks to ...

  5. Could science destroy the world? These scholars want to save us ...

    The only true existential threat, she says, is a familiar one: a global nuclear war. Otherwise, "There is nothing on the horizon." Harvard University psychologist Steven Pinker calls existential risks a "useless category" and warns that "Frankensteinian fantasies" could distract from real, solvable threats such as climate change and nuclear war.

  6. The Problem with Scientism

    Massimo Pigliucci. January 25, 2018. Science is unquestionably the most powerful approach humanity has developed so far to the understanding of the natural world. There is little point in arguing about the spectacular successes of fundamental physics, evolutionary and molecular biology, and countless other fields of scientific inquiry.

  7. Here are the Top 10 threats to the survival of civilization

    Here are the Top 10 threats to the survival of civilization. From aliens and asteroids to pandemics, war and climate change, life as we know it is at risk. Civilization's downfall could be ...

  8. A Scientist's Warning to humanity on human population growth

    A Scientist's Warning to humanity on human population growth. One needs only to peruse the daily news to be aware that humanity is on a dangerous and challenging trajectory. This essay explores the prospect of adopting a science-based framework for confronting these potentially adverse prospects. It explores a perspective based on relevant ...

  9. When scientific advances can both help and hurt humanity

    As a reflection of how pressing this question is, on Jan. 4, the U.S. National Academies for Science, Engineering, and Medicine met to discuss how or if sensitive information arising in the life ...

  10. Science Is a Threat to Humanity

    According to Longman Dictionary of Contemporary English, science means knowledge about the world, especially based on examination and testing, and on facts that can be proved. Threat means someone or something that is regarded as a possible danger. Lastly, humanity means the state of being human and having qualities and rights that all people ...

  11. Science denial among the greatest risks to humanity, new 'call to arms

    Dr Hewson is one of a number of academics involved in the Commission for the Human Future, which has released a new report. The report lists the threats faced by humanity, including science denial ...

  12. Why climate change is still the greatest threat to human health

    Air pollution is detrimental to human health. Malnutrition is linked to a variety of illnesses, including heart disease, cancer, and diabetes. It can also increase the risk of stunting, or ...

  13. The five biggest threats to human existence

    1. Nuclear war. While only two nuclear weapons have been used in war so far - at Hiroshima and Nagasaki in World War II - and nuclear stockpiles are down from the peak they reached in ...

  14. Responding to the Climate Threat: Essays on Humanity's Greatest

    Here, four scholars, each with decades of research on the climate threat, take on the task of explaining our current understanding of the climate threat and what can be done about it, in lay language―importantly, without losing critical aspects of the natural and social science. In a series of essays, published during the 2020 presidential ...

  15. Climate Change, Health and Existential Risks to Civilization: A

    1.1. Climate Change Science, Risk and the 2015 Paris Agreement. The scientific knowledge that gases, accumulating mainly from the burning of fossil fuels and the clearing of forests, add to the natural "greenhouse effect" has been known since the 19th century. In 1957 scientists observed "human beings are now carrying out a large-scale geophysical experiment of a kind which could not ...

  16. The case that AI threatens humanity, explained in 500 words

    The case that AI threatens humanity, explained in 500 words. The short version of a big conversation about the dangers of emerging technology. by Kelsey Piper. Feb 12, 2019, 8:10 AM PST. Javier ...

  17. AI Is an Existential Threat—Just Not the Way You Think

    On supporting science journalism. If you're enjoying this article, consider supporting our award-winning journalism by subscribing. By purchasing a subscription you are helping to ensure the ...

  18. AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype

    Opinion. August 12, 2023. 4 min read. AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype. Effective regulation of AI needs grounded science that investigates real harms, not ...

  19. Climate change: a threat to human wellbeing and health of the planet

    "The scientific evidence is unequivocal: climate change is a threat to human wellbeing and the health of the planet. Any further delay in concerted global action will miss a brief and rapidly closing window to secure a liveable future," said Hans-Otto Pörtner. For more information, please contact:

  20. AI Is Not Actually an Existential Threat to Humanity, Scientists Say

    Notable figures, including the late Stephen Hawking, have expressed fear about how future AI could threaten humanity. To address this concern we asked 11 experts in AI and Computer Science "Is AI an existential threat to humanity?" There was an 82 percent consensus that it is not an existential threat. Here is what we found out.

  21. How scientists found out that climate change is real and ...

    "Human beings are now carrying out a large-scale geophysical experiment of a kind that could not have happened in the past nor be reproduced in the future," Revelle and Suess wrote in the paper.

  22. Climate Change 'Biggest Threat Modern Humans Have Ever Faced', World

    Climate change is a "crisis multiplier" that has profound implications for international peace and stability, Secretary-General António Guterres told the Security Council today, amid calls for deep partnerships within and beyond the United Nations system to blunt its acute effects on food security, natural resources and migration patterns fuelling tensions across countries and regions.

  23. The case for taking AI seriously as a threat to humanity

    AI systems determine what you'll see in a Google search or in your Facebook Newsfeed. They compose music and write articles that, at a glance, read as if a human wrote them. They play strategy ...

  24. Opinion: Why AI poses an existential danger to humanity

    Whereas the threat of nuclear weapons is obvious for all, it is difficult to grasp why AIs are so dangerous. The history of the Industrial Revolution can help us understand the dangers inherent in ...

  25. More damaging than tornadoes, hail may finally get the ...

    In August, the National Science Foundation approved more than $11 million in funding for the research effort, which will be the largest ever international campaign for studying hail. Some 100 researchers from four countries and 11 states are planning fieldwork in May and June of 2025 in the U.S. Central Plains states and along Colorado and ...

  26. When A.I.'s Output Is a Threat to A.I. Itself

    "You're kind of drifting into parts of the space that are like a no-fly zone," said Richard Baraniuk, a professor who led the research on A.I. image models.. The researchers found that the ...