Nick Bostrom

Future of Humanity Institute

Faculty of Philosophy & James Martin 21st Century School

Oxford University

 

[Complete draft circulated (2007)]

[Published in New Waves in Philosophy of Technology, eds. Jan-Kyrre Berg Olsen, Evan Selinger, & Soren Riis (New York: Palgrave Macmillan, 2009): 186-216]

[Reprinted in the journal Geopolitics, History, and International Relations, Vol. 1, No. 2 (2009): 41-78]


 

 

The future of humanity is often viewed as a topic for idle speculation.  Yet our beliefs and assumptions on this subject matter shape decisions in both our personal lives and public policy – decisions that have very real and sometimes unfortunate consequences.  It is therefore practically important to try to develop a realistic mode of futuristic thought about big picture questions for humanity.  This paper sketches an overview of some recent attempts in this direction, and it offers a brief discussion of four families of scenarios for humanity’s future: extinction, recurrent collapse, plateau, and posthumanity.

 

 

In one sense, the future of humanity comprises everything that will ever happen to any human being, including what you will have for breakfast next Thursday and all the scientific discoveries that will be made next year.  In that sense, it is hardly reasonable to think of the future of humanity as a topic: it is too big and too diverse to be addressed as a whole in a single essay, monograph, or even 100-volume book series.  It is made into a topic by way of abstraction.  We abstract from details and short-term fluctuations and developments that affect only some limited aspect of our lives.  A discussion about the future of humanity is about how the important fundamental features of the human condition may change or remain constant in the long run.

              What features of the human condition are fundamental and important?  On this there can be reasonable disagreement.  Nonetheless, some features qualify by almost any standard.  For example, whether and when Earth-originating life will go extinct, whether it will colonize the galaxy, whether human biology will be fundamentally transformed to make us posthuman, whether machine intelligence will surpass biological intelligence, whether population size will explode, and whether quality of life will radically improve or deteriorate: these are all important fundamental questions about the future of humanity.  Less fundamental questions – for instance, about methodologies or specific technology projections – are also relevant insofar as they inform our views about more fundamental parameters.

              Traditionally, the future of humanity has been a topic for theology.  All the major religions have teachings about the ultimate destiny of humanity or the end of the world.   Eschatological themes have also been explored by big-name philosophers such as Hegel, Kant, and Marx.  In more recent times the literary genre of science fiction has continued the tradition.  Very often, the future has served as a projection screen for our hopes and fears; or as a stage setting for dramatic entertainment, morality tales, or satire of tendencies in contemporary society; or as a banner for ideological mobilization.  It is relatively rare for humanity’s future to be taken seriously as a subject matter on which it is important to try to have factually correct beliefs.  There is nothing wrong with exploiting the symbolic and literary affordances of an unknown future, just as there is nothing wrong with fantasizing about imaginary countries populated by dragons and wizards.  Yet it is important to attempt (as best we can) to distinguish futuristic scenarios put forward for their symbolic significance or entertainment value from speculations that are meant to be evaluated on the basis of literal plausibility.  Only the latter form of “realistic” futuristic thought will be considered in this paper.

              We need realistic pictures of what the future might bring in order to make sound decisions.  Increasingly, we need realistic pictures not only of our personal or local near-term futures, but also of remoter global futures.  Because of our expanded technological powers, some human activities now have significant global impacts.  The scale of human social organization has also grown, creating new opportunities for coordination and action, and there are many institutions and individuals who either do consider, or claim to consider, or ought to consider, possible long-term global impacts of their actions.  Climate change, national and international security, economic development, nuclear waste disposal, biodiversity, natural resource conservation, population policy, and scientific and technological research funding are examples of policy areas that involve long time-horizons.  Arguments in these areas often rely on implicit assumptions about the future of humanity.  By making these assumptions explicit, and subjecting them to critical analysis, it might be possible to address some of the big challenges for humanity in a more well-considered and thoughtful manner.

              The fact that we “need” realistic pictures of the future does not entail that we can have them.  Predictions about future technical and social developments are notoriously unreliable – to an extent that has led some to propose that we do away with prediction altogether in our planning and preparation for the future.  Yet while the methodological problems of such forecasting are certainly very significant, the extreme view that we can or should do away with prediction altogether is misguided.  That view is expressed, to take one example, in a recent paper on the societal implications of nanotechnology by Michael Crow and Daniel Sarewitz, in which they argue that the issue of predictability is “irrelevant”:

 

preparation for the future obviously does not require accurate prediction; rather, it requires a foundation of knowledge upon which to base action, a capacity to learn from experience, close attention to what is going on in the present, and healthy and resilient institutions that can effectively respond or adapt to change in a timely manner.

 

Note that each of the elements Crow and Sarewitz mention as required for the preparation for the future relies in some way on accurate prediction.  A capacity to learn from experience is not useful for preparing for the future unless we can correctly assume (predict) that the lessons we derive from the past will be applicable to future situations.  Close attention to what is going on in the present is likewise futile unless we can assume that what is going on in the present will reveal stable trends or otherwise shed light on what is likely to happen next.  It also requires non-trivial prediction to figure out what kind of institution will prove healthy, resilient, and effective in responding or adapting to future changes.

              The reality is that predictability is a matter of degree, and different aspects of the future are predictable with varying degrees of reliability and precision.   It may often be a good idea to develop plans that are flexible and to pursue policies that are robust under a wide range of contingencies.  In some cases, it also makes sense to adopt a reactive approach that relies on adapting quickly to changing circumstances rather than pursuing any detailed long-term plan or explicit agenda.  Yet these coping strategies are only one part of the solution.  Another part is to work to improve the accuracy of our beliefs about the future (including the accuracy of conditional predictions of the form “if x is done, y will result”).  There might be traps that we are walking towards that we could only avoid falling into by means of foresight.  There are also opportunities that we could reach much sooner if we could see them farther in advance.  And in a strict sense, prediction is necessary for meaningful decision-making.

              Predictability does not necessarily fall off with temporal distance.  It may be highly unpredictable where a traveler will be one hour after the start of her journey, yet predictable that after five hours she will be at her destination.  The long-term future of humanity may be relatively easy to predict, being a matter amenable to study by the natural sciences, particularly cosmology (physical eschatology).  And for there to be a degree of predictability, it is not necessary that it be possible to identify one specific scenario as what will definitely happen.  If there is at least some scenario that can be ruled out, that is also a degree of predictability.  Even short of this, if there is some basis for assigning different probabilities (in the sense of credences, degrees of belief) to different propositions about logically possible future events, or some basis for criticizing some such probability distributions as less rationally defensible or reasonable than others, then again there is a degree of predictability.  And this is surely the case with regard to many aspects of the future of humanity.  While our knowledge is insufficient to narrow down the space of possibilities to one broadly outlined future for humanity, we do know of many relevant arguments and considerations which in combination impose significant constraints on what a plausible view of the future could look like.  The future of humanity need not be a topic on which all assumptions are entirely arbitrary and anything goes.  There is a vast gulf between knowing exactly what will happen and having absolutely no clue about what will happen.  Our actual epistemic location is some offshore place in that gulf.

Most differences between our lives and the lives of our hunter-gatherer forebears are ultimately tied to technology, especially if we understand “technology” in its broadest sense, to include not only gadgets and machines but also techniques, processes, and institutions.  In this wide sense we could say that technology is the sum total of instrumentally useful culturally-transmissible information.  Language is a technology in this sense, along with tractors, machine guns, sorting algorithms, double-entry bookkeeping, and Robert’s Rules of Order.

              Technological innovation is the main driver of long-term economic growth.  Over long time scales, the compound effects of even modest average annual growth are profound.  Technological change is in large part responsible for many of the secular trends in such basic parameters of the human condition as the size of the world population, life expectancy, education levels, material standards of living, and the nature of work, communication, health care, war, and the effects of human activities on the natural environment.  Other aspects of society and our individual lives are also influenced by technology in many direct and indirect ways, including governance, entertainment, human relationships, and our views on morality, mind, matter, and our own human nature.  One does not have to embrace any strong form of technological determinism to recognize that technological capability – through its complex interactions with individuals, institutions, cultures, and environment – is a key determinant of the ground rules within which the games of human civilization get played out.

              This view of the important role of technology is consistent with large variations and fluctuations in deployment of technology in different times and parts of the world.  The view is also consistent with technological development itself being dependent on socio-cultural, economic, or personalistic enabling factors.  The view is also consistent with denying any strong version of inevitability of the particular growth pattern observed in human history.  One might hold, for example, that in a “re-run” of human history, the timing and location of the Industrial Revolution might have been very different, or that there might not have been any such revolution at all but rather, say, a slow and steady trickle of invention.  One might even hold that there are important bifurcation points in technological development at which history could take either path with quite different results in what kinds of technological systems developed.  Nevertheless, one might expect that, in the long run, most of the important basic capabilities that could be obtained through some possible technology will in fact be obtained through technology.  A bolder version of this idea could be formulated as follows:

 

Technological Completion Conjecture.  If scientific and technological development efforts do not effectively cease, then all important basic capabilities that could be obtained through some possible technology will be obtained.

 

The conjecture is not tautological.  It would be false if there is some possible basic capability that could be obtained through some technology which, while possible in the sense of being consistent with physical laws and material constraints, is so difficult to develop that it would remain beyond reach even after an indefinitely prolonged development effort.  Another way in which the conjecture could be false is if some important capability can only be achieved through some possible technology which, while it could have been developed, will not in fact ever be developed even though scientific and technological development efforts continue.

              The conjecture expresses the idea that which important basic capabilities are eventually attained does not depend on the paths taken by scientific and technological research in the short term.  The principle allows that we might attain some capabilities sooner if, for example, we direct research funding one way rather than another; but it maintains that provided our general techno-scientific enterprise continues, even the non-prioritized capabilities will eventually be obtained, either through some indirect technological route, or when general advancements in instrumentation and understanding have made the originally neglected direct technological route so easy that even a tiny effort will succeed in developing the technology in question.

              One might find the thrust of this underlying idea plausible without being persuaded that the Technological Completion Conjecture is strictly true, and in that case, one may explore what exceptions there might be.  Alternatively, one might accept the conjecture but believe that its antecedent is false, i.e. that scientific and technological development efforts will at some point effectively cease (before the enterprise is complete).  But if one accepts both the conjecture and its antecedent, what are the implications?  What will be the results if, in the long run, all of the important basic capabilities that could be obtained through some possible technology are in fact obtained?  The answer may depend on the order in which technologies are developed, the social, legal, and cultural frameworks within which they are deployed, the choices of individuals and institutions, and other factors, including chance events.  The obtainment of a basic capability does not imply that the capability will be used in a particular way or even that it will be used at all.

              These factors determining the uses and impacts of potential basic capabilities are often hard to predict.  What might be somewhat more foreseeable is which important basic capabilities will eventually be attained.  For under the assumption that the Technological Completion Conjecture and its antecedent are true, the capabilities that will eventually be obtained include all the ones that could be obtained through some possible technology.  While we may not be able to foresee all possible technologies, we can foresee many possible technologies, including some that are currently infeasible; and we can show that these anticipated possible technologies would provide a large range of new important basic capabilities.

              One way to foresee possible future technologies is through what Eric Drexler has termed “theoretical applied science”.  Theoretical applied science studies the properties of possible physical systems, including ones that cannot yet be built, using methods such as computer simulation and derivation from established physical laws.  Theoretical applied science will not in every instance deliver a definitive and uncontroversial yes-or-no answer to questions about the feasibility of some imaginable technology, but it is arguably the best method we have for answering such questions.  Theoretical applied science – both in its more rigorous and its more speculative applications – is therefore an important methodological tool for thinking about the future of technology and, a fortiori, one key determinant of the future of humanity.

              It may be tempting to refer to the expansion of technological capacities as “progress”.  But this term has evaluative connotations – of things getting better – and it is far from a given that expansion of technological capabilities makes things go better.  Even if empirically we find that such an association has held in the past (no doubt with many big exceptions), we should not uncritically assume that the association will always continue to hold.  It is preferable, therefore, to use a more neutral term, such as “technological development”, to denote the historical trend of accumulating technological capability.

              Technological development has provided human history with a kind of directionality.  Instrumentally useful information has tended to accumulate from generation to generation, so that each new generation has begun from a different and technologically more advanced starting point than its predecessor.  One can point to exceptions to this trend, regions that have stagnated or even regressed for extended periods of time.  Yet looking at human history from our contemporary vantage point, the macro-pattern is unmistakable.

              It was not always so.  Technological development for most of human history was so slow as to be indiscernible.  When technological development was that slow, it could only have been detected by comparing how levels of technological capability differed over large spans of time.  Yet the data needed for such comparisons – detailed historical accounts, archeological excavations with carbon dating, and so forth – were unavailable until fairly recently, as Robert Heilbroner explains:

 

At the very apex of the first stratified societies, dynastic dreams were dreamt and visions of triumph or ruin entertained; but there is no mention in the papyri and cuneiform tablets on which these hopes and fears were recorded that they envisaged, in the slightest degree, changes in the material conditions of the great masses, or for that matter, of the ruling class itself.

 

Heilbroner argued in Visions of the Future for the bold thesis that humanity’s perception of the shape of things to come has gone through exactly three phases since the first appearance of Homo sapiens.  In the first phase, which comprises all of human prehistory and most of history, the worldly future was envisaged – with very few exceptions – as changeless in its material, technological, and economic conditions.  In the second phase, lasting roughly from the beginning of the eighteenth century until the second half of the twentieth, worldly expectations in the industrialized world changed to incorporate the belief that the hitherto untamable forces of nature could be controlled through the application of science and rationality, and the future became a great beckoning prospect.  The third phase – mostly post-war but overlapping with the second phase – sees the future in a more ambivalent light: as dominated by impersonal forces, as disruptive, hazardous, and foreboding as well as promising.

              Supposing that some perceptive observer in the past had noticed some instance of directionality – be it a technological, cultural, or social trend – the question would have remained whether the detected directionality was a global feature or a mere local pattern.  In a cyclical view of history, for example, there can be long stretches of steady cumulative development of technology or other factors.  Within a period, there is clear directionality; yet each flood of growth is followed by an ebb of decay, returning things to where they stood at the beginning of the cycle.  Strong local directionality is thus compatible with the view that, globally, history moves in circles and never really gets anywhere.  If the periodicity is assumed to go on forever, a form of eternal recurrence would follow.

              Modern Westerners who are accustomed to viewing history as a directional pattern of development may not appreciate how natural the cyclical view of history once seemed.  Any closed system with only a finite number of possible states must either settle down into one state and remain in that one state forever, or else cycle back through states in which it has already been.  In other words, a closed finite state system must either become static or else start repeating itself.  If we assume that the system has already been around for an eternity, then this eventual outcome must already have come about; i.e., the system is already either stuck or is cycling through states in which it has been before.  The proviso that the system has only a finite number of states may not be as significant as it seems, for even a system that has an infinite number of possible states may have only finitely many perceptibly different possible states.  For many practical purposes, it may not matter much whether the current state of the world has already occurred an infinite number of times, or whether an infinite number of states have previously occurred each of which is merely imperceptibly different from the present state.  Either way, we could characterize the situation as one of eternal recurrence – the extreme case of a cyclical history.
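
The pigeonhole reasoning behind this claim can be made concrete with a toy simulation.  The sketch below is my own illustration, not from the text; the particular update rule and number of states are arbitrary choices.

```python
# A minimal sketch: iterating any deterministic update rule on a finite state
# space must eventually revisit a state, after which the trajectory repeats.

def find_cycle(update, start):
    """Iterate `update` from `start` until a state repeats; return (tail, cycle)."""
    seen = {}            # state -> step at which it was first visited
    trajectory = []
    state = start
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = update(state)
    first = seen[state]
    return trajectory[:first], trajectory[first:]   # pre-cycle tail, repeating cycle

# Toy "world" with 1,000 possible states and an arbitrary deterministic dynamic.
toy_update = lambda s: (7 * s + 3) % 1000
tail, cycle = find_cycle(toy_update, start=42)
print(f"Enters a cycle of length {len(cycle)} after {len(tail)} steps.")
```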

              In the actual world, the cyclical view is false because the world had a beginning a finite time ago.  The human species has existed for a mere two hundred thousand years or so, and this is far from enough time for it to have experienced all possible conditions and permutations of which the system of humans and their environment is capable.

              More fundamentally, the reason why the cyclical view is false is that the universe itself has existed for only a finite amount of time.   The universe started with the Big Bang an estimated 13.7 billion years ago, in a low-entropy state.  The history of the universe has its own directionality: an ineluctable increase in entropy.  During its process of entropy increase, the universe has progressed through a sequence of distinct stages.  In the eventful first three seconds, a number of transitions occurred, including probably a period of inflation, reheating, and symmetry breaking.  These were followed, later, by nucleosynthesis, expansion, cooling, and formation of galaxies, stars, and planets, including Earth (circa 4.5 billion years ago).  The oldest undisputed fossils are about 3.5 billion years old, but there is some evidence that life already existed 3.7 billion years ago and possibly earlier.  Evolution of more complex organisms was a slow process.  It took some 1.8 billion years for eukaryotic life to evolve from prokaryotes, and another 1.4 billion years before the first multicellular organisms arose.  From the beginning of the Cambrian period (some 542 million years ago), “important developments” began happening at a faster pace, but still enormously slowly by human standards.  Homo habilis – our first “human-like ancestors” – evolved some 2 million years ago; Homo sapiens 100,000 years ago.  The agricultural revolution began in the Fertile Crescent of the Middle East 10,000 years ago, and the rest is history.  The size of the human population, which was about 5 million when we were living as hunter-gatherers 10,000 years ago, had grown to about 200 million by the year 1; it reached one billion in 1835 AD; and today over 6.6 billion human beings are breathing on this planet.  From the time of the industrial revolution, perceptive individuals living in developed countries have noticed significant technological change within their lifetimes.

              All techno-hype aside, it is striking how recent many of the events are that define what we take to be the modern human condition.  If we compress the time scale such that the Earth formed one year ago, then Homo sapiens evolved less than 12 minutes ago, agriculture began a little over one minute ago, the Industrial Revolution took place less than 2 seconds ago, the electronic computer was invented 0.4 seconds ago, and the Internet less than 0.1 seconds ago – in the blink of an eye.
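
The compressed-timescale figures can be checked with simple arithmetic.  The sketch below is my own illustration; the event dates are rough, round numbers (and the Internet is counted from the Web’s breakthrough in the mid-1990s), not figures taken from the text.

```python
# Back-of-the-envelope check of the compressed timescale above.
EARTH_AGE_YEARS = 4.5e9                    # Earth formed ~4.5 billion years ago
YEAR_SECONDS = 365.25 * 24 * 3600

def compressed_seconds(years_ago):
    """Map 'years before present' onto a scale where Earth's age equals one year."""
    return years_ago / EARTH_AGE_YEARS * YEAR_SECONDS

events = {
    "Homo sapiens evolves":    100_000,    # years ago (approx.)
    "Agriculture begins":       10_000,
    "Industrial Revolution":       250,
    "Electronic computer":          60,    # mid-twentieth century
    "Internet (Web era)":           13,    # counted from the Web's breakthrough
}

for name, years_ago in events.items():
    s = compressed_seconds(years_ago)
    label = f"{s / 60:.1f} min" if s >= 60 else f"{s:.2f} s"
    print(f"{name:25s} ~{label} ago on the compressed scale")
```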

              Almost all the volume of the universe is ultra-high vacuum, and almost all of the tiny material specks in this vacuum are so hot or so cold, so dense or so dilute, as to be utterly inhospitable to organic life.  Spatially as well as temporally, our situation is an anomaly.

              Given the technocentric perspective adopted here, and in light of our incomplete but substantial knowledge of human history and its place in the universe, how might we structure our expectations of things to come?  The remainder of this paper will outline four families of scenarios for humanity’s future: extinction, recurrent collapse, plateau, and posthumanity.

 

 

Unless the human species lasts literally forever, it will at some point cease to exist.  In that case, the long-term future of humanity is easy to describe: extinction.  An estimated 99.9% of all species that ever existed on Earth are already extinct.

              There are two different ways in which the human species could become extinct: one, by evolving or developing or transforming into one or more new species or life forms, sufficiently different from what came before so as no longer to count as Homo sapiens; the other, by simply dying out, without any meaningful replacement or continuation.  Of course, a transformed continuant of the human species might itself eventually terminate, and perhaps there will be a point where all life comes to an end; so scenarios involving the first type of extinction may eventually converge into the second kind of scenario of complete annihilation.  We postpone discussion of transformation scenarios to a later section, and we shall not here discuss the possible existence of fundamental physical limitations to the survival of intelligent life in the universe.  This section focuses on the direct form of extinction (annihilation) occurring within any very long, but not astronomically long, time horizon – we could say one hundred thousand years for specificity.

              Human extinction risks have received less scholarly attention than they deserve.  In recent years, there have been approximately three serious books and one major paper on this topic.  John Leslie, a Canadian philosopher, puts the probability of humanity failing to survive the next five centuries at 30% in his book The End of the World.  His estimate is partly based on the controversial “Doomsday argument” and on his own views about the limitations of this argument.  Sir Martin Rees, Britain’s Astronomer Royal, is even more pessimistic, putting the odds that humanity will survive the 21st century at no better than 50% in Our Final Hour.  Richard Posner, an eminent American legal scholar, offers no numerical estimate but rates the risk of extinction “significant” in Catastrophe: Risk and Response.  And I published a paper in 2002 in which I suggested that assigning a probability of less than 25% to existential disaster (no time limit) would be misguided.  The concept of existential risk is distinct from that of extinction risk.  As I introduced the term, an existential disaster is one that causes either the annihilation of Earth-originating intelligent life or the permanent and drastic curtailment of its potential for future desirable development.

              It is possible that a publication bias is responsible for the alarming picture presented by these opinions.  Scholars who believe that the threats to human survival are severe might be more likely to write books on the topic, making the threat of extinction seem greater than it really is.  Nevertheless, it is noteworthy that there seems to be a consensus among those researchers who have seriously looked into the matter that there is a serious risk that humanity’s journey will come to a premature end.

              The greatest extinction risks (and existential risks more generally) arise from human activity.  Our species has survived volcanic eruptions, meteoric impacts, and other natural hazards for tens of thousands of years.  It seems unlikely that any of these old risks should exterminate us in the near future.  By contrast, human civilization is introducing many novel phenomena into the world, ranging from nuclear weapons to designer pathogens to high-energy particle colliders.  The most severe existential risks of this century derive from expected technological developments.  Advances in biotechnology might make it possible to design new viruses that combine the easy contagion and mutability of the influenza virus with the lethality of HIV.  Molecular nanotechnology might make it possible to create weapons systems with a destructive power dwarfing that of both thermonuclear bombs and biowarfare agents.   Superintelligent machines might be built and their actions could determine the future of humanity – and whether there will be one.   Considering that many of the existential risks that now seem to be among the most significant were conceptualized only in recent decades, it seems likely that further ones still remain to be discovered.

              The same technologies that will pose these risks will also help us to mitigate some risks.  Biotechnology can help us develop better diagnostics, vaccines, and anti-viral drugs.  Molecular nanotechnology could offer even stronger prophylactics. Superintelligent machines may be the last invention that human beings ever need to make, since a superintelligence, by definition, would be far more effective than a human brain in practically all intellectual endeavors, including strategic thinking, scientific analysis, and technological creativity. In addition to creating and mitigating risks, these powerful technological capabilities would also affect the human condition in many other ways.

              Extinction risks constitute an especially severe subset of what could go badly wrong for humanity.  There are many possible global catastrophes that would cause immense worldwide damage, maybe even the collapse of modern civilization, yet fall short of terminating the human species.  An all-out nuclear war between Russia and the United States might be an example of a global catastrophe that would be unlikely to result in extinction.  A terrible pandemic with high virulence and 100% mortality rate among infected individuals might be another example: if some groups of humans could successfully quarantine themselves before being exposed, human extinction could be avoided even if, say, 95% or more of the world’s population succumbed.  What distinguishes extinction and other existential catastrophes is that a comeback is impossible.  A non-existential disaster causing the breakdown of global civilization is, from the perspective of humanity as a whole, a potentially recoverable setback: a giant massacre for man, a small misstep for mankind.

              An existential catastrophe is therefore qualitatively distinct from a “mere” collapse of global civilization, although in terms of our moral and prudential attitudes perhaps we should simply view both as unimaginably bad outcomes.   One way that civilization collapse could be a significant feature in the larger picture for humanity, however, is if it formed part of a repeating pattern. This takes us to the second family of scenarios: recurrent collapse.

 

Environmental threats seem to have displaced nuclear holocaust as the chief specter haunting the public imagination.  Current-day pessimists about the future often focus on the environmental problems facing the growing world population, worrying that our wasteful and polluting ways are unsustainable and potentially ruinous to human civilization.  The credit for having handed the environmental movement its initial impetus is often given to Rachel Carson, whose book Silent Spring (1962) sounded the alarm on pesticides and synthetic chemicals that were being released into the environment with allegedly devastating effects on wildlife and human health.  The environmentalist forebodings swelled over the decade that followed.  Paul Ehrlich’s book The Population Bomb, and the Club of Rome report Limits to Growth, which sold 30 million copies, predicted economic collapse and mass starvation by the eighties or nineties as a result of population growth and resource depletion.

              In recent years, the spotlight of environmental concern has shifted to global climate change.  Carbon dioxide and other greenhouse gases are accumulating in the atmosphere, where they are expected to cause a warming of Earth’s climate and a concomitant rise in sea levels.  The most recent report by the United Nations’ Intergovernmental Panel on Climate Change, which represents the most authoritative assessment of current scientific opinion, attempts to estimate the increase in global mean temperature that would be expected by the end of this century under the assumption that no efforts at mitigation are made.  The final estimate is fraught with uncertainty because of uncertainty about what the default rate of emissions of greenhouse gases will be over the century, uncertainty about the climate sensitivity parameter, and uncertainty about other factors.  The IPCC therefore expresses its assessment in terms of six different climate scenarios based on different models and different assumptions.  The “low” model predicts a mean global warming of +1.8°C (uncertainty range 1.1°C to 2.9°C); the “high” model predicts warming by +4.0°C (2.4°C to 6.4°C).  Estimated sea level rise predicted by these two most extreme scenarios among the six considered is 18 to 38 cm, and 26 to 59 cm, respectively.

              While this prognosis might well justify a range of mitigation policies, it is important to maintain a sense of perspective when we are considering the issue from a “future of humanity” point of view.  Even the Stern Review on the Economics of Climate Change, a report prepared for the British Government which has been criticized by some as overly pessimistic, estimates that under the assumption of business-as-usual with regard to emissions, global warming will reduce welfare by an amount equivalent to a permanent reduction in per capita consumption of between 5 and 20%.  In absolute terms, this would be a huge harm.  Yet over the course of the twentieth century, world GDP grew by some 3,700%, and per capita world GDP rose by some 860%.  It seems safe to say that (absent a radical overhaul of our best current scientific models of the Earth’s climate system) whatever negative economic effects global warming will have, they will be completely swamped by other factors that will influence economic growth rates in this century.
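
A rough calculation shows why the estimated damage is swamped by growth.  The sketch below is my own illustration, using only the percentages quoted above; the 20% figure is the upper bound of the damage range, and all numbers are approximate.

```python
# Implied annual growth rates from the quoted century-long figures, and how a
# permanent 20% consumption loss compares with ordinary growth.
import math

gdp_growth_factor        = 1 + 37.0   # world GDP grew ~3,700% over the 20th century
per_capita_growth_factor = 1 + 8.6    # per capita world GDP grew ~860%

annual_gdp = gdp_growth_factor ** (1 / 100) - 1
annual_pc  = per_capita_growth_factor ** (1 / 100) - 1
print(f"Implied average annual growth: world GDP ~{annual_gdp:.1%}, per capita ~{annual_pc:.1%}")

# Upper-bound damage: a permanent 20% cut in per capita consumption is roughly
# equivalent to losing a decade's worth of per capita growth.
years_equivalent = math.log(1 / 0.8) / math.log(1 + annual_pc)
print(f"A permanent 20% consumption loss is about {years_equivalent:.0f} years of per capita growth")
```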

              There have been a number of attempts by scholars to explain societal collapse – either as a case study of some particular society, such as Gibbon’s classic The History of the Decline and Fall of the Roman Empire – or else as an attempt to discover failure modes applying more generally.  Two examples of the latter genre include Joseph Tainter’s The Collapse of Complex Societies, and Jared Diamond’s more recent Collapse: How Societies Choose to Fail or Succeed.  Tainter notes that societies need to secure certain resources such as food, energy, and raw materials in order to sustain their populations.  In their attempts to solve this supply problem, societies may grow in complexity – for example, in the form of bureaucracy, infrastructure, social class distinction, military operations, and colonies.  At some point, Tainter argues, the marginal returns on these investments in social complexity become unfavorable, and societies that do not manage to scale back when their organizational overheads become too large eventually face collapse.

              Diamond argues that many past cases of societal collapse have involved environmental factors such as deforestation and habitat destruction, soil problems, water management problems, overhunting and overfishing, the effects of introduced species, human population growth, and increased per-capita impact of people.  He also suggests four new factors that may contribute to the collapse of present and future societies: human-caused climate change, build-up of toxic chemicals in the environment, energy shortages, and the full utilization of the Earth’s photosynthetic capacity.  Diamond draws attention to the danger of “creeping normalcy”, referring to the phenomenon of a slow trend being concealed within noisy fluctuations, so that a detrimental outcome that occurs in small, almost unnoticeable steps may be accepted or come about without resistance even if the same outcome, had it come about in one sudden leap, would have evoked a vigorous response.

              We need to distinguish different classes of scenarios involving societal collapse.  First, we may have a merely local collapse: individual societies can collapse, but this is unlikely to have a determining effect on the future of humanity if other advanced societies survive and take up where the failed societies left off.  All historical examples of collapse have been of this kind.  Second, we might suppose that new kinds of threat (e.g. nuclear holocaust or catastrophic changes in the global environment) or the trend towards globalization and increased interdependence of different parts of the world create a vulnerability to human civilization as a whole.  Suppose that a global societal collapse were to occur.  What happens next?  If the collapse is of such a nature that a new advanced global civilization can never be rebuilt, the outcome would qualify as an existential disaster.  However, it is hard to think of a plausible collapse which the human species survives but which nevertheless makes it permanently impossible to rebuild civilization.  Supposing, therefore, that a new technologically advanced civilization is eventually rebuilt, what is the fate of this resurgent civilization?  Again, there are two possibilities.  The new civilization might avoid collapse; and in the following two sections we will examine what could happen to such a sustainable global civilization.  Alternatively, the new civilization collapses again, and the cycle repeats.  If eventually a sustainable civilization arises, we reach the kind of scenario that the following sections will discuss.  If instead one of the collapses leads to extinction, then we have the kind of scenario that was discussed in the previous section.  The remaining case is that we face a cycle of indefinitely repeating collapse and regeneration (see figure 1).

 

              While there are many conceivable explanations for why an advanced society might collapse, only a subset of these explanations could plausibly account for an unending pattern of collapse and regeneration.  An explanation for such a cycle could not rely on some contingent factor that would apply to only some advanced civilizations and not others, or to a factor that an advanced civilization would have a realistic chance of counteracting; for if such a factor were responsible, one would expect that the collapse-regeneration pattern would at some point be broken when the right circumstances finally enabled an advanced civilization to overcome the obstacles to sustainability.  Yet at the same time, the postulated cause for collapse could not be so powerful as to cause the extinction of the human species.

              A recurrent collapse scenario consequently requires a carefully calibrated homeostatic mechanism that keeps the level of civilization confined within a relatively narrow interval, as illustrated in figure 1.  Even if humanity were to spend many millennia on such an oscillating trajectory, one might expect that eventually this phase would end, resulting in either the permanent destruction of humankind, or the rise of a stable sustainable global civilization, or the transformation of the human condition into a new “posthuman” condition.  We turn now to the second of these possibilities, that the human condition will reach a kind of stasis, either immediately or after undergoing one or more cycles of collapse-regeneration.

 

Figure 2 depicts two possible trajectories, one representing an increase followed by a permanent plateau, the other representing stasis at (or close to) the current status quo.

              The static view is implausible.  It would imply that we have recently arrived at the final human condition even at a time when change is exceptionally rapid: “What we do know,” writes distinguished historian of technology Vaclav Smil, “is that the past six generations have amounted to the most rapid and the most profound change our species has experienced in its 5,000 years of recorded history.”  The static view would also imply a radical break with several long-established trends.  If the world economy continues to grow at the same pace as in the last half century, then by 2050 the world will be seven times richer than it is today.  World population is predicted to increase to just over 9 billion in 2050, so average wealth would also increase dramatically.  Extrapolating further, by 2100 the world would be almost 50 times richer than today.  A single modest-sized country might then have as much wealth as the entire world has at the present.  Over the course of human history, the doubling time of the world economy has been drastically reduced on several occasions, such as in the agricultural transition and the Industrial Revolution.  Should another such transition occur in this century, the world economy might be several orders of magnitude larger by the end of the century.
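
The extrapolations above are ordinary compound growth.  The sketch below is my own illustration, using two illustrative long-run growth rates rather than figures from the text, to show how factors of roughly seven by mid-century and several tens by 2100 arise.

```python
# Compound-growth arithmetic behind the "times richer" extrapolations.

def richer_factor(annual_growth, years):
    """How many times richer the world becomes after `years` of steady growth."""
    return (1 + annual_growth) ** years

for g in (0.040, 0.046):                       # illustrative long-run growth rates
    by_2050 = richer_factor(g, 2050 - 2007)
    by_2100 = richer_factor(g, 2100 - 2007)
    print(f"at {g:.1%}/yr: ~{by_2050:.0f}x richer by 2050, ~{by_2100:.0f}x richer by 2100")

# A transition that shortened the economy's doubling time the way the
# Industrial Revolution did would dwarf these factors.
```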

 

 

 

 


              Another reason for assigning a low probability to the static view is that we can foresee various specific technological advances that will give humans important new capacities.  Virtual reality environments will constitute an expanding fraction of our experience.  The capabilities of recording, surveillance, biometric, and data-mining technologies will grow, making it increasingly feasible to keep track of where people go, whom they meet, what they do, and what goes on inside their bodies.

              Among the most important potential developments are ones that would enable us to alter our biology directly through technological means. Such interventions could affect us more profoundly than modification of beliefs, habits, culture, and education.  If we learn to control the biochemical processes of human senescence, healthy lifespan could be radically prolonged.  A person with the age-specific mortality of a 20-year-old would have a life expectancy of about a thousand years.  The ancient but hitherto mostly futile quest for happiness could meet with success if scientists could develop safe and effective methods of controlling the brain circuitry responsible for subjective well-being.   Drugs and other neurotechnologies could make it increasingly feasible for users to shape themselves into the kind of people they want to be by adjusting their personality, emotional character, mental energy, romantic attachments, and moral character.   Cognitive enhancements might deepen our intellectual lives.
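
The thousand-year figure follows from elementary survival arithmetic: with a constant annual mortality risk p, expected remaining lifespan is roughly 1/p.  The sketch below is my own illustration; the 0.1% annual mortality of a healthy 20-year-old is an approximate, illustrative number.

```python
# Survival arithmetic behind the "about a thousand years" figure.

annual_mortality = 0.001   # roughly 1 in 1,000 per year at age 20 (approximate)

# With a constant annual hazard p, expected remaining lifespan is the sum over
# future years of the probability of still being alive, which equals (1 - p) / p.
expected_years = (1 - annual_mortality) / annual_mortality
print(f"Expected remaining lifespan is about {expected_years:.0f} years")   # ~999 years
```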

              Nanotechnology will have wide-ranging consequences for manufacturing, medicine, and computing.   Machine intelligence, to be discussed further in the next section, is another potential revolutionary technology.  Institutional innovations such as prediction markets might improve the capability of human groups to forecast future developments, and other technological or institutional developments might lead to new ways for humans to organize more effectively.   The impacts of these and other technological developments on the character of human lives are difficult to predict, but that they will have such impacts seems a safe bet.

              Those who believe that developments such as those listed will not occur should consider whether their skepticism is really about ultimate feasibility or merely about timescales.  Some of these technologies will be difficult to develop.  Does that give us reason to think that they will never be developed?  Not even in 50 years?  200 years?  10,000 years?  Looking back, developments such as language, agriculture, and perhaps the Industrial Revolution may be said to have significantly changed the human condition.  There are at least a thousand times more of us now; and with current world average life expectancy at 67 years, we live perhaps three times longer than our Pleistocene ancestors.  The mental life of human beings has been transformed by developments such as language, literacy, urbanization, division of labor, industrialization, science, communications, transport, and media technology.

              The other trajectory in figure 2 represents scenarios in which technological capability continues to grow significantly beyond the current level before leveling off below the level at which a fundamental alteration of the human condition would occur.  This trajectory avoids the implausibility of postulating that we have just now reached a permanent plateau of technological development.  Nevertheless, it does propose that a permanent plateau will be reached not radically far above the current level.  We must ask what could cause technological development to level off at that stage.

              One conceptual possibility is that development beyond this level is impossible because of limitations imposed by fundamental natural laws.  It appears, however, that the physical laws of our universe permit forms of organization that would qualify as a posthuman condition (to be discussed further in the next section).  Moreover, there appears to be no fundamental obstacle to the development of technologies that would make it possible to build such forms of organization.  Physical impossibility, therefore, is not a plausible explanation for why we should end up on either of the trajectories depicted in figure 2.

              Another potential explanation is that while theoretically possible, a posthuman condition is just too difficult to attain for humanity ever to be able to get there.  For this explanation to work, the difficulty would have to be of a certain kind.  If the difficulty consisted merely of there being a large number of technologically challenging steps that would be required to reach the destination, then the argument would at best suggest that it will take a long time to get there, not that we never will.  Provided the challenge can be divided into a sequence of individually feasible steps, it would seem that humanity could eventually solve the challenge given enough time.  Since at this point we are not so concerned with timescales, it does not appear that technological difficulty of this kind would make any of the trajectories in figure 2 a plausible scenario for the future of humanity.

              In order for technological difficulty to account for one of the trajectories in figure 2, the difficulty would have to be of a sort that is not reducible to a long sequence of individually feasible steps.  If all the pathways to a posthuman condition required technological capabilities that could be attained only by building enormously complex, error-intolerant systems of a kind which could not be created by trial-and-error or by assembling components that could be separately tested and debugged, then the technological difficulty argument would have legs to stand on.  Charles Perrow argued in Normal Accidents that efforts to make complex systems safer often backfire because the added safety mechanisms bring with them additional complexity which creates additional opportunities for things to go wrong when parts and processes interact in unexpected ways.  For example, increasing the number of security personnel on a site can increase the “insider threat”, the risk that at least one person on the inside can be recruited by would-be attackers.  Along similar lines, Jaron Lanier has argued that software development has run into a kind of complexity barrier.  An informal argument of this kind has also been made against the feasibility of molecular manufacturing.

              Each of these arguments about complexity barriers is problematic.  And in order to have an explanation for why humanity’s technological development should level off before a posthuman condition is reached, it is not sufficient to show that technologies run into insuperable complexity barriers.  Rather, it would have to be shown that technologies that would enable a posthuman condition (biotechnology, nanotechnology, artificial intelligence, etc.) will be blocked by such barriers.  That seems an unlikely proposition.  Alternatively, one might try to build an argument based on complexity barriers for social organization in general rather than for particular technologies – perhaps something akin to Tainter’s explanation of past cases of societal collapse, mentioned in the previous section.  In order to produce the trajectories in figure 2, however, the explanation would have to be modified to allow for stagnation and plateauing rather than collapse.  One problem with this hypothesis is that it is unclear that the development of the technologies requisite to reach a posthuman condition would necessarily require a significant increase in the complexity of social organization beyond its present level.

              A third possible explanation is that even if a posthuman condition is both theoretically possible and practically feasible, humanity might “decide” not to pursue technological development beyond a certain level.  One could imagine systems, institutions, or attitudes emerging which would have the effect of blocking further development, whether by design or as an unintended consequence.  Yet an explanation rooted in unwillingness for technological advancement would have to overcome several challenges.  First, how does enough unwillingness arise to overcome what at the present appears like an inexorable process of technological innovation and scientific research?  Second, how does a decision to relinquish development get implemented globally in a way that leaves no country and no underground movement able to continue technological research?  Third, how does the policy of relinquishment avoid being overturned, even on timescales extending over tens of thousands of years and beyond?  Relinquishment would have to be global and permanent in order to account for a trajectory like one of those represented in figure 2.  A fourth difficulty emerges out of the three already mentioned: the explanation for how the aversion to technological advancement arises, how it gets universally implemented, and how it attains permanence, would have to avoid postulating causes that in themselves would usher in a posthuman condition.  For example, if the explanation postulated that powerful new mind-control technologies would be deployed globally to change people’s motivation, or that an intensive global surveillance system would be put in place and used to manipulate the direction of human development along a predetermined path, one would have to wonder whether these interventions, or their knock-on effects on society, culture, and politics, would not themselves alter the human condition in sufficiently fundamental ways that the resulting condition would qualify as posthuman.

              To argue that stasis and plateau are relatively unlikely scenarios is not inconsistent with maintaining that some aspects of the human condition will remain unchanged.  For example, Francis Fukuyama argued in The End of History and the Last Man that the endpoint of mankind’s ideological evolution has essentially been reached with the end of the Cold War.  Fukuyama suggested that Western liberal democracy is the final form of human government, and that while it would take some time for this ideology to become completely universalized, secular free-market democracy will in the long term become more and more prevalent.  In his more recent book Our Posthuman Future, he adds an important qualification to his earlier thesis, namely that direct technological modification of human nature could undermine the foundations of liberal democracy.  But be that as it may, the thesis that liberal democracy (or any other political structure) is the final form of government is consistent with the thesis that the general condition for intelligent Earth-originating life will not remain a human condition for the indefinite future.

 

An explication of what has been referred to as a “posthuman condition” is overdue.  In this paper, the term is used to refer to a condition which has at least one of the following characteristics:

 

 

This definition’s vagueness and arbitrariness may perhaps be excused on grounds that the rest of this paper is at least equally schematic.  In contrast to some other explications of “posthumanity”, the one above does not require direct modification of human nature.   This is because the relevant concept for the present discussion is that of a level of technological or economic development that would involve a radical change in the human condition, whether the change was wrought by biological enhancement or other causes.

 

 

 


              The two dashed lines in figure 3 differ in steepness.  One of them depicts slow gradual growth that in the fullness of time rises into the posthuman level and beyond.  The other depicts a period of extremely rapid growth in which humanity abruptly transitions into a posthuman condition.  This latter possibility can be referred to as the singularity hypothesis.  Proponents of the singularity hypothesis usually believe not only that a period of extremely rapid technological development will usher in posthumanity suddenly, but also that this transition will take place soon – within a few decades.  Logically, these two contentions are quite distinct.

              In 1958, Stanislaw Ulam, a Polish-born American mathematician, referring to a meeting with John von Neumann, wrote:

 

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.

 

The idea of a technological singularity tied specifically to artificial intelligence was perhaps first clearly articulated by the statistician I. J. Good in 1965:

 

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever.  Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make…  It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built...

 

Mathematician and science fiction writer Vernor Vinge elaborated on this idea in his 1993 essay The Coming Technological Singularity, adjusting the timing of Good’s prediction:

 

Within thirty years, we will have the technological means to create superhuman intelligence.  Shortly thereafter, the human era will be ended.

 

Vinge considered several possible avenues to superintelligence, including AI in individual machines or computer networks, computer/human interfaces, and biological improvement of the natural human intellect.  An important part of both Good’s and Vinge’s reasoning is the idea of a strong positive feedback-loop as increases in intelligence lead to increased ability to make additional progress in intelligence-increasing technologies.  (“Intelligence” could here be understood as a general rubric for all those mental faculties that are relevant for developing new technologies, thus including for example creativity, work capacity, and the ability to write a persuasive case for funding.)

              Skeptics of the singularity hypothesis can object that while greater intelligence would lead to faster technological progress, there is an additional factor at play which may slow things down, namely that the easiest improvements will be made first, and that after the low-hanging fruits have all been picked, each subsequent improvement will be more difficult and require a greater amount of intellectual capability and labor to achieve.  The mere existence of positive feedback, therefore, is not sufficient to establish that an intelligence explosion would occur once intelligence reaches some critical magnitude.

              To assess the singularity hypothesis one must consider more carefully what kinds of intelligence-increasing interventions might be feasible and how closely stacked these interventions are in terms of their difficulty.  Only if intelligence growth could exceed the growth in difficulty level for each subsequent improvement could there be a singularity.  The period of rapid intelligence growth would also have to last long enough to usher in a posthuman era before running out of steam.
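
              This tug-of-war can be illustrated with a toy numerical sketch (the growth rule and the exponents below are arbitrary assumptions chosen purely for illustration, not estimates of any real quantity).  Capability increases at each step by an amount that grows with current capability but is divided by a rising difficulty term; whether the process fizzles or explodes depends on which term grows faster:

```python
# Toy model: capability growth vs. rising difficulty (illustrative only).
# Each step, capability grows by capability**a / difficulty, where the
# difficulty of the n-th improvement is n**b.  The exponents a and b are
# arbitrary assumptions, not empirical claims about intelligence research.

def simulate(a, b, steps=50, capability=1.0):
    for n in range(1, steps + 1):
        difficulty = n ** b                            # later improvements are harder
        capability += (capability ** a) / difficulty   # more capable systems progress faster
    return capability

print(simulate(a=0.5, b=2.0))   # difficulty wins: growth levels off (stays near 3)
print(simulate(a=1.2, b=1.0))   # capability wins: growth keeps accelerating
```

              In the first case the low-hanging-fruit effect dominates and growth fizzles; in the second, the positive feedback loop wins.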

              It might be easiest to assess the prospect for an intelligence explosion if we focus on the possibility of quantitative rather than qualitative improvements in intelligence.  One interesting pathway to greater intelligence illustrating such quantitative growth – and one that Vinge did not discuss – is uploading.

              Uploading refers to the use of technology to transfer a human mind to a computer.  This would involve the following steps: First, create a sufficiently detailed scan of a particular human brain, perhaps by feeding vitrified brain tissue into an array of powerful microscopes for automatic slicing and scanning.  Second, from this scanning data, use automatic image processing to reconstruct the 3-dimensional neuronal network that implemented cognition in the original brain, and combine this map with neurocomputational models of the different types of neurons contained in the network.  Third, emulate the whole computational structure on a powerful supercomputer (or cluster).  If successful, the procedure would result in a qualitative reproduction of the original mind, with memory and personality intact, onto a computer where it would now exist as software.   This mind could either inhabit a robotic body or live in virtual reality.  In determining the prerequisites for uploading, a tradeoff exists between the power of the scanning and simulation technology, on the one hand, and the degree of neuroscience insight on the other.  The worse the resolution of the scan, and the lower the computing power available to simulate possibly functionally irrelevant features, the more scientific insight would be needed to make the procedure work.  Conversely, with sufficiently advanced scanning technology and enough computing power, it might be possible to brute-force an upload even with fairly limited understanding of how the brain works – perhaps a level of understanding representing merely an incremental advance over the current state of the art.

              One obvious consequence of uploading is that many copies could be created of one uploaded mind.  The limiting resource is computing power to store and run the upload minds.  If enough computing hardware already exists or could rapidly be built, the upload population could undergo explosive growth: the replication time of an upload need be no longer than the time it takes to make a copy of a big piece of software, perhaps minutes or hours – a vast speed-up compared to biological human replication.  And the upload replica would be an exact copy, possessing from birth all the skills and knowledge of the original.  This could result in rapid exponential growth in the supply of highly skilled labor.  Additional acceleration is likely to result from improvements in the computational efficiency of the algorithms used to run the uploaded minds.  Such improvements would make it possible to create faster-thinking uploads, running perhaps at speeds thousands or millions of times that of an organic brain.
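
              A back-of-the-envelope calculation shows how fast such copying could proceed in principle (the one-hour copying time below is a hypothetical figure chosen only for illustration, and the calculation assumes that hardware availability is not the binding constraint):

$$
N(t) = N_0 \cdot 2^{t/\tau}, \qquad N_0 = 1,\ \tau = 1\ \text{hour} \;\Longrightarrow\; N(24\ \text{hours}) = 2^{24} \approx 1.7 \times 10^{7}.
$$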

              If uploading is technologically feasible, therefore, a singularity scenario involving an intelligence explosion and very rapid change seems realistic based only on the possibility of quantitative growth in machine intelligence.   The harder-to-evaluate prospect of qualitative improvements adds some further credence to the singularity hypothesis.

              Uploading would almost certainly produce a condition that would qualify as “posthuman” in this paper’s terminology, for example on grounds of population size, control of sensory input, and life expectancy.  (A human upload could have an indefinitely long lifespan as it would not be subject to biological senescence, and periodic backup copies could be created for additional security.)  Further changes would likely follow swiftly from the productivity growth brought about by the population expansion.  These further changes may include qualitative improvements in the intelligence of uploads, other machine intelligences, and remaining biological human beings.

              Inventor and futurist Ray Kurzweil has argued for the singularity hypothesis on somewhat different grounds.  His most recent book, The Singularity Is Near, is an update of his earlier writings.   It covers a vast range of ancillary topics related to radical future technological prospects, but its central theme is an attempt to demonstrate “the law of accelerating returns”, which manifests itself as exponential technological progress.  Kurzweil plots progress in a variety of areas, including computing, communications, and biotechnology, and in each case finds a pattern similar to Moore’s law for microchips: performance grows as an exponential with a short doubling time (typically a couple of years).  Extrapolating these trend lines, Kurzweil infers that a technological singularity is due around the year 2045.   While machine intelligence features as a prominent factor in Kurzweil’s forecast, his singularity scenario differs from that of Vinge in being more gradual: not a virtually-overnight total transformation resulting from runaway self-improving artificial intelligence, but a steadily accelerating pace of general technological advancement.

              Several critiques could be leveled against Kurzweil’s reasoning.  First, one might of course doubt that present exponential trends will continue for another four decades.  Second, while it is possible to identify certain fast-growing areas, such as IT and biotech, there are many other technology areas where progress is much slower.  One could argue that to get an index of the overall pace of technological development, we should look not at a hand-picked portfolio of hot technologies, but instead at economic growth, which implicitly incorporates all productivity-enhancing technological innovations, weighted by their economic significance.  In fact, the world economy has also been growing at a roughly exponential rate since the Industrial Revolution; but the doubling time is much longer, approximately 20 years.  Third, if technological progress is exponential, then the current rate of technological progress must be vastly greater than it was in the remote past.  But it is far from clear that this is so.  Vaclav Smil – the historian of technology who, as we saw, has argued that the past six generations have seen the most rapid and profound change in recorded history – maintains that the 1880s was the most innovative decade of human history.
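
              The gap between the two growth rates is easy to make concrete.  A doubling time of $T$ years corresponds to an annual growth rate of $2^{1/T} - 1$, so the rough figures cited above imply:

$$
T = 2\ \text{years} \Rightarrow 2^{1/2} - 1 \approx 41\%\ \text{per year}, \qquad T = 20\ \text{years} \Rightarrow 2^{1/20} - 1 \approx 3.5\%\ \text{per year}.
$$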

 

The four families of scenarios we have considered – extinction, recurrent collapse, plateau, and posthumanity – could be modulated by varying the timescale over which they are hypothesized to occur.  A few hundred years or a few thousand years might already be ample time for the scenarios to have an opportunity to play themselves out.  Yet such an interval is a blip compared to the lifetime of the universe.  Let us therefore zoom out and consider the longer term prospects for humanity.

              The first thing to notice is that the longer the time scale we are considering, the less likely it is that technological civilization will remain within the zone we termed “the human condition” throughout.  We can illustrate this point graphically by redrawing the earlier diagrams using an expanded scale on the two axes (figure 4).

[Figure 4]

              The extinction scenario is perhaps the one least affected by extending the timeframe of consideration.  If humanity goes extinct, it stays extinct.   The cumulative probability of extinction increases monotonically over time.  One might argue, however, that the current century, or the next few centuries, will be a critical phase for humanity, such that if we make it through this period then the life expectancy of human civilization could become extremely high.  Several possible lines of argument would support this view.  For example, one might believe that superintelligence will be developed within a few centuries, and that, while the creation of superintelligence will pose grave risks, once that creation and its immediate aftermath have been survived, the new civilization would have vastly improved survival prospects since it would be guided by superintelligent foresight and planning.  Furthermore, one might believe that self-sustaining space colonies may have been established within such a timeframe, and that once a human or posthuman civilization becomes dispersed over multiple planets and solar systems, the risk of extinction declines.  One might also believe that many of the possible revolutionary technologies (not only superintelligence) that can be developed will be developed within the next several hundred years; and that if these technological revolutions are destined to cause existential disaster, they would already have done so by then.

              The recurrent collapse scenario becomes increasingly unlikely the longer the timescale, for reasons that are apparent from figure 4.  The scenario postulates that technological civilization will oscillate continuously within a relatively narrow band of development.  If there is any chance that a cycle will either break through to the posthuman level or plummet into extinction, then there is for each period a chance that the oscillation will end.  Unless the chance of such a breakout converges to zero at a sufficiently rapid rate, the pattern will, with probability one, eventually be broken.  At that point the pattern might degenerate into one of the other ones we have considered.
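
              The underlying probabilistic point can be made explicit with a simple sketch.  Treating the cycles as independent for simplicity, and writing $p_i$ for the chance that the $i$-th cycle ends in a breakout (to posthumanity or to extinction), the probability that the oscillating pattern survives the first $n$ cycles is

$$
\prod_{i=1}^{n} (1 - p_i).
$$

              This product stays bounded away from zero as $n$ grows only if the series $\sum_i p_i$ converges; if, for instance, $p_i \geq \epsilon > 0$ for every cycle, the product tends to zero, and the pattern is broken with probability one.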

              The plateau scenarios are similar to the recurrent collapse scenario in that the level of civilization is hypothesized to remain confined within a narrow range; and the longer the timeframe considered, the smaller the probability that the level of technological development will remain within this range.  But compared to the recurrent collapse pattern, the plateau pattern might be thought to have a bit more staying power.  The reason is that the plateau pattern is consistent with a situation of complete stasis – such as might result, for example, from the rise of a very stable political system, propped up by greatly increased powers of surveillance and population control, and which for one reason or another opts to preserve its status quo.  Such stability is inconsistent with the recurrent collapse scenario.

              The cumulative probability of posthumanity, like that of extinction, increases monotonically over time.  By contrast to extinction scenarios, however, there is a possibility that a civilization that has attained a posthuman condition will later revert to a human condition.  For reasons paralleling those suggested earlier for the idea that the annual risk of extinction will decline substantially after certain critical technologies have been developed and after self-sustaining space colonies have been created, one might maintain that the annual probability that a posthuman condition would revert to a human condition will likewise decline over time.

 

Bostrom, N. (1998) "How Long Before Superintelligence?" 2.

——— (2002a) (New York: Routledge).

——— (2002b) "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards", 9.

——— (2002c) "Self-Locating Belief in Big Worlds: Cosmology's Missing Link to Observation", 99 (12):607–623.

——— (2003a) "Astronomical Waste: The Opportunity Cost of Delayed Technological Development", 15 (3):308-314.

——— . World Transhumanist Association 2003b. Available from .

——— (2005) "Transhumanist Values", 4 (1-2):87-101.

——— (2006) "Quantity of Experience: Brain-Duplication and Degrees of Consciousness", 16 (2):185-200.

——— (2007a) "Infinite Ethics", in, . Available from .

——— (2007b) "Technological Revolutions: Ethics and Policy in the Dark", in Nigel M. de S. Cameron (ed.), (John Wiley).

——— (2007c) "Why I Want to be a Posthuman When I Grow Up", in Bert Gordijn and Ruth Chadwick (eds.), (Springer).

Bostrom, N., and Ord, T. (2006) "The Reversal Test: Eliminating Status Quo Bias in Bioethics", 116 (4):656-680.

Bostrom, N., and Sandberg, A. (2007) "Cognitive Enhancement: Methods, Ethics, Regulatory Challenges", forthcoming.

Brin, D. (1998) (Reading, Mass.: Addison-Wesley).

Bureau, U. S. C.   2007. Available from .

Burkhead, L.   1999. Available from http://www.geniebusters.org/00_contents.htm.

Carson, R. (1962) (Boston: Houghton Mifflin).

Cox, S., and Vadon, R. (2007) "Running the rule over Stern's numbers", in, . Available from .

Crow, M. M., and Sarewitz, D. (2001) "Nanotechnology and Societal Transformation", in Albert H. Teich, Stephen D. Nelson, Celia McEnaney and Stephen J. Lita (eds.), (Washington, DC: American Association for the Advancement of Science), 89-101.

De Long, J. B. (1998) "Estimating World GDP, One Million B.C. - Present", in, . Available from .

De Long, J. B., and Olney, M. L. (2006) . 2nd ed (Boston: McGraw-Hill).

Diamond, J. M. (2005) (New York: Viking).

Drexler, E. (1992) (New York: John Wiley & Sons, Inc.).

——— (2003) "Nanotechnology Essays: Revolutionizing the Future of Technology (Revised 2006)", April.

——— (2007) "The stealth threat: an interview with K. Eric Drexler", 68 (1):55-58.

Drexler, K. E. (1985) (London: Fourth Estate).

Ehrlich, P. R. (1968) (New York: Ballantine Books).

Freitas, R. A. (1999) (Austin, TX: Landes Bioscience).

Fukuyama, F. (1992) (New York: Free Press).

——— (2002) (Farrar, Straus and Giroux).

Gibbon, E., and Kitchin, T. (1777) . A new edition ed. 12 vols (London: Printed for Lackington, Allen, and Co.).

Good, I. J. (1965) "Speculations Concerning the First Ultraintelligent Machine", 6:31-88.

Hanson, R. (1994) "What If Uploads Come First: The Crack of a Future Dawn", 6 (2).

——— (1995) "Could Gambling Save Science? Encouraging an Honest Consensus", 9:1:3-33.

——— (2000) "Long-term growth as a sequence of exponential modes", .

Heilbroner, R. L. (1995) (New York: Oxford University Press).

Hughes, J. (2007) "Millennial Tendencies in Responses to Apocalyptic Threats", in Nick Bostrom and Milan Cirkovic (eds.), (Oxford: Oxford University Press).

Joy, B. (2000) "Why the future doesn't need us", 8.04.

Kurzweil, R. (2005) (New York: Viking).

Lanier, J. (2000) "One-Half of a Manifesto", 8 (21).

Leslie, J. (1996) (London: Routledge).

Meadows, D. H., and Club of Rome. (1972) (New York: Universe Books).

Moravec, H. (1999) (New York: Oxford University Press).

Nordhaus, W. (2007) "A Review of the Stern Review on the Economics of Global Warming", forthcoming.

Parfit, D. (1984) (Oxford: Clarendon Press).

Pearce, D.   2004. Available from .

Perrow, C. (1984) (New York: Basic Books).

Posner, R. (2004) (Oxford: Oxford University Press).

Raup, D. M. (1991) (New York: W.W. Norton).

Rees, M. (2003) (Basic Books).

Sagan, S. (2004) "The Problem of Redundancy Problem: Why More Nuclear Security Forces May Produce Less Nuclear Security", 24 (4):935-946.

Smil, V. (2006) (Oxford: Oxford University Press).

Solomon, S., Qin, D., Manning, M., et al. (2007) . Edited by Intergovernmental Panel on Climate Change (Cambridge: Cambridge University Press).

Steinhardt, P., and Turok, N. (2002) "The Cyclic Universe: An informal introduction", arXiv:astro-ph/0204479v1.

Stern, N., and Great Britain Treasury (2006) (England: HM Treasury).

Tainter, J. A. (1988) (Cambridge: Cambridge University Press).

Ulam, S. (1958) "John von Neumann 1903-1957", (May).

United Nations Population Division (2004) "World Population Prospects: The 2004 Revision", .

Vinge, V. (1993) "The Coming Technological Singularity", Winter issue.

Wolfers, J., and Zitzewitz, E. (2004) "Prediction markets", 18 (2):107-126.

Wright, R. (1999) (New York: Pantheon Books).

Yudkowsky, E. (2007) "Artificial Intelligence as a Positive and Negative Factor in Global Risk", in Nick Bostrom and Milan Cirkovic (eds.), (Oxford: Oxford University Press).

 

 

1. (Hughes 2007)

2. (Crow and Sarewitz 2001)

3. For example, it is likely that computers will become faster, materials will become stronger, and medicine will cure more diseases; cf. (Drexler 2003).

4. You lift the glass to your mouth because you predict that drinking will quench your thirst; you avoid stepping in front of a speeding car because you predict that a collision will hurt you.

5. For more on technology and uncertainty, see (Bostrom 2007b).

6. I’m cutting myself some verbal slack.  On the proposed terminology, a particular physical object such as farmer Bob’s tractor is not, strictly speaking, technology but rather a , which depends on and embodies technology-as-information.  The individual tractor is physical capital.  The transmissible information needed to produce tractors is technology.

7. See e.g. (Wright 1999).

8. For a visual analogy, picture a box with large but finite volume, representing the space of basic capabilities that could be obtained through some possible technology.  Imagine sand being poured into this box, representing research effort.  The way in which you pour the sand will determine the places and speed at which piles build up in the box.  Yet if you keep pouring, eventually the whole space gets filled.

9. (Drexler 1992)

10. Theoretical applied science might also study potential pathways to the technology that would enable the construction of the systems in question, that is, how in principle one could solve the bootstrap problem of how to get from here to there.

11.(Heilbroner 1995), p. 8

12. The cyclical pattern is prominent in dharmic religions.  The ancient Mayans held a cyclical view, as did many in ancient Greece.  In the more recent Western tradition, the thought of eternal recurrence is most strongly associated with Nietzsche’s philosophy, but the idea has been explored by numerous thinkers and is a common trope in popular culture.

13.The proviso of system may also not have seemed significant.  The universe is a closed system.  The universe may not be a finite state system, but any finite part of the universe may permit of only finitely many different configurations, or finitely many perceptibly different configurations, allowing a kind of recurrence argument.  In the actual case, an analogous result may hold with regard to spatial rather than temporal repetition.  If we are living in a “Big World” then all possible human observations are in fact made by some observer (in fact, by infinitely many observers); see (Bostrom 2002c).

14. It could matter if one accepted the “Unification” thesis.  For a definition of this thesis, and an argument against it, see (Bostrom 2006).

15.According to the consensus model; but for a dissenting view, see e.g. (Steinhardt and Turok 2002).

16.(Bureau 2007).  There is considerable uncertainty about the numbers especially for the earlier dates.

17.Does anything interesting follow from this observation?  Well, it is connected to a number of issues that do matter a great deal to work on the future of humanity – issues like observation selection theory and the Fermi paradox; cmp. (Bostrom 2002a).

18. (Raup 1991), p. 3f.

19. (Leslie 1996)

20. Leslie defends the Carter-Leslie Doomsday argument, which leads to a strong probability shift in favor of “doom” (i.e. human extinction) occurring sooner rather than later.  Yet Leslie also believes that the force of the Doomsday argument is weakened by quantum indeterminacy.  Both of these beliefs – that the Doomsday argument is sound, and that if it is sound its conclusion would be weakened by quantum indeterminacy – are highly controversial.  For a critical assessment, see (Bostrom 2002a).

21. (Rees 2003)

22.(Posner 2004)

23.(Bostrom 2002b)

24.Some scenarios in which the human species goes extinct may not be existential disasters – for example, if by the time of the disappearance of Homo sapiens we have developed new forms of intelligent life that continues and expands on what we valued in old biological humanity.  Conversely, not all existential disasters involve extinction.  For example, a global tyranny, if it could never be overthrown and if it were sufficiently horrible, would constitute an existential disaster even if the human species continued to exist.

25.A recent popular article by Bill Joy has also done much to disseminate concern about extinction risks.  Joy’s article focuses on the risks from genetics, nanotechnology, and robotics (artificial intelligence); (Joy 2000).

26. (Drexler 1985).  Drexler is even more concerned about the potential misuse of tools based on advanced nanotechnology to control and oppress populations than he is about the possibility that nanotechnology weapons systems would be used to directly cause human extinction; (Drexler 2007), p. 57.

27.(Bostrom 2002b; Yudkowsky 2007)

28.(Freitas 1999)

29.(Bostrom 1998)

30.How much worse would an existential risk be than an event that merely killed 99% of all humans but allowed for eventual recovery?  The answer requires a theory of value.  See e.g. (Parfit 1984; Bostrom 2003a, 2007a).

31.(Carson 1962)

32.(Ehrlich 1968; Meadows and Club of Rome. 1972)

33. (Solomon et al. 2007), p. 749

34.Ibid, p. 750

35.(Stern and Great Britain Treasury 2006); for references to critiques thereof, see e.g. (Nordhaus 2007; Cox and Vadon 2007).

36.These numbers, which are of course approximate, are calculated from data presented in (De Long and Olney 2006); see also (De Long 1998).

37.(Gibbon and Kitchin 1777)

38.(Tainter 1988)

39.(Diamond 2005)

40.Ibid., p. 425.

41.(Smil 2006), p.  311. 

42. (United Nations Population Division 2004)

43.(Hanson 2000)

44.(Brin 1998)

45.(Bostrom 2005, 2007c)

46.(Pearce 2004)

47.(Pearce 2004)

48.(Bostrom and Ord 2006; Bostrom and Sandberg 2007)

49.Molecular nanotechnology (aka molecular manufacturing, or machine-phase nanotechnology) is one area where a considerable amount of “theoretically applied science” has been done, although this has not yet resulted in a consensus about the feasibility of this anticipated technology; see e.g. (Drexler 1992).

50.(Hanson 1995; Wolfers and Zitzewitz 2004)

51.See e.g. (Bostrom 2003b; Moravec 1999; Drexler 1985; Kurzweil 2005)

52.(Perrow 1984)

53.See e.g. (Sagan 2004).

54.(Lanier 2000)

55.(Burkhead 1999)

56.(Fukuyama 1992)

57.(Fukuyama 2002)

58.E.g. (Bostrom 2003b, 2007c)

59.“Singularity” is to be interpreted here not in its strict mathematical meaning but as suggesting extreme abruptness.  There is no claim that any of the quantities involved would become literally infinite or undefined.

60.(Ulam 1958)

61.(Good 1965)

62.(Vinge 1993)

63.I use the term “qualitative reproduction” advisedly, in order to sidestep the philosophical questions of whether the original mind could be quantitatively the same mind as the upload, and whether the uploaded person could survive the procedure and continue to live as an upload.  The relevance of uploading to the present argument does not depend on the answers to these questions.

64.(Hanson 1994).  Absent regulation, this would lead to a precipitous drop in wages.

65.The antecedent of the conditional (“if uploading is technologically feasible –”) includes, of course, assumptions of a metaphysical nature, such as the assumption that a computer could in principle manifest the same level of intelligence as a biological human brain.  However, in order to see that uploading would have wide-ranging practical ramifications, it is not necessary to assume that uploads would have qualia or subjective conscious experiences.  The question of upload qualia would be important, though, in assessing the meaning and value of scenarios in which a significant percentage of the population of intelligent beings are machine-based.

66.To say something more definite about the probability of a singularity, we would at this stage of the analysis have to settle on a more unambiguous definition of the term.

67.The distinction between quantitative and qualitative improvements may blur in this context.  When I suggest that qualitative changes might occur, I am not referring to a strict mathematical concept like Turing computability, but to a looser idea of an improvement in intelligence that is not aptly characterized as a mere speed-up.

68.(Kurzweil 2005)

69.Note that the expected arrival time of the singularity has receded at a rate of roughly one year per year.  Good, writing in 1965, expected it before 2000.  Vinge, writing in 1993, expected it before 2023.  Kurzweil, writing in 2005, expects it by 2045.

70.(De Long 1998)

71.(Smil 2006), p. 131

72. It is possible that if humanity goes extinct, another intelligent species might evolve on Earth to fill the vacancy.  The fate of such a possible future substitute species, however, would not strictly be part of the future of humanity.

73. I am grateful to Rebecca Roache for research assistance and to her and Nick Shackel for helpful comments on an earlier draft.

 

 

January 1, 2009

The Future of Man—How Will Evolution Change Humans?

Contrary to popular belief, humans continue to evolve. Our bodies and brains are not the same as our ancestors' were—or as our descendants' will be

By Peter Ward

When you ask for opinions about what future humans might look like, you typically get one of two answers. Some people trot out the old science-fiction vision of a big-brained human with a high forehead and higher intellect. Others say humans are no longer evolving physically—that technology has put an end to the brutal logic of natural selection and that evolution is now purely cultural.

The big-brain vision has no real scientific basis. The fossil record of skull sizes over the past several thousand generations shows that our days of rapid increase in brain size are long over. Accordingly, most scientists a few years ago would have taken the view that human physical evolution has ceased. But DNA techniques, which probe genomes both present and past, have unleashed a revolution in studying evolution; they tell a different story. Not only has Homo sapiens been doing some major genetic reshuffling since our species formed, but the rate of human evolution may, if anything, have increased. In common with other organisms, we underwent the most dramatic changes to our body shape when our species first appeared, but we continue to show genetically induced changes to our physiology and perhaps to our behavior as well. Until fairly recently in our history, human races in various parts of the world were becoming more rather than less distinct. Even today the conditions of modern life could be driving changes to genes for certain behavioral traits.

If giant brains are not in store for us, then what is? Will we become larger or smaller, smarter or dumber? How will the emergence of new diseases and the rise in global temperature shape us? Will a new human species arise one day? Or does the future evolution of humanity lie not within our genes but within our technology, as we augment our brains and bodies with silicon and steel? Are we but the builders of the next dominant intelligence on the earth—the machines?

The Far and Recent Past

Tracking human evolution used to be the province solely of paleontologists, those of us who study fossil bones from the ancient past. The human family, called the Hominidae, goes back at least seven million years to the appearance of a small proto-human called Sahelanthropus tchadensis.

Since then, our family has had a still disputed, but rather diverse, number of new species in it—as many as nine that we know of and others surely still hidden in the notoriously poor hominid fossil record. Because early human skeletons rarely made it into sedimentary rocks before they were scavenged, this estimate changes from year to year as new discoveries and new interpretations of past bones make their way into print [see “Once We Were Not Alone,” by Ian Tattersall; Scientific American , January 2000, and “ An Ancestor to Call Our Own ,” by Kate Wong; Scientific American , January 2003].

Each new species evolved when a small group of hominids somehow became separated from the larger population for many generations and then found itself in novel environmental conditions favoring a different set of adaptations. Cut off from kin, the small population went its own genetic route and eventually its members could no longer successfully reproduce with the parent population.

The fossil record tells us that the oldest member of our own species lived 195,000 years ago in what is now Ethiopia. From there it spread out across the globe. By 10,000 years ago modern humans had successfully colonized each of the continents save Antarctica, and adaptations to these many locales (among other evolutionary forces) led to what we loosely call races. Groups living in different places evidently retained just enough connections with one another to avoid evolving into separate species. With the globe fairly well covered, one might expect that the time for evolving was pretty much finished.

But that turns out not to be the case. In a study published a year ago Henry C. Harpending of the University of Utah, John Hawks of the University of Wisconsin–Madison and their colleagues analyzed data from the international haplotype map of the human genome [see “ Traces of a Distant Past ,” by Gary Stix; Scientific American, July 2008]. They focused on genetic markers in 270 people from four groups: Han Chinese, Japanese, Yoruba and northern Europeans. They found that at least 7 percent of human genes underwent evolution as recently as 5,000 years ago. Much of the change involved adaptations to particular environments, both natural and human-shaped. For example, few people in China and Africa can digest fresh milk into adulthood, whereas almost everyone in Sweden and Denmark can. This ability presumably arose as an adaptation to dairy farming.

Another study by Pardis C. Sabeti of Harvard University and her colleagues used huge data sets of genetic variation to look for signs of natural selection across the human genome. More than 300 regions on the genome showed evidence of recent changes that improved people’s chance of surviving and reproducing. Examples included resistance to one of Africa’s great scourges, the virus causing Lassa fever; partial resistance to other diseases, such as malaria, among some African populations; changes in skin pigmentation and development of hair follicles among Asians; and the evolution of lighter skin and blue eyes in northern Europe.

Harpending and Hawks’s team estimated that over the past 10,000 years humans have evolved as much as 100 times faster than at any other time since the split of the earliest hominid from the ancestors of modern chimpanzees. The team attributed the quickening pace to the variety of environments humans moved into and the changes in living conditions brought about by agriculture and cities. It was not farming per se or the changes in the landscape that conversion of wild habitat to tamed fields brought about but the often lethal combination of poor sanitation, novel diet and emerging diseases (from other humans as well as domesticated animals). Although some researchers have expressed reservations about these estimates, the basic point seems clear: humans are first-class evolvers.

Unnatural Selection

During the past century, our species’ circumstances have again changed. The geographic isolation of different groups has been broached by the ease of transportation and the dismantling of social barriers that once kept racial groups apart. Never before has the human gene pool had such widespread mixing of what were heretofore entirely separated local populations of our species. In fact, the mobility of humanity might be bringing about the homogenization of our species. At the same time, natural selection in our species is being thwarted by our technology and our medicines. In most parts of the globe, babies no longer die in large numbers. People with genetic damage that was once fatal now live and have children. Natural predators no longer affect the rules of survival.

Steve Jones of University College London has argued that human evolution has essentially ceased. At a Royal Society of Edinburgh debate in 2002 entitled “Is Evolution Over?” he said: “Things have simply stopped getting better, or worse, for our species. If you want to know what Utopia is like, just look around—this is it.” Jones suggested that, at least in the developed world, almost everyone has the opportunity to reach reproductive age, and the poor and rich have an equal chance of having children. Inherited disease resistance—say, to HIV—may still confer a survival advantage, but culture, rather than genetic inheritance, is now the deciding factor in whether people live or die. In short, evolution may now be memetic—involving ideas—rather than genetic [see “The Power of Memes,” by Susan Blackmore; Scientific American, October 2000].

Another point of view is that genetic evolution continues to occur even today, but in reverse. Certain characteristics of modern life may drive evolutionary change that does not make us fitter for survival—or that even makes us less fit. Innumerable college students have noticed one potential way that such “inadaptive” evolution could happen: they put off reproduction while many of their high school classmates who did not make the grade started having babies right away. If less intelligent parents have more kids, then intelligence is a Darwinian liability in today’s world, and average intelligence might evolve downward.

Such arguments have a long and contentious history. One of the many counterarguments is that human intelligence is made up of many different abilities encoded by a large number of genes. It thus has a low degree of heritability, the rate at which one generation passes the trait to the next. Natural selection acts only on heritable traits. Researchers actively debate just how heritable intelligence is [see “ The Search for Intelligence ,” by Carl Zimmer; Scientific American, October 2008], but they have found no sign that average intelligence is in fact decreasing.

Even if intelligence is not at risk, some scientists speculate that other, more heritable traits could be accumulating in the human species and that these traits are anything but good for us. For instance, behavior disorders such as Tourette’s syndrome and attention-deficit hyperactivity disorder (ADHD) may, unlike intelligence, be encoded by but a few genes, in which case their heritability could be very high. If these disorders increase one’s chance of having children, they could become ever more prevalent with each generation. David Comings, a specialist in these two diseases, has argued in scientific papers and a 1996 book that these conditions are more common than they used to be and that evolution might be one reason: women with these syndromes are less likely to attend college and thus tend to have more children than those who do not. But other researchers have brought forward serious concerns about Comings’s methodology. It is not clear whether the incidence of Tourette’s and ADHD is, in fact, increasing at all. Research into these areas is also made more difficult because of the perceived social stigma that many of these afflictions attach to their carriers.

Although these particular examples do not pass scientific muster, the basic line of reasoning is plausible. We tend to think of evolution as something involving structural modification, yet it can and does affect things invisible from the outside—behavior. Many people carry the genes making them susceptible to alcoholism, drug addiction and other problems. Most do not succumb, because genes are not destiny; their effect depends on our environment. But others do succumb, and their problems may affect whether they survive and how many children they have. These changes in fertility are enough for natural selection to act on. Much of humanity’s future evolution may involve new sets of behaviors that spread in response to changing social and environmental conditions. Of course, humans differ from other species in that we do not have to accept this Darwinian logic passively.

Directed Evolution

We have directed the evolution of so many animal and plant species. Why not direct our own? Why wait for natural selection to do the job when we can do it faster and in ways beneficial to ourselves? In the area of human behavior, for example, geneticists are tracking down the genetic components not just of problems and disorders but also of overall disposition and various aspects of sexuality and competitiveness, many of which may be at least partially heritable. Over time, elaborate screening for genetic makeup may become commonplace, and people will be offered drugs based on the results.

The next step will be to actually change people’s genes. That could conceivably be done in two ways: by changing genes in the relevant organ only (gene therapy) or by altering the entire genome of an individual (what is known as germ-line therapy). Researchers are still struggling with the limited goal of gene therapy to cure disease. But if they can ever pull off germ-line therapy, it will help not only the individual in question but also his or her children. The major obstacle to genetic engineering in humans will be the sheer complexity of the genome. Genes usually perform more than one function; conversely, functions are usually encoded by more than one gene. Because of this property, known as pleiotropy, tinkering with one gene can have unintended consequences.

Why try at all, then? The pressure to change genes will probably come from parents wanting to guarantee their child is a boy or a girl; to endow their children with beauty, intelligence, musical talent or a sweet nature; or to try to ensure that they are not helplessly disposed to become mean-spirited, depressed, hyperactive or even criminal. The motives are there, and they are very strong. Just as the push by parents to genetically enhance their children could be socially irresistible, so, too, would be an assault on human aging. Many recent studies suggest that aging is not so much a simple wearing down of body parts as it is a programmed decay, much of it genetically controlled. If so, the next century of genetic research could unlock numerous genes controlling many aspects of aging. Those genes could be manipulated.

Assuming that it does become practical to change our genes, how will that affect the future evolution of humanity? Probably a great deal. Suppose parents alter their unborn children to enhance their intelligence, looks and longevity. If the kids are as smart as they are long-lived—an IQ of 150 and a lifespan of 150 years—they could have more children and accumulate more wealth than the rest of us. Socially they will probably be drawn to others of their kind. With some kind of self-imposed geographic or social segregation, their genes might drift and eventually differentiate as a new species. One day, then, we will have it in our power to bring a new human species into this world. Whether we choose to follow such a path is for our descendants to decide.

The Borg Route

Even less predictable than our use of genetic manipulation is our manipulation of machines—or they of us. Is the ultimate evolution of our species one of symbiosis with machines, a human-machine synthesis? Many writers have predicted that we might link our bodies with robots or upload our minds into computers. In fact, we are already dependent on machines. As much as we build them to meet human needs, we have structured our own lives and behavior to meet theirs. As machines become ever more complex and interconnected, we will be forced to try to accommodate them. This view was starkly enunciated by George Dyson in his 1998 book Darwin among the Machines: “Everything that human beings are doing to make it easier to operate computer networks is at the same time, but for different reasons, making it easier for computer networks to operate human beings.... Darwinian evolution, in one of those paradoxes with which life abounds, may be a victim of its own success, unable to keep up with non-Darwinian processes that it has spawned.”

Our technological prowess threatens to swamp the old ways that evolution works. Consider two different views of the future taken from an essay in 2004 by evolutionary philosopher Nick Bostrom of the University of Oxford. On the optimistic side, he wrote: “The big picture shows an overarching trend towards increasing levels of complexity, knowledge, consciousness, and coordinated goal-directed organization, a trend which, not to put too fine a point on it, we may label ‘progress.’ What we shall call the Panglossian view maintains that this past record of success gives us good grounds for thinking that evolution (whether biological, memetic or technological) will continue to lead in desirable directions.”

Although the reference to “progress” surely causes the late evolutionary biologist Stephen Jay Gould to spin in his grave, the point can be made. As Gould argued, fossils, including those from our own ancestors, tell us that evolutionary change is not a continuous thing; rather it occurs in fits and starts, and it is certainly not “progressive” or directional. Organisms get smaller as well as larger. But evolution has indeed shown at least one vector: toward increasing complexity. Perhaps that is the fate of future human evolution: greater complexity through some combination of anatomy, physiology or behavior. If we continue to adapt (and undertake some deft planetary engineering), there is no genetic or evolutionary reason that we could not still be around to watch the sun die. Unlike aging, extinction does not appear to be genetically programmed into any species.

The darker side is all too familiar. Bostrom (who must be a very unsettled man) offered a vision of how uploading our brains into computers could spell our doom. Advanced artificial intelligence could encapsulate the various components of human cognition and reassemble those components into something that is no longer human—and that would render us obsolete. Bostrom predicted the following course of events: “Some human individuals upload and make many copies of themselves. Meanwhile, there is gradual progress in neuroscience and artificial intelligence, and eventually it becomes possible to isolate individual cognitive modules and connect them up to modules from other uploaded minds.... Modules that conform to a common standard would be better able to communicate and cooperate with other modules and would therefore be economically more productive, creating a pressure for standardization.... There might be no niche for mental architectures of a human kind.”

As if technological obsolescence were not disturbing enough, Bostrom concluded with an even more dreary possibility: if machine efficiency became the new measure of evolutionary fitness, much of what we regard as quintessentially human would be weeded out of our lineage. He wrote: “The extravagancies and fun that arguably give human life much of its meaning—humor, love, game-playing, art, sex, dancing, social conversation, philosophy, literature, scientific discovery, food and drink, friendship, parenting, sport—we have preferences and capabilities that make us engage in such activities, and these predispositions were adaptive in our species’ evolutionary past; but what ground do we have for being confident that these or similar activities will continue to be adaptive in the future? Perhaps what will maximize fitness in the future will be nothing but nonstop high-intensity drudgery, work of a drab and repetitive nature, aimed at improving the eighth decimal of some economic output measure.”

In short, humanity’s future could take one of several routes, assuming we do not go extinct:

Stasis. We largely stay as we are now, with minor tweaks, mainly as races merge.

Speciation. A new human species evolves on either this planet or another.

Symbiosis with machines. Integration of machines and human brains produces a collective intelligence that may or may not retain the qualities we now recognize as human.

Quo vadis Homo futuris?

Note: This article was originally printed with the title, "What Will Become of Homo Sapiens?"

Philosophy Now: a magazine of ideas

Question of the Month

What is the future of humanity? The following philosophical forecasts of our fate each win an unforeseeable book.

From the onset of the Industrial Revolution, human progress has been unprecedented in its sheer speed and scale. Anyone born before the mid-1980s, remembering the world before the internet, will surely appreciate technology’s power to uproot our lives. There is no doubt that advances in technology and automation will keep on transforming our lives. Soon the devices we use will respond to our voices, performing many routine chores as we talk with them. The testing of self-drive cars and of drones delivering packages has already reached an advanced state. The virtual world will become ever more developed and sophisticated, offering us yet more unimaginable ways to experience reality. Humans will in all probability make it to Mars before the end of this century; and afterwards leave our imprint further out in space. Meanwhile humanity’s dabbling with and control over nature will continue to know no bounds in the years to come, thereby helping societies more effectively combat illness, disease, infertility and ageing. But the most terrifying aspect of the future will be when the code of life is altered to suit the vanity and greed of humans, the ageing process is prolonged or postponed, and human mortality is eventually overcome. I think such developments could indeed spell the doom of humanity, as they spark an all-out war between the haves and have-nots. It cannot be denied that in all epochs of history we have continuously resorted to war and violence to solve our conflicts, and to the present day humanity has failed to organise societies truly capable of addressing the unequal distribution of resources. Meanwhile the systematic degradation that has been wrought on the natural environment in the name of progress still cries out for our care and attention. Above all, climate change remains the most pressing problem to be tackled on a global scale if the future of humanity is to be safeguarded. Nevertheless, I do hold some hope that humanity can be saved if an influential world movement recognises that the availability and sustainability of natural resources must be foremost in whatever economic philosophy is advocated; that unless the sharp inequalities in different regions of the world are truly addressed, the world will remain bedevilled by uncontrollable immigration, hatred and terrorism; and that unless humanity becomes consciously aware of the futility of war and violence, the path of self-destruction will continually be sought. Alas, the future of humanity can only be truly safe if humans accept that they are mortal beings and that happiness on this planet can only be achieved if the comfort and convenience bestowed on us by technological improvements is reconciled with meaningful and uncomplicated lives.

Ian Rizzo, Zabbar, Malta

Noam Chomsky has, on more than one occasion, pointed out that the two biggest threats that face humanity are global warming and nuclear war. Let’s entertain these two ideas briefly.

Nuclear war: Although some have speculated that nuclear weapons are impractical compared to the ever-advancing smart bombs, the devastation from fallout can quickly persuade a ruler or government to end a war and submit. In violent and warring minds that may be reason enough to want to retain them – see Theresa May’s and Donald Trump’s cavalier sanctioning of nuclear strikes. The consequences of such a strike, not to mention an all-out war, would be hellish: apart from untold deaths and injuries, there would be birth defects and ruined soil and crops for decades.

Climate change: The long-term effects of icecaps melting, of fracking, of beaches being eroded, and air and water pollution, are frightening. Equally as frightening are the unspoken effects animal agriculture is having: for example, the build-up in the oceans of waste from cattle farming (too much for plankton to break down fast enough) can create dead zones where no life exists; not to mention the land, water and food which livestock take in order to feed us a proportionally smaller amount. This creates much more scarcity in an already competitive and difficult-to-get-by-in world.

These scenarios, which seem increasingly hard to separate, unfortunately indicate a grim future for humanity of scarcity, war, nuclear fallout and environmental devastation. Although very bleak, there is always hope; and to recycle another cliché, the future is not set. Passivity on the part of those appalled by such potential futures only increases the chances of them coming about. Conscientious action is, as seems to be the norm nowadays, needed. While people may, rightly or otherwise, distrust their elected officials and the media, there are other people and groups that they can trust. A lesson taken from the revolutionary left, particularly the libertarian socialists (anarchists), would teach us that coming together and organising into groups to cause change can happen, and can succeed. Educate, agitate, organise!

Shane Mc Donnell, Navan, Co. Meath, Ireland

Based on fossils and archeological artifacts from around the world, modern humans have existed for about 200,000 years; but the roots of civilization only go back 20,000 years, to when we first began planting grain and building walls. These dates slide back and forth on history’s timeline depending on the viewpoint, but practically all sources agree that up until about sixty years ago, humanity’s footprint on the sands of time was for the most part biodegradable.

Today, the footprint of humanity has toxic radioactive waste all over it. The World Nuclear Association reported in 2016 that 450 nuclear reactors were generating electricity in thirty countries around the world. Incredibly, sixty new reactors are being built on the heels of Fukushima!

It is chilling to think that between 1962 and 1983, the world faced nuclear annihilation more than once, when the only thing between humanity and devastation was a red button under a human thumb! An age-old question here begs an answer: Is humanity an experiment gone badly wrong?

The early Greek philosopher Anaximander theorized that all things are generated from, and returned to, an endless creative source that he called ‘the Boundless’. In more recent times Carl Jung fleshed out Anaximander’s idea somewhat with his theory of the Collective Unconscious. Jung believed that this is the collective mostly-forgotten memory of our personal relationship with a higher authority. His philosophy was that in the final analysis nothing is as important as the life of the individual, whose hidden resources ultimately transform the world. Jung wrote: “In our most private and subjective lives we are not only the passive witness of our age, and its sufferers, but also its makers. We make our own epoch.”

Ancient devastations such as a globally-remembered great flood were believed to be acts of God or gods which humanity barely survived. Perhaps humanity’s future has always rested on the shoulders of extraordinary individuals, who manage to keep us afloat during the darkest of times. God willing, such an individual will come along to show future generations how to render radioactive waste inert, or gift them with the formula for cold fusion. In the meantime, it wouldn’t hurt to show Mother Nature a little respect and quit living like there’s no tomorrow.

Connie Koehler, Austin, Texas

Let’s look at our future in terms of two adaptive strategies in the evolutionary process: competition and cooperation.

We start with single cell organisms, which become multicelled ones. They develop diffuse nervous systems. These in turn organize into central nervous systems that serve the basic needs of complex organisms. Eventually, these blossom into the frontal cortex that allows the higher cognitive functions that land us here trying to answer the big questions.

This trajectory has left us with two often-conflicting modes of negotiating an environment filled with other organisms. The competitive mode involves our baser impulses utilizing our cognitive functions strictly for the sake of our baser impulses. We can see here the brutal world described by Hobbes and Ayn Rand. By contrast, the cooperative mode sees its interest in a trajectory from inward self-interest out to the interest of others. Here, we see the less brutal world of Marx or Rawls. Consequently, we find ourselves at an important evolutionary crossroads. Do we stick with the competitive instinct which has, via capitalism, got us to this point, and risk, at best, subjecting ourselves to a global oligarchy, the dismantling of our democracies, and the depletion of our natural resources: or, worse, our extinction as a species through manmade climate change and war? Or do we turn to the next evolutionary step, and evolve? Do we become better than market economics tells us we are?

I’m not optimistic, not only because of the growing influence of the right in America and other advanced nations, but because of the sensibility of the voters perpetrating this. As a progressive in the American Midwest, in last year’s election I enjoyed a front row seat for watching otherwise decent and intelligent people succumb to dogma, sensationalism, and misinformation – a complete lack of critical inquiry supplanted by fancy – as can be seen in political campaigns that resembled some Quentin Tarantino revenge fantasy. But this only makes sense as an evolutionary backlash in which our higher cognitive functions act strictly in behalf of baser impulses and immediate self-interest.

Still, we can hope. And sometimes the only way out is through. Perhaps the current evolutionary political backlash, by demonstrating in very real terms the actual consequences of competition, is what we’ll need to put it behind us and truly evolve.

D. Tarkington, Bellevue, Nebraska

The future of humanity is speculative, and so I’ll apprehend it more with hope than knowledge. Our first two hopes are that we do not annihilate our species with global biological or nuclear warfare, and that we do not destroy our planet. If we assume that we will avoid those futures, then we can expect that science and technology will advance and provide us with many blessings, and some dangers. But I think the cardinal question about our future is, “What kind of government will we have?” This is because we are political animals, as Aristotle famously said. We are part nature and part nurture, and the latter is shaped by the society we happen to be raised in, which in turn is determined by the nature of our government. Thus, our future will be largely a function of our future society and government.

About this we can expect increased globalization and commingling of peoples until, perhaps in a few millennia, we are one people with one language and a complex global federal government. Perhaps there will be an end to war, and other benefits. However, in federations, the superior government tends to accumulate power by diminishing that of subordinate governments. Power corrupts proportionately, and this presents us with the specter of a dystopian society.

Trends in history strongly indicate two possible primary developments: freedom or slavery. Many see in history an increase in individual freedom; but clearly there also has been an increase in state power. The source of the former lies in the hopes and aspirations of individuals. The source of the latter lies in the fact that the power of the elite naturally enlarges itself.

Freedom or slavery: which will it be? That is, what will be the balance of individual freedom and self-determination versus state control and state determination of what humanity is? It depends on the nature of the over-arching supergovernment; specifically, on who will rule the rulers: the people, or an established elite? A global government may be a Frankenstein’s monster we cannot control. But then, we are an amazingly adaptable species.

There are too many variables to speculate about the future fruitfully. We can only hope it will be a future of liberty.

John Talley, Rutherfordton, North Carolina

In the future, humanity will still ponder the concept of death and its meaning, but perhaps with an additional clause: the fear of our private digital minds left behind. Digital footprints, the memorial grooves in the wax, the living binary representation of lives typed, clicked, or swiped by our physical hands, our handiwork floating in the digital ether forever. It is not hard to imagine that, with some advances in technology, a digital self, realised through holograms or media such as virtual reality, could provide representations of our persona after death. A digital likeness filled with the essence of you, the ‘ghost in the machine’. In other words: I think, therefore I am your entire life’s browser history. A collection of algorithms, from preferred GPS haunts and online shopping preferences to your late-night searches, all composed and collated to represent the embodied holographic you after death. Sartre’s ‘existence precedes essence’ made all the more relevant: the digital essence of your earthly existence left behind.

In the future, after your funeral, relatives shall be able to buy such a holographic essence. A grieving partner comforted by a more-than-passable intuitive Turing system finely tuned to represent you. Perhaps also the curiosity of grandchildren, wishing to know who their grandparents really were, reanimated in the holographic flesh. Indeed, you could even give your own narcissistic eulogy, the voice from beyond the grave. In every instance, a visual binary essence that can speak, listen, gesture, reason, appear to show emotion, and bring meaning to those still in life. Unfortunately, unbeknown to your internet provider, you also shared a flat with Dave, who had a penchant for the darker side of the web. Additionally, on your daily commute, roadwork traffic lights had an uncanny knack of holding you just outside a Ku Klux Klan hall. All this information impartially collected and collated, unfairly representing the essence of you. The repercussions aren’t hard to predict: loving relatives shocked to find you had a secret life, one that included nefarious activities and racist tendencies. In such a technological future, every word typed and every destination you travelled to would take on an uncontrolled limbo existence. The fear of death may be relegated to second place by the anxiety of judgements passed on an eternal digital future you.

John Scotland, Kilsyth, North Lanarkshire

In the future, corporations and governments will create a variety of virtual worlds, in which all humans will eventually choose to live. Most will choose to live in simulations of the twenty-first century, because life was much better back then. Of course, these humans will not remember that their world is virtual. Some philosophers and scientists in these virtual worlds will present skeptical arguments about the existence of a real external world, but most people won’t take these arguments seriously. Some of the skeptics will argue that empirical observations are consistent with their world being a simulation. However, most people won’t care, because the virtual world feels so real and people value the useful, not the true. Philosophers will also present interesting arguments about how human minds could never, in principle, fully grasp higher dimensions, just as two-dimensional minds could never know there’s a bird flying above them, because there is no ‘above’ for such minds. Although a two-dimensional mind could use math to infer that there is a higher dimension with some sort of entity casting the observable light-and-dark patterns, that mind could never see or even imagine it. Still others will sometimes believe their world is virtual because they ate a special mushroom, had a mystical experience, or simply because they momentarily trusted their intuition. Most of these people will be virtually locked up. Some geniuses will argue that it is likely that we are living in a virtual world: if the universe is as big as we think, and advanced people create virtual worlds, then there are many virtual worlds and only one reality, so it is more likely that the future world is virtual. But wait: the future is here.

Paul Stearns, Blinn College, Texas

The organic and inorganic will become less distinct. Bioengineers will create living cells capable of performing simple ‘Turing functions’ (programmable tasks), and on this basis organic computers will transform humanity. Almost certainly, organs will be artificially produced, thus extending human life; and with the tweaking of genes we could end up living almost indefinitely. Cancer, AIDS and other fatal diseases will be eradicated, as smallpox was in the late 1970s. Unfortunately, new and deadlier diseases (such as Zika) will spring up and become lethal weapons. Disease, famine, war and terrorism will turn cities into savage ghettoes run by marauding gangs. Humans will be microchipped from birth and monitored by surveillance satellites. ‘Genetically compromised’ individuals will be sterilised, leading to mass sterilisation programmes. Only the healthy super-rich will be able to afford to live in biodomes with pollution-free air and Eden-like forests and gardens. The rest will be forced to “defend themselves against the ever-present menace of barbaric, atavistic and reactionary forces” (Winston Churchill, quoted in Niall Ferguson, Civilization, p.297, 2012).

Fortunately, the philanthropic wealthy will continue to repair the damage wreaked against nature since the start of the Industrial Revolution. Humanity’s goal must therefore be to diminish our ‘inner animal’ in favour of the power of reason, thereby becoming truly human – Homo sapiens victorens! “The future of humanity must gaze harder upon… looking within.” (Buddha, in Dogen’s Shobo Genzo, p.47, 2012).

Aaron V. Adosa, Swansea

There will only be two types of human beings in the future: the minority having enormous brains and tiny bodies, and the majority with tiny brains and muscular bodies. The size of the average brain will gradually diminish, not because of our innate laziness, but because of our over-concern with our physical appearance. In the old days, most people dreamt of having shelter and a stable food supply. As we no longer struggle for the basic necessities, our dreams focus instead on the search for physical beauty – how to obtain and maintain the ‘ideal body shape’ and healthy life the media promotes. Physical beauty will become the main goal of the majority. They’ll exercise every day, taking nutrients to maintain their shape, while not noticing that their brains are shrinking. Indeed, there is no doubt that they’ll work extremely hard to make their brains smaller. Unfortunately, both the majority and the minority will enter states of extreme depression and show hatred towards the other group. Many who cannot categorize themselves as either the majority or the minority will eventually commit suicide, as the pressure from both extremes will be overwhelming.

Science has caused the separation of intelligence and health. The misinterpretation or over-interpretation of health and evolutionary facts by the public is causing the decay of intelligence and the increase in concern about physical beauty; in fact we are just eliminating ourselves.

Cyrus Aegean Lamprecht, Hong Kong

What is the future of humanity? Answer: Extinction within a few thousand years. Mother Nature, God, or the blind forces of evolution (take your pick) has arranged it so that we higher animals reproduce by engaging in sex for pleasure, with babies as a by-product. However, human ingenuity in creating contraceptives has cut the link between the pleasure and the babies, and so in the wealthy parts of the world the replication rate has fallen below the 2.1 per couple necessary to maintain a stable population. And the world is getting steadily wealthier. So it is a fairly modest assumption that a hundred years from now the planetary human population will have peaked at ten billion, but most of them will be as wealthy as today’s average in the West. It is also plausible that sexbots will be widely available, be far more beautiful than most real women or men, and be far better at giving pleasure than another human. So, finally, it is plausible that the average reproduction rate will then become 1.5 or less. The rest is arithmetic. Dividing 1.5 by 2 gives a reproduction rate per person of 0.75, and taking this rate to the power of 30 gives a value less than 0.0002. So, thirty generations later, or about a thousand years from now, the world population will be about two million. This will ensure civilizational collapse. But I expect the sexbots will still be there – a few thousand per person. So another few thousand years will see us all gone.
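
A quick check of the arithmetic, taking the letter’s own assumptions at face value (a peak population of ten billion, 1.5 births per couple, and thirty generations of roughly thirty-three years each):

1.5 ÷ 2 = 0.75 (reproduction rate per person per generation)
0.75^30 ≈ 0.00018 (indeed less than 0.0002)
0.00018 × 10,000,000,000 ≈ 1,800,000 people after about a thousand years

which is, as claimed, “about two million”. Whether the assumptions themselves hold is, of course, another matter.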

The only obstruction to this that I can see is religion imposing a sexbot ban. The Roman Catholic Church has had indifferent success in similar sexual bans; the Muslims might do a little better. But it seems unlikely that a world populated by only a few million religious believers would survive for long; and all the more intelligent and creative people will have experienced a blissful death long ago.

John Lawless, Crawley, Western Australia

In the opening chapter of The Napoleon of Notting Hill, G.K. Chesterton introduced us to the traditional game of ‘Cheat the Prophet’. This is played when, extrapolating from current trends, a wise man (sic) predicts how we will live in the future. He’s listened to respectfully; and, once he is dead and buried, humanity does something totally other than he predicted.

Towards the end of his life, Karl Marx said that he was not a Marxist. I believe that what he meant was that he did not join in with his followers’ confident Marxist predictions. That is, he believed that his philosophy could explain the historical processes which had led to his contemporary situation, explain current trends, even exhort humanity how to respond to them; but his theories could not determine or predict the future. Despite this, twentieth-century prophets such as Leon Trotsky, H.G. Wells, and Francis Fukuyama have asserted that they know where humanity is going; and humanity has duly responded by going in a different direction entirely, or, when feeling particularly bloody-minded, in several different directions. We have difficulty enough in understanding the past: the future is unknowable. The only safe prediction is that every prediction about the future of humanity is almost certain to be wrong (and, to paraphrase Einstein, I’m not sure about the ‘almost’).

Martin Jenkins, London

Niels Bohr supposedly said that prediction is very difficult, especially about the future. Yet a spacecraft’s path is predictable to extraordinary precision, and it must be, because by the time it gets anywhere interesting, the right moment to correct its trajectory has long passed. Then there’s the long-term cyclic reliability of the Sun, Moon, and the planets. The future of details is difficult to predict, but if the details average out, then, barring the odd black swan, the future is predictable to a degree. In the 1950s, Isaac Asimov invented ‘psychohistory’, the statistical extrapolation of significant future events from society’s present state. However, if some unforeseeable details grow to dominate, even the broad shape of the future becomes uncertain. This is likely where many actors and forces interact, as they do in human reality. Self-reinforcing cycles can form. Thus predicting the near future is a little like forecasting the weather. So if we cannot forecast humanity’s ‘weather’, can we at least forecast its ‘climate’?

Today the world is more peaceful, better educated (particularly women) and proportionally less affected by extreme poverty than ever before. With these trends, population will level off at around ten billion, and apart from in a few wretched countries, the prospects for a democratic near-future are favourable. However, democracy relies on rising expectations being fulfilled through economic growth; and today there is a collision course between greening technology and population growth, rising emissions, and diminishing resources. A good outcome depends on cutting personal consumption and the conventional industrial employment that leads to the growing gap between richest and poorest. However, denying expectations is unpopular, and confounding them risks the instability of political reaction. The costs are so high that governments may yet seek ways to distribute wealth more evenly, even if they won’t yet admit it. Barring a world epidemic – more likely given ease of travel – or a climate or other catastrophe, population will fall gradually through elective non-replacement rather than as a result of collective action. The environment will improve, but nature may still be diminished unless people build greener cities. Earth is special, and exploration of other planetary systems will yield many wonders, but few habitats. Apart from on Mars, any colonies will be too far away to interact with Earth. Ultimately, human progress can carry life throughout the universe, but as we suppress our evolutionary pressures, this life may not be us.

Dr Nicholas B. Taylor, Little Sandhurst
