The future seems to arrive at an ever faster pace. It is as if humanity collectively ordered express delivery robots with advanced artificial intelligence to deliver a ticket to space. But we didn’t sit down to order this package, did we? And will there be enough tickets for everyone? This is where the philosophy of transhumanism enters, because it grapples with precisely these questions about the future. What is the best way to develop humanity with growing access to new gadgets, robots, and space? Some think we should colonize space; others think a communications device implanted beneath the skin is the way to go. Maybe you want your consciousness uploaded to the cloud? Or perhaps you want to live forever?
Transhumanism: A Historical Perspective
In A History of Transhumanist Thought, the Swedish philosopher Nick Bostrom outlines the history of transhumanism in a straightforward and well-argued manner. Bostrom locates the first notion of extending human life as far back as the Epic of Gilgamesh (1700 B.C.). Transhumanism proper, according to Bostrom, began with the publication of Darwin’s Origin of Species (1859) and Nietzsche’s conception of the Übermensch in Thus Spoke Zarathustra (1883).
Both these books are important because they form the beginning of the philosophical foundation of transhumanism. Each suggests a novel understanding of humanity, one that does not see our current stage as “the endpoint of evolution but rather as a possibly quite early phase” (Bostrom, 2005). For Bostrom and other transhumanists, this means that the phase humanity is currently in could potentially (and perhaps should) be transcended. Transcendence (the term from which transhumanism takes its prefix), or overcoming our present physical, mental, economic, or technological limitations, is a vital aim for most transhumanists. Let us turn to the development of transhumanism as a philosophical position.
In the years after WWII, the genre of science fiction developed and caught the imagination of the public. Writers like Karel Čapek, Isaac Asimov, Stanisław Lem, and Arthur C. Clarke became important figures whose visions shaped an entire generation’s conception of the future. One of the hallmarks of early science fiction was a techno-optimism that envisioned unlimited possibilities for human development and prosperity. However, contemporary science fiction series like Battlestar Galactica, Elysium, or The Expanse have taken a more dystopian perspective on humanity’s future.
This shift from techno-optimism to techno-scepticism or pessimism seems to mirror a growing realization of the fragile nature of humanity. The fact that a single asteroid could potentially exterminate humankind is, if not a stark reminder of our mortality, at least a wake-up call. These realizations have acted as precisely such a wake-up call for transhumanism, helping it develop into a philosophy aimed at saving humanity from impending destruction. Books, TV shows, films, and plays have been created as thought experiments for examining possible future scenarios for humanity’s development.
Two events were central to the intellectual development of transhumanism as a philosophical position. The first was the publication of Are You a Transhuman? (1989) by FM-2030 (formerly known as F.M. Esfandiary). This book played a huge role in making the theory a proper philosophical position, bringing it out of the darkened bedrooms of geeks and into the world of academia.
The second event, the founding of the World Transhumanist Association in 1998, helped shape the transhumanist program into its contemporary form. Transhumanism became a political agenda that seeks, on the one hand, the betterment of humanity and, on the other, its survival. This began when transhumanism attracted attention by drawing in philanthropists such as Elon Musk, making it relevant for popular news outlets to engage with the philosophy. A political party has even been founded in the US that espouses a fundamentally transhumanist position.
What is the Philosophy of Transhumanism?
In general, transhumanism is often described as an umbrella term for technology-optimistic philosophies concerned with these three general characteristics:
- The survival of humanity and safeguarding humanity against extinction
- Enhancing our abilities to make humans better
- Overcoming our limitations to improve humankind beyond its current capacities.
More often than not, these concerns overlap because any new solution to one impacts another.
Let’s begin by looking at the first characteristic: in our day and age, the threats humanity faces have shifted. Before, we feared that the Cold War would end in a nuclear holocaust. Today, our concerns surround viruses, climate change, AI, or the possibility of asteroids colliding with Earth. This leads to the question of whether or not we should fear the future.
The second characteristic focuses on improving humanity’s inherent abilities and capabilities to make humanity and society better. Philosophers tend to ask: do we need to be better to live a good life?
The third and final characteristic is overcoming natural limits, such as perceiving light only from a limited range of wavelengths, our limited strength and intellectual capacities, and even our own death. The question this raises: is immortality as great as it appears?
Why Transhumanism?
Transhumanists come in many shapes and sizes. Some, like Nick Bostrom and Elon Musk, worry about the survival of humanity and about the dangers posed by asteroids or artificial intelligence. Others wish to experiment with the human body to enhance its capabilities or to extend life indefinitely. With such wide-ranging interests, it can be hard to describe transhumanists as a uniform group.
Most transhumanists do, however, agree with the Transhumanist Declaration published by the non-profit HumanityPlus. The points of agreement that most transhumanists share could be paraphrased as follows:
- Overcoming mental or physical barriers
- Humanity’s real potential is still undeveloped
- The risks posed to humanity by technology, which could bring about extinction or unwelcome effects, mean that society must develop ways to limit those technologies
- Research should aim at discussing the risks and finding solutions to them
- Existential risks and mitigation of suffering are urgent priorities for humanity
- A moral vision for the future, enlightened by future opportunities and risks, should be a global guiding light
- All present and future sentient beings, humans, animals and artificial intelligence alike, should have their well-being taken care of
- Personal choice and the right to live life as one wishes are central tenets of transhumanism.
Who are the Transhumanists?
Based on these eight points, we see that transhumanists share a common concern. They wish to make the future world as pleasant as possible, not only for humans but also for animals and for artificial intelligence. Bostrom, FM-2030 (Esfandiary) and other transhumanist thinkers such as Anders Sandberg, Max More, and Natasha Vita-More are only a few of the original signatories of the Transhumanist Declaration.
Today, many philanthropists have been convinced of the transhumanist cause. Elon Musk’s goal of colonizing Mars could easily be conceived of as a transhumanist endeavour to make humanity a multi-planetary species. Both Musk’s and Bill Gates’ fear of artificial intelligence has spurred research into what Nick Bostrom and Stuart Russell have called the control problem. The problem can be boiled down to the question of how to stop a super-intelligent AI from turning humanity into a cog in its wheel. Think of the movie The Matrix (1999) and you get a sense of how such a future might look.
The Survival of Humanity
When it comes to asteroids hitting Earth, one potential threat to the planet, people at NASA’s Center for Near-Earth Object Studies (CNEOS) in California are hard at work. Prominent transhumanists such as Elon Musk and Nick Bostrom are outspoken advocates of the view that humanity must seek to reduce its existential risks. The transhumanist case for colonizing Mars rests on exactly this kind of reasoning: the chance of asteroids striking both Mars and Earth at the same time is far smaller than the chance of one striking Earth alone, so if humanity inhabits both planets, the risk of extinction shrinks.
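To make the arithmetic behind this argument concrete, here is a minimal sketch in Python. The impact probability below is a made-up, purely illustrative assumption, not a figure from CNEOS or from any transhumanist source; the point is only that, for independent events, the probabilities multiply.

```python
# Illustrative sketch of the "two planets" argument.
# Assumption (hypothetical, for illustration only): the chance of a
# civilization-ending impact on a single inhabited planet over some
# period is 1 in 1,000,000, and impacts on Earth and Mars are
# independent events.
p_impact = 1 / 1_000_000

# With humanity on Earth alone, one impact suffices for extinction.
p_extinction_one_planet = p_impact

# With humanity on both Earth and Mars, extinction requires impacts
# on both planets in the same period, so the probabilities multiply.
p_extinction_two_planets = p_impact * p_impact

print(f"One planet:  {p_extinction_one_planet:.0e}")   # 1e-06
print(f"Two planets: {p_extinction_two_planets:.0e}")  # 1e-12
```

Whatever the real numbers are, the structure of the argument stays the same: spreading humanity across two independent habitats makes simultaneous loss of both far less likely than the loss of one.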
The philosophy of transhumanism can be seen as preparation for human extinction — a sort of survivalist philosophy on a global scale. Instead of preparing to save one’s immediate family, transhumanists want to safeguard the human race.
Enhancing Our Abilities
When not looking to secure humanity’s survival, transhumanists often look towards enhancements to improve human capabilities. Some prefer to use herbal or chemical supplements to boost their mental abilities. Others install electronic devices in their bodies or experiment with modifying their genes outside of the restrictions imposed by governments. Many of these ‘treatments’ are considered controversial, while others might be illegal depending on the country. Lumped together, all of those engaging in these practices share a common goal: to enhance humanity’s capacity to withstand the tests of time and space.
Transhumanism: Overcoming Our Limitations
Another goal many transhumanists aim for is overcoming natural limitations. Longevity, or the prolongation of the human lifespan, is widely pursued among transhumanists, with immortality as the ultimate aim. This aim is the subject of philosophical criticism because death, within certain philosophical traditions, is considered a necessary condition of life. According to transhumanism, however, such thinking is outdated given the advances in technology and medicine for keeping the human body alive.
I will now attempt to sum up the philosophy of transhumanism in a single sentence. Transhumanism is a philosophy that seeks to make human life better by advancing human capabilities and survivability. It is, however, essential to keep in mind that transhumanism’s fixes could potentially create further inequalities between those who have enhancements and those who do not.