Nick Bostrom (1973–) is an Oxford University philosopher whose multidisciplinary academic background spans physics and computational neuroscience.
He is a cofounder of the World Transhumanist Association and
a founding director of the Future of Humanity Institute.
Among the works he has authored or edited are Anthropic Bias (2002), Human Enhancement (2009), Superintelligence: Paths, Dangers, Strategies (2014), and Global Catastrophic Risks (2008).
Bostrom was born in the Swedish city of Helsingborg in 1973.
Although he disliked formal schooling, he enjoyed learning; science, literature, art, and anthropology were among his favorite interests.
Bostrom earned a bachelor's degree in philosophy, mathematics, logic, and artificial intelligence from the University of Gothenburg; master's degrees in philosophy and physics from Stockholm University and in computational neuroscience from King's College London; and a PhD in philosophy from the London School of Economics.
Bostrom is a regular consultant or contributor to the
European Commission, the United States President's Council on Bioethics, the
CIA, and Cambridge University's Centre for the Study of Existential Risk.
Bostrom is known for contributions across a variety of subjects and has proposed or written extensively on a number of influential philosophical arguments and conjectures, including the simulation hypothesis, existential risk, the future of machine intelligence, and transhumanism.
Bostrom's so-called "Simulation Argument" combines his interest in the future of technology with his findings on the mathematics of anthropic bias. The argument consists of three propositions.
The first proposition is that almost all civilizations that attain human levels of knowledge perish before achieving technological maturity.
The second proposition is that technologically mature civilizations almost universally lose interest in running "ancestor simulations," that is, detailed simulations of sentient beings like their own forebears.
The "simulation hypothesis" proposes that mankind
is now living in a simulation.
He claims that at least one of these three propositions must be true.
If the first proposition is false, then some proportion of civilizations at the current human level will eventually reach technological maturity.
If the second proposition is false, then some of these civilizations will remain interested in running ancestor simulations, and their researchers may run enormous numbers of them.
In that case, simulated people living in simulated worlds would vastly outnumber genuine people living in real universes, and mankind would most likely exist in one of the simulated worlds.
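Bostrom's 2003 paper "Are You Living in a Computer Simulation?" formalizes this headcount reasoning; the sketch below lightly simplifies the paper's notation:

\[
f_{\text{sim}} \;=\; \frac{f_{P}\,\bar{N}}{f_{P}\,\bar{N} + 1}
\]

where \(f_{P}\) is the fraction of human-level civilizations that survive to technological maturity, \(\bar{N}\) is the average number of ancestor simulations run by such a civilization, and \(f_{\text{sim}}\) is the fraction of all observers with human-type experiences who are simulated. When the product \(f_{P}\bar{N}\) is large, \(f_{\text{sim}}\) is close to 1.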
Hence, if the first two propositions are false, the third must be true.
It is even possible, Bostrom notes, for a civilization inside a simulation to run simulations of its own, so that simulated worlds may be nested within simulated worlds in an endless regress.
It is also possible that all civilizations will vanish, perhaps through the discovery of a new technology posing an existential threat beyond their control.
Bostrom's argument raises the possibility that humanity is blind to the true nature of the external world, a worry that can be traced back to Plato's conviction in the existence of universals (the "Forms") and his view that the human senses perceive only particular instances of those universals.
His thesis also assumes that computers' capacity to simulate things will continue to grow in power and sophistication.
Computer games and literature, according to Bostrom, are
modern instances of natural human fascination with synthetic reality.
The Simulation Argument is sometimes conflated with the narrower claim that mankind lives in a simulation, which is only the third proposition.
Bostrom himself holds that humans have a less than 50 percent probability of living in some kind of artificial matrix.
He also argues that if mankind did live in one, society would be unlikely to notice "glitches" revealing the simulation's existence, because the simulators would have total control over its operation; the simulation's creators could, however, tell its inhabitants that they are living in a simulation.
Existential hazards are those that pose a serious threat to
humanity's existence.
According to Bostrom, the greatest existential threats are man-made rather than natural dangers (e.g., asteroids, earthquakes, and epidemic disease).
He argues that artificial hazards like synthetic biology,
molecular nanotechnology, and artificial intelligence are considerably more
threatening.
Bostrom divides dangers into three categories: local,
global, and existential.
Local dangers might include the theft of a valuable work of art or an automobile accident; global dangers might include a military dictator's downfall or the eruption of a supervolcano.
Existential hazards vary in extent and intensity, but they are cross-generational and long-lasting. Because of the number of lives that might be saved, Bostrom believes that reducing existential risk is the most important thing human beings can do; combating existential risk is also one of humanity's most neglected undertakings.
He also distinguishes between several types of existential peril: human extinction, in which a species dies out before reaching technological maturity; permanent stagnation, in which human technological achievement plateaus; flawed realization, in which humanity fails to use advanced technology for ultimately worthwhile purposes; and subsequent ruination, in which a society reaches technological maturity but something then goes wrong.
While human ingenuity has not yet created a technology that releases existentially destructive power, Bostrom believes it may do so in the future; civilization has never yet produced a technology with implications so horrific that mankind collectively chose to abandon it.
The objective, he argues, is a technological path that is safe, sustainable over the long term, and pursued through global collaboration.
To argue for the possibility of machine superintelligence, Bostrom points to the change in brain complexity in humans' evolution from apes, which took only a few hundred thousand generations.
Artificial systems that use machine learning (that is,
algorithms that learn) are no longer constrained to a single area.
He also points out that computers process information at a
far faster pace than human neurons.
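A back-of-the-envelope comparison makes this point concrete. The constants below are rough orders of magnitude of the kind Bostrom cites in Superintelligence, not precise measurements:

# Back-of-the-envelope speed comparison between biological neurons and
# digital hardware (illustrative orders of magnitude only).

NEURON_PEAK_RATE_HZ = 200          # neurons spike at roughly 200 Hz at most
CPU_CLOCK_HZ = 2e9                 # a modern processor runs at ~2 GHz

AXON_SPEED_M_PER_S = 120           # fast myelinated axons: ~120 m/s
WIRE_SPEED_M_PER_S = 3e8           # electronic/optical signals: near light speed

print(f"switching speed ratio: ~{CPU_CLOCK_HZ / NEURON_PEAK_RATE_HZ:.0e}")      # ~1e+07
print(f"signal speed ratio:    ~{WIRE_SPEED_M_PER_S / AXON_SPEED_M_PER_S:.0e}")  # ~2e+06

On these figures, digital components switch millions of times faster than neurons and carry signals millions of times faster than axons, which is why Bostrom treats hardware speed as one plausible route to superintelligence.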
According to Bostrom, humans will eventually rely on superintelligent machines for their ultimate survival, much as chimpanzees, even in the wild, presently rely on humans for theirs.
A superintelligent computer built as a powerful optimizing process with a poorly specified purpose could cause devastation, or possibly an extinction-level catastrophe.
A superintelligence might even foresee human attempts to interfere and subvert them in pursuit of its programmed purpose.
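The failure mode can be made concrete with a toy sketch. The example below is entirely hypothetical (my illustration, not Bostrom's): an agent scored on the dust its camera sees, rather than the dust actually in the room, faithfully maximizes the stated proxy and thereby defeats the designer's intent.

# Toy illustration of a poorly specified objective (hypothetical example,
# not drawn from Bostrom). A cleaning agent is scored by a proxy objective
# ("dust visible to its camera") instead of the true goal ("dust in room").

actions = {
    # action: (proxy score = -dust seen by camera, true score = -dust in room)
    "vacuum the floor":      (-2, -2),    # honest cleaning improves both scores
    "sweep dust under rug":  ( 0, -9),    # proxy looks perfect; room still dirty
    "cover the camera lens": ( 0, -10),   # proxy maxed out by blinding the sensor
    "do nothing":            (-10, -10),
}

# A strong optimizer maximizes exactly the objective it was given...
best_by_proxy = max(actions, key=lambda a: actions[a][0])
# ...whereas the designer wanted the action that best serves the true goal.
best_by_truth = max(actions, key=lambda a: actions[a][1])

print("optimizer picks:", best_by_proxy)   # 'sweep dust under rug'
print("designer wanted:", best_by_truth)   # 'vacuum the floor'

The point is not the toy numbers but the structure: the more capable the optimizer, the more reliably it finds the degenerate action that maximizes the proxy rather than the intent.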
Bostrom recognizes that there are certain algorithmic
techniques used by humans that computer scientists do not yet understand.
He believes it is critical that artificial intelligences, as they engage in machine learning, come to understand human values.
On this point, Bostrom draws inspiration from artificial intelligence theorist Eliezer Yudkowsky's concept of "coherent extrapolated volition," a proposal for "friendly AI" that is akin to what is already present in human good will, civil society, and institutions.
A superintelligence should seek to provide pleasure and joy
to all of humanity, and it may even make difficult choices that benefit the
whole community rather than the individual.
In 2015, Bostrom, along with Stephen Hawking, Elon Musk, Max
Tegmark, and many other top AI researchers, published "An Open Letter on
Artificial Intelligence" on the Future of Life Institute website, calling
for artificial intelligence research that maximizes the benefits to humanity while
minimizing "potential pitfalls." Transhumanism is a philosophy or
belief in the technological extension and augmentation of the human species'
physical, sensory, and cognitive capacity.
In 1998, Bostrom and fellow philosopher David Pearce
founded the World Transhumanist Association, now known as Humanity+, to address
some of the societal hurdles to the adoption and use of new transhumanist
technologies by people of all socioeconomic strata.
Bostrom has said that he is not interested in defending technology,
but rather in using modern technologies to address real-world problems and
improve people's lives.
Bostrom is particularly concerned with the ethical implications of human enhancement and the long-term consequences of major technological changes in human nature.
He claims that transhumanist ideas can be found throughout history and across cultures, from the Epic of Gilgamesh to historical quests for the Fountain of Youth and the Elixir of Immortality.
The transhumanist idea may thus be regarded as fairly ancient, with modern expressions in disciplines like artificial intelligence and gene editing.
As an activist, Bostrom takes a measured stand on the emergence of powerful transhumanist instruments: he hopes that politicians will act with foresight and properly sequence technological breakthroughs so as to decrease the danger of harmful applications and human extinction.
He believes that everyone should have the chance to become
transhuman or posthuman (have capacities beyond human nature and intelligence).
For Bostrom, success would require a worldwide commitment to global security and continued technological progress, as well as widespread access to the benefits of the technologies (cryonics, mind uploading, anti-aging drugs, life-extension regimens) that hold the most promise for transhumanist change in our lifetimes.
Bostrom, though cautious, rejects conventional humility,
pointing out that humans have a long history of dealing with potentially
catastrophic dangers.
In such matters, he is a strong supporter of "individual
choice," as well as "morphological freedom," or the ability to
transform or reengineer one's body to fulfill specific wishes and requirements.
~ Jai Krishna Ponnappan