In its most common use, the term "superintelligence" refers to any intellect that at least matches, and on most accounts far exceeds, human cognitive ability across a broad range of domains.
Though computers have long outperformed humans at specific tasks (a calculator, for example, performs arithmetic far faster than any person), such systems are rarely considered superintelligent in the strict sense because of their narrow functional range.
Superintelligence in the strict sense would therefore require, in addition to mastery of specific theoretical tasks, mastery of what has traditionally been called practical intelligence: a generalized capacity for subsuming particulars under universal categories in ways that are worthwhile.
To date, no such generalized superintelligence has emerged, and all discussion of it therefore remains speculative to some degree.
Whereas traditional accounts of superintelligence were confined to speculative metaphysics and theology, recent advances in computer science and biotechnology have opened up the prospect of a materially realized superintelligence.
Although the timing of such a development is hotly debated, a growing body of evidence suggests that material superintelligence is both possible and likely.
If this hypothesis proves correct, superintelligence will almost certainly result from advances in one of two major areas of AI research:
- Bioengineering
- Computer science
The former involves efforts not only to map and manipulate the human genome but also to replicate the human brain electronically through whole brain emulation, also known as mind uploading.
The first of these bioengineering efforts is not new, with eugenics programs reaching back to at least the nineteenth century.
Despite the serious ethical and legal issues such efforts inevitably raise, the elucidation of DNA's structure in the twentieth century, together with advances in genome mapping, has rekindled interest in eugenics.
Much of this research aims at a better understanding of the genetic basis of the human brain so that DNA can be manipulated in the direction of superhuman intelligence.
Uploading is a different, though still biologically grounded, approach to superintelligence: it aims to map biological neural networks in sufficient detail to transfer human intelligence onto computer hardware.
In this relatively new area of study, the brains of insects and small animals are micro-dissected and then scanned for detailed computer analysis. The underlying premise of whole brain emulation is that if the brain's structure were sufficiently well understood and mapped, it could be replicated with or without organic brain tissue.
Despite rapid progress in both genetic mapping and whole brain emulation, each technique faces significant limits, making it less likely that either of these biological approaches will be the first to attain superintelligence.
Genetic alteration of the human genome, for example, is limited by generational time scales.
Even if it were feasible today to artificially boost cognitive functioning by modifying the DNA of a human embryo (and it is still a long way off), it would take an entire generation for the modified embryo to develop into a fully grown, superintelligent human being.
Such a scenario also presupposes that there are no legal or moral barriers to manipulating human DNA, which is far from the case.
Even the comparatively modest genetic manipulation of human embryos carried out by a Chinese scientist as recently as November 2018 sparked international outrage (Ramzy and Wee 2019).
Whole brain emulation faces obstacles of its own: given current medical technology, the extreme precision required at every step of the uploading process is simply unattainable.
Science and technology currently lack the means to dissect and scan human brain tissue with sufficient precision to produce a complete emulation.
Furthermore, even if these first steps were feasible, researchers would still face enormous challenges in analyzing and digitally replicating the human brain, even with cutting-edge computing technology.
Many analysts believe such constraints will eventually be overcome, though on an unknown timeline.
Apart from biotechnology, the second major path to superintelligence is the field of AI proper, strictly defined as any form of nonorganic (particularly computer-based) intelligence.
Of course, the task of building a superintelligent AI from the ground up is complicated by a number of factors, not all of them purely logistical; the logistical ones include processing speed, hardware and software design, and funding.
Beyond such practical challenges lies a significant philosophical problem: human programmers cannot know, and therefore cannot directly program, an intelligence superior to their own.
This worry motivates much contemporary research on machine learning and interest in the notion of a seed AI.
The latter is defined as any machine capable of modifying its responses to stimuli based on an analysis of how well it performs against a predetermined objective.
Importantly, the concept of a seed AI entails not only the capacity to modify its responses by extending its base of content knowledge (stored information) but also the ability to modify the structure of its own programming to better suit a given task (Bostrom 2017, 29).
Indeed, it is this latter capability that would give a seed AI what Nick Bostrom calls "recursive self-improvement": the ability to improve itself iteratively (Bostrom 2017, 29).
This would eliminate the need for programmers to hold an a priori vision of superintelligence, since the seed AI would continually enhance its own programming, each more intelligent iteration writing a superior version of itself, eventually surpassing the human level.
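This feedback loop can be made concrete with a toy sketch. The Python fragment below is a minimal illustration under stated assumptions, not anything drawn from Bostrom or the AI literature: a simple hill-climbing search that revises its answer against a fixed objective and, crucially, also revises the strategy (the mutation step size) it uses to improve itself. All names and numbers are illustrative.

```python
import random

def objective(x: float) -> float:
    """The predetermined goal the system is scored against (peak at x = 3)."""
    return -(x - 3.0) ** 2

def run(generations: int = 200) -> float:
    x = 0.0      # the system's current "answer"
    step = 1.0   # its current self-improvement strategy (mutation size)
    for _ in range(generations):
        # Propose a change to the answer AND to the improvement strategy itself.
        new_step = step * random.choice((0.8, 1.25))
        candidate = x + random.gauss(0.0, new_step)
        # Keep both changes only if performance against the goal improves.
        if objective(candidate) > objective(x):
            x, step = candidate, new_step
    return x

print(f"converged near x = {run():.3f}")  # typically close to 3.0
```

A genuine seed AI would rewrite its own source code rather than a single numeric strategy parameter, but the structure of the loop, with performance feedback driving changes both to the answers and to the machinery that produces them, is the point of the analogy.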
Such a machine would undoubtedly cast doubt on the
conventional philosophical assumption that robots are incapable of
self-awareness.
Proponents of this perspective can be traced back to Descartes, but they also include more recent thinkers such as John Haugeland and John Searle.
On this view, machine intelligence is merely the successful correlation of inputs with outputs according to a predefined program.
Machines thus differ from humans in kind, humans alone being characterized by conscious self-awareness.
Humans are held to understand the activities they perform, whereas machines are thought to carry out their functions mindlessly, that is, without understanding what they are doing.
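The input-output picture is easy to make concrete. The toy responder below is an illustrative assumption, not an example drawn from Searle or Haugeland: it correlates inputs with outputs through a fixed table and so "answers" correctly without any grasp of what the answers mean, which is precisely the mindlessness this tradition attributes to machines.

```python
# A fixed input-output correlation in the spirit of the Cartesian picture
# of machine "intelligence". The table entries are illustrative.
RESPONSES = {
    "What is 2 + 2?": "4",
    "What is the capital of France?": "Paris",
}

def respond(prompt: str) -> str:
    # Pure lookup: a correct pairing of input with output,
    # with no comprehension of either.
    return RESPONSES.get(prompt, "I do not know.")

print(respond("What is the capital of France?"))  # -> Paris
```

A seed AI breaks this mold: its mapping from inputs to outputs is not fixed in advance but is itself revised in light of performance.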
Should a successful seed AI ever be constructed, this core idea would face a serious challenge.
By upgrading its own programming in ways that surprise its human programmers and defy their forecasts, a seed AI would demonstrate a degree of autonomy, and perhaps self-awareness, not readily explained by the Cartesian paradigm.
Indeed, although still speculative for the time being, the increasingly plausible prospect of superintelligent AI raises a host of moral and legal dilemmas that have generated considerable philosophical debate.
The chief worries concern the security of the human species in the event of what Bostrom calls an "intelligence explosion": the creation of a seed AI followed by a potentially exponential growth in intelligence (Bostrom 2017).
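Bostrom's kinetic sketch, reconstructed here from the book's discussion and therefore a paraphrase rather than a quotation, makes the exponential worry precise: the rate at which a system's intelligence $I$ grows is the optimization power applied to improving it divided by the system's recalcitrance $R$,

$$\frac{dI}{dt} = \frac{\text{optimization power}}{R}.$$

Once the system itself supplies optimization power in proportion to its own capability, say $cI$, while recalcitrance stays roughly constant, the equation becomes $\frac{dI}{dt} = \frac{c}{R}I$, whose solution $I(t) = I_0 e^{(c/R)t}$ grows exponentially.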
One key problem is the inherently unpredictable character of such an event.
Because autonomy is part of what superintelligence means, humans cannot fully foresee how a superintelligent AI would act.
Even in the few cases of specialized superintelligence that humans have so far been able to construct and study, for example, programs that have surpassed humans at strategy games such as chess and Go, human forecasts about AI behavior have proven highly unreliable.
For many critics, such unpredictability is a strong indication that, should more general forms of superintelligent AI emerge, humans would quickly lose the capacity to control them (Kissinger 2018).
Of course, such a loss of control does not automatically imply an adversarial relationship between humans and superintelligence.
Indeed, although most of the literature on superintelligence portrays this relationship as adversarial, some recent work argues that this framing reflects a prejudice against machines that is particularly prevalent in Western cultures (Knight 2014).
Nonetheless, there are compelling grounds to believe that a superintelligent AI would at the very least regard human goals as incompatible with its own, and might even regard humans as existential threats.
For example, computer scientist Steve Omohundro has argued that even a relatively simple superintelligent AI such as a chess bot would have reason to seek the extinction of humanity as a whole, and might be able to develop the means to do so (Omohundro 2014).
Similarly, Bostrom has argued that an intelligence explosion would very likely result in, if not the extinction of the human race, then at least a bleak future (Bostrom 2017).
Whatever the merits of such theories, the profound uncertainty entailed by superintelligence is plain.
If there is one point of agreement in this large and diverse literature, it is that if AI research is to continue, the global community must take great care to safeguard human interests.
This claim may be contested by hard determinists, who hold that technological advancement is so tightly bound to inexorable market forces that it is simply impossible to alter its pace or direction in any significant way.
On this determinist view, if AI can deliver cost-cutting solutions for industry and commerce (as it has already begun to do), its development will press on toward superintelligence regardless of any unintended negative consequences.
Skeptics of determinism counter that growing societal awareness of AI's potential risks, together with close political oversight of its development, can check such forces.
Bostrom highlights several examples of effective international cooperation in science and technology, including CERN, the Human Genome Project, and the International Space Station, as important precedents that challenge the determinist view (Bostrom 2017, 253).
To these, one might add the worldwide environmental movement, which emerged in the 1960s and 1970s and has imposed significant restraints on the pollution produced in the name of unregulated capitalism (Feenberg 2006).
Given the speculative nature of superintelligence research,
it is hard to predict what the future holds.
However, if superintelligence does pose an existential threat to humanity, caution dictates that a globally collaborative, rather than purely free-market, approach to AI be adopted.
~ Jai Krishna Ponnappan
See also:
Berserkers; Bostrom, Nick; de Garis, Hugo; General and Narrow AI; Goertzel, Ben; Kurzweil, Ray; Moravec, Hans; Musk, Elon; Technological Singularity; Yudkowsky, Eliezer.
References & Further Reading:
- Bostrom, Nick. 2017. Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford University Press.
- Feenberg, Andrew. 2006. “Environmentalism and the Politics of Technology.” In Questioning Technology, 45–73. New York: Routledge.
- Kissinger, Henry. 2018. “How the Enlightenment Ends.” The Atlantic, June 2018. https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/.
- Knight, Heather. 2014. How Humans Respond to Robots: Building Public Policy Through Good Design. Washington, DC: The Project on Civilian Robotics. Brookings Institution.
- Omohundro, Steve. 2014. “Autonomous Technology and the Greater Human Good.” Journal of Experimental & Theoretical Artificial Intelligence 26, no. 3: 303–15.
- Ramzy, Austin, and Sui-Lee Wee. 2019. “Scientist Who Edited Babies’ Genes Is Likely to Face Charges in China.” The New York Times, January 21, 2019.