The Turing Test, named after computer scientist Alan Turing, is a method of determining whether a machine can exhibit intelligent behavior indistinguishable from that of a human. As an AI benchmark, it attributes intelligence to any machine capable of displaying behavior comparable to a person's.
The test's locus classicus is Turing's "Computing Machinery and Intelligence" (1950), which establishes a simple prototype that Turing calls the "Imitation Game."
In this game, a human judge must determine which of two rooms is occupied by a computer and which by another human, based solely on anonymized replies to natural-language questions the judge poses to each occupant.
Whereas the human respondent must answer the judge's questions truthfully, the machine's purpose is to deceive the judge into thinking it is human.
According to Turing, the machine may be considered intelligent to the degree that it succeeds at this task.
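The protocol Turing describes can be sketched as a simple procedure. The function and label names below are illustrative choices, not from Turing's paper:

```python
import random

def imitation_game(judge, human, machine, questions):
    """Minimal sketch of the Imitation Game: a judge interrogates two
    hidden respondents through text alone and must identify the machine."""
    # Randomly assign the hidden respondents to the anonymous labels A and B.
    a, b = random.sample([human, machine], 2)
    transcript = {
        "A": [a(q) for q in questions],
        "B": [b(q) for q in questions],
    }
    guess = judge(transcript)          # "A" or "B": the judge's pick for the machine
    machine_label = "A" if a is machine else "B"
    # The machine "passes" when the judge fails to identify it.
    return guess != machine_label
```

A perfectly discriminating judge always unmasks the machine, so the machine never passes; Turing's criterion concerns how often a real judge fails.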
The fundamental benefit of this essentially operationalist view of intelligence is that it sidesteps complex metaphysical and epistemological questions about the nature and inner experience of intelligent activity.
By Turing's criterion, little more than empirical observation of outward behavior is required to ascribe intelligence to an object.
This stands in sharp contrast to the broadly Cartesian epistemological tradition, which holds that some internal self-awareness is a prerequisite for intelligence.
Turing's method avoids the so-called "problem of other minds" that arises from such a viewpoint: namely, how one can be confident of the existence of other intelligent individuals if their thoughts are knowable only from a presumably required first-person perspective.
Nonetheless, the Turing Test, at least insofar as it considers intelligence in a strictly formalist manner, is bound up with the spirit of Cartesian epistemology.
The machine in the Imitation Game is a digital computer in Turing's sense: a set of operations that may, in principle, be implemented in any material.
A digital computer consists of three parts: a store (memory), an executive unit that carries out individual operations, and a control that regulates the executive unit.
However, as Turing points out, it makes no difference
whether these components are created using electrical or mechanical means.
What matters is the formal set of rules that make up the
computer's very nature.
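Turing's three-part description can be illustrated with a toy machine. The instruction set below is invented for illustration and is not Turing's:

```python
def run(store):
    """Toy digital computer in Turing's three-part sense: a store of
    instructions and data, an executive unit that carries out one
    operation at a time, and a control that decides which comes next."""
    pc = 0        # control: index of the next instruction in the store
    acc = 0       # a single accumulator register
    while pc < len(store):
        op, arg = store[pc]        # control hands one order to the executive unit
        if op == "LOAD":           # executive unit: carry out the operation
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "HALT":
            break
        pc += 1
    return acc

# The program is just data in the store; nothing about it is
# electrical or mechanical, which is Turing's point.
program = [("LOAD", 2), ("ADD", 3), ("HALT", None)]
```

Here `run(program)` evaluates 2 + 3 identically regardless of what physical substrate executes the loop.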
Turing thus holds the core belief that intelligence is essentially immaterial. If so, it is reasonable to suppose that human intelligence functions in a manner analogous to a digital computer and may therefore be reproduced artificially.
Since Turing's work, AI research has been split into two camps: those who embrace this fundamental premise and those who oppose it.
John Haugeland coined the term "good old-fashioned AI," or GOFAI, to describe the first camp. Its proponents include Marvin Minsky, Allen Newell, Herbert Simon, Terry Winograd, and, most notably, Joseph Weizenbaum, whose program ELIZA was controversially hailed in 1966 as the first to pass the Turing Test.
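ELIZA's method was simple keyword-and-template transformation. The rules below are invented for illustration; Weizenbaum's original DOCTOR script differs:

```python
import re

# Illustrative ELIZA-style rules (not Weizenbaum's originals): match a
# keyword pattern, then echo fragments of the input back as a question.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Why does your {0} concern you?"),
]
DEFAULT = "Please go on."

def reply(utterance):
    """Surface pattern matching only: the program transforms symbols
    without any model of what they mean."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

So `reply("I am tired")` yields "Why do you say you are tired?". The conversational plausibility owes nothing to understanding, which is exactly what made ELIZA's reception controversial.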
Nonetheless, critics of Turing's formalism have proliferated, particularly over the past three decades, and GOFAI is now widely regarded as a discredited approach to AI.
One of the most renowned criticisms of GOFAI in general, and of the assumptions of the Turing Test in particular, is John Searle's "Minds, Brains, and Programs" (1980), in which Searle develops his now-famous Chinese Room thought experiment.
Searle grants that, given adequate mastery of the program's rules, the person inside the room could pass the Turing Test, fooling a native Chinese speaker into believing she understands Chinese.
Since the person in the room operates as a digital computer, however, Turing-type tests, according to Searle, fail to capture the phenomenon of understanding, which he claims involves more than the functionally accurate mapping of inputs to outputs.
Searle's argument implies that AI research should take questions of materiality seriously in ways that the formalism of Turing's Imitation Game does not.
Extending his explanation of the Chinese Room thought experiment, Searle argues that the physical makeup of human beings, particularly their sophisticated nervous systems and brain tissue, should not be dismissed as irrelevant to conceptions of intelligence.
This viewpoint has influenced connectionism, an altogether different approach to AI that aims to build machine intelligence by modeling the neural circuitry of the human brain. The effectiveness of this strategy remains hotly contested, although it appears to outperform GOFAI at developing generalized forms of intelligence.
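The connectionist idea can be seen in miniature in a single artificial neuron that acquires its weights from examples rather than following hand-written rules. The hyperparameters below are arbitrary illustrative choices:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Minimal connectionist unit: a single artificial neuron whose
    weights are adjusted from examples rather than programmed rules."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Learning is gradual weight adjustment, not symbol manipulation.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical AND from examples alone.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After training, `predict(1, 1)` returns 1 and the other inputs return 0: behavior acquired by weight adjustment from data, with no explicit rule for AND anywhere in the program.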
Turing's test may be criticized not only from the standpoint of materialism, however, but also from that of a renewed formalism.
On this view, one may argue that Turing tests are insufficient as a measure of intelligence because they aim to reproduce human behavior, which is frequently far from intelligent. According to certain variants of this argument, if criteria of rationality are to distinguish rational from irrational human conduct in the first place, they must be derived a priori rather than from actual human behavior.
This line of criticism has grown more acute as AI research has shifted its focus to the possibility of so-called superintelligence: forms of generalized machine intelligence that far surpass human intellect. Should this level of AI be attained, Turing tests would seem to be obsolete.
Indeed, merely conceptualizing superintelligence would seem to require criteria of intelligence beyond strict Turing testing.
Turing may be defended against such criticism by noting that establishing a universal criterion of intelligence was never his goal.
Indeed, according to Turing, the purpose is to replace the metaphysically problematic question "Can machines think?" with the more empirically verifiable alternative: "What will happen when a computer assumes the role [of the man in the Imitation Game]" (Turing 1997, 29–30).
Thus the above-mentioned flaw of Turing's test, its failure to establish a priori standards of rationality, is also part of its strength and appeal.
It also explains why the test has exerted such a lasting influence on AI research in all domains since it was first presented three-quarters of a century ago.
See also:
Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.
References And Further Reading
Haugeland, John. 1997. “What Is Mind Design?” In Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 1–28. Cambridge, MA: MIT Press.
Searle, John R. 1997. “Minds, Brains, and Programs.” In Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 183–204. Cambridge, MA: MIT Press.
Turing, A. M. 1997. “Computing Machinery and Intelligence.” In Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 29–56. Cambridge, MA: MIT Press.