Daniel Dennett (1942–) is the Austin B. Fletcher Professor of Philosophy and co-director of the Center for Cognitive Studies at Tufts University.
His main areas of research and publication are philosophy of mind, free will, evolutionary biology, cognitive neuroscience, and artificial intelligence.
He has written over a dozen books and hundreds of articles.
Much of this work focuses on the origins and nature of consciousness and on how consciousness can be described naturalistically.
Dennett is also an outspoken atheist and one of the "Four Horsemen" of New Atheism, alongside Richard Dawkins, Sam Harris, and Christopher Hitchens.
Dennett's worldview is thoroughly naturalistic and materialistic. He rejects Cartesian dualism, which holds that the mind and body are two distinct substances that somehow interact.
Instead, he contends that the brain is a kind of computer that has evolved over time through natural selection.
Dennett also rejects the homunculus theory of mind, which holds that the brain contains a central controller, or "little man," who does all of the thinking and feeling.
He argues instead for a view he calls the multiple drafts model.
According to this theory, which he lays out in his 1991 book Consciousness Explained, the brain continually sifts, interprets, and edits sensations and inputs, composing overlapping drafts of experience.
Dennett later adopted the metaphor of "fame in the brain" to describe how different aspects of ongoing neural processes come to prominence at different times and under different circumstances.
Consciousness is the narrative composed of these varied interpretations of events.
Dennett rejects the assumption that these interpretations converge on, or are organized in, a central part of the brain, a notion he mockingly calls the "Cartesian theater." The brain's narrative is instead a continuous, decentralized, bottom-up flow of awareness distributed across time and space.
Dennett denies the existence of qualia, the subjective, individual character of experiences such as how colors look or how foods taste.
He does not deny that colors and tastes exist; rather, he claims that the experience of color or taste does not exist as a separate thing in the human mind.
He claims that there is no difference in kind between human and computer "experiences" of sensation. Just as some robots can discriminate colors without anyone concluding that they have qualia, Dennett argues, so can the human brain.
For Dennett, the color red is just the property that brains detect and that the English language labels "red." It has no extra, ineffable quality.
This is a crucial consideration for artificial intelligence, because the capacity to experience qualia is often presented as a barrier to the development of Strong AI (AI functionally equivalent to a human's) and as something that will always distinguish human from machine intelligence.
If qualia do not exist, however, as Dennett contends, they cannot be a stumbling block to the creation of machine intelligence comparable to a human's.
In another metaphor, Dennett compares our brains to termite colonies.
Termites do not get together and plot to build a mound, yet their individual activities produce one.
The mound is the result of natural selection producing uncomprehending competence in cooperative mound-building, not of intelligent design by the termites.
Termites do not need to understand what they are doing in order to build a mound. Comprehension, likewise, is an emergent product of such competences rather than their prerequisite.
Brains, according to Dennett, are control centers that have
evolved to respond swiftly and effectively to threats and opportunities in the
environment.
As the demands of responding to the environment grow more complex, comprehension emerges as a tool for meeting them. Comprehension is a matter of degree, falling along a sliding scale.
At the low end of the scale, for example, Dennett places the quasi-comprehension of bacteria responding to diverse stimuli and of computers responding to coded instructions.
At the other end, he places Jane Austen's comprehension of human social dynamics and Albert Einstein's understanding of relativity.
These, however, are differences of degree, not of kind. Natural selection has shaped both ends of the spectrum.
Comprehension is not a separate mental process arising over and above the brain's various competences. Rather, it is composed of those competences.
Consciousness is an illusion to the extent that we take it to be an additional element of the mind, whether in the form of qualia or of comprehension.
In general, Dennett advises against positing comprehension where mere competence will suffice.
Humans, however, routinely adopt what Dennett calls the "intentional stance" toward other humans and, in some cases, animals.
To take the intentional stance is to interpret actions as the outcome of mind-directed beliefs, emotions, desires, or other mental states.
Dennett contrasts this with the "physical stance" and the "design stance."
From the physical stance, something is seen as the outcome of purely physical forces or natural laws.
Gravity causes a stone to fall when it is dropped, not any
conscious purpose to return to the ground.
From the design stance, an action is seen as the mindless outcome of a preprogrammed, or predetermined, purpose.
An alarm clock, for example, beeps at a certain time because
it was built to do so, not because it chose to do so on its own.
In contrast to both the physical and design stances, the intentional stance treats behavior as though it were the result of the agent's deliberate choices.
For computers, it can be difficult to decide whether to apply the intentional or the design stance.
A chess-playing computer has been designed with the goal of winning. However, its moves are often indistinguishable from those of a human chess player who wants, or intends, to win.
In fact, taking the intentional stance toward the computer's behavior, rather than the design stance, improves a person's ability to interpret and respond to its play.
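The point can be made concrete with a small sketch. The Python fragment below is a toy illustration only, not anything drawn from Dennett's writings: the miniature "engine," its material scores, and the move names are invented for this entry. It shows how a prediction made from the intentional stance (the system "wants" to win) can match the output of the actual mechanism without consulting that mechanism at all.

```python
# Toy sketch for illustration; the "engine," its scores, and the
# move names are hypothetical examples, not Dennett's own.

# Candidate moves paired with the material (in pawns) each one wins.
CANDIDATES = {"Qxd5": 9, "Nf3": 0, "exd5": 1}

def engine_move(candidates):
    """Design stance: trace the actual mechanism, here a hand-coded
    loop that compares material scores one move at a time."""
    best_move, best_gain = None, float("-inf")
    for move, gain in candidates.items():
        if gain > best_gain:
            best_move, best_gain = move, gain
    return best_move

def intentional_prediction(candidates):
    """Intentional stance: ignore the internals entirely, treat the
    system as an agent that wants to win, and predict whichever move
    best serves that goal."""
    return max(candidates, key=candidates.get)

# Both routes converge on the same move. Ascribing a goal predicts
# the behavior as well as tracing the design does, and far more
# cheaply once the mechanism is too complex to trace.
assert engine_move(CANDIDATES) == intentional_prediction(CANDIDATES)
print("Predicted move:", intentional_prediction(CANDIDATES))
```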
Dennett claims that the intentional stance is the best strategy to adopt toward both humans and computers, since it works best in describing and predicting the behavior of each.
Furthermore, there is no need to differentiate the two cases in this respect.
Although the intentional stance treats behavior as agent-driven, it takes no position on what is actually going on inside the human's or machine's internal workings.
The stance thus provides a neutral starting point from which to investigate cognitive competence without presupposing any particular explanation of what is happening behind the scenes.
Because human mental abilities developed naturally, Dennett sees no reason why AI should be impossible in principle.
Furthermore, abandoning the concept of qualia and adopting the intentional stance, which relieves people of the burden of speculating about what lies behind cognition, removes two major impediments to solving the hard problem of consciousness.
Dennett argues that since the human brain and the computer are both machines, there is no good theoretical reason why humans should be capable of acquiring competence-driven comprehension while AI is intrinsically incapable of it.
And because consciousness in the traditional sense is illusory, it is not a requirement for Strong AI.
Dennett does not believe that Strong AI is theoretically
impossible.
He does, however, believe that the necessary technical sophistication is still at least fifty years away.
Nor, according to Dennett, is the development of Strong AI desirable.
Humans should strive to build AI tools, he argues, but attempting to make computer friends or colleagues would be a mistake.
Such machines, he claims, would lack human moral intuitions and understanding and hence would be unable to integrate into human society.
Humans do not need robots for companionship, since they have each other.
Robots, even AI-enhanced machines, should be regarded strictly as tools for human use.
~ Jai Krishna Ponnappan
See also:
Cognitive Computing; General and Narrow AI.
Further Reading:
Dennett, Daniel C. 1987. The Intentional Stance. Cambridge, MA: MIT Press.
Dennett, Daniel C. 1993. Consciousness Explained. London: Penguin.
Dennett, Daniel C. 1998. Brainchildren: Essays on Designing Minds. Cambridge, MA: MIT Press.
Dennett, Daniel C. 2008. Kinds of Minds: Toward an Understanding of Consciousness. New York: Basic Books.
Dennett, Daniel C. 2017. From Bacteria to Bach and Back: The Evolution of Minds. New York: W. W. Norton.
Dennett, Daniel C. 2019. “What Can We Do?” In Possible Minds: Twenty-Five Ways of Looking at AI, edited by John Brockman, 41–53. London: Penguin Press.