In recent decades, questions about the autonomy, culpability, and distributed
accountability of smart robots have sparked popular and scholarly debate over
rights and personhood for artificial intelligences.
The agency of intelligent computers in business and commerce
is of importance to legal systems.
Machine awareness, dignity, and interests pique the interest
of philosophers.
As issues surrounding smart robots and AI show, personhood is in many respects
a construct, emerging from normative views that are renegotiating, if not
equalizing, the statuses of humans, artificial intelligences, animals, and
other legal persons.
Definitions and precedents from previous philosophical,
legal, and ethical attempts to define human, corporate, and animal persons are
often used in debates about electronic personhood.
In his 1909 book The Nature and Sources of the Law, John Chipman Gray examined
the concept of legal personality.
Gray points out that when people hear the word "person," they usually think of
a human being; the technical, legal definition of the term, however, centers
on legal rights.
According to Gray, the issue is whether an entity can be
subject to legal rights and obligations, and the answer depends on the kind of
entity being considered.
At the same time, Gray claims that a thing can be a legal person only if it
possesses intellect and volition.
Charles Taylor demonstrates in his article "The Concept
of a Person" (1985) that to be a person, one must have certain rights.
Personhood, as Gray and Taylor both recognize, is centered on legal status
with respect to guaranteed freedoms.
Legal persons may, for example, enter into contracts, purchase property, and
be sued.
Legal persons are likewise protected by the law and hold certain rights,
including the right to life.
Not all legal persons are human, and not all humans are persons in the eyes of
the law.
Gray shows how Roman temples and medieval churches were treated as persons
with certain rights.
Today, personhood is conferred on corporations and government entities under
the law.
Although these entities are not human, the law recognizes them as persons,
meaning they have rights and are subject to certain legal obligations.
Conversely, there is still considerable debate over whether human fetuses are
legal persons.
Humans in a persistent vegetative state are likewise not recognized as having
personhood under the law.
This personhood argument, which focuses on rights tied to intellect and
volition, has prompted questions about whether intelligent animals should be
awarded personhood.
The Great Ape Project, for example, was created in 1993 to
advocate for apes' rights, such as their release from captivity, protection of
their right to life, and an end to animal research.
In 2013, India recognized dolphins and other cetaceans as nonhuman persons,
resulting in a prohibition on keeping them in captivity for entertainment.
Sandra, an orangutan, was granted the right to life and
liberty by an Argentinian court in 2015.
Some advocates have sought personhood for androids or robots by analogy with
the moral consideration extended to animals.
For some individuals, it is only natural that an android be
given legal protections and rights.
Those who disagree think that we cannot see androids in the
same light as animals since artificial intelligence was invented and engineered
by humans.
In this perspective, androids are both machines and
property.
At this stage, whether a robot can be considered a legal person remains an
open question.
However, because the defining elements of personhood often intersect with
questions of intellect and volition, these same factors fuel the debate over
whether artificial intelligence should be accorded personhood.
Personhood is often defined by two factors: rights and moral
standing.
A person's moral standing is determined by whether or not
they are seen as valuable and, as a result, treated as such.
Taylor goes on, however, to define the category of person by focusing on
certain abilities.
To be categorized as a person, he believes, one must be able to recognize the
difference between the future and the past.
A person must also be able to make decisions and form a plan for his or her
future.
A person must likewise hold a set of values or morals.
In addition, a person would have a self-image or sense of identity.
In light of these requirements, those who believe androids might be accorded
personhood acknowledge that such beings would need to possess certain
capacities.
F. Patrick Hubbard, for example, believes that robots should be accorded
personhood only if they satisfy specific conditions.
These qualities include having a sense of self, having a
life goal, and being able to communicate and think in sophisticated ways.
David Lawrence proposes an alternative set of conditions for awarding
personhood to an android.
For starters, he talks about AI having awareness, as well as
the ability to comprehend information, learn, reason, and have subjectivity,
among other things.
Although his focus is on the ethical treatment of animals, Peter Singer offers
a much simpler approach to personhood.
In his view, the distinguishing criterion for conferring personhood is the
capacity to suffer.
If something can suffer, it deserves equal moral consideration regardless of
whether it is a person, an animal, or a computer.
Indeed, Singer considers it wrong to disregard any being's suffering.
Some individuals feel that if androids meet some or all of
the aforementioned conditions, they should be accorded personhood, which comes
with individual rights such as the right to free expression and freedom from
slavery.
Those who oppose awarding personhood to artificial intelligence often believe
that only natural beings should be granted personhood.
Another point of contention is the robot's status as a human-made object.
On this view, because robots are designed to follow human instructions, they
are not autonomous individuals with free will; they are simply artifacts that
people have labored to create.
An android cannot be granted rights if it lacks a will and an independent mind
of its own.
Certain limitations may bind androids, according to David
Calverley.
Asimov's Laws of Robotics, for example, may constrain an
android.
If such were the case, the android would lack the capacity
to make completely autonomous decisions.
Others argue that artificial intelligence lacks critical attributes of
persons, such as a soul, emotions, or awareness, criteria that have previously
been used to deny personhood to animals.
Even in humans, though, something like awareness is difficult to define or
measure.
Finally, resistance to android personhood is often motivated by fear, which is
reinforced by science fiction literature and films.
In such stories, androids are shown as possessing greater
intellect, potentially immortality, and a desire to take over civilization,
displacing humans.
According to Lawrence Solum, each of these concerns stems from a dread of
anything that is not human, and he contends that people reject personhood for
AI simply because AIs lack human DNA.
Such an attitude troubles him, and he compares it to American slavery, in
which enslaved people were denied rights purely because they were not white.
He objects to an android being denied rights solely because it is not human,
particularly when nonhuman entities can possess emotions, awareness, and
intellect.
Although personhood for androids remains largely theoretical, recent events
and debates have given the issue practical significance.
Sophia, a social humanoid robot, was created in 2015 by Hanson Robotics, a
Hong Kong-based company.
She made her first public appearance in March 2016, and in October 2017 she
became a citizen of Saudi Arabia.
Sophia was also the first nonhuman to be conferred a United
Nations title when she was dubbed the UN Development Program's inaugural Innovation
Champion in 2017.
Sophia has given talks and interviews around the globe.
Sophia has even indicated a wish to own a house, marry, and
have a family.
In early 2017, the European Parliament proposed giving sophisticated robots
the status of "electronic persons," making them accountable for any damage
they cause.
Supporters of the proposal envisioned this legal personhood as analogous to
the legal standing of corporations.
In contrast, over 150 experts from 14 European nations signed an open letter
in 2018 opposing the proposal, arguing that it would inappropriately absolve
manufacturers of accountability for their products.
The personhood of robots is not included in a revised
proposal from the European Parliament.
However, the dispute over culpability continues, as illustrated by the death
of a pedestrian struck by a self-driving vehicle in Arizona in March 2018.
Our notions about who merits ethical treatment have evolved
through time in Western history.
Susan Leigh Anderson views this as a beneficial development
since she associates the expansion of rights for more entities with a rise in
overall ethics.
As more animals are granted rights, the singular status of humans may continue
to evolve.
If androids begin to process information in ways comparable to the human mind,
our understanding of personhood may need to expand further still.
The word "person" covers a set of talents and
attributes, as David DeGrazia explains in Human Identity and Bioethics (2012).
Any entity exhibiting these qualities, including artificial intelligence, might be considered as a human in such situation.
See also:
Asimov, Isaac; Blade Runner; Robot Ethics; The Terminator.
References & Further Reading:
Anderson, Susan L. 2008. “Asimov’s ‘Three Laws of Robotics’ and Machine Metaethics.” AI & Society 22, no. 4 (April): 477–93.
Calverley, David J. 2006. “Android Science and Animal Rights, Does an Analogy Exist?” Connection Science 18, no. 4: 403–17.
DeGrazia, David. 2005. Human Identity and Bioethics. New York: Cambridge University Press.
Gray, John Chipman. 1909. The Nature and Sources of the Law. New York: Columbia University Press.
Hubbard, F. Patrick. 2011. “‘Do Androids Dream?’ Personhood and Intelligent Artifacts.” Temple Law Review 83: 405–74.
Lawrence, David. 2017. “More Human Than Human.” Cambridge Quarterly of Healthcare Ethics 26, no. 3 (July): 476–90.
Solum, Lawrence B. 1992. “Legal Personhood for Artificial Intelligences.” North Carolina Law Review 70, no. 4: 1231–87.
Taylor, Charles. 1985. “The Concept of a Person.” In Philosophical Papers, Volume 1: Human Agency and Language, 97–114. Cambridge, UK: Cambridge University Press.