The Asilomar Conference on Beneficial AI is among the most prominent expressions of social concern about artificial intelligence and its dangers to people, concerns first popularized by Isaac Asimov's Three Laws of Robotics.
"A robot may not injure a human being or, through
inaction, allow a human being to come to harm; A robot must obey the orders
given it by human beings except where such orders would conflict with the
First Law; A robot must protect its own existence as long as such protection
does not conflict with the First or Second Law" (Asimov 1950, 40).
In later books, Asimov added a Fourth, or Zeroth,
Law, often quoted as "A robot may not harm humanity, or, by
inaction, allow humanity to come to harm"; it is elaborated in Robots
and Empire by the robot character Daneel Olivaw (Asimov 1985, chapter 18).
Asimov's Zeroth Law provoked debate over how to determine
what counts as harmful to humanity.
This question was the subject of the 2017 Asilomar Conference on
Beneficial AI, which moved beyond the Three Laws and the Zeroth Law to propose
twenty-three principles for safeguarding humanity in the future of AI.
The conference's sponsor, the Future of Life Institute, has
posted the principles on its website, where they have gathered 3,814 signatures
from AI experts and other interdisciplinary supporters.
The principles fall into three broad categories: research
issues, ethics and values, and longer-term concerns.
The research principles are intended to ensure that the
goals of artificial intelligence remain beneficial to people,
and to help investors decide where to direct funding
in AI research.
To achieve beneficial AI, the Asilomar signatories agree that
research agendas should foster and preserve openness and dialogue
among AI researchers, policymakers, and developers.
Researchers working on the development of artificial
intelligence systems should cooperate to prioritize safety.
The proposed principles on ethics and values aim to
prevent harm and to promote direct human control over artificial intelligence
systems.
Parties to the Asilomar principles hold that AI should
reflect human values such as individual rights, freedoms, and acceptance
of diversity.
In particular, artificial intelligences should respect
human liberty and privacy and should be used only to empower and enrich
humanity.
AI must conform to human social and civic norms.
The Asilomar signatories believe that AI creators should be
held accountable for their work.
One concern that stands out is the possibility of an arms
race in autonomous weapons.
Because the stakes are so high, the framers of the Asilomar
principles included principles addressing longer-term challenges,
counseling prudence, careful planning, and human
oversight.
Superintelligence should be developed for the broad benefit of
humanity, and not merely to advance the aims of a single company or government.
The Asilomar Conference's twenty-three principles have
sparked ongoing discussion about the need for beneficial AI and concrete
safeguards for the future of AI and humanity.
~ Jai Krishna Ponnappan
See also:
Accidents and Risk Assessment; Asimov, Isaac; Autonomous Weapons Systems, Ethics of; Campaign to Stop Killer Robots; Robot Ethics.
Further Reading
Asilomar AI Principles. 2017. https://futureoflife.org/ai-principles/.
Asimov, Isaac. 1950. “Runaround.” In I, Robot, 30–47. New York: Doubleday.
Asimov, Isaac. 1985. Robots and Empire. New York: Doubleday.
Sarangi, Saswat, and Pankaj Sharma. 2019. Artificial Intelligence: Evolution, Ethics, and Public Policy. Abingdon, UK: Routledge.