
Artificial Intelligence - Who Is Sherry Turkle?

 


 

 

Sherry Turkle (1948–) has a background in sociology and psychology, and her work focuses on human-technology interaction.

While her studies in the 1980s focused on how technology affects people's thinking, her work since the 2000s has become more critical of how technology is used at the expense of building and maintaining meaningful interpersonal connections.



She has studied artificial intelligence as embodied in products like children's toys and robotic pets for the elderly to highlight what people lose out on when interacting with such things.


As a professor at the Massachusetts Institute of Technology (MIT) and the founder of the MIT Initiative on Technology and Self, Turkle has been at the vanguard of AI developments.

In Life on the Screen: Identity in the Age of the Internet (1995), she highlights a conceptual change in the understanding of AI that occurred between the 1960s and 1980s, one that substantially changed the way humans connect to and interact with AI.



She claims that early AI paradigms depended on extensive preprogramming and employed a rule-based concept of intelligence.


However, this viewpoint has given way to one that considers intelligence to be emergent.

This emergent paradigm, which became the recognized mainstream view by 1990, claims that AI arises from a much simpler set of learning algorithms.

The emergent approach, according to Turkle, aims to emulate the way the human brain functions, helping to break down barriers between computers and nature, and more generally between the natural and the artificial.

In summary, an emergent approach to AI allows people to connect to the technology more easily, even thinking of AI-based programs and gadgets as children.



The rising acceptance of the emergent paradigm of AI, and the enhanced relatability it heralds, represents a significant turning point not just for the field of AI but also for Turkle's own study and writing on the subject.


Turkle started to employ ethnographic research techniques to study the relationship between humans and their gadgets in two edited collections, Evocative Objects: Things We Think With (2007) and The Inner History of Devices (2008).


She emphasized in her book The Inner History of Devices that her intimate ethnography, or the ability to "listen with a third ear," is required to go past the advertising-based clichés that are often employed when addressing technology.


This method involves setting aside time for silent reflection so that participants may think thoroughly about their interactions with their devices.


Turkle used similar intimate ethnographic approaches in her second major book, Alone Together: Why We Expect More from Technology and Less from Each Other (2011), to argue that the increasing connection between people and the technology they use is harmful.

These issues are connected to the increased use of social media as a form of communication, as well as the growing familiarity and relatability of technology gadgets, which stems from the emergent AI paradigm that has become practically omnipresent.

She traced the origins of the dilemma back to early pioneers in the field of cybernetics, citing, for example, Norbert Wiener's speculations on the idea of transmitting a human being across a telegraph line in his book God & Golem, Inc. (1964).

Because it reduces both people and technology to information, this approach to cybernetic thinking blurs the barriers between them.



In terms of AI, this implies that it doesn't matter whether the machines with which we interact are really intelligent.


Turkle claims that by engaging with and caring for these technologies, we may deceive ourselves into feeling we are in a relationship, causing us to treat them as if they were sentient.

In a 2006 presentation titled "Artificial Intelligence at 50: From Building Intelligence to Nurturing Sociabilities" at the Dartmouth Artificial Intelligence Conference, she recognized this trend.

She identified the 1997 Tamagotchi, the 1998 Furby, and the 2000 My Real Baby as early versions of what she calls relational artifacts, more broadly referred to in the literature as social machines.

The main difference from earlier children's toys is that these devices come pre-animated and ready for a relationship, whereas earlier toys required children to project a relationship onto them.

Turkle argues that this change is about our human weaknesses as much as it is about computer capabilities.

In other words, just caring for an item increases the likelihood of not only seeing it as intelligent but also feeling a connection to it.

This sense of connection is more relevant to the typical person engaging with these technologies than abstract philosophic considerations concerning the nature of their intelligence.



Turkle delves more into the ramifications of people engaging with AI-based technologies in both Alone Together and Reclaiming Conversation: The Power of Talk in a Digital Age (2015).


She provides the example of Adam in Alone Together, who enjoys the appreciation shown by the AI bots he controls in the game Civilization.

Adam appreciates the fact that he is able to create something fresh when playing.

Turkle, on the other hand, is skeptical of this interaction, stating that Adam's playing isn't actual creation, but rather the sensation of creation, and that it's problematic since it lacks meaningful pressure or danger.

In Reclaiming Conversation, she expands on this point, suggesting that artificial social partners provide merely a perception of camaraderie.

This is important because of the value of human connection and what may be lost in relationships that simply provide a sensation or perception of friendship rather than true friendship.

Turkle believes that this transition is critical.


She claims that although connections with AI-enabled technologies may have certain advantages, they pale in contrast to what is missing: the complete complexity and inherent contradictions that define what it is to be human.


A person's connection with an AI-enabled technology is not as intricate as one's interaction with other individuals.


Turkle claims that as individuals have become more used to and dependent on technology gadgets, the definition of friendship has evolved.


  • People's expectations for companionship have been simplified as a result of this transformation, and the advantages that one hopes to obtain from relationships have been reduced.
  • People now tend to associate friendship only with the concept of interaction, ignoring the more nuanced sentiments and arguments that are typical in human relationships.
  • By engaging with gadgets, one may form a relationship with them.
  • Conversations between humans have become merely transactional as human communication has shifted away from face-to-face conversation and toward interaction mediated by devices. 

In other words, the most that can be anticipated is engagement.



Turkle, who has a background in psychoanalysis, claims that this kind of transactional communication allows users to spend less time learning to view the world through the eyes of another person, which is a crucial ability for empathy.


Drawing together these numerous streams of argument, Turkle argues we are in a "robotic moment," in which people yearn for, and in some circumstances prefer, AI-based robotic companionship over that of other humans.

For example, some people enjoy conversing with their iPhone's Siri virtual assistant because they aren't afraid of being judged by it, as evidenced by a series of Siri commercials featuring celebrities talking to their phones.

Turkle has a problem with this because these devices can only respond as if they understand what is being said.


AI-based gadgets, in reality, are confined to processing the literal content of the data stored on the device.

They can decipher the contents of phone calendars and emails, but they have no idea what any of this data means to the user.

There is no discernible difference between a calendar appointment for car maintenance and one for chemotherapy for an AI-based device.

A person may lose sight of what it is to have an authentic dialogue with another human when entangled in a variety of these robotic connections with a growing number of technologies.


While Reclaiming Conversation documents deteriorating conversation skills and decreasing empathy, it ultimately ends on a positive note.

Because people are becoming increasingly dissatisfied with their relationships, there may be a chance for face-to-face human communication to reclaim its vital role.


Turkle's ideas focus on reducing the amount of time people spend on their phones, but AI's involvement in this interaction is equally critical.


  • Users must accept that their virtual assistant connections will never be able to replace face-to-face interactions.
  • This will necessitate being more deliberate in how one uses devices, prioritizing in-person interactions over the faster and easier interactions provided by AI-enabled devices.


~ Jai Krishna Ponnappan




See also: 

Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.





AI - Symbol Manipulation.

 



Symbol manipulation refers to the broad information-processing capabilities of a digital stored-program computer.

From the 1960s through the 1980s, seeing the computer as fundamentally a symbol manipulator became the norm, leading to the scientific study of symbolic artificial intelligence, now known as Good Old-Fashioned AI (GOFAI).

The emergence of stored-program computers in the late 1940s sparked renewed interest in a computer's programming flexibility.

Symbol manipulation became a comprehensive theory of intelligent behavior as well as a research guideline for AI.

The Logic Theorist, created by Herbert Simon, Allen Newell, and Cliff Shaw in 1956, was one of the first computer programs to mimic intelligent symbol manipulation.

The Logic Theorist was able to prove theorems from Bertrand Russell and Alfred North Whitehead's Principia Mathematica (1910–1913).

It was presented at the Dartmouth Summer Research Project on Artificial Intelligence in 1956 (the Dartmouth Conference).


John McCarthy, a Dartmouth mathematics professor who coined the term "artificial intelligence," convened this symposium.


The Dartmouth Conference might be dubbed the genesis of AI since it was there that the Logic Theorist first appeared, and many of the participants went on to become pioneering AI researchers.

Only in the early 1960s, after Simon and Newell had built their General Problem Solver (GPS), were the features of symbol manipulation, as a generic process underpinning all types of intelligent problem-solving behavior, thoroughly explicated; this account provided a foundation for most of the early work in AI.

In 1961, Simon and Newell took their knowledge of AI and their work on GPS to a wider audience.


"A computer is not a number-manipulating device; it is a symbol-manipulating device," they wrote in Science, "and the symbols it manipulates may represent numbers, letters, phrases, or even nonnumerical, nonverbal patterns" (Newell and Simon 1961, 2012).





Such a device is capable, Simon and Newell continued, of "reading symbols or patterns presented by appropriate input devices, storing symbols in memory, copying symbols from one memory location to another, erasing symbols, comparing symbols for identity, detecting specific differences between their patterns, and behaving in a manner conditional on the results of its processes" (Newell and Simon 1961, 2012).


The growth of symbol manipulation in the 1960s was also influenced by breakthroughs in cognitive psychology and symbolic logic prior to WWII.


Starting in the 1930s, experimental psychologists like Edwin Boring at Harvard University began to move their profession away from philosophical and behaviorist methods.





Boring challenged his colleagues to break the mind open and create testable explanations for diverse cognitive mental operations (an approach that was adopted by Kenneth Colby in his work on PARRY in the 1960s).

Simon and Newell also emphasized their debt to pre-World War II developments in formal logic and abstract mathematics in their historical addendum to Human Problem Solving—not because all thought is logical or follows the rules of deductive logic, but because formal logic treated symbols as tangible objects.

"The formalization of logic proved that symbols can be copied, compared, rearranged, and concatenated with just as much definiteness of procedure as [wooden] boards can be sawed, planed, measured, and glued [in a carpenter shop]," Simon and Newell noted (Newell and Simon 1972, 877).



~ Jai Krishna Ponnappan




See also: 


Expert Systems; Newell, Allen; PARRY; Simon, Herbert A.


References & Further Reading:


Boring, Edwin G. 1946. “Mind and Mechanism.” American Journal of Psychology 59, no. 2 (April): 173–92.

Feigenbaum, Edward A., and Julian Feldman. 1963. Computers and Thought. New York: McGraw-Hill.

McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. San Francisco: W. H. Freeman and Company.

Newell, Allen, and Herbert A. Simon. 1961. “Computer Simulation of Human Thinking.” Science 134, no. 3495 (December 22): 2011–17.

Newell, Allen, and Herbert A. Simon. 1972. Human Problem Solving. Englewood Cliffs, NJ: Prentice Hall.

Schank, Roger, and Kenneth Colby, eds. 1973. Computer Models of Thought and Language. San Francisco: W. H. Freeman and Company.


Artificial Intelligence - Who Was Herbert A. Simon?

 


Herbert A. Simon (1916–2001) was a multidisciplinary scholar who contributed significantly to artificial intelligence.


He is widely regarded as one of the twentieth century's most prominent social scientists.

His contributions at Carnegie Mellon University lasted five decades.

Early artificial intelligence research was driven by the idea of the computer as a symbol manipulator rather than a number cruncher.

Emil Post, who first wrote about this sort of computational model in 1943, is credited with inventing production systems: sets of rules over symbol strings that specify the conditions which must exist before a rule can be applied and the actions to be taken or conclusions to be drawn when it is.

Simon and his Carnegie Mellon colleague Allen Newell popularized these theories regarding symbol manipulation and production systems by praising their potential benefits for general-purpose reading, storing, and replicating, as well as comparing and contrasting various symbols and patterns.


Simon, Newell, and Cliff Shaw's Logic Theorist software was the first to employ symbol manipulation to construct "intelligent" behavior.


The Logic Theorist could independently prove theorems presented in Bertrand Russell and Alfred North Whitehead's Principia Mathematica (1910).

Perhaps most notably, the Logic Theorist uncovered a shorter, more elegant proof of Theorem 2.85 in the Principia Mathematica, which the Journal of Symbolic Logic nevertheless declined to publish because it was coauthored by a machine.

Although it was theoretically conceivable to prove the Principia Mathematica's theorems in an exhaustively detailed and methodical manner, it was impractical in reality due to the time required.

Newell and Simon were fascinated by the human rules of thumb for solving difficult issues for which an extensive search for answers was impossible due to the massive quantities of processing necessary.

They used the term "heuristics" to describe procedures that may solve issues but do not guarantee success.


A heuristic is a "rule of thumb" used to solve a problem that is too difficult or time consuming to address using an exhaustive search, a formula, or a step-by-step method.


Heuristic approaches are often compared with algorithmic methods in computer science, with the result of the method being a significant differentiating element.

According to this contrast, a heuristic program will provide excellent results in most cases, but not always, while an algorithmic program is a clear technique that guarantees a solution.

This is not, however, a technical difference.

In fact, a heuristic procedure that consistently yields the best result may no longer be deemed "heuristic"—alpha-beta pruning is an example of this.
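The contrast between exhaustive and pruned search can be made concrete. Below is an illustrative sketch of minimax search with alpha-beta pruning over a hypothetical hand-built game tree (nested lists whose integer leaves are static evaluation scores); it is a sketch of the general technique, not any particular chess program.

```python
# Minimax search with alpha-beta pruning (illustrative sketch).
# The game tree is a hypothetical nested list; integer leaves are
# static evaluation scores from the maximizing player's viewpoint.

def alphabeta(node, alpha, beta, maximizing):
    """Return the minimax value of `node`, skipping branches that
    cannot change the final decision."""
    if isinstance(node, int):            # leaf: return its evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # beta cutoff: prune remaining children
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:            # alpha cutoff
                break
        return value

tree = [[3, 5], [6, [9, 2]], [1, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # prints 6
```

Because the procedure always returns the same value as a full minimax search, it is, strictly speaking, an algorithm rather than a heuristic, which is exactly the ambiguity of the alpha-beta case.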

Simon's heuristics are still utilized by programmers who are trying to solve issues that demand a lot of time and/or memory.

The game of chess is one such example, in which an exhaustive search of all potential board configurations for the proper solution is beyond the human mind's or any computer's capabilities.


Indeed, Herbert Simon and Allen Newell referred to computer chess as the Drosophila, or fruit fly, of artificial intelligence research.


Heuristics may also be used to solve issues that don't have a precise answer, such as in medical diagnosis, when heuristics are applied to a collection of symptoms to determine the most probable diagnosis.

Production rules derive from a class of cognitive science models that apply heuristic principles to specific situations, or "productions."

In practice, these rules reduce down to "IF-THEN" statements that reflect specific preconditions or antecedents, as well as the conclusions or consequences that these preconditions or antecedents justify.

"IF there are two X's in a row, THEN put an O to block," is a frequent example offered for the application of production rules to the tic-tac-toe game.

These IF-THEN statements are incorporated into an expert system's inference mechanism; a rule interpreter applies the production rules to the specific situation lodged in the context data structure, a short-term working-memory buffer containing the information supplied about that situation, and draws conclusions or makes recommendations.
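A rule interpreter of this kind can be sketched briefly. The rules and facts below are invented for illustration (they are not drawn from MYCIN or any real expert system); the interpreter simply fires any production whose conditions are present in working memory until no rule adds anything new.

```python
# Sketch of a forward-chaining production-rule interpreter.
# Rules and facts are hypothetical, invented for illustration.

rules = [
    # (IF: facts that must be present, THEN: fact to conclude)
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis"}, "recommend_lumbar_puncture"),
]

def forward_chain(facts, rules):
    """Fire every rule whose conditions are satisfied by working
    memory, adding its conclusion, until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)        # draw the conclusion
                changed = True
    return facts

memory = forward_chain({"fever", "stiff_neck"}, rules)
print("recommend_lumbar_puncture" in memory)  # prints True
```

Note how the second rule fires only because the first rule's conclusion has entered working memory, i.e., conclusions can chain.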


Production rules were crucial in the development of artificial intelligence as a discipline.


Joshua Lederberg, Edward Feigenbaum, and other Stanford University partners would later use this fundamental finding to develop DENDRAL, an expert system for detecting molecular structure, in the 1960s.

These production guidelines were developed in DENDRAL after discussions between the system's developers and other mass spectrometry specialists.

Edward Shortliffe, Bruce Buchanan, and Edward Feigenbaum used production principles to create MYCIN in the 1970s.

MYCIN contained over 600 IF-THEN statements, each reflecting domain-specific knowledge about the diagnosis and treatment of microbial disease.

PUFF, EXPERT, PROSPECTOR, R1, and CLAVIER were among the several production rule systems that followed.


Simon, Newell, and Shaw demonstrated how heuristics may overcome the drawbacks of classical algorithms, which promise answers but take extensive searches or heavy computing to find.


An algorithm is a procedure for solving a problem in a finite, clearly defined sequence of steps.

Sequential operations, conditional operations, and iterative operations are the three kinds of fundamental instructions required to create computable algorithms.

Sequential operations perform tasks in a step-by-step manner.

The algorithm only moves on to the next job when each step is completed.

Conditional operations are made up of instructions that ask questions and then choose the next step depending on the response.

One kind of conditional operation is the "IF-THEN" expression.

Iterative operations run "loops" of instructions.

These statements tell the task flow to go back and repeat a previous series of statements in order to solve an issue.

Algorithms are often compared to cookbook recipes, in which a certain order and execution of actions in the manufacture of a product—in this example, food—are dictated by a specific sequence of set instructions.
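All three kinds of instruction can appear in a single short routine. The toy example below (finding the largest value in a list) is a sketch invented to illustrate the point:

```python
# One routine showing the three fundamental instruction types.

def largest(values):
    best = values[0]          # sequential: steps execute in order
    for v in values[1:]:      # iterative: loop back over remaining items
        if v > best:          # conditional: choose the next step by a question
            best = v
    return best

print(largest([4, 8, 2]))     # prints 8
```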


Newell, Shaw, and Simon created list processing for the Logic Theorist software in 1956.


List processing is a programming technique for allocating dynamic storage.

It's mostly utilized in symbol manipulation computer applications like compiler development, visual or linguistic data processing, and artificial intelligence, among others.

Allen Newell, J. Clifford Shaw, and Herbert A. Simon are credited with creating the first list processing software, with large, sophisticated, and flexible memory structures that did not depend on consecutive machine memory locations.

List processing techniques are used in a number of higher-order languages.

IPL and LISP, two artificial intelligence languages, are the most well-known.
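The core idea can be sketched in modern code. Below, a list is built from linked two-field cells in the IPL/LISP spirit, so that items need not sit in consecutive memory locations; the function names are invented for illustration.

```python
# List processing sketch: lists as chains of linked "cons" cells
# rather than blocks of consecutive memory.

def cons(head, tail):
    return (head, tail)              # one cell: an item plus the rest

def to_python(cell):
    """Walk a chain of cells into a native Python list."""
    out = []
    while cell is not None:
        head, cell = cell
        out.append(head)
    return out

# Build the list (A B C); prepending needs only one new cell,
# with no copying or reallocation of existing storage.
lst = cons("A", cons("B", cons("C", None)))
print(to_python(lst))                # prints ['A', 'B', 'C']
```

This dynamic allocation of one cell at a time is what freed Logic Theorist-era programs from fixed, preallocated storage.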


Simon and Newell's General Problem Solver (GPS), published in the early 1960s, thoroughly explicated the essential properties of symbol manipulation as a general process that underpins all types of intelligent problem-solving behavior.


GPS formed the foundation for decades of early AI research.

The General Problem Solver is a program that arrives at solutions using a problem-solving method based on means-ends analysis and planning.

GPS was created with the goal of separating the problem-solving process from information particular to the situation at hand, allowing it to be applied to a wide range of problems.
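Means-ends analysis can be illustrated with a deliberately tiny sketch: the state is a number, the goal is another number, and each invented operator changes the state by a fixed amount. At each step the procedure measures the difference between state and goal and applies the operator that most reduces it. This is a toy in the spirit of GPS, not Simon and Newell's actual program.

```python
# Toy means-ends analysis: repeatedly pick the operator that most
# reduces the difference between the current state and the goal.
# States, goals, and operators are invented for illustration.

operators = {"add_5": 5, "add_1": 1}     # operator name -> its effect

def means_ends(state, goal):
    plan = []
    while state != goal:
        diff = goal - state
        # among operators that do not overshoot, take the biggest step
        name, effect = max(
            ((n, e) for n, e in operators.items() if e <= diff),
            key=lambda pair: pair[1],
        )
        state += effect
        plan.append(name)
    return plan

print(means_ends(0, 7))                  # prints ['add_5', 'add_1', 'add_1']
```

The problem-specific knowledge lives entirely in the `operators` table, while the difference-reducing loop is generic, which is the separation GPS aimed for.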

Simon was also an economist, a political scientist, and a cognitive psychologist.


In addition to his important contributions to organizational theory, decision-making, and problem-solving, Simon is known for the notions of bounded rationality, satisficing, and power-law distributions in complex systems.


Computer and data scientists are interested in all three themes.

According to the notion of bounded rationality, human reasoning is inherently constrained.

Humans lack the time or knowledge required to make ideal judgments; problems are difficult, and the mind has cognitive limitations.

Satisficing is a term used to describe a decision-making process that produces a solution that "satisfies" and "suffices," rather than the most ideal answer.

Customers use satisficing in market conditions when they choose things that are "good enough," meaning sufficient or acceptable.
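The distinction is easy to state in code. In the sketch below (options and scores invented), the satisficer returns the first option that clears an aspiration threshold, while the optimizer must scan everything for the maximum:

```python
# Satisficing vs. optimizing on invented data.

options = [("A", 55), ("B", 72), ("C", 98), ("D", 80)]

def satisfice(options, threshold):
    for name, score in options:   # stop at the first "good enough" option
        if score >= threshold:
            return name
    return None                   # nothing satisfies the aspiration level

def optimize(options):
    return max(options, key=lambda o: o[1])[0]   # exhaustive scan

print(satisfice(options, 70))     # prints B: good enough, found early
print(optimize(options))          # prints C: the true maximum
```

The satisficer may stop well short of the best option, but it also stops well short of examining every option, which is the trade a boundedly rational decision-maker accepts.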


Simon described how power-law distributions arise from preferential attachment mechanisms in his study of complex organizations.


When a relative change in one variable induces a proportionate change in another, power laws, also known as scaling laws, come into play.

A square is a simple illustration; when the length of a side doubles, the square's area quadruples.

Power laws may be found in biological systems, fractal patterns, and wealth distributions, among other things.

Preferential attachment processes explain why the affluent grow wealthier in income and wealth distributions: new wealth is distributed according to individuals' current level of wealth, so those with more wealth receive proportionately more income, and hence greater overall wealth, than those with less.

When graphed, such distributions often create so-called long tails.

These long-tailed distributions are being employed to explain crowdsourcing, microfinance, and online marketing, among other things.
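A preferential attachment process of this sort is simple to simulate. In the sketch below (population size, number of rounds, and random seed are all arbitrary choices), each new unit of wealth goes to an individual with probability proportional to the wealth they already hold, and the resulting distribution skews toward a long tail:

```python
# Simulating a "rich get richer" preferential attachment process.
# Population size, number of rounds, and seed are arbitrary choices.

import random

random.seed(1)
wealth = [1.0] * 50                      # everyone starts equal

for _ in range(5000):
    # choose a recipient with probability proportional to current wealth
    i = random.choices(range(len(wealth)), weights=wealth)[0]
    wealth[i] += 1.0

wealth.sort(reverse=True)
top_share = sum(wealth[:5]) / sum(wealth)
# the top 5 of 50 (10% of people) end up with well over 10% of the wealth
print(f"top 5 hold {top_share:.0%} of the total")
```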



Simon was born in Milwaukee, Wisconsin, to a Jewish electrical engineer with multiple patents who had emigrated from Germany in the early twentieth century.


His mother was a musical prodigy. Simon grew interested in the social sciences after reading books on psychology and economics written by an uncle.

He said that two works inspired his early thinking on these subjects: Norman Angell's The Great Illusion (1909) and Henry George's Progress and Poverty (1879).



Simon obtained his doctorate from the University of Chicago in 1943, with a dissertation on organizational decision-making.

Rudolf Carnap, Harold Lasswell, Charles Merriam, Nicolas Rashevsky, and Henry Schultz were among his instructors.

He started his career as a political science professor at the Illinois Institute of Technology, where he taught and conducted research.

In 1949, he transferred to Carnegie Mellon University, where he stayed until 2001.

He progressed through the ranks of the Department of Industrial Management to become its chair.

He wrote twenty-seven books and numerous published articles.

In 1959, he was elected a member of the American Academy of Arts and Sciences.

In 1975, Simon was awarded the Turing Award, and in 1978, the Nobel Memorial Prize in Economic Sciences.


~ Jai Krishna Ponnappan




See also: 


Dartmouth AI Conference; Expert Systems; General Problem Solver; Newell, Allen.


References & Further Reading:


Crowther-Heyck, Hunter. 2005. Herbert A. Simon: The Bounds of Reason in Modern America. Baltimore: Johns Hopkins Press.

Newell, Allen, and Herbert A. Simon. 1956. The Logic Theory Machine: A Complex Information Processing System. Santa Monica, CA: The RAND Corporation.

Newell, Allen, and Herbert A. Simon. 1976. “Computer Science as Empirical Inquiry: Symbols and Search.” Communications of the ACM 19, no. 3: 113–26.

Simon, Herbert A. 1996. Models of My Life. Cambridge, MA: MIT Press.



Artificial Intelligence - Robot Ethics.

     



    What Is Robot Ethics?


    Robot ethics is a branch of technology ethics that studies, clarifies, and addresses the moral possibilities and concerns that come from the design, development, and deployment of robots and other autonomous systems.

    "Robot ethics" is an umbrella phrase that encompasses a number of similar but distinct projects and undertakings.

    The earliest known articulation of a robot ethics may be found in fiction, notably in Isaac Asimov's collection of robot tales, I, Robot (1950).



    Asimov presented the Three Laws of Robotics in the short story "Runaround," which was first published in the March 1942 issue of Astounding Science Fiction: 


    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.



    (Asimov 1950, 40). In his 1985 book Robots and Empire, Asimov added a fourth element to the sequence, the "Zeroth Law," so numbered to preserve the convention that lower-numbered laws take precedence over higher-numbered ones.

    The rules are both functionalist and anthropocentric by design, outlining a series of layered limits on robot conduct in order to protect human persons and communities' interests and well-being.

    Despite this, many have criticized the laws as weak and impracticable for enforcing a true moral code.

    The principles were created by Asimov to create captivating science fiction tales, not to address real-world problems involving machine action and robot behavior.

    As a result, Asimov's rules were never meant to constitute a full and final set of instructions for real robots.

    He used the rules to create dramatic tension, imaginative circumstances, and character struggle in his stories.

    "Asimov's Three Laws of Robotics are literary techniques, not technical concepts," writes Lee McCauley (2007, 160).

    Asimov's rules have been discovered to be severely inadequate for daily practice by theorists and practitioners in the domains of robotics and computer ethics.

    Susan Leigh Anderson tackles this problem front on, showing not only that Asimov ignored his own principles as a basis for machine ethics, but also that the laws are inadequate as a foundation for an ethical framework or system (Anderson 2008, 487–93).

    As a result, although academics and developers are acquainted with the Three Laws of Robotics, they are also aware that the laws are neither computable nor implementable in any meaningful way.

    Beyond Asimov's original science fiction invention, the scholarly literature has developed several variants of robot ethics.

    Robot ethics, roboethics, and robot rights are examples of these.

    Gianmarco Veruggio, a roboticist, coined the term "roboethics" in 2002.

    It was first addressed in public in 2004 at the First International Symposium on Roboethics, and has since been expanded upon and explained in a number of publications.

    "Roboethics is an applied ethics whose goal is to build scientific/cultural/technical instruments that may be shared by diverse social groups and beliefs," according to Veruggio.

    "These technologies are intended to promote and support the development of robotics for the benefit of human society and people, as well as to assist in the prevention of its abuse against humanity" (Veruggio and Operto 2008, 1504).

    "Roboethics is neither the ethics of robots, nor any artificial ethics," says one definition, "but it is the human ethics of robots' inventors, makers, and users" (Veruggio and Operto 2008, 1504).

    As a result, roboethics is often used to establish a professional ethics for roboticists, and is therefore comparable to other professional, applied ethics formulations such as bioethics or computer ethics.

    Notable examples include the European Robotics Research Network (EURON) Roboethics Roadmap, which aimed to develop an ethical framework for "the design, manufacturing, and use of robots" (Veruggio 2006, 612), and the Foundation for Responsible Robotics (FRR), which holds that because "robots are tools without moral intelligence," their creators must be accountable for the ethical developments that accompany technological innovation (FRR 2019).

    There is also robot ethics proper. Robot ethics, according to Veruggio et al. (2011, 21), refers to the code of conduct that designers implement in a robot's artificial intelligence.

    This entails a kind of artificial ethics capable of ensuring that autonomous robots behave ethically in all scenarios where they interact with humans or when their activities may have negative implications for humans or the environment.

    Robot ethics is concerned with the moral behavior of the machine itself, as opposed to roboethics, which is concerned with the moral conduct of the human creator, developer, or user.

    Robot ethics is often confused with "machine ethics," and Veruggio uses both terms interchangeably.

    Machine ethics is concerned with the moral capabilities of machines themselves, as opposed to computer ethics, which is concerned with the moral behavior of the human inventor, developer, or user of the system (Anderson and Anderson 2007, 15).

    Under the title Moral Machines, Wendell Wallach and Colin Allen have explored a similar line of thought.

    "The area of machine morality extends the study of computer ethics beyond concern about what humans do with their computers to issues about what machines do by themselves," according to Wallach and Allen (2009, 6).

    Roboethics, like computer ethics before it, treats technology as a more or less transparent tool or instrument of human moral decision-making and behavior.

    Patrick Lin et al. (2012 and 2017) attempted to bring all of these works together under a broader definition of the word, describing it as an emerging discipline of applied moral philosophy.

    To date, the majority of robot ethics research has focused on questions of accountability, either as it pertains to the human designers of robotic systems or as it pertains to, or is attributed to, the robotic device itself.

    However, this is just one side of the story.

    As Luciano Floridi and J. W. Sanders (2001, 349–50) correctly point out, ethics is about social connections between two interacting components: the actor (or agent) and the action receiver.

    The majority of roboethics and robot ethics initiatives may be classified as solely agent-oriented endeavors.

    "Robot rights," a term coined by philosophers Mark Coeckelbergh (2010) and David Gunkel (2018), as well as legal scholars Kate Darling (2012) and Alain Bensoussan and Jérémy Bensoussan. 

    For these researchers, robot ethics encompasses not just the robot's moral behavior, but also the artifact's moral and legal standing, as well as its place in our ethical and legal systems as a potential subject rather than merely an object.

    The European Parliament has put this notion to the test, proposing a new legal category of electronic person to cope with the societal integration of increasingly autonomous robotics and AI systems.

    In conclusion, the phrase "robot ethics" encompasses a wide range of initiatives relating to robots and their societal influence and repercussions.

    In its more specialized form, roboethics refers to a field of applied or professional ethics concerned with moral dilemmas connected to the design, development, and implementation of robots and other autonomous technologies.

    In a broader sense, robot ethics refers to a branch of moral philosophy concerned with the moral and legal implications of robots acting as both agents and patients.




    Robot ethics is a rising multidisciplinary study endeavor that aims to understand the ethical implications and repercussions of robotic technology, particularly autonomous robots. 


    It is generally located at the crossroads of applied ethics and robotics. 

    Researchers, thinkers, and academics from fields as varied as robotics, computer science, psychology, law, philosophy, and others are tackling the difficult ethical issues surrounding the development and deployment of robotic technology in society. 

    Many fields of robotics are touched, particularly those that include robots interacting with people, such as elder care and medical robotics, as well as robots for different search and rescue tasks, including military robots, and other types of service and entertainment robots. 

    While military robots were initially at the forefront of the debate (e.g., whether and when autonomous robots should be allowed to use lethal force, whether they should be allowed to make those decisions autonomously, etc. ), the impact of other types of robots, particularly social robots, has grown in importance in recent years. 


    The IEEE-RAS Technical Committee on Robot Ethics' goal is to offer a platform for the IEEE-RAS to raise and solve the pressing ethical issues raised by and linked with robotics research and technology. 


    The TC (now in its third generation) has been organizing various types of meetings (from satellite workshops at main conferences to standalone venues) to draw attention to the increasingly urgent ethical issues raised by rapidly advancing robotics technology since its inception almost a decade ago in 2004. 

    For example, in recent major conferences, an increasing number of workshops and special sessions have been offered (such as ICRA, IACAP, AISB and others). 

    There are also plans for further seminars, special sessions, and stand-alone locations. 

    Furthermore, an increasing number of publications, public lectures, and interviews by former and current TC co-chairs and other researchers invested in this topic are aimed at raising awareness of the urgent need for researchers and non-researchers alike to understand the social impact and ethical implications of robot technology. 

    The TC continues to promote public awareness and plans to arrange a standalone worldwide event on robot ethics in the near future, in addition to special sessions and seminars on robot ethics at important international venues. 

    Artificial Intelligence and Robotics Ethics. 

    Artificial intelligence (AI) and robotics are digital technologies that will have a major influence on the development of humanity in the near future.

    They raise basic questions about what we should do with these systems, what the systems themselves should do, what risks they pose, and how we might control them.



    Context. 


    The ethics of AI and robots are often centered on different "concerns," which is a common reaction to new technology. 

    Many of these concerns turn out to be rather quaint (trains are too fast for souls); some are predictably wrong when they claim that technology will fundamentally change humans (telephones will destroy personal communication, writing will destroy memory, video cassettes will render going out obsolete); some are broadly correct but moderately relevant (digital technology will destroy industries that make photographic film, cassette tapes, or vinyl records); and some are broadly correct and deeply relevant (cars will kill children and fundamentally change the landscape).

    The purpose of a piece like this is to dissect the concerns and deflate the non-issues. 

    Some technologies, such as nuclear power, automobiles, and plastics, have sparked ethical and political debate as well as considerable regulatory initiatives to limit their trajectory, generally after some harm has been done. 

    New technologies, in addition to "ethical issues," challenge present norms and conceptual frameworks, which is of special interest to philosophy. 

    Finally, once we have understood a technology in its context, we need to shape our societal response, including regulation and law.

    All of these characteristics are present in modern AI and robotics technologies, as well as the more basic worry that they will usher in the end of the period of human control on Earth. 

    In recent years, the ethics of AI and robotics have gotten a lot of press attention, which helps support related research but also risks undermining it: the press frequently talks as if the issues under discussion are just predictions of what future technology will bring, and as if we already know what would be most ethical and how to get there. 

    Risk, security (Brundage et al. 2018, see the Other Internet Resources section below, henceforth [OIR]), and effect prediction are therefore the focus of press attention (e.g., on the job market). 

    As a consequence, a discussion of mostly technical issues focuses on how to accomplish a desired result. 

    Image and public relations are also driving current policy and industry debates, where the label "ethical" is often little more than the new "green," perhaps used for "ethics washing." For an issue to qualify as a problem for AI ethics, we would need to be genuinely unsure of what the right thing to do is.

    Under this view, job loss, theft, or killing with AI are not in themselves problems of AI ethics; the question is whether they are permissible under particular circumstances.

    This article focuses on serious ethical issues for which we do not have immediate solutions. 

    Last but not least, AI and robotics ethics is a relatively new field within applied ethics, with significant dynamics but few well-established issues and authoritative overviews, though there is a promising outline (European Group on Ethics in Science and New Technologies 2018) and there are beginnings on societal impact (Floridi et al. 2018; Taddeo and Floridi 2018; S. Taylor et al. 2018; Walsh 2018; Bryson 2019; Gibert 2019; Whittlestone et al. 2019; AI HLEG 2019 [OIR]; IEEE 2019).

    As a result, this article must not only report what the community has already accomplished, but also offer an order where none yet exists.



    Artificial Intelligence and Robotics 



    The term "artificial intelligence" (AI) refers to any kind of artificial computer system that exhibits intelligent behavior, i.e., complicated behavior that is conducive to achieving objectives. 

    We don't want to limit "intelligence" to what would need intelligence if performed by people, as Minsky proposed (1985). 

    This means we include a variety of machines, including "technical AI" computers that have limited learning and reasoning skills but excel at automating certain activities, as well as "general AI" machines that attempt to produce a generally intelligent agent. 

    As a result, the topic of "philosophy of AI" has emerged as a way for AI to reach closer to human skin than previous technologies. 


    Perhaps this is because AI's goal is to develop computers that have a trait that is important to how we humans understand ourselves: feelings, thoughts, and intelligence. 

    Sensing, modeling, planning, and action are probably the most important functions of an artificially intelligent agent, but current AI applications also include perception, text analysis, natural language processing (NLP), logical reasoning, game-playing, decision support systems, data analytics, predictive analytics, autonomous vehicles, and other forms of robotics (P. Stone et al. 2016). 

    To accomplish these goals, AI may use a variety of computing strategies, such as traditional symbol-manipulating AI inspired by natural cognition, or machine learning using neural networks (Goodfellow, Bengio, and Courville 2016; Silver et al. 2018). 


    It's worth mentioning that the word "AI" was widely used from 1950 to 1975, then fell out of favor during the "AI winter" of 1975–1995, and was restricted. 


    As a consequence, areas such as "machine learning," "natural language processing," and "data science" were often not labelled as "AI." Since about 2010, the usage has broadened again, and "AI" now at times encompasses much of computer science and even high-tech generally.

    Now it's a household name, a booming industry with significant capital investment (Shoham et al. 2018), and it's on the verge of regaining popularity. 

    As Erik Brynjolfsson has suggested, it may enable us to virtually eliminate global poverty, dramatically decrease disease, and provide better education to essentially everyone on the planet (quoted in Anderson, Rainie, and Luchsinger 2018).


    While AI may be totally software, robots are physical machines that move. 

    Robots are subject to physical impact via "sensors," and they exert physical force on the environment via "actuators," such as a gripper or a turning wheel.

    As a result, self-driving automobiles or aircraft are robots, and only a small percentage of robots are "humanoid" (human-shaped), as shown in movies. 

    Some robots use artificial intelligence, whereas others do not: Typical industrial robots mindlessly execute scripts with limited sensory input and no learning or thinking (about 500,000 new industrial robots are deployed each year (IFR 2019 [OIR])). 

    While robotics systems are likely to generate more anxiety among the general public, AI systems are more likely to have a bigger influence on humans. 

    Furthermore, AI or robotics systems that are designed to do a certain set of tasks are less likely to introduce new challenges than systems that are more flexible and autonomous. 

    As a result, robotics and AI may be thought of as two overlapping sets of systems: AI-only systems, robotics-only systems, and systems that are both. 

    We're interested in all three, thus the scope of this essay isn't only the intersection of the two sets, but also their union. 




    Some Thoughts on AI Robotics Policy 


    One of the issues raised in this essay is policy. 

    There is a lot of public debate on AI ethics, and politicians often declare that the issue needs new policy, which is easier said than done: Technology policy is challenging to develop and implement in practice. 

    Policy instruments range from incentives and funding, infrastructure, and taxation to good-will statements, regulation by various actors, and legislation.

    AI policy may inadvertently collide with other goals of technology policy or general policy. 

    In recent years, governments, parliaments, organizations, and business circles in industrialized nations have published studies and white papers, and some have coined catchphrases ("trusted/responsible/humane/human-centered/good/beneficial AI"), but is that all that is required? For a survey, see Jobin, Ienca, and Vayena (2019), as well as V. Müller's list of PT-AI Policy Documents and Institutions.

    People working in ethics and policy may have a propensity to exaggerate the influence and risks posed by new technologies, while underestimating the scope of present regulation (e.g., for product liability). 

    Businesses, the military, and certain government agencies, on the other hand, have a propensity to "simply speak" and conduct some "ethical washing" in order to maintain a positive public image and go on as before. 

    Putting in place legally enforceable regulations would put established corporate structures and practices to the test. 

    Actual policy is not only an application of ethical theory; it is also influenced by society power structures, and those with power will resist any restrictions. 

    As a result, there's a good chance that regulation will be rendered useless in the face of economic and political power. 

    There have been several remarkable starts, despite the fact that virtually little real policy has been produced: The current EU policy statement says that "trustworthy AI" should be legal, ethical, and technically sound, and then lists seven criteria: human supervision, technological robustness, privacy and data governance, transparency, fairness, well-being, and accountability (AI HLEG 2019 [OIR]). 

    Much European research today operates under the banner of "responsible research and innovation," and "technology evaluation" has been a common area since nuclear power's inception. 

    In the subject of information technology, professional ethics is also a standard field, and this covers topics that are pertinent to this article. 

    Perhaps a "code of ethics" for AI developers, similar to medical practitioners' codes of ethics, is a possibility here (Véliz 2019). 

    What data science should be doing in this respect is discussed in L. Taylor and Purtova (2019).

    We also believe that, rather than the area as a whole, much regulation will ultimately address individual applications or technologies of AI and robots. 

    A useful overview of an ethical framework for AI can be found in (European Group on Ethics in Science and New Technologies 2018: 13ff).

    AI policy in general is discussed by Calo (2018), as well as Crawford and Calo (2016), Stahl, Timmermans, and Mittelstadt (2016), Johnson and Verdicchio (2017), and Giubilini and Savulescu (2018).

    In the discipline of "Science and Technology Studies," a more political perspective on technology is often emphasized (STS). 

    Concerns in STS are frequently quite similar to those in ethics, as works like The Ethics of Invention (Jasanoff 2016) demonstrate (see also Jacobs et al. 2019 [OIR]).

    Rather than discussing AI or robotics in general, we discuss policy for each type of issue separately in this article. 

     



    Ethical Issues in the Human Use of AI and Robotics.


    In this section we look at issues that arise with particular uses of AI and robotics systems that can be more or less autonomous, which is to say issues that some uses of these technologies raise and others do not.

    It's important to remember, too, that technological advances will always make certain uses easier, and hence more common, while hindering others.

    As a result, the design of technical artifacts has ethical relevance for their use (Houkes and Vermaas 2010; Verbeek 2011), so in this field we need "responsible design" in addition to "responsible use." The emphasis on use does not presuppose which ethical approaches are best suited to addressing these issues; virtue ethics (Vallor 2017) may well be more appropriate than consequentialist or value-based approaches (Floridi et al. 2018).

    This section is also unaffected by the debate over whether AI systems have true "intelligence" or other mental properties: It would also apply if AI and robots were just seen as the present face of automation (see Müller forthcoming-b). 




    Surveillance & Privacy 


    There is a broader debate over privacy and surveillance in information technology (e.g., Macnish 2017; Roessler 2017), which primarily concerns access to private data and individually identifiable information. 

    "The right to be left alone," "information privacy," "privacy as a feature of personhood," "control over one's own information," and "the right to secrecy" are all well-known facets of privacy (Bennett and Raab 2006). 

    Surveillance by other state agents, businesses, and even individuals is now included in privacy studies, which previously focused on state surveillance by secret services. 

    Technology has advanced dramatically in recent decades, but regulation has lagged behind (though there is the Regulation (EU) 2016/679), resulting in a state of anarchy that is used by the most powerful parties, sometimes in plain sight, sometimes in secret. 

    The digital environment has substantially expanded: All data collection and storage is now digital, our lives are becoming more digital, the majority of digital data is now linked to a single Internet, and sensor technology is rapidly being used to create data on non-digital parts of our lives. 

    AI broadens the scope of intelligent data collecting as well as the scope of data analysis. 

    This applies to both broad monitoring of whole populations and traditional targeted surveillance. 

    Furthermore, much of this data is traded between agents for a fee.

    Controlling who collects which data and who has access, on the other hand, is considerably more difficult in the digital world than it was in the analogue world of paper and phone conversations. 


    Many new AI technologies magnify already identified problems. 


    Face recognition in images and videos, for example, allows for the identification, and hence the profiling and searching, of individuals (Whittaker et al. 2018: 15ff).

    Add to this other identification techniques, such as "device fingerprinting," which are prevalent on the Internet (and occasionally disclosed in the "privacy policy").

    As a consequence, "there is a disturbingly full image of ourselves in this immense ocean of data" (Smolan 2016: 1:01). 

    A related controversy has not received the attention it deserves.

    Our "free" services are paid for by the data trail we leave behind—but we aren't notified about the data collecting or the worth of this new raw material, and we are pushed into leaving even more data. 

    The major data-collection aspect of the business for the "big 5" corporations (Amazon, Google/Alphabet, Microsoft, Apple, and Facebook) seems to be built on deceit, exploiting human vulnerabilities, promoting procrastination, inducing addiction, and manipulation (Harris 2016 [OIR]). 


    In this "surveillance economy," the major goal of social media, gaming, and much of the Internet is to acquire, keep, and direct attention—and therefore data supply. 

    "The Internet's economic model is surveillance" (Schneier 2015). 

    "Surveillance capitalism" is a term used to describe the surveillance and attention economy (Zuboff 2019). 

    It has resulted in several efforts to break free from these companies' hold, such as via "minimalism" (Newport 2019) and the open source movement, but it seems that today's people lack the degree of autonomy required to break free while continuing to live and work normally. 

    If "ownership" is the proper connection here, we have lost ownership of our data. 

    We have, in some ways, lost control of our data. 

    These systems often disclose truths about us that we prefer to keep hidden or are unaware of: they know more about us than we do. 

    Even just observing our online behavior provides insight into our mental processes (Burr and Cristianini 2019) and may be used to manipulate us (see below section 2.2).

    As a result, calls for the protection of "derived data" have been made (Wachter and Mittelstadt 2019). 

    In the final sentence of his best-selling book Homo Deus, Harari asks about the long-term consequences of AI: what will happen to society, politics, and everyday life when non-conscious yet extremely intelligent algorithms know us better than we know ourselves? (2016, 462). Robotic devices have not yet played a significant role in this area, apart from security patrols, but that will change as they become more ubiquitous outside of industrial contexts.

    They are destined to become part of the data-gathering machinery, alongside the "Internet of things," so-called "smart" systems (phone, TV, oven, lamp, virtual assistant, house, ...), "smart city" projects (Sennett 2018), and "smart governance." On the technical side, (relative) anonymisation, access control (plus encryption), and other models in which computation is carried out on fully or partially encrypted input data are now standard in data science (Stahl and Wright 2018); in the case of "differential privacy," calibrated noise is added to the output of queries so that individual records cannot be recovered (Dwork et al. 2006; Abowd 2017).
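    The "calibrated noise" idea behind differential privacy can be sketched in a few lines. This is an illustrative sketch only, not the mechanism of any particular system: the function name private_count, the example query, and the epsilon values are invented for the example. A counting query has sensitivity 1 (one person joining or leaving the dataset changes the true answer by at most 1), so Laplace noise with scale 1/epsilon yields epsilon-differential privacy in the sense of Dwork et al.

```python
import numpy as np

def private_count(records, predicate, epsilon):
    """Answer a counting query with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one
    record changes the true answer by at most 1), so Laplace
    noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Smaller epsilon => more noise => stronger privacy, less accuracy.
answer = private_count(range(100), lambda r: r < 50, epsilon=0.1)
```

    The choice of epsilon makes the trade-off explicit: stronger privacy guarantees come at the cost of noisier, less useful answers, a concrete instance of the tension between privacy protection and product quality.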

    While more time and money are required, such solutions may help to avoid many of the privacy concerns. 

    Better privacy has also been considered by certain firms as a competitive advantage that can be exploited and sold for a profit. 

    One of the most challenging aspects of regulation is enforcing it, both at the state level and at the level of the person who has a claim. 

    They must identify the responsible legal entity, prove the action, perhaps establish intent, find a court that declares itself competent... and finally persuade the court to actually enforce its judgment.

    Consumer rights, product responsibility, and other civil liabilities, as well as protection of intellectual property rights, are often lacking or difficult to enforce with digital goods. 

    This implies that enterprises with a "digital" history are used to testing their goods on customers without fear of legal repercussions while vigorously maintaining their intellectual property rights. 

    This "Internet Libertarianism" is frequently misinterpreted as implying that technological innovations would solve social issues on their own (Mozorov 2013). 

    Manipulation of Behaviour. 

    The ethical difficulties raised by AI in surveillance extend beyond the collection of data and the focus of attention: They include the use of data to influence behavior, both online and offline, in a manner that impairs rational decision-making autonomy. 

    Of course, attempts to control behavior are nothing new, but when AI systems are used, they may take on a new dimension. 

    Users are subject to "nudges," manipulation, and deceit because of their intensive connection with data systems and the rich information about persons that this gives. 

    With enough past data, algorithms may be used to target people or small groups with precisely the kind of information that would most likely affect them. 

    A 'nudge' alters the environment in such a manner that it impacts behavior in a predictable, positive way that is simple and inexpensive to avoid (Thaler & Sunstein 2008). 

    From here, it's a short step to paternalism and manipulation. 

    Many advertisers, marketers, and internet vendors will use behavioural biases, deceit, and addiction development to maximize profit (Costa and Halpern 2019 [OIR]). 

    The economic strategy for most of the gambling and gaming businesses involves manipulation, but it is expanding to other sectors, such as low-cost airlines. 

    This manipulation is done using "dark patterns" in web page or gaming interface design (Mathur et al. 2019). 

    Gambling and the selling of addictive drugs are heavily regulated at the present, but online manipulation and addiction are not—despite the fact that manipulating online behavior is becoming a key Internet business model. 

    Furthermore, political propaganda is now primarily distributed through social media. 

    As in the Facebook-Cambridge Analytica "scandal" (Woolley and Howard 2017; Bradshaw, Neudert, and Howard 2019), this influence may be used to sway voting behavior, and if effective, it may jeopardize human liberty (Susser, Roessler, and Nissenbaum 2019). 

    Improved AI "faking" technologies turn what was previously dependable evidence into unreliable evidence—digital pictures, voice recordings, and video have all been affected. 

    It will be rather simple to produce (rather than change) "deep fake" text, images, and video material with any desired content in the near future. 

    Real-time engagement with people by text, phone, or video will soon be faked as well. 

    As a result, we can't trust digital connections while still becoming more reliant on them. 

    Another difficulty is that AI machine learning algorithms depend on large volumes of data for training. 

    As a result, there will often be a trade-off between privacy and data rights vs. 

    product technical excellence. 

    This has an impact on the consequentialist assessment of privacy-invading behaviors. 

    This field's policy has its ups and downs: Businesses' lobbying, secret services, and other governmental agencies that rely on monitoring are putting a lot of pressure on civil liberties and the preservation of individual rights. 

    In comparison to the pre-digital period, when communication was dependent on letters, analogue telephone conversations, and human interaction, and monitoring was subject to severe legal limits, privacy protection has deteriorated dramatically. 

    Despite the fact that the EU General Data Protection Regulation (Regulation (EU) 2016/679) has enhanced privacy protection, the US and China favor growth with less regulation (Thompson and Bremmer 2018), most likely in the expectation of gaining a competitive edge. 

    It is evident that, with the assistance of AI technology, state and corporate actors have expanded their power to breach privacy and influence individuals, and will continue to do so to serve their own interests—unless legislation in the public interest intervenes.



    AI Systems' Transparency. 


    The primary difficulties in what is now referred to as "data ethics" or "big data ethics" are opacity and bias (Floridi and Taddeo 2016; Mittelstadt and Floridi 2016). 

    "Significant issues regarding a lack of due process, accountability, community participation, and auditing" are raised by AI systems for automated decision assistance and "predictive analytics" (Whittaker et al. 2018: 18ff). 

    They are part of a power structure that "creates decision-making procedures that restrict and limit human involvement" (Danaher 2016b: 245). 

    At the same time, the impacted individual will often be unable to understand how the system arrived at this result, i.e., the system will be "opaque" to that person. 


    Even an expert will struggle to understand how a certain pattern was detected, much alone what the pattern is, if the system uses machine learning. 

    This opacity exacerbates bias in decision systems and data sets. 

    So, at least in circumstances where there is a desire to eliminate bias, opacity and bias analysis go hand in hand, and political responses must address both challenges simultaneously. 

    Many AI systems depend on supervised, semi-supervised, or unsupervised machine learning methods in (simulated) neural networks to extract patterns from a given dataset, with or without "correct" answers supplied. 

    With these methods, the "learning" identifies patterns in the data and labels them in a manner that looks relevant to the choice the system makes, despite the fact that the programmer has no idea which patterns in the data the system has employed. 

    In reality, the algorithms are changing, so when fresh data or feedback ("this was accurate," "this was wrong") is received, the learning system's patterns alter. 

    This implies that the end result is opaque to the user and coders. 

    Furthermore, the program's quality depends heavily on the data it is given: "garbage in, garbage out," as the old adage goes. So if the data already contains a bias (for example, police data on suspects' skin color), the algorithm will reproduce that bias. 
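    The point that a learner can only echo regularities already present in its data can be made concrete with a toy example. Everything here is hypothetical: the group names, labels, and counts are invented, and fit_majority, a trivial majority-vote "learner," stands in for any real training algorithm.

```python
from collections import Counter, defaultdict

def fit_majority(examples):
    """A toy 'learner': record the majority label seen for each group.

    It stands in for any pattern-extracting algorithm: all it can
    do is echo regularities already present in its training data.
    """
    counts = defaultdict(Counter)
    for group, label in examples:
        counts[group][label] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

# Hypothetical, deliberately skewed training data: group "B" was
# historically policed more heavily, so "suspect" labels dominate.
train = ([("A", "clear")] * 90 + [("A", "suspect")] * 10
         + [("B", "clear")] * 40 + [("B", "suspect")] * 60)

model = fit_majority(train)
# The skew in the data becomes the rule the model applies to everyone.
```

    Nothing in the code is malicious; the bias enters entirely through the sampling of the training data, which is why a "datasheet" documenting how a dataset was collected helps in detecting it.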

    There have been ideas for a common representation of datasets in the form of a "datasheet," which would make detecting bias more straightforward (Gebru et al. 2018 [OIR]). 

    There's also a lot of new research on the limits of machine learning systems, which are basically powerful data filters (Marcus 2018 [OIR]). 

    Some have contended that today's ethical issues are the consequence of AI's technological "shortcuts" (Cristianini forthcoming). 

    Numerous technical efforts have aimed at "explainable AI," starting with Van Lent, Fisher, and Mancuso (1999) and Lomas et al. (2012), and more recently a DARPA initiative (Gunning 2017 [OIR]). 

    The requirement for a system for clarifying and articulating the power structures, biases, and impacts that computational artefacts exert in society is often referred to as "algorithmic accountability reporting" (Diakopoulos 2015: 398). 

    This isn't to say that we should expect an AI to "explain its reasoning"—doing so would need significantly greater moral autonomy than we now provide AI systems (see section 2.10). 

    If we rely on a system that is supposedly superior to humans but cannot explain its decisions, there is a fundamental problem for democratic decision-making, according to politician Henry Kissinger. 

    We may have "created a potentially dominant technology in search of a guiding philosophy," according to him (Kissinger 2018). 

    Danaher (2016b) refers to this issue as "the threat of algocracy" (building on the use of "algocracy" in Aneesh 2002 [OIR], 2006). 

    To prevent AI becoming a force that leads to a Kafka-style impenetrable suppression mechanism in public administration and elsewhere, Cave (2019) emphasizes the necessity for a larger social shift toward more "democratic" decision-making. 

    O'Neil, in her renowned book Weapons of Math Destruction (2016), as well as Yeung and Lodge (2019), have emphasized the political dimension of this debate. 

    Some of these concerns have been addressed in the EU with the (Regulation (EU) 2016/679), which stipulates that consumers would have a legal "right to explanation" when confronted with a choice based on data processing—how far this goes and to what extent it can be enforced is debatable (Goodman and Flaxman 2017; Wachter, Mittelstadt, and Floridi 2016; Wachter, Mittelstadt, and Russell 2017). 

    According to Zerilli et al. (2019), there may be a double standard here: we demand a high level of explanation for machine-based decisions even though humans do not always meet that standard themselves. 


     AI Bias.


    Automated AI decision-support systems and "predictive analytics" operate on data and produce an "output" decision. 

    This output might be anything from "this restaurant matches your tastes" to "the patient in this X-ray has finished bone development," "credit card application refused," "donor organ will be donated to another patient," "bail is denied," or "target identified and engaged." Data analysis is often used in "predictive analytics" in business, healthcare, and other industries to forecast future events; as prediction becomes simpler, it will become a cheaper commodity. 

    Prediction is used in "predictive policing" (NIJ 2014 [OIR]), which many worry will erode civil liberties (Ferguson 2017), since it takes power away from those whose behavior is predicted. 

    Many of the concerns about policing, however, seem to be based on future scenarios in which law enforcement anticipates and punishes planned activities rather than waiting until a crime is committed (as in the 2002 film "Minority Report"). 

    One problem is that these systems may amplify bias already present in the data used to set them up, for example by increasing police patrols in an area and thereby uncovering more crime in that area. 
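The feedback dynamic just described is easy to make concrete. Below is a deliberately simplified toy model (the district names, starting counts, patrol budget, and crime rate are all invented for illustration): both districts have the same true crime rate, but one starts with slightly more recorded crime, and patrols are allocated in proportion to the records.

```python
def patrol_feedback(rounds=20):
    """Toy model of a predictive-policing feedback loop: patrols follow
    recorded crime, and patrols uncover crime at the true rate, so an
    initial artefact in the records compounds round after round."""
    true_rate = 0.10      # identical underlying crime rate in both districts
    patrols = 100         # patrols dispatched per round
    recorded = {"A": 12.0, "B": 10.0}   # A starts with a small data artefact
    for _ in range(rounds):
        total = sum(recorded.values())
        for district in recorded:
            share = recorded[district] / total       # patrol share follows records
            recorded[district] += patrols * share * true_rate  # expected detections
    return recorded
```

After twenty rounds the absolute gap between the two districts' records has grown roughly tenfold, even though nothing about the underlying crime differs: the system records its own allocation decisions as if they were facts about the districts.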

    Actual "predictive policing" or "intelligence-led policing" tactics are primarily concerned with determining where and when police personnel will be most required. 

    In workflow support software (e.g., "ArcGIS"), police officers may also be given additional data, giving them greater power and allowing them to make better judgments. 

    Whether this is a problem depends on the appropriate level of trust in the technical quality of these systems, and on the evaluation of the aims of the police work itself. 

    A recent paper title may point in the right direction here: "AI ethics in predictive policing: From models of threat to an ethics of care" (Asaro 2019). 

    Bias typically arises when a person makes an unfair judgment because of a feature that is actually irrelevant to the matter at hand, such as a prejudiced assumption about members of a group. 

    One kind of bias is thus a learned cognitive feature of a person, frequently not made explicit. 

    The individual in question may not be conscious of their prejudice, and they may even be openly and publicly opposed to a bias that is discovered (e.g., through priming, cf. Graham and Lowery 2004). 

    Binns (2018) discusses fairness vs. bias in machine learning. 

    Apart from the social phenomenon of learned prejudice, the human cognitive system is prone to a variety of "cognitive biases," such as the "confirmation bias," in which people tend to interpret information as confirming what they already believe. 

    This second kind of bias is generally agreed to impair rational judgment (Kahneman 2011)—though certain cognitive biases, such as the efficient use of resources for intuitive judgment, may provide an evolutionary benefit. 

    It's debatable whether AI systems should or might exhibit cognitive bias. 

    A third kind of bias is present when data contains systematic error, e.g., "statistical bias." 

    Strictly speaking, any given dataset will only be unbiased for a single kind of problem, so merely creating a dataset carries the risk that it will be used for a different kind of problem and then turn out to be biased for that kind. 

    Machine learning on such data will then not only fail to recognize the bias, but will codify and automate this "historical bias." An automated recruitment screening system at Amazon (discontinued early 2017) was revealed to be biased against women, presumably because the company had a history of discriminating against women in hiring. 

    The "Correctional Offender Management Profiling for Alternative Sanctions" (COMPAS), a system for predicting whether a defendant will reoffend, was found to be as accurate (65.2 percent) as a group of random humans (Dressel and Farid 2018), with more false positives and fewer false negatives for black defendants. 
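The COMPAS finding concerns exactly these per-group error rates rather than overall accuracy. A minimal sketch of how such rates are computed (the function name and the toy records are invented for illustration; this is not the COMPAS data):

```python
def group_error_rates(records):
    """Per-group false-positive and false-negative rates for a binary
    classifier.  `records` holds (group, predicted, actual) tuples,
    with predicted/actual in {0, 1} (1 = predicted/actual reoffense)."""
    counts = {}
    for group, predicted, actual in records:
        c = counts.setdefault(group, {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
        if actual == 0:
            c["neg"] += 1
            c["fp"] += predicted        # flagged as high risk, did not reoffend
        else:
            c["pos"] += 1
            c["fn"] += 1 - predicted    # flagged as low risk, did reoffend
    return {g: {"fpr": c["fp"] / c["neg"] if c["neg"] else 0.0,
                "fnr": c["fn"] / c["pos"] if c["pos"] else 0.0}
            for g, c in counts.items()}
```

Two groups can show identical overall accuracy while differing sharply in these rates, which is why "as accurate as random humans" and "biased across groups" are compatible findings.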

    The problem with such systems, then, is bias as well as people's undue trust in them. 

    Eubanks (2018) investigates the political dimensions of such automated systems in the United States. 

    There are substantial technological efforts underway to identify and eliminate bias from AI systems, but they are still in their infancy: see the UK Institute for Ethical AI & Machine Learning (Brownsword, Scotford, and Yeung 2017; Yeung and Lodge 2019). 

    Technical remedies have their limits: they require a mathematical definition of fairness, which is difficult to come by (Whittaker et al. 2018: 24ff; Selbst et al. 2019), as well as a formal notion of "race" (see Benthall and Haynes 2019). 

    A proposal for an institution has been submitted (Veale and Binns 2017). 



    Human-Robot Interaction. 


    Human-robot interaction (HRI) is a distinct academic discipline that today pays close attention to ethical issues, perception dynamics on both sides, and both the diversity of interests and the complexities of the social milieu, including co-working (e.g., Arnold and Scheutz 2017). 

    Calo, Froomkin, and Kerr (2016), Royakkers and van Est (2016), and Tzafestas (2016) are useful surveys of robotics ethics; Lin, Abney, and Jenkins (2017) is a standard collection of papers. 

    While AI may be used to persuade people to believe and act in certain ways (see section 2.2), it can also be used to control robots that are problematic if their methods or appearance are deceptive, endanger human dignity, or violate Kant's "respect for humanity" criterion. 

    Humans are quick to ascribe mental traits to things and empathize with them, particularly when the items' exterior appearance resembles that of live creatures. 

    This may be used to trick people (or animals) into giving robots or AI systems more intellectual or even emotional weight than they deserve. 

    Other aspects of humanoid robots (e.g., Hiroshi Ishiguro's remote-controlled Geminoids) are problematic in this sense, and some examples have been plainly fraudulent for public relations reasons (e.g., Hanson Robotics' "Sophia"). 

    Of course, certain very fundamental corporate ethical and legal limits apply to robots as well, such as product safety and responsibility, or non-deception in advertising. 

    Many of the problems that have been mentioned seem to be addressed by the present limits. 

    However, there are several elements of human-human connection that seem to be uniquely human in ways that robots may not be able to replicate: compassion, love, and sex. 




    Care Robots



    The use of robots in human health care is presently at the level of concept studies in real environments, but it may become a usable technology within a few years, raising fears of a dystopian future of dehumanized care (A. Sharkey and N. Sharkey 2011; Robert Sparrow 2016). 

    Robots that assist human caregivers (e.g., in lifting patients or transporting materials), robots that enable patients to do certain tasks on their own (e.g., eat with a robotic arm), and robots that are given to patients as companions and comfort (e.g., the "Paro" robot seal) are all examples of current systems. 

    See van Wynsberghe (2016), Nørskov (2017), and Fosch-Villaronga and Albó-Canals (2019) for overviews, and Draper et al. (2014) for a survey of users. 

    People have claimed that we will need robots in ageing societies, which is one reason why the topic of care has risen to the fore. 

    This argument is flawed because it assumes that as people live longer, they will need more care and that it will be impossible to recruit more people into caring professions. 

    It might also reveal an age prejudice (Jecker forthcoming). 

    Most crucially, it misses the essence of automation, which is about assisting people to work more effectively rather than replacing them. 

    It is not obvious that there is a problem here, given that the discussion mostly centers on the fear of robots dehumanizing care, whereas the actual and foreseeable robots in care are assistive robots for classic automation of technical tasks. 

    They are therefore "care robots" solely in the sense that they execute activities in healthcare facilities, not in the sense that a human "cares" for the patients. 

    The value of "being cared for" seems to depend on this intentional sense of "care," which prospective robots cannot provide. 

    The worry about robots in care is thus not so much the absence of such intentional care as the prospect of needing fewer human caregivers. 

    Surprisingly, caring for anything, even a virtual agent, may be beneficial to the caregiver (Lee et al. 2019). 

    Unless the deception is offset by a sufficiently high utility benefit, a system that purports to care would be misleading and hence problematic (Coeckelbergh 2016). 

    Some robots that pretend to "care" on a rudimentary level are already on the market (Paro seal), while others are under development. 

    To some degree, feeling cared for by a machine may be progress for certain people. 





    Sex Robots


    Several tech optimists have stated that humans will be interested in sex and friendship with robots and will be comfortable with the notion (Levy 2007). 

    This seems extremely possible, given the diversity of human sexual tastes, including sex toys and sex dolls: The debate is whether such gadgets should be produced and marketed, and if there should be any restrictions in this sensitive field. 

    It seems to have just entered the mainstream of "robot philosophy" (Sullins 2012; Danaher and McArthur 2017; N. Sharkey et al. 2017 [OIR]; Bendel 2018; Devlin 2018). 

    Humans have traditionally had strong emotional relationships to items, so maybe friendship or even love with a dependable android is appealing, particularly to those who have difficulty interacting with real people and already prefer dogs, cats, birds, computers, or tamagotchis. 

    Danaher (2019b) counters Nyholm and Frank (2017) by arguing that these may be real friendships and hence a worthwhile objective. 

    Even if it is shallow, it seems that a friendship might boost overall usefulness. 

    There is a problem of dishonesty in these conversations, since a robot cannot (at this time) mean what it says or have affections for a person. 

    Humans are renowned for attributing emotions and ideas to creatures that act as though they had sentience, even to plainly inanimate objects that display no behavior at all. 

    In addition, it seems that paying for deceit is an integral aspect of the conventional sex economy. 

    Finally, there are worries that have always accompanied matters of sex, such as consent (Frank and Nyholm 2017), aesthetic concerns, and the fear that certain experiences may "corrupt" people. 

    Human behavior is shaped by experience, and pornography or sex robots are likely to encourage the idea of other people as simply objects of desire, or even recipients of abuse, and therefore destroy a deeper sexual and romantic experience. 

    In this vein, the "Campaign Against Sex Robots" argues that these devices are a continuation of slavery and prostitution (Richardson 2016). 





    Employment and Automation.


    AI and robots, it seems, will result in large increases in productivity and consequently global prosperity. 

    Though the focus on "growth" is a recent phenomenon, the desire to boost productivity has long been a feature of the economy (Harari 2016: 240). 

    Automation, on the other hand, often means that fewer individuals are needed to produce the same amount of output. 

    However, this does not always imply a reduction in total employment since accessible wealth rises, which might boost demand enough to offset productivity gains. 

    In the long term, increased productivity in industrial society has resulted in an increase in total wealth. 

    Historically, major labor market changes have occurred; for example, farming employed almost 60% of the workforce in Europe and North America in 1800, but just 5% in 2010 in the EU, and even less in the richest nations (European Commission 2013). 

    Between 1950 and 1970, the number of employed agricultural labourers in the United Kingdom fell by half (Zayed and Loft 2019). 

    Some of these disruptions involve labor-intensive industries relocating to lower-cost locations. 

    This is an ongoing process. 

    Digital automation, unlike physical machinery, substitutes for human thought and information processing (Bostrom and Yudkowsky 2014). 

    As a result, a more drastic shift in the labor market is possible. 

    So, the big concern is whether the impacts will be different this time. 

    Will the creation of new jobs and wealth keep pace with the job losses? And even if it does not, what are the transition costs, and who bears them? Do we need social adjustments to distribute the costs and benefits of digital automation fairly? Views on automation's effect on employment range from the fearful (Frey and Osborne 2013; Westlake 2014) to the neutral (Metcalf, Keller, and Boyd 2016 [OIR]; Calo 2018; Frey 2019) to the hopeful (Brynjolfsson and McAfee 2016; Harari 2016; Danaher 2019a). 

    In principle, the effect of automation on the labor market appears to be fairly well understood as involving two channels: 

    (i) the nature of interactions between differently skilled workers and new technologies affecting labor demand, and (ii) the equilibrium effects of technological progress through subsequent changes in labor supply and product markets. 

    (Goos et al. 2018: 362) "Job polarisation" or the "dumbbell" shape (Goos, Manning, and Salomons 2009) seems to be occurring in the labor market as a consequence of AI and robotics automation: high-skilled technical jobs are in demand and well compensated, low-skilled service jobs are in demand but poorly compensated, while mid-qualification jobs in factories and offices, i.e., the majority of jobs, are under pressure and being eliminated because they are relatively predictable and thus most likely to be automated (Baldwin 2019). 

    Perhaps enormous productivity gains will allow the "age of leisure" to come to pass, as predicted by Keynes in 1930 (assuming a 1% annual growth rate). 

    Actually, we've already achieved the amount he predicted for 2030, but we're still working—consuming more and constructing ever higher organizational layers. 
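The arithmetic behind such century-scale projections is plain compound growth, which a few lines make explicit (the function is illustrative, not Keynes's own calculation):

```python
def years_to_multiply(rate, factor):
    """Years of steady growth at `rate` (0.01 = 1% per year) needed
    for output to reach `factor` times its starting level."""
    years, level = 0, 1.0
    while level < factor:
        level *= 1.0 + rate   # compound one year of growth
        years += 1
    return years
```

At 1% annual growth, output doubles in about 70 years and quadruples in about 140, so seemingly small differences in the assumed rate swing hundred-year predictions like Keynes's very widely.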

    Harari describes how economic growth enabled humankind to overcome famine, disease, and war—and now, via artificial intelligence, we seek immortality and eternal happiness, hence the title of his book Homo Deus (Harari 2016: 75). 

    Unemployment is, in the end, a question of how goods in a society should be justly distributed. 

    A common belief is that distributive justice should be chosen logically from behind a "veil of ignorance" (Rawls 1971), that is, as if one had no idea what place in society one would be occupying (labourer or industrialist, etc.). 

    Rawls believed that the selected principles would then promote fundamental rights and a distribution that benefited the poorest members of society the most. 


    The AI economy seems to have three characteristics that make such justice unlikely: 


    • First, it operates in a largely unregulated environment in which responsibility is often difficult to assign. 
    • Second, it functions in marketplaces where monopolies form fast due to a "winner takes all" characteristic. 
    • Third, the "new economy" of the digital service industries is founded on intangible assets, also known as "capitalism without capital" (Haskel and Westlake 2017). 


    This implies that international digital firms that do not have a physical presence in a certain region are difficult to manage. 

    These three characteristics seem to indicate that if we leave wealth distribution to free market forces, the consequence will be a very unequal distribution: And this is a trend that we are currently seeing. 

    One fascinating subject that has gotten little attention is whether AI development is ecologically sustainable: AI systems, like other computer systems, generate trash that is difficult to recycle and require enormous amounts of energy, particularly when training machine learning systems (and even while "mining" cryptocurrencies). 

    It appears that some players in this space offload these costs to the general public. 




    Autonomous Systems


    In the context of autonomous systems, there are numerous definitions of autonomy. 

    In philosophical disputes, where autonomy is the foundation for accountability and personality, a stronger concept is at play (Christman 2003 [2018]). 

    In this context, accountability implies autonomy, but not the other way around, therefore systems with varying degrees of technological autonomy may exist without generating responsibility concerns. 

    In robotics, the weaker, more technical concept of autonomy is relative and gradual: a system is said to be autonomous to the degree that it operates independently of human control (Müller 2012). 

    Since autonomy is also a matter of a power relation (who is in control, and who is responsible), it is connected to the problems of bias and opacity in AI. 

    In general, one concern is whether autonomous robots pose difficulties to which our current conceptual systems must adapt, or whether they just need technological changes. 

    To settle such difficulties, most nations have a complex system of civil and criminal responsibility. 

    Technical norms, such as those governing the safe use of machines in medical settings, will very certainly need to be revised. 

    For such safety-critical systems and "security applications," there is already a discipline of "verifiable AI." The IEEE (Institute of Electrical and Electronics Engineers) and the BSI (British Standards Institution) have published "standards," focusing on more technical issues such as data security and transparency. 

    We look at two examples of autonomous systems: autonomous cars and autonomous weapons, which may be found on land, sea, under water, in the air, or in space.



    Autonomous Vehicles


    Autonomous vehicles have the potential to lessen the enormous harm that human driving now causes—roughly 1 million people are killed each year, many more are wounded, the environment is polluted, the land is coated with concrete and asphalt, cities are full of parked automobiles, and so on. 

    However, there seem to be concerns about how autonomous cars should act, as well as how responsibility and risk should be shared in the complex system in which they operate. 

    (There is also much dispute about how long it will take to build fully autonomous, or "level 5," cars (SAE International 2018).) In this context there is much discussion of "trolley problems" (Thomson 1976; Woollard and Howard-Snyder 2016: part 2), which describe various dilemma cases. In the simplest version, a trolley is headed straight toward five people on a track and will kill them unless it is diverted onto a side track, but on that side track stands one person who will be killed if the trolley takes it. 

    The example stems from a comment in (Foot 1967: 6) on a variety of dilemma scenarios in which the permitted and desired effects of an action diverge. 

    "Trolley problems" are not meant to describe actual ethical problems, nor to be solved by making the "correct" choice. 

    Rather, they are thought experiments in which the agent's choice is arbitrarily limited to a small number of unique one-off alternatives and the agent possesses complete information. 


    The distinction between actively doing something vs. allowing something to happen, intended vs. acceptable effects, and consequentialist vs. alternative normative approaches are all investigated using these difficulties as a theoretical tool (Kamm 2016). 


    Many problems of real driving and of autonomous driving have been said to be reminiscent of this kind of case (Lin 2016). 

    However, it's unlikely that a real driver or a self-driving vehicle would ever have to deal with trolley issues (but see Keeling 2020). 

    While autonomous car trolley issues have garnered a lot of media attention (Awad et al. 2018), they don't seem to add anything to ethical theory or autonomous vehicle programming. 

    The most prevalent ethical issues in driving, such as speeding, unsafe overtaking, failing to maintain a safe distance, and so on, are typical cases of personal gain vs. the collective good. 

    The great majority of them are covered under driver's license laws. 

    Programming the automobile to drive "by the laws" rather than "in the best interests of the passengers" or "to maximize utility" reduces the challenge to a basic problem of ethical machine programming (see section 2.9). 
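That "rules first, preferences second" structure can be sketched as constrained choice (the action names, legality test, and utility scores below are invented for illustration):

```python
def choose_action(actions, is_legal, utility):
    """Rule-constrained driving choice: discard actions that break the
    rules, then pick the best remaining one by passenger utility; only
    if no legal action exists fall back to the least-bad option."""
    lawful = [a for a in actions if is_legal(a)]
    pool = lawful if lawful else actions
    return max(pool, key=utility)

# Hypothetical scenario: speeding scores highest for the passengers,
# but only keeping distance is legal.
actions = ["speed", "overtake_unsafely", "keep_distance"]
utility = {"speed": 3, "overtake_unsafely": 2, "keep_distance": 1}.get
```

Questions about when rules may legitimately be broken (Lin 2016) live exactly in the fallback branch, which this sketch resolves crudely by reverting to raw utility.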

    There are likely additional discretionary rules of politeness and interesting questions about when to break the rules (Lin 2016), but this seems to be more a matter of applying standard considerations (rules vs. utility) to the case of autonomous vehicles. 

    In this arena, notable policy initiatives include the report (German Federal Ministry of Transport and Digital Infrastructure 2017), which emphasizes the importance of safety. 

    The tenth rule states that, in the case of automated and connected driving systems, accountability shifts from the individual motorist to the manufacturers and operators of the technical systems and to the bodies responsible for infrastructure, policy, and legal decisions. 

    (See section 2.10.1 for further information.) The resulting German and EU regulations on licensing autonomous driving are much more restrictive than their US counterparts, where some companies pursue "testing on customers" as a strategy—without the informed consent of the customers or potential victims. 




    Autonomous Weapons.


    The concept of automated weaponry is not new: Instead of simple guided missiles or remotely piloted vehicles, for example, we might deploy fully autonomous land, sea, and air vehicles capable of complicated, long-range surveillance and strike operations. 

    (DARPA 1983) At the time, this concept was ridiculed as "fantasy" (Dreyfus, Dreyfus, and Athanasiou 1986: ix), but it is now a reality, at least for easily identifiable targets (missiles, planes, ships, tanks, and so on), though not for human combatants. 

    The primary reasons against (lethal) autonomous weapon systems (AWS or LAWS) are that they encourage extrajudicial executions, remove accountability from people, and increase the likelihood of conflicts or killings—see Lin, Bekey, and Abney (2008: 73–86) for a thorough list of problems. 

    It seems that lowering the hurdle to use such systems (autonomous vehicles, "fire-and-forget" missiles, or drones loaded with explosives) and reducing the probability of being held accountable will increase their use. 

    The basic asymmetry already exists in conventional drone wars with remote-controlled weapons: one side can kill with impunity and thus has few reasons not to (e.g., the US in Pakistan). 

    It's simple to envisage a tiny drone searching for, identifying, and killing a single person—or possibly a certain sort of human. 

    The Campaign to Stop Killer Robots and other activist organizations have brought forward examples like these. 

    Some appear to imply that autonomous weapons are, in fact, weapons..., and that weapons kill, but we continue to manufacture them in massive quantities. 

    In terms of accountability, autonomous weapons may make it more difficult to identify and prosecute the culpable agents—but this is unclear, given the digital records that may be kept, at least in conventional warfare. 

    The "retribution gap" is a term used to describe the difficulties of distributing punishment (Danaher 2016a). 

    Another concern is whether the use of autonomous weapons in conflict would make wars worse or better. 

    If robots reduce war crimes and crimes in war, the answer may well be positive, and this has been used as an argument both for (Arkin 2009; Müller 2016a) and against (Amoroso and Tamburrini 2018) these weapons. 

    The major concern, according to some, is not the deployment of such weapons in traditional combat, but rather in asymmetric conflicts or by non-state actors, such as criminals. 

    Autonomous weapons are also claimed to be incompatible with International Humanitarian Law, which requires armed conflict to adhere to the principles of distinction (between combatants and civilians), proportionality (of force), and military necessity (A. Sharkey 2019). 

    True, distinguishing between fighters and non-combatants is difficult, but distinguishing between civilian and military ships is simple—all this means is that such weapons should not be built or used if they violate Humanitarian Law. 

    Additional concerns have been expressed that being murdered by an autonomous weapon endangers human dignity, however even proponents of a ban on these weapons seem to dismiss these worries: There are other weapons and technology that jeopardize human dignity as well. 

    Given this, as well as the ambiguity in the idea, it is preferable to use a variety of concerns in opposition to AWS rather than relying just on human dignity. 

    (A. Sharkey 2019) Military instruction on weaponry has made much of keeping humans "in the loop" or "on the loop"—these ways of spelling out "meaningful control" are explored in Santoni de Sio and van den Hoven (2018). 

    There have been talks concerning the problems of assigning blame for an autonomous weapon's deaths, and a "responsibility gap" has been proposed (e.g., Rob Sparrow 2007), implying that neither the person nor the machine can be held accountable. 

    On the other hand, we don't presume that someone is to blame for every occurrence; instead, the true problem may be risk allocation (Simpson and Müller 2016). 

    According to risk analysis (Hansson 2013), it is critical to determine who takes the risk, who reaps the potential benefit, and who makes the decision (Hansson 2018: 1822–1824). 

     



    Machine Ethics


    Machine ethics is the study of ethics for machines, or "ethical machines," as opposed to the human usage of machines as objects. 

    It's not always clear whether this is meant to encompass all of AI ethics or just a part of it (Floridi and Sanders 2004; Moor 2006; Anderson and Anderson 2011; Wallach and Asaro 2017). 

    At times it seems that the (dubious) inference is at work here: if machines act in morally relevant ways, then we need a machine ethics. 

    As a result, some people employ a wider definition: machine ethics is concerned with ensuring that robots' conduct toward humans, and maybe other machines, is morally acceptable. 

    (Anderson and Anderson 2007: 15) This might include simple product-safety issues, for example. 

    Other authors sound more ambitious, but they use a narrower definition: AI reasoning should be able to consider societal values, moral and ethical considerations; weigh the relative priorities of values held by different stakeholders in various multicultural contexts; explain its reasoning; and ensure transparency. 

    (Dignum 2018: 1–2) Some of the discussion in machine ethics rests on the assumption that machines can, in some sense, be ethical agents responsible for their actions, or "autonomous moral agents" (see van Wynsberghe and Robbins 2019). 

    The fundamental concept of machine ethics is now making its way into practical robotics, where the premise that these machines are artificial moral actors in any meaningful sense is seldom made (Winfield et al. 2019). 

    It has been noted that a robot trained to obey ethical principles may readily be reprogrammed to follow immoral ones (Vanderelst and Winfield 2018). 

    Isaac Asimov famously explored the idea that machine ethics might take the form of "laws," proposing "three laws of robotics" (Asimov 1942): First Law—A robot may not injure a human being or, through inaction, allow a human being to come to harm. 

    Second Law—A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 

    Third Law—A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. 

    In a series of scenarios, Asimov demonstrated how, despite their hierarchical organization, conflicts between these three rules would make it difficult to apply them. 
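The hierarchy itself is straightforward to model: a lexicographic comparison captures the idea that any violation of a higher law outweighs all violations of lower ones. The predicate names and example actions below are invented; the sketch also shows what the laws cannot do, namely choose between two actions that violate the same law.

```python
def best_action(actions):
    """Pick an action by Asimov's hierarchy: minimize First Law
    violations first, then Second, then Third.  `actions` maps
    action names to dicts of boolean consequences."""
    def violations(a):
        return (
            a["injures_human"] or a["allows_harm_by_inaction"],  # First Law
            not a["obeys_orders"],                               # Second Law
            a["destroys_self"],                                  # Third Law
        )
    # Tuple comparison is lexicographic, so higher laws dominate lower ones.
    return min(actions, key=lambda name: violations(actions[name]))
```

When every available action trips the First Law (say, harm by action versus harm by inaction), the comparison falls through to the lower laws, which is an arbitrary tie-break rather than a moral judgment; this is the gap Asimov's stories exploit.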

    Weaker forms of "machine ethics" risk limiting "having an ethics" to ideas that would not ordinarily be deemed adequate (e.g., without "reflection" or even "activity"); stronger conceptions that advance towards artificial moral beings may describe a—currently—empty set. 




    Artificial Moral Agents. 


    If one considers machine ethics to be about moral agents in any meaningful way, these agents might be referred to as "artificial moral agents" with rights and obligations. 

    However, the debate over artificial creatures calls into question a number of fundamental ethical assumptions, and it may be quite helpful to comprehend these concepts in isolation from the human scenario (cf. Misselhorn 2020; Powers and Ganascia forthcoming). 

    Several writers use the term "artificial moral agent" in a less demanding meaning, drawing from the term "agent" in software engineering, where issues of duty and rights aren't a concern (Allen, Varner, and Zinser 2000). 

    James Moor (2006) distinguishes ethical impact agents (e.g., robot jockeys), implicit ethical agents (e.g., a safe autopilot), explicit ethical agents (e.g., using formal methods to estimate utility), and full ethical agents, which "can make explicit ethical judgments and generally is competent to reasonably justify them. An average adult human is a full ethical agent." 

    Several ways to achieve "explicit" or "full" ethical agents have been proposed: programming the ethics in (operational morality), "developing" the ethics itself (functional morality), and finally full-blown morality with full intelligence and sentience (Allen, Smit, and Wallach 2005; Moor 2006). 

    Programmed agents are not usually counted as "full" agents because, like neurons in a brain, they exhibit "competence without comprehension" (Dennett 2017; Hakli and Mäkelä 2019). 

    Various discussions invoke the notion of a "moral patient": ethical agents have responsibilities, while ethical patients have entitlements, because harm to them matters. 

    Some creatures, such as basic animals that may feel pain but cannot make rational decisions, seem to be patients without being agents. 

    On the other hand, it is often assumed that all agents will be patients as well (e.g., in a Kantian framework). 

    Being a person is often seen to be what qualifies an entity as a responsible agent, someone who can carry out responsibilities and be the subject of ethical issues. 

    Personhood is usually a profound concept connected to phenomenal awareness, intention, and free will (Frankfurt 1971; Strawson 1998). 

    Torrance (2011) proposes that "artificial (or machine) ethics could be defined as designing machines that do things that, when done by humans, are indicative of the possession of 'ethical status' in those humans" (2011: 116)—which he defines as "ethical productivity and ethical receptivity" (2011: 117)—as his expressions for moral agents and patients.

     

    Robots' Responsibilities


    There is widespread agreement that accountability, liability, and the rule of law are fundamental requirements that must be upheld in the face of new technologies (European Group on Ethics in Science and New Technologies 2018, 18), but the question in the case of robots is how to do so and how responsibility should be distributed. 

    Will robots themselves be held responsible, liable, or accountable for their actions? Or should the distribution of risk take precedence over discussions of responsibility? Traditional responsibility distribution already exists: a car manufacturer is responsible for the vehicle's technical safety, the driver for driving, a mechanic for proper maintenance, and the public authorities for the technical condition of the roads, among other things. 

    Generally speaking, "the outcomes of AI-based choices or actions are often the consequence of several interactions involving numerous players, including designers, developers, users, software, and hardware.... Distributed agency entails distributed accountability" (Taddeo and Floridi 2018: 751). 

    The manner in which this distribution occurs is not a problem unique to AI, but it takes on added significance in this context (Nyholm 2018a, 2018b). 

    Distributed control is often performed in traditional control engineering using a control hierarchy and control loops that span these hierarchies. 



    Rights for Robots



    According to certain scholars, it should be carefully examined whether modern robots should be granted rights (Gunkel 2018a, 2018b; Danaher forthcoming; Turner 2019). 

    This stance seems to rest mainly on criticism of its opponents and on the empirical observation that robots and other non-humans are sometimes treated as though they had rights. 

    In this spirit, a "relational turn" has been proposed: if we relate to robots as though they had rights, then we may be well advised not to investigate whether they "really" do have them (Coeckelbergh 2010, 2012, 2018). 

    This raises the question of how far such anti-realism or quasi-realism can go, and what it means in a human-centred perspective to declare that "robots have rights" (Gerdes 2016). 

    Bryson, on the other hand, has argued that robots should not have rights (Bryson 2010), although she acknowledges that this is a possibility (Gunkel and Bryson 2014). 

    The question of whether robots (or other AI systems) should be classified as "legal entities" or "legal people" is a different one. 

    Just as governments, firms, and organizations are legal "entities," robots could be granted legal rights and obligations. 

    The European Parliament has discussed giving robots this status to deal with civil responsibility (EU Parliament 2016; Bertolini and Aiello 2018), but not criminal culpability, which is reserved for human beings. 

    It would also be feasible to give robots merely a subset of rights and responsibilities. 

    "Such legislative action would be ethically unnecessary and legally difficult," it has been said, since it would not promote the interests of humanity (Bryson, Diamantis, and Grant 2017: 273). 

    There has long been a debate in environmental ethics concerning the legal rights of natural things such as trees (C. D. Stone 1972). 

    It has also been suggested that the ethical justifications for constructing robots with rights, or artificial moral patients, in the future are dubious (van Wynsberghe and Robbins 2019). 

    Some writers have urged for a "moratorium on synthetic phenomenology" among the community of "artificial consciousness" researchers since producing such awareness would probably include ethical responsibility to a sentient entity, such as avoiding harming it or ending its life by switching it off (Bentley et al. 2018: 28f). 




    Singularity and Superintelligence



    The goal of modern AI, according to some, is to create an "artificial general intelligence" (AGI), as opposed to a technical or "narrow" AI. 

    AGI is usually distinguished from traditional concepts of AI as a general-purpose system, and from Searle's notion of "strong AI," on which computers running the right programs would literally understand and have other cognitive states (Searle 1980: 417). 

    The concept of the singularity is that if AI progresses to the point where systems reach a human level of intelligence, these systems will themselves be able to construct AI systems that surpass human intelligence, i.e., that are "superintelligent" (see below). 

    Such superintelligent AI systems would rapidly enhance themselves or evolve into ever more intelligent systems. 

    This abrupt change in circumstances after achieving superintelligent AI is known as the "singularity," a point at which AI development is beyond human control and difficult to anticipate (Kurzweil 2005: 487). 

    The dread that "the robots we built will take over the world" captivated the human imagination long before there were computers (e.g., Butler 1863) and is the central theme of Čapek's renowned play (Čapek 1920), which popularized the word "robot." Irving John Good first formulated this worry as a possible path from present AI to an "intelligence explosion": Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. 

    Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. 

    Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. 

    (Good 1965: 33) Kurzweil (1999, 2005, 2012) elaborates the optimistic argument from acceleration to singularity: computing power has been growing exponentially, doubling roughly every two years since 1970 in accordance with "Moore's Law" on the number of transistors, and will continue to do so for some time to come. 

    Kurzweil (1999) predicted that supercomputers would reach human computational capacity by 2010, that "mind uploading" would be possible by 2030, and that the "singularity" would occur by 2045. 

    Kurzweil points to the increase in computing power that can be purchased for a given price, but the funds available to AI companies have also risen sharply in recent years: according to Amodei and Hernandez (2018 [OIR]), the actual computing power used to train an AI system doubled every 3.4 months from 2012 to 2018, producing a 300,000x increase, not the 7x gain that doubling every two years would have produced. 
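    The arithmetic behind these figures can be checked in a few lines. The sketch below uses the doubling times quoted above; the helper names are my own:

```python
import math

def growth_factor(months: float, doubling_months: float) -> float:
    """Total multiplicative growth after `months` at a fixed doubling time."""
    return 2 ** (months / doubling_months)

def months_to_reach(factor: float, doubling_months: float) -> float:
    """Months needed to grow by `factor` at a fixed doubling time."""
    return doubling_months * math.log2(factor)

# A 3.4-month doubling time reaches the reported 300,000x increase in
# about 62 months, i.e. a bit over five years of the 2012-2018 window.
months = months_to_reach(300_000, 3.4)

# Over that same span, Moore's-Law-style doubling every 2 years (24 months)
# yields only a single-digit gain, in line with the ~7x figure quoted.
moore_gain = growth_factor(months, 24)
print(f"~{months:.0f} months; 2-year doubling gives only ~{moore_gain:.0f}x")
```

    The contrast between a single-digit gain and a 300,000x gain over the same window is the whole point of the acceleration argument.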

    A popular version of this argument (Chalmers 2010) speaks about a growth in the AI system's "intelligence" (rather than sheer processing capacity), but the critical point of "singularity" remains the moment at which AI systems take control and push AI development beyond human levels. 

    Bostrom (2014) goes into great length on what might happen at that time and the dangers it poses to mankind. 

    Eden et al. (2012), Armstrong (2014), and Shanahan (2015) summarize the debate. 

    Apart from increased computing power, there are other possible paths to superintelligence, such as the complete computer simulation of the human brain (Kurzweil 2012; Sandberg 2013), biological paths, or networks and organizations (Bostrom 2014: 22–51). 

    Despite the apparent flaws in equating "intelligence" with processing capacity, Kurzweil seems to be correct in his assertion that people tend to underestimate the potential of exponential development. 

    Mini-test: how far would you get in 30 steps if each step were twice as long as the previous one, starting with a one-metre step? (The answer: almost three times the distance from the Earth to the Moon, its only permanent natural satellite.) Indeed, most AI advances can be attributed to the availability of processors that are orders of magnitude faster, larger storage, and greater funding (Müller 2018). 
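    The mini-test is easy to verify. A quick sketch, using 384,400 km as the standard mean Earth-Moon distance:

```python
# Steps of 1 m, 2 m, 4 m, ..., 2^29 m: a geometric series summing to 2^30 - 1 metres.
total_m = sum(2 ** k for k in range(30))
assert total_m == 2 ** 30 - 1          # just over a billion metres

MOON_KM = 384_400                      # mean Earth-Moon distance
ratio = (total_m / 1000) / MOON_KM
print(f"30 doubling steps cover ~{ratio:.1f}x the distance to the Moon")
```

    Thirty modest doublings turn a one-metre step into roughly a million kilometres, which is exactly the intuition about exponential growth that Kurzweil says people underestimate.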

    Müller and Bostrom (2016) and Bostrom, Dafoe, and Flynn (forthcoming) address the actual acceleration and its rates; Sandberg (2019) argues that progress will continue for some time. 

    The participants in this argument are all technophiles in the sense that they anticipate technology to advance quickly and bring about a wide range of positive changes—but they are divided into two groups: those who concentrate on benefits (such as Kurzweil) and those who focus on hazards (e.g., Bostrom). 

    Both parties sympathize with "transhuman" beliefs of humankind's survival in a new physical form, such as being transferred into a computer (Moravec 1990, 1998; Bostrom 2003a, 2003c). 

    They also study the possibilities of "human enhancement" in various areas, including intelligence, often called "IA" (intelligence augmentation). 

    It is possible that future AI will be used to enhance human abilities, or that it will contribute to the dissolution of the neatly defined single human individual. 

    Robin Hanson offers a comprehensive analysis of what would happen economically if human "brain emulation" allows genuinely intelligent robots or "ems" to be created (Hanson 2016). 

    Contrary to Kantian ethical traditions, which have argued that higher levels of rationality or intelligence would go along with a better understanding of what is moral and a better ability to act morally, the argument from superintelligence to risk requires the assumption that superintelligence does not imply benevolence (Gewirth 1978; Chalmers 2010: 36f). 

    The "orthogonality thesis" asserts that rationality and morality are completely separate dimensions (Bostrom 2012; Armstrong 2013; Bostrom 2014: 105–109). 

    The singularity story has been criticized from a variety of perspectives. 

    Both Kurzweil and Bostrom seem to believe that intelligence is a one-dimensional feature and that the set of intelligent beings is completely ordered in the mathematical sense, although neither book goes into detail on intelligence. 

    In general, despite some attempts, the assumptions used in the compelling story of superintelligence and singularity have not been thoroughly examined. 

    One concern is whether such a singularity would ever occur—it might be philosophically impossible, practically impossible, or simply not happen due to unforeseen circumstances, such as individuals actively working against it. 

    The intriguing philosophical question is whether the singularity is really a "myth" (Floridi 2016; Ganascia 2017) rather than a real trajectory of AI development. 

    This is something that many practitioners take for granted (e.g., Brooks 2017 [OIR]). 

    They may do so because of fear of public reaction, an overestimation of practical issues, or a strong belief that superintelligence is an improbable consequence of present AI research (Müller forthcoming-a). 

    This debate raises the issue of whether the "singularity" worry is really a story about fictitious AI based on human anxieties. 

    Even if one believes the negative arguments are convincing and that the singularity is unlikely to occur, there is still a chance that one is mistaken. 

    Perhaps AI and robots aren't on the "safe road of a science" (Kant 1791: B15), and perhaps philosophy isn't either (Müller 2020). 

    So, even if one judges the probability of such a singularity ever occurring to be very low, there is reason to take the extremely high-impact risk of the singularity seriously. 

    Superintelligence poses an existential threat. 

    Thinking about superintelligence in the long run raises the question of whether it could lead to the extinction of the human species, which is referred to as an "existential risk" (or XRisk): superintelligent systems may have preferences that conflict with the existence of humans on Earth, and thus may decide to end that existence; given their superior intelligence, they will have the power to do so (or they may end it simply because they do not really care). 

    The ability to think long-term is a key aspect of this literature. 

    It makes little difference whether the singularity (or another catastrophic event) happens in 30, 300, or 3000 years (Baum et al. 2019). 

    Perhaps there is an astronomical pattern in which an intelligent species is destined to discover AI at some point in the future, resulting in its own extinction. 

    Such a "great filter" might help explain the "Fermi paradox": why there is no sign of life in the known universe despite the high probability of it emerging. 

    It would be awful news if we discovered that the "great filter" is still ahead of us, rather than a barrier that Earth has already overcome. 

    These challenges are sometimes framed more narrowly as being about human extinction (Bostrom 2013), or more broadly as being about any large risk for the species (Rees 2018), of which AI is only one (Häggström 2016; Ord 2020). 

    For dangers that are sufficiently high along the two dimensions of "scope" and "severity," Bostrom uses the concept of "global catastrophic risk" (Bostrom and Ćirković 2011; Bostrom 2013). 

    These risk considerations are often unrelated to the broader issue of ethics in perilous situations (e.g., Hansson 2013, 2018). 

    The long-term perspective has its own methodological challenges, but it has sparked a lot of debate: Tegmark (2017) focuses on AI and human life "3.0" after the singularity, while Russell, Dewey, and Tegmark (2015) and Bostrom, Dafoe, and Flynn (forthcoming) look at longer-term ethical AI policy challenges. 

    Several collections of papers have looked at the hazards of artificial general intelligence (AGI) and the factors that might make this development more or less risky (Müller 2016b; Callaghan et al. 2017; Yampolskiy 2018; Drexler 2019). 




    Is it Possible to Control Superintelligence?



    In a nutshell, the "control problem" is how we humans can remain in control of an AI system once it has become superintelligent (Bostrom 2014: 127ff). 

    In a broader sense, it is the problem of ensuring that an AI system will turn out to be beneficial in the eyes of humans (Russell 2019); this is frequently referred to as "value alignment." How easy or hard it is to control a superintelligence depends significantly on the speed with which a superintelligent system "takes off." 

    As a result, systems that promote self-improvement, such as AlphaZero, have gotten a lot of attention (Silver et al. 2018). 

    One facet of this dilemma is that we may determine that a specific feature is desirable, only to discover that it has unintended repercussions that are so detrimental that we no longer want it. 

    This is the age-old conundrum of King Midas, who desired that everything he touched turn to gold. 

    Various instances of this topic have been studied, such as the "paperclip maximiser" (Bostrom 2003b) or the chess performance optimization algorithm (Omohundro 2014). 

    Speculations about omniscient creatures, profound alterations on a "later day," and the possibility of immortality via transcendence of our present corporeal form are all common themes in discussions about superintelligence (Capurro 1993; Geraci 2008, 2010; O'Connell 2017: 160ff). 

    These concerns also raise a well-known epistemological problem: can we understand the ways of the omniscient (Danaher 2015)? The usual opponents have already arrived: "People worry that computers will get too smart and take over the world, but the real problem is that they're too stupid and they've already taken over the world" (Domingos 2015). According to the new nihilists, "techno-hypnosis" through digital technologies has now become our chief method of avoiding the loss of meaning (Gertz 2018). 

    Both critics would argue that an ethics is needed for the "small" problems that arise with AI and robotics (sections 2.1 through 2.9 above), but not for the "big ethics" of existential risk from AI (section 2.10). 





    Conclusion


    The singularity thus raises the problem of the concept of AI once again. 

    It's amazing how imagination, or "vision," has played such an important part in the field from its inception at the "Dartmouth Summer Research Project" (McCarthy et al. 1955 [OIR]; Simon and Newell 1958). 

    And the assessment of this vision keeps changing: we have gone from mantras like "AI is impossible" (Dreyfus 1972) and "AI is merely automation" (Lighthill 1973) to "AI will solve all problems" (Kurzweil 1999) and "AI may kill us all" (Bostrom 2014). 

    This drew attention from the media and prompted public relations efforts, but it also raised the question of how much of this "AI philosophy and ethics" is really about AI rather than a hypothetical technology. 

    As we have said, AI and robots have posed basic challenges about what we should do with these systems, what they should accomplish, and what threats they pose in the long run. 

    They also cast doubt on humanity's status as the planet's most intelligent and dominant species. 

    We have surveyed the problems that have arisen, and we will have to keep a careful eye on technological and social developments in order to catch new issues early, develop a philosophical analysis, and draw lessons for traditional problems of philosophy.





    ~ Jai Krishna Ponnappan

    Find Jai on Twitter | LinkedIn | Instagram


    You may also want to read more about Artificial Intelligence here.



    See also: 


    Accidents and Risk Assessment; Algorithmic Bias and Error; Autonomous Weapons Systems, Ethics of; Driverless Cars and Trucks; Moral Turing Test; Robot Ethics; Trolley Problem.



    References & Further Reading:


    Anderson, Michael, and Susan Leigh Anderson. 2007. “Machine Ethics: Creating an Ethical Intelligent Agent.” AI Magazine 28, no. 4 (Winter): 15–26.

    Anderson, Susan Leigh. 2008. “Asimov’s ‘Three Laws of Robotics’ and Machine Metaethics.” AI & Society 22, no. 4 (March): 477–93.

    Asimov, Isaac. 1950. “Runaround.” In I, Robot, 30–47. New York: Doubleday.

    Asimov, Isaac. 1985. Robots and Empire. Garden City, NY: Doubleday.

    Bensoussan, Alain, and Jérémy Bensoussan. 2015. Droit des Robots. Brussels: Éditions Larcier.

    Coeckelbergh, Mark. 2010. “Robot Rights? Towards a Social-Relational Justification of  Moral Consideration.” Ethics and Information Technology 12, no. 3 (September): 209–21.

    Darling, Kate. 2012. “Extending Legal Protection to Social Robots.” IEEE Spectrum, September 10, 2012. https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/extending-legal-protection-to-social-robots.

    Floridi, Luciano, and J. W. Sanders. 2001. “Artificial Evil and the Foundation of Computer Ethics.” Ethics and Information Technology 3, no. 1 (March): 56–66.

    Foundation for Responsible Robotics (FRR). 2019. Mission Statement. https://responsiblerobotics.org/about-us/mission/.

    Gunkel, David J. 2018. Robot Rights. Cambridge, MA: MIT Press. 

    Lin, Patrick, Keith Abney, and George A. Bekey. 2012. Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press.

    Lin, Patrick, Ryan Jenkins, and Keith Abney. 2017. Robot Ethics 2.0: New Challenges in Philosophy, Law, and Society. New York: Oxford University Press.

    McCauley, Lee. 2007. “AI Armageddon and the Three Laws of Robotics.” Ethics and Information Technology 9, no. 2 (July): 153–64.

    Veruggio, Gianmarco. 2006. “The EURON Roboethics Roadmap.” In 2006 6th IEEE RAS International Conference on Humanoid Robots, 612–17. Genoa, Italy: IEEE.

    Veruggio, Gianmarco, and Fiorella Operto. 2008. “Roboethics: Social and Ethical Implications of Robotics.” In Springer Handbook of Robotics, edited by Bruno Siciliano and Oussama Khatib, 1499–1524. New York: Springer.

    Veruggio, Gianmarco, Jorge Solis, and Machiel Van der Loos. 2011. “Roboethics: Ethics Applied to Robotics.” IEEE Robotics & Automation Magazine 18, no. 1 (March): 21–22.

    Wallach, Wendell, and Colin Allen. 2009. Moral Machines: Teaching Robots Right from Wrong. Oxford, UK: Oxford University Press.


    • Abowd, John M, 2017, “How Will Statistical Agencies Operate When All Data Are Private?”, Journal of Privacy and Confidentiality, 7(3): 1–15. doi:10.29012/jpc.v7i3.404
    • Allen, Colin, Iva Smit, and Wendell Wallach, 2005, “Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches”, Ethics and Information Technology, 7(3): 149–155. doi:10.1007/s10676-006-0004-4
    • Allen, Colin, Gary Varner, and Jason Zinser, 2000, “Prolegomena to Any Future Artificial Moral Agent”, Journal of Experimental & Theoretical Artificial Intelligence, 12(3): 251–261. doi:10.1080/09528130050111428
    • Amoroso, Daniele and Guglielmo Tamburrini, 2018, “The Ethical and Legal Case Against Autonomy in Weapons Systems”, Global Jurist, 18(1): art. 20170012. doi:10.1515/gj-2017-0012
    • Anderson, Janna, Lee Rainie, and Alex Luchsinger, 2018, Artificial Intelligence and the Future of Humans, Washington, DC: Pew Research Center.
    • Anderson, Michael and Susan Leigh Anderson, 2007, “Machine Ethics: Creating an Ethical Intelligent Agent”, AI Magazine, 28(4): 15–26.
    • ––– (eds.), 2011, Machine Ethics, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511978036
    • Aneesh, A., 2006, Virtual Migration: The Programming of Globalization, Durham, NC and London: Duke University Press.
    • Arkin, Ronald C., 2009, Governing Lethal Behavior in Autonomous Robots, Boca Raton, FL: CRC Press.
    • Armstrong, Stuart, 2013, “General Purpose Intelligence: Arguing the Orthogonality Thesis”, Analysis and Metaphysics, 12: 68–84.
    • –––, 2014, Smarter Than Us, Berkeley, CA: MIRI.
    • Arnold, Thomas and Matthias Scheutz, 2017, “Beyond Moral Dilemmas: Exploring the Ethical Landscape in HRI”, in Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction—HRI ’17, Vienna, Austria: ACM Press, 445–452. doi:10.1145/2909824.3020255
    • Asaro, Peter M., 2019, “AI Ethics in Predictive Policing: From Models of Threat to an Ethics of Care”, IEEE Technology and Society Magazine, 38(2): 40–53. doi:10.1109/MTS.2019.2915154
    • Asimov, Isaac, 1942, “Runaround: A Short Story”, Astounding Science Fiction, March 1942. Reprinted in “I, Robot”, New York: Gnome Press 1950, 1940ff.
    • Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan, 2018, “The Moral Machine Experiment”, Nature, 563(7729): 59–64. doi:10.1038/s41586-018-0637-6
    • Baldwin, Richard, 2019, The Globotics Upheaval: Globalisation, Robotics and the Future of Work, New York: Oxford University Press.
    • Baum, Seth D., Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin, and Roman V. Yampolskiy, 2019, “Long-Term Trajectories of Human Civilization”, Foresight, 21(1): 53–83. doi:10.1108/FS-04-2018-0037
    • Bendel, Oliver, 2018, “Sexroboter aus Sicht der Maschinenethik”, in Handbuch Filmtheorie, Bernhard Groß and Thomas Morsch (eds.), (Springer Reference Geisteswissenschaften), Wiesbaden: Springer Fachmedien Wiesbaden, 1–19. doi:10.1007/978-3-658-17484-2_22-1
    • Bennett, Colin J. and Charles Raab, 2006, The Governance of Privacy: Policy Instruments in Global Perspective, second edition, Cambridge, MA: MIT Press.
    • Benthall, Sebastian and Bruce D. Haynes, 2019, “Racial Categories in Machine Learning”, in Proceedings of the Conference on Fairness, Accountability, and Transparency - FAT* ’19, Atlanta, GA, USA: ACM Press, 289–298. doi:10.1145/3287560.3287575
    • Bentley, Peter J., Miles Brundage, Olle Häggström, and Thomas Metzinger, 2018, “Should We Fear Artificial Intelligence? In-Depth Analysis”, European Parliamentary Research Service, Scientific Foresight Unit (STOA), March 2018, PE 614.547, 1–40. [Bentley et al. 2018 available online]
    • Bertolini, Andrea and Giuseppe Aiello, 2018, “Robot Companions: A Legal and Ethical Analysis”, The Information Society, 34(3): 130–140. doi:10.1080/01972243.2018.1444249
    • Binns, Reuben, 2018, “Fairness in Machine Learning: Lessons from Political Philosophy”, Proceedings of the 1st Conference on Fairness, Accountability and Transparency, in Proceedings of Machine Learning Research, 81: 149–159.
    • Bostrom, Nick, 2003a, “Are We Living in a Computer Simulation?”, The Philosophical Quarterly, 53(211): 243–255. doi:10.1111/1467-9213.00309
    • –––, 2003b, “Ethical Issues in Advanced Artificial Intelligence”, in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Volume 2, Iva Smit, Wendell Wallach, and G.E. Lasker (eds), (IIAS-147-2003), Tecumseh, ON: International Institute of Advanced Studies in Systems Research and Cybernetics, 12–17. [Bostrom 2003b revised available online]
    • –––, 2003c, “Transhumanist Values”, in Ethical Issues for the Twenty-First Century, Frederick Adams (ed.), Bowling Green, OH: Philosophical Documentation Center Press.
    • –––, 2012, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents”, Minds and Machines, 22(2): 71–85. doi:10.1007/s11023-012-9281-3
    • –––, 2013, “Existential Risk Prevention as Global Priority”, Global Policy, 4(1): 15–31. doi:10.1111/1758-5899.12002
    • –––, 2014, Superintelligence: Paths, Dangers, Strategies, Oxford: Oxford University Press.
    • Bostrom, Nick and Milan M. Ćirković (eds.), 2011, Global Catastrophic Risks, New York: Oxford University Press.
    • Bostrom, Nick, Allan Dafoe, and Carrick Flynn, forthcoming, “Policy Desiderata for Superintelligent AI: A Vector Field Approach (V. 4.3)”, in Ethics of Artificial Intelligence, S Matthew Liao (ed.), New York: Oxford University Press. [Bostrom, Dafoe, and Flynn forthcoming – preprint available online]
    • Bostrom, Nick and Eliezer Yudkowsky, 2014, “The Ethics of Artificial Intelligence”, in The Cambridge Handbook of Artificial Intelligence, Keith Frankish and William M. Ramsey (eds.), Cambridge: Cambridge University Press, 316–334. doi:10.1017/CBO9781139046855.020 [Bostrom and Yudkowsky 2014 available online]
    • Bradshaw, Samantha, Lisa-Maria Neudert, and Phil Howard, 2019, “Government Responses to Malicious Use of Social Media”, Working Paper 2019.2, Oxford: Project on Computational Propaganda. [Bradshaw, Neudert, and Howard 2019 available online/]
    • Brownsword, Roger, Eloise Scotford, and Karen Yeung (eds.), 2017, The Oxford Handbook of Law, Regulation and Technology, Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199680832.001.0001
    • Brynjolfsson, Erik and Andrew McAfee, 2016, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, New York: W. W. Norton.
    • Bryson, Joanna J., 2010, “Robots Should Be Slaves”, in Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues, Yorick Wilks (ed.), (Natural Language Processing 8), Amsterdam: John Benjamins Publishing Company, 63–74. doi:10.1075/nlp.8.11bry
    • –––, 2019, “The Past Decade and Future of Ai’s Impact on Society”, in Towards a New Enlightenment: A Transcendent Decade, Madrid: Turner - BVVA. [Bryson 2019 available online]
    • Bryson, Joanna J., Mihailis E. Diamantis, and Thomas D. Grant, 2017, “Of, for, and by the People: The Legal Lacuna of Synthetic Persons”, Artificial Intelligence and Law, 25(3): 273–291. doi:10.1007/s10506-017-9214-9
    • Burr, Christopher and Nello Cristianini, 2019, “Can Machines Read Our Minds?”, Minds and Machines, 29(3): 461–494. doi:10.1007/s11023-019-09497-4
    • Butler, Samuel, 1863, “Darwin among the Machines: Letter to the Editor”, Letter in The Press (Christchurch), 13 June 1863. [Butler 1863 available online]
    • Callaghan, Victor, James Miller, Roman Yampolskiy, and Stuart Armstrong (eds.), 2017, The Technological Singularity: Managing the Journey, (The Frontiers Collection), Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-662-54033-6
    • Calo, Ryan, 2018, “Artificial Intelligence Policy: A Primer and Roadmap”, University of Bologna Law Review, 3(2): 180-218. doi:10.6092/ISSN.2531-6133/8670
    • Calo, Ryan, A. Michael Froomkin, and Ian Kerr (eds.), 2016, Robot Law, Cheltenham: Edward Elgar.
    • Čapek, Karel, 1920, R.U.R., Prague: Aventium. Translated by Peter Majer and Cathy Porter, London: Methuen, 1999.
    • Capurro, Raphael, 1993, “Ein Grinsen Ohne Katze: Von der Vergleichbarkeit Zwischen ‘Künstlicher Intelligenz’ und ‘Getrennten Intelligenzen’”, Zeitschrift für philosophische Forschung, 47: 93–102.
    • Cave, Stephen, 2019, “To Save Us from a Kafkaesque Future, We Must Democratise AI”, The Guardian, 4 January 2019. [Cave 2019 available online]
    • Chalmers, David J., 2010, “The Singularity: A Philosophical Analysis”, Journal of Consciousness Studies, 17(9–10): 7–65. [Chalmers 2010 available online]
    • Christman, John, 2003 [2018], “Autonomy in Moral and Political Philosophy”, (Spring 2018) Stanford Encyclopedia of Philosophy (EDITION NEEDED), URL = <https://plato.stanford.edu/archives/spr2018/entries/autonomy-moral/>
    • Coeckelbergh, Mark, 2010, “Robot Rights? Towards a Social-Relational Justification of Moral Consideration”, Ethics and Information Technology, 12(3): 209–221. doi:10.1007/s10676-010-9235-5
    • –––, 2012, Growing Moral Relations: Critique of Moral Status Ascription, London: Palgrave. doi:10.1057/9781137025968
    • –––, 2016, “Care Robots and the Future of ICT-Mediated Elderly Care: A Response to Doom Scenarios”, AI & Society, 31(4): 455–462. doi:10.1007/s00146-015-0626-3
    • –––, 2018, “What Do We Mean by a Relational Ethics? Growing a Relational Approach to the Moral Standing of Plants, Robots and Other Non-Humans”, in Plant Ethics: Concepts and Applications, Angela Kallhoff, Marcello Di Paola, and Maria Schörgenhumer (eds.), London: Routledge, 110–121.
    • Crawford, Kate and Ryan Calo, 2016, “There Is a Blind Spot in AI Research”, Nature, 538(7625): 311–313. doi:10.1038/538311a
    • Cristianini, Nello, forthcoming, “Shortcuts to Artificial Intelligence”, in Machines We Trust, Marcello Pelillo and Teresa Scantamburlo (eds.), Cambridge, MA: MIT Press. [Cristianini forthcoming – preprint available online]
    • Danaher, John, 2015, “Why AI Doomsayers Are Like Sceptical Theists and Why It Matters”, Minds and Machines, 25(3): 231–246. doi:10.1007/s11023-015-9365-y
    • –––, 2016a, “Robots, Law and the Retribution Gap”, Ethics and Information Technology, 18(4): 299–309. doi:10.1007/s10676-016-9403-3
    • –––, 2016b, “The Threat of Algocracy: Reality, Resistance and Accommodation”, Philosophy & Technology, 29(3): 245–268. doi:10.1007/s13347-015-0211-1
    • –––, 2019a, Automation and Utopia: Human Flourishing in a World without Work, Cambridge, MA: Harvard University Press.
    • –––, 2019b, “The Philosophical Case for Robot Friendship”, Journal of Posthuman Studies, 3(1): 5–24. doi:10.5325/jpoststud.3.1.0005
    • –––, forthcoming, “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism”, Science and Engineering Ethics, first online: 20 June 2019. doi:10.1007/s11948-019-00119-x
    • Danaher, John and Neil McArthur (eds.), 2017, Robot Sex: Social and Ethical Implications, Boston, MA: MIT Press.
    • DARPA, 1983, “Strategic Computing. New-Generation Computing Technology: A Strategic Plan for Its Development an Application to Critical Problems in Defense”, ADA141982, 28 October 1983. [DARPA 1983 available online]
    • Dennett, Daniel C., 2017, From Bacteria to Bach and Back: The Evolution of Minds, New York: W.W. Norton.
    • Devlin, Kate, 2018, Turned On: Science, Sex and Robots, London: Bloomsbury.
    • Diakopoulos, Nicholas, 2015, “Algorithmic Accountability: Journalistic Investigation of Computational Power Structures”, Digital Journalism, 3(3): 398–415. doi:10.1080/21670811.2014.976411
    • Dignum, Virginia, 2018, “Ethics in Artificial Intelligence: Introduction to the Special Issue”, Ethics and Information Technology, 20(1): 1–3. doi:10.1007/s10676-018-9450-z
    • Domingos, Pedro, 2015, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, London: Allen Lane.
    • Draper, Heather, Tom Sorell, Sandra Bedaf, Dag Sverre Syrdal, Carolina Gutierrez-Ruiz, Alexandre Duclos, and Farshid Amirabdollahian, 2014, “Ethical Dimensions of Human-Robot Interactions in the Care of Older People: Insights from 21 Focus Groups Convened in the UK, France and the Netherlands”, in International Conference on Social Robotics 2014, Michael Beetz, Benjamin Johnston, and Mary-Anne Williams (eds.), (Lecture Notes in Artificial Intelligence 8755), Cham: Springer International Publishing, 135–145. doi:10.1007/978-3-319-11973-1_14
    • Dressel, Julia and Hany Farid, 2018, “The Accuracy, Fairness, and Limits of Predicting Recidivism”, Science Advances, 4(1): eaao5580. doi:10.1126/sciadv.aao5580
    • Drexler, K. Eric, 2019, “Reframing Superintelligence: Comprehensive AI Services as General Intelligence”, FHI Technical Report, 2019-1, 1-210. [Drexler 2019 available online]
    • Dreyfus, Hubert L., 1972, What Computers Still Can’t Do: A Critique of Artificial Reason, second edition, Cambridge, MA: MIT Press 1992.
    • Dreyfus, Hubert L., Stuart E. Dreyfus, and Tom Athanasiou, 1986, Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer, New York: Free Press.
    • Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith, 2006, “Calibrating Noise to Sensitivity in Private Data Analysis”, in Theory of Cryptography (TCC 2006), Shai Halevi and Tal Rabin (eds.), (Lecture Notes in Computer Science 3876), Berlin, Heidelberg: Springer, 265–284. doi:10.1007/11681878_14
    • Eden, Amnon H., James H. Moor, Johnny H. Søraker, and Eric Steinhart (eds.), 2012, Singularity Hypotheses: A Scientific and Philosophical Assessment, (The Frontiers Collection), Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-32560-1
    • Eubanks, Virginia, 2018, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, London: St. Martin’s Press.
    • European Commission, 2013, “How Many People Work in Agriculture in the European Union? An Answer Based on Eurostat Data Sources”, EU Agricultural Economics Briefs, 8 (July 2013). [European Commission 2013 available online]
    • European Group on Ethics in Science and New Technologies, 2018, “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems”, 9 March 2018, European Commission, Directorate-General for Research and Innovation, Unit RTD.01. [European Group 2018 available online ]
    • Ferguson, Andrew Guthrie, 2017, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement, New York: NYU Press.
    • Floridi, Luciano, 2016, “Should We Be Afraid of AI? Machines Seem to Be Getting Smarter and Smarter and Much Better at Human Jobs, yet True AI Is Utterly Implausible. Why?”, Aeon, 9 May 2016. URL = <Floridi 2016 available online>
    • Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke, and Effy Vayena, 2018, “AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations”, Minds and Machines, 28(4): 689–707. doi:10.1007/s11023-018-9482-5
    • Floridi, Luciano and Jeff W. Sanders, 2004, “On the Morality of Artificial Agents”, Minds and Machines, 14(3): 349–379. doi:10.1023/B:MIND.0000035461.63578.9d
    • Floridi, Luciano and Mariarosaria Taddeo, 2016, “What Is Data Ethics?”, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083): 20160360. doi:10.1098/rsta.2016.0360
    • Foot, Philippa, 1967, “The Problem of Abortion and the Doctrine of the Double Effect”, Oxford Review, 5: 5–15.
    • Fosch-Villaronga, Eduard and Jordi Albo-Canals, 2019, “‘I’ll Take Care of You,’ Said the Robot”, Paladyn, Journal of Behavioral Robotics, 10(1): 77–93. doi:10.1515/pjbr-2019-0006
    • Frank, Lily and Sven Nyholm, 2017, “Robot Sex and Consent: Is Consent to Sex between a Robot and a Human Conceivable, Possible, and Desirable?”, Artificial Intelligence and Law, 25(3): 305–323. doi:10.1007/s10506-017-9212-y
    • Frankfurt, Harry G., 1971, “Freedom of the Will and the Concept of a Person”, The Journal of Philosophy, 68(1): 5–20.
    • Frey, Carl Benedikt, 2019, The Technology Trap: Capital, Labour, and Power in the Age of Automation, Princeton, NJ: Princeton University Press.
    • Frey, Carl Benedikt and Michael A. Osborne, 2013, “The Future of Employment: How Susceptible Are Jobs to Computerisation?”, Oxford Martin School Working Papers, 17 September 2013. [Frey and Osborne 2013 available online]
    • Ganascia, Jean-Gabriel, 2017, Le mythe de la singularité, Paris: Éditions du Seuil.
    • EU Parliament, 2016, “Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(Inl))”, Committee on Legal Affairs, 10.11.2016. https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html
    • EU Regulation, 2016/679, “General Data Protection Regulation: Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/Ec”, Official Journal of the European Union, 119 (4 May 2016), 1–88. [Regulation (EU) 2016/679 available online]
    • Geraci, Robert M., 2008, “Apocalyptic AI: Religion and the Promise of Artificial Intelligence”, Journal of the American Academy of Religion, 76(1): 138–166. doi:10.1093/jaarel/lfm101
    • –––, 2010, Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780195393026.001.0001
    • Gerdes, Anne, 2016, “The Issue of Moral Consideration in Robot Ethics”, ACM SIGCAS Computers and Society, 45(3): 274–279. doi:10.1145/2874239.2874278
    • German Federal Ministry of Transport and Digital Infrastructure, 2017, “Report of the Ethics Commission: Automated and Connected Driving”, June 2017, 1–36. [GFMTDI 2017 available online]
    • Gertz, Nolen, 2018, Nihilism and Technology, London: Rowman & Littlefield.
    • Gewirth, Alan, 1978, “The Golden Rule Rationalized”, Midwest Studies in Philosophy, 3(1): 133–147. doi:10.1111/j.1475-4975.1978.tb00353.x
    • Gibert, Martin, 2019, “Éthique Artificielle (Version Grand Public)”, in L’Encyclopédie Philosophique, Maxime Kristanek (ed.), accessed: 16 April 2020, URL = <Gibert 2019 available online>
    • Giubilini, Alberto and Julian Savulescu, 2018, “The Artificial Moral Advisor. The ‘Ideal Observer’ Meets Artificial Intelligence”, Philosophy & Technology, 31(2): 169–188. doi:10.1007/s13347-017-0285-z
    • Good, Irving John, 1965, “Speculations Concerning the First Ultraintelligent Machine”, in Advances in Computers 6, Franz L. Alt and Morris Rubinoff (eds.), New York & London: Academic Press, 31–88. doi:10.1016/S0065-2458(08)60418-0
    • Goodfellow, Ian, Yoshua Bengio, and Aaron Courville, 2016, Deep Learning, Cambridge, MA: MIT Press.
    • Goodman, Bryce and Seth Flaxman, 2017, “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation’”, AI Magazine, 38(3): 50–57. doi:10.1609/aimag.v38i3.2741
    • Goos, Maarten, 2018, “The Impact of Technological Progress on Labour Markets: Policy Challenges”, Oxford Review of Economic Policy, 34(3): 362–375. doi:10.1093/oxrep/gry002
    • Goos, Maarten, Alan Manning, and Anna Salomons, 2009, “Job Polarization in Europe”, American Economic Review, 99(2): 58–63. doi:10.1257/aer.99.2.58
    • Graham, Sandra and Brian S. Lowery, 2004, “Priming Unconscious Racial Stereotypes about Adolescent Offenders”, Law and Human Behavior, 28(5): 483–504. doi:10.1023/B:LAHU.0000046430.65485.1f
    • Gunkel, David J., 2018a, “The Other Question: Can and Should Robots Have Rights?”, Ethics and Information Technology, 20(2): 87–99. doi:10.1007/s10676-017-9442-4
    • –––, 2018b, Robot Rights, Boston, MA: MIT Press.
    • Gunkel, David J. and Joanna J. Bryson (eds.), 2014, Machine Morality: The Machine as Moral Agent and Patient special issue of Philosophy & Technology, 27(1): 1–142.
    • Häggström, Olle, 2016, Here Be Dragons: Science, Technology and the Future of Humanity, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198723547.001.0001
    • Hakli, Raul and Pekka Mäkelä, 2019, “Moral Responsibility of Robots and Hybrid Agents”, The Monist, 102(2): 259–275. doi:10.1093/monist/onz009
    • Hanson, Robin, 2016, The Age of Em: Work, Love and Life When Robots Rule the Earth, Oxford: Oxford University Press.
    • Hansson, Sven Ove, 2013, The Ethics of Risk: Ethical Analysis in an Uncertain World, New York: Palgrave Macmillan.
    • –––, 2018, “How to Perform an Ethical Risk Analysis (eRA)”, Risk Analysis, 38(9): 1820–1829. doi:10.1111/risa.12978
    • Harari, Yuval Noah, 2016, Homo Deus: A Brief History of Tomorrow, New York: Harper.
    • Haskel, Jonathan and Stian Westlake, 2017, Capitalism without Capital: The Rise of the Intangible Economy, Princeton, NJ: Princeton University Press.
    • Houkes, Wybo and Pieter E. Vermaas, 2010, Technical Functions: On the Use and Design of Artefacts, (Philosophy of Engineering and Technology 1), Dordrecht: Springer Netherlands. doi:10.1007/978-90-481-3900-2
    • IEEE, 2019, Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems (First Version), <IEEE 2019 available online>.
    • Jasanoff, Sheila, 2016, The Ethics of Invention: Technology and the Human Future, New York: Norton.
    • Jecker, Nancy S., forthcoming, Ending Midlife Bias: New Values for Old Age, New York: Oxford University Press.
    • Jobin, Anna, Marcello Ienca, and Effy Vayena, 2019, “The Global Landscape of AI Ethics Guidelines”, Nature Machine Intelligence, 1(9): 389–399. doi:10.1038/s42256-019-0088-2
    • Johnson, Deborah G. and Mario Verdicchio, 2017, “Reframing AI Discourse”, Minds and Machines, 27(4): 575–590. doi:10.1007/s11023-017-9417-6
    • Kahneman, Daniel, 2011, Thinking, Fast and Slow, London: Macmillan.
    • Kamm, Frances Myrna, 2016, The Trolley Problem Mysteries, Eric Rakowski (ed.), Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190247157.001.0001
    • Kant, Immanuel, 1781/1787, Kritik der reinen Vernunft. Translated as Critique of Pure Reason, Norman Kemp Smith (trans.), London: Palgrave Macmillan, 1929.
    • Keeling, Geoff, 2020, “Why Trolley Problems Matter for the Ethics of Automated Vehicles”, Science and Engineering Ethics, 26(1): 293–307. doi:10.1007/s11948-019-00096-1
    • Keynes, John Maynard, 1930, “Economic Possibilities for Our Grandchildren”. Reprinted in his Essays in Persuasion, New York: Harcourt Brace, 1932, 358–373.
    • Kissinger, Henry A., 2018, “How the Enlightenment Ends: Philosophically, Intellectually—in Every Way—Human Society Is Unprepared for the Rise of Artificial Intelligence”, The Atlantic, June 2018. [Kissinger 2018 available online]
    • Kurzweil, Ray, 1999, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, London: Penguin.
    • –––, 2005, The Singularity Is Near: When Humans Transcend Biology, London: Viking.
    • –––, 2012, How to Create a Mind: The Secret of Human Thought Revealed, New York: Viking.
    • Lee, Minha, Sander Ackermans, Nena van As, Hanwen Chang, Enzo Lucas, and Wijnand IJsselsteijn, 2019, “Caring for Vincent: A Chatbot for Self-Compassion”, in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems—CHI ’19, Glasgow, Scotland: ACM Press, 1–13. doi:10.1145/3290605.3300932
    • Levy, David, 2007, Love and Sex with Robots: The Evolution of Human-Robot Relationships, New York: Harper & Co.
    • Lighthill, James, 1973, “Artificial Intelligence: A General Survey”, Artificial Intelligence: A Paper Symposium, London: Science Research Council. [Lighthill 1973 available online]
    • Lin, Patrick, 2016, “Why Ethics Matters for Autonomous Cars”, in Autonomous Driving, Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Winner (eds.), Berlin, Heidelberg: Springer Berlin Heidelberg, 69–85. doi:10.1007/978-3-662-48847-8_4
    • Lin, Patrick, Keith Abney, and Ryan Jenkins (eds.), 2017, Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, New York: Oxford University Press. doi:10.1093/oso/9780190652951.001.0001
    • Lin, Patrick, George Bekey, and Keith Abney, 2008, “Autonomous Military Robotics: Risk, Ethics, and Design”, ONR report, California Polytechnic State University, San Luis Obispo, 20 December 2008, 112 pp. [Lin, Bekey, and Abney 2008 available online]
    • Lomas, Meghann, Robert Chevalier, Ernest Vincent Cross, Robert Christopher Garrett, John Hoare, and Michael Kopack, 2012, “Explaining Robot Actions”, in Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction—HRI ’12, Boston, MA: ACM Press, 187–188. doi:10.1145/2157689.2157748
    • Macnish, Kevin, 2017, The Ethics of Surveillance: An Introduction, London: Routledge.
    • Mathur, Arunesh, Gunes Acar, Michael J. Friedman, Elena Lucherini, Jonathan Mayer, Marshini Chetty, and Arvind Narayanan, 2019, “Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites”, Proceedings of the ACM on Human-Computer Interaction, 3(CSCW): art. 81. doi:10.1145/3359183
    • Minsky, Marvin, 1985, The Society of Mind, New York: Simon & Schuster.
    • Misselhorn, Catrin, 2020, “Artificial Systems with Moral Capacities? A Research Design and Its Implementation in a Geriatric Care System”, Artificial Intelligence, 278: art. 103179. doi:10.1016/j.artint.2019.103179
    • Mittelstadt, Brent Daniel and Luciano Floridi, 2016, “The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts”, Science and Engineering Ethics, 22(2): 303–341. doi:10.1007/s11948-015-9652-2
    • Moor, James H., 2006, “The Nature, Importance, and Difficulty of Machine Ethics”, IEEE Intelligent Systems, 21(4): 18–21. doi:10.1109/MIS.2006.80
    • Moravec, Hans, 1990, Mind Children, Cambridge, MA: Harvard University Press.
    • –––, 1998, Robot: Mere Machine to Transcendent Mind, New York: Oxford University Press.
    • Morozov, Evgeny, 2013, To Save Everything, Click Here: The Folly of Technological Solutionism, New York: Public Affairs.
    • Müller, Vincent C., 2012, “Autonomous Cognitive Systems in Real-World Environments: Less Control, More Flexibility and Better Interaction”, Cognitive Computation, 4(3): 212–215. doi:10.1007/s12559-012-9129-4
    • –––, 2016a, “Autonomous Killer Robots Are Probably Good News”, in Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on the Use of Remotely Controlled Weapons, Ezio Di Nucci and Filippo Santoni de Sio (eds.), London: Ashgate, 67–81.
    • ––– (ed.), 2016b, Risks of Artificial Intelligence, London: Chapman & Hall - CRC Press. doi:10.1201/b19187
    • –––, 2018, “In 30 Schritten zum Mond? Zukünftiger Fortschritt in der KI”, Medienkorrespondenz, 20: 5–15. [Müller 2018 available online]
    • –––, 2020, “Measuring Progress in Robotics: Benchmarking and the ‘Measure-Target Confusion’”, in Metrics of Sensory Motor Coordination and Integration in Robots and Animals, Fabio Bonsignorio, Elena Messina, Angel P. del Pobil, and John Hallam (eds.), (Cognitive Systems Monographs 36), Cham: Springer International Publishing, 169–179. doi:10.1007/978-3-030-14126-4_9
    • –––, forthcoming-a, Can Machines Think? Fundamental Problems of Artificial Intelligence, New York: Oxford University Press.
    • ––– (ed.), forthcoming-b, Oxford Handbook of the Philosophy of Artificial Intelligence, New York: Oxford University Press.
    • Müller, Vincent C. and Nick Bostrom, 2016, “Future Progress in Artificial Intelligence: A Survey of Expert Opinion”, in Fundamental Issues of Artificial Intelligence, Vincent C. Müller (ed.), Cham: Springer International Publishing, 555–572. doi:10.1007/978-3-319-26485-1_33
    • Newport, Cal, 2019, Digital Minimalism: On Living Better with Less Technology, London: Penguin.
    • Nørskov, Marco (ed.), 2017, Social Robots, London: Routledge.
    • Nyholm, Sven, 2018a, “Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci”, Science and Engineering Ethics, 24(4): 1201–1219. doi:10.1007/s11948-017-9943-x
    • –––, 2018b, “The Ethics of Crashes with Self-Driving Cars: A Roadmap, II”, Philosophy Compass, 13(7): e12506. doi:10.1111/phc3.12506
    • Nyholm, Sven, and Lily Frank, 2017, “From Sex Robots to Love Robots: Is Mutual Love with a Robot Possible?”, in Danaher and McArthur 2017: 219–243.
    • O’Connell, Mark, 2017, To Be a Machine: Adventures among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death, London: Granta.
    • O’Neil, Cathy, 2016, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, New York: Crown.
    • Omohundro, Steve, 2014, “Autonomous Technology and the Greater Human Good”, Journal of Experimental & Theoretical Artificial Intelligence, 26(3): 303–315. doi:10.1080/0952813X.2014.895111
    • Ord, Toby, 2020, The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury.
    • Powers, Thomas M. and Jean-Gabriel Ganascia, forthcoming, “The Ethics of the Ethics of AI”, in Oxford Handbook of Ethics of Artificial Intelligence, Markus D. Dubber, Frank Pasquale, and Sunit Das (eds.), New York: Oxford University Press.
    • Rawls, John, 1971, A Theory of Justice, Cambridge, MA: Belknap Press.
    • Rees, Martin, 2018, On the Future: Prospects for Humanity, Princeton: Princeton University Press.
    • Richardson, Kathleen, 2016, “Sex Robot Matters: Slavery, the Prostituted, and the Rights of Machines”, IEEE Technology and Society Magazine, 35(2): 46–53. doi:10.1109/MTS.2016.2554421
    • Roessler, Beate, 2017, “Privacy as a Human Right”, Proceedings of the Aristotelian Society, 117(2): 187–206. doi:10.1093/arisoc/aox008
    • Royakkers, Lambèr and Rinie van Est, 2016, Just Ordinary Robots: Automation from Love to War, Boca Raton, FL: CRC Press, Taylor & Francis. doi:10.1201/b18899
    • Russell, Stuart, 2019, Human Compatible: Artificial Intelligence and the Problem of Control, New York: Viking.
    • Russell, Stuart, Daniel Dewey, and Max Tegmark, 2015, “Research Priorities for Robust and Beneficial Artificial Intelligence”, AI Magazine, 36(4): 105–114. doi:10.1609/aimag.v36i4.2577
    • SAE International, 2018, “Taxonomy and Definitions for Terms Related to Driving Automation Systems for on-Road Motor Vehicles”, J3016_201806, 15 June 2018. [SAE International 2018 available online]
    • Sandberg, Anders, 2013, “Feasibility of Whole Brain Emulation”, in Philosophy and Theory of Artificial Intelligence, Vincent C. Müller (ed.), (Studies in Applied Philosophy, Epistemology and Rational Ethics, 5), Berlin, Heidelberg: Springer Berlin Heidelberg, 251–264. doi:10.1007/978-3-642-31674-6_19
    • –––, 2019, “There Is Plenty of Time at the Bottom: The Economics, Risk and Ethics of Time Compression”, Foresight, 21(1): 84–99. doi:10.1108/FS-04-2018-0044
    • Santoni de Sio, Filippo and Jeroen van den Hoven, 2018, “Meaningful Human Control over Autonomous Systems: A Philosophical Account”, Frontiers in Robotics and AI, 5(February): 15. doi:10.3389/frobt.2018.00015
    • Schneier, Bruce, 2015, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World, New York: W. W. Norton.
    • Searle, John R., 1980, “Minds, Brains, and Programs”, Behavioral and Brain Sciences, 3(3): 417–424. doi:10.1017/S0140525X00005756
    • Selbst, Andrew D., Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi, 2019, “Fairness and Abstraction in Sociotechnical Systems”, in Proceedings of the Conference on Fairness, Accountability, and Transparency—FAT* ’19, Atlanta, GA: ACM Press, 59–68. doi:10.1145/3287560.3287598
    • Sennett, Richard, 2018, Building and Dwelling: Ethics for the City, London: Allen Lane.
    • Shanahan, Murray, 2015, The Technological Singularity, Cambridge, MA: MIT Press.
    • Sharkey, Amanda, 2019, “Autonomous Weapons Systems, Killer Robots and Human Dignity”, Ethics and Information Technology, 21(2): 75–87. doi:10.1007/s10676-018-9494-0
    • Sharkey, Amanda and Noel Sharkey, 2011, “The Rights and Wrongs of Robot Care”, in Robot Ethics: The Ethical and Social Implications of Robotics, Patrick Lin, Keith Abney and George Bekey (eds.), Cambridge, MA: MIT Press, 267–282.
    • Shoham, Yoav, Raymond Perrault, Erik Brynjolfsson, Jack Clark, James Manyika, Juan Carlos Niebles, … Zoe Bauer, 2018, “The AI Index 2018 Annual Report”, 17 December 2018, Stanford, CA: AI Index Steering Committee, Human-Centered AI Initiative, Stanford University. [Shoham et al. 2018 available online]
    • Silver, David, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis, 2018, “A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go through Self-Play”, Science, 362(6419): 1140–1144. doi:10.1126/science.aar6404
    • Simon, Herbert A. and Allen Newell, 1958, “Heuristic Problem Solving: The Next Advance in Operations Research”, Operations Research, 6(1): 1–10. doi:10.1287/opre.6.1.1
    • Simpson, Thomas W. and Vincent C. Müller, 2016, “Just War and Robots’ Killings”, The Philosophical Quarterly, 66(263): 302–322. doi:10.1093/pq/pqv075
    • Smolan, Sandy (director), 2016, “The Human Face of Big Data”, PBS Documentary, 24 February 2016, 56 mins.
    • Sparrow, Robert, 2007, “Killer Robots”, Journal of Applied Philosophy, 24(1): 62–77. doi:10.1111/j.1468-5930.2007.00346.x
    • –––, 2016, “Robots in Aged Care: A Dystopian Future?”, AI & Society, 31(4): 445–454. doi:10.1007/s00146-015-0625-4
    • Stahl, Bernd Carsten, Job Timmermans, and Brent Daniel Mittelstadt, 2016, “The Ethics of Computing: A Survey of the Computing-Oriented Literature”, ACM Computing Surveys, 48(4): art. 55. doi:10.1145/2871196
    • Stahl, Bernd Carsten and David Wright, 2018, “Ethics and Privacy in AI and Big Data: Implementing Responsible Research and Innovation”, IEEE Security Privacy, 16(3): 26–33.
    • Stone, Christopher D., 1972, “Should Trees Have Standing? Toward Legal Rights for Natural Objects”, Southern California Law Review, 45: 450–501.
    • Stone, Peter, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro Teller, 2016, “Artificial Intelligence and Life in 2030”, One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel, Stanford University, Stanford, CA, September 2016. [Stone et al. 2016 available online]
    • Strawson, Galen, 1998, “Free Will”, in Routledge Encyclopedia of Philosophy, Taylor & Francis. doi:10.4324/9780415249126-V014-1
    • Sullins, John P., 2012, “Robots, Love, and Sex: The Ethics of Building a Love Machine”, IEEE Transactions on Affective Computing, 3(4): 398–409. doi:10.1109/T-AFFC.2012.31
    • Susser, Daniel, Beate Roessler, and Helen Nissenbaum, 2019, “Technology, Autonomy, and Manipulation”, Internet Policy Review, 8(2): 30 June 2019. [Susser, Roessler, and Nissenbaum 2019 available online]
    • Taddeo, Mariarosaria and Luciano Floridi, 2018, “How AI Can Be a Force for Good”, Science, 361(6404): 751–752. doi:10.1126/science.aat5991
    • Taylor, Linnet and Nadezhda Purtova, 2019, “What Is Responsible and Sustainable Data Science?”, Big Data & Society, 6(2): art. 205395171985811. doi:10.1177/2053951719858114
    • Taylor, Steve, et al., 2018, “Responsible AI – Key Themes, Concerns & Recommendations for European Research and Innovation: Summary of Consultation with Multidisciplinary Experts”, June. [Taylor, et al. 2018 available online]
    • Tegmark, Max, 2017, Life 3.0: Being Human in the Age of Artificial Intelligence, New York: Knopf.
    • Thaler, Richard H. and Cass R. Sunstein, 2008, Nudge: Improving Decisions about Health, Wealth, and Happiness, New York: Penguin.
    • Thompson, Nicholas and Ian Bremmer, 2018, “The AI Cold War That Threatens Us All”, Wired, 23 November 2018. [Thompson and Bremmer 2018 available online]
    • Thomson, Judith Jarvis, 1976, “Killing, Letting Die, and the Trolley Problem”, Monist, 59(2): 204–217. doi:10.5840/monist197659224
    • Torrance, Steve, 2011, “Machine Ethics and the Idea of a More-Than-Human Moral World”, in Anderson and Anderson 2011: 115–137. doi:10.1017/CBO9780511978036.011
    • Trump, Donald J., 2019, “Executive Order on Maintaining American Leadership in Artificial Intelligence”, 11 February 2019. [Trump 2019 available online]
    • Turner, Jacob, 2019, Robot Rules: Regulating Artificial Intelligence, Berlin: Springer. doi:10.1007/978-3-319-96235-1
    • Tzafestas, Spyros G., 2016, Roboethics: A Navigating Overview, (Intelligent Systems, Control and Automation: Science and Engineering 79), Cham: Springer International Publishing. doi:10.1007/978-3-319-21714-7
    • Vallor, Shannon, 2017, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190498511.001.0001
    • Van Lent, Michael, William Fisher, and Michael Mancuso, 2004, “An Explainable Artificial Intelligence System for Small-Unit Tactical Behavior”, in Proceedings of the 16th Conference on Innovative Applications of Artifical Intelligence, (IAAI’04), San Jose, CA: AAAI Press, 900–907.
    • van Wynsberghe, Aimee, 2016, Healthcare Robots: Ethics, Design and Implementation, London: Routledge. doi:10.4324/9781315586397
    • van Wynsberghe, Aimee and Scott Robbins, 2019, “Critiquing the Reasons for Making Artificial Moral Agents”, Science and Engineering Ethics, 25(3): 719–735. doi:10.1007/s11948-018-0030-8
    • Vanderelst, Dieter and Alan Winfield, 2018, “The Dark Side of Ethical Robots”, in Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New Orleans, LA: ACM, 317–322. doi:10.1145/3278721.3278726
    • Veale, Michael and Reuben Binns, 2017, “Fairer Machine Learning in the Real World: Mitigating Discrimination without Collecting Sensitive Data”, Big Data & Society, 4(2): art. 205395171774353. doi:10.1177/2053951717743530
    • Véliz, Carissa, 2019, “Three Things Digital Ethics Can Learn from Medical Ethics”, Nature Electronics, 2(8): 316–318. doi:10.1038/s41928-019-0294-2
    • Verbeek, Peter-Paul, 2011, Moralizing Technology: Understanding and Designing the Morality of Things, Chicago: University of Chicago Press.
    • Wachter, Sandra and Brent Daniel Mittelstadt, 2019, “A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI”, Columbia Business Law Review, 2019(2): 494–620.
    • Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi, 2017, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation”, International Data Privacy Law, 7(2): 76–99. doi:10.1093/idpl/ipx005
    • Wachter, Sandra, Brent Mittelstadt, and Chris Russell, 2018, “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR”, Harvard Journal of Law & Technology, 31(2): 842–887. doi:10.2139/ssrn.3063289
    • Wallach, Wendell and Peter M. Asaro (eds.), 2017, Machine Ethics and Robot Ethics, London: Routledge.
    • Walsh, Toby, 2018, Machines That Think: The Future of Artificial Intelligence, Amherst, MA: Prometheus Books.
    • Westlake, Stian (ed.), 2014, Our Work Here Is Done: Visions of a Robot Economy, London: Nesta. [Westlake 2014 available online]
    • Whittaker, Meredith, Kate Crawford, Roel Dobbe, Genevieve Fried, Elizabeth Kaziunas, Varoon Mathur, … Jason Schultz, 2018, “AI Now Report 2018”, New York: AI Now Institute, New York University. [Whittaker et al. 2018 available online]
    • Whittlestone, Jess, Rune Nyrup, Anna Alexandrova, Kanta Dihal, and Stephen Cave, 2019, “Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research”, Cambridge: Nuffield Foundation, University of Cambridge. [Whittlestone 2019 available online]
    • Winfield, Alan, Katina Michael, Jeremy Pitt, and Vanessa Evers (eds.), 2019, Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems, special issue of Proceedings of the IEEE, 107(3): 501–632.
    • Woollard, Fiona and Frances Howard-Snyder, 2016, “Doing vs. Allowing Harm”, Stanford Encyclopedia of Philosophy (Winter 2016 edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2016/entries/doing-allowing/>
    • Woolley, Samuel C. and Philip N. Howard (eds.), 2017, Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media, Oxford: Oxford University Press. doi:10.1093/oso/9780190931407.001.0001
    • Yampolskiy, Roman V. (ed.), 2018, Artificial Intelligence Safety and Security, Boca Raton, FL: Chapman and Hall/CRC. doi:10.1201/9781351251389
    • Yeung, Karen and Martin Lodge (eds.), 2019, Algorithmic Regulation, Oxford: Oxford University Press. doi:10.1093/oso/9780198838494.001.0001
    • Zayed, Yago and Philip Loft, 2019, “Agriculture: Historical Statistics”, House of Commons Briefing Paper, 3339(25 June 2019): 1-19. [Zayed and Loft 2019 available online]
    • Zerilli, John, Alistair Knott, James Maclaurin, and Colin Gavaghan, 2019, “Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?”, Philosophy & Technology, 32(4): 661–683. doi:10.1007/s13347-018-0330-6
    • Zuboff, Shoshana, 2019, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, New York: Public Affairs.

