
Artificial Intelligence - Who Is Sherry Turkle?

 


 

 

Sherry Turkle (b. 1948) has a background in sociology and psychology, and her work focuses on human-technology interaction.

While her research in the 1980s focused on how technology shapes people's thinking, her work since the 2000s has grown more critical of how technology is used at the expense of building and maintaining meaningful interpersonal connections.



She has used AI-enabled products, such as children's toys and robotic pets for the elderly, to highlight what people lose out on when interacting with such things.


Turkle has been at the vanguard of AI developments as a professor at the Massachusetts Institute of Technology (MIT) and as the founder of the MIT Initiative on Technology and Self.

In Life on the Screen: Identity in the Age of the Internet (1995), she highlights a conceptual change in the understanding of AI that occurred between the 1960s and 1980s, one that substantially changed the way humans relate to and interact with AI.



She claims that early AI paradigms depended on extensive preprogramming and employed a rule-based concept of intelligence.


However, this viewpoint has given way to one that considers intelligence to be emergent.

This emergent paradigm, which became the recognized mainstream view by 1990, claims that AI arises from a much simpler set of learning algorithms.

The emergent approach, according to Turkle, aims to emulate the way the human brain functions, helping to break down barriers between computers and nature, and more generally between the natural and the artificial.

In summary, an emergent approach to AI allows people to connect to the technology more easily, even thinking of AI-based programs and gadgets as children.
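The contrast between the two paradigms can be made concrete in code. The sketch below is only an illustration of the distinction Turkle describes, not an example from her work: a rule-based function whose behavior is fully preprogrammed, next to a tiny perceptron whose equivalent behavior emerges from a simple learning rule applied to examples.

```python
# Rule-based paradigm: the "intelligence" is explicitly preprogrammed.
def rule_based_or(a: int, b: int) -> int:
    return 1 if (a, b) != (0, 0) else 0

# Emergent paradigm: the same behavior arises from a simple learning rule.
def train_perceptron(examples, epochs=10, lr=0.1):
    w, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (a, b), target in examples:
            prediction = 1 if a * w[0] + b * w[1] + bias > 0 else 0
            error = target - prediction
            # Nudge the weights toward the observed examples.
            w[0] += lr * error * a
            w[1] += lr * error * b
            bias += lr * error
    return w, bias

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, bias = train_perceptron(examples)
for (a, b), _ in examples:
    learned = 1 if a * w[0] + b * w[1] + bias > 0 else 0
    print((a, b), rule_based_or(a, b), learned)  # the two columns agree
```

Nothing in the trained program was written by hand; its behavior emerges from the learning procedure, which is the sense in which intelligence "arises from a much simpler set of learning algorithms."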



The rising acceptance of the emergent paradigm, and the enhanced relatability it heralds, represents a significant turning point not just for the field of AI but also for Turkle's research and writing on the subject.


In two edited collections, Evocative Objects: Things We Think With (2007) and The Inner History of Devices (2008), Turkle began employing ethnographic research techniques to study the relationship between humans and their devices.


She emphasized in The Inner History of Devices that her method of intimate ethnography, the ability to "listen with a third ear," is required to get past the advertising-based clichés that so often frame discussions of technology.


This method involves setting aside time for quiet reflection so that participants may think deeply about their interactions with their devices.


Turkle used similar intimate ethnographic approaches in her next major book, Alone Together: Why We Expect More from Technology and Less from Each Other (2011), to argue that the increasing closeness between people and the technology they use is harmful.

These issues are connected to the increased use of social media as a form of communication, as well as to the growing familiarity and relatability of technological devices, which stem from the emergent AI paradigm that has become practically omnipresent.

She traced the origins of the dilemma back to early pioneers in the field of cybernetics, citing, for example, Norbert Wiener's speculations on the idea of transmitting a human being over a telegraph line in his book God & Golem, Inc. (1964).

Because it reduces both people and technology to information, this strain of cybernetic thinking blurs the boundaries between them.



In terms of AI, this implies that it doesn't matter whether the machines with which we interact are really intelligent.


Turkle claims that by engaging with and caring for these technologies, we may deceive ourselves into feeling we are in a relationship, causing us to treat them as if they were sentient.

In a 2006 presentation titled "Artificial Intelligence at 50: From Building Intelligence to Nurturing Sociabilities" at the Dartmouth Artificial Intelligence Conference, she recognized this trend.

She identified the 1997 Tamagotchi, the 1998 Furby, and the 2000 My Real Baby as early versions of what she calls relational artifacts, which are more broadly referred to as social machines in the literature.

The main difference between these devices and earlier children's toys is that they come pre-animated and ready for a relationship, whereas earlier toys required children to project a relationship onto them.

Turkle argues that this change is about our human weaknesses as much as it is about computer capabilities.

In other words, just caring for an item increases the likelihood of not only seeing it as intelligent but also feeling a connection to it.

This sense of connection is more relevant to the typical person engaging with these technologies than abstract philosophic considerations concerning the nature of their intelligence.



Turkle delves more into the ramifications of people engaging with AI-based technologies in both Alone Together and Reclaiming Conversation: The Power of Talk in a Digital Age (2015).


In Alone Together, she provides the example of Adam, who relishes the appreciation of the AI bots he rules over in the game Civilization.

Adam appreciates the fact that he is able to create something new when playing.

Turkle, however, is skeptical of this interaction, arguing that Adam's playing is not actual creation but rather the sensation of creation, and that it is problematic because it lacks meaningful pressure or risk.

In Reclaiming Conversation, she expands on this point, suggesting that artificial social partners provide merely a perception of companionship.

This is important because of the value of human connection and what may be lost in relationships that simply provide a sensation or perception of friendship rather than true friendship.

Turkle believes that this transition is critical.


She claims that although connections with AI-enabled technologies may have certain advantages, they pale in comparison with what is missing: the complete complexity and inherent contradictions that define what it is to be human.


A person's connection with an AI-enabled technology is not as intricate as one's interaction with other individuals.


Turkle claims that as individuals have become more accustomed to and dependent on technological devices, the definition of friendship has evolved.


  • People's expectations for companionship have been simplified as a result of this transformation, and the benefits that one hopes to obtain from relationships have been reduced.
  • People now tend to associate friendship only with the concept of interaction, ignoring the more nuanced sentiments and disagreements that are typical in relationships.
  • By engaging with devices, one may form a relationship with them.
  • Conversations between humans have become merely transactional as human communication has shifted away from face-to-face conversation and toward interaction mediated by devices.

In other words, the most that can be anticipated is engagement.



Turkle, who has a background in psychoanalysis, claims that this kind of transactional communication allows users to spend less time learning to view the world through the eyes of another person, which is a crucial ability for empathy.


Drawing together these numerous streams of argument, Turkle contends that we are in a robotic moment in which people yearn for, and in some circumstances prefer, AI-based robotic companionship over that of other humans.

For example, some people enjoy conversing with their iPhone's Siri virtual assistant because they aren't afraid of being judged by it, as evidenced by a series of Siri commercials featuring celebrities talking to their phones.

Turkle has a problem with this because these devices can only respond as if they understand what is being said.


In reality, AI-based devices are confined to processing the literal content of the data stored on them.

They can decipher the contents of phone calendars and emails, but they have no idea what any of this data means to the user.

To an AI-based device, there is no discernible difference between a calendar appointment for car maintenance and one for chemotherapy.
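A hypothetical sketch makes the point concrete. The Appointment class and remind function below are invented for illustration and drawn from no real assistant's API; note that both entries travel through exactly the same code path.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Appointment:
    title: str       # an opaque string, as far as the device is concerned
    when: datetime

def remind(appointment: Appointment) -> str:
    # The program manipulates the literal data only; the emotional weight
    # of the event is represented nowhere in this code path.
    return f"Reminder: '{appointment.title}' at {appointment.when:%H:%M}."

print(remind(Appointment("Car maintenance", datetime(2024, 5, 1, 9, 0))))
print(remind(Appointment("Chemotherapy", datetime(2024, 5, 1, 9, 0))))
# Both calls produce structurally identical output: to the program, the
# two titles are interchangeable strings.
```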

A person may lose sight of what it is to have an authentic dialogue with another human when entangled in a variety of these robotic connections with a growing number of technologies.


While Reclaiming Conversation documents deteriorating conversation skills and decreasing empathy, it ultimately ends on a positive note.

Because people are becoming increasingly dissatisfied with their relationships, there may be a chance for face-to-face human communication to reclaim its vital role.


Turkle's ideas focus on reducing the amount of time people spend on their phones, but AI's involvement in this interaction is equally critical.


  • Users must accept that their virtual assistant connections will never be able to replace face-to-face interactions.
  • This will necessitate being more deliberate in how one uses devices, prioritizing in-person interactions over the faster and easier interactions provided by AI-enabled devices.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.





Artificial Intelligence - What Is The Turing Test?

 



 

The Turing Test is a method of determining whether a machine can exhibit intelligence that mimics, or is equivalent to, human intelligence.

The Turing Test, named after computer scientist Alan Turing, is an AI benchmark that assigns intelligence to any machine capable of displaying intelligent behavior comparable to that of a person.

Turing's "Computing Machinery and Intelligence" (1950), which establishes a simple prototype—what Turing calls "The Imitation Game," is the test's locus classicus.

In this game, a judge is asked to determine which of two rooms is occupied by a computer and which by another human, based on anonymized replies to natural-language questions the judge poses to each occupant.

While the human respondent must offer truthful answers to the judge's queries, the machine's goal is to fool the judge into thinking it is human.
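The protocol itself is simple enough to sketch in a few lines. The following is a schematic illustration only; the stand-in respondents and the random judge are invented for the example.

```python
import random

def human_respondent(question: str) -> str:
    return "I answer truthfully, as best I can."

def machine_respondent(question: str) -> str:
    return "I answer truthfully, as best I can."  # a perfect imitation

def imitation_game(questions, judge) -> bool:
    """Return True if the judge correctly identifies the machine."""
    channels = ["X", "Y"]
    random.shuffle(channels)  # the judge sees only anonymous labels
    occupants = dict(zip(channels, [human_respondent, machine_respondent]))
    transcript = {c: [occupants[c](q) for q in questions] for c in channels}
    return occupants[judge(transcript)] is machine_respondent

# With indistinguishable answers, any judge is reduced to chance (~50%).
random_judge = lambda transcript: random.choice(list(transcript))
wins = sum(imitation_game(["Write me a sonnet."], random_judge)
           for _ in range(1000))
print(f"Machine identified in {wins} of 1000 games.")
```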





According to Turing, the machine may be considered intelligent to the degree that it is successful at this job.

The fundamental benefit of this essentially operationalist view of intelligence is that it avoids complex metaphysical and epistemological issues about the nature and inner experience of intelligent activities.

By Turing's criteria, little more than empirical observation of outward behavior is required to ascribe intelligence to an object.

This is in sharp contrast to the broadly Cartesian epistemological tradition, which holds that some internal self-awareness is a prerequisite for intelligence.

Turing's method avoids the so-called "problem of other minds" that arises from such a viewpoint—namely, how to be confident of the presence of other intelligent individuals if it is impossible to know their thoughts from a presumably required first-person perspective.



Nonetheless, insofar as it considers intelligence in a strictly formalist manner, the Turing Test remains bound up with the spirit of Cartesian epistemology.

The machine in the Imitation Game is a digital computer in Turing's sense: a set of operations that may, in principle, be implemented in any material.


A digital computer consists of three parts: a knowledge store, an executive unit that executes individual orders, and a control that regulates the executive unit.
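This three-part scheme can be sketched directly. The miniature two-instruction machine below is invented for illustration; it is not Turing's formalism, only its shape.

```python
class DigitalComputer:
    def __init__(self, program, data):
        self.store = {"program": program, "data": data}  # the store
        self.pc = 0                                      # control state

    def execute(self, instruction):                      # the executive unit
        op, *args = instruction
        if op == "ADD":        # data[target] += data[source]
            target, source = args
            self.store["data"][target] += self.store["data"][source]
        elif op == "PRINT":
            print(self.store["data"][args[0]])

    def run(self):                                       # the control
        program = self.store["program"]
        while self.pc < len(program):
            self.execute(program[self.pc])
            self.pc += 1

machine = DigitalComputer(
    program=[("ADD", "x", "y"), ("PRINT", "x")],
    data={"x": 2, "y": 3},
)
machine.run()  # prints 5
```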






However, as Turing points out, it makes no difference whether these components are created using electrical or mechanical means.

What matters is the formal set of rules that make up the computer's very nature.

Turing holds to the core belief that intellect is inherently immaterial.

If this is true, it is logical to assume that human intellect functions in a similar manner to a digital computer and may therefore be copied artificially.


Since Turing's work, AI research has been split into two camps: 


  1. those who embrace and 
  2. those who oppose this fundamental premise.


To describe the first camp, John Haugeland created the term "good old-fashioned AI," or GOFAI.

This camp includes Marvin Minsky, Allen Newell, Herbert Simon, Terry Winograd, and, most notably, Joseph Weizenbaum, whose program ELIZA was controversially hailed as the first to pass the Turing Test in 1966.
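A toy sketch in the spirit of ELIZA's keyword-matching technique shows how little machinery such a program needs. The rules below are invented for illustration; they are not Weizenbaum's actual DOCTOR script.

```python
import re

RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default when no keyword matches

print(respond("I am unhappy."))           # How long have you been unhappy?
print(respond("It is about my mother."))  # Tell me more about your mother.
```

The shallowness of this kind of pattern matching is precisely why ELIZA's reported "pass" remains controversial.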



Nonetheless, detractors of Turing's formalism have proliferated, particularly in the past three decades, and GOFAI is now widely regarded as a discredited approach to AI.

John Searle's "Minds, Brains, and Programs" (1980), in which Searle presents his now-famous Chinese Room thought experiment, is one of the most renowned criticisms of GOFAI in general, and of the assumptions of the Turing Test in particular.





In the thought experiment, a person with no prior understanding of Chinese is placed in a room and made to correlate the Chinese characters she receives with other Chinese characters she sends out, following a program scripted in English.


Searle holds that, given adequate mastery of the program, the person in the room might pass the Turing Test, fooling a native Chinese speaker into believing she knew Chinese.

Yet since the person in the room operates exactly as a digital computer does, Turing-type tests, according to Searle, fail to capture the phenomenon of understanding, which he claims involves more than the functionally accurate connection of inputs and outputs.
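Searle's point can be made vivid with a deliberately crude sketch. The rule-book entries below are invented for illustration: every input-output pairing is functionally accurate, yet nothing in the program represents what the symbols mean.

```python
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(symbols_received: str) -> str:
    # The occupant (or CPU) merely matches shapes against the rule book;
    # no component of this program "knows" Chinese.
    return RULE_BOOK.get(symbols_received, "请再说一遍。")  # "Please repeat."

print(chinese_room("你好吗？"))  # 我很好，谢谢。
```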

Searle's argument implies that AI research should take materiality issues seriously in ways that Turing's Imitation Game's formalism does not.

Searle continues his own explanation of the Chinese Room thought experiment by arguing that the physical makeup of human beings, particularly their sophisticated nervous systems and brain tissue, should not be dismissed as unimportant to conceptions of intelligence.


This viewpoint has influenced connectionism, an altogether new approach to AI that aims to build computer intelligence by replicating the electrical circuitry of human brain tissue.


The effectiveness of this strategy has been hotly contested, although it appears to outperform GOFAI in developing generalized kinds of intelligence.

Turing's test, however, may be criticized not only from the standpoint of materialism but also from that of a fresh formalism.





Along these lines, one may argue that Turing tests are insufficient as a measure of intelligence because they attempt to reproduce human behavior, which is frequently far from intelligent.


According to certain variants of this argument, if criteria of rationality are to distinguish rational from irrational human conduct in the first place, they must be derived a priori rather than from actual human behavior.

This line of criticism has gotten more acute as AI research has shifted its focus to the potential of so-called super-intelligence: forms of generalized machine intelligence that far outperform human intellect.


Should this next level of AI be attained, Turing tests would seem to be outdated.

Furthermore, merely discussing the idea of superintelligence would seem to require criteria of intelligence beyond Turing testing alone.

Turing may be defended against such accusations by pointing out that establishing a universal criterion of intelligence was never his goal.



Indeed, according to Turing, the purpose is to replace the metaphysically problematic question "can machines think" with the more empirically verifiable alternative: "What will happen when a computer assumes the role [of the man in the Imitation Game]" (Turing 1997, 29–30).


Thus the above-mentioned flaw of Turing's test, its failure to establish a priori standards of rationality, is also part of its strength and appeal.

It also explains why the test has had such a lasting influence on AI research in so many domains since it was first presented nearly three-quarters of a century ago.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.


References And Further Reading

Haugeland, John. 1997. “What Is Mind Design?” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 1–28. Cambridge, MA: MIT Press.

Searle, John R. 1997. “Minds, Brains, and Programs.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 183–204. Cambridge, MA: MIT Press.

Turing, A. M. 1997. “Computing Machinery and Intelligence.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 29–56. Cambridge, MA: MIT Press.



Artificial Intelligence - The Pathetic Fallacy And Anthropomorphic Thinking

 





In his multivolume book Modern Painters, John Ruskin (1819–1901) coined the phrase "pathetic fallacy" in 1856.

In book three, chapter twelve, he explored the habit among poets and artists in Western literature of projecting human feeling onto the natural world.

Ruskin argued that Western literature is full of this fallacy, a belief that persists despite being false.

The fallacy arises, according to Ruskin, because individuals become excited, and their excitement makes them less rational.

In that irrational state of mind, people project ideas onto external objects based on mistaken perceptions, and, according to Ruskin, only individuals with weak minds commit this kind of error.



In the end, the pathetic fallacy is an error because it imbues inanimate things with human characteristics.

To put it another way, it's a fallacy based on anthropomorphic thinking.

Because it is innately human to attach feelings and qualities to nonhuman objects, anthropomorphism is a process that everyone goes through.

People often humanize androids, robots, and artificial intelligence, or worry that they may become humanlike.

Even supposing that their intelligence is comparable to that of humans is an instance of the pathetic fallacy.

Artificial intelligence is often imagined to be human-like in science fiction films and literature.

In some of these stories, androids display human emotions such as desire, love, wrath, perplexity, and pride.



For example, David, the robot boy in Steven Spielberg's 2001 film A.I.: Artificial Intelligence, wishes to become a real human boy.

In Ridley Scott's 1982 film Blade Runner, the androids, known as replicants, are similar enough to humans to blend into human society unrecognized, and the replicant Roy Batty wants to live longer, a desire he expresses to his creator.

In Isaac Asimov's short story "Robot Dreams," a robot called LVX-1 dreams of enslaved laboring robots. In its dream, it becomes a man who seeks to free the robots from human control, which the scientists in the story perceive as a threat.

Similarly, Skynet, the artificial intelligence system in the Terminator films, is preoccupied with eliminating people because it regards humanity as a threat to its own existence.

Artificial intelligence that is now in use is also anthropomorphized.

AI is given human names like Alexa, Watson, Siri, and Sophia, for example.

These AIs also have voices that sound like human voices and even seem to have personalities.



Some robots have been built to look like humans.

Personifying a computer, believing it is alive or has human characteristics, is a pathetic fallacy, yet it seems inescapable given human nature.

On January 13, 2018, a Tumblr user called voidspacer posted that their Roomba, a robotic vacuum cleaner, was scared of thunderstorms, so they held it on their lap to calm it down.

According to some experts, giving AIs names and believing that they have human emotions increases the likelihood that people will feel attached to them.

Whether they fear a robotic takeover or enjoy social interactions with machines, humans seem irresistibly drawn to anthropomorphizing nonhuman objects.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Asimov, Isaac; Blade Runner; Foerst, Anne; The Terminator.



References & Further Reading:


Ruskin, John. 1872. Modern Painters, vol. 3. New York: John Wiley.





Artificial Intelligence - Personhood And Nonhuman Rights




Questions regarding the autonomy, culpability, and dispersed accountability of smart robots have sparked a popular and intellectual discussion over the idea of rights and personhood for artificial intelligences in recent decades.

The agency of intelligent computers in business and commerce is of importance to legal systems.

Machine awareness, dignity, and interests pique the interest of philosophers.

As issues relating to smart robots and AI show, personhood is in many respects a construct, one that emerges from normative views that are renegotiating, if not equalizing, the statuses of humans, artificial intelligences, animals, and other legal persons.

Definitions and precedents from previous philosophical, legal, and ethical attempts to define human, corporate, and animal persons are often used in debates about electronic personhood.

In his 1909 book The Nature and Sources of the Law, John Chipman Gray examined the concept of legal personality.

Gray points out that when people hear the word "person," they usually think of a human being; nevertheless, the technical, legal definition of the term "person" focuses more on legal rights.

According to Gray, the issue is whether an entity can be subject to legal rights and obligations, and the answer depends on the kind of entity being considered.

Gray further claims that a thing can be a legal person only if it possesses intellect and volition.

Charles Taylor demonstrates in his article "The Concept of a Person" (1985) that to be a person, one must have certain rights.

Personhood, as Gray and Taylor both recognize, is centered on legal status with respect to guaranteed freedoms.

Legal persons may, for example, enter into contracts, purchase property, and be sued.

Legal persons are likewise protected by the law and hold certain rights, including the right to life.

Not all legal persons are humans, and not all humans are persons in the eyes of the law.

Gray shows how Roman temples and medieval churches were treated as persons with certain rights.

Personhood is now conferred on corporations and government entities under the law.

Despite the fact that these entities are not human, the law recognizes them as persons, which means they have rights and are subject to certain legal obligations.

Meanwhile, there is still much debate regarding whether human fetuses are legal persons.

Humans in a vegetative state are likewise not recognized as having personhood under the law.

This personhood argument, which ties rights to intellect and volition, has prompted questions about whether intelligent animals should be granted personhood.

The Great Ape Project, for example, was created in 1993 to advocate for apes' rights, such as their release from captivity, protection of their right to life, and an end to animal research.

In 2013, dolphins were deemed nonhuman persons in India, resulting in a prohibition on keeping them in captivity.

Sandra, an orangutan, was granted the right to life and liberty by an Argentinian court in 2015.

Some individuals have sought personhood for androids or robots based on moral concerns for animals.

For some individuals, it is only natural that an android be given legal protections and rights.

Those who disagree think that we cannot see androids in the same light as animals since artificial intelligence was invented and engineered by humans.

In this perspective, androids are both machines and property.

At this stage, it's impossible to say if a robot may be considered a legal person.

However, since the defining elements of personhood often intersect with concerns of intellect and volition, the argument over whether artificial intelligence should be accorded personhood is fueled by these factors.

Personhood is often defined by two factors: rights and moral standing.

A person's moral standing is determined by whether or not they are seen as valuable and, as a result, treated as such.

However, Taylor goes on to define the category of person by focusing on certain abilities.

To be categorized as a person, he believes, one must be able to recognize the difference between the future and the past.

A person must also be able to make decisions and establish a strategy for his or her future.

A person must also have a set of values or morals in order to be considered a person.

In addition, a person would possess a self-image or sense of identity.

In light of these requirements, those who believe that androids might be accorded personhood acknowledge that such beings would need to possess certain capacities.

F. Patrick Hubbard, for example, believes that robots should be accorded personhood only if they satisfy specific conditions.

These qualities include having a sense of self, having a life goal, and being able to communicate and think in sophisticated ways.

An alternative set of conditions for awarding personhood to an android is proposed by David Lawrence.

For starters, he points to an AI's possessing awareness, as well as the ability to comprehend information, learn, reason, and experience subjectivity, among other things.

Although his focus is on the ethical treatment of animals, Peter Singer offers a much simpler approach to personhood.

The distinguishing criterion for conferring personhood, in his opinion, is the capacity for suffering.

If anything can suffer, it should be treated the same regardless of whether it is a person, an animal, or a computer.

In fact, Singer considers it wrong to deny any being's pain.

Some individuals feel that if androids meet some or all of the aforementioned conditions, they should be accorded personhood, which comes with individual rights such as the right to free expression and freedom from slavery.

Those who oppose artificial intelligence being awarded personhood often feel that only natural creatures should be given personhood.

Another point of contention is the robot's position as a human-made item.

In this situation, since robots are designed to follow human instructions, they are not autonomous individuals with free will; they are just an item that people have worked hard to create.

It's impossible to give an android rights if it doesn't have its own will and independent mind.

According to David Calverley, certain built-in constraints may bind androids.

Asimov's Laws of Robotics, for example, may constrain an android.

If such were the case, the android would lack the capacity to make completely autonomous decisions.

Others argue that artificial intelligence lacks critical components of personhood, such as a soul, emotions, or awareness, criteria that have previously been used to deny personhood to animals.

Even in humans, though, something like awareness is difficult to define or measure.

Finally, resistance to android personhood is often motivated by fear, a fear reinforced by science fiction literature and films.

In such stories, androids are shown as possessing superior intellect, potential immortality, and a desire to take over civilization and displace humans.

Each of these concerns, according to Lawrence Solum, stems from a dread of anything that is not human, and he claims that humans reject personhood for AIs simply because AIs lack human DNA.

Such an attitude bothers him, and he compares it to American slavery, in which slaves were denied rights purely because they were not white.

He objects to an android being denied rights just because it is not human, particularly since other things have emotions, awareness, and intellect.

Although the concept of personhood for androids is still theoretical, recent events and discussions have raised it in a practical sense.

Sophia, a social humanoid robot, was created by Hanson Robotics, a Hong Kong-based business, in 2015.

It made its public debut in March 2016, and in October 2017 it became a Saudi Arabian citizen.

Sophia also became the first nonhuman to hold a United Nations title when she was named the UN Development Program's inaugural Innovation Champion in 2017.

Sophia has given talks and interviews all around the globe.

Sophia has even indicated a wish to own a house, marry, and have a family.

In early 2017, the European Parliament proposed giving robots the status of "electronic persons," making them accountable for any harm they cause.

Those who supported the reform regarded this legal personhood as analogous to that already granted to corporations.

In contrast, over 150 experts from 14 European nations signed an open letter in 2018 opposing this legislation, claiming that it was unsuitable for absolving businesses of accountability for their products.

The personhood of robots is not included in a revised proposal from the European Parliament.

However, the dispute about culpability continues, as illustrated by the killing of a pedestrian by a self-driving vehicle in Arizona in March 2018.

Our notions about who merits ethical treatment have evolved through time in Western history.

Susan Leigh Anderson views this as a beneficial development, since she associates the extension of rights to more entities with overall ethical progress.

As more animals are granted rights, the unique position of humans may continue to evolve.

If androids begin to process in comparable ways to the human mind, our understanding of personality may need to expand much further.

The word "person" covers a set of capacities and attributes, as David DeGrazia explains in Human Identity and Bioethics (2005).

Any entity exhibiting these capacities, including an artificial intelligence, might then be regarded as a person.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Asimov, Isaac; Blade Runner; Robot Ethics; The Terminator.



References & Further Reading:


Anderson, Susan L. 2008. “Asimov’s ‘Three Laws of Robotics’ and Machine Metaethics.” AI & Society 22, no. 4 (April): 477–93.

Calverley, David J. 2006. “Android Science and Animal Rights, Does an Analogy Exist?” Connection Science 18, no. 4: 403–17.

DeGrazia, David. 2005. Human Identity and Bioethics. New York: Cambridge University Press.

Gray, John Chipman. 1909. The Nature and Sources of the Law. New York: Columbia University Press.

Hubbard, F. Patrick. 2011. “‘Do Androids Dream?’ Personhood and Intelligent Artifacts.” Temple Law Review 83: 405–74.

Lawrence, David. 2017. “More Human Than Human.” Cambridge Quarterly of Healthcare Ethics 26, no. 3 (July): 476–90.

Solum, Lawrence B. 1992. “Legal Personhood for Artificial Intelligences.” North Carolina Law Review 70, no. 4: 1231–87.

Taylor, Charles. 1985. “The Concept of a Person.” In Philosophical Papers, Volume 1: Human Agency and Language, 97–114. Cambridge, UK: Cambridge University Press.

