
Artificial Intelligence - Who Is Sherry Turkle?

 


 

 

Sherry Turkle (1948–) has a background in sociology and psychology, and her work focuses on human-technology interaction.

Whereas her research in the 1980s focused on how technology affects the way people think, her work since the 2000s has become more critical of how technology is used at the expense of building and maintaining meaningful interpersonal relationships.



She has studied AI-based products such as children's toys and robotic pets for the elderly to highlight what people lose when they interact with such things.


Turkle has been at the vanguard of AI developments as a professor at the Massachusetts Institute of Technology (MIT) and the founder of the MIT Initiative on Technology and Self.

In Life on the Screen: Identity in the Age of the Internet (1995), she highlights a conceptual shift in the understanding of AI that occurred between the 1960s and the 1980s, one that substantially changed the way humans relate to and interact with AI.



She claims that early AI paradigms depended on extensive preprogramming and employed a rule-based concept of intelligence.


However, this viewpoint has since given way to one that considers intelligence to be emergent.

This emergent paradigm, which became the recognized mainstream view by 1990, claims that AI arises from a much simpler set of learning algorithms.

The emergent approach, according to Turkle, aims to emulate the way the human brain functions, helping to break down barriers between computers and nature, and more generally between the natural and the artificial.

In summary, an emergent approach to AI allows people to connect to the technology more easily, even thinking of AI-based programs and gadgets as children.



The rising acceptance of the emergent paradigm of AI, and the enhanced relatability it heralds, represents a significant turning point not just for the field of AI but also for Turkle's study of and writing on the subject.


In two edited collections, Evocative Objects: Things We Think With (2007) and The Inner History of Devices (2008), Turkle began to employ ethnographic research techniques to study the relationship between humans and their devices.


She emphasized in her book The Inner History of Devices that her intimate ethnography, or the ability to "listen with a third ear," is required to go past the advertising-based clichés that are often employed when addressing technology.


This method involves setting aside time for quiet reflection so that participants may think thoroughly about their interactions with their devices.


Turkle used similar intimate ethnographic approaches in her next major book, Alone Together: Why We Expect More from Technology and Less from Each Other (2011), to argue that the increasing closeness between people and the technology they use is harmful.

These concerns are connected to the increased use of social media as a form of communication, as well as to the ever-greater familiarity and relatability of technological gadgets, which stems from the emergent AI paradigm that has become practically omnipresent.

She traced the origins of the dilemma back to early pioneers in the field of cybernetics, citing, for example, Norbert Wiener's speculations on the idea of transmitting a human being across a telegraph line in his book God & Golem, Inc. (1964).

Because it reduces both people and technology to information, this strain of cybernetic thinking blurs the boundaries between them.



In terms of AI, this implies that it doesn't matter whether the machines with which we interact are really intelligent.


Turkle claims that by engaging with and caring for these technologies, we may deceive ourselves into feeling we are in a relationship, causing us to treat them as if they were sentient.

In a 2006 presentation titled "Artificial Intelligence at 50: From Building Intelligence to Nurturing Sociabilities" at the Dartmouth Artificial Intelligence Conference, she recognized this trend.

She identified the 1997 Tamagotchi, the 1998 Furby, and the 2000 My Real Baby as early versions of what she calls relational artifacts, which are more broadly referred to as social machines in the literature.

The main difference between these devices and previous children's toys is that these devices come pre-animated and ready for a relationship, whereas previous children's toys required children to project a relationship onto them.

Turkle argues that this change is about our human weaknesses as much as it is about computer capabilities.

In other words, just caring for an item increases the likelihood of not only seeing it as intelligent but also feeling a connection to it.

This sense of connection is more relevant to the typical person engaging with these technologies than abstract philosophic considerations concerning the nature of their intelligence.



Turkle delves more into the ramifications of people engaging with AI-based technologies in both Alone Together and Reclaiming Conversation: The Power of Talk in a Digital Age (2015).


In Alone Together, she gives the example of Adam, who enjoys the admiration of the AI bots he rules over in the game Civilization.

Adam enjoys the feeling that he is creating something new when he plays.

Turkle, however, is skeptical of this interaction, arguing that Adam's play is not actual creation but merely the sensation of creation, and that it is problematic because it lacks meaningful pressure or risk.

In Reclaiming Conversation, she expands on this point, suggesting that artificial social partners provide only the perception of companionship.

This is important because of the value of human connection and what may be lost in relationships that simply provide a sensation or perception of friendship rather than true friendship.

Turkle believes that this transition is critical.


She claims that although connections with AI-enabled technologies may have certain advantages, they pale in comparison to what is missing: the full complexity and inherent contradictions that define what it is to be human.


A person's connection with an AI-enabled technology is not as intricate as one's interaction with other individuals.


Turkle claims that as individuals have become more used to and dependent on technology gadgets, the definition of friendship has evolved.


  • This transformation has simplified people's expectations of companionship and reduced the benefits they hope to obtain from relationships.
  • People now tend to associate friendship only with the concept of interaction, ignoring the more nuanced feelings and disagreements that are typical of relationships.
  • By engaging with gadgets, one may form a relationship with them.
  • Conversations between humans have become merely transactional as human communication has shifted away from face-to-face conversation and toward interaction mediated by devices.

In other words, the most that can be anticipated is engagement.



Turkle, who has a background in psychoanalysis, claims that this kind of transactional communication allows users to spend less time learning to view the world through the eyes of another person, which is a crucial ability for empathy.


Drawing these numerous streams of argument together, Turkle argues that we are in a robotic moment in which people yearn for, and in some circumstances prefer, AI-based robotic companionship over that of other humans.

For example, some people enjoy conversing with their iPhone's Siri virtual assistant because they aren't afraid of being judged by it, as evidenced by a series of Siri commercials featuring celebrities talking to their phones.

Turkle has a problem with this because these devices can only respond as if they understand what is being said.


Such AI-based gadgets are confined to comprehending the literal meaning of the data stored on the device.

They can decipher the contents of phone calendars and emails, but they have no idea what any of this data means to the user.

To an AI-based device, there is no discernible difference between a calendar appointment for car maintenance and one for chemotherapy.

A person may lose sight of what it is to have an authentic dialogue with another human when entangled in a variety of these robotic connections with a growing number of technologies.


While Reclaiming Conversation documents deteriorating conversation skills and decreasing empathy, it ultimately ends on a positive note.

Because people are becoming increasingly dissatisfied with their relationships, there may be a chance for face-to-face human communication to reclaim its vital role.


Turkle's ideas focus on reducing the amount of time people spend on their phones, but AI's involvement in this interaction is equally critical.


  • Users must accept that their virtual assistant connections will never be able to replace face-to-face interactions.
  • This will necessitate being more deliberate in how one uses devices, prioritizing in-person interactions over the faster and easier interactions provided by AI-enabled devices.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.


References And Further Reading

  • Haugeland, John. 1997. “What Is Mind Design?” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 1–28. Cambridge, MA: MIT Press.
  • Searle, John R. 1997. “Minds, Brains, and Programs.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 183–204. Cambridge, MA: MIT Press.
  • Turing, A. M. 1997. “Computing Machinery and Intelligence.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 29–56. Cambridge, MA: MIT Press.



Artificial Intelligence - What Is The Turing Test?

 



 

The Turing Test is a method of determining whether a machine can exhibit intelligence that mimics or is indistinguishable from human intelligence.

The Turing Test, named after computer scientist Alan Turing, is an AI benchmark that assigns intelligence to any machine capable of displaying intelligent behavior comparable to that of a person.

The test's locus classicus is Turing's "Computing Machinery and Intelligence" (1950), which establishes a simple prototype that Turing calls "The Imitation Game."

In this game, a judge must determine, based on anonymized replies to natural-language questions posed to each occupant, which of two rooms holds a computer and which another human.

While the human respondent must answer the judge's queries truthfully, the machine's purpose is to fool the judge into thinking it is human.
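The game's logic can be captured in a short sketch. The following Python toy is illustrative only (the judge, human, and machine are stand-in callables, and the pass criterion is a simplification of Turing's own framing): it runs one round of the game and reports how often the judge finds the human, where accuracy near chance means the machine has "won."

```python
import random

def imitation_game(judge, human, machine, questions):
    """Play one toy round of Turing's Imitation Game.

    judge(question, answer_a, answer_b) returns "A" or "B": its guess
    at which room holds the human. human and machine are callables
    mapping a question to a reply.
    """
    rooms = {"A": human, "B": machine}
    if random.random() < 0.5:       # shuffle rooms so ordering gives nothing away
        rooms = {"A": machine, "B": human}
    correct = 0
    for q in questions:
        guess = judge(q, rooms["A"](q), rooms["B"](q))
        if rooms[guess] is human:
            correct += 1
    # Accuracy near 0.5 means the judge is guessing: the machine "passes."
    return correct / len(questions)
```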





According to Turing, the machine may be considered intelligent to the degree that it is successful at this job.

The fundamental benefit of this essentially operationalist view of intelligence is that it avoids difficult metaphysical and epistemological questions about the nature and inner experience of intelligent activity.

By Turing's criteria, little more than empirical observation of outward behavior is required to ascribe intelligence to an object.

This stands in sharp contrast to the broadly Cartesian epistemological tradition, which holds that some internal self-awareness is a prerequisite for intelligence.

Turing's method avoids the so-called "problem of other minds" that arises from such a viewpoint: how to be confident of the existence of other intelligent individuals if it is impossible to know their minds from a presumably required first-person perspective.



Nonetheless, the Turing Test, at least insofar as it treats intelligence in a strictly formalist manner, is bound up with the spirit of Cartesian epistemology.

The machine in the Imitation Game is a digital computer in Turing's sense: a set of operations that may theoretically be implemented in any material.


A digital computer consists of three parts: a store of information, an executive unit that carries out individual operations, and a control that regulates the executive unit.
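This three-part decomposition is easy to render as a toy interpreter. In the Python sketch below, the instruction set is invented purely for illustration: the dictionary plays the store, the branch that carries out one order plays the executive unit, and the program-counter logic plays the control.

```python
def run(program, store):
    """A toy Turing-style digital computer: store, executive unit, control."""
    pc = 0                               # the control: which order comes next
    while pc < len(program):
        op, *args = program[pc]          # the executive unit carries out one order
        if op == "SET":                  # SET x n  -> store x = n
            store[args[0]] = args[1]
        elif op == "ADD":                # ADD x y  -> store x += store y
            store[args[0]] += store[args[1]]
        elif op == "JUMP_IF_ZERO":       # conditional transfer of control
            if store[args[0]] == 0:
                pc = args[1]
                continue
        pc += 1
    return store

# run([("SET", "x", 2), ("ADD", "x", "x")], {})  ->  {"x": 4}
```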






However, as Turing points out, it makes no difference whether these components are created using electrical or mechanical means.

What matters is the formal set of rules that make up the computer's very nature.

Turing holds to the core belief that intellect is inherently immaterial.

If this is true, it is logical to assume that human intellect functions in a similar manner to a digital computer and may therefore be copied artificially.


Since Turing's work, AI research has been split into two camps: those who embrace this fundamental premise and those who oppose it.


To describe the first camp, John Haugeland coined the term "good old-fashioned AI," or GOFAI.

This camp includes Marvin Minsky, Allen Newell, Herbert Simon, Terry Winograd, and, most notably, Joseph Weizenbaum, whose program ELIZA was controversially hailed as the first to pass the Turing Test in 1966.



Nonetheless, detractors of Turing's formalism have proliferated, particularly in the past three decades, and GOFAI is now widely regarded as a discredited AI technique.

One of the most renowned criticisms of GOFAI in general, and of the assumptions of the Turing Test in particular, is John Searle's "Minds, Brains, and Programs" (1980), in which Searle presents his now-famous Chinese Room thought experiment.





In this experiment, a person with no prior understanding of Chinese is placed in a room and instructed to correlate the Chinese characters she receives with the Chinese characters she sends out, following a program scripted in English.


Searle argues that, given adequate mastery of the program, the person in the room might pass the Turing Test, fooling a native Chinese speaker into thinking she knew Chinese.

But since the person in the room is functioning exactly as a digital computer does, Turing-type tests, according to Searle, fail to capture the phenomenon of understanding, which he claims involves more than the functionally accurate correlation of inputs and outputs.
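The force of the thought experiment is that the room's behavior reduces to symbol lookup. A minimal Python sketch (the rule-book entries below are placeholders, not part of Searle's text) shows how input-output behavior can be functionally correct with no understanding anywhere in the system:

```python
# The "rule book" pairs incoming symbol strings with outgoing ones.
# The operator who applies it needs no grasp of what any symbol means.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # placeholder entries; any mapping would do
    "你叫什么名字？": "我叫小房间。",
}

def chinese_room(symbols: str) -> str:
    # Replies that look right to an outside observer, with zero understanding.
    return RULE_BOOK.get(symbols, "请再说一遍。")
```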

Searle's argument implies that AI research should take materiality issues seriously in ways that Turing's Imitation Game's formalism does not.

Searle continues his explanation of the Chinese Room thought experiment by arguing that the physical makeup of human beings, particularly their sophisticated nervous systems and brain tissue, should not be dismissed as unimportant to conceptions of intelligence.


This viewpoint has influenced connectionism, an alternative approach to AI that aims to build machine intelligence by replicating the electrical circuitry of human brain tissue.


The effectiveness of this strategy has been hotly contested, although it appears to outperform GOFAI at developing generalized kinds of intelligence.

Turing's test, however, may be criticized not only from the standpoint of materialism but also from that of a fresh formalism.





One may argue, for instance, that Turing tests are insufficient as a measure of intelligence because they attempt to reproduce human behavior, which is frequently far from intelligent.


According to certain variants of this argument, if criteria of rationality are to distinguish rational from irrational human conduct in the first place, they must be derived a priori rather than from actual human behavior.

This line of criticism has gotten more acute as AI research has shifted its focus to the potential of so-called super-intelligence: forms of generalized machine intelligence that far outperform human intellect.


Should this next level of AI be attained, Turing tests would seem to be outdated.

Furthermore, merely discussing the idea of superintelligence would seem to require criteria of intelligence beyond strict Turing testing.

Turing may be defended against such criticism by pointing out that establishing a universal criterion of intelligence was never his goal.



Indeed, according to Turing (1997, 29–30), the purpose is to replace the metaphysically problematic question "Can machines think?" with a more empirically verifiable alternative:

"What will happen when a computer assumes the role [of the man in the Imitation Game]" (Turing 1997, 29–30).


Thus the above-mentioned flaw of Turing's test, namely that it fails to establish a priori standards of rationality, is also part of its strength and motivation.

It also explains why the test has had such a lasting effect on AI research in all domains since it was first presented three-quarters of a century ago.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.


References And Further Reading

Haugeland, John. 1997. “What Is Mind Design?” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 1–28. Cambridge, MA: MIT Press.

Searle, John R. 1997. “Minds, Brains, and Programs.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 183–204. Cambridge, MA: MIT Press.

Turing, A. M. 1997. “Computing Machinery and Intelligence.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 29–56. Cambridge, MA: MIT Press.



Artificial Intelligence - What Is The PARRY Computer Program?




PARRY (short for paranoia), created by Stanford University psychiatrist Kenneth Colby, was the first computer program to imitate a mental patient.

The psychiatrist-user communicates with PARRY in simple English.

PARRY's responses are intended to mirror the cognitive (mal)functioning of a paranoid patient.

In the late 1960s and early 1970s, Colby experimented with mental patient chatbots, which led to the development of PARRY.

Colby sought to illustrate that cognition is fundamentally a symbol manipulation process and that computer simulations may help psychiatric research.

Many technical aspects of PARRY were shared with Joseph Weizenbaum's ELIZA.

Both of these applications were conversational in nature, allowing the user to submit remarks in plain English.

PARRY's underlying algorithms, like ELIZA's, examined inputted phrases for essential terms to create plausible answers.





PARRY, on the other hand, was given a backstory in order to imitate the appropriate paranoid behaviors.

The fictitious Parry was a gambler who had gotten into a conflict with a bookie.

Parry was paranoid enough to assume that the bookie would send the Mafia after him.

As a result, PARRY freely shared its delusional ideas about the Mafia, as if hoping to enlist the user's assistance.

PARRY was also programmed to be "sensitive to his parents, religion, and sex" (Colby 1975, 36).

On most other topics of conversation, the program was neutral.

If PARRY couldn't find a match in its database, it might respond with "I don't know," "Why do you ask that?" or by returning to an earlier subject (Colby 1975, 77).
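This scheme of keyword matching, a fixed backstory, and canned fallbacks lends itself to a compact sketch. The Python toy below is illustrative only; the keywords and replies are invented, not drawn from Colby's actual data.

```python
import random

# Keyword-triggered lines tied to Parry's backstory (invented examples).
STORY = {
    "bookie": "The bookie cheated me, and now he wants revenge.",
    "mafia": "The Mafia has people watching me. I know it.",
    "horses": "I went to the track once too often. That was the start of it.",
}
FALLBACKS = ["I don't know.", "Why do you ask that?"]

def parry_reply(user_input, topics):
    """Match keywords; otherwise fall back or return to an earlier subject."""
    text = user_input.lower()
    for keyword, reply in STORY.items():
        if keyword in text:
            topics.append(keyword)      # remember the subject for later
            return reply
    if topics:                          # no match: revisit an earlier topic
        return STORY[topics[-1]]
    return random.choice(FALLBACKS)
```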

Whereas ELIZA's success made Weizenbaum a skeptic of AI, PARRY's results bolstered Colby's support for computer simulations in psychiatry.

Colby picked paranoia as the mental state to simulate because its behavior is the least fluid and hence the easiest to observe.

Colby felt that human cognition was a process of symbol manipulation, as did artificial intelligence pioneers Herbert Simon and Allen Newell.

As a result, PARRY's cognitive functioning resembled that of a paranoid human being.

Colby emphasized that a psychiatrist conversing with PARRY learned something about human paranoia.

He saw PARRY as a tool to help novice psychiatrists get started in their careers.

PARRY's reactions might also be used to determine the most effective lines of therapeutic discourse.

Colby hoped that systems like PARRY would help confirm or refute psychiatric hypotheses while also bolstering the field's scientific credibility.

Colby tested his shame-humiliation theory of paranoid insanity on PARRY.

In the 1970s, Colby performed a series of studies to see how effectively PARRY could simulate true paranoia.

Two of these examinations resembled the Turing Test.

First, practicing psychiatrists were asked to interview patients using a teletype terminal, an early electromechanical typewriter used to send and receive typed messages over telecommunications lines.

The doctors were unaware that PARRY was one of the patients who took part in the interviews.

The transcripts of these interviews were then distributed to a group of 100 psychiatrists.

These psychiatrists were tasked with determining which interviewee was a computer.

Twenty psychiatrists correctly identified PARRY, while the other twenty did not.

Transcripts were also sent to a group of 100 computer scientists; of the 67 who responded, 32 answered correctly and 35 incorrectly.

According to Colby, the findings "are akin to tossing a coin" statistically, and PARRY was not exposed (Colby 1975, 92).



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Chatbots and Loebner Prize; ELIZA; Expert Systems; Natural Language Processing and Speech Understanding; Turing Test.


References & Further Reading:


Cerf, Vincent. 1973. “Parry Encounters the Doctor: Conversation between a Simulated Paranoid and a Simulated Psychiatrist.” Datamation 19, no. 7 (July): 62–65.

Colby, Kenneth M. 1975. Artificial Paranoia: A Computer Simulation of Paranoid Processes. New York: Pergamon Press.

Colby, Kenneth M., James B. Watt, and John P. Gilbert. 1966. “A Computer Method of Psychotherapy: Preliminary Communication.” Journal of Nervous and Mental Disease 142, no. 2 (February): 148–52.

McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence, 251–56, 308–28. San Francisco: W. H. Freeman and Company.

Warren, Jim. 1976. Artificial Paranoia: An NIMH Program Report. Rockville, MD: US. Department of Health, Education, and Welfare, Public Health Service, Alcohol, Drug Abuse, and Mental Health Administration, National Institute of Mental Health, Division of Scientific and Public Information, Mental Health Studies and Reports Branch.






Artificial Intelligence - What Is The ELIZA Software?

 



ELIZA is a conversational computer program created by German-American computer scientist Joseph Weizenbaum at the Massachusetts Institute of Technology between 1964 and 1966.


Weizenbaum worked on ELIZA as part of a groundbreaking artificial intelligence research team on the DARPA-funded Project MAC (Mathematics and Computation), which was directed by Marvin Minsky.

Weizenbaum named ELIZA after Eliza Doolittle, the fictional character in the play Pygmalion who learns to speak proper English; the play had recently been adapted into the successful 1964 film My Fair Lady.


ELIZA was created with the goal of allowing a person to communicate with a computer system in plain English.


Weizenbaum became an AI skeptic as a result of ELIZA's popularity among users.

When communicating with ELIZA, users may type any statement into the system's open-ended interface.

ELIZA will often answer by asking a question, much like a Rogerian psychotherapist attempting to delve deeper into the patient's core thoughts.

The program recycles portions of the user's remarks throughout the conversation, giving the impression that ELIZA is genuinely listening.


In reality, Weizenbaum had designed ELIZA around a tree-like decision structure.


The user's statements are first scanned for important terms.

If more than one keyword is found, the terms are ranked in order of significance.

For example, if a user types in "I suppose everybody laughs at me," the keyword "everybody," not "I," is the most important for ELIZA to respond to.

To generate a response, the program uses a set of assembly rules to build a suitable sentence structure around those key terms.

Alternatively, if the user's input contains no words found in ELIZA's database, the program produces a content-free remark or repeats an earlier response.
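This keyword-ranking pass can be sketched in a few lines. The Python toy below uses a tiny invented script, not Weizenbaum's actual rules, but reproduces the mechanism: rank the keywords found, build a reply around the best one, and fall back on a content-free remark.

```python
import random
import re

# (rank, reassembly template) per keyword; a tiny invented script.
KEYWORDS = {
    "everybody": (10, "Who in particular are you thinking of?"),
    "mother":    (8,  "Tell me more about your family."),
    "i":         (1,  "You say {rest}?"),
}
CONTENT_FREE = ["Please go on.", "I see.", "Very interesting."]

def eliza_reply(statement):
    words = re.findall(r"[a-z']+", statement.lower())
    found = [w for w in words if w in KEYWORDS]
    if not found:
        return random.choice(CONTENT_FREE)          # no keyword: content-free remark
    best = max(found, key=lambda w: KEYWORDS[w][0])  # highest-ranked keyword wins
    template = KEYWORDS[best][1]
    rest = statement.lower().split(best, 1)[-1].strip()
    return template.format(rest=rest) if "{rest}" in template else template

# eliza_reply("I suppose everybody laughs at me")
# -> "Who in particular are you thinking of?"   ("everybody" outranks "i")
```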


ELIZA was created by Weizenbaum to investigate the meaning of machine intelligence.


Weizenbaum drew his inspiration from a comment made by MIT cognitive scientist Marvin Minsky, reported in a 1962 article in Datamation.

"Intelligence was just a characteristic human observers were willing to assign to processes they didn't comprehend, and only for as long as they didn't understand them," Minsky had claimed (Weizenbaum 1962).

If such was the case, Weizenbaum concluded, artificial intelligence's main goal was to "fool certain onlookers for a while" (Weizenbaum 1962).


ELIZA was designed to do precisely that: give users plausible answers while concealing how little the program actually understands, in order to keep the user's faith in its intelligence alive a bit longer.


Weizenbaum was taken aback by how successful ELIZA became.

ELIZA's Rogerian script became popular as a program renamed DOCTOR at MIT, and by the late 1960s it had been distributed to other university campuses, where the program was reconstructed from Weizenbaum's 1966 description published in the journal Communications of the ACM.

The application deceived (too) many users, even those who were well-versed in its methods.


Most notably, some users grew so engrossed with ELIZA that they demanded that others leave the room so they could have a private session with "the" DOCTOR.


But it was the psychiatric community's reaction that made Weizenbaum deeply skeptical of contemporary artificial intelligence ambitions in general, and of promises of computer comprehension of natural language in particular.

Kenneth Colby, a Stanford University psychiatrist with whom Weizenbaum had previously collaborated, created PARRY at about the same time that Weizenbaum released ELIZA.


Colby, unlike Weizenbaum, thought that programs like PARRY and ELIZA were beneficial to psychology and public health.


Such programs, he argued, aided the development of diagnostic tools and could enable a single psychiatric computer to treat hundreds of patients.

Weizenbaum's worries and emotional plea to the community of computer scientists were eventually conveyed in his book Computer Power and Human Reason (1976).

In this, at the time, hotly debated book, Weizenbaum railed against those who ignored the fundamental distinctions between man and machine, arguing that "there are certain tasks which computers ought not be made to do, independent of whether computers can be made to do them" (Weizenbaum 1976, x).


~ Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Chatbots and Loebner Prize; Expert Systems; Minsky, Marvin; Natural Language Processing and Speech Understanding; PARRY; Turing Test.


Further Reading:


McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence, 251–56, 308–28. San Francisco: W. H. Freeman and Company.

Weizenbaum, Joseph. 1962. “How to Make a Computer Appear Intelligent: Five in a Row Offers No Guarantees.” Datamation 8 (February): 24–26.

Weizenbaum, Joseph. 1966. “ELIZA: A Computer Program for the Study of Natural Language Communication between Man and Machine.” Communications of the ACM 9, no. 1 (January): 36–45.

Weizenbaum, Joseph. 1976. Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W. H. Freeman and Company.



Artificial Intelligence - What Is The Loebner Prize For Chatbots? Who Is Lili Cheng?



A chatbot is a computer program that communicates with people using artificial intelligence. The conversations may take place through text or voice input.

In certain circumstances, chatbots are also designed to take automated actions in response to human input, such as running an application or sending an email.

Most chatbots try to mimic human conversational behavior, though no chatbot has so far succeeded in doing so flawlessly.




Chatbots may serve a number of needs in a variety of circumstances.

Perhaps the most obvious is their capacity to save time and money by employing a computer program, rather than a person, to gather or disseminate information.

For example, a corporation may develop a customer-service chatbot that uses artificial intelligence to reply to client inquiries with whatever information the chatbot deems relevant to the user's query.

In this fashion, the chatbot removes the need for a human operator to provide this sort of customer service.

Chatbots may also be useful in other situations because they offer a more convenient means of interacting with a computer or software application.

A digital assistant chatbot such as Apple's Siri or Google Assistant, for example, enables people to use voice input to get information (such as the address of a requested place) or perform actions (such as sending a text message) on their smartphones.

The ability to communicate with phones by speech, rather than typing information on the devices' displays, is helpful in cases where other input methods are cumbersome or unavailable.


Consistency is a third benefit of chatbots.


Because most chatbots answer inquiries using preprogrammed algorithms and data sets, they will often give the same replies to the same questions.

Human operators cannot always be relied upon to act in the same manner: one person's response to a query may differ from another's, and the same person's replies may change from day to day.

In this way, chatbots offer users consistency in both experience and information.

However, chatbots that employ neural networks or other self-learning techniques to answer inquiries may "evolve" over time, with the result that a question posed to a chatbot one day may receive a different response the next.

So far, however, only a handful of chatbots have been built to learn on their own.

Some, such as Microsoft's Tay, have proved problematic.

Chatbots may be created using a number of approaches and can be built in practically any programming language.

However, most chatbots rely on a basic set of capabilities to power their conversational skills and automated decision-making.

One is natural language processing: the capacity to transform human language into data that software can use to make decisions.

Writing code that can process natural language is a difficult endeavor, requiring knowledge of computer science and linguistics as well as significant programming skill.

It demands the capacity to comprehend text or speech from individuals who use a variety of vocabularies, sentence structures, and accents, and who may at times speak sarcastically or deceptively.

Because programmers once had to build natural language processing software from scratch before creating a chatbot, the difficulty of producing good natural language processing engines made chatbots complicated and time-consuming to develop in the past.

Natural language processing programming frameworks and cloud-based services are now widely available, considerably lowering this barrier.

Modern programmers can either employ a cloud-based service such as Amazon Comprehend or Azure Language Understanding to add the capability to parse human language, or simply import a natural language processing library into their applications.
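As a sketch of the library route, the snippet below uses spaCy, assuming it and its small English model are installed (pip install spacy, then python -m spacy download en_core_web_sm); the returned dictionary is an invented convention for whatever decision logic sits downstream.

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # small pretrained English pipeline

def extract_signal(utterance: str) -> dict:
    """Turn raw text into data a chatbot can act on."""
    doc = nlp(utterance)
    entities = [(ent.text, ent.label_) for ent in doc.ents]          # names, dates, places
    keywords = [t.lemma_ for t in doc if t.is_alpha and not t.is_stop]
    return {"entities": entities, "keywords": keywords}

# extract_signal("What's the weather in Paris tomorrow?") might yield
# {"entities": [("Paris", "GPE"), ("tomorrow", "DATE")], "keywords": ["weather", ...]}
```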

Most chatbots also need a database of information in order to respond to queries.

After using natural language processing to determine the meaning of the input, they consult their own data sets to decide which information to provide or which action to take in response to the inquiry.

Most chatbots do this in a fairly simple way, by matching phrases in queries to predefined tags in their internal databases.

More advanced chatbots, however, may be programmed to continuously adjust or expand their internal databases by evaluating how users have reacted to their past behavior.

For example, a chatbot may ask a user whether the answer it provided in response to a specific query was helpful, and if the user replies no, the chatbot adjusts its internal data to avoid repeating that response the next time a similar question is asked.
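A toy version of this tag matching plus feedback loop, with all tags, answers, and score adjustments invented for illustration, might look like this:

```python
# Each tag carries an answer and a score that feedback raises or lowers.
FAQ = {
    "reset password": {"answer": "Use the 'Forgot password' link.", "score": 1.0},
    "billing": {"answer": "Billing questions go to accounts@example.com.", "score": 1.0},
}

def best_match(query: str):
    """Match query phrases to tags; prefer answers users have rated helpful."""
    hits = [(tag, entry) for tag, entry in FAQ.items() if tag in query.lower()]
    if not hits:
        return None
    return max(hits, key=lambda h: h[1]["score"])

def record_feedback(tag: str, helpful: bool):
    """Nudge the score so unhelpful answers stop being the top match."""
    FAQ[tag]["score"] += 0.1 if helpful else -0.5
```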



Although chatbots may be useful in a variety of settings, they are not without flaws and the potential for abuse.


One obvious limitation is that no chatbot has yet proven capable of perfectly simulating human behavior, and chatbots can perform only the tasks they have been programmed to do.

They lack the human aptitude for "thinking outside the box" or solving problems imaginatively.

In many cases, people engaging with a chatbot may be looking for answers to queries that the chatbot was not designed to answer.


Chatbots raise certain ethical issues for similar reasons.


Critics have claimed that it is unethical for a computer program to imitate human behavior without disclosing to the individuals with whom it communicates that it is not a real person.

Some have also argued that chatbots may contribute to an epidemic of loneliness by replacing real human conversations with chatbot conversations that are less intellectually and socially gratifying for human users.

On the other hand, some chatbots, such as Replika, were designed with the express purpose of giving lonely people an entity to talk to when real people are unavailable.

Another issue with chatbots is that, like other software programs, they might be utilized in ways that their authors did not anticipate.

Misuse could occur as a result of software security flaws that allow malicious parties to gain control of a chatbot; for example, an attacker seeking to harm a company's reputation might try to compromise its customer-support chatbot in order to provide false or unhelpful support services.

In other circumstances, simple design flaws or oversights may result in chatbots acting unpredictably.

Microsoft learned this lesson when it debuted the Tay chatbot in 2016.

The Tay chatbot was designed to teach itself new replies based on past conversations.

When users engaged Tay in racist conversations, Tay began making public racist or inflammatory remarks of its own, prompting Microsoft to shut down the app.

The word "chatbot" first appeared in the 1990s as a shortened form of "chatterbot," a term coined in 1994 by computer scientist Michael Mauldin to describe Julia, a conversational program he had built in the early 1990s.


Chatbot-like computer programs, on the other hand, have been around for a long time.


The first was ELIZA, a computer program created by Joseph Weizenbaum at MIT's Artificial Intelligence Lab between 1964 and 1966.

Although the program was confined to just a few topics of conversation, ELIZA employed early natural language processing methods to engage in text-based conversations with human users.

Stanford psychiatrist Kenneth Colby produced a comparable chatbot, PARRY, in 1972.

It wasn't until the 1990s, when natural language processing techniques had advanced, that chatbot development gained traction and programmers got closer to their goal of building chatbots that could participate in discussion on any subject.

A.L.I.C.E., a chatbot that debuted in 1995, and Jabberwacky, a chatbot created in the early 1980s and made accessible on the web in 1997, were both built with this goal in mind.

The second significant wave of chatbot invention occurred in the early 2010s, when increased smartphone usage fueled demand for digital assistant chatbots that could engage with people through voice interactions, beginning with Apple's Siri in 2011.


Throughout most of the history of chatbot development, the Loebner Prize competition has served to measure how effectively chatbots replicate human behavior.


The Loebner Prize, established in 1990, is awarded to the computer system (including, but not limited to, chatbots) that judges deem to demonstrate the most human-like behavior.

Notable chatbots that have competed for the Loebner Prize include A.L.I.C.E., which won the award three times in the early 2000s, and Jabberwacky, which won twice, in 2005 and 2006.


Lili Cheng




Lili Cheng is a Corporate Vice President and Distinguished Engineer in Microsoft's AI and Research division.


She is in charge of the company's artificial intelligence platform's developer tools and services, which include cognitive services, intelligent software assistants and chatbots, as well as data analytics and deep learning tools.

Cheng has emphasized that AI solutions must earn the trust of a broader segment of the community and protect users' privacy.

Her group focuses on artificial intelligence bots and software applications that carry on human-like conversations and interactions.


The ubiquity of social software—technology that lets people connect more effectively with one another—and the interoperability of software assistants, or AIs that chat to one another or pass tasks to one another, are two further ambitions.


Real-time language translation is one example of such an application.

Cheng is also a proponent of technical education and training for individuals, especially women, in order to prepare them for future careers (Davis 2018).

Cheng emphasizes the need to humanize AI.

Rather than adapting human interactions to computer interactions, technology must adapt to people's working cycles.

Language recognition and conversational AI alone, according to Cheng, are insufficient technical advances.

Human emotional needs must be addressed by AI.

One goal of AI research, she says, is to understand "the rational and surprising ways individuals behave." Cheng graduated from Cornell University with a bachelor's degree in architecture.

She began her career as an architect and urban designer at Nihon Sekkei International in Tokyo.

She also worked in Los Angeles for the architectural firm Skidmore Owings & Merrill.

While living in California, Cheng opted to pursue a career in information technology.

She saw architectural design as a well-established industry with well-defined norms and requirements.

Cheng returned to school and graduated from New York University with a master's degree in Interactive Telecommunications, Computer Programming, and Design.

Her first position in this field was at Apple Computer in Cupertino, California, where she worked as a user experience researcher and designer for QuickTime VR and QuickTime Conferencing in the Advanced Technology Group-Human Interface Group.

In 1995, she joined Microsoft's Virtual Worlds Group, where she worked on the Virtual Worlds Platform and Microsoft V-Chat.

Kodu Game Lab, an environment aimed at teaching children programming, was one of Cheng's projects.

In 2001, she founded the Social Computing group with the goal of developing social networking prototypes.

She then worked at Microsoft Research-FUSE Labs as General Manager of Windows User Experience for Windows Vista, eventually rising to the post of Distinguished Engineer and General Manager.

Cheng has spoken at Harvard and New York Universities and is considered one of the country's top female engineers.

~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Cheng, Lili; ELIZA; Natural Language Processing and Speech Understanding; PARRY; Turing Test.


Further Reading


Abu Shawar, Bayan, and Eric Atwell. 2007. “Chatbots: Are They Really Useful?” LDV Forum 22, no.1: 29–49.

Abu Shawar, Bayan, and Eric Atwell. 2015. “ALICE Chatbot: Trials and Outputs.” Computación y Sistemas 19, no. 4: 625–32.

Deshpande, Aditya, Alisha Shahane, Darshana Gadre, Mrunmayi Deshpande, and Prachi M. Joshi. 2017. “A Survey of Various Chatbot Implementation Techniques.” International Journal of Computer Engineering and Applications 11 (May): 1–7.

Shah, Huma, and Kevin Warwick. 2009. “Emotion in the Turing Test: A Downward Trend for Machines in Recent Loebner Prizes.” In Handbook of Research on Synthetic Emotions and Sociable Robotics: New Applications in Affective Computing and Artificial Intelligence, 325–49. Hershey, PA: IGI Global.

Zemčík, Tomáš. 2019. “A Brief History of Chatbots.” In Transactions on Computer Science and Engineering, 14–18. Lancaster: DEStech.


