A chatbot is a computer program that converses with people using artificial intelligence. The conversations may take place through text or voice input.
In certain circumstances, chatbots are also designed to take automated actions in response to human input, such as launching an application or sending an email.
Most chatbots try to mimic human conversational behavior, but so far no chatbot has succeeded in doing so flawlessly.
Chatbots can serve a range of needs in a variety of circumstances. The most obvious benefit is their ability to save people time and money by using a computer program, rather than a person, to gather or disseminate information.
For example, a corporation may deploy a customer service chatbot that uses artificial intelligence to reply to client inquiries with the information it judges most relevant to their queries. In this way, the chatbot removes the need for a human operator to perform this sort of customer service.
Chatbots can also be useful in other situations because they offer a more convenient means of interacting with a computer or software application.
A digital assistant chatbot such as Apple's Siri or Google Assistant, for example, lets people use voice input to retrieve information (such as the address of a requested place) or perform actions (such as sending a text message) on their smartphones.
The ability to talk to a phone by voice, rather than typing on its display, is helpful in situations where other input methods are cumbersome or unavailable.
Consistency is a third benefit of chatbots.
Because most chatbots answer inquiries using preprogrammed algorithms and data sets, they will generally give the same replies to the same questions.
Human operators cannot always be relied upon to behave the same way: one person's response to a query may differ from another's, and the same person's replies may change from day to day.
In this way, chatbots can give the users they communicate with a consistent experience and consistent information.
However, chatbots that use neural networks or other self-learning techniques to answer inquiries may "evolve" over time, with the consequence that a query posed to a chatbot one day may receive a different response the next.
So far, however, only a handful of chatbots have been built to learn on their own, and some, such as Microsoft's Tay, have proved problematic.
Chatbots can be created in a number of ways and in practically any programming language. Most, however, depend on a basic set of capabilities to power their conversational skills and automated decision-making.
One is natural language processing: the ability to transform human words into data that software can use to make decisions. Writing code that can process natural language is a difficult endeavor that requires knowledge of computer science and linguistics along with significant programming effort.
It demands the ability to comprehend text or speech from people who use a wide variety of vocabulary, sentence structures, and accents, and who may at times speak sarcastically or deceptively.
In the past, the difficulty of building good natural language processing engines made chatbots slow and costly to produce, because programmers had to write natural language processing software from scratch before they could build a chatbot.
Natural language processing programming frameworks and
cloud-based services are now widely available, considerably lowering this
barrier.
Modern programmers can use a cloud-based service such as Amazon Comprehend or Azure Language Understanding to add the capability to interpret human language, or they can simply import a natural language processing library into their applications.
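As a rough illustration of the first step of such processing (a hand-rolled sketch, not the API of any of the services named above), a chatbot must at minimum reduce raw text to normalized tokens that later matching logic can work with:

```python
import re

def tokenize(utterance: str) -> list:
    """Lowercase the input and split it into word tokens,
    discarding punctuation. Real NLP libraries go much further
    (stemming, parsing, entity recognition), but normalization
    like this is a common first step."""
    return re.findall(r"[a-z0-9']+", utterance.lower())

print(tokenize("What's your return policy?"))
```

Everything downstream, whether a simple keyword lookup or a full intent classifier, depends on getting this kind of clean, uniform input.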
Most chatbots also need a database of information from which to answer queries. After using natural language processing to determine the meaning of an input, they consult their own data sets to choose which information to provide or which action to take in response.
Most chatbots do this through a very simple process: matching phrases in queries to predefined tags in their internal databases.
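That tag-matching process can be sketched in a few lines of Python. The tags and canned replies below are hypothetical, but the mechanism (scan the query for a keyword tied to a stored answer) is the simple lookup described above:

```python
import re

# Hypothetical internal database mapping keyword tags to canned replies.
RESPONSES = {
    "hours": "We are open 9am-5pm, Monday through Friday.",
    "shipping": "Orders ship within two business days.",
    "refund": "Refunds are issued within 30 days of purchase.",
}

def reply(query: str) -> str:
    """Return the stored reply whose tag appears in the query,
    or a fallback message when no tag matches."""
    words = re.findall(r"[a-z]+", query.lower())
    for tag, answer in RESPONSES.items():
        if tag in words:
            return answer
    return "Sorry, I don't have an answer for that."

print(reply("What are your hours?"))
```

The fallback branch matters in practice: a real chatbot needs a graceful reply for the many queries its database does not cover.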
More advanced chatbots, on the other hand, may be programmed to continuously adjust or expand their internal databases by evaluating how users have reacted to their previous behavior.
For example, a chatbot may ask a user whether the answer it gave to a specific query was helpful; if the user replies no, the chatbot might adjust its internal data to avoid repeating that response the next time a user asks a similar question.
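A minimal sketch of such a feedback loop might look like the following. The class and its data are hypothetical, and a production system would persist and weight feedback far more carefully, but the core idea is the one just described: demote an answer the user flagged as unhelpful.

```python
class FeedbackBot:
    """Toy chatbot that demotes answers users report as unhelpful."""

    def __init__(self, answers_by_tag):
        # Each tag maps to a ranked list of candidate answers;
        # the first element is the current best answer.
        self.answers = answers_by_tag

    def reply(self, tag):
        return self.answers[tag][0]

    def record_feedback(self, tag, helpful):
        # On a "no", rotate the unhelpful answer to the back of the
        # list so a different candidate is offered next time.
        if not helpful and len(self.answers[tag]) > 1:
            self.answers[tag].append(self.answers[tag].pop(0))

bot = FeedbackBot({"refund": ["Please see our FAQ page.",
                              "Refunds are issued within 30 days."]})
print(bot.reply("refund"))                 # current best answer
bot.record_feedback("refund", helpful=False)
print(bot.reply("refund"))                 # a different answer is tried
```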
Although chatbots may be useful in a variety of settings, they are not without flaws and the potential for abuse.
One obvious flaw is that no chatbot has yet been proven to
be capable of perfectly simulating human behavior, and chatbots can only perform
tasks that they have been programmed to do.
They lack humans' aptitude for "thinking outside the box" or solving problems imaginatively.
In many cases, people engaging with a chatbot may be looking
for answers to queries that the chatbot was not designed to answer.
Chatbots raise certain ethical issues for similar reasons.
Chatbot critics have claimed that it is immoral for a
computer program to replicate human behavior without revealing to individuals
with whom it communicates that it is not a real person.
Some have also stated that chatbots may contribute to an
epidemic of loneliness by replacing real human conversations with chatbot
conversations that are less intellectually and socially gratifying for human
users.
On the other hand, some chatbots, such as Replika, were designed with the express purpose of giving lonely people an entity to talk to when real people are unavailable.
Another issue with chatbots is that, like other software
programs, they might be utilized in ways that their authors did not anticipate.
Misuse could occur as a result of software security flaws
that allow malicious parties to gain control of a chatbot; for example, an
attacker seeking to harm a company's reputation might try to compromise its customer-support
chatbot in order to provide false or unhelpful support services.
In other circumstances, simple design flaws or oversights
may result in chatbots acting unpredictably.
Microsoft learned this lesson when it debuted the Tay chatbot in 2016. Tay was designed to teach itself new replies based on past conversations.
When users engaged Tay in racist conversations, Tay began
making public racist or inflammatory remarks of its own, prompting Microsoft to
shut down the app.
The word "chatbot" came into use in the 1990s as a shortened form of "chatterbot," a term coined in 1994 by computer scientist Michael Mauldin to describe Julia, a chatbot he had built in the early 1990s.
Chatbot-like computer programs, however, have existed far longer.
The first was ELIZA, a computer program created by Joseph
Weizenbaum at MIT's Artificial Intelligence Lab between 1964 and 1966.
Although the software was confined to just a few topics, ELIZA used early natural language processing methods to hold text-based conversations with human users.
Stanford psychiatrist Kenneth Colby produced a comparable
chatbot software called PARRY in 1972.
It wasn't until the 1990s, when natural language processing
techniques had advanced, that chatbot development gained traction and
programmers got closer to their goal of building chatbots that could
participate in discussion on any subject.
A.L.I.C.E., a chatbot debuted in 1995, and Jabberwacky, a chatbot created in the early 1980s and made accessible on the web in 1997, were both built with this goal in mind.
The second significant wave of chatbot invention occurred in
the early 2010s, when increased smartphone usage fueled demand for digital
assistant chatbots that could engage with people through voice interactions,
beginning with Apple's Siri in 2011.
For much of the history of chatbot development, the Loebner Prize competition has served to measure how effectively chatbots replicate human behavior.
The Loebner Prize, which was established in 1990, is given
to computer systems (including, but not limited to, chatbots) that judges
believe demonstrate the most human-like behavior.
A.L.I.C.E., which won the award three times in the early 2000s, and Jabberwacky, which won twice, in 2005 and 2006, are two notable chatbots that have competed for the Loebner Prize.
Lili Cheng
Lili Cheng is the Microsoft AI and Research division's Corporate Vice President and Distinguished Engineer.
She is in charge of the company's artificial intelligence
platform's developer tools and services, which include cognitive services,
intelligent software assistants and chatbots, as well as data analytics and
deep learning tools.
Cheng has emphasized that AI solutions must earn the trust of a broader segment of the community and protect users' privacy.
According to Cheng, her group focuses on artificial intelligence bots and software applications that carry on human-like dialogues and interactions.
Two further ambitions are the ubiquity of social software (technology that lets people connect more effectively with one another) and the interoperability of software assistants, that is, AIs that talk to one another or hand tasks off to one another.
Real-time language translation is one example of such an
application.
Cheng is also a proponent of technical education and training, especially for women, to prepare people for future careers (Davis 2018).
Cheng emphasizes the need to humanize AI.
Rather than adapting human interactions to computer
interactions, technology must adapt to people's working cycles.
According to Cheng, language recognition and conversational AI are not sufficient technical advances on their own.
Human emotional needs must be addressed by AI.
One goal of AI research, she says, is to understand "the rational and surprising ways individuals behave." Cheng graduated from Cornell University with a bachelor's degree in architecture.
She began her career as an architect and urban designer at Nihon Sekkei International in Tokyo.
She also worked in Los Angeles for the architectural firm
Skidmore Owings & Merrill.
While living in California, Cheng opted to pursue a career in information technology. She regarded architectural design as a well-established industry with well-defined norms and needs.
Cheng returned to school and graduated from New York
University with a master's degree in Interactive Telecommunications, Computer
Programming, and Design.
Her first position in this field was at Apple Computer in
Cupertino, California, where she worked as a user experience researcher and
designer for QuickTime VR and QuickTime Conferencing in the Advanced Technology
Group-Human Interface Group.
In 1995, she joined Microsoft's Virtual Worlds Group, where
she worked on the Virtual Worlds Platform and Microsoft V-Chat.
One of Cheng's projects was Kodu Game Lab, an environment aimed at teaching children programming.
In 2001, she founded the Social Computing group with the goal
of developing social networking prototypes.
She then worked at Microsoft Research-FUSE Labs as General Manager of Windows User Experience for Windows Vista, eventually rising to the post of Distinguished Engineer and General Manager.
~ Jai Krishna Ponnappan
See also:
Cheng, Lili; ELIZA; Natural Language Processing and Speech Understanding; PARRY; Turing Test.
Further Reading
Abu Shawar, Bayan, and Eric Atwell. 2007. “Chatbots: Are They Really Useful?” LDV Forum 22, no.1: 29–49.
Abu Shawar, Bayan, and Eric Atwell. 2015. “ALICE Chatbot: Trials and Outputs.” Computación y Sistemas 19, no. 4: 625–32.
Deshpande, Aditya, Alisha Shahane, Darshana Gadre, Mrunmayi Deshpande, and Prachi M. Joshi. 2017. “A Survey of Various Chatbot Implementation Techniques.” International Journal of Computer Engineering and Applications 11 (May): 1–7.
Shah, Huma, and Kevin Warwick. 2009. “Emotion in the Turing Test: A Downward Trend for Machines in Recent Loebner Prizes.” In Handbook of Research on Synthetic Emotions and Sociable Robotics: New Applications in Affective Computing and Artificial Intelligence, 325–49. Hershey, PA: IGI Global.
Zemčík, Tomáš. 2019. “A Brief History of Chatbots.” In Transactions on Computer Science and Engineering, 14–18. Lancaster: DEStech.