
Artificial Intelligence - Who Is Sherry Turkle?

 


 

 

Sherry Turkle (1948–) has a background in sociology and psychology, and her work focuses on human-technology interaction.

While her work in the 1980s focused on how technology affects people's thinking, her work since the 2000s has become more critical of how technology is used at the expense of building and maintaining meaningful interpersonal connections.



She has examined artificial intelligence embedded in products such as children's toys and robotic pets for the elderly to highlight what people lose when they interact with such things.


Turkle has been at the vanguard of AI breakthroughs as a professor at the Massachusetts Institute of Technology (MIT) and the founder of the MIT Initiative on Technology and Self.

In Life on the Screen: Identity in the Age of the Internet (1995), she highlights a conceptual shift in the understanding of AI that occurred between the 1960s and the 1980s, one that substantially changed the way humans connect to and interact with AI.



She claims that early AI paradigms depended on extensive preprogramming and employed a rule-based concept of intelligence.


However, this viewpoint has since given way to one that considers intelligence to be emergent.

This emergent paradigm, which became the recognized mainstream view by 1990, claims that AI arises from a much simpler set of learning algorithms.

The emergent approach, according to Turkle, aims to emulate the way the human brain functions, helping to break down barriers between computers and nature, and more generally between the natural and the artificial.

In summary, an emergent approach to AI allows people to connect to the technology more easily, even thinking of AI-based programs and gadgets as children.



The rising acceptance of the emergent paradigm of AI, and the enhanced relatability it heralds, represents a significant turning point not just for the field of AI but also for Turkle's study and writing on the subject.


In two edited collections, Evocative Objects: Things We Think With (2007) and The Inner History of Devices (2008), Turkle began to employ ethnographic research techniques to study the relationship between humans and their devices.


She emphasized in The Inner History of Devices that her intimate ethnography, or the ability to "listen with a third ear," is required to get past the advertising-based clichés that are often employed when discussing technology.


This method involves setting aside time for quiet reflection so that participants may think thoroughly about their interactions with their devices.


Turkle used similar intimate ethnographic approaches in her book Alone Together: Why We Expect More from Technology and Less from Each Other (2011) to argue that the increasing connection between people and the technology they use is harmful.

These issues are connected to the increased use of social media as a form of communication, as well as to the growing familiarity and relatability of technological gadgets, which stems from the emergent AI paradigm that has become practically omnipresent.

She traced the origins of the dilemma back to early pioneers in the field of cybernetics, citing, for example, Norbert Wiener's speculations on the idea of transmitting a human person across a telegraph line in his book God & Golem, Inc. (1964).

Because it reduces both people and technology to information, this approach to cybernetic thinking blurs the barriers between them.



In terms of AI, this implies that it doesn't matter whether the machines with which we interact are really intelligent.


Turkle claims that by engaging with and caring for these technologies, we may deceive ourselves into feeling we are in a relationship, causing us to treat them as if they were sentient.

In a 2006 presentation titled "Artificial Intelligence at 50: From Building Intelligence to Nurturing Sociabilities" at the Dartmouth Artificial Intelligence Conference, she recognized this trend.

She identified the 1997 Tamagotchi, 1998 Furby, and 2000 My Real Baby as early versions of what she refers to as relational artifacts, which are more broadly referred to as social machines in the literature.

The main difference between these devices and previous children's toys is that these devices come pre-animated and ready for a relationship, whereas previous children's toys required children to project a relationship onto them.

Turkle argues that this change is about our human weaknesses as much as it is about computer capabilities.

In other words, just caring for an item increases the likelihood of not only seeing it as intelligent but also feeling a connection to it.

This sense of connection is more relevant to the typical person engaging with these technologies than abstract philosophic considerations concerning the nature of their intelligence.



Turkle delves more into the ramifications of people engaging with AI-based technologies in both Alone Together and Reclaiming Conversation: The Power of Talk in a Digital Age (2015).


In Alone Together, she provides the example of Adam, who enjoys the appreciation of the AI bots he controls in the game Civilization.

Adam appreciates the fact that he is able to create something fresh when playing.

Turkle, on the other hand, is skeptical of this interaction, stating that Adam's playing isn't actual creation, but rather the sensation of creation, and that it's problematic since it lacks meaningful pressure or danger.

In Reclaiming Conversation, she expands on this point, suggesting that such artificial social partners merely provide a perception of camaraderie.

This is important because of the value of human connection and what may be lost in relationships that simply provide a sensation or perception of friendship rather than true friendship.

Turkle believes that this transition is critical.


She claims that although connections with AI-enabled technologies may have certain advantages, they pale in comparison to what is missing: the complete complexity and inherent contradictions that define what it is to be human.


A person's connection with an AI-enabled technology is not as intricate as one's interaction with other individuals.


Turkle claims that as individuals have become more used to and dependent on technology gadgets, the definition of friendship has evolved.


  • People's expectations for companionship have been simplified as a result of this transformation, and the benefits one hopes to obtain from relationships have been reduced.
  • People now tend to associate friendship only with the concept of interaction, ignoring the more nuanced sentiments and arguments that are typical in relationships.
  • By engaging with gadgets, one may form a relationship with them.
  • Conversations between humans have become merely transactional as human communication has shifted away from face-to-face conversation and toward interaction mediated by devices. 

In other words, the most that can be anticipated is engagement.



Turkle, who has a background in psychoanalysis, claims that this kind of transactional communication means users spend less time learning to view the world through the eyes of another person, a crucial ability for empathy.


Drawing these streams of argument together, Turkle argues that we are in a robotic period in which people yearn for, and in some circumstances prefer, AI-based robotic companionship over that of other humans.

For example, some people enjoy conversing with their iPhone's Siri virtual assistant because they aren't afraid of being judged by it, as evidenced by a series of Siri commercials featuring celebrities talking to their phones.

Turkle has a problem with this because these devices can only respond as if they understand what is being said.


In reality, AI-based gadgets are confined to the literal meaning of the data stored on the device.

They can decipher the contents of phone calendars and emails, but they have no idea what any of this data means to the user.

To an AI-based device, there is no discernible difference between a calendar appointment for car maintenance and one for chemotherapy.

A person may lose sight of what it is to have an authentic dialogue with another human when entangled in a variety of these robotic connections with a growing number of technologies.


While Reclaiming Conversation documents deteriorating conversation skills and decreasing empathy, it ultimately ends on a positive note.

Because people are becoming increasingly dissatisfied with their relationships, there may be a chance for face-to-face human communication to reclaim its vital role.


Turkle's ideas focus on reducing the amount of time people spend on their phones, but AI's involvement in this interaction is equally critical.


  • Users must accept that their virtual assistant connections will never be able to replace face-to-face interactions.
  • This will necessitate being more deliberate in how one uses devices, prioritizing in-person interactions over the faster and easier interactions provided by AI-enabled devices.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.





Artificial Intelligence - What Is The Loebner Prize For Chatbots? Who Is Lili Cheng?



A chatbot is a computer program that communicates with people using artificial intelligence. The conversations may use text or voice input.

In certain circumstances, chatbots are also intended to take automated actions in response to human input, such as running an application or sending an email.


Most chatbots try to mimic human conversational behavior; however, no chatbot has succeeded in doing so flawlessly so far.




Chatbots may serve a number of needs in a variety of circumstances.

Perhaps the most evident is the capacity to save people time and money by employing a computer program to gather or disseminate information rather than requiring a person to perform these duties.

For example, a corporation may develop a customer service chatbot that uses artificial intelligence to reply to client inquiries with information it deems relevant to the user's query.

In this fashion, the chatbot removes the need for a human operator to provide this sort of customer service.

Chatbots may also be useful in other situations since they give a more convenient means of interacting with a computer or software application.

A digital assistant chatbot, such as Apple's Siri or Google Assistant, for example, enables people to utilize voice input to get information (such as the address of a requested place) or conduct activities (such as sending a text message) on smartphones.

In cases where alternative input methods are cumbersome or unavailable, the ability to communicate with phones by speech, rather than typing information on the devices' displays, is helpful.


Consistency is a third benefit of chatbots.


Because most chatbots react to inquiries using preprogrammed algorithms and data sets, they will often respond with the same replies to the same questions.

Human operators cannot always be relied upon to act in the same manner; one person's response to a query may differ from another's, or the same person's replies may change from day to day.

In this way, chatbots can provide consistency in experience and information for the users with whom they communicate.

However, chatbots that employ neural networks or other self-learning techniques to answer inquiries may "evolve" over time, with the consequence that a query posed to a chatbot one day may receive a different response than the same question posed the next day.

However, just a handful of chatbots have been built to learn on their own thus far.

Some, such as Microsoft Tay, have proved to be ineffective.

Chatbots may be created in a number of ways and can be built in practically any programming language.

However, to fuel their conversational skills and automated decision-making, most chatbots depend on a basic set of capabilities.

Natural language processing, or the capacity to transform human words into data that software can use to make judgments, is one example.

Writing code that can process natural language is a difficult endeavor that involves knowledge of computer science and linguistics as well as significant programming effort.

It requires the capacity to comprehend text or speech from individuals who use a variety of vocabulary, sentence structures, and accents, and who may talk sarcastically or deceptively at times.

In the past, the challenge of creating good natural language processing engines made chatbots difficult and time-consuming to produce, because programmers had to design natural language processing software from scratch before building a chatbot.

Natural language processing programming frameworks and cloud-based services are now widely available, considerably lowering this barrier.

Modern programmers may either employ a cloud-based service like Amazon Comprehend or Azure Language Understanding to add the capability necessary to read human language, or they can simply import a natural language processing library into their apps.
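As a rough sketch of the library route described above, the snippet below uses the open-source spaCy library to turn a user utterance into structured data (tokens, lemmas, and named entities) that a chatbot could act on. The pipeline name and the sample sentence are illustrative assumptions, not details from the original text.

```python
# A minimal sketch (not from the article): using the open-source spaCy library
# to convert a user utterance into structured data a chatbot could reason over.
# Assumes the small English model has been installed first:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def analyze_utterance(text: str) -> dict:
    """Return tokens, lemmas, and named entities for one utterance."""
    doc = nlp(text)
    return {
        "tokens": [token.text for token in doc],
        "lemmas": [token.lemma_ for token in doc],
        "entities": [(ent.text, ent.label_) for ent in doc.ents],
    }

if __name__ == "__main__":
    print(analyze_utterance("What time does the Seattle store open on Friday?"))
```

A cloud service such as Amazon Comprehend or Azure Language Understanding would replace the local pipeline with an API call, but the overall flow is the same: raw text goes in, and structured attributes the chatbot can act on come out.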

Most chatbots also need a database of information to answer queries.

After using natural language processing to comprehend the meaning of the input, they analyze their own data sets to choose which information to provide or which action to take in response to the inquiry.

Most chatbots do this by matching phrases in queries to predefined tags in their internal databases, which is a very simple process.

More advanced chatbots, on the other hand, may be programmed to continuously adjust or expand their internal databases by evaluating how users have reacted to previous behavior.

For example, a chatbot may ask a user whether the answer it provided in response to a specific query was helpful, and if the user replies no, the chatbot would adjust its internal data to avoid repeating the response the next time a user asks a similar question.
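To make the tag-matching and feedback loop described above concrete, here is a minimal, self-contained sketch; the tags, keywords, canned responses, and scoring scheme are invented for illustration and do not come from any particular product.

```python
# Minimal sketch of a rule-based chatbot that matches query words to predefined
# tags and nudges future matches based on user feedback. All data is illustrative.
from collections import defaultdict

KNOWLEDGE_BASE = {
    "store_hours": {
        "keywords": {"hours", "open", "close", "closing"},
        "response": "Our stores are open 9 a.m. to 6 p.m., Monday through Saturday.",
    },
    "returns": {
        "keywords": {"return", "refund", "exchange"},
        "response": "Items can be returned within 30 days with a receipt.",
    },
}

feedback_score = defaultdict(int)  # per-tag adjustment learned from user feedback

def reply(user_text):
    """Pick the tag whose keywords best overlap the query, adjusted by feedback."""
    words = set(user_text.lower().replace("?", "").split())
    best_tag, best_score = None, 0
    for tag, entry in KNOWLEDGE_BASE.items():
        score = len(words & entry["keywords"]) + feedback_score[tag]
        if score > best_score:
            best_tag, best_score = tag, score
    if best_tag is None:
        return None, "Sorry, I don't know how to help with that yet."
    return best_tag, KNOWLEDGE_BASE[best_tag]["response"]

def record_feedback(tag, helpful):
    """Adjust future matching after asking the user whether the answer helped."""
    if tag is not None:
        feedback_score[tag] += 1 if helpful else -1

tag, answer = reply("When do you open on Saturday?")
print(answer)
record_feedback(tag, helpful=True)  # the user found the answer useful
```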



Although chatbots may be useful in a variety of settings, they are not without flaws and the potential for abuse.


One obvious flaw is that no chatbot has yet been proven to be capable of perfectly simulating human behavior, and chatbots can only perform tasks that they have been programmed to do.

They don't have the same aptitude as humans to "think outside the box" or solve issues imaginatively.

In many cases, people engaging with a chatbot may be looking for answers to queries that the chatbot was not designed to answer.


Chatbots raise certain ethical issues for similar reasons.


Chatbot critics have claimed that it is immoral for a computer program to replicate human behavior without revealing to individuals with whom it communicates that it is not a real person.

Some have also stated that chatbots may contribute to an epidemic of loneliness by replacing real human conversations with chatbot conversations that are less intellectually and socially gratifying for human users.

On the other hand, some chatbots, such as Replika, were designed with the express purpose of providing lonely people with an entity to communicate with when real people are unavailable.

Another issue with chatbots is that, like other software programs, they might be utilized in ways that their authors did not anticipate.

Misuse could occur as a result of software security flaws that allow malicious parties to gain control of a chatbot; for example, an attacker seeking to harm a company's reputation might try to compromise its customer-support chatbot in order to provide false or unhelpful support services.

In other circumstances, simple design flaws or oversights may result in chatbots acting unpredictably.

Microsoft learned this lesson when it debuted the Tay chatbot in 2016.

The Tay chatbot was meant to teach itself new replies based on past discussions.

When users engaged Tay in racist conversations, Tay began making public racist or inflammatory remarks of its own, prompting Microsoft to shut down the app.

The word "chatbot" was first used in the 1990s as an abbreviated version of chatterbot, a phrase invented in 1994 by computer scientist Michael Mauldin to describe a chatbot called Julia that he constructed in the early 1990s.


Chatbot-like computer programs, on the other hand, have been around for a long time.


The first was ELIZA, a computer program created by Joseph Weizenbaum at MIT's Artificial Intelligence Lab between 1964 and 1966.

Although the software was confined to just a few themes, ELIZA employed early natural language processing methods to participate in text-based discussions with human users.

Stanford psychiatrist Kenneth Colby produced a comparable chatbot called PARRY in 1972.

It wasn't until the 1990s, when natural language processing techniques had advanced, that chatbot development gained traction and programmers got closer to their goal of building chatbots that could participate in discussion on any subject.

A.L.I.C.E., a chatbot that debuted in 1995, and Jabberwacky, a chatbot created in the early 1980s and made accessible to users on the web in 1997, were both built with this goal in mind.

The second significant wave of chatbot invention occurred in the early 2010s, when increased smartphone usage fueled demand for digital assistant chatbots that could engage with people through voice interactions, beginning with Apple's Siri in 2011.


The Loebner Prize competition has served to measure the efficacy of chatbots in replicating human behavior throughout most of the history of chatbot development.


The Loebner Prize, which was established in 1990, is given to computer systems (including, but not limited to, chatbots) that judges believe demonstrate the most human-like behavior.

A.L.I.C.E., which won the award three times in the early 2000s, and Jabberwacky, which won twice, in 2005 and 2006, are two notable chatbots that have been entered in the Loebner Prize competition.


Lili Cheng




Lili Cheng is the Microsoft AI and Research division's Corporate Vice President and Distinguished Engineer.


She is in charge of the company's artificial intelligence platform's developer tools and services, which include cognitive services, intelligent software assistants and chatbots, as well as data analytics and deep learning tools.

Cheng has emphasized that AI solutions must gain the confidence of a larger segment of the community and secure users' privacy.

According to Cheng, her group is focusing on artificial intelligence bots and software apps that can hold human-like dialogues and interactions.


The ubiquity of social software—technology that lets people connect more effectively with one another—and the interoperability of software assistants, or AIs that chat to one another or pass tasks to one another, are two further ambitions.


Real-time language translation is one example of such an application.

Cheng is also a proponent of technical education and training for individuals, especially women, in order to prepare them for future careers (Davis 2018).

Cheng emphasizes the need of humanizing AI.

Rather than adapting human interactions to computer interactions, technology must adapt to people's working cycles.

Language recognition and conversational AI on their own, according to Cheng, are not sufficient technical advancements.

Human emotional needs must be addressed by AI.

One goal of AI research, she says, is to understand "the rational and surprising ways individuals behave." Cheng graduated from Cornell University with a bachelor's degree in architecture.

She started her work as an architect/urban designer at Nihon Sekkei International in Tokyo.

She also worked in Los Angeles for the architectural firm Skidmore Owings & Merrill.

Cheng opted to pursue a profession in information technology while residing in California.

She thought of architectural design as a well-established industry with well-defined norms and needs.

Cheng returned to school and graduated from New York University with a master's degree in Interactive Telecommunications, Computer Programming, and Design.

Her first position in this field was at Apple Computer in Cupertino, California, where she worked as a user experience researcher and designer for QuickTime VR and QuickTime Conferencing in the Advanced Technology Group-Human Interface Group.

In 1995, she joined Microsoft's Virtual Worlds Group, where she worked on the Virtual Worlds Platform and Microsoft V-Chat.

Kodu Game Lab, an environment targeted at teaching youngsters programming, was one of Cheng's efforts.

In 2001, she founded the Social Computing group with the goal of developing social networking prototypes.

She then worked at Microsoft Research-FUSE Labs as the General Manager of Windows User Experience for Windows Vista, eventually ascending to the post of Distinguished Engineer and General Manager.

Cheng has spoken at Harvard and New York Universities and is considered one of the country's top female engineers.

~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Cheng, Lili; ELIZA; Natural Language Processing and Speech Understanding; PARRY; Turing Test.





Artificial Intelligence - What Are Robot Caregivers?

 


Personal support robots, or caregiver robots, are meant to help individuals who, for a number of reasons, need assistive technology for long-term care, disability, or monitoring.

Although not widely used, caregiver robots are seen as useful in countries with rapidly rising older populations or in situations when a significant number of individuals are afflicted at the same time with a severe sickness.


Caregiver robots have elicited a wide variety of reactions, from terror to comfort.


Some ethicists have claimed that, in their attempts to eliminate the toil from caring rituals, robotics researchers misunderstand or underappreciate the role of compassionate caretakers.

The majority of caregiver robots are personal robots for use at home; however, some are used in institutions such as hospitals, nursing homes, and schools.

Some of them are geriatric care robots.

Others, dubbed "robot nannies," are meant to do childcare tasks.

Many have been dubbed "social robots." Interest in caregiving robots has risen in tandem with the world's aging population.

Japan has one of the largest percentages of elderly people in the world and is a pioneer in the creation of caregiver robots.

According to the United Nations, by 2050 one-third of the island nation's population will be 65 or older, far outstripping the available supply of nursing care workers.

The nation's Ministry of Health, Labor, and Welfare initiated a pilot demonstration project in 2013 to bring bionic nursing robots into eldercare facilities.

By 2050, the number of eligible retirees in the United States will have doubled, and those beyond the age of 85 will have tripled.

In the same year, there will be 1.5 billion persons over the age of 65 worldwide (United Nations 2019).

For a number of reasons, people are becoming more interested in caregiver robot technology.


The physical difficulties of caring for the elderly, infirm, and children are often mentioned as a driving force for the creation of assistive robots.


The caregiver position may be challenging, especially when the client has a severe or long-term illness such as Alzheimer's disease, dementia, or schizoid disorder.

Caregiver robots have also been proposed as a partial answer to families' economic distress.

Robots may one day be able to take the place of human relatives who must work.

They've also been suggested as a possible solution to nursing home and other care facility staffing shortages.

In addition to technological advancements, societal and cultural factors are driving the creation of caregiver robots.

In Japan, robot caregivers are favored over overseas health-care workers because of unfavorable attitudes toward outsiders.

The demand for independence and the dread of losing behavioral, emotional, and cognitive autonomy are often acknowledged by the elderly themselves.

In the literature, several robot caregiver functions have been recognized.

Some robots are thought to be capable of minimizing human carers' mundane work.

Others are better at more difficult jobs.

Intelligent service robots have been designed to help with feeding, cleaning of houses and bodies, and mobility support, all of which save time and effort (including lifting and turning).



Safety monitoring, data collecting, and surveillance are some of the other functions of these assistive technologies.


Clients with severe to profound impairments may benefit from robot carers for coaching and stimulation.

For patients who require frequent reminders to accomplish chores or take medication, these robots might be used as cognitive prostheses or mobile memory aids.

These caregiver robots may also include telemedicine capabilities, allowing them to call doctors or nurses for routine or emergency consultations.


Robot caretakers have been offered as a source of social connection and companionship, which has sparked debate.

Although some social robots have a human-like appearance, many are interactive smart toys or artificial pets.

In Japan, such robots are referred to as iyashi, a term that also refers to a style of anime and manga focused on emotional healing.

Japanese children and adults may choose from a broad range of soft-tronic robots as huggable companions.

Matsushita Electric Industrial (MEI) created Wandakun, a fluffy koala bear-like robot, in the 1990s.

When petted, the bear wiggled, sang, and responded to touch with a few Japanese sentences.


Babyloid is a plush mechanical baby beluga whale created by Masayoshi Kano at Chukyo University to help elderly patients with depression.


Babyloid is only seventeen inches long, yet its eyes blink and it "naps" when rocked.

When it is "glad," LED lights embedded in its cheeks glow.

When the robot is in a bad mood, it may also drop blue LED tears.

Babyloid can produce almost a hundred distinct noises.

It is hardly a toy, since each one costs more than $1,000.

Paro is a replica of an infant harp seal.

The National Institute of Advanced Industrial Science and Technology (AIST) in Japan developed Paro to provide comfort to individuals suffering from dementia, anxiety, or depression.

The eighth-generation Paro includes thirteen surface and whisker sensors, three microphones, two vision sensors, and seven actuators for the neck, fins, and eyelids.

When patients with dementia use Paro, the robot's developer, Takanori Shibata of AIST's Intelligent Systems Research Institute, reports that they experience less hostility and wandering, as well as increased social interaction.

In the United States, Paro is classified as a Class II medical device, which puts it in the same risk category as electric wheelchairs and X-ray machines.


Taizou, a twenty-eight-inch robot that can duplicate the motions of thirty different exercises, was developed by AIST.


In Japan, Taizou is utilized to encourage older adults to exercise and keep in shape.

Sony Corporation's well-known AIBO is a robotic therapy dog as well as a very expensive toy.

In 2018, Sony's Life Care Design division started introducing a new generation of dog robots into the company's retirement homes.

The humanoid QRIO robot, AIBO's successor, has been suggested as a platform for basic childcare activities including interactive games and sing-alongs.

Palro, a Fujisoft robot for eldercare treatment, is already in use in over 1,000 senior citizen facilities.

Since its original release in 2010, its artificial intelligence software has been modified multiple times.

Both are used to alleviate dementia symptoms and provide enjoyment.

Japanese firms have also promoted so-called partner-type personal robots to a broader segment of users.

These robots are designed to encourage human-machine connection and to alleviate feelings of loneliness and mild melancholy.


In the late 1990s, NEC Corporation started developing the adorable PaPeRo (Partner-Type Personal Robot).


PaPeRo communications robots have the ability to look, listen, communicate, and move in a variety of ways.

Current versions include twin camera eyes that can recognize faces and are intended to allow family members who live in different houses to keep an eye on one another.

PaPeRo's Childcare Version interacts with youngsters and serves as a temporary babysitter.

In 2005, Toyota debuted its humanoid Partner Robots family.

The company's robots are intended for a broad range of applications, including human assistance and rehabilitation, as well as socializing and innovation.


In 2012, Toyota expanded the Partner Robots line with a customized Human Support Robot (HSR).


HSR robots are designed to help older adults maintain their independence.

In Japan, prototypes are currently being used in eldercare facilities and handicapped people's homes.

HSR robots are capable of picking up and retrieving things as well as avoiding obstacles.

They may also be controlled remotely by a human caregiver and offer internet access and communication.

Japanese roboticists are likewise taking a more focused approach to automated caring.


The RI-MAN robot, developed by the RIKEN Collaboration Center for Human-Interactive Robot Research, is an autonomous humanoid patient-lifting robot.


The forearms, upper arms, and torso of the robot are covered with a soft silicone skin layer and are equipped with touch sensors for safe lifting.

RI-MAN has odor detectors and can follow human faces.

RIBA (Robot for Interactive Body Assistance) is a second-generation RIKEN lifting robot that securely moves patients from bed to wheelchair while responding to simple voice instructions.

In the RIBA-II, capacitance-type tactile sensors made entirely of rubber monitor patient weight.


RIKEN's current-generation hydraulic patient lift-and-transfer machine is called Robear.

The robot, which has the look of an anthropomorphic robotic bear, is lighter than its predecessors.

Toshiharu Mukai, a RIKEN lab leader, invented the lifting robots.


SECOM's MySpoon, Cyberdyne's Hybrid Assistive Limb (HAL), and Panasonic's Resyone robotic care bed are examples of narrower approaches to caregiver robots in Japan.

MySpoon is a meal-assistance robot that allows customers to feed themselves using a joystick as a replacement for a human arm and eating utensil.

People with physical limitations may wear the Cyberdyne Hybrid Assistive Limb (HAL), a powered robotic exoskeleton suit.

For patients who would ordinarily need daily lift help, the Panasonic Resyone robotic care bed merges bed and wheelchair.

Projects to develop caregiver robots are also ongoing in Australia and New Zealand.

The Australian Research Council's Centre of Excellence for Autonomous Systems (CAS) was established in the early 2000s as a collaboration between the University of Technology Sydney, the University of Sydney, and the University of New South Wales.

The center's mission was to better understand and develop robotics in order to promote the widespread and ubiquitous use of autonomous systems in society.

The work of CAS has now been separated and placed on an independent footing at the University of Technology Sydney's Centre for Autonomous Systems and the University of Sydney's Australian Centre for Field Robotics.

Bruce MacDonald of the University of Auckland is leading the creation of Healthbot, a socially assistive robot.

Healthbot is a mobile health robot that reminds seniors to take their medications, checks vital signs and monitors their physical condition, and calls for help in an emergency.

In the European Union, a number of caregiver robots are being developed.

The GiraffPlus (Giraff+) project, recently completed at Örebro University in Sweden, aimed to develop an intelligent system for monitoring the blood pressure, temperature, and movements of elderly individuals at home (to detect falls and other health emergencies).

Giraff may also be utilized as a telepresence robot for virtual visits with family members and health care providers.

The robot is roughly five and a half feet tall and has basic controls as well as a night-vision camera.


The European Mobiserv project's interdisciplinary, collaborative goal is to develop a robot that reminds elderly clients to take their medications, eat meals, and stay active.


Mobiserv is part of a smart home ecosystem that includes sensors, optical sensors, and other automated devices.

Mobiserv is a mobile application that works with smart clothing that collects health-related data.

Mobiserv is a collaboration between Systema Technologies and nine European partners that represent seven different nations.

The EU CompanionAble Project, which involves fifteen institutions and is led by the University of Reading, aims to develop a transportable robotic companion to illustrate the benefits of information and communication technology in aged care.

In the early stages of dementia, the CompanionAble robot tries to solve emergency and security issues, offer cognitive stimulation and reminders, and call human caregiver support.

In a smart home scenario, CompanionAble also interacts with a range of sensors and devices.

The QuoVADis Project at Broca Hospital in Paris, a public university hospital specializing in geriatrics, has a similar goal: to develop a robot for at-home care of cognitively impaired elderly persons.

The Fraunhofer Institute for Manufacturing Engineering and Automation is still designing and manufacturing Care-O-Bots, which are modular robots.

They are designed for hospitals, hotels, and nursing homes.

With its long arms and rotating, bending hip joint, the Care-O-Bot 4 service robot can reach from the floor to a shelf.

The robot is intended to be regarded as friendly, helpful, courteous, and intelligent.


ROBOSWARM and IWARD, intelligent and programmable hospital robot swarms developed by the European Union, provide a fresh approach.


ROBOSWARM is a distributed agent cleaning system for hospitals.

Cleaning, patient monitoring and guiding, environmental monitoring, medicine distribution, and patient surveillance are all covered by the more flexible IWARD.

Because the AI incorporated in these systems displays adaptive and self-organizing behavior, the multi-institutional partners determined that certifying that the robots would operate adequately under real-world conditions would be challenging.

They also discovered that onlookers sometimes questioned the robots' motions, asking whether they were doing the proper tasks.


The Ludwig humanoid robot, developed at the University of Toronto, is intended to assist caretakers in dealing with aging-related issues in their clients.


The robot converses with elderly people suffering from dementia or Alzheimer's disease.

Goldie Nejat, AGE-WELL investigator, Canada Research Chair in Robots for Society, and director of the University of Toronto's Institute for Robotics and Mechatronics, is employing robotics technology to assist individuals by guiding them through ordinary everyday chores.

Brian, the university's robot, is sociable and reacts to emotional human interaction.


HomeLab is creating assistive robots for use in health-care delivery at the Toronto Rehabilitation Institute (iDAPT), Canada's biggest academic rehabilitation research facility.


Ed the Robot, created by HomeLab, is a low-cost robot built using the iRobot Create toolset.

The robot, like Brian, is designed to remind dementia sufferers of the appropriate steps to take while doing everyday tasks.


In the United States, caregiver robot technology is also on the rise.

The Acrotek Actron MentorBot surveillance and security robot, which was created in the early 2000s, could follow a human client using visual and aural cues, offer food or medicine reminders, inform family members about concerns, and call emergency services.


Bandit is a socially supportive robot created by Maja Matarić of the Robotics and Autonomous Systems Center at the University of Southern California.


The robot is employed in therapeutic settings with patients who have had catastrophic injuries or strokes, as well as those who have aging disorders, autism, or who are obese.

Stroke sufferers react swiftly to imitation exercise movements produced by clever robots in rehabilitation sessions, according to the institute.

Robotic-assisted rehabilitative exercises were also effective in prompting and cueing tasks for youngsters with autism spectrum disorders.

Through the company Embodied, Inc., Matarić is currently attempting to bring affordable social robots to market.


Nursebots Flo and Pearl, assistive robots for the care of the elderly and infirm, were developed in collaboration between the University of Pittsburgh, Carnegie Mellon University, and the University of Michigan.


The National Science Foundation-funded Nursebot project created a platform for intelligent reminders, telepresence, data gathering and monitoring, mobile manipulation, and social engagement.

Today, Carnegie Mellon is home to the Quality of Life Technology (QoLT) Center, a National Science Foundation Engineering Research Center (ERC) whose objective is to use intelligent technologies to promote independence and improve the functional capabilities of the elderly and handicapped.

The transdisciplinary AgeLab at the Massachusetts Institute of Technology was founded in 1999 to aid in the development of marketable ideas and assistive technology for the aged.

Joe Coughlin, the founder and director of AgeLab, has concentrated on developing the technological requirements for conversational robots for senior care that have the difficult-to-define attribute of likeability.

Walter Dan Stiehl and associates at the MIT Media Lab created the Huggable™ teddy bear robotic companion.

A video camera eye, 1,500 sensors, silent actuators, an inertial measurement unit, a speaker, and an internal personal computer with wireless networking capabilities are all included in the bear.

Virtual agents are used in other forms of caregiving technology.

These agents are sometimes referred to as softbots.

The MIT Media Lab's CASPER affect management agent, created by Jonathan Klein, Youngme Moon, and Rosalind Picard in the early 2000s, is an example of a virtual agent designed to relieve unpleasant emotional states, notably impatience.

To reply to a user who is sharing their ideas and emotions with the computer, the human-computer interaction (HCI) agent employs text-only social-affective feedback mechanisms.



The MIT FITrack exercise advisor agent uses a browser-based client with a relational database and text-to-speech engine on the backend.



The goal of FITrack is to create an interactive simulation of a professional fitness trainer called Laura working with a client.

Amanda Sharkey and Noel Sharkey, computer scientists at the University of Sheffield, are often mentioned in studies on the ethics of caregiver robot technology.

The Sharkeys are concerned about robotic carers and the loss of human dignity they may cause.

They claim that such technology has both advantages and disadvantages.

On the one hand, care provider robots have the potential to broaden the variety of options accessible to graying populations, and these features of technology should be promoted.

The technologies, on the other hand, might be used to mislead or deceive society's most vulnerable people, or to further isolate the elderly from frequent companionship and social engagement.

The Sharkeys point out that robotic caretakers may someday outperform humans in certain areas, such as when speed, power, or accuracy are required.


Robots might be trained to avoid or lessen eldercare abuse, impatience, or ineptitude, all of which are typical complaints among the elderly.


Indeed, if societal institutions for caregiver assistance are weak or defective, an ethical obligation to utilize caregiver robots may apply.

Robots, on the other hand, cannot comprehend complicated human constructs like loyalty or adapt perfectly to the delicate, tailored demands of specific consumers.

"The old may find themselves in a barren world of machines, a world of automated care: a factory for the aged," the Sharkeys wrote if they don't plan ahead (Sharkey and Sharkey 2012, 282).

In her groundbreaking book Alone Together: Why We Expect More from Technology and Less from Each Other (2011), Sherry Turkle devotes a chapter to caregiver robots.

She points out that researchers in robotics and artificial intelligence are driven by the desire to make the elderly feel wanted through their work, assuming that older people are often lonely or abandoned.

In aging populations, it is true that attention and labor are in short supply.


Robots are used as a kind of entertainment.


They make everyday living and household routines easier and safer.

Turkle admits that robots never get tired and can even function from a neutral stance in customer interactions.

Humans, on the other hand, can have reasons that go against even the most basic or traditional norms of caring.


"One may argue that individuals can act as though they care," Turkle observes.

"A robot is unconcerned. As a result, a robot cannot act since it can only act" (Turkle 2011, 124).


Turkle, however, remains a sharp critic of caregiving technology.

Most importantly, caring behavior is often confused with caring feelings.

In her opinion, interactions between people and robots do not constitute true dialogues.

They may even cause consternation among vulnerable and reliant groups.

The risk of privacy invasion from caregiver robot monitoring is significant, and automated help might potentially sabotage human experience and memory development.


The emergence of a generation of older folks and youngsters who prefer machines to intimate human ties poses a significant threat.


Several philosophers and ethicists have weighed in on appropriate behaviors and manufactured compassion.

According to Sparrow and Sparrow (2006), human touch is very important in healing rituals, robots may increase patients' sense of lost control, and robot caring is false caregiving since robots are incapable of genuine concern.

Borenstein and Pearson (2011) and Van Wynsberghe (2013) believe that caregiver robots infringe on human dignity and senior rights, impeding freedom of choice.

Van Wynsberghe, in particular, advocates for value-sensitive robot designs that align with the ethic of care developed by University of Minnesota professor Joan Tronto, which includes attentiveness, responsibility, competence, and reciprocity, as well as broader concerns for respect, trust, empathy, and compassion.

Vallor (2011) challenged the underlying assumptions of robot care by questioning the premise that caring for others is only a problem or a burden.

It's possible that excellent care is tailored to the individual, something that personable but mass-produced robots could fail to provide.


Robot caregiving will very likely be frowned upon by many faiths and cultures.


By providing incorrect and unsuitable social connections, caregiver robots may potentially cause reactive attachment disorder in children.

The International Organization for Standardization (ISO) has defined rules for the creation of personal robots, but who is to blame when a robot's care is negligent? The courts are undecided, and robot caregiver legislation is still in its early stages.

According to Sharkey and Sharkey (2010), caregiver robots might be held accountable for breaches of privacy, injury caused by unlawful restraint, deceptive practices, psychological harm, and accountability failures.

Future robot ethical frameworks must prioritize the needs of patients above the wishes of caretakers.

In interviews with the elderly, Wu et al. (2010) discovered six themes connected to patient requirements.

Thirty people in their sixties and seventies agreed that assistive technology should initially aid them with simple, daily chores.

Other important needs included maintaining good health, stimulating memory and concentration, living alone "for as long as I wish without worrying my family circle" (Wu et al. 2010, 36), maintaining curiosity and growing interest in new activities, and communicating with relatives on a regular basis.


In popular culture, robot maids, nannies, and caregiver technologies are all prominent clichés.


Several early instances may be seen in the television series The Twilight Zone.

In "The Lateness of the Hour," a man develops a whole family of robot slaves (1960).

In "I Sing the Body Electric," Grandma is a robot babysitter (1962).


Rosie, the robotic maid from the animated television series The Jetsons (1962–1963), is a notable character.

Caregiver robots are a central narrative component in the animated movies WALL-E (2008) and Big Hero 6 (2014), as well as in the science fiction thriller I Am Mother (2019).

They're also commonly seen in manga and anime.

Roujin Z (1991), Kurogane Communication (1997), and The Umbrella Academy (2019) are just a few examples.


Jake Schreier's 2012 science fiction film Robot & Frank dramatizes the limits and potential of caregiver robot technology.

In the film, a gruff former jewel thief with deteriorating mental health seeks to make his robotic sidekick into a criminal accomplice.

The film delves into a number of ethical concerns including not just the care of the elderly, but also the rights of robots in slavery.

"We are psychologically evolved not merely to nurture what we love, but to love what we nurture," says MIT social scientist Sherry Turkle (Turkle 2011, 11).


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 


Ishiguro, Hiroshi; Robot Ethics; Turkle, Sherry.


Further Reading


Borenstein, Jason, and Yvette Pearson. 2011. “Robot Caregivers: Ethical Issues across the Human Lifespan.” In Robot Ethics: The Ethical and Social Implications of Robotics, edited by Patrick Lin, Keith Abney, and George A. Bekey, 251–65. Cambridge, MA: MIT Press.

Sharkey, Noel, and Amanda Sharkey. 2010. “The Crying Shame of Robot Nannies: An Ethical Appraisal.” Interaction Studies 11, no. 2 (January): 161–90.

Sharkey, Noel, and Amanda Sharkey. 2012. “The Eldercare Factory.” Gerontology 58, no. 3: 282–88.

Sparrow, Robert, and Linda Sparrow. 2006. “In the Hands of Machines? The Future of Aged Care.” Minds and Machines 16, no. 2 (May): 141–61.

Turkle, Sherry. 2011. Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books.

United Nations. 2019. World Population Ageing Highlights. New York: Department of Economic and Social Affairs, Population Division.

Vallor, Shannon. 2011. “Carebots and Caregivers: Sustaining the Ethical Ideal of Care in the Twenty-First Century.” Philosophy & Technology 24, no. 3 (September): 251–68.

Van Wynsberghe, Aimee. 2013. “Designing Robots for Care: Care Centered Value Sensitive Design.” Science and Engineering Ethics 19, no. 2 (June): 407–33.

Wu, Ya-Huei, Véronique Faucounau, Mélodie Boulay, Marina Maestrutti, and Anne Sophie Rigaud. 2010. “Robotic Agents for Supporting Community-Dwelling Elderly People with Memory Complaints: Perceived Needs and Preferences.” Health Informatics Journal 17, no. 1: 33–40.

