
Artificial Intelligence - Who Is Heather Knight?




Heather Knight is a robotics and artificial intelligence specialist best recognized for her work in the entertainment industry.

Her Collaborative Humans and Robots: Interaction, Sociability, Machine Learning, and Art (CHARISMA) Research Lab at Oregon State University aims to apply performing arts techniques to robots.

Knight identifies herself as a social roboticist, a person who develops non-anthropomorphic—and sometimes nonverbal—machines that interact with people.

She makes robots that act in ways that are modeled after human interpersonal communication.

These behaviors include speaking styles, greeting movements, open attitudes, and a variety of other context indicators that assist humans in establishing rapport with robots in ordinary life.

In the CHARISMA Lab, Knight works with social robots and so-called charismatic machines and examines social and political policies relating to robotics.

The Marilyn Monrobot interactive robot theatre company was founded by Knight.

The Robot Film Festival provides a venue for roboticists to demonstrate their latest inventions in a live setting, as well as films that are relevant to the evolving state of the art in robotics and robot-human interaction.

The Marilyn Monrobot company arose from Knight's involvement with the Syyn Labs creative collective and her observations of Guy Hoffman's work on robots built for performance.

Knight's production company specializes in robot comedy.

Knight claims that theatrical spaces are ideal for social robotics research because they not only encourage playfulness—requiring robot actors to express themselves and interact—but also include creative constraints that robots thrive in, such as a fixed stage, trial-and-error learning, and repeat performances (with manipulated variations).

The use of robots in entertainment settings, according to Knight, is beneficial because it enriches human culture, imagination, and creativity.

At the TEDWomen conference in 2010, Knight debuted Data, a stand-up comedy robot.

Data is a Nao robot created by Aldebaran Robotics (now part of SoftBank Group).

Data performs a stand-up routine (drawing on roughly 200 pre-programmed jokes) while gathering input from the audience and fine-tuning its act in real time.

The robot was programmed at Carnegie Mellon University by Scott Satkin and Varun Ramakrishna.

Knight is presently collaborating with Ginger the Robot on a comedic project.

The development of algorithms for artificial social intelligence is also fueled by robot entertainment.

In other words, art is utilized to motivate the development of new technologies.

To evaluate audience responses, Data and Ginger use a microphone and a machine learning system to classify the noises the audience makes (laughter, chatter, clapping, etc.).

After each joke, the audience is given green and red cards to hold up.

Green cards indicate to the robots that the audience enjoys the joke.

Audience members hold up red cards when jokes fall flat.
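
A minimal sketch of how such an audience-feedback loop might work in code, assuming a simple running-score update; the joke titles, weights, and update rule here are invented for illustration and are not Knight's actual implementation.

```python
import random

# Hypothetical feedback loop: each joke keeps a running score, and jokes that
# earn green cards or laughter are favored in later sets.

class JokeSelector:
    def __init__(self, jokes):
        # Start every joke with a neutral score.
        self.scores = {joke: 0.0 for joke in jokes}

    def next_joke(self, explore_rate=0.2):
        # Mostly pick the best-scoring joke, but sometimes explore.
        if random.random() < explore_rate:
            return random.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)

    def record_feedback(self, joke, green_cards, red_cards, laughter_level):
        # Combine card counts and a 0-1 laughter estimate into one update.
        reward = green_cards - red_cards + laughter_level
        self.scores[joke] += 0.1 * (reward - self.scores[joke])

selector = JokeSelector(["joke about batteries", "joke about Roombas"])
joke = selector.next_joke()
selector.record_feedback(joke, green_cards=12, red_cards=3, laughter_level=0.7)
print("next up:", selector.next_joke())
```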

Knight has found that good robot comedy does not have to disguise the fact that the comedian is a robot.

Rather, Data makes people laugh by drawing attention to its machine-specific issues and making self-deprecating remarks about its limits.

In order to create expressive, captivating robots, Knight has found improvisational acting and dancing skills to be quite useful.

In the process, she has moved away from the classic robotic paradigm of Sense-Plan-Act, preferring a Sensing-Character-Enactment cycle that more closely resembles the process used in theatrical performance.
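
The contrast between the two control loops can be sketched as follows; the classes and behaviors are stand-ins invented for illustration, not Knight's actual architecture.

```python
# Illustrative contrast between the classic Sense-Plan-Act loop and a
# Sensing-Character-Enactment loop. Everything here is a stub.

class TheatricalRobot:
    def sense(self):
        return {"person_nearby": True}   # stand-in for real sensor data

    def plan(self, percepts):
        return ["approach", "greet"]      # stand-in for a task planner

    def act(self, plan):
        print("executing plan:", plan)

    def enact(self, behavior):
        print("performing behavior:", behavior)


def choose_expression(character, percepts):
    # A persistent "character" biases how the same percept is expressed on stage.
    if percepts["person_nearby"]:
        return "shy wave" if character == "timid" else "grand bow"
    return "idle sway"


robot = TheatricalRobot()

# Sense-Plan-Act: sense, compute a plan, execute it.
robot.act(robot.plan(robot.sense()))

# Sensing-Character-Enactment: sense, filter through a character, perform.
robot.enact(choose_expression("timid", robot.sense()))
```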

Knight is presently experimenting with ChairBots, which are hybrid robots made by gluing IKEA wooden chairs to Neato Botvacs (a brand of intelligent robotic vacuum cleaner).

The ChairBots are being tested in public places to see how a basic robot might persuade people to get out of the way using just rudimentary gestures as a mode of communication.

They've also been used to persuade prospective café customers to come in, locate a seat, and settle down.

Knight collaborated on the synthetic organic robot art piece Public Anemone for the SIGGRAPH computer graphics conference while studying at the MIT Media Lab under Personal Robots group head Professor Cynthia Breazeal.

The installation consisted of a fiberglass cave filled with glowing creatures that moved and responded to music and people.

The cave's centerpiece robot, also known as "Public Anemone," swayed and responded to visitors, bathed in a waterfall, watered a plant, and interacted with other cave attractions.

Knight collaborated with animatronics designer Dan Stiehl to create capacitive sensor-equipped artificial tube worms.

The tube worms' fiber-optic tentacles withdrew into their tubes and changed color when a human observer reached into the cave, as though prompted by protective impulses.

The team behind Public Anemone defined the initiative as "a step toward fully embodied robot theatrical performance" and "an example of intelligent staging." Knight also helped with the mechanical design of the Smithsonian/Cooper-Hewitt Design Museum's "Cyberflora" kinetic robot flower garden display in 2003.

Her master's thesis at MIT focused on the Sensate Bear, a huggable robot teddy bear with full-body capacitive touch sensors that she used to investigate real-time algorithms incorporating social touch and nonverbal communication.

In 2016, Knight received her PhD from Carnegie Mellon University.

Her dissertation focused on expressive motion in robots with low degrees of freedom.

According to Knight's research, robots do not need to closely resemble humans in appearance or behavior for people to treat them as close associates.

In fact, humans are quick to anthropomorphize robots and to grant them autonomy.

Indeed, she claims, when robots become more human-like in appearance, people may feel uneasy or anticipate a far higher level of humanlike conduct.

Knight was advised by Professor Matt Mason of Carnegie Mellon's School of Computer Science and Robotics Institute.

She was formerly a robot artist in residence at X, the research lab of Google's parent company Alphabet.

Knight has previously worked with Aldebaran Robotics and NASA's Jet Propulsion Laboratory as a research scientist and engineer.

While working as an engineer at Aldebaran Robotics, Knight created the touch sensing panel for the Nao autonomous family companion robot, as well as the infrared detection and emission capabilities in its eyes.

Her work with Syyn Labs on the opening two minutes of the OK Go video "This Too Shall Pass," which features a Rube Goldberg machine, earned a UK Music Video Award.

She is now assisting Clearpath Robotics in making its self-driving, mobile-transport robots more socially conscious. 





Jai Krishna Ponnappan





See also: 


RoboThespian; Turkle, Sherry.


Further Reading:



Biever, Celeste. 2010. “Wherefore Art Thou, Robot?” New Scientist 208, no. 2792: 50–52.

Breazeal, Cynthia, Andrew Brooks, Jesse Gray, Matt Hancher, Cory Kidd, John McBean, Dan Stiehl, and Joshua Strickon. 2003. “Interactive Robot Theatre.” Communications of the ACM 46, no. 7: 76–84.

Knight, Heather. 2013. “Social Robots: Our Charismatic Friends in an Automated Future.” Wired UK, April 2, 2013. https://www.wired.co.uk/article/the-inventor.

Knight, Heather. 2014. How Humans Respond to Robots: Building Public Policy through Good Design. Washington, DC: Brookings Institution, Center for Technology Innovation.



Artificial Intelligence - Climate Change Crisis And AI.

 




Artificial intelligence is a double-edged sword when it comes to climate change and the environment.


Scientists are using artificial intelligence to detect, adapt to, and react to ecological concerns.

At the same time, these technologies expose civilization to new environmental hazards and vulnerabilities.

Much has been written on the importance of information technology in green economy solutions.

Data from natural and urban ecosystems is collected and analyzed using intelligent sensing systems and environmental information systems.

Machine learning is being applied in the development of sustainable infrastructure, citizen detection of environmental perturbations and deterioration, contamination detection and remediation, and the redefining of consumption habits and resource recycling.



Planet hacking is a term used to describe such operations.


Precision farming is one example of planet hacking.

Artificial intelligence is used in precision farming to diagnose plant diseases and pest damage, as well as to detect soil nutrition problems.

Agricultural yields are increased while water, fertilizer, and chemical pesticides are used more efficiently thanks to sensor technology directed by AI.

Controlled farming approaches offer more environmentally friendly land management and (perhaps) biodiversity conservation.
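
As a rough illustration of the kind of rule an AI-guided field sensor might apply, the sketch below computes the standard NDVI vegetation index and issues a zone-level recommendation; the thresholds and readings are invented for this example.

```python
# Illustrative precision-farming rule. Real systems calibrate thresholds
# per crop and per field; these numbers are placeholders.

def ndvi(nir, red):
    # Normalized Difference Vegetation Index, a standard plant-health measure.
    return (nir - red) / (nir + red)

def recommend(ndvi_value, soil_nitrogen_ppm):
    if ndvi_value < 0.3:
        return "stressed vegetation: inspect for disease or pests"
    if soil_nitrogen_ppm < 20:
        return "low nitrogen: apply targeted fertilizer to this zone only"
    return "no action: withhold water, fertilizer, and pesticide"

reading = ndvi(nir=0.62, red=0.48)
print(round(reading, 2), "->", recommend(reading, soil_nitrogen_ppm=35))
```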

Another example is IBM Research's collaboration with the Chinese government to minimize pollution in the nation via the Green Horizons program.

Green Horizons is a ten-year effort that began in July 2014 with the goal of improving air quality, promoting renewable energy integration, and promoting industrial energy efficiency.

To provide air quality reports and track pollution back to its source, IBM is using cognitive computing, decision support technologies, and sophisticated sensors.

Green Horizons has grown to include global initiatives such as collaborations with Delhi, India, to link traffic congestion patterns with air pollution; Johannesburg, South Africa, to fulfill air quality objectives; and British wind farms, to estimate turbine performance and electricity output.

According to researchers at the National Renewable Energy Laboratory and the University of Maryland, AI-enabled automobiles and trucks are predicted to save a significant amount of gasoline, perhaps on the order of 15% less fuel use.


Smart cars eliminate inefficient combustion caused by stop-and-go and speed-up and slow-down driving behavior, resulting in increased fuel efficiency (Brown et al. 2014).


Intelligent driver input is merely the first step toward a more environmentally friendly automobile.

According to the Society of Automotive Engineers and the National Renewable Energy Laboratory, connected automobiles equipped with vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication might save up to 30% on gasoline (Gonder et al. 2012).

Smart trucks and robotic taxis will be platooned together to conserve fuel and minimize carbon emissions.

Environmental robots (ecobots) are projected to make significant advancements in risk monitoring, management, and mitigation.

At nuclear power plants, service robots are in use.

Two iRobot PackBots were sent to Japan's Fukushima nuclear power plant to measure radioactivity.

Treebot is a dexterous tree-climbing robot that is meant to monitor arboreal environments that are too difficult for people to access.

The Guardian, a robot created by the same person who invented the Roomba, is being developed to hunt down and remove invasive lionfish that endanger coral reefs.

A similar service is being provided by the COTSbot, which employs visual recognition technology to cull crown-of-thorns starfish.

Artificial intelligence is assisting in the discovery of a wide range of human civilization's effects on the natural environment.

Cornell University's highly multidisciplinary Institute for Computational Sustainability brings together professional scientists and citizens to apply new computing techniques to large-scale environmental, social, and economic issues.

Birders are partnering with the Cornell Lab of Ornithology to submit millions of observations of bird species throughout North America, to provide just one example.

An app named eBird is used to record the observations.

To monitor migratory patterns and anticipate bird population levels across time and space, computational sustainability approaches are applied.

Wildbook, iNaturalist, Cicada Hunt, and iBats are some of the other crowdsourced nature observation apps.

Several applications are linked to open-access databases and big data initiatives, such as the Global Biodiversity Information Facility, which will include 1.4 billion searchable entries by 2020.
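
A hedged example of querying such an open-access database: the snippet below calls what the author understands to be GBIF's public occurrence-search endpoint (see gbif.org/developer for the current interface); the species name and the fields printed are illustrative choices.

```python
import requests

# Query the Global Biodiversity Information Facility's v1 occurrence API
# for records of the Baltimore oriole. Endpoint and parameters reflect
# GBIF's documented public API as understood at the time of writing.

resp = requests.get(
    "https://api.gbif.org/v1/occurrence/search",
    params={"scientificName": "Icterus galbula", "limit": 5},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()

print("total matching records:", data["count"])
for record in data["results"]:
    print(record.get("country"), record.get("year"), record.get("species"))
```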


By modeling future climate change, artificial intelligence is also being used to help human populations understand and begin dealing with environmental issues.

A multidisciplinary team from the Montreal Institute for Learning Algorithms, Microsoft Research, and ConscientAI Labs is using street view imagery of extreme weather events and generative adversarial networks—in which two neural networks are pitted against one another—to create realistic images depicting the effects of bushfires and sea level rise on actual neighborhoods.

Human behavior and lifestyle changes may be influenced by emotional reactions to photos.
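
For readers unfamiliar with the adversarial setup mentioned above, the following is a generic toy GAN training loop on random stand-in data; it illustrates only the two-network idea and is not the Montreal/Microsoft/ConscientAI climate-visualization model.

```python
import torch
import torch.nn as nn

latent_dim, image_dim = 16, 64   # tiny sizes so the example runs instantly

# Generator maps noise to fake "images"; discriminator scores real vs. fake.
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                          nn.Linear(128, image_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(image_dim, 128), nn.LeakyReLU(0.2),
                              nn.Linear(128, 1))

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(32, image_dim)   # stand-in for real training data

for step in range(100):
    # 1. Train the discriminator to tell real samples from generated ones.
    fake_images = generator(torch.randn(32, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(32, 1)) +
              loss_fn(discriminator(fake_images), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator to fool the discriminator.
    fake_images = generator(torch.randn(32, latent_dim))
    g_loss = loss_fn(discriminator(fake_images), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print("final losses:", d_loss.item(), g_loss.item())
```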

Virtual reality simulations of contaminated ocean ecosystems are being developed by Stanford's Virtual Human Interaction Lab in order to increase human empathy and modify behavior in coastal communities.


Information technology and artificial intelligence, on the other hand, play a role in the climate catastrophe.


The pollution created by the production of electronic equipment and software is one of the most pressing concerns.

These are often seen as clean industries; however, they frequently use harsh chemicals and hazardous materials.

With twenty-three active Superfund sites, California's Silicon Valley is one of the most contaminated areas in the country.

Many of these hazardous waste sites were created by computer component makers.

Trichloroethylene, a solvent used in semiconductor cleaning, is one of the most common soil pollutants.

Information technology also consumes a great deal of energy and contributes substantially to greenhouse gas emissions.

Solar power and battery storage are increasingly being used to run cloud computing data centers.


In recent years, a number of cloud computing facilities have been developed around the Arctic Circle to take advantage of the natural cooling provided by the cold air and ocean.


The so-called Node Pole, situated in Sweden's northernmost county, is a favored location for such building.

In 2020, a data center project in Reykjavik, Iceland, will run entirely on renewable geothermal and hydroelectric energy.

Recycling is also a huge concern, since life cycle engineering is just now starting to address the challenges of producing environmentally friendly computers.

Because toxic electronic waste is difficult to dispose of in the United States, a considerable portion of all e-waste is shipped to Asia and Africa.

Every year, some 50 million tons of e-waste are produced throughout the globe (United Nations 2019).

Jack Ma of the international e-commerce company Alibaba claimed at the World Economic Forum annual gathering in Davos, Switzerland, that artificial intelligence and big data were making the world unstable and endangering human life.

The carbon footprint of artificial intelligence research is only now being quantified with any accuracy.

While Microsoft and PricewaterhouseCoopers reported that artificial intelligence could reduce carbon dioxide emissions by 2.4 gigatonnes by 2030 (the combined emissions of Japan, Canada, and Australia), researchers at the University of Massachusetts, Amherst discovered that training a model for natural language processing can emit the equivalent of 626,000 pounds of greenhouse gases.

This is over five times the carbon emissions produced by a typical automobile throughout the course of its lifespan, including original production.

Artificial intelligence has a massive influence on energy usage and carbon emissions right now, especially when models are tweaked via a technique called neural architecture search (Strubell et al. 2019).
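
A back-of-the-envelope check of the comparison above, assuming the figures reported by Strubell et al. (2019): roughly 626,000 pounds of CO2-equivalent for the largest training run they analyzed, against their reference figure of about 126,000 pounds for an average car's lifetime, manufacturing included.

```python
# Rough arithmetic behind the "over five times a car's lifetime" claim.
# Both figures are taken from Strubell et al. (2019) as cited above.

nlp_training_lbs = 626_000   # training with neural architecture search
car_lifetime_lbs = 126_000   # average car, fuel and manufacture included

print(f"ratio: {nlp_training_lbs / car_lifetime_lbs:.1f}x")   # about 5.0x
```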

It is unclear whether next-generation technologies such as quantum artificial intelligence, new chipset designs, and specialized machine intelligence processors (such as neuromorphic circuits) will lessen AI's environmental impact.


Artificial intelligence is also being used to extract additional oil and gas from underground more efficiently.


Oilfield services are becoming more automated, and businesses like Google and Microsoft are opening offices and divisions to cater to them.

Since the 1990s, Total S.A., a French multinational oil firm, has used artificial intelligence to enhance production and understand subsurface data.

Total partnered with Google Cloud Advanced Solutions Lab professionals in 2018 to apply modern machine learning techniques to technical data analysis problems in the exploration and production of fossil fuels.

Every geoscience engineer at the oil company will have access to an AI intelligent assistant, according to Google.

With artificial intelligence, Google is also assisting Anadarko Petroleum (bought by Occidental Petroleum in 2019) in analyzing seismic data to discover oil deposits, enhance production, and improve efficiency.


Working in the emerging subject of evolutionary robotics, computer scientists Joel Lehman and Risto Miikkulainen claim that in the case of a future extinction catastrophe, superintelligent robots and artificial life may swiftly breed and push out humans.


In other words, robots may enter the continuing war between plants and animals.

To investigate evolvability in artificial and biological populations, Lehman and Miikkulainen created computer models to replicate extinction events.

The study is mostly theoretical, but it may assist engineers comprehend how extinction events could impact their work; how the rules of variation apply to evolutionary algorithms, artificial neural networks, and virtual organisms; and how coevolution and evolvability function in ecosystems.

As a result of such conjecture, Emerj Artificial Intelligence Research's Daniel Faggella notably questioned if the "environment matter[s] after the Singularity" (Faggella 2019).

Ian McDonald's River of Gods (2004) is a notable science fiction novel about climate change and artificial intelligence.

The book's events take place in 2047 in the Indian subcontinent.

Steven Spielberg's A.I. Artificial Intelligence (2001) is set on a twenty-second-century Earth plagued by global warming and rising sea levels.

Humanoid robots are seen as important to the economy since they do not deplete limited resources.

Transcendence, a 2014 science fiction film starring Johnny Depp as an artificial intelligence researcher, portrays the cataclysmic danger of sentient computers as well as its unclear environmental effects.



~ Jai Krishna Ponnappan



See also: 


Chatbots and Loebner Prize; Gender and AI; Mobile Recommendation Assistants; Natural Language Processing and Speech Understanding.





Artificial Intelligence - Who Is Sherry Turkle?

 


 

 

Sherry Turkle (1948–) has a background in sociology and psychology, and her work focuses on human-technology interaction.

While her research in the 1980s focused on how technology affects people's thinking, her work since the 2000s has become more critical of how technology is used at the expense of building and maintaining meaningful interpersonal connections.



She has used artificial intelligence products, such as children's toys and robotic pets for the elderly, to highlight what people miss out on when they interact with such things.


Turkle has been at the vanguard of AI breakthroughs as a professor at the Massachusetts Institute of Technology (MIT) and the founder of the MIT Initiative on Technology and Self.

In Life on the Screen: Identity in the Age of the Internet (1995), she highlights a conceptual change in the understanding of AI that occurred between the 1960s and the 1980s, substantially changing the way humans connect to and interact with AI.



She claims that early AI paradigms depended on extensive preprogramming and employed a rule-based concept of intelligence.


However, this viewpoint has given place to one that considers intelligence to be emergent.

This emergent paradigm, which became the recognized mainstream view by 1990, claims that AI arises from a much simpler set of learning algorithms.

The emergent method, according to Turkle, aims to emulate the way the human brain functions, assisting in the breaking down of barriers between computers and nature, and more generally between the natural and the artificial.

In summary, an emergent approach to AI allows people to connect to the technology more easily, even thinking of AI-based programs and gadgets as children.



The rising acceptance of the emergent paradigm of AI, and the enhanced relatability it heralds, represents a significant turning point not just for the field of AI but also for Turkle's study of and writing on the subject.


Turkle started to employ ethnographic research techniques to study the relationship between humans and their gadgets in two edited collections, Evocative Objects: Things We Think With (2007) and The Inner History of Devices (2008).


She emphasized in The Inner History of Devices that her intimate ethnography, or the ability to "listen with a third ear," is required to get past the advertising-based clichés that are often employed when discussing technology.


This method comprises setting up time for silent meditation so that participants may think thoroughly about their interactions with their equipment.


Turkle used similar intimate ethnographic approaches in her second major book, Alone Together: Why We Expect More from Technology and Less from Each Other (2011), to argue that the increasing connection between people and the technology they use is harmful.

These issues are connected to the increased usage of social media as a form of communication, as well as the continuous degree of familiarity and relatability with technology gadgets, which stems from the emerging AI paradigm that has become practically omnipresent.

She traced the origins of the dilemma back to early pioneers in the field of cybernetics, citing, for example, Norbert Wiener's speculations on the idea of transmitting a human person across a telegraph line in his book God & Golem, Inc. (1964).

Because it reduces both people and technology to information, this approach to cybernetic thinking blurs the barriers between them.



In terms of AI, this implies that it doesn't matter whether the machines with which we interact are really intelligent.


Turkle claims that by engaging with and caring for these technologies, we may deceive ourselves into feeling we are in a relationship, causing us to treat them as if they were sentient.

In a 2006 presentation titled "Artificial Intelligence at 50: From Building Intelligence to Nurturing Sociabilities" at the Dartmouth Artificial Intelligence Conference, she recognized this trend.

She identified the 1997 Tamagotchi, 1998 Furby, and 2000 My Real Baby as early versions of what she refers to as relational artifacts, which are more broadly referred to as social machines in the literature.

The main difference between these devices and previous children's toys is that these devices come pre-animated and ready for a relationship, whereas previous children's toys required children to project a relationship onto them.

Turkle argues that this change is about our human weaknesses as much as it is about computer capabilities.

In other words, just caring for an item increases the likelihood of not only seeing it as intelligent but also feeling a connection to it.

This sense of connection is more relevant to the typical person engaging with these technologies than abstract philosophic considerations concerning the nature of their intelligence.



Turkle delves more into the ramifications of people engaging with AI-based technologies in both Alone Together and Reclaiming Conversation: The Power of Talk in a Digital Age (2015).


In Alone Together, she gives the example of Adam, who enjoys the admiration of the AI bots he commands in the game Civilization.

Adam values being able to create something new while playing.

Turkle, on the other hand, is skeptical of this interaction, stating that Adam's playing isn't actual creation, but rather the sensation of creation, and that it's problematic since it lacks meaningful pressure or danger.

In Reclaiming Conversation, she expands on this point, suggesting that artificial social partners provide only the perception of companionship.

This is important because of the value of human connection and what may be lost in relationships that simply provide a sensation or perception of friendship rather than true friendship.

Turkle believes that this transition is critical.


She claims that although connections with AI-enabled technologies may have certain advantages, they pale in comparison to what is missing: the complete complexity and inherent contradictions that define what it is to be human.


A person's connection with an AI-enabled technology is not as intricate as one's interaction with other individuals.


Turkle claims that as individuals have become more used to and dependent on technology gadgets, the definition of friendship has evolved.


  • People's expectations for companionship have been simplified as a result of this transformation, and the advantages that one wants to obtain from partnerships have been reduced.
  • People now tend to associate friendship only with the concept of interaction, ignoring the more nuanced sentiments and arguments that are typical in partnerships.
  • By engaging with gadgets, one may form a relationship with them.
  • Conversations between humans have become merely transactional as human communication has shifted away from face-to-face conversation and toward interaction mediated by devices. 

In other words, the most that can be anticipated is engagement.



Turkle, who has a background in psychoanalysis, claims that this kind of transactional communication allows users to spend less time learning to view the world through the eyes of another person, which is a crucial ability for empathy.


Drawing together these streams of argument, Turkle contends that we are in a robotic period in which people yearn for, and in some circumstances prefer, AI-based robotic companionship over that of other humans.

For example, some people enjoy conversing with their iPhone's Siri virtual assistant because they aren't afraid of being judged by it, as evidenced by a series of Siri commercials featuring celebrities talking to their phones.

Turkle has a problem with this because these devices can only respond as if they understand what is being said.


In reality, AI-based gadgets are confined to the literal meanings of the data stored on the device.

They can decipher the contents of phone calendars and emails, but they have no idea what any of this data means to the user.

To an AI-based device, there is no discernible difference between a calendar appointment for car maintenance and one for chemotherapy.
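
A purely illustrative sketch of that point: to scheduling software, both appointments below are interchangeable records, and nothing in the data structure captures what either one means to the person.

```python
from dataclasses import dataclass
from datetime import datetime

# Both appointments are the same kind of record to the assistant.
@dataclass
class Appointment:
    title: str
    when: datetime

car_maintenance = Appointment("Car maintenance", datetime(2025, 3, 4, 9, 0))
chemotherapy = Appointment("Chemotherapy", datetime(2025, 3, 5, 9, 0))

# The assistant can sort, remind, and reschedule both identically...
for appt in sorted([chemotherapy, car_maintenance], key=lambda a: a.when):
    print(f"Reminder: {appt.title} at {appt.when:%Y-%m-%d %H:%M}")
# ...but it has no representation of what either event means to the user.
```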

A person may lose sight of what it is to have an authentic dialogue with another human when entangled in a variety of these robotic connections with a growing number of technologies.


While Reclaiming Conversation documents deteriorating conversation skills and decreasing empathy, it ultimately ends on a positive note.

Because people are becoming increasingly dissatisfied with their relationships, there may be a chance for face-to-face human communication to reclaim its vital role.


Turkle's ideas focus on reducing the amount of time people spend on their phones, but AI's involvement in this interaction is equally critical.


  • Users must accept that their virtual assistant connections will never be able to replace face-to-face interactions.
  • This will necessitate being more deliberate in how one uses devices, prioritizing in-person interactions over the faster and easier interactions provided by AI-enabled devices.


~ Jai Krishna Ponnappan






See also: 

Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.


References And Further Reading

  • Haugeland, John. 1997. “What Is Mind Design?” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 1–28. Cambridge, MA: MIT Press.
  • Searle, John R. 1997. “Minds, Brains, and Programs.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 183–204. Cambridge, MA: MIT Press.
  • Turing, A. M. 1997. “Computing Machinery and Intelligence.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 29–56. Cambridge, MA: MIT Press.



Quantum Revolution 2.0 - Our First Civic Duty Is to Educate Ourselves.



One thing is certain: future quantum technologies will profoundly alter the planet. 


As a result, our current choices have a lot of clout. 

The scientific underpinnings for current car, rail, and air traffic, as well as modern communication and data processing, were established in the eighteenth and nineteenth centuries, and the foundations for the wonder technologies of the twenty-first century are being created now. 



There is just a short window of opportunity before technology and social norms become so entrenched that we won't be able to reverse them. 


This is why an active, wide-ranging social, and, of course, democratic debate is so critical. 

The ethical assessment and political molding of future technologies must go beyond individual, corporate, or governmental economic or military objectives. 

This will require a democratic commitment from every one of us, including the responsibility to educate ourselves and share ideas. 

We should also require that the media provide thorough coverage of scientific advances and developments. 



When journalists and others who shape public opinion report on global events and significant social changes, there is much too little mention of physics, chemistry, or biology. 


In addition to ethical integrity, we must expect intellectual honesty from politicians and other social and economic decision-makers. 

This implies that intentional lies, as well as information distortion and filtering for the aim of imposing certain objectives, must be constantly combated. 

It is intolerable that false news can wield such devastating propagandistic influence these days, and that a worrying proportion of politicians, for example, continue to genuinely question climate change and Darwin's theory of evolution. 

The commandment of intellectual honesty, however, also applies to those who receive knowledge. 

We must learn to think things through before jumping to conclusions, to examine our own biases, and to participate in complicated interrelationships without oversimplifying everything. 



Last but not least, we must accept uncomfortable facts. 


Every citizen's role in influencing our technological future is to aim for a wide, reasonable, information- and fact-based debate. 

It will be beneficial to keep a careful eye on the progress of quantum physics research. 

The unique characteristics of the quantum universe are becoming an essential part of our daily lives, and we are seeing a watershed point in human history. 

Those who do not pay attention risk losing out and discovering what has occurred after it is too late. 


Our current knowledge of entanglement offers us a peek of what may be possible in the not-too-distant future of technology. However, the future has already started. 


~ Jai Krishna Ponnappan










Artificial Intelligence - How Are Accidents and Risk Assessment Done Using AI?

 



Reliability is one of the most important attributes of many computer-based systems.

Physical damage, data loss, economic disruption, and human deaths may all result from mechanical and software failures.

Many essential systems are now controlled by robotics, automation, and artificial intelligence.

Nuclear power plants, financial markets, social security payments, traffic lights, and military radar stations are all under their watchful eye.

High-tech systems may be purposefully designed to be hazardous to people, as with Trojan horses, viruses, and spyware, or they can become dangerous through human programming or operating errors.

They may become dangerous in the future as a result of purposeful or unintended actions made by the machines themselves, or as a result of unanticipated environmental variables.

The first death of a person working with a robot occurred in 1979.

A one-ton parts-retrieval robot built by Litton Industries hit Ford Motor Company engineer Robert Williams in the head.

Two years later, Japanese engineer Kenji Urada was killed after failing to completely switch off a malfunctioning robot on the production floor at Kawasaki Heavy Industries.

Urada was shoved into a grinding machine by the robot's arm.

Accidents do not always result in deaths.

A 300-pound Knightscope K5 security robot on patrol at a retail business center in Northern California, for example, knocked down a child and ran over his foot in 2016.

The child sustained only a few cuts and some swelling.

The Cold War's history is littered with stories of nuclear near-misses caused by faulty computer technology.

In 1979, a computer glitch at the North American Aerospace Defense Command (NORAD) misled the Strategic Air Command into believing that the Soviet Union had fired over 2,000 nuclear missiles towards the US.

An examination revealed that a training scenario had been uploaded to an active defense computer by mistake.

In 1983, a Soviet military early warning system reported that a single US intercontinental ballistic missile had been launched in a nuclear attack.

Stanislav Petrov, the missile defense system's operator, correctly discounted the signal as a false alarm.

The cause of this and subsequent false alarms was ultimately traced to sunlight reflecting off high-altitude clouds.

Despite having prevented a global thermonuclear war, Petrov was eventually reprimanded for embarrassing his superiors by disclosing the system's faults.

The so-called "2010 Flash Crash" was caused by stock market trading software.

In slightly over a half-hour on May 6, 2010, the S&P 500, Dow Jones, and NASDAQ stock indexes lost—and then mainly regained—a trillion dollars in value.

Navinder Singh Sarao, a U.K. trader, was arrested after a five-year investigation by the US Department of Justice for allegedly manipulating an automated system to issue and then cancel huge numbers of sell orders, allowing his firm to acquire equities at temporarily reduced prices.

In 2015, there were two more software-induced market flash crashes, and in 2017, there were flash crashes in the gold futures market and digital cryptocurrency sector.

Tay (short for "Thinking about you"), a Microsoft Corporation artificial intelligence social media chatterbot, went tragically wrong in 2016.

Tay was created by Microsoft engineers to imitate a nineteen-year-old American girl and to learn from Twitter discussions.

Instead, Tay was trained to use harsh and aggressive language by internet trolls, which it then repeated in tweets.

After barely sixteen hours, Microsoft deleted Tay's account.

More AI-related accidents in motor vehicle operation may occur in the future.

In 2016, the first fatal collision involving a self-driving car happened when a Tesla Model S in autopilot mode collided with a semi-trailer crossing the highway.

The motorist may have been viewing a Harry Potter movie on a portable DVD player when the accident happened, according to witnesses.

Tesla's software does not yet allow for completely autonomous driving, hence a human operator is required.

Despite these dangers, one management consulting company claims that autonomous automobiles might avert up to 90% of road accidents.

Artificial intelligence security is rapidly growing as a topic of cybersecurity study.

Militaries all around the globe are working on prototypes of dangerous autonomous weapons systems.

Automated weapons such as drones, which now rely on a human operator to make deadly-force decisions about targets, might be replaced with systems that make life-and-death decisions on their own.

Robotic decision-makers on the battlefield may one day outperform humans in extracting patterns from the fog of war and reacting quickly and logically to novel or challenging circumstances.

High technology is becoming more and more important in modern civilization, yet it is also becoming more fragile and prone to failure.

An inquisitive squirrel caused the NASDAQ's primary computer to crash in 1987, bringing one of the world's major stock exchanges to its knees.

In another example, the ozone hole above Antarctica was not discovered for years because exceptionally low levels reported in data-processed satellite images were assumed to be mistakes.

It's likely that the complexity of autonomous systems, as well as society's reliance on them under quickly changing circumstances, will make completely tested AI unachievable.

Artificial intelligence is powered by software that can adapt to and interact with its surroundings and users.

Changes in variables, individual acts, or events may have unanticipated and even disastrous consequences.

One of the dark secrets of sophisticated artificial intelligence is that it is based on mathematical approaches and deep learning algorithms that are so complicated that even its creators are baffled as to how it makes accurate conclusions.

Autonomous cars, for example, rely on instructions written entirely by computers that learn from watching people drive in real-world situations.

But how can a self-driving automobile learn to anticipate the unexpected?

Will attempts to adjust AI-generated code to decrease apparent faults, omissions, and impenetrability lessen the likelihood of unintended negative consequences, or will they merely magnify existing problems and produce new ones? Although it is unclear how to mitigate the risks of artificial intelligence, it is likely that society will rely on well-established and presumably trustworthy machine-learning systems to automatically provide rationales for their actions, as well as examine newly developed cognitive computing systems on our behalf.


~ Jai Krishna Ponnappan




See also: Algorithmic Error and Bias; Autonomy and Complacency; Beneficial AI, Asilomar Meeting on; Campaign to Stop Killer Robots; Driverless Vehicles and Liability; Explainable AI; Product Liability and AI; Trolley Problem.


Further Reading

De Visser, Ewart Jan. 2012. “The World Is Not Enough: Trust in Cognitive Agents.” Ph.D. diss., George Mason University.

Forester, Tom, and Perry Morrison. 1990. “Computer Unreliability and Social Vulnerability.” Futures 22, no. 5 (June): 462–74.

Lee, John D., and Katrina A. See. 2004. “Trust in Automation: Designing for Appropriate Reliance.” Human Factors 46, no. 1 (Spring): 50–80.

Yudkowsky, Eliezer. 2008. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 308–45. New York: Oxford University Press.



Artificial Intelligence - What Are Robot Caregivers?

 


Personal support robots, or caregiver robots, are meant to help individuals who, for a number of reasons, need assistive technology for long-term care, disability, or monitoring.

Although not widely used, caregiver robots are seen as useful in countries with rapidly rising older populations or in situations when a significant number of individuals are afflicted at the same time with a severe sickness.


Caregiver robots have elicited a wide variety of reactions, from terror to comfort.


Some ethicists have claimed that, in attempting to eliminate the toil from caring rituals, robotics researchers misunderstand or underappreciate the role of compassionate caretakers.

The majority of caregiver robots are personal robots for use at home, however some are used in institutions including hospitals, nursing homes, and schools.

Some of them are geriatric care robots.

Others, dubbed "robot nannies," are meant to do childcare tasks.

Many have been dubbed "social robots." Interest in caregiving robots has risen in tandem with the world's aging population.

Japan has one of the largest percentages of elderly people in the world and is a pioneer in the creation of caregiver robots.

According to the United Nations, by 2050 one-third of the island nation's population will be 65 or older, far outstripping the natural supply of nursing care workers.

The Ministry of Health, Labor, and Welfare of the nation initiated a pilot demonstration project in 2013 to bring bionic nursing robots into eldercare facilities.

By 2050, the number of eligible retirees in the United States will have doubled, and those beyond the age of 85 will have tripled.

In the same year, there will be 1.5 billion people over the age of 65 worldwide (United Nations 2019).

For a number of reasons, people are becoming more interested in caregiver robot technology.


The physical difficulties of caring for the elderly, infirm, and children are often mentioned as a driving force for the creation of assistive robots.


The caregiver position may be challenging, especially when the client has a severe or long-term illness such as Alzheimer's disease, dementia, or schizoid disorder.

Caregiver robots have also been proposed as a partial answer to family economic distress.

Robots may one day be able to take the place of human relatives who must work.

They've also been suggested as a possible solution to nursing home and other care facility staffing shortages.

In addition to technological advancements, societal and cultural factors are driving the creation of caregiver robots.

Because of unfavorable attitudes toward outsiders, robot caregivers are preferred in Japan over foreign health-care workers.

The demand for independence and the dread of losing behavioral, emotional, and cognitive autonomy are often acknowledged by the elderly themselves.

In the literature, several robot caregiver functions have been recognized.

Some robots are thought to be capable of minimizing human carers' mundane work.

Others are better at more difficult jobs.

Intelligent service robots have been designed to help with feeding, cleaning of houses and bodies, and mobility support, all of which save time and effort (including lifting and turning).



Safety monitoring, data collecting, and surveillance are some of the other functions of these assistive technologies.


Clients with severe to profound impairments may benefit from robot carers for coaching and stimulation.

For patients who require frequent reminders to accomplish chores or take medication, these robots might be used as cognitive prosthesis or mobile memory aides.

These caregiver robots may also include telemedicine capabilities, allowing them to call doctors or nurses for routine or emergency consultations.


Robot caretakers have been offered as a source of social connection and companionship, which has sparked debate.

While some social robots have a humanlike appearance, many are interactive smart toys or artificial pets.

In Japan, such comforting robots are referred to as iyashi, a term that also describes a genre of anime and manga focused on emotional healing.

Japanese children and adults may choose from a broad range of soft-tronic robots as huggable companions.

Matsushita Electric Industrial (MEI) created Wandakun, a fluffy koala bear-like robot, in the 1990s.

When petted, the bear wiggled, sang, and responded to touch with a few Japanese sentences.


Babyloid is a plush mechanical baby beluga whale created by Masayoshi Kano at Chukyo University to help elderly patients suffering from depression.


Babyloid is only seventeen inches long, yet its eyes blink and it "naps" when rocked.

When it is "glad," LED lights embedded in its cheeks shine.

When the robot is in a bad mood, it may also drop blue LED tears.

Babyloid can produce almost a hundred distinct noises.

It is hardly a toy, since each one costs more than $1,000.

Paro, a replica of an infant harp seal, was invented by Japan's National Institute of Advanced Industrial Science and Technology (AIST) to provide consolation to individuals suffering from dementia, anxiety, or depression.

Thirteen surface and whisker sensors, three microphones, two vision sensors, and seven actuators for the neck, fins, and eyelids are all included in the eighth-generation Paro.

When patients with dementia use Paro, the robot's developer, Takanori Shibata of AIST's Intelligent System Research Institute, reports that they experience less hostility and roaming, as well as increased social interaction.

In the United States, Paro is classified as a Class II medical device, which puts it in the same risk category as electric wheelchairs and X-ray machines.


Taizou, a twenty-eight-inch robot that can duplicate the motions of thirty various workouts, was developed by AIST.


In Japan, Taizou is utilized to encourage older adults to exercise and keep in shape.

Sony Corporation's well-known AIBO is a robotic therapy dog as well as a very expensive toy.

In 2018, Sony's Life Care Design division started introducing a new generation of dog robots into the company's retirement homes.

The humanoid QRIO robot, AIBO's successor, has been suggested as a platform for basic childcare activities including interactive games and sing-alongs.

Palro, a Fujisoft robot for eldercare therapy, is already in use in over 1,000 senior citizen institutions.

Since its original release in 2010, its artificial intelligence software has been modified multiple times.

Both are used to alleviate dementia symptoms and provide enjoyment.

Japanese firms have also promoted so-called partner-type personal robots to a broader segment of users.

These robots are designed to encourage human-machine connection and to alleviate feelings of loneliness and mild melancholy.


In the late 1990s, NEC Corporation started developing the adorable PaPeRo (Partner-Type Personal Robot).


PaPeRo communications robots have the ability to look, listen, communicate, and move in a variety of ways.

Current versions include twin camera eyes that can recognize faces and are intended to allow family members who live in different houses to keep an eye on one another.

PaPeRo's Childcare Version interacts with youngsters and serves as a temporary babysitter.

In 2005, Toyota debuted its humanoid Partner Robots family.

The company's robots are intended for a broad range of applications, including human assistance and rehabilitation, as well as socializing and innovation.


In 2012, Toyota launched the Partner Robots line with a customized Human Support Robot (HSR).


HSR robots are designed to help older adults maintain their independence.

In Japan, prototypes are currently being used in eldercare facilities and handicapped people's homes.

HSR robots are capable of picking up and retrieving things as well as avoiding obstacles.

They may also be controlled remotely by a human caregiver and offer internet access and communication.

Japanese roboticists are likewise taking a more focused approach to automated caring.


The RI-MAN robot, developed by the RIKEN Collaboration Center for Human-Interactive Robot Research, is an autonomous humanoid patient-lifting robot.


The forearms, upper arms, and torso of the robot are made of a soft silicone skin layer and are equipped with touch sensors for safe lifting.

RI-MAN has odor detectors and can follow human faces.

RIBA (Robot for Interactive Body Assistance) is a second-generation RIKEN lifting robot that securely moves patients from bed to wheelchair while responding to simple voice instructions.

Capacitance-type tactile sensors made completely of rubber monitor patient weight in the RIBA-II.


RIKEN's current-generation hydraulic patient life-and-transfer equipment is called Robear.

The robot, which has the look of an anthropomorphic robotic bear, is lighter than its predecessors.

The lifting robots were invented by Toshiharu Mukai, a RIKEN lab leader.


SECOM's MySpoon, Cyberdine's Hybrid Assistive Limb (HAL), and Panasonic's Resyone robotic care bed are examples of narrower approaches to caregiver robots in Japan.

MySpoon is a meal-assistance robot that allows customers to feed themselves using a joystick as a replacement for a human arm and eating utensil.

People with physical limitations may employ the Cyberdine Hybrid Assistive Limb (HAL), a powered robotic exoskeleton outfit.

For patients who would ordinarily need daily lift help, the Panasonic Resyone robotic care bed merges bed and wheelchair.

Projects to develop caregiver robots are also ongoing in Australia and New Zealand.

The Australian Research Council's Centre of Excellence for Autonomous Systems (CAS) was established in the early 2000s as a collaboration between the University of Technology Sydney, the University of Sydney, and the University of New South Wales.

The center's mission was to better understand and develop robotics in order to promote the widespread and ubiquitous use of autonomous systems in society.

The work of CAS has now been separated and placed on an independent footing at the University of Technology Sydney's Centre for Autonomous Systems and the University of Sydney's Australian Centre for Field Robotics.

Bruce MacDonald of the University of Auckland is leading the creation of Healthbot, a socially assistive robot.

Healthbot is a mobile health robot that reminds seniors to take their medications, checks their vitals and monitors their physical condition, and calls for help in an emergency.

In the European Union, a number of caregiver robots are being developed.

The GiraffPlus (Giraff+) project, recently completed at Örebro University in Sweden, aimed to develop an intelligent system for monitoring the blood pressure, temperature, and movements of elderly individuals at home (to detect falls and other health emergencies).

Giraff may also be utilized as a telepresence robot for virtual visits with family members and health care providers.

The robot is roughly five and a half feet tall and has basic controls as well as a night-vision camera.
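
A hypothetical sketch of the kind of rule-based check a home-monitoring robot like this might run over its sensor stream; the thresholds are invented for illustration and are not the GiraffPlus project's actual algorithms.

```python
# Toy home-monitoring rules: flag out-of-range vitals and long stretches
# without movement so a human caregiver can be notified.

def check_vitals(systolic_bp, temperature_c, minutes_without_movement):
    alerts = []
    if systolic_bp > 180 or systolic_bp < 90:
        alerts.append("blood pressure out of safe range")
    if temperature_c > 38.0:
        alerts.append("fever detected")
    if minutes_without_movement > 120:
        alerts.append("no movement for two hours: possible fall")
    return alerts

for alert in check_vitals(systolic_bp=85, temperature_c=36.8,
                          minutes_without_movement=150):
    print("notify caregiver:", alert)
```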


The European Mobiserv project's interdisciplinary, collaborative goal is to develop a robot that reminds elderly customers to take their prescriptions, consume meals, and keep active.


Mobiserv is part of a smart home ecosystem that includes sensors, optical sensors, and other automated devices.

Mobiserv is a mobile application that works with smart clothing that collects health-related data.

Mobiserv is a collaboration between Systema Technologies and nine European partners that represent seven different nations.

The EU CompanionAble Project, which involves fifteen institutions and is led by the University of Reading, aims to develop a transportable robotic companion to illustrate the benefits of information and communication technology in aged care.

In the early stages of dementia, the CompanionAble robot tries to solve emergency and security issues, offer cognitive stimulation and reminders, and call human caregiver support.

In a smart home scenario, CompanionAble also interacts with a range of sensors and devices.

The QuoVADis Project at Broca Hospital in Paris, a public university hospital specializing in geriatrics, has a similar goal: to develop a robot for the at-home care of cognitively impaired older adults.

The Fraunhofer Institute for Manufacturing Engineering and Automation continues to design and manufacture the modular Care-O-bot robots.

It's designed for hospitals, hotels, and nursing homes.

With its long arms and rotating, bending hip joint, the Care-O-Bot 4 service robot can reach from the floor to a shelf.

The robot is intended to be regarded as friendly, helpful, courteous, and intelligent.


ROBOSWARM and IWARD, intelligent and programmable hospital robot swarms developed by the European Union, provide a fresh approach.


ROBOSWARM is a distributed agent cleaning system for hospitals.

Cleaning, patient monitoring and guiding, environmental monitoring, medicine distribution, and patient surveillance are all covered by the more flexible IWARD.

Because the AI systems incorporated in these systems display adaptive and self-organizing characteristics, multi-institutional partners determined that certifying that they would operate adequately under real-world conditions would be challenging.

They also discovered that onlookers sometimes questioned the robots' motions, asking whether they were doing the proper tasks.


The Ludwig humanoid robot, developed at the University of Toronto, is intended to assist caretakers in dealing with aging-related issues in their clients.


The robot converses with elderly people suffering from dementia or Alzheimer's disease.

Goldie Nejat, AGE-WELL Investigator and Canada Research Chair in Robots for Society and Director of the University of Toronto's Institute for Robotics and Mechatronics, is employing robotics technology to assist individuals by guiding them through ordinary everyday chores.

Brian, the university's robot, is sociable and reacts to emotional human interaction.


HomeLab is creating assistive robots for use in health-care delivery at the Toronto Rehabilitation Institute (iDAPT), Canada's biggest academic rehabilitation research facility.


Ed the Robot, created by HomeLab, is a low-cost robot built using the iRobot Create toolset.

The robot, like Brian, is designed to remind dementia sufferers of the appropriate steps to take while doing everyday tasks.


In the United States, caregiver robot technology is also on the rise.

The Acrotek Actron MentorBot surveillance and security robot, which was created in the early 2000s, could follow a human client using visual and aural cues, offer food or medicine reminders, inform family members about concerns, and call emergency services.


Bandit is a socially supportive robot created by Maja Matarić of the Robotics and Autonomous Systems Center at the University of Southern California.


The robot is employed in therapeutic settings with patients who have had catastrophic injuries or strokes, as well as those who have aging disorders, autism, or who are obese.

Stroke sufferers react swiftly to imitation exercise movements produced by clever robots in rehabilitation sessions, according to the institute.

Robotic-assisted rehabilitative exercises were also effective in prompting and cueing tasks for youngsters with autism spectrum disorders.

Through the company Embodied Inc., Matarić is currently attempting to bring affordable social robots to market.


Nursebots Flo and Pearl, assistive robots for the care of the elderly and infirm, were developed in collaboration between the University of Pittsburgh, Carnegie Mellon University, and the University of Michigan.


The National Science Foundation-funded Nursebot project created a platform for intelligent reminders, telepresence, data gathering and monitoring, mobile manipulation, and social engagement.

Today, Carnegie Mellon is home to the Quality of Life Technology (QoLT) Center, a National Science Foundation Engineering Research Center (ERC) whose objective is to use intelligent technologies to promote independence and improve the functional capabilities of the elderly and handicapped.

The transdisciplinary AgeLab at the Massachusetts Institute of Technology was founded in 1999 to aid in the development of marketable ideas and assistive technology for the aged.

Joe Coughlin, the creator and director of AgeLab, has concentrated on developing the technological requirements for conversational robots for senior care that have the difficult-to-define attribute of likeability.

At MIT, Walter Dan Stiehl and associates in the Media Lab created the Huggable™, a robotic teddy bear companion.

A video camera eye, 1,500 sensors, silent actuators, an inertial measurement unit, a speaker, and an internal personal computer with wireless networking capabilities are all included in the bear.

Virtual agents are used in other forms of caregiving technology.

These agents are sometimes called softbots.

The MIT Media Lab's CASPER affect management agent, created by Jonathan Klein, Youngme Moon, and Rosalind Picard in the early 2000s, is an example of a virtual agent designed to relieve unpleasant emotional states, notably frustration.

The human-computer interaction (HCI) agent employs text-only social-affective feedback to respond to a user who is sharing thoughts and feelings with the computer.
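
The sketch below gives a rough sense of what a text-only social-affective feedback loop might look like. It is a hypothetical Python illustration, not CASPER's actual code; the keyword list, function name, and canned phrases are invented for the example.

```python
# Hypothetical sketch of text-only social-affective feedback in the spirit of
# CASPER; NOT the actual implementation. Word list and phrasing are assumptions.

NEGATIVE_WORDS = {"frustrated", "annoyed", "angry", "upset", "stuck"}

def affective_reply(user_text: str) -> str:
    """Acknowledge the user's stated feelings (active listening) in plain text."""
    words = set(user_text.lower().replace(",", " ").split())
    if words & NEGATIVE_WORDS:
        # Reflect the feeling back, then offer sympathy.
        return ("It sounds like that was really frustrating. "
                "I'm sorry it has been difficult.")
    return "Thanks for sharing that. Could you tell me a bit more?"

print(affective_reply("I am so frustrated with this task"))
```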



The MIT FitTrack exercise advisor agent uses a browser-based client with a relational database and a text-to-speech engine on the backend.



The goal of FitTrack is to create an interactive simulation of a professional fitness trainer, named Laura, working with a client.
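
The architecture described here (browser client, relational store, text-to-speech backend) can be pictured with the following minimal Python sketch. It is an assumption-laden illustration, not FitTrack itself; the table schema, goal parameter, and coaching phrases are invented.

```python
# Hypothetical sketch of the backend pieces described above, not FitTrack itself.
# A relational store holds the client's exercise log; the agent ("Laura")
# composes a coaching line that a browser client would display and a
# text-to-speech engine would voice. Schema and phrasing are assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE workouts (day TEXT, minutes INTEGER)")
conn.executemany("INSERT INTO workouts VALUES (?, ?)", [("Mon", 20), ("Wed", 10)])

def trainer_utterance(weekly_goal_minutes: int = 90) -> str:
    """Compose Laura's next coaching line from the logged exercise minutes."""
    total = conn.execute("SELECT SUM(minutes) FROM workouts").fetchone()[0] or 0
    if total >= weekly_goal_minutes:
        return "Great work this week! Shall we set a slightly higher goal?"
    return f"You've logged {total} minutes so far. Can we fit in a short walk today?"

# In a deployed system this string would be sent to the browser client and
# rendered by the text-to-speech engine; here we simply print it.
print(trainer_utterance())
```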

Amanda Sharkey and Noel Sharkey, computer scientists at the University of Sheffield, are often mentioned in studies on the ethics of caregiver robot technology.

The Sharkeys are concerned about robotic carers and the loss of human dignity they may cause.

They claim that such technology has both advantages and disadvantages.

On the one hand, care provider robots have the potential to broaden the variety of options accessible to graying populations, and these features of technology should be promoted.

The technologies, on the other hand, might be used to mislead or deceive society's most vulnerable people, or to further isolate the elderly from frequent companionship and social engagement.

The Sharkeys point out that robotic caretakers may someday outperform humans in certain areas, such as when speed, power, or accuracy are required.


Robots might also be designed to avoid or reduce the abuse, impatience, and incompetence that are common complaints about eldercare.


Indeed, if societal institutions for caregiver assistance are weak or failing, an ethical obligation to use caregiver robots may even arise.

On the other hand, robots cannot comprehend complicated human constructs such as loyalty, nor can they adapt perfectly to the delicate, individualized needs of particular clients.

"The old may find themselves in a barren world of machines, a world of automated care: a factory for the aged," the Sharkeys wrote if they don't plan ahead (Sharkey and Sharkey 2012, 282).

Sherry Turkle devotes a chapter to caregiver robots in her groundbreaking book Alone Together: Why We Expect More from Technology and Less from Each Other (2011).

She notes that researchers in robotics and artificial intelligence are driven by a desire to make the elderly feel wanted through their work, on the assumption that older people are often lonely or abandoned.

In aging populations, it is true that attention and labor are in short supply.


Robots are used as a kind of entertainment.


They make everyday living and household routines easier and safer.

Turkle concedes that robots never tire and can even operate from a neutral stance in interactions with clients.

Humans, by contrast, can have motives that run counter to even the most basic or customary norms of caring.


"One may argue that individuals can act as though they care," Turkle observes.

"A robot is unconcerned. As a result, a robot cannot act since it can only act" (Turkle 2011, 124).


Nevertheless, Turkle remains a sharp critic of caregiving technology.

Most importantly, she argues, caring behavior is easily mistaken for caring feelings.

In her opinion, interactions between people and robots do not constitute true dialogues.

They may even cause consternation among vulnerable and reliant groups.

The risk of privacy invasion from caregiver robot monitoring is significant, and automated help may also undermine human experience and the formation of memories.


A significant threat is the emergence of a generation of elders and children who prefer machines to intimate human relationships.


Several philosophers and ethicists have weighed in on appropriate behaviors and manufactured compassion.

According to Sparrow and Sparrow (2006), human touch is essential to rituals of healing, robots may heighten a patient's sense of lost control, and robot caring is counterfeit caregiving because robots are incapable of genuine concern.

Borenstein and Pearson (2011) and Van Wynsberghe (2013) argue that caregiver robots can infringe on human dignity and the rights of seniors, impeding freedom of choice.

Van Wynsberghe, in particular, advocates value-sensitive robot designs aligned with the ethic of care developed by University of Minnesota professor Joan Tronto, which encompasses attentiveness, responsibility, competence, and reciprocity, as well as broader concerns for respect, trust, empathy, and compassion.

Vallor (2011) challenged the underlying assumptions of robot care, questioning the premise that caring for others is merely a problem or a burden.

Good care may need to be tailored to the individual, something that personable but mass-produced robots could fail to provide.


Robot caregiving will also likely be frowned upon by many faiths and cultures.


By providing inappropriate substitutes for social connection, caregiver robots may even contribute to reactive attachment disorder in children.

The International Organization for Standardization (ISO) has published standards for the design of personal care robots, but who is liable when a robot is negligent? The courts are undecided, and robot caregiver legislation is still in its early stages.

According to Sharkey and Sharkey (2010), caregiver robots raise concerns about breaches of privacy, injury caused by unlawful restraint, deceptive practices, psychological harm, and failures of accountability.

Future robot ethical frameworks must prioritize the needs of patients above the wishes of caretakers.

In interviews with the elderly, Wu et al. (2010) discovered six themes connected to patient requirements.

Thirty people in their sixties and seventies agreed that assistive technology should initially aid them with simple, daily chores.

Other important needs included maintaining good health, stimulating memory and concentration, living alone "for as long as I wish without worrying my family circle" (Wu et al. 2010, 36), maintaining curiosity and growing interest in new activities, and communicating with relatives on a regular basis.


In popular culture, robot maids, nannies, and caregiver technologies are all prominent clichés.


Several early instances may be seen in the television series The Twilight Zone.

In "The Lateness of the Hour," a man develops a whole family of robot slaves (1960).

In "I Sing the Body Electric," Grandma is a robot babysitter (1962).


Rosie the robotic maid is a notable character from the animated television series The Jetsons (1962–1963).

Caregiver robots are a central narrative component in the animated films WALL-E (2008) and Big Hero 6 (2014), as well as the science fiction thriller I Am Mother (2019).

They're also commonly seen in manga and anime.

Roujin Z (1991), Kurogane Communication (1997), and The Umbrella Academy (2019) are just a few examples.


Jake Schreier's 2012 science fiction film Robot & Frank dramatizes both the limits and the promise of caregiver robot technology.

In the film, a gruff former jewel thief with deteriorating memory tries to turn his robotic caretaker into a criminal accomplice.

The film explores a number of ethical concerns, including not just the care of the elderly but also the rights of robots held in servitude.

"We are psychologically evolved not merely to nurture what we love, but to love what we nurture," says MIT social scientist Sherry Turkle (Turkle 2011, 11).


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 


Ishiguro, Hiroshi; Robot Ethics; Turkle, Sherry.


Further Reading


Borenstein, Jason, and Yvette Pearson. 2011. “Robot Caregivers: Ethical Issues across the Human Lifespan.” In Robot Ethics: The Ethical and Social Implications of Robotics, edited by Patrick Lin, Keith Abney, and George A. Bekey, 251–65. Cambridge, MA: MIT Press.

Sharkey, Noel, and Amanda Sharkey. 2010. “The Crying Shame of Robot Nannies: An Ethical Appraisal.” Interaction Studies 11, no. 2 (January): 161–90.

Sharkey, Noel, and Amanda Sharkey. 2012. “The Eldercare Factory.” Gerontology 58, no. 3: 282–88.

Sparrow, Robert, and Linda Sparrow. 2006. “In the Hands of Machines? The Future of Aged Care.” Minds and Machines 16, no. 2 (May): 141–61.

Turkle, Sherry. 2011. Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books.

United Nations. 2019. World Population Ageing Highlights. New York: Department of Economic and Social Affairs. Population Division.

Vallor, Shannon. 2011. “Carebots and Caregivers: Sustaining the Ethical Ideal of Care in the Twenty-First Century.” Philosophy & Technology 24, no. 3 (September): 251–68.

Van Wynsberghe, Aimee. 2013. “Designing Robots for Care: Care Centered Value Sensitive Design.” Science and Engineering Ethics 19, no. 2 (June): 407–33.

Wu, Ya-Huei, Véronique Faucounau, Mélodie Boulay, Marina Maestrutti, and Anne-Sophie Rigaud. 2010. “Robotic Agents for Supporting Community-Dwelling Elderly People with Memory Complaints: Perceived Needs and Preferences.” Health Informatics Journal 17, no. 1: 33–40.

