
Artificial Intelligence - Iterative AI Ethics In Complex Socio-Technical Systems

 



Title: The Need For Iterative And Evolving AI Ethics Processes And Frameworks To Ensure Relevant, Fair, And Ethical Scalable Complex Socio-Technical Systems.

Author: Jai Krishna Ponnappan




Ethics has strong fangs, but they are seldom bared in AI ethics today, so it is no surprise that AI ethics is criticized for lacking efficacy. 


This essay claims that the 'ethics' in present-day AI ethics is largely toothless: caught in an 'ethical principles' approach, it is particularly vulnerable to manipulation, especially by industrial players. 

Using ethics as a replacement for the law puts it at risk of being abused and misapplied. 

This severely restricts what ethics can accomplish, and it is a major setback for the AI field and its implications for people and society. 

This paper examines these dangers before focusing on the efficacy of ethics and the critical contribution it can – and should – make to AI ethics right now. 



Ethics is a potent weapon. 


Unfortunately, we seldom use it in AI ethics, so it is no surprise that AI ethics is dubbed "ineffective." 

This paper examines the different ethical procedures that have arisen in recent years in response to the widespread deployment and usage of AI in society, as well as the hazards that come with it. 

Lists of principles, ethical codes, suggestions, and guidelines are examples of these procedures. 


However, as many have shown, although these ethical innovations are exciting, they are also problematic: their usefulness has yet to be proven, and they are particularly susceptible to manipulation, notably by industry. 


This is a setback for AI, as it severely restricts what ethics may do for society and people. 

However, as this paper demonstrates, the problem is not that ethics is meaningless (or ineffective) in the face of current AI deployment; rather, ethics is being used (or manipulated) in a manner that renders it ineffectual for AI ethics. 

The paper starts by describing the current state of AI ethics: AI ethics is essentially principled, that is, it adheres to a 'law' view of ethics. 

It then demonstrates how this ethical approach fails to accomplish what it claims to do. 

The second section of this paper focuses on the true worth of ethics – its 'efficacy' – which we describe as the capacity to notice the new as it continuously emerges. 



We explain why, in today's AI ethics, the capacity to resist cognitive and perceptual inertia, which leaves us passive in the face of new developments, is crucial. 


Finally, although we acknowledge that the legalistic approach to ethics is not entirely incorrect, we argue that it is the end of ethics, not its beginning, and that it ignores the most valuable and crucial components of ethics. 

There are many ongoing conversations and activities on AI ethics across stakeholder quarters (policy, academia, industry, and even the media). This is something we can all be happy about. 


Policymakers (e.g., the European Commission and the European Parliament) and business, in particular, are concerned about doing things right in order to promote ethical and responsible AI research and deployment in society. 


It is now widely acknowledged that if AI is adopted without adequate attention and thought for its potentially detrimental effects on people, particular groups, and society as a whole, things might go horribly wrong (including, for example, bias and discrimination, injustice, privacy infringements, increase in surveillance, loss of autonomy, overdependency on technology, etc.). 

The focus then shifts to ethics, with the goal of ensuring that AI is implemented in a way that respects deeply held social values and norms, placing them at the center of responsible technology development and deployment (Hagendorff, 2020; Jobin et al., 2019). 

The 'Ethics guidelines for trustworthy AI,' presented in 2019 by the European Commission's High-Level Expert Group on AI (set up in 2018), are one example of contemporary ethics efforts (High-Level Expert Group on Artificial Intelligence, 2019). 

However, the present use of the term "ethics" in the field of AI ethics is questionable. 

Today's AI ethics is dominated by what British philosopher G.E.M. Anscombe refers to as a 'law conception of ethics,' i.e., a perspective on ethics that treats it as if it were a kind of law (Anscombe, 1958). 

It's customary to think of ethics as a "softer" version of the law (Jobin et al., 2019: 389). 


However, this is simply one approach to ethics, and it is problematic, as Anscombe has shown. It is problematic in at least two respects in terms of AI ethics. 

For starters, it's troublesome since it has the potential to be misapplied as a substitute for regulation (whether through law, policies or standards). 

Over the previous several years, many authors have made this point (Article 19, 2019; Greene et al., 2019; Hagendorff, 2020; Jobin et al., 2019; Klöver and Fanta, 2019; Mittelstadt, 2019; Wagner, 2018). Wagner, for example, cites the case of a member of the Google DeepMind ethics team at the Conference on World Affairs 2018 repeatedly asserting 'how ethically Google DeepMind was working, while simultaneously dodging any accountability for the data security crisis at Google DeepMind' (Wagner, 2018). 

'Ethical AI' discourse, according to Ochigame (2019), was "aligned strategically with a Silicon Valley campaign attempting to circumvent legally enforceable prohibitions of problematic technology." Ethics falls short in this regard because it lacks the instruments to enforce conformity. 


Ethics, according to Hagendorff, "lacks means to support its own normative assertions" (2020: 99). 


If ethics is about enforcing rules, then it is true that ethics is ineffective. 

Although ethical programs "bring forward great intentions," according to the human rights organization Article 19, "their general lack of accountability and enforcement measures" renders them ineffectual (Article 19, 2019: 18). 

Finally, and predictably, ethics is attacked for being ineffective. 

However, it's important to note that the problem isn't that ethics is being asked to perform something for which it is too weak or soft. 

It's more like it's being asked to do something it wasn't supposed to accomplish. 


Criticizing ethics for lacking the efficacy to enforce compliance with whatever it requires is like blaming a fork for not correctly cutting meat: that is not what it is meant to do. 


The goal of ethics is not to prescribe certain behaviors and then guarantee that they are followed. 

The issue occurs when it is utilized in this manner. 

This is especially true in the field of AI ethics, where ethical principles, norms, or criteria are required to control AI and guarantee that it does not damage people or society as a whole (e.g. AI HLEG). 

Some suggest that this ethical lapse is deliberate, motivated by a desire to guarantee that AI is not governed by legislation, i.e., that greater flexibility remains available and that no firm boundaries are drawn constraining the industrial and economic interests tied to this technology (Klöver and Fanta, 2019). 

For example, this criticism has been directed against the AI HLEG guidelines. 

According to Article 19, industry was extensively represented in the debates of the European High-Level Expert Group on Artificial Intelligence (EU-HLEG), while academia and civil society did not have the same luxury. 


Several non-negotiable ethical standards that were initially specified in the text were eliminated from the final version owing to corporate pressure (Article 19, 2019: 18). 


Using ethics to hinder the implementation of vital legal regulation is a significant and concerning abuse and misuse of ethics. 

The result is ethics washing, as well as its cousins: ethics shopping, ethics shirking, and so on (Floridi, 2019; Greene et al., 2019; Wagner, 2018). 

Second, since the AI ethics field is dominated by this 'law conception of ethics,' it fails to make full use of what ethics has to offer – its proper efficacy – despite the critical need for it. 

What exactly is this efficacy, and what value might it provide to the field? The true fangs of ethics lie in a never-failing capacity to perceive the new (Laugier, 2013). 


Ethics is basically a state of mind, a constantly renewed and nimble response to reality as it changes. 


The ethics of care has emphasized attention as a critical component of ethics (Tronto, 1993: 127). 

In this way, ethics is a strong instrument against cognitive and perceptual inertia, which prevents us from seeing what is different from before or in new settings, cultures, or circumstances, and hence necessitates a change in behavior (regulation included). 

This is particularly important for AI, given the significant changes and implications it has had and continues to have on society, as well as our basic ways of being and functioning. 

This ability to observe the environment is what keeps us from being cooked alive like the proverbial frog in slowly heated water: it allows us to detect subtle changes as they happen. 

An extension and deepening of surveillance by governments and commercial enterprises, a rising reliance on technology, and the deployment of biased systems that discriminate against women and minorities are all contributing to the increasingly hot water around AI. 


Such developments must be carefully examined, and opposed when their negative consequences exceed their advantages. 


In this way, ethics has a close relationship with the social sciences: as an attempt to perceive what we would not otherwise notice, it helps us look concretely at how the world evolves. 

It aids in the cleaning of the lens through which we see the world so that we may be more aware of its changes (and AI does bring many of these). 

It is critical that ethics back us up in this respect. 

It enables us to be less passive in the face of these changes, allowing us to better direct them in ways that benefit people and society while also improving our quality of life. 


Hagendorff makes a similar point in his essay on the 'Ethics of AI Ethics,' disputing the prevalent deontological approach to ethics in AI ethics (what we've referred to as a legalistic approach to ethics in this article), whose primary goal is to 'limit, control, or direct' (2020: 112). 


He emphasizes the necessity for AI to adopt virtue ethics, which strives to 'broaden the scope of action, disclose blind spots, promote autonomy and freedom, and cultivate self-responsibility' (Hagendorff, 2020: 112). 

Other ethical theory frameworks that might be useful in today's AI ethics discussion include the Spinozist approach, which focuses on the growth or loss of agency and action capability. 

So, are we just misinterpreting AI ethics, which, as we've seen, is now dominated by a 'law-concept of ethics'? Is today's legalistic approach to ethics entirely incorrect? No, not at all. 



The problem is that principles, norms, and values – the law conception of ethics so prevalent in AI ethics today – are the end of ethics rather than its core or its beginning. 


The word "end" has two meanings in this context. 

First, it is an end of ethics in the sense that it is the last destination of ethics, i.e., moulding laws, choices, behaviors, and acts in ways that are consistent with society's ideals. 

Ethics may be defined as the creation of principles (as in the AI HLEG criteria) or the application of ethical principles, values, or standards to particular situations. 

This operationalization of ethical standards can be observed, for example, in the ethics review procedure of the European Commission's research funding program, or in ethics impact assessments, which examine how a new technique or technology could affect ethical norms and values. 

These are unquestionably worthwhile endeavors that have a beneficial influence on society and people. 


Ethics, as the development of principles, is also useful in shaping policies and regulatory frameworks. 


The AI HLEG guidelines have heavily influenced current policy and legislative developments at the EU level, such as the European Commission's "White Paper on Artificial Intelligence" (February 2020) and the European Parliament's proposed "Framework of ethical aspects of artificial intelligence, robotics, and related technologies" (April 2020). 

Ethics clearly lays forth the rights and wrongs, as well as what should be done and what should be avoided. 

It is important to recall, however, that ethics as ethical principles is also an end of ethics in another sense: where it comes to a halt, where thought is paused, and where this never-ending attention ceases. 

As a result, when ethics is reduced to a collection of principles, norms, or criteria, it has reached its end. 

There is no further need for ethics once we have attained a sufficient degree of certainty and confidence about which judgments and acts are correct. 



Ethics is about navigating muddy and dangerous seas while being vigilant. 


In the realm of AI, for example, ethical standards do not, by themselves, assist in the practical exploration of difficult topics such as fairness in extremely complex socio-technical systems. 


These must be thoroughly studied to ensure that we are not putting in place systems that violate deeply held norms and beliefs. 

Ethics is made worthless without a continual process of challenging what is or may be clear, of probing behind what seems to be resolved, and of keeping this inquiry alive. 

As a result, the settling of ethics into established norms and principles marks the end of ethics. 

It is vital to keep ethics nimble and alive in light of AI's profound, huge, and broad influence on society. 

The ongoing, ever-renewed process of examining the world and the glasses through which we experience it – intentionally, consistently, and iteratively – is critical to AI ethics.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


AI ethics, law of AI, regulation of AI, ethics washing, EU HLEG on AI, ethical principles







Further Reading:



  • Anscombe, GEM (1958) Modern moral philosophy. Philosophy 33(124): 1–19.
  • Boddington, P (2017) Towards a Code of Ethics for Artificial Intelligence. Cham: Springer.
  • European Committee for Standardization (2017) CEN Workshop Agreement: Ethics assessment for research and innovation – Part 2: Ethical impact assessment framework (by the SATORI project). Available at: https://satoriproject.eu/media/CWA17145-23d2017 .
  • European Parliament JURI (April 2020) Framework of ethical aspects of artificial intelligence, robotics and related technologies, draft report (2020/2012(INL)). Available at: https://oeil.secure.europarl.europa.eu/oeil/popups/ficheprocedure.do?lang=&reference=2020/2012 .
  • Floridi, L (2019) Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology 32: 185–193.
  • Gilligan, C (1982) In a Different Voice: Psychological Theory and Women’s Development. Cambridge: Harvard University Press.
  • Greene, D, Hoffmann, A, Stark, L (2019) Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In: Proceedings of the 52nd Hawaii International Conference on System Sciences, Maui, Hawaii, 2019, pp. 2122–2131.
  • Hagendorff, T (2020) The ethics of AI ethics: An evaluation of guidelines. Minds and Machines 30: 99–120.
  • High-Level Expert Group on Artificial Intelligence (2019) Ethics guidelines for trustworthy AI. Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
  • Jansen, P, Brey, P, Fox, A, Maas, J, Hillas, B, Wagner, N, Smith, P, Oluoch, I, Lamers, L, van Gein, H, Resseguier, A, Rodrigues, R, Wright, D, Douglas, D (2019) Ethical analysis of AI and robotics technologies. SIENNA D4.4, August. Available at: https://www.sienna-project.eu/digitalAssets/801/c_801912-l_1-k_d4.4_ethical-analysis–ai-and-r–with-acknowledgements.pdf
  • Jobin, A, Ienca, M, Vayena, E (2019) The global landscape of AI ethics guidelines. Nature Machine Intelligence 1(9): 389–399.
  • Klöver, C, Fanta, A (2019) No red lines: Industry defuses ethics guidelines for artificial intelligence. Available at: https://algorithmwatch.org/en/industry-defuses-ethics-guidelines-for-artificial-intelligence/
  • Laugier, S (2013) The will to see: Ethics and moral perception of sense. Graduate Faculty Philosophy Journal 34(2): 263–281.
  • López, JJ, Lunau, J (2012) ELSIfication in Canada: Legal modes of reasoning. Science as Culture 21(1): 77–99.
  • Mittelstadt, B (2019) Principles alone cannot guarantee ethical AI. Nature Machine Intelligence 1: 501–507.
  • Ochigame, R (2019) The invention of “Ethical AI”: How big tech manipulates academia to avoid regulation. The Intercept. Available at: https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/?comments=1
  • Rodrigues, R, Rességuier, A (2019) The underdog in the AI ethical and legal debate: Human autonomy. In: Ethics Dialogues. Available at: https://www.ethicsdialogues.eu/2019/06/12/the-underdog-in-the-ai-ethical-and-legal-debate-human-autonomy/
  • Tronto, J (1993) Moral Boundaries: A Political Argument for an Ethic of Care. New York: Routledge.
  • Wagner, B (2018) Ethics as an escape from regulation: From ethics-washing to ethics-shopping. In: Bayamlioglu, E, Baraliuc, I, Janssens, L, et al. (eds) Being Profiled: Cogitas Ergo Sum: 10 Years of Profiling the European Citizen. Amsterdam: Amsterdam University Press, pp. 84–89.



What Is Artificial General Intelligence?



Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the AGI system to solve problems when presented with new tasks. 

In other words, it's AI's capacity to learn similarly to humans.



Strong AI, full AI, and general intelligent action are some names for it. 

The phrase "strong AI," however, is only used in few academic publications to refer to computer systems that are sentient or aware. 

These definitions may change since specialists from many disciplines see human intelligence from various angles. 

For instance, computer scientists often characterize human intelligence as the capacity to accomplish objectives. 

On the other hand, general intelligence is defined by psychologists in terms of survival or adaptation.

Weak or narrow AI, in contrast to strong AI, is made up of programs created to address a single issue and lacks awareness since it is not meant to have broad cognitive capacities. 

Autonomous cars and IBM's Watson supercomputer are two examples. 

Nevertheless, AGI is defined in computer science as an intelligent system having full or comprehensive knowledge as well as cognitive computing skills.



As of right now, there are no real AGI systems; they are still the stuff of science fiction. 

The long-term objective of these systems is to perform as well as humans do. 

However, due to AGI's superior capacity to acquire and analyze massive amounts of data at a far faster rate than the human mind, it may be possible for AGI to be more intelligent than humans.



Artificial intelligence (AI) is now capable of carrying out a wide range of functions, including providing tailored suggestions based on prior web searches. 

It can also recognize various objects for autonomous cars to avoid, detect malignant cells during medical inspections, and serve as the brain of home automation systems. 

It may likewise be utilized to find potentially habitable planets, act as an intelligent assistant, take charge of security, and more.



Naturally, AGI would go far beyond such capacities, and some scientists are concerned that this may result in a dystopian future. 

Elon Musk has said that sentient AI would be more hazardous than nuclear weapons, while Stephen Hawking warned against its creation, fearing it would see humanity as a possible threat and act accordingly.


Despite concerns, most scientists agree that genuine AGI is decades or perhaps centuries away from being developed and must first meet a number of requirements (which are always changing) in order to be achieved. 

These include the capacity for reasoning, strategy, puzzle-solving, and making decisions in the face of ambiguity. 



Additionally, it must be able to plan, learn, and communicate in natural language, as well as represent information, including common sense. 

AGI must also have the capacity to perceive (hear, see, etc.) and to act, for example by moving objects or moving from place to place to explore. 



How far along are we in the process of developing artificial general intelligence, and who is involved?

According to a 2020 survey by the Global Catastrophic Risk Institute (GCRI), academic institutions, businesses, and various governmental agencies are presently working on 72 recognized AGI R&D projects. 



According to the survey, today's projects are often smaller, more geographically diversified, less open-source, more focused on humanitarian aims than academic ones, and more concentrated in private firms than the projects surveyed in 2017. 

The comparison also reveals a decline in projects with academic affiliations, an increase in projects sponsored by corporations, a rise in projects with a humanitarian emphasis, a decline in programs with ties to the military, and a decline in US-based initiatives.


Governments and military organizations currently play only small roles in AGI R&D, and the military initiatives that do exist are focused solely on fundamental research. 

Recent programs, however, appear more varied; they include corporate projects that engage with AGI safety and pursue humanitarian end goals. 

They also include small private enterprises with a variety of objectives, as well as academic programs concerned less with AGI safety than with the advancement of knowledge.

One of the most well-known organizations working on AGI is Carnegie Mellon University, which has a project called ACT-R that aims to create a generic cognitive architecture based on the basic cognitive and perceptual functions that support the human mind. 

The project may be thought of as a method of describing how the brain is structured such that different processing modules can result in cognition.


Another pioneering organization testing the limits of AGI is Microsoft Research AI, which has carried out a number of research initiatives, including developing a data set to counter prejudice for machine-learning models. 

The company is also investigating ways to advance ethical AI, create a responsible AI standard, and develop AI strategies and evaluations within a framework that emphasizes the advancement of mankind.


The person behind the well-known video game franchises Commander Keen and Doom has launched yet another intriguing endeavor. 

Keen Technologies, John Carmack's most recent business, is an AGI development company that has already raised $20 million in funding from former GitHub CEO Nat Friedman and Cue founder Daniel Gross. 

Carmack is one of the AGI optimists who believe it will ultimately help mankind and result in the development of an AI mind that acts like a human, which might be used as a universal remote worker.


So what does AGI's future hold? 

The majority of specialists doubt that AGI will ever be developed, and some believe that the urge to create artificial intelligence comparable to humans will eventually fade. 

Others are working to develop it so that everyone will benefit.

For now, the creation of AGI remains in the planning stages, and little progress is anticipated in the coming decades. 

Nevertheless, throughout history, scientists have debated whether developing technologies with the potential to change people's lives will benefit society as a whole or endanger it. 

Such debates arose before the invention of the automobile, during the development of AC electricity, and when the atomic bomb was still only a theory.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.

Be sure to refer to the complete & active AI Terms Glossary here.


Artificial Intelligence - Emotion Recognition And Emotional Intelligence.





A group of academics released a meta-analysis of studies in 2019 examining whether a person's emotional state can be determined from their facial movements. 

They came to the conclusion that there is no evidence that emotional state can be predicted from expression, regardless of whether the assessment is made by a person or by technology. 


The coauthors noted, "[Facial expressions] in question are not 'fingerprints' or diagnostic displays that dependably and explicitly convey distinct emotional states independent of context, person, or culture."


  "It's impossible to deduce pleasure from a grin, anger from a scowl, or grief from a frown with certainty." 

Alan Cowen might dispute this statement. An ex-Google scientist, he is the founder of Hume AI, a new research lab and "empathetic AI" firm coming out of stealth today. 


Hume claims to have created datasets and models that "react beneficially to [human] emotion signals," allowing clients ranging from huge tech firms to startups to recognize emotions based on a person's visual, vocal, and spoken expressions. 

"When I first entered the area of emotion science, the majority of researchers were focusing on a small number of posed emotional expressions in the lab. 

Cowen told, "I wanted to apply data science to study how individuals genuinely express emotion out in the world, spanning ethnicities and cultures." 

"I uncovered a new universe of nuanced and complicated emotional behaviors that no one had ever recorded before using new computational approaches, and I was quickly publishing in the top journals." That's when businesses started contacting me." 

Hume, which has 10 workers and just secured $5 million in investment, claims to train its emotion-recognizing algorithms using "huge, experimentally-controlled, culturally varied" datasets from individuals throughout North America, Africa, Asia, and South America. 

Regardless of the data's representativeness, some experts doubt the premise that emotion-detecting algorithms have a scientific base. 




"The kindest view I have is that there are some really well-intentioned folks who are naive enough that... the issue they're attempting to cure is caused by technology," 

~ Os Keyes, an AI ethics scientist at the University of Washington. 




"Their first offering raises severe ethical concerns... It's evident that they aren't addressing the topic as a problem to be addressed, interacting deeply with it, and contemplating the potential that they aren't the first to conceive of it." 

HireVue, Entropik Technology, Emteq, Neurodata Labs, Nielsen-owned Innerscope, Realeyes, and Eyeris are among the businesses in the developing "emotional AI" sector. 

Entropik says that its technology can interpret emotions "through facial expressions, eye gazing, speech tone, and brainwaves," which it sells to companies wishing to track the effectiveness of their marketing efforts. 

Neurodata created software that the Russian bank Rosbank uses to assess the emotional state of clients calling its customer support centers. 



Emotion AI is being funded by more than just startups. 


Apple bought Emotient, a San Diego company that develops AI systems to assess face emotions, in 2016. 

When Amazon's Alexa senses irritation in a user's voice, it apologizes and asks for clarification. 

Nuance, a speech recognition firm that Microsoft bought in April 2021, has shown off a device for automobiles that assesses driver emotions based on facial clues. 

In May, Swedish business Smart Eye bought Affectiva, an MIT Media Lab spin-off that claimed it could identify rage or dissatisfaction in speech in 1.2 seconds. 


According to Markets & Markets, the emotion AI market is expected to almost double in size from $19 billion in 2020 to $37.1 billion in 2026. 



Hundreds of millions of dollars have been invested in firms like Affectiva, Realeyes, and Hume by venture investors eager to get in on the first floor. 


According to the Financial Times, the technology is being used by film companies such as Disney and 20th Century Fox to gauge public response to new series and films. 

Meanwhile, marketing organizations have been putting the technology to the test for customers like Coca-Cola and Intel to examine how audiences react to commercials. 

The difficulty is that there are few – if any – universal indicators of emotion, which calls into doubt the accuracy of emotion AI. 

The bulk of emotion AI businesses are based on psychologist Paul Ekman's seven basic emotions (joy, sorrow, surprise, fear, anger, disgust, and contempt), which he introduced in the early 1970s. 
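To make this concrete, below is a minimal, illustrative sketch (in PyTorch) of the kind of classifier this approach yields: a small convolutional network that maps a cropped face image to a probability distribution over Ekman's seven categories. The architecture, input size, and label set are assumptions for demonstration only, not any vendor's actual system.

```python
import torch
import torch.nn as nn

# Ekman's seven basic emotions, the label set most emotion AI products assume.
EKMAN_LABELS = ["joy", "sorrow", "surprise", "fear", "anger", "disgust", "contempt"]

class TinyEmotionCNN(nn.Module):
    """Toy classifier: 48x48 grayscale face crop -> scores over seven emotions."""
    def __init__(self, num_classes: int = len(EKMAN_LABELS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dims to one vector per image
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = TinyEmotionCNN()
face = torch.randn(1, 1, 48, 48)      # stand-in for a preprocessed face crop
probs = model(face).softmax(dim=-1)   # "confidence" assigned to each emotion label
print(dict(zip(EKMAN_LABELS, probs.squeeze().tolist())))
```

Note that the softmax forces the model to distribute confidence across the seven labels even when a face expresses none of them; the pipeline takes Ekman's taxonomy as given, which is precisely the assumption the research discussed below calls into question.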

However, subsequent research has validated the common-sense observation that individuals from diverse backgrounds express their emotions in quite different ways. 



Context, conditioning, relationality, and culture all have an impact on how individuals react to situations. 


For example, scowling, which is commonly linked with anger, has been observed on the faces of angry persons less than 30% of the time. 

In Malaysia, the supposedly universal expression of fear is instead read as signaling a threat or anger. 


  • Later, Ekman demonstrated that there are disparities in how American and Japanese pupils respond to violent films, with Japanese students adopting "a whole distinct set of emotions" if another person is around, especially an authority figure. 
  • Gender and racial biases in face analysis algorithms have been extensively established, and are caused by imbalances in the datasets used to train the algorithm. 



In general, an AI system that has been trained on photographs of lighter-skinned people may struggle with skin tones that are unfamiliar to it. 
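One standard way to surface this kind of bias is a disaggregated evaluation: score the model separately for each demographic subgroup and compare the results. Below is a minimal sketch of such an audit; the labels, predictions, and group annotations are hypothetical placeholders.

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Compute classification accuracy separately for each annotated subgroup."""
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical audit data: ground-truth labels, model outputs, and subgroup tags.
y_true = ["joy", "anger", "joy", "fear", "joy", "anger"]
y_pred = ["joy", "anger", "anger", "fear", "sorrow", "anger"]
groups = ["lighter", "lighter", "darker", "darker", "darker", "lighter"]

print(accuracy_by_group(y_true, y_pred, groups))
# {'lighter': 1.0, 'darker': 0.333...}; a gap like this signals skewed training data
```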


This isn't the only kind of prejudice that exists. 

Retorio, an AI hiring tool, was observed to react differently to the same applicant depending on whether they wore glasses or a headscarf. 


  • Researchers from MIT, the Universitat Oberta de Catalunya in Barcelona, and the Universidad Autonoma de Madrid revealed in a 2020 study that algorithms may become biased toward specific facial expressions, such as smiling, lowering identification accuracy. 
  • Researchers from the University of Cambridge and the Middle East Technical University discovered that at least one of the public datasets often used to train emotion recognition systems was contaminated. 



Such training datasets typically contain substantially more Caucasian faces than Asian or Black ones. 


  • Recent studies have shown that major vendors' emotional analysis programs assign more negative feelings to Black men's faces than to white men's, highlighting the repercussions. 
  • Persons with impairments, conditions like autism, and people who speak different languages and dialects, such as African-American Vernacular English (AAVE), all have different voices. 
  • A native French speaker completing an English survey could hesitate or enunciate a word with considerable trepidation, which an AI system might misinterpret as an emotion signal. 



Despite the faults in the technology, some businesses and governments are eager to use emotion AI to make high-stakes judgments. 


Employers use it to assess prospective workers by giving them a score based on their empathy or emotional intelligence. 

It's being used in schools to track pupils' participation in class — and even when they're doing homework at home. 

Emotion AI has also been tried at border checkpoints in the United States, Hungary, Latvia, and Greece to detect "risk persons." 

To reduce prejudice, Hume claims that "randomized studies" are used to collect "a vast variety" of facial and voice expressions from "people from a wide range of backgrounds." 

According to Cowen, the company has gathered over 1.1 million images and videos of facial expressions from over 30,000 people in the United States, China, Venezuela, India, South Africa, and Ethiopia, as well as over 900,000 audio recordings of people voicing their emotions labeled with people's self-reported emotional experiences. 
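To illustrate how labeled recordings like these typically become training data for a vocal-expression model, here is a minimal feature-extraction sketch using the open-source librosa library. The file name and self-reported label are hypothetical, and production pipelines are far more elaborate than this.

```python
import numpy as np
import librosa

def vocal_emotion_features(path: str) -> np.ndarray:
    """Summarize a speech clip as a fixed-length vector of MFCC statistics."""
    y, sr = librosa.load(path, sr=16000)                 # mono audio at 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape: (13, n_frames)
    # Mean and standard deviation over time yield one fixed-size vector per clip,
    # which can then be paired with the speaker's self-reported emotion label.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

features = vocal_emotion_features("clip_0001.wav")  # hypothetical recording
label = "amusement"                                 # hypothetical self-reported label
print(features.shape, label)                        # (26,) amusement
```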

Hume's dataset is smaller than Affectiva's, which claimed at the time to be the biggest of its sort, with over 10 million people's expressions from 87 countries. 

Cowen, on the other hand, says that Hume's data can be used to train models to assess "an exceptionally broad spectrum of emotions," including over 28 facial expressions and 25 verbal expressions. 


"As demand for our empathetic AI models has grown, we've been prepared to provide access to them at a large scale." 


As a result, we'll be establishing a developer platform that will provide developers and researchers API documentation and a playground," Hume added. 

"We're also gathering data and developing training models for social interaction and conversational data, body language, and multi-modal expressions, which we expect will broaden our use cases and client base." 

Beyond Mursion, Hume claims it's collaborating with Hoomano, a firm that develops software for "social robots" like Softbank Robotics' Pepper, to build digital assistants that make better suggestions by taking into consideration the emotions of users. 

Hume also claims to have collaborated with Mount Sinai and University of California, San Francisco experts to investigate whether its models can detect depression and schizophrenia symptoms "that no prior methodologies have been able to capture." 


"A person's emotions have a big impact on their conduct, including what they pay attention to and click on." 


As a result, 'emotion AI' is already present in AI technologies such as search engines, social media algorithms, and recommendation systems. It's impossible to avoid. 

As a result, decision-makers must be concerned about how these technologies interpret and react to emotional signals, influencing their users' well-being in ways that their inventors are unaware of." Cowen remarked. 

"Hume AI provides the tools required to guarantee that technologies are built to increase the well-being of their users. There's no way of understanding how an AI system is interpreting these signals and altering people's emotions without means to assess them, and there's no way of designing the system to do so in a way that is compatible with people's well-being." 


Leaving aside the thorny issue of using artificial intelligence to diagnose mental disorders, Mike Cook, an AI researcher at Queen Mary University of London, believes the company's messaging is "performative" and its language questionable. 


"[T]hey've obviously gone to tremendous lengths to speak about diversity and inclusion and other such things, and I'm not going to whine about people creating datasets with greater geographic variety." "However, it seems a little like it was massaged by a PR person who knows how to make your organization appear to care," he remarked. 

Cowen claims that by forming The Hume Initiative, a nonprofit "committed to governing empathetic AI," Hume is taking a more rigorous look at the uses of emotion AI than rivals. 

The Hume Initiative, whose ethical committee includes Taniya Mishra, former director of AI at Affectiva, has established regulatory standards that the company claims it would follow when commercializing its innovations. 


The Hume Initiative's principles forbid uses like manipulation, fraud, "optimizing for diminished well-being," and "unbounded" emotion AI. 


It also establishes limitations for use cases such as platforms and interfaces, health and development, and education, such as mandating educators to utilize the output of an emotion AI model to provide constructive — but non-evaluative — input. 

Danielle Krettek Cobb, the creator of the Google Empathy Lab, Dacher Keltner, a professor of psychology at UC Berkeley, and Ben Bland, the head of the IEEE group establishing standards for emotion AI, are coauthors of the recommendations. 

"The Hume Initiative started by compiling a list of all known applications for empathetic AI. 

After that, they voted on the first set of specific ethical principles. 


The resultant principles are tangible and enforceable, unlike any prior attempt to AI ethics. 


They describe how empathetic AI may be used to increase mankind's finest traits of belonging, compassion, and well-being, as well as how it might be used to expose humanity to intolerable dangers," Cowen remarked. 

"Those who use Hume AI's data or AI models must agree to use them solely in accordance with The Hume Initiative's ethical rules, guaranteeing that any applications using our technology are intended to promote people's well-being." Companies have boasted about their internal AI ethical initiatives in the past, only to have such efforts fall by the wayside – or prove to be performative and ineffective. 


Google's AI ethics board was notoriously disbanded barely one week after it was established. 


Meta's (previously Facebook's) AI ethics unit has also been labeled as essentially useless in reports. 

It's referred to as "ethical washing" by some. 

Simply said, ethical washing is the practice of a firm inventing or inflating its interest in fair AI systems that benefit everyone. 



When a firm touts "AI for good" activities on the one hand while selling surveillance technology to governments and companies on the other, this is a classic example for tech titans. 


The coauthors of a report published by Trilateral Research, a London-based technology consultancy, claim that ethical principles and norms do not, by themselves, assist practitioners grapple with difficult concerns like fairness in emotion AI. 

They argue that these should be thoroughly explored to ensure that businesses do not deploy systems that are incompatible with societal norms and values. 


"Ethics is made ineffectual without a continual process of challenging what is or may be clear, of probing behind what seems to be resolved, of keeping this interrogation alive," they said. 


"As a result, the establishment of ethics into established norms and principles comes to an end." Cook identifies problems in The Hume Initiative's stated rules, especially in its use of ambiguous terminology. 

"A lot of the standards seem performatively written — if you believe manipulating the user is wrong, you'll read the guidelines and think to yourself, 'Yes, I won't do that.' And if you don't care, you'll read the rules and say, 'Yes, I can justify this,'" he explained. 

Cowen believes Hume is "open[ing] the door to optimize AI for human and societal well-being" rather than short-term corporate objectives like user engagement. 

"We don't have any actual competition since the other AI models for measuring emotional signals are so restricted." They concentrate on a small number of facial expressions, neglect the voice entirely, and have major demographic biases. 



These biases are often weaved into the data used to train AI systems. 


Furthermore, no other business has established explicit ethical criteria for the usage of empathetic AI," he said. 

"We're building a platform that will consolidate our model deployment and provide customers greater choice over how their data is utilized." 

Regardless of whether or not rules exist, politicians have already started to limit the use of emotion AI systems. 



The New York City Council recently passed a regulation requiring companies to notify candidates when they are being evaluated by AI, and to audit the algorithms once a year. 


Candidates in Illinois must give their consent before video footage is analyzed, while Maryland has outlawed the use of facial analysis entirely. 

Some firms have voluntarily ceased supplying emotion AI services or erected barriers around them. 

HireVue said that its algorithms will no longer use visual analysis. 

Microsoft's sentiment-detecting Face API, which once claimed it could detect emotions across cultures, now says in a caveat that "facial expressions alone do not reflect people's interior moods."

The Hume Initiative, according to Cook, "developed some ethical papers so people don't worry about what [Hume] is doing." 

"Perhaps the most serious problem I have is that I have no idea what they're doing." "Apart from whatever datasets they created, the part that's public doesn't appear to have anything on it," Cook added. 



Emotion recognition using AI. 


Emotion detection is a hot new field, with a slew of entrepreneurs marketing devices that promise to read people's inner emotional states and AI academics attempting to improve computers' capacity to do so. 

Voice analysis, body language analysis, gait analysis, eye tracking, and remote assessment of physiological signs such as pulse and respiration rates are all used to do this. 

The majority of the time, though, it's done by analyzing facial expressions. 

However, recent research reveals that these products are built on a foundation of intellectual sand. 


The main issue is whether human emotions can be reliably inferred by looking at faces. 


"Whether facial expressions of emotion are universal, whether you can look at someone's face and read emotion in their face," Lisa Feldman Barrett, a professor of psychology at Northeastern University and an expert on emotion, told me, "is a topic of great contention that scientists have been debating for at least 100 years." 


Despite this extensive history, she said that no full review of all emotion research conducted over the previous century had ever been completed. 


So, a few years ago, the Association for Psychological Science gathered five eminent scientists from opposing viewpoints to undertake a "systematic evaluation of the data challenging the popular opinion" that emotion can be consistently predicted by outward facial movements. 

According to Barrett, who was one of the five scientists, they "had extremely divergent theoretical ideas." "We came to the project with very different assumptions about what the data would reveal, and it was our responsibility to see if we could come to an agreement on what the data revealed and how best to interpret it. We weren't sure we could do it since it's such a divisive issue." The process, which was supposed to take a few months, took two years. 

Nonetheless, after evaluating over 1,000 scientific studies in the psychology literature, these experts arrived at a unified conclusion: the claim that "a person's emotional state may be simply determined from his or her facial expressions" has no scientific basis. 


According to the researchers, there are three common misconceptions "about how emotions are communicated and interpreted in facial movements." 


The relationship between facial expressions and emotions is neither reliable (the same emotions are not always exhibited in the same manner), nor specific (the same facial movements can express more than one emotional state), nor generalizable (the effects of different cultures and contexts have not been sufficiently documented). 

"A scowling face may or may not be an indication of rage," Barrett said to me. 

"People scowl in rage at times, and at other moments you could grin, weep, or simply seethe with a neutral look. People grimace at other times as well, such as when they're perplexed, concentrating, or having gas." 

These results do not suggest that individuals move their faces at random or that facial expressions have no psychological significance, according to the researchers. 

Instead, they show that the facial configurations in question aren't "fingerprints" or diagnostic displays that consistently and explicitly convey various emotional states independent of context, person, or culture. 

It's impossible to deduce pleasure from a grin, anger from a scowl, or sorrow from a frown, as most of today's technology seeks to do when applying what are incorrectly assumed to be scientific principles. 

This work is relevant because an entire industry of automated, putative emotion-reading devices is rapidly growing. 


The market for emotion detection software is expected to reach at least $3.8 billion by 2025, according to our recent research on "Robot Surveillance." 


Emotion detection (also known as "affect recognition" or "affective computing") is already being used in devices for marketing, robotics, driving safety, and audio "aggression detectors," as we recently reported. 

Emotion identification is built on the same fundamental concept as polygraphs, or "lie detectors": that a person's internal mental state can be accurately associated with physical bodily motions and situations. 

It cannot – and this is especially true of facial muscles. 

It stands to reason that what is true of facial muscles would also be true of all other techniques for detecting emotion, such as body language and gait. 

However, the assumption that such mind reading is conceivable might cause serious damage. 


A jury's cultural misunderstanding of what a foreign defendant's facial expressions mean, for example, can lead to a death sentence rather than a prison sentence. 


When such a mindset is translated into automated systems, it may lead to further problems. 

For example, a "smart" body camera that incorrectly informs a police officer that someone is hostile and angry might lead to an unnecessary shooting. 


"There is no automatic emotion identification. 

The top algorithms can confront a face — full frontal, no occlusions, optimal illumination — and are excellent at recognizing facial movements. 

They aren't able, however, to deduce what those facial gestures signify."


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See Also: 


AI Emotions, AI Emotion Recognition, AI Emotional Intelligence, Surveillance Technologies, Privacy and Technology, AI Bias, Human Rights.










Artificial Intelligence - Who Is Sherry Turkle?

 


 

 

Sherry Turkle (1948–) has a background in sociology and psychology, and her work focuses on human-technology interaction.

While her study in the 1980s focused on how technology affects people's thinking, her work in the 2000s has become more critical of how technology is utilized at the expense of building and maintaining meaningful interpersonal connections.



She has employed artificial intelligence in products like children's toys and pets for the elderly to highlight what people lose out on when interacting with such things.


Turkle has been at the vanguard of AI breakthroughs as a professor at the Massachusetts Institute of Technology (MIT) and the creator of the MIT Initiative on Technology and the Self.

In Life on the Screen: Identity in the Age of the Internet (1995), she highlights a conceptual change in the understanding of AI that occurred between the 1960s and 1980s, substantially changing the way humans connect to and interact with AI.



She claims that early AI paradigms depended on extensive preprogramming and employed a rule-based concept of intelligence.


However, this viewpoint has given way to one that considers intelligence to be emergent.

This emergent paradigm, which became the recognized mainstream view by 1990, claims that AI arises from a much simpler set of learning algorithms.

The emergent method, according to Turkle, aims to emulate the way the human brain functions, assisting in the breaking down of barriers between computers and nature, and more generally between the natural and the artificial.

In summary, an emergent approach to AI allows people to connect to the technology more easily, even thinking of AI-based programs and gadgets as children.



The rising acceptance of the emergent paradigm of AI, and the enhanced relatability it heralds, represents a significant turning point not just for the field of AI but also for Turkle's study and writing on the subject.


Turkle started to employ ethnographic research techniques to study the relationship between humans and their gadgets in two edited collections, Evocative Objects: Things We Think With (2007) and The Inner History of Devices (2008).


She emphasized in her book The Inner History of Devices that her intimate ethnography, or the ability to "listen with a third ear," is required to go past the advertising-based clichés that are often employed when addressing technology.


This method comprises setting up time for silent meditation so that participants may think thoroughly about their interactions with their equipment.


Turkle used similar intimate ethnographic approaches in her second major book, Alone Together: Why We Expect More from Technology and Less from Each Other (2011), to argue that the increasing connection between people and the technology they use is harmful.

These issues are connected to the increased usage of social media as a form of communication, as well as the continuous degree of familiarity and relatability with technology gadgets, which stems from the emerging AI paradigm that has become practically omnipresent.

She traced the origins of the dilemma back to early pioneers in the field of cybernetics, citing, for example, Norbert Wiener's speculations on the idea of transmitting a human person across a telegraph line in his book God & Golem, Inc. (1964).

Because it reduces both people and technology to information, this approach to cybernetic thinking blurs the barriers between them.



In terms of AI, this implies that it doesn't matter whether the machines with which we interact are really intelligent.


Turkle claims that by engaging with and caring for these technologies, we may deceive ourselves into feeling we are in a relationship, causing us to treat them as if they were sentient.

In a 2006 presentation titled "Artificial Intelligence at 50: From Building Intelligence to Nurturing Sociabilities" at the Dartmouth Artificial Intelligence Conference, she recognized this trend.

She identified the 1997 Tamagotchi, the 1998 Furby, and the 2000 My Real Baby as early versions of what she refers to as relational artifacts, which are more broadly referred to as social machines in the literature.

The main difference between these devices and previous children's toys is that these devices come pre-animated and ready for a relationship, whereas previous children's toys required children to project a relationship onto them.

Turkle argues that this change is about our human weaknesses as much as it is about computer capabilities.

In other words, just caring for an item increases the likelihood of not only seeing it as intelligent but also feeling a connection to it.

This sense of connection is more relevant to the typical person engaging with these technologies than abstract philosophic considerations concerning the nature of their intelligence.



Turkle delves more into the ramifications of people engaging with AI-based technologies in both Alone Together and Reclaiming Conversation: The Power of Talk in a Digital Age (2015).


She provides the example of Adam in Alone Together, who relishes the admiration of the AI bots he rules over in the game Civilization.

Adam appreciates the fact that he is able to create something fresh when playing.

Turkle, on the other hand, is skeptical of this interaction, stating that Adam's playing isn't actual creation, but rather the sensation of creation, and that it's problematic since it lacks meaningful pressure or danger.

In Reclaiming Conversation, she expands on this point, suggesting that artificial social partners merely provide a perception of camaraderie.

This is important because of the value of human connection and what may be lost in relationships that simply provide a sensation or perception of friendship rather than true friendship.

Turkle believes that this transition is critical.


She claims that although connections with AI-enabledtechnologies may have certain advantages, they pale in contrast to what is missing: 

  • the complete complexity and inherent contradictions that define what it is to be human.


A person's connection with an AI-enabled technology is not as intricate as one's interaction with other individuals.


Turkle claims that as individuals have become more used to and dependent on technology gadgets, the definition of friendship has evolved.


  • People's expectations for companionship have been simplified as a result of this transformation, and the advantages that one wants to obtain from partnerships have been reduced.
  • People now tend to associate friendship only with the concept of interaction, ignoring the more nuanced sentiments and arguments that are typical in partnerships.
  • By engaging with gadgets, one may form a relationship with them.
  • Conversations between humans have become merely transactional as human communication has shifted away from face-to-face conversation and toward interaction mediated by devices. 

In other words, the most that can be anticipated is engagement.



Turkle, who has a background in psychoanalysis, claims that this kind of transactional communication allows users to spend less time learning to view the world through the eyes of another person, which is a crucial ability for empathy.


Turkle argues we are in a robotic period in which people yearn for, and in some circumstances prefer, AI-based robotic companionship over that of other humans, drawing together these numerous streams of argument.

For example, some people enjoy conversing with their iPhone's Siri virtual assistant because they aren't afraid of being judged by it, as evidenced by a series of Siri commercials featuring celebrities talking to their phones.

Turkle has a problem with this because these devices can only respond as if they understand what is being said.


AI-based gadgets, on the other hand, are confined to comprehending the literal meanings of data stored on the device.

They can decipher the contents of phone calendars and emails, but they have no idea what any of this data means to the user.

There is no discernible difference between a calendar appointment for car maintenance and one for chemotherapy for an AI-based device.

A person may lose sight of what it is to have an authentic dialogue with another human when entangled in a variety of these robotic connections with a growing number of technologies.


While Reclaiming Conversation documents deteriorating conversation skills and decreasing empathy, it ultimately ends on a positive note.

Because people are becoming increasingly dissatisfied with their relationships, there may be a chance for face-to-face human communication to reclaim its vital role.


Turkle's ideas focus on reducing the amount of time people spend on their phones, but AI's involvement in this interaction is equally critical.


  • Users must accept that their virtual assistant connections will never be able to replace face-to-face interactions.
  • This will necessitate being more deliberate in how one uses devices, prioritizing in-person interactions over the faster and easier interactions provided by AI-enabled devices.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.





What are Analog Space Missions? Analog space missions are a unique approach to space exploration, involving the simulation of extraterrestri...