
Biased Data Isn't the Only Source of AI Bias.

 





Eliminating prejudice in artificial intelligence will require addressing both human and systemic biases. 


Bias in AI systems is often seen as a purely technical issue, but the NIST report recognizes that human prejudices and systemic, institutional biases also play a role. 

As a step toward improving our ability to identify and manage the harmful effects of bias in artificial intelligence (AI) systems, researchers at the National Institute of Standards and Technology (NIST) recommend looking for the sources of these biases beyond the machine learning processes and data used to train AI software, to the broader societal factors that influence how technology is developed. 

The advice is at the heart of a new NIST article, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), which incorporates feedback from the public on a draft version issued last summer. 


The publication provides guidelines related to the AI Risk Management Framework that NIST is creating as part of a wider effort to facilitate the development of trustworthy and responsible AI. 


The key difference between the draft and final versions of the article, according to NIST's Reva Schwartz, is the increased focus on how bias presents itself not just in AI algorithms and the data used to train them, but also in the sociocultural environment in which AI systems are employed. 

"Context is crucial," said Schwartz, one of the report's authors and the principal investigator for AI bias. 

"AI systems don't work in a vacuum. They assist individuals in making choices that have a direct impact on the lives of others. If we want to design trustworthy AI systems, we must take into account all of the elements that might undermine public confidence in AI. Many of these variables extend beyond the technology itself to its consequences, as shown by the responses we got from a diverse group of individuals and organizations." 

NIST contributes to the research, standards, and data needed to fulfill artificial intelligence's (AI) full potential as a driver of American innovation across industries and sectors. 

NIST is working with the AI community to define the technological prerequisites for cultivating confidence in AI systems that are accurate and dependable, safe and secure, explainable, and bias-free. 


AI bias is harmful to humans. 


AI may make choices on whether or not a student is admitted to a school, approved for a bank loan, or accepted as a rental applicant. 

Machine learning software, for example, might be trained on a dataset that underrepresents a certain gender or ethnic group. 
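To illustrate, this kind of underrepresentation can be surfaced with a simple audit of group shares before training. The dataset, field name, and numbers below are hypothetical and are not drawn from the NIST report; this is a minimal sketch, not a complete fairness audit:

```python
from collections import Counter

def representation_report(records, group_key):
    """Compute each group's share of a dataset, to flag
    underrepresentation before a model is trained on it."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical training records for a loan-approval model.
training_data = [{"gender": "male"}] * 900 + [{"gender": "female"}] * 100

shares = representation_report(training_data, "gender")
print(shares)  # {'male': 0.9, 'female': 0.1}: one group is only 10% of the data
```

Group shares alone do not establish bias (base rates and sampling design matter), but a skew this large in a decision-making context is usually worth investigating.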

While these computational and statistical causes of bias remain relevant, the new NIST article emphasizes that they do not capture the whole story. 

For a more thorough understanding of bias, human and systemic biases, which figure prominently in the new edition, must also be taken into account. 

Institutions that operate in ways that disfavor specific social groups, such as discriminating against persons based on race, are examples of systemic biases. 

Human biases may relate to how people use data to fill in gaps, such as a person's neighborhood influencing how likely police are to consider them a criminal suspect. 

When human, institutional, and computational biases come together, they may create a dangerous cocktail – particularly when there is no specific direction for dealing with the hazards of deploying AI systems. 

"If we are to construct trustworthy AI systems, we must take into account all of the elements that might erode public faith in AI. Many of these considerations extend beyond the technology itself to the technology's consequences." ~ Reva Schwartz, principal investigator for AI bias

To address these concerns, the NIST authors propose a "socio-technical" approach to AI bias mitigation. 


This approach recognizes that AI acts in a wider social context — and that attempts to overcome the issue of bias just on a technological level would fall short. 


"When it comes to AI bias concerns, organizations sometimes gravitate to highly technical solutions," Schwartz added. 

"However, these techniques fall short of capturing the social effect of AI systems. The growth of artificial intelligence into many facets of public life necessitates broadening our perspective to include AI as part of the wider social system in which it functions." 

According to Schwartz, socio-technical approaches to AI are a developing field, and creating measuring tools that take these elements into account would need a diverse mix of disciplines and stakeholders. 

"It's critical to bring in specialists from a variety of sectors, not just engineering," she added, "and to listen to other organizations and communities about the implications of AI." 

Over the next several months, NIST will host a series of public workshops aimed at producing a technical report on AI bias and integrating it into the AI Risk Management Framework.


Visit the AI RMF workshop website for further information and to register.



A Method for Reducing Artificial Intelligence Bias Risk. 


The National Institute of Standards and Technology (NIST) is advancing an approach for identifying and managing biases in artificial intelligence (AI) — and is asking for the public's help in improving it — in an effort to combat the often pernicious effect of biases in AI that can harm people's lives and public trust in AI. 


A Proposal for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), a new publication from NIST, lays out the methodology. 


It's part of the agency's larger effort to encourage the development of trustworthy and responsible AI. 


NIST will accept public comments on the paper through September 10, 2021 (an extension of the initial deadline of August 5, 2021), and the authors will use the feedback to help shape the agenda of several collaborative virtual events NIST will organize in the coming months. 


This series of events aims to engage the stakeholder community and provide them the opportunity to contribute feedback and ideas on how to reduce the danger of bias in AI. 


"Managing the danger of bias in AI is an important aspect of establishing trustworthy AI systems, but the route to accomplishing this remains uncertain," said Reva Schwartz of the National Institute of Standards and Technology, who was one of the report's authors. 

"We intend to include the community in the development of voluntary, consensus-based norms for limiting AI bias and decreasing the likelihood of negative consequences." 



Bias in AI-based goods and systems is a critical but as yet poorly defined component of trustworthiness. 

This bias may be intentional or unintentional. 


NIST is working to get us closer to consensus on recognizing and quantifying bias in AI systems by organizing conversations and conducting research. 


Because AI can typically make sense of information faster and more reliably than humans, it has become a transformational technology. 

Everything from medical detection to digital assistants on our cellphones now uses AI. 

However, as AI's uses have grown, we've seen that its conclusions can be skewed by biases in the data it is given: data that partially or erroneously represents the real world. 

Furthermore, some AI systems are designed to simulate complicated notions that cannot be readily assessed or recorded by data, such as "criminality" or "employment appropriateness." 

Other criteria, such as where you live or how much education you have, are used as proxies for the notions these systems are attempting to mimic. 


The imperfect correlation of the proxy data with the original notion can result in undesirable or discriminatory AI outputs, such as wrongful arrests, or qualified applicants being erroneously refused employment or loans. 
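As a sketch of how such proxy-driven disparities can be quantified, the snippet below compares selection rates between two groups using the "four-fifths rule" common in US employment-discrimination analysis. The decision data and group labels are hypothetical, and this particular metric is illustrative rather than anything the NIST proposal prescribes:

```python
def selection_rate(decisions):
    """Fraction of positive decisions (1 = approved/hired)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one. Values
    below 0.8 are commonly read as evidence of adverse impact
    (the 'four-fifths rule')."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    low, high = min(ra, rb), max(ra, rb)
    return low / high if high else 1.0

# Hypothetical loan decisions grouped by a proxy feature such as
# neighborhood, which may correlate with a protected attribute.
neighborhood_x = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]  # 70% approved
neighborhood_y = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]  # 30% approved

ratio = disparate_impact_ratio(neighborhood_x, neighborhood_y)
print(round(ratio, 2))  # 0.43, well below the 0.8 threshold
```

A ratio this low does not by itself prove discrimination, but it flags the proxy feature as a candidate for the kind of closer socio-technical scrutiny the report calls for.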


The strategy the authors propose for managing bias involves a conscious effort to identify and address bias at each phase of an AI system's lifecycle, from early concept through design to release. 

The purpose is to bring together stakeholders from a variety of backgrounds, both within and outside the technology industry, in order to hear viewpoints that haven't been heard before. 

“We want to bring together the community of AI developers of course, but we also want to incorporate psychologists, sociologists, legal experts and individuals from disadvantaged communities,” said NIST’s Elham Tabassi, a member of the National AI Research Resource Task Force. 

"We'd want to hear from individuals who are affected by AI, both those who design AI systems and those who aren't." 


The NIST authors' preliminary research included a review of peer-reviewed publications, books, and popular news media, as well as industry reports and presentations. 


They found that bias can seep into AI systems at any stage of development, often in different ways depending on the AI's purpose and the social context in which it is used. 

"An AI tool is often built for one goal, but it is subsequently utilized in a variety of scenarios," Schwartz said. 

"Many AI applications have also been inadequately evaluated, if at all, in the environment for which they were designed. All these elements might cause bias to go undetected.” 

Because the team members acknowledge that they do not have all of the answers, Schwartz believes it is critical to get public comment, particularly from those who are not often involved in technical conversations. 


"We'd want to hear from individuals who are affected by AI, both those who design AI systems and those who aren't." ~ Elham Tabassi.


"We know bias exists throughout the AI lifespan," added Schwartz. 

"It would be risky to not know where your model is biased or to assume that there is none. The next stage is to figure out how to see it and deal with it."


Comments on the proposed method may be provided by downloading and completing the template form (in Excel format) and emailing it to ai-bias@list.nist.gov by Sept. 10, 2021 (extended from the initial deadline of Aug. 5, 2021). 

This website will be updated with further information on the joint event series.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read and learn more about Technology and Engineering here.

You may also want to read and learn more about Artificial Intelligence here.




Artificial Intelligence - Iterative AI Ethics In Complex Socio-Technical Systems

 



Title: The Need For Iterative And Evolving AI Ethics Processes And Frameworks To Ensure Relevant, Fair, And Ethical Scalable Complex Socio-Technical Systems.

Author: Jai Krishna Ponnappan




Ethics has strong fangs, but they are seldom bared in AI ethics today, so it is no surprise that AI ethics is criticized for lacking efficacy. 


This essay claims that the 'ethics' of present AI ethics is generally ineffective, trapped in an 'ethical principles' approach and hence especially vulnerable to manipulation, particularly by industry actors. 

Using ethics as a replacement for the law puts it at risk of being abused and misapplied. 

This severely restricts what ethics can accomplish, and it is a big setback for the AI field and its implications for people and society. 

This paper examines these dangers before focusing on the efficacy of ethics and the critical contribution they can – and should – provide to AI ethics right now. 



Ethics is a potent weapon. 


Unfortunately, we seldom wield it in AI ethics, so it is no surprise that AI ethics is dubbed "ineffective." 

This paper examines the different ethical procedures that have arisen in recent years in response to the widespread deployment and usage of AI in society, as well as the hazards that come with it. 

Lists of principles, ethical codes, suggestions, and guidelines are examples of these procedures. 


However, as many have shown, although these ethical innovations are exciting, they are also problematic: their usefulness has yet to be proven, and they are particularly susceptible to manipulation, notably by industry. 


This is a setback for AI, as it severely restricts what ethics may do for society and people. 

However, as this paper demonstrates, the problem isn't that ethics is meaningless (or ineffective) in the face of current AI deployment; rather, ethics is being utilized (or manipulated) in such a manner that it is made ineffectual for AI ethics. 

The paper starts by describing the current state of AI ethics: AI ethics is essentially principled, that is, it adheres to a 'law' view of ethics. 

It then demonstrates how this ethical approach fails to accomplish what it claims to do. 

The second section of this paper focuses on the true worth of ethics – its 'efficacy,' which we describe as the capacity to notice the new as it develops on a continuous basis. 



We explain how, in today's AI ethics, the ability to resist cognitive and perceptual inertia, which makes us inactive in the face of new advancements, is crucial. 


Finally, although we acknowledge that the legalistic approach to ethics is not entirely incorrect, we argue that it is the end of ethics, not its beginning, and that it ignores the most valuable and crucial components of ethics. 

In many stakeholder quarters, there are several ongoing conversations and activities on AI ethics (policy, academia, industry and even the media). This is something we can all be happy about. 


Policymakers (e.g., the European Commission and the European Parliament) and business, in particular, are concerned about doing things right in order to promote ethical and responsible AI research and deployment in society. 


It is now widely acknowledged that if AI is adopted without adequate attention and thought for its potentially detrimental effects on people, particular groups, and society as a whole, things might go horribly wrong (including, for example, bias and discrimination, injustice, privacy infringements, increase in surveillance, loss of autonomy, overdependency on technology, etc.). 

The focus then shifts to ethics, with the goal of ensuring that AI is implemented in a way that respects deeply held social values and norms, placing them at the center of responsible technology development and deployment (Hagendorff, 2020; Jobin et al., 2019). 

The 'Ethical guidelines for trustworthy AI,' established by the European Commission's High-Level Expert Group on AI in 2018, is one example of contemporary ethics efforts (High-Level Expert Group on Artificial Intelligence, 2019). 

However, the present use of the term "ethics" in the subject of AI ethics is questionable. 

Today's AI ethics is dominated by what British philosopher G.E.M. Anscombe refers to as a 'law conception of ethics,' i.e., a perspective on ethics that treats it as if it were a kind of law (Anscombe, 1958). 

It's customary to think of ethics as a "softer" version of the law (Jobin et al., 2019: 389). 


However, this is simply one approach to ethics, and it is problematic, as Anscombe has shown. It is problematic in at least two respects in terms of AI ethics. 

For starters, it's troublesome since it has the potential to be misapplied as a substitute for regulation (whether through law, policies or standards). 

Over the past several years, many authors have made this point (Article 19, 2019; Greene et al., 2019; Hagendorff, 2020; Jobin et al., 2019; Klöver and Fanta, 2019; Mittelstadt, 2019; Wagner, 2018). Wagner, for example, cites the case of a member of the Google DeepMind ethics team repeatedly asserting 'how ethically Google DeepMind was working,' while simultaneously dodging any accountability for the data security crisis at Google DeepMind, at the Conference on World Affairs 2018 (Wagner, 2018). 

'Ethical AI' discourse, according to Ochigame (2019), was "aligned strategically with a Silicon Valley campaign attempting to circumvent legally enforceable prohibitions of problematic technology." Ethics falls short in this regard because it lacks the instruments to enforce conformity. 


Ethics, according to Hagendorff, "lacks means to support its own normative assertions" (2020: 99). 


If ethics is about enforcing rules, then it is true that ethics is ineffective. 

Although ethical programs "bring forward great intentions," according to the human rights organization Article 19, "their general lack of accountability and enforcement measures" renders them ineffectual (Article 19, 2019: 18). 

Finally, and predictably, ethics is attacked for being ineffective. 

However, it's important to note that the problem isn't that ethics is being asked to perform something for which it is too weak or soft. 

It's more like it's being asked to do something it wasn't supposed to accomplish. 


Criticizing ethics for lacking the power to enforce compliance with whatever it requires is like blaming a fork for not cutting meat properly: that is not what it is meant to do. 


The goal of ethics is not to prescribe certain behaviors and then guarantee that they are followed. 

The issue occurs when it is utilized in this manner. 

This is especially true in the field of AI ethics, where ethical principles, norms, or criteria are required to control AI and guarantee that it does not damage people or society as a whole (e.g. AI HLEG). 

Some suggest that this ethical lapse is deliberate, motivated by a desire to ensure that AI is not governed by legislation, i.e., that greater flexibility remains available and that no firm boundaries are drawn constraining the industrial and economic interests tied to this technology (Klöver and Fanta, 2019). 

For example, this criticism has been directed against the AI HLEG guidelines. 

Industry was extensively represented during debates at the European High-Level Expert Group on Artificial Intelligence (EU-HLEG), while academia and civil society did not have the same luxury, according to Article 19. 


While several non-negotiable ethical standards were initially specified in the text, owing to corporate pressure, they were eliminated from the final version. 


(Article 19, 2019: 18). Using ethics to hinder the implementation of vital legal regulation is a significant and concerning abuse and misuse of ethics. 

The result is ethics washing, as well as its cousins: ethics shopping, ethics shirking, and so on (Floridi, 2019; Greene et al., 2019; Wagner, 2018). 

Second, because the field of AI ethics is dominated by this 'law conception of ethics,' it fails to make full use of what ethics has to offer, namely its proper efficacy, despite the critical need for it. 

What exactly is this efficacy of ethics, and what value might it provide to the field? The true fangs of ethics lie in a never-failing capacity to perceive the new (Laugier, 2013). 


Ethics is basically a state of mind, a constantly renewed and nimble response to reality as it changes. 


The ethics of care has emphasized attention as a critical component of ethics (Tronto, 1993: 127). 

In this way, ethics is a strong instrument against cognitive and perceptual inertia, which prevents us from seeing what is different from before or in new settings, cultures, or circumstances, and hence necessitates a change in behavior (regulation included). 

This is particularly important for AI, given the significant changes and implications it has had and continues to have on society, as well as our basic ways of being and functioning. 

This ability to observe the environment is what keeps us from being cooked alive like the frog: it allows us to detect subtle changes as they happen. 

An extension and deepening of monitoring by governments and commercial enterprises, a rising reliance on technology, and the deployment of biased systems that lead to discrimination against women and minorities are all contributing to the increasingly hot water in AI. 


The changes these systems bring to society must be carefully examined, and opposed when their negative consequences exceed their advantages. 


In this way, ethics has a tight relationship with social sciences, as an attempt to perceive what we don't otherwise notice, and ethics aids us in looking concretely at how the world evolves. 

It aids in the cleaning of the lens through which we see the world so that we may be more aware of its changes (and AI does bring many of these). 

It is critical that ethics back us up in this respect. 

It enables us to be less passive in the face of these changes, allowing us to better direct them in ways that benefit people and society while also improving our quality of life. 


Hagendorff makes a similar point in his essay on the 'Ethics of AI Ethics,' disputing the prevalent deontological approach to ethics in AI ethics (what we've referred to as a legalistic approach to ethics in this article), whose primary goal is to 'limit, control, or direct' (2020: 112). 


He emphasizes the necessity for AI to adopt virtue ethics, which strives to 'broaden the scope of action, disclose blind spots, promote autonomy and freedom, and cultivate self-responsibility' (Hagendorff, 2020: 112). 

Other ethical theory frameworks that might be useful in today's AI ethics discussion include the Spinozist approach, which focuses on the growth or loss of agency and action capability. 

So, are we just misinterpreting AI ethics, which, as we've seen, is now dominated by a 'law-concept of ethics'? Is today's legalistic approach to ethics entirely incorrect? No, not at all. 



The problem is that principles, norms, and values (the legal conception of ethics so prevalent in AI ethics today) are an end of ethics rather than ethics itself. 


The word "end" has two meanings in this context. 

First, it is an end of ethics in the sense that it is the last destination of ethics, i.e., moulding laws, choices, behaviors, and acts in ways that are consistent with society's ideals. 

Ethics may be defined as the creation of principles (as in the AI HLEG criteria) or the application of ethical principles, values, or standards to particular situations. 

This process of operationalization of ethical standards may be observed, for example, in the European Commission's research funding program's ethics evaluation procedure or in ethics impact assessments, which look at how a new technique or technology could alter ethical norms and values. 

These are unquestionably worthwhile endeavors that have a beneficial influence on society and people. 


Ethics, as the development of principles, is also useful in shaping policies and regulatory frameworks. 


The AI HLEG guidelines have heavily influenced current policy and legislative developments at the EU level, such as the European Commission's "White Paper on Artificial Intelligence" (February 2020) and the European Parliament's proposed "Framework of ethical aspects of artificial intelligence, robotics, and related technologies" (April 2020). 

Ethics clearly lays forth the rights and wrongs, as well as what should be done and what should be avoided. 

It's important to recall, however, that ethics as ethical principles is also an end of ethics in another meaning: where it comes to a halt, where the thought is paused, and where this never-ending attention comes to an end. 

As a result, when ethics is reduced to a collection of principles, norms, or criteria, it has achieved its conclusion. 

There is no need for ethics if we have attained a sufficient degree of certainty and confidence in what are the correct judgments and acts. 



Ethics is about navigating muddy and dangerous seas while being vigilant. 


In the realm of AI, for example, ethical standards do not, by themselves, assist in the practical exploration of difficult topics such as fairness in extremely complex socio-technical systems. 


These must be thoroughly studied to ensure that we are not putting in place systems that violate deeply held norms and beliefs. 

Ethics is made worthless without a continual process of challenging what is or may be clear, of probing behind what seems to be resolved, and of keeping this inquiry alive. 

In other words, once ethics settles into established norms and principles, the ethical process itself comes to an end. 

It is vital to maintain ethics nimble and alive in light of AI's profound, huge, and broad influence on society. 

The ongoing renewal process of examining the world and the glasses through which we experience it — intentionally, consistently, and iteratively – is critical to AI ethics.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


AI ethics, law of AI, regulation of AI, ethics washing, EU HLEG on AI, ethical principles







Further Reading:



  • Anscombe, GEM (1958) Modern moral philosophy. Philosophy 33(124): 1–19.
  • European Committee for Standardization (2017) CEN Workshop Agreement: Ethics assessment for research and innovation – Part 2: Ethical impact assessment framework (by the SATORI project). Available at: https://satoriproject.eu/media/CWA17145-23d2017 .
  • Boddington, P (2017) Towards a Code of Ethics for Artificial Intelligence. Cham: Springer.
  • European Parliament JURI (April 2020) Framework of ethical aspects of artificial intelligence, robotics and related technologies, draft report (2020/2012(INL)). Available at: https://oeil.secure.europarl.europa.eu/oeil/popups/ficheprocedure.do?lang=&reference=2020/2012 .
  • Gilligan, C (1982) In a Different Voice: Psychological Theory and Women’s Development. Cambridge: Harvard University Press.
  • Floridi, L (2019) Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology 32: 185–193.
  • Greene, D, Hoffmann, A, Stark, L (2019) Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In: Proceedings of the 52nd Hawaii international conference on system sciences, Maui, Hawaii, 2019, pp.2122–2131.
  • Hagendorff, T (2020) The ethics of AI ethics. An evaluation of guidelines. Minds and Machines 30: 99–120.
  • Jansen P, Brey P, Fox A, Maas J, Hillas B, Wagner N, Smith P, Oluoch I, Lamers L, van Gein H, Resseguier A, Rodrigues R, Wright D, Douglas D (2019) Ethical analysis of AI and robotics technologies. August, SIENNA D4.4, https://www.sienna-project.eu/digitalAssets/801/c_801912-l_1-k_d4.4_ethical-analysis–ai-and-r–with-acknowledgements.pdf
  • High-Level Expert Group on Artificial Intelligence (2019) Ethics guidelines for trustworthy AI. Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
  • Jobin, A, Ienca, M, Vayena, E (2019) The global landscape of AI ethics guidelines. Nature Machine Intelligence 1(9): 389–399.
  • Laugier, S (2013) The will to see: Ethics and moral perception of sense. Graduate Faculty Philosophy Journal 34(2): 263–281.
  • Klöver, C, Fanta, A (2019) No red lines: Industry defuses ethics guidelines for artificial intelligence. Available at: https://algorithmwatch.org/en/industry-defuses-ethics-guidelines-for-artificial-intelligence/
  • López, JJ, Lunau, J (2012) ELSIfication in Canada: Legal modes of reasoning. Science as Culture 21(1): 77–99.
  • Rodrigues, R, Rességuier, A (2019) The underdog in the AI ethical and legal debate: Human autonomy. In: Ethics Dialogues. Available at: https://www.ethicsdialogues.eu/2019/06/12/the-underdog-in-the-ai-ethical-and-legal-debate-human-autonomy/
  • Ochigame, R (2019) The invention of “Ethical AI” how big tech manipulates academia to avoid regulation. The Intercept. Available at: https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/?comments=1
  • Mittelstadt, B (2019) Principles alone cannot guarantee ethical AI. Nature Machine Intelligence 1: 501–507.
  • Tronto, J (1993) Moral Boundaries: A Political Argument for an Ethic of Care. New York: Routledge.
  • Wagner, B (2018) Ethics as an escape from regulation: From ethics-washing to ethics-shopping. In: Bayamlioglu, E, Baraliuc, I, Janssens, L, et al. (eds) Being Profiled: Cogitas Ergo Sum: 10 Years of Profiling the European Citizen. Amsterdam: Amsterdam University Press, pp. 84–89.



Artificial Intelligence - What Is An AI Winter?

 



The term AI Winter was coined at the 1984 annual conference of the American Association for Artificial Intelligence (now the Association for the Advancement of Artificial Intelligence, or AAAI).

Marvin Minsky and Roger Schank, two top academics, used the phrase to describe the imminent bust in artificial intelligence research and development at the time.

Daniel Crevier, a Canadian AI researcher, has detailed how fear of an impending AI Winter caused a domino effect that started with skepticism in the AI research community, spread to the media, and eventually resulted in negative funding responses.

As a consequence, real AI research and development came to a halt.

The initial skepticism may now be ascribed mostly to the excessively optimistic promises made at the time, with AI's real outcomes being significantly less than expected.

Other factors, such as a lack of computing power in the early days of AI research, contributed to the belief that an AI Winter was approaching.

This was especially true in the case of neural network research, which required a large amount of processing power.

Economic factors, moreover, focused attention on more concrete investments, especially during overlapping periods of economic crisis.

AI Winters have occurred many times during the history of AI, with two of the most notable eras covering 1974 to 1980 and 1987 to 1993.

Although the dates of AI Winters are debatable and dependent on the source, times with overlapping patterns are associated with research abandonment and defunding.

The development of AI systems and technologies has proceeded in waves, similar to the hype and eventual collapse of other breakthrough technologies such as nanotechnology.

Not only has there been an unprecedented amount of money for basic research, but there has also been exceptional progress in the development of machine learning during the present boom time.

The reasons for the investment surge vary depending on the many stakeholders involved in artificial intelligence research and development.

For example, industry has staked a lot of money on the idea that discoveries in AI would result in dividends by changing whole market sectors.

Governmental agencies, such as the military, invest in AI research to improve the efficiency of both defensive and offensive technology and to protect troops from imminent damage.

Because AI Winters are triggered by a perceived lack of trust in what AI can provide, the present buzz around AI and its promises has sparked fears of another AI Winter.

On the other hand, others argue that current technology developments in applied AI research have secured future progress in this field.

This argument contrasts sharply with the so-called "pipeline issue," which claims that a lack of basic AI research will result in a limited number of applied outcomes.

One of the major elements of prior AI Winters has been the pipeline issue.

However, if the counterargument is accurate, a feedback loop between applied breakthroughs and basic research will generate enough pressure to keep the pipeline moving forward.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Minsky, Marvin.

Further Reading

Crevier, Daniel. 1993. AI: The Tumultuous Search for Artificial Intelligence. New York: Basic Books.

Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

Muehlhauser, Luke. 2016. “What Should We Learn from Past AI Forecasts?” https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/what-should-we-learn-past-ai-forecasts.


Artificial Intelligence - What Are Non-Player Characters And Emergent Gameplay?

 


Emergent gameplay occurs when a player in a video game encounters complicated scenarios as a result of their interactions with the game world and its characters.


Players may fully immerse themselves in an intricate and realistic game environment and feel the consequences of their choices in today's video games.

Players may personalize and build their own character and story.

In the Deus Ex series (2000), for example, one of the first emergent gameplay systems, players take on the role of a cyborg in a dystopian metropolis.

They may change the physical appearance of their character as well as their skill sets, missions, and affiliations.

Players may choose between militarized adaptations that allow for more aggressive play and stealthier options.

The plot and experience are altered by the choices made on how to customize and play, resulting in unique challenges and results for each player.


When players interact with other characters or items, emergent gameplay guarantees that the game environment reacts.



Because of the many options available, the story unfolds in surprising ways as the gaming world changes.

Specific outcomes are not predetermined by the designer, and emergent gameplay can even take advantage of game flaws to generate actions in the game world, which some consider to be a form of emergence.

Artificial intelligence has become more popular among game creators in order to have the game environment respond to player actions in a timely manner.

Artificial intelligence drives the behavior of game characters and their interactions through algorithms: basic rule-based systems that help generate the game environment in sophisticated ways.

"Game AI" refers to the usage of artificial intelligence in games.

The most common use of AI algorithms is to construct non-player characters (NPCs): characters in the game world with whom the player interacts but does not control.


In its most basic form, AI will use pre-scripted actions for the characters, who will then concentrate on reacting to certain events.


Pre-scripted character behaviors performed by AI are fairly rudimentary, and NPCs are meant to respond to certain "case" events.

The NPC will evaluate its current situation before responding in a range determined by the AI algorithm.

Pac-Man (1980) is a good early and basic illustration of this.

Pac-Man is controlled by the player through a labyrinth while being pursued by a variety of ghosts, who are the game's non-player characters.


Players could only interact with ghosts (NPCs) by moving about; ghosts had limited replies and their own AI-programmed pre-scripted movement.




The AI-planned reaction would occur when a ghost ran into a wall.

It would then roll an AI-created die that determined whether the NPC would turn toward or away from the player.

If the NPC decided to go after the player, the AI pre-scripted program would detect the player's location and turn the ghost toward it.

If the NPC decided not to go after the player, it would turn in the opposite or a random direction.

This NPC interaction is very simple and limited; however, this was an early step in AI providing emergent gameplay.
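The ghost logic described above amounts to a small rule-based routine. The following sketch is purely illustrative (the function name, coin-flip probability, and movement encoding are invented for this example), but it captures the pre-scripted chase-or-wander decision the article describes:

```python
import random

def ghost_next_move(ghost_pos, player_pos, hit_wall):
    """Pre-scripted ghost decision in the spirit of Pac-Man's NPC logic.

    Illustrative sketch, not the actual arcade code: on hitting a wall,
    the ghost "rolls a die" to decide whether to chase the player or
    wander off in another direction.
    """
    if not hit_wall:
        return "continue"            # keep moving along the current path
    if random.random() < 0.5:        # the AI-created "die roll"
        # Chase: turn toward the player's current location.
        dx = player_pos[0] - ghost_pos[0]
        dy = player_pos[1] - ghost_pos[1]
        if abs(dx) >= abs(dy):
            return "right" if dx > 0 else "left"
        return "down" if dy > 0 else "up"
    # Otherwise turn in a random direction away from the wall.
    return random.choice(["up", "down", "left", "right"])
```

Even this tiny rule set, multiplied across four ghosts, produces the varied chase patterns players experienced as ghost "personalities."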



Contemporary games provide a much wider variety of options and a much larger set of possible interactions for the player.


Players in contemporary role-playing games (RPGs) are given an incredibly high number of potential options, as exemplified by Fallout 3 (2008) and its sequels.

Fallout is a role-playing game in which the player takes on the role of a survivor in post-apocalyptic America.

The narrative gives the player a goal but no prescribed path; as a result, the player is free to play as they see fit.

The player can punch every NPC, or they can talk to them instead.

In addition to this variety of actions by the player, there are also a variety of NPCs controlled through AI.

Some of the NPCs are key NPCs, which means they have their own unique scripted dialogue and responses.

This gives them a personality and adds a complexity, through the use of AI, that makes the game environment feel more real.


When talking to key NPCs, the player is given options for what to say, and the key NPCs will have their own unique responses.


This differs from the background character NPCs, as the key NPCs are supposed to respond in such a way that it would emulate interaction with a real personality.

These are still pre-scripted responses to the player, but the NPC responses are emergent based on the possible combinations of interactions.

As the player makes decisions, the NPC will examine each decision and decide how to respond in accordance with its script.

The NPCs that the players help or hurt and the resulting interactions shape the game world.
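A key NPC's scripted-but-branching dialogue can be pictured as a small tree that the player's choices walk through. The sketch below is a minimal, hypothetical example (the NPC lines, option names, and structure are invented, and real games store far richer dialogue trees), but it shows how pre-scripted responses still produce different interactions per player:

```python
# Minimal pre-scripted key-NPC dialogue tree (all content invented
# for illustration). Each node holds the NPC's line plus the player's
# options, and each option names the node it branches to.
DIALOGUE = {
    "greeting": {
        "npc": "Haven't seen you around the settlement before.",
        "options": {
            "friendly": ("Just passing through. Need any help?", "quest_offer"),
            "hostile": ("Out of my way.", "dismissal"),
        },
    },
    "quest_offer": {"npc": "Actually, yes. Raiders took our supplies.", "options": {}},
    "dismissal": {"npc": "Fine. Watch yourself out there.", "options": {}},
}

def talk(node_key, player_choice=None):
    """Return the NPC's scripted line, following the branch the player picked."""
    node = DIALOGUE[node_key]
    if player_choice is None:
        return node["npc"]
    _, next_key = node["options"][player_choice]
    return DIALOGUE[next_key]["npc"]
```

Every response is authored in advance, yet which responses a given player ever sees depends entirely on the branches they choose.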

Game AI can emulate personalities and present emergent gameplay in a narrative setting; however, AI is also involved in challenging the player in difficulty settings.


A variety of pre-scripted AI can still be used to create difficulty.

Pre-scripted AI is often made to produce suboptimal decisions for enemy NPCs in games where players fight.

This helps make the game easier and also makes the NPCs seem more human.

Suboptimal pre-scripted decisions make the enemy NPCs easier to handle.

Optimal decisions, however, make the opponents far more difficult to handle.

This can be seen in contemporary games like Tom Clancy’s The Division (2016), where players fight multiple NPCs.

The enemy NPCs range from angry rioters to fully trained paramilitary units.

The rioter NPCs offer an easier challenge as they are not trained in combat and make suboptimal decisions while fighting the player.

The military-trained NPCs are designed with more optimal decision-making AI in order to increase the difficulty for the player fighting them.
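One common way to implement this rioter-versus-paramilitary gap is to let the NPC pick the best-scoring action only some of the time. The sketch below is a hypothetical illustration (the `skill` parameter, action names, and scores are invented, not taken from The Division):

```python
import random

def choose_action(actions, skill):
    """Pick an enemy NPC action, deliberately degraded by a 'skill' level.

    `actions` maps action names to effectiveness scores; `skill` in [0, 1]
    is the chance the NPC picks the optimal action. An untrained rioter
    might use a low value, a paramilitary unit a high one (values are
    illustrative only).
    """
    if random.random() < skill:
        return max(actions, key=actions.get)   # optimal decision
    return random.choice(list(actions))        # suboptimal, more "human"
```

Tuning a single probability like this both scales difficulty and makes weaker enemies feel believably fallible.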



Emergent gameplay has evolved to its full potential through the use of adaptive AI.


Similar to pre-scripted AI, the character examines a variety of variables and plans an action.

However, unlike pre-scripted AI, which follows fixed decisions, an adaptive AI character will make its own decisions.

This can be done through computer-controlled learning.


AI-created NPCs follow rules of interactions with the players.


As players go through the game, the player interactions are analyzed, and some AI judgments become more weighted than others.

This is done in order to provide distinct player experiences.

Various player behaviors are actively examined, and modifications are made by the AI when designing future challenges.

The purpose of the adaptive AI is to challenge the players to a degree that the game is fun while not being too easy or too challenging.

Difficulty may still be changed if players seek a different challenge.

This may be observed in the AI Director of the Left 4 Dead series (2008).

Players navigate through a level, killing zombies and gathering resources in order to live.


The AI Director chooses which zombies to spawn, where they will spawn, and what supplies will be spawned.

The choice to spawn them is not made at random; rather, it is based on how well the players performed throughout the level.

The AI Director makes its own decisions about how to respond; as a result, the AI Director adapts to the level's player success.

The AI Director gives fewer resources and spawns more adversaries as the difficulty level rises.
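The Director's behavior can be sketched as a performance-weighted planner. The toy version below is loosely modeled on the system described above; the formula, thresholds, and numbers are all invented for illustration, but the point stands: spawn decisions are weighted by how well the players are doing rather than chosen at random.

```python
def plan_next_wave(player_health, resources_found, zombies_killed):
    """Scale the next encounter up or down based on recent performance."""
    # A crude performance score in [0, 1]: healthy, well-supplied,
    # effective players get a harder wave (weights are illustrative).
    performance = (player_health / 100
                   + min(resources_found / 10, 1)
                   + min(zombies_killed / 50, 1)) / 3
    return {
        "zombies_to_spawn": int(10 + 20 * performance),
        "supplies_to_spawn": int(8 * (1 - performance)) + 1,
    }
```

A struggling team gets a gentler wave with more supplies; a dominant team gets swarmed, keeping the tension roughly constant across skill levels.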


Changes in emergent gameplay are influenced by advancements in simulation and game world design.


As virtual reality technology develops, new technologies will continue to help in this progress.

Virtual reality games provide an even more immersive gaming experience.

Players may use their own hands and eyes to interact with the environment.

Computers are growing more powerful, allowing for more realistic pictures and animations to be rendered.


Adaptive AI demonstrates the capability of genuinely autonomous decision-making, resulting in a truly participatory gaming experience.


Game makers are continuing to build more immersive environments as AI improves to provide more lifelike behavior.

These cutting-edge technologies and new AI will elevate emergent gameplay to new heights.

Artificial intelligence has emerged as a crucial part of the video game industry for developing realistic and engrossing gameplay.



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Brooks, Rodney; Distributed and Swarm Intelligence; General and Narrow AI.



Further Reading:



Brooks, Rodney. 1986. “A Robust Layered Control System for a Mobile Robot.” IEEE Journal of Robotics and Automation 2, no. 1 (March): 14–23.

Brooks, Rodney. 1990. “Elephants Don’t Play Chess.” Robotics and Autonomous Systems6, no. 1–2 (June): 3–15.

Brooks, Rodney. 1991. “Intelligence Without Representation.” Artificial Intelligence Journal 47: 139–60.

Dennett, Daniel C. 1997. “Cog as a Thought Experiment.” Robotics and Autonomous Systems 20: 251–56.

Gallagher, Shaun. 2005. How the Body Shapes the Mind. Oxford: Oxford University Press.

Pfeifer, Rolf, and Josh Bongard. 2007. How the Body Shapes the Way We Think: A New View of Intelligence. Cambridge, MA: MIT Press.




AI Glossary - What Is Artificial Intelligence in Medicine Or AIM?

 



AIM is an abbreviation for Artificial Intelligence in Medicine.

It is included in the field of Medical Informatics.


Artificial Intelligence in Medicine (the journal) publishes unique papers on the theory and application of artificial intelligence (AI) in medicine, medically-oriented human biology, and health care from a range of multidisciplinary viewpoints.

The study and implementation of ways to enhance the administration of patient data, clinical knowledge, demographic data, and other information related to patient care and community health is known as medical informatics. 

It is a relatively new science, having arisen in the decades after the 1940s discovery of the digital computer.


What Is Artificial Intelligence's Importance in Healthcare and Medicine?


  • Artificial intelligence can help physicians choose the best cancer therapies from a variety of possibilities. 
  • AI helps physicians identify and choose the right drugs for the right patients by capturing data from various databases relating to the condition. 
  • AI also supports decision-making processes for existing drugs and expanded treatments for other conditions, as well as expediting clinical trials by finding the right patients from a variety of data sources.



What role does artificial intelligence play in medicine and healthcare?


Medical imaging analysis is aided by AI.

It aids doctors in the evaluation of images and scans. 

This allows radiologists and cardiologists to find crucial information for prioritizing urgent patients, avoiding possible mistakes in reading electronic health records (EHRs), and establishing more exact diagnoses.


What Are The Advantages of AI in Healthcare?


Artificial intelligence (AI) has emerged as the most potent agent of change in the healthcare business over the previous decade. 

Learn how healthcare professionals might profit from artificial intelligence.

There are several opportunities for healthcare institutions to use AI to offer more effective, efficient, and precise interventions to their patients, ranging from diagnosis and risk assessment to the selection of treatment techniques.


AI is positioned to generate innovations and benefits throughout the care continuum as the amount of healthcare data grows. 

This is based on AI technologies and machine learning (ML) algorithms' capacity to provide proactive, intelligent, and often concealed insights that guide diagnostic and treatment decisions.


When used in the areas of improved treatment, chronic illness management, early risk detection, and workflow automation and optimization, AI may be immensely valuable to both patients and clinicians. 


Below are some advantages of adopting AI in healthcare, to help providers better grasp how to use it in their ecosystem.


Management of Population Health Using AI.


Healthcare companies may utilize artificial intelligence to gather and analyze patient health data in order to proactively detect and avoid risk, reduce preventative care gaps, and get a better understanding of how clinical, genetic, behavioral, and environmental variables influence the population. 

Combining diagnostic data, exam results, and unstructured narrative data provides a complete perspective of a patient's health, as well as actionable insights that help to avoid illness and promote wellness. 


AI-powered systems may help compile, evaluate, and compare a slew of such data points to population-level trends in order to uncover early illness risks.


As these data points are accumulated to offer a picture into the population, predictive analytics may be obtained. 

These findings may subsequently be used for population risk stratification based on genetic and phenotypic variables, as well as behavioral and social determinants. 

Healthcare companies may use these insights to deliver more tailored, data-driven treatment while also optimizing resource allocation and use, resulting in improved patient outcomes.
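The risk-stratification idea above reduces, at its simplest, to scoring each patient on a handful of variables and bucketing the population into tiers for targeted outreach. The sketch below is a deliberately naive illustration: the variables, weights, and cutoffs are invented and not clinically validated, and real systems use far richer models.

```python
def risk_tier(age, chronic_conditions, missed_screenings):
    """Bucket one patient by a toy risk score (weights are illustrative only)."""
    score = 0.02 * age + 0.3 * chronic_conditions + 0.1 * missed_screenings
    if score >= 2.0:
        return "high"
    if score >= 1.0:
        return "medium"
    return "low"

def stratify(patients):
    """Group a patient list into tiers for targeted, proactive outreach."""
    tiers = {"high": [], "medium": [], "low": []}
    for patient_id, features in patients.items():
        tiers[risk_tier(*features)].append(patient_id)
    return tiers
```

In practice the score would come from a trained model over clinical, genetic, behavioral, and environmental data, but the stratify-then-target workflow is the same.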



Making Clinical Decisions Using AI.


Artificial intelligence may help minimize the time and money required to assess and diagnose patients in some healthcare procedures. 

Medical workers may save more lives by acting quicker as a result of this. 

Traditional procedures cannot detect danger as quickly or accurately as machine learning (ML) algorithms can. 

These algorithms, when used effectively, may automate inefficient, manual operations, speeding up diagnosis and lowering diagnostic mistakes, which are still the leading cause of medical malpractice lawsuits.


Furthermore, AI-enabled technologies may assemble and sift through enormous amounts of clinical data to provide doctors a more holistic perspective of patient populations' health state. 

These technologies provide the care team with real-time or near-real-time actionable information at the proper time and location to improve treatment outcomes dramatically. 

The whole care team may operate at the top of their licenses by automating the gathering and analysis of the terabytes of data streaming within the hospital walls.



Artificial Intelligence-Assisted Surgery


Surgical robotics applications are one of the most inventive AI use cases in healthcare. 


AI surgical systems that can perform the slightest motions with flawless accuracy have been developed as AI robotics has matured. 

These devices can carry out difficult surgical procedures, lowering the typical procedure wait time as well as the danger of blood loss, complications, and other adverse effects.


Machine learning may also help to facilitate surgical procedures. 


It may give surgeons and healthcare workers real-time data and sophisticated insights into a patient's present status. 

This AI-assisted data allows them to make quick, informed choices before, during, and after surgeries to assure the best possible results.



Improved Access to Healthcare Using AI.


Studies indicate considerable differences in average life expectancy between industrialized and developing countries, largely as a consequence of restricted or nonexistent healthcare access. 


In terms of implementing and exploiting modern medical technology that can provide proper treatment to the public, developing countries lag behind their peers. 


In addition, a lack of skilled healthcare personnel (such as surgeons, radiologists, and ultrasound technicians) and appropriately equipped healthcare facilities has an influence on care delivery in these areas. 

To encourage a more efficient healthcare ecosystem, AI can offer a digital infrastructure that allows for speedier identification of symptoms and triage of patients to the appropriate level and modality of treatment.



In healthcare, AI may help alleviate a scarcity of doctors in rural, low-resource locations by taking over some diagnostic responsibilities. 


Using machine learning for imaging, for example, enables quick interpretation of diagnostic studies such as X-rays, CT scans, and MRIs. 

Furthermore, educational institutions are increasingly using these technologies to improve student, resident, and fellow training while reducing diagnostic mistakes and patient risk.



AI To Improve Operational Efficiency and Performance Of Healthcare Practices.


Modern healthcare operations are a complicated web of intricately linked systems and activities. 

This makes cost optimization challenging while also optimizing asset usage and guaranteeing minimal patient wait times.

Artificial intelligence is rapidly being used by health systems to filter through large amounts of big data inside their digital environment in order to generate insights that might help them improve operations, increase efficiency, and optimize performance. 



For example, AI and machine learning can: 


(1) improve throughput and effective and efficient use of facilities by prioritizing services based on patient acuity and resource availability, 

(2) improve revenue cycle performance by optimizing workflows such as prior authorization claims and denials, and 

(3) automate routine, repeatable tasks to better deploy human resources when and where they are most needed.


When used effectively, AI and machine learning may give administrators and clinical leaders with the knowledge they need to enhance the quality and timeliness of hundreds of choices they must make every day, allowing patients to move smoothly between different healthcare services.



The rapidly growing amount of patient data both within and outside of hospitals shows no signs of slowing down. 


Healthcare organizations are under pressure from ongoing financial challenges, operational inefficiencies, a global shortage of health workers, and rising costs. 

They need technology solutions that drive process improvement and better care delivery while meeting critical operational and clinical metrics.


The potential for AI in healthcare to enhance the quality and efficiency of healthcare delivery by analyzing and extracting intelligent insights from vast amounts of data is boundless and well-documented.



What role does AI play in medicine and informatics in the future?

According to Accenture Consulting, the artificial intelligence (AI) industry in healthcare is estimated to reach $6.6 billion by 2021. 

From AI-based software for managing medical data, to practice management software, to robots assisting surgeries, this creative technology has led to numerous improvements.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram

