
AI Glossary - arboART.

 


arboART is a hierarchical agglomerative ART (Adaptive Resonance Theory) network.

Each layer's prototype vectors are fed as inputs to the next layer.
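
To make the layered idea concrete, here is a hedged, minimal Python sketch of a simplified fuzzy-ART-style clustering layer, stacked so that each layer clusters the previous layer's prototypes. The match rule, the learning rate beta, the vigilance schedule, and the function name art_layer are illustrative assumptions, not the published arboART algorithm.

```python
import numpy as np

def art_layer(inputs, vigilance, beta=0.5):
    """One simplified ART-style clustering pass (illustrative only).

    Each input is matched against the existing prototypes; if the best
    match passes the vigilance test, the winning prototype is updated
    (resonance); otherwise a new prototype (category) is created.
    """
    prototypes = []
    for x in inputs:
        best, best_score = None, -1.0
        for i, w in enumerate(prototypes):
            score = np.minimum(x, w).sum() / x.sum()  # fuzzy-AND match
            if score > best_score:
                best, best_score = i, score
        if best is not None and best_score >= vigilance:
            # Resonance: move the winning prototype toward the input.
            prototypes[best] = (beta * np.minimum(x, prototypes[best])
                                + (1 - beta) * prototypes[best])
        else:
            prototypes.append(x.copy())  # new category
    return prototypes

# Hierarchy: each layer clusters the previous layer's prototypes,
# with decreasing vigilance so the categories grow coarser.
layer_input = list(np.random.rand(40, 8))
for vigilance in (0.9, 0.7, 0.5):
    layer_input = art_layer(layer_input, vigilance)
    print(f"vigilance {vigilance}: {len(layer_input)} prototypes")
```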


 

References:

 

  1. http://www.wi.leidenuniv.nl/art/
  2. ftp://ftp.sas.com/pub/neural/FAQ2.html

~ Jai Krishna Ponnappan

AI Glossary - Analogy

 


A process of thinking or learning in which the present situation is compared to prior circumstances that are comparable in some way.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


Be sure to refer to the complete & active AI Terms Glossary here.

You may also want to read more about Artificial Intelligence here.



AI Glossary - Alpha-Beta Pruning.

 


Alpha-beta pruning is a method for pruning (shortening) a search tree.

Systems that construct trees of possible moves or actions use it.

A branch of the tree is pruned when it can be shown that it cannot lead to a solution better than a known good solution.

As the tree grows, the search maintains two values, termed alpha and beta.
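
As a hedged illustration of the idea, here is a minimal, self-contained Python sketch of minimax search with alpha-beta pruning; the Node structure and the example values are hypothetical, chosen only to show a cutoff in action.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    value: float = 0.0  # static evaluation, used at the leaves
    children: List["Node"] = field(default_factory=list)

def alphabeta(node, depth, alpha, beta, maximizing):
    """Minimax search with alpha-beta pruning.

    alpha is the best value the maximizer can already guarantee, and
    beta is the best value the minimizer can already guarantee. A
    branch is pruned as soon as alpha >= beta, because optimal play
    would never allow the game to reach it.
    """
    if depth == 0 or not node.children:
        return node.value
    if maximizing:
        value = float("-inf")
        for child in node.children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the remaining siblings are pruned
        return value
    else:
        value = float("inf")
        for child in node.children:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff
        return value

# Tiny example: the maximizer's best guaranteed value is 3, and the
# leaf worth 9 is pruned (never evaluated).
def leaves(vals):
    return [Node(value=v) for v in vals]

root = Node(children=[Node(children=leaves([3, 5])),
                      Node(children=leaves([2, 9]))])
print(alphabeta(root, 2, float("-inf"), float("inf"), True))  # -> 3
```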


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


Be sure to refer to the complete & active AI Terms Glossary here.

You may also want to read more about Artificial Intelligence here.




AI Glossary - Arity.

 



An object's arity is the total number of arguments or operands it accepts or contains; a binary operator, for example, has arity two.
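
For instance, in Python the arity of a function can be read off its signature; this small illustrative example uses only the standard library's inspect module.

```python
import inspect

def negate(x):   # a unary operation: arity 1
    return -x

def add(a, b):   # a binary operation: arity 2
    return a + b

for f in (negate, add):
    print(f.__name__, "has arity", len(inspect.signature(f).parameters))
```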



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


Be sure to refer to the complete & active AI Terms Glossary here.

You may also want to read more about Artificial Intelligence here.



AI Terms Glossary - ACT-R

 



ACT-R is a goal-oriented cognitive architecture built around a single goal stack.


It has declarative memory elements as well as procedural memory, which comprises production rules.

Declarative memory elements carry both activation values and association strengths with other elements.
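
The following toy Python skeleton shows those ingredients side by side: a single goal stack, declarative chunks with activation values and association strengths, and procedural if-then production rules. The class names and rule format are invented for illustration; this is a sketch of the concepts, not the actual ACT-R implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    """A declarative memory element with an activation value and
    association strengths to other chunks."""
    name: str
    activation: float = 0.0
    associations: dict = field(default_factory=dict)  # chunk name -> strength

class ToyActR:
    def __init__(self):
        self.goal_stack = []   # the single goal stack
        self.declarative = {}  # name -> Chunk (declarative memory)
        self.productions = []  # procedural memory: (condition, action) rules

    def retrieve(self, cue):
        """Return the matching chunk with the highest activation."""
        matches = [c for c in self.declarative.values() if cue in c.name]
        return max(matches, key=lambda c: c.activation, default=None)

    def step(self):
        """Fire the first production whose condition matches the current goal."""
        if not self.goal_stack:
            return
        goal = self.goal_stack[-1]
        for condition, action in self.productions:
            if condition(goal, self):
                action(goal, self)
                break

model = ToyActR()
model.declarative["fact:7+8=15"] = Chunk("fact:7+8=15", activation=1.2)
model.goal_stack.append("solve 7+8")
model.productions.append((
    lambda goal, m: goal.startswith("solve"),
    lambda goal, m: print("retrieved:", m.retrieve("7+8").name),
))
model.step()  # -> retrieved: fact:7+8=15
```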



See Also: 


Soar.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


Be sure to refer to the complete & active AI Terms Glossary here.

You may also want to read more about Artificial Intelligence here.



AI Terms Glossary - ABSTRIPS.






The ABSTRIPS program, derived from the STRIPS program, was created to tackle robot placement and movement problems.


Unlike STRIPS, it works from the most significant to the least significant difference when comparing the current and desired states.
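
As a hedged sketch of that idea, the toy Python planner below assigns hypothetical criticality scores to preconditions and solves the problem at the highest criticality level first, then re-solves it with progressively more preconditions enabled. The scores, operators, and re-planning loop are invented for illustration; the real ABSTRIPS refines each step of the abstract plan rather than re-planning from scratch.

```python
# Hypothetical criticality scores (higher = more significant difference).
CRITICALITY = {"door_open": 2, "at_door": 1, "holding_key": 1}

def satisfied(preconds, state, level):
    # At abstraction level `level`, preconditions whose criticality is
    # below the level are ignored (deferred to later refinement).
    return all(p in state for p in preconds if CRITICALITY[p] >= level)

def search(state, goal, operators, level, depth=6):
    """Tiny depth-limited forward search at one criticality level."""
    if goal <= state:
        return []
    if depth == 0:
        return None
    for name, preconds, adds in operators:
        if satisfied(preconds, state, level) and not adds <= state:
            rest = search(state | adds, goal, operators, level, depth - 1)
            if rest is not None:
                return [name] + rest
    return None

def abstrips_plan(state, goal, operators):
    plan = None
    for level in sorted(set(CRITICALITY.values()), reverse=True):
        plan = search(state, goal, operators, level)
        print(f"criticality level {level}: {plan}")
    return plan

# Operators: (name, preconditions, effects added to the state).
ops = [
    ("unlock_door", {"holding_key", "at_door"}, {"door_open"}),
    ("goto_door", set(), {"at_door"}),
    ("grab_key", set(), {"holding_key"}),
]
# Level 2 yields the abstract plan [unlock_door]; level 1 fills in
# the details: [goto_door, grab_key, unlock_door].
abstrips_plan(set(), {"door_open"}, ops)
```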




See Also: 


Means-Ends analysis.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


Be sure to refer to the complete & active AI Terms Glossary here.

You may also want to read more about Artificial Intelligence here.



Artificial Intelligence - History And Timeline

     




    1942

    Science fiction author Isaac Asimov introduces the Three Laws of Robotics in the short story "Runaround."


    1943


    Mathematician Emil Post describes "production systems," a notion later adopted for the 1957 General Problem Solver.


    1943


    "A Logical Calculus of the Ideas of Immanent in Nervous Activity," a study by Warren McCulloch and Walter Pitts on a computational theory of neural networks, is published.


    1944


    The Teleological Society is founded by John von Neumann, Norbert Wiener, Warren McCulloch, Walter Pitts, and Howard Aiken to explore, among other things, communication and control in the nervous system.


    1945


    In his book How to Solve It, George Polya emphasizes the importance of heuristic thinking in problem solving.


    1946


    In New York City, the first of eleven Macy Conferences on Cybernetics gets underway. "Feedback Mechanisms and Circular Causal Systems in Biological and Social Systems" is the focus of the inaugural conference.



    1948


    Norbert Wiener, a mathematician, publishes Cybernetics, or Control and Communication in the Animal and the Machine.


    1949


    In his book The Organization of Behavior, psychologist Donald Hebb proposes a theory of neural adaptation in human learning: "neurons that fire together wire together."


    1949


    Edmund Berkeley's book Giant Brains, or Machines That Think, is published.


    1950


    Alan Turing's "Computing Machinery and Intelligence" describes the Turing Test, which attributes intelligence to any computer capable of demonstrating intelligent behavior comparable to that of a person.


    1950


    Claude Shannon publishes "Programming a Computer for Playing Chess," a groundbreaking technical paper describing search methods and strategies.



    1951


    Marvin Minsky, a mathematics student, and Dean Edmonds, a physics student, build an electronic rat that can learn to navigate a maze using Hebbian theory.


    1951


    Mathematician John von Neumann publishes "The General and Logical Theory of Automata," comparing the human brain and central nervous system to a computer.


    1951


    For the University of Manchester's Ferranti Mark 1 computer, Christopher Strachey writes a checkers program and Dietrich Prinz a chess program.


    1952


    British cyberneticist W. Ross Ashby publishes Design for a Brain: The Origin of Adaptive Behavior, a book on the logical underpinnings of human brain function.


    1952


    At Cornell University Medical College, physiologist James Hardy and physician Martin Lipkin begin developing a McBee punched card system for mechanical diagnosis of patients.


    1954


    Science-Fiction Thinking Machines: Robots, Androids, Computers, a themed anthology edited by Groff Conklin, is published.


    1954


    The Georgetown-IBM experiment demonstrates the potential of machine translation of text.


    1955


    Artificial intelligence research begins at Carnegie Tech (now Carnegie Mellon University) under the direction of economist Herbert Simon and graduate student Allen Newell.


    1955


    Mathematician John Kemeny writes "Man as a Machine" for Scientific American.


    1955


    In a Rockefeller Foundation proposal for a summer research meeting at Dartmouth College, mathematician John McCarthy coins the phrase "artificial intelligence."



    1956


    Allen Newell, Herbert Simon, and Cliff Shaw create Logic Theorist, an artificial intelligence program that proves theorems from Alfred North Whitehead and Bertrand Russell's Principia Mathematica.


    1956


    The "Constitutional Convention of AI," a Dartmouth Summer Research Project, brings together specialists in cybernetics, automata, information theory, operations research, and game theory.


    1956


    Electrical engineer Arthur Samuel demonstrates his checkers-playing AI program on television.


    1957


    Allen Newell and Herbert Simon create the General Problem Solver AI program.


    1957


    The Rockefeller Medical Electronics Center demonstrates how an RCA Bizmac computer application can help physicians differentiate between blood disorders.


    1958


    The Computer and the Brain, an unfinished work by John von Neumann, is published.


    1958


    At the "Mechanisation of Thought Processes" symposium at the UK's Teddington National Physical Laboratory, Firmin Nash delivers the Group Symbol Associator its first public demonstration.


    1958


    Frank Rosenblatt develops the single-layer perceptron, a neural network with a supervised learning algorithm for linear data classification.


    1958


    John McCarthy of the Massachusetts Institute of Technology (MIT) specifies the high-level programming language LISP for AI research.


    1959


    "The Reasoning Foundations of Medical Diagnosis," written by physicist Robert Ledley and radiologist Lee Lusted, presents Bayesian inference and symbolic logic to medical difficulties.


    1959


    At MIT, John McCarthy and Marvin Minsky create the Artificial Intelligence Laboratory.


    1960


    James L. Adams, an engineering student, builds the Stanford Cart, a remote-controlled vehicle with a television camera.


    1962


    In his short novel "Without a Thought," science fiction and fantasy author Fred Saberhagen develops sentient killing robots known as Berserkers.


    1963


    John McCarthy founds the Stanford Artificial Intelligence Laboratory (SAIL).


    1963


    Under Project MAC, the Advanced Research Projects Agency (ARPA) of the United States Department of Defense begins funding artificial intelligence research at MIT.


    1964


    Joseph Weizenbaum of MIT creates ELIZA, the first program allowing natural language conversation with a computer (a "chatbot").


    1965


    British statistician I. J. Good publishes "Speculations Concerning the First Ultraintelligent Machine," which predicts an impending intelligence explosion.


    1965


    Philosopher Hubert L. Dreyfus and mathematician Stuart E. Dreyfus publish "Alchemy and AI," a paper critical of artificial intelligence.


    1965


    Joshua Lederberg and Edward Feigenbaum found the Stanford Heuristic Programming Project to model scientific reasoning and build expert systems.


    1965


    Donald Michie is the head of Edinburgh University's Department of Machine Intelligence and Perception.


    1965


    Georg Nees organizes the first generative art exhibition, Computer Graphic, in Stuttgart, West Germany.


    1965


    With the expert system DENDRAL, computer scientist Edward Feigenbaum begins a ten-year effort to automate the chemical analysis of organic molecules.


    1966


    The Automatic Language Processing Advisory Committee (ALPAC) issues a cautious assessment of the current state of machine translation.


    1967


    On a DEC PDP-6 at MIT, Richard Greenblatt completes work on Mac Hack, a program that plays competitive tournament chess.


    1967


    Waseda University's Ichiro Kato begins work on the WABOT project, which culminates in the unveiling of a full-scale humanoid intelligent robot five years later.


    1968


    Stanley Kubrick's adaptation of Arthur C. Clarke's science fiction novel 2001: A Space Odyssey, about the artificially intelligent computer HAL 9000, is one of the most influential and highly praised films of all time.


    1968


    At MIT, Terry Winograd starts work on SHRDLU, a natural language understanding program.


    1969


    Washington, DC hosts the First International Joint Conference on Artificial Intelligence (IJCAI).


    1972


    Artist Harold Cohen develops AARON, an artificial intelligence program that creates paintings.


    1972


    Ken Colby describes his efforts using the software program PARRY to simulate paranoia.


    1972


    In What Computers Can't Do, Hubert Dreyfus offers his criticism of artificial intelligence's intellectual basis.


    1972


    Ted Shortliffe, a doctoral student at Stanford University, begins work on MYCIN, an expert system designed to identify bacterial infections and recommend treatments.


    1972


    The UK Science Research Council releases the Lighthill Report on Artificial Intelligence, which highlights AI's technological shortcomings and the challenge of combinatorial explosion.


    1972


    The Assault on Privacy: Computers, Data Banks, and Dossiers, by Arthur Miller, is an early study on the societal implications of computers.


    1972


    University of Pittsburgh physician Jack Myers, medical student Randolph Miller, and computer scientist Harry Pople begin developing INTERNIST-I, an internal medicine expert system.


    1974


    Social scientist Paul Werbos completes his dissertation on backpropagation, an algorithm now widely used to train artificial neural networks for supervised learning.


    1974


    Marvin Minsky distributes MIT AI Lab Memo 306, "A Framework for Representing Knowledge." The memo discusses the notion of a frame, a "remembered framework" that fits reality by "changing detail as appropriate."


    1975


    The phrase "genetic algorithm" is used by John Holland to explain evolutionary strategies in natural and artificial systems.


    1976


    In Computer Power and Human Reason, computer scientist Joseph Weizenbaum expresses his mixed feelings on artificial intelligence research.


    1978


    At Rutgers University, EXPERT, a generic knowledge representation technique for constructing expert systems, goes live.


    1978


    Joshua Lederberg, Douglas Brutlag, Edward Feigenbaum, and Bruce Buchanan begin the MOLGEN project at Stanford to determine DNA structures from segmentation data in molecular genetics research.


    1979


    Raj Reddy, a computer scientist at Carnegie Mellon University, founds the Robotics Institute.


    1979


    The first human is killed while working with an industrial robot.


    1979


    Hans Moravec rebuilds the Stanford Cart, which has evolved into an autonomous rover over almost two decades, and equips it with a stereoscopic vision system.


    1980


    The American Association for Artificial Intelligence (AAAI) holds its first national conference at Stanford University.


    1980


    In his Chinese Room argument, philosopher John Searle argues that a computer's simulation of behavior does not establish understanding, intentionality, or consciousness.


    1982


    Blade Runner, a science fiction film based on Philip K. Dick's novel Do Androids Dream of Electric Sheep? (1968), is released.


    1982


    Physicist John Hopfield popularizes the associative neural network, initially developed by William Little in 1974.


    1984


    In Fortune Magazine, Tom Alexander writes "Why Computers Can't Outthink the Experts."


    1984


    At the Microelectronics and Computer Technology Corporation (MCC) in Austin, TX, computer scientist Doug Lenat launches the Cyc project, which aims to create a vast commonsense knowledge base and artificial intelligence architecture.


    1984


    Orion Pictures releases the first Terminator film, which features robotic assassins from the future and an AI known as Skynet.


    1986


    Honda establishes a research facility to build humanoid robots that can cohabit and interact with humans.


    1986


    Rodney Brooks, an MIT roboticist, describes the subsumption architecture for behavior-based robots.


    1986


    Marvin Minsky publishes The Society of Mind, which depicts the mind as a collection of cooperating agents.


    1989


    The MIT Artificial Intelligence Lab's Rodney Brooks and Anita Flynn publish "Fast, Cheap, and Out of Control: A Robot Invasion of the Solar System," a paper discussing the possibility of sending small robots on interplanetary exploration missions.


    1993


    The Cog interactive robot project is launched at MIT by Rodney Brooks, Lynn Andrea Stein, Cynthia Breazeal, and others.


    1995


    The phrase "generative music" was used by musician Brian Eno to describe systems that create ever-changing music by modifying parameters over time.


    1995


    General Atomics' MQ-1 Predator unmanned aerial vehicle enters US military and reconnaissance service.


    1997


    Under standard tournament conditions, IBM's Deep Blue supercomputer defeats reigning chess champion Garry Kasparov.


    1997


    In Nagoya, Japan, the inaugural RoboCup, an international tournament featuring over forty teams of robot soccer players, takes place.


    1997


    Dragon Systems releases NaturallySpeaking, its first commercial speech recognition product.


    1999


    Sony introduces AIBO, a robotic dog, to the general public.


    2000


    The Advanced Step in Innovative Mobility humanoid robot, ASIMO, is unveiled by Honda.


    2001


    At Super Bowl XXXV, Viisage Technology demonstrates the FaceFINDER automatic face-recognition system.


    2002


    The iRobot Corporation, founded by Rodney Brooks, Colin Angle, and Helen Greiner, releases the Roomba autonomous household vacuum cleaner.


    2004


    In the Mojave Desert near Primm, NV, DARPA hosts its inaugural autonomous vehicle Grand Challenge, but none of the cars complete the 150-mile route.


    2005


    Under the direction of neuroscientist Henry Markram, the Swiss Blue Brain Project is founded to simulate the human brain.


    2006


    Netflix offers a $1 million prize to the first team to substantially improve the accuracy of its recommender system's predictions of users' ratings.


    2007


    DARPA holds the Urban Challenge, an autonomous vehicle competition testing merging, passing, parking, and navigating traffic and intersections.


    2009


    Under the leadership of Sebastian Thrun, Google launches its self-driving car project (now known as Waymo) in the San Francisco Bay Area.


    2009


    Fei-Fei Li of Stanford University describes her work on ImageNet, a database of millions of hand-annotated photographs used to train AIs to visually recognize the presence or absence of objects.


    2010


    Human manipulation of automated trading algorithms causes a "flash crash" in the US stock market.


    2011


    Demis Hassabis, Shane Legg, and Mustafa Suleyman found DeepMind in the United Kingdom to teach AIs to play and excel at classic video games.


    2011


    Watson, IBM's natural language computer system, defeats Jeopardy! champions Ken Jennings and Brad Rutter.


    2011


    Apple's virtual assistant Siri debuts on the iPhone 4S.


    2011


    Computer scientist Andrew Ng and Google colleagues Jeff Dean and Greg Corrado launch the informal Google Brain deep learning research collaboration.


    2013


    The European Union launches the Human Brain Project, which aims to better understand how the human brain functions and to replicate its computational capabilities.


    2013


    Human Rights Watch launches the Campaign to Stop Killer Robots.


    2013


    Spike Jonze's science fiction drama Her is released; in the film, a man falls in love with his AI virtual assistant, Samantha.


    2014


    Ian Goodfellow and colleagues at the University of Montreal introduce Generative Adversarial Networks (GANs), a deep learning framework later used to generate realistic fake human photographs.


    2014


    Eugene Goostman, a chatbot posing as a thirteen-year-old boy, is claimed to have passed a Turing-style test.


    2014


    Physicist Stephen Hawking warns that the development of AI could lead to humanity's extinction.


    2015


    Facebook deploys DeepFace, a deep learning face recognition system, on its social media platform.


    2016


    DeepMind's AlphaGo defeats Lee Sedol, a 9-dan professional Go player, in a five-game match.


    2016


    Microsoft releases the AI chatbot Tay on Twitter, where users quickly teach it to post abusive and inappropriate messages.


    2017


    The Future of Life Institute hosts the Asilomar Conference on Beneficial AI.


    2017


    Anthony Levandowski, a self-driving car startup engineer, forms the Way of the Future church, with the goal of creating a superintelligent robot god.


    2018


    Google announces Duplex, an AI program that uses natural language to schedule appointments over the phone.


    2018


    The European Union's General Data Protection Regulation (GDPR) takes effect, and the EU publishes draft "Ethics Guidelines for Trustworthy AI."


    2019


    A lung cancer screening AI developed by Google AI and Northwestern Medicine in Chicago, IL, outperforms specialist radiologists.


    2019


    OpenAI, cofounded by Elon Musk, releases an artificial intelligence text generator that produces realistic stories and journalism; the full model was initially withheld as "too risky" to release because of its potential to spread fake news.


    2020


    Google AI, in conjunction with the University of Waterloo, the "moonshot factory" X, and Volkswagen, announces TensorFlow Quantum, an open-source framework for quantum machine learning.




    ~ Jai Krishna Ponnappan

    Find Jai on Twitter | LinkedIn | Instagram


    You may also want to read more about Artificial Intelligence here.










    Biased Data Isn't the Only Source of AI Bias.

     





    Eliminating bias in artificial intelligence will require addressing both human and systemic biases.


    Bias in AI systems is often seen as a purely technical issue, but the NIST report recognizes that human biases, as well as systemic, institutional biases, also play a role.

    As a step toward improving our ability to identify and manage the harmful effects of bias in artificial intelligence (AI) systems, researchers at the National Institute of Standards and Technology (NIST) recommend broadening the scope of where we look for the sources of these biases: beyond the machine learning processes and data used to train AI software, to the broader societal factors that influence how technology is developed.

    The advice is at the heart of a new NIST article, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), which incorporates feedback from the public on a draft version issued last summer. 


    The publication provides guidelines related to the AI Risk Management Framework that NIST is creating as part of a wider effort to facilitate the development of trustworthy and responsible AI. 


    The key difference between the draft and final versions of the article, according to NIST's Reva Schwartz, is the increased focus on how bias presents itself not just in AI algorithms and the data used to train them, but also in the sociocultural environment in which AI systems are employed. 

    "Context is crucial," said Schwartz, one of the report's authors and the primary investigator for AI bias. 

    "AI systems don't work in a vacuum. They assist individuals in making choices that have a direct impact on the lives of others. If we want to design trustworthy AI systems, we must take into account all of the elements that might undermine public confidence in AI. Many of these variables extend beyond the technology itself to its consequences, as shown by the responses we got from a diverse group of individuals and organizations." 

    NIST contributes to the research, standards, and data needed to fulfill artificial intelligence's (AI) full potential as a driver of American innovation across industries and sectors. 

    NIST is working with the AI community to define the technological prerequisites for cultivating confidence in AI systems that are accurate and dependable, safe and secure, explainable, and bias-free. 


    AI bias is harmful to humans. 


    AI can make decisions about whether a student is admitted to a school, approved for a bank loan, or accepted as a rental applicant.

    Machine learning software, for example, might be trained on a dataset that underrepresents a certain gender or ethnic group.

    While these computational and statistical causes of bias remain relevant, the new NIST article emphasizes that they do not capture the whole story. 

    For a more thorough understanding of bias, human and systemic biases, which figure prominently in the new edition, must be taken into account.

    Systemic biases arise when institutions operate in ways that disadvantage certain social groups, such as discriminating against people on the basis of race.

    Human biases can relate to how people use data to fill in gaps, such as a person's neighborhood influencing how likely police are to consider them a criminal suspect.

    When human, systemic, and computational biases combine, they can form a dangerous cocktail, particularly when there is no explicit guidance for addressing the risks of deploying AI systems.

    "If we are to construct trustworthy AI systems, we must take into account all of the elements that might erode public faith in AI." 

    Many of these considerations extend beyond the technology itself to the technology's consequences." —Reva Schwartz, AI bias main investigator To address these concerns, the NIST authors propose a "socio-technical" approach to AI bias mitigation. 


    This approach recognizes that AI acts in a wider social context — and that attempts to overcome the issue of bias just on a technological level would fall short. 


    "When it comes to AI bias concerns, organizations sometimes gravitate to highly technical solutions," Schwartz added. 

    "However, these techniques fall short of capturing the social effect of AI systems. The growth of artificial intelligence into many facets of public life necessitates broadening our perspective to include AI as part of the wider social system in which it functions." 

    According to Schwartz, socio-technical approaches to AI are a developing field, and creating measuring tools that take these elements into account would need a diverse mix of disciplines and stakeholders. 

    "It's critical to bring in specialists from a variety of sectors, not just engineering," she added, "and to listen to other organizations and communities about the implications of AI." 

    Over the next several months, NIST will host a series of public workshops aimed at producing a technical report on AI bias and connecting it to the AI Risk Management Framework.


    Visit the AI RMF workshop website for further information and to register.



    A Method for Reducing Artificial Intelligence Bias Risk. 


    In an effort to combat the often pernicious effects of biases in AI that can harm people's lives and public trust in AI, the National Institute of Standards and Technology (NIST) is advancing an approach for identifying and managing those biases, and is asking for the public's help in improving it.


    A Proposal for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), a new publication from NIST, lays out the methodology.


    It's part of the agency's larger effort to encourage the development of trustworthy and responsible AI. 


    NIST will accept public comments on the paper through September 10, 2021 (an extension of the initial deadline of August 5, 2021), and the authors will use the feedback to help shape the agenda of several collaborative virtual events NIST will hold in the coming months.


    This series of events aims to engage the stakeholder community and give them the opportunity to contribute feedback and ideas on how to reduce the risk of bias in AI.


    "Managing the danger of bias in AI is an important aspect of establishing trustworthy AI systems, but the route to accomplishing this remains uncertain," said Reva Schwartz of the National Institute of Standards and Technology, who was one of the report's authors. 

    "We intend to include the community in the development of voluntary, consensus-based norms for limiting AI bias and decreasing the likelihood of negative consequences." 


    NIST contributes to the research, standards, and data needed to fulfill artificial intelligence's (AI) full potential as a catalyst for American innovation across industries and sectors. 


    NIST is working with the AI community to define the technological prerequisites for cultivating confidence in AI systems that are accurate and dependable, safe and secure, explainable, and bias-free. 

    Bias in AI-based goods and systems is a critical, yet poorly defined, component of trustworthiness.

    This bias can be intentional or unintentional.


    NIST is working to get us closer to consensus on recognizing and quantifying bias in AI systems by organizing conversations and conducting research. 


    Because AI can typically make sense of information faster and more reliably than humans, it has become a transformational technology. 

    Everything from medical diagnostics to the digital assistants on our smartphones now uses AI.

    However, as AI's applications have grown, we have seen that its conclusions can be skewed by biases in the data it is given, data that either partially or inaccurately represents the real world.

    Furthermore, some AI systems are built to model complex concepts that cannot be directly measured or captured by data, such as "criminality" or "employment suitability."

    These systems use other criteria, such as where you live or how much education you have, as proxies for the concepts they are attempting to model.


    The imperfect correlation of the proxy data with the original concept can lead to undesirable or discriminatory AI outputs, such as wrongful arrests, or qualified applicants being wrongly denied jobs or loans.
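
    As a hedged illustration of this proxy effect, the short Python sketch below uses purely synthetic data: a model is never shown the protected attribute, yet its approval rates differ by group because a visible "neighborhood" feature is correlated with that attribute. All names and numbers are invented; the stand-in "model" is simply the historical approval rate per neighborhood.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# A protected attribute the model never sees...
group = rng.integers(0, 2, n)
# ...and a visible proxy feature (e.g. neighborhood), ~80% correlated.
neighborhood = (group + (rng.random(n) < 0.2)) % 2

# True qualification is independent of group membership.
qualified = rng.random(n) < 0.5

# Historical labels were partly decided by neighborhood, so the
# training data itself encodes the proxy.
label = qualified & ((neighborhood == 0) | (rng.random(n) < 0.7))

# Stand-in for any classifier trained on (neighborhood, label):
# approve at each neighborhood's historical approval rate.
rate = {nb: label[neighborhood == nb].mean() for nb in (0, 1)}
approve = rng.random(n) < np.array([rate[nb] for nb in neighborhood])

for g in (0, 1):
    print(f"group {g}: approval rate {approve[group == g].mean():.2f}")
```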


    The strategy the authors propose for managing bias involves a conscious effort to identify and manage bias at each phase of an AI system's lifespan, from early concept through design to release.

    The goal is to bring together stakeholders from a variety of backgrounds, both within and outside the technology industry, in order to hear perspectives that have not been heard before.

    “We want to bring together the community of AI developers of course, but we also want to incorporate psychologists, sociologists, legal experts and individuals from disadvantaged communities,” said NIST’s Elham Tabassi, a member of the National AI Research Resource Task Force. 

    "We'd want to hear from individuals who are affected by AI, both those who design AI systems and those who aren't." 


    Preliminary research by the NIST authors included a review of peer-reviewed publications, books, and popular news media, as well as industry reports and presentations.


    This research found that bias can seep into AI systems at any stage of development, often in ways that differ depending on the AI's purpose and the social context in which it is used.

    "An AI tool is often built for one goal, but it is subsequently utilized in a variety of scenarios," Schwartz said. 

    "Many AI applications have also been inadequately evaluated, if at all, in the environment for which they were designed. All these elements might cause bias to go undetected.” 

    Because the team members acknowledge that they do not have all of the answers, Schwartz believes it is critical to get public comment, particularly from those who are not often involved in technical conversations. 


    "We'd want to hear from individuals who are affected by AI, both those who design AI systems and those who aren't." ~ Elham Tabassi.


    "We know bias exists throughout the AI lifespan," added Schwartz. 

    "It would be risky to not know where your model is biased or to assume that there is none. The next stage is to figure out how to see it and deal with it."


    Comments on the proposed method may be provided by downloading and completing the template form (in Excel format) and emailing it to ai-bias@list.nist.gov by Sept. 10, 2021 (extended from the initial deadline of Aug. 5, 2021). 

    This website will be updated with further information on the joint event series.



    ~ Jai Krishna Ponnappan

    Find Jai on Twitter | LinkedIn | Instagram


    You may also want to read and learn more Technology and Engineering here.

    You may also want to read and learn more Artificial Intelligence here.




    Analog Space Missions: Earth-Bound Training for Cosmic Exploration

    What are Analog Space Missions? Analog space missions are a unique approach to space exploration, involving the simulation of extraterrestri...