Artificial Intelligence - Autonomy and Complacency in AI Systems

The concepts of machine autonomy, human autonomy, and complacency are intertwined.

Artificial intelligences are undoubtedly becoming more independent as they are trained to learn from their own experience and intake of data.

As machines gain more skills than humans, humans tend to become increasingly dependent on them to make judgments and to react correctly to unexpected events.

This dependence on AI systems' decision-making may lead to a loss of human agency and to complacency.

Such complacency may, in turn, mean that major faults in an AI system or its decision-making processes go unnoticed and unaddressed.

Autonomous machines are those that can function in unsupervised settings, adapt to new situations and experiences, learn from previous errors, and choose the best possible course of action in each case without fresh programming input.

To put it another way, these machines learn from their experiences and can, in certain respects, go beyond their original programming.

The idea is that programmers cannot foresee every circumstance an AI-enabled machine might encounter in the course of its activities, so it must be able to adapt.

This view is not universally accepted: others argue that these systems' adaptability is itself inherent in their programming, since their programs are designed to be adaptable.

These debates are exacerbated by a deeper disagreement over whether any agent, including a human, can truly express free will and act autonomously.
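
To make the idea of learning beyond explicit programming concrete, here is a minimal sketch of tabular Q-learning on a tiny, hypothetical five-state "corridor" world. Every name and parameter in it (N_STATES, ALPHA, the step function, and so on) is an illustrative invention, not a description of any particular system discussed above.

import random

random.seed(0)

N_STATES = 5          # states 0..4; reaching state 4 is the "goal"
ACTIONS = (-1, +1)    # move left or move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# The Q-table starts empty: every preference the agent ends up with
# is acquired from experience, not hand-coded in advance.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    # Hypothetical environment: a bounded corridor, reward only at the goal.
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(100):
    s = 0
    while s != N_STATES - 1:
        if random.random() < EPS:           # occasionally explore
            a = random.choice(ACTIONS)
        else:                               # otherwise exploit experience
            best = max(Q[(s, act)] for act in ACTIONS)
            a = random.choice([act for act in ACTIONS if Q[(s, act)] == best])
        s2, r = step(s, a)
        # Update the value estimate from the observed outcome:
        # the "learning from previous errors" in the definition above.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, act)] for act in ACTIONS) - Q[(s, a)])
        s = s2

# The learned policy prefers moving right at every state, although no rule
# "move right" was ever written into the program explicitly.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})

Both sides of the disagreement are visible in this sketch: the update rule is fixed in advance by the programmer, yet the behavior the agent ends up with depends on experiences the programmer never enumerated.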

As the technology advances, the autonomy of AI programs is not the only aspect of autonomy being examined.

Worries have also been raised about AI's influence on human autonomy, as well as about human complacency toward machines.

As AI systems grow increasingly attuned to anticipating people's wishes and preferences, human choice risks becoming irrelevant, since people no longer have to make decisions for themselves.

The interaction of human employees with automated systems has received considerable attention.

According to studies, humans are prone to overlook flaws in these processes, particularly as they become routinized, and to develop a positive expectation of success rather than a watchful expectation of failure.

This expectation of success leads operators or supervisors of automated processes to place their confidence in inaccurate readouts or faulty machine judgments, which can result in mistakes and accidents.
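
A toy simulation can make this mechanism concrete. In the sketch below, an operator cross-checks an automated readout less and less often the longer its run of apparent successes; the reliability figure and the verification rule are hypothetical choices for illustration, not values drawn from the studies cited under Further Reading.

import random

random.seed(0)

RELIABILITY = 0.995   # hypothetical: automation is right 99.5% of the time
TRIALS = 10_000

caught = missed = checks = 0
streak = 0            # run of perceived successes

for _ in range(TRIALS):
    automation_correct = random.random() < RELIABILITY
    # Complacency model: the longer the apparent run of successes,
    # the less likely the operator is to verify the readout.
    verifies = random.random() < 1.0 / (1.0 + streak)
    checks += verifies
    if verifies and not automation_correct:
        caught += 1
        streak = 0                    # trust resets after a caught fault
    else:
        if not automation_correct:
            missed += 1               # fault slips through unnoticed
        streak += 1                   # success is perceived, warranted or not

print(f"verifications: {checks}, faults caught: {caught}, faults missed: {missed}")

Because verification becomes rare as the streak of perceived successes grows, nearly all of the infrequent faults slip through unnoticed, which is precisely the pattern described above: high reliability breeds the inattention that lets failures propagate.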


~ Jai Krishna Ponnappan




See also: 


Accidents and Risk Assessment; Autonomous and Semiautonomous Systems.



Further Reading


André, Quentin, Ziv Carmon, Klaus Wertenbroch, Alia Crum, Frank Douglas, William Goldstein, Joel Huber, Leaf Van Boven, Bernd Weber, and Haiyang Yang. 2018. “Consumer Choice and Autonomy in the Age of Artificial Intelligence and Big Data.” Customer Needs and Solutions 5, no. 1–2: 28–37.

Bahner, J. Elin, Anke-Dorothea Hüper, and Dietrich Manzey. 2008. “Misuse of Automated Decision Aids: Complacency, Automation Bias, and the Impact of Training Experience.” International Journal of Human-Computer Studies 66, no. 9: 688–99.

Lawless, W. F., Ranjeev Mittu, Donald Sofge, and Stephen Russell, eds. 2017. Autonomy and Intelligence: A Threat or Savior? Cham, Switzerland: Springer.

Parasuraman, Raja, and Dietrich H. Manzey. 2010. “Complacency and Bias in Human Use of Automation: An Attentional Integration.” Human Factors 52, no. 3: 381–410.




