The concepts of machine autonomy, human autonomy, and
complacency are intertwined.
Artificial intelligences are undoubtedly getting more
independent as they are trained to learn from their own experiences and data
intake.
As machines gain more skills than humans, people tend to become
increasingly dependent on them to make judgments and react correctly to
unexpected events.
This dependence on AI systems' decision-making processes
may lead to a loss of human agency and to complacency.
Such complacency can leave operators unable to respond when the AI
system or its decision-making processes fail in a major way.
Autonomous machines are ones that can function in
unsupervised settings, adapt to new situations and experiences, learn from
previous errors, and determine the best possible course of action in each case
without the need for fresh programming input.
To put it another way, these robots learn from their
experiences and are capable of going beyond their original programming in
certain respects.
The idea is that programmers cannot foresee every
circumstance an AI-enabled machine might encounter in the course of its
activities, so the machine must be able to adapt.
This view is not universally accepted; others argue that these
systems' adaptability is itself built into their programming, since their
programs are designed to be adaptable.
The disagreement over whether any agent, including humans,
can express free will and act autonomously exacerbates these debates.
With the advancement of technology, the autonomy of AI
programs is not the only element of autonomy that is being explored.
Worries have also been raised concerning the influence on
human autonomy, as well as concerns about machine complacency.
As AI systems grow increasingly attuned to anticipating people's
wishes and preferences, people may stop making decisions for themselves,
rendering their own choices irrelevant.
The interaction of human employees and automated systems has
gotten a lot of attention.
According to studies, humans are more prone to overlook
flaws in automated processes, particularly once those processes become
routine, because they come to hold a positive expectation of success rather
than a watchful expectation of failure.
This expectation of success causes the operators or
supervisors of automated processes to place their confidence in inaccurate
readouts or machine judgments, which may lead to mistakes and accidents.
~ Jai Krishna Ponnappan