Reliability is among the most significant features of many computer-based systems.
Mechanical and software failures can result in physical damage, data loss, economic disruption, and human deaths.
Robotics, automation, and artificial intelligence now control or monitor many essential systems, including nuclear power plants, financial markets, social security payments, traffic lights, and military radar stations.
High-tech systems may be deliberately designed to harm people, as with Trojan horses, viruses, and spyware, or they may become dangerous through human errors in programming or operation.
In the future, they may also become dangerous as a result of deliberate or unintended actions taken by the machines themselves, or because of unanticipated environmental conditions.
The first known death caused by a robot occurred in 1979, when a one-ton parts-retrieval robot built by Litton Industries struck Ford Motor Company engineer Robert Williams in the head.
Two years later, Japanese engineer Kenji Urada was killed after failing to fully shut down a malfunctioning robot on the production floor at Kawasaki Heavy Industries; the robot's arm pushed him into a grinding machine.
Accidents do not always result in deaths.
In 2016, for example, a 300-pound Knightscope K5 security robot on patrol at a Northern California shopping center knocked down a small child and ran over his foot.
The child suffered only a few scrapes and some swelling.
The Cold War's history is littered with stories of nuclear
near-misses caused by faulty computer technology.
In 1979, a computer glitch at the North American Aerospace
Defense Command (NORAD) misled the Strategic Air Command into believing that
the Soviet Union had fired over 2,000 nuclear missiles towards the US.
An investigation revealed that a training scenario had been mistakenly loaded onto an operational defense computer.
In 1983, a Soviet military early-warning system reported that the United States had launched an intercontinental ballistic missile, apparently signaling a nuclear attack.
Stanislav Petrov, the officer on duty, correctly judged the signal to be a false alarm.
The cause of this and subsequent false alarms was ultimately traced to sunlight reflecting off high-altitude clouds.
Despite having helped avert a global thermonuclear war, Petrov was later reprimanded for embarrassing his superiors by exposing faults in the system.
The so-called "2010 Flash Crash" was caused by
stock market trading software.
In slightly over half an hour on May 6, 2010, the S&P 500, Dow Jones, and NASDAQ stock indexes lost, and then largely regained, roughly a trillion dollars in value.
Navinder Singh Sarao, a U.K. trader, was arrested after a five-year investigation by the US Department of Justice for allegedly manipulating an automated system to issue and then cancel huge numbers of sell orders, allowing his firm to acquire equities at temporarily depressed prices.
In 2015, there were two more software-induced market flash
crashes, and in 2017, there were flash crashes in the gold futures market and
digital cryptocurrency sector.
Tay (short for "thinking about you"), an artificial intelligence social media chatbot created by Microsoft Corporation, went disastrously wrong in 2016.
Microsoft engineers designed Tay to imitate a nineteen-year-old American girl and to learn from conversations on Twitter.
Instead, internet trolls taught Tay to use harsh and aggressive language, which it then repeated in its own tweets.
After barely sixteen hours, Microsoft deleted Tay's account.
More AI-related accidents involving motor vehicles may occur in the future.
The first fatal collision involving a self-driving car happened in 2016, when a Tesla Model S operating in Autopilot mode collided with a semi-trailer crossing the highway.
According to witnesses, the driver may have been watching a Harry Potter movie on a portable DVD player when the accident happened.
Tesla's software does not yet support completely autonomous driving, so a human operator is still required.
Despite these dangers, one management consulting company
claims that autonomous automobiles might avert up to 90% of road accidents.
Artificial intelligence security is a rapidly growing topic of cybersecurity research.
Militaries around the globe are working on prototypes of lethal autonomous weapons systems.
Weapons such as drones, which currently rely on a human operator to make deadly-force judgments against targets, might be replaced by automated systems that make life-and-death decisions on their own.
Robotic decision-makers on the battlefield may one day
outperform humans in extracting patterns from the fog of war and reacting
quickly and logically to novel or challenging circumstances.
High technology is becoming more and more important in
modern civilization, yet it is also becoming more fragile and prone to failure.
In 1987, an inquisitive squirrel caused NASDAQ's primary computer to crash, bringing one of the world's major stock exchanges to its knees.
In another example, the ozone hole above Antarctica went undiscovered for years because the exceptionally low ozone levels reported in processed satellite images were assumed to be instrument errors.
It is likely that the complexity of autonomous systems, together with society's reliance on them under rapidly changing circumstances, will make fully tested AI unachievable.
Artificial intelligence is powered by software that adapts to and interacts with its surroundings and users.
Small changes in variables, individual actions, or events may have unanticipated and even disastrous consequences.
One of the dark secrets of sophisticated artificial intelligence is that it relies on mathematical methods and deep learning algorithms so complex that even their creators are baffled as to how it reaches accurate conclusions.
Autonomous cars, for example, depend on instructions written exclusively by computers as they observe people driving in real-world situations.
But how can a self-driving automobile learn to anticipate
the unexpected?
Will attempts to adjust AI-generated code to reduce apparent faults, omissions, and impenetrability lessen the likelihood of unintended negative consequences, or will they merely magnify existing problems and produce new ones? Although it is unclear how to mitigate the risks of artificial intelligence, it is likely that society will rely on well-established and presumably trustworthy machine-learning systems to automatically provide rationales for their actions, as well as to examine newly developed cognitive computing systems on our behalf.
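To make the idea of an automatically supplied rationale concrete, the sketch below is a minimal, hypothetical illustration of one common explainability technique: perturbing each input feature of a toy model and reporting how much the prediction changes. The model, its weights, and the input values are invented for this example and do not describe any production system.

```python
# Minimal sketch of a perturbation-based "rationale" for a single prediction.
# Assumption: the model and its weights are a toy stand-in, not a real system.
import numpy as np

def model(x):
    # Toy stand-in for a trained classifier: a fixed logistic model.
    weights = np.array([2.0, -1.0, 0.5])
    return 1.0 / (1.0 + np.exp(-(x @ weights)))

def explain(x, baseline=None):
    """Attribute the prediction on x to each feature by 'removing' it
    (replacing it with a baseline value) and measuring the score change."""
    baseline = np.zeros_like(x) if baseline is None else baseline
    base_score = model(x)
    attributions = []
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] = baseline[i]          # remove feature i
        attributions.append(base_score - model(perturbed))
    return np.array(attributions)

x = np.array([1.2, 0.4, -0.7])
print("prediction:", model(x))
print("feature attributions:", explain(x))
```

Real explainable-AI tools apply the same basic logic at far larger scale, though how faithfully such rationales reflect what a deep network is actually doing remains an open question.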
~ Jai Krishna Ponnappan
Also see: Algorithmic Error and Bias; Autonomy and Complacency; Beneficial AI, Asilomar Meeting on; Campaign to Stop Killer Robots; Driverless Vehicles and Liability; Explainable AI; Product Liability and AI; Trolley Problem.
Further Reading
De Visser, Ewart Jan. 2012. “The World Is Not Enough: Trust in Cognitive Agents.” Ph.D. diss., George Mason University.
Forester, Tom, and Perry Morrison. 1990. “Computer Unreliability and Social Vulnerability.” Futures 22, no. 5 (June): 462–74.
Lee, John D., and Katrina A. See. 2004. “Trust in Automation: Designing for Appropriate Reliance.” Human Factors 46, no. 1 (Spring): 50–80.
Yudkowsky, Eliezer. 2008. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan
M. Ćirković, 308–45. New York: Oxford University Press.