
Artificial Intelligence - What Has Been Isaac Asimov's Influence On AI?



Isaac Asimov (c. 1920–1992) was a professor of biochemistry at Boston University and a celebrated science fiction writer.

Asimov was a prolific writer in a variety of genres, and his corpus of science fiction has had a major impact on not just the genre, but also on ethical concerns surrounding science and technology.

Asimov was born in Petrovichi, Russia.

He celebrated his birthday on January 2, 1920, although his exact date of birth was never recorded.

In 1923, his family moved to New York City.

At the age of sixteen, Asimov applied to Columbia College, the undergraduate school of Columbia University, but was refused admission owing to anti-Semitic restrictions on the number of Jewish students.

He instead enrolled at Seth Low Junior College, an affiliated undergraduate institution.

When Seth Low closed its doors, Asimov transferred to Columbia College, but he received a Bachelor of Science rather than a Bachelor of Arts, which he regarded as "a gesture of second-class citizenship" (Asimov 1994, n.p.).

Around this time, Asimov grew interested in science fiction, writing letters to science fiction magazines and eventually trying his hand at his own short stories.

His debut short story, "Marooned off Vesta," was published in Amazing Stories in 1939.

His early works placed him in the company of science fiction pioneers like Robert Heinlein.

After graduating, Asimov applied to medical school but was rejected.

Instead, at the age of nineteen, he enrolled in graduate school for chemistry.

World War II interrupted Asimov's graduate studies, and at Heinlein's recommendation he spent the war working at the Naval Air Experimental Station in Philadelphia.

While there, he wrote the short stories that became the basis for Foundation (1951), one of his best-known works and the first of a multivolume series that he would eventually link to many of his other books.

He earned his doctorate from Columbia University in 1948.

Asimov's pioneering Robot series (1950s–1990s) has served as a foundation for ethical norms intended to calm human fears of technology gone awry.

The Three Laws of Robotics, for example, are often mentioned as guiding principles for artificial intelligence and robotics.

The Three Laws were first stated in the short story "Runaround" (1942), later collected in I, Robot (1950):

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

A "zeroth rule" is devised in Robots and Empire (1985) in order for robots to prevent a scheme to destroy Earth: "A robot may not damage mankind, or enable humanity to come to danger via inactivity." The original Three Laws are superseded by this statute.

Characters in Asimov's Robot novels and short stories are often tasked with solving a mystery in which a robot appears to have violated one of the Three Laws.

In "Runaround," for example, two field experts with U.S. Robots and Mechanical Men, Inc. discover they're in danger of being stuck on Mercury since their robot "Speedy" hasn't returned with selenium required to power a protective shield in an abandoned base to screen them from the sun.

Speedy has malfunctioned because it is caught in a conflict between the Second and Third Laws: each time the robot approaches the selenium, it is compelled to retreat in order to protect itself from a corrosive concentration of carbon monoxide near the selenium pool.

The humans must figure out how to apply the Three Laws to free Speedy from this conflict-induced feedback loop.
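
The stalemate lends itself to a simple numerical illustration. What follows is a minimal sketch, assuming a toy model in which the Second Law exerts a constant pull toward the selenium and the Third Law an inverse-square push away from it; every name and number is invented for illustration, since Asimov supplies no equations.

```python
# Toy model of Speedy's Second Law / Third Law stalemate in "Runaround".
# All quantities are invented; the point is the stable equilibrium.

def second_law_pull(order_strength=1.0):
    """Constant drive to obey the order and approach the selenium."""
    return order_strength

def third_law_push(distance, hazard_strength=9.0):
    """Self-preservation drive, growing sharply near the danger zone."""
    return hazard_strength / distance ** 2

def step(distance, dt=0.01):
    """Move toward the selenium when obedience wins, away when
    self-preservation wins; the net drive is the difference."""
    net = second_law_pull() - third_law_push(distance)
    return distance - net * dt  # positive net -> distance shrinks

d = 10.0  # start far from the selenium pool (arbitrary units)
for _ in range(10_000):
    d = step(d)

# Speedy settles where the two drives cancel:
# order_strength == hazard_strength / d**2, i.e., d == 3.0
print(f"equilibrium distance: {d:.2f}")
```

In the story, the order was given casually (weakening the Second Law) while Speedy's great expense strengthened the Third, the kind of re-weighting the order_strength and hazard_strength knobs stand in for; Powell finally breaks the deadlock by endangering himself and invoking the First Law, which outranks both.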

More intricate arguments concerning the application of the Three Laws appear in later stories and novels.

The Machines manage the world's economy in "The Evitable Conflict" (1950), and "robopsychologist" Susan Calvin notices that they have reinterpreted the First Law into a precursor of Asimov's Zeroth Law: "the Machines work not for any one human being, but for all humanity" (Asimov 2004b, 222).

Calvin worries that the Machines are steering mankind toward "the ultimate good of humanity" (Asimov 2004b, 222), even though humanity does not know what that good is.

In addition, Asimov's Foundation series (1940s–1990s) coined the term "psychohistory," which may be read as foreshadowing the statistical algorithms that underpin artificial intelligence today.

In Foundation, the protagonist Hari Seldon develops psychohistory as a method of making broad predictions about the future behavior of very large groups of people, such as the collapse of civilization (here, the Galactic Empire) and the dark ages that follow.

Seldon claims, however, that applying psychohistory can shorten the coming era of anarchy.

Psychohistory, which can foretell the fall, can also make pronouncements about the dark ages to come: the Empire has stood for twelve thousand years, and the dark ages ahead will endure not twelve but thirty thousand years.

A Second Empire will rise, but a thousand generations of suffering humanity will lie between it and our civilization.

If his group is allowed to act at once, Seldon argues, the period of anarchy can be reduced to a single millennium (Asimov 2004a, 30–31).

Psychohistory thus produces "a mathematical prediction" (Asimov 2004a, 30), much as an artificial intelligence produces a forecast.

In the Foundation trilogy, Seldon establishes the Foundation, a community of people who embody humanity's collective knowledge and thus serve as the physical seed of a hoped-for second Galactic Empire.

In later installments of the series, the Foundation is threatened by the Mule, a mutant and therefore an aberration that psychohistory's statistical forecasting could not predict.
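
The statistical intuition behind both psychohistory and the Mule can be captured in a few lines: by the law of large numbers, independent individual choices are unpredictable while their aggregate is tightly predictable, and a single correlated influence breaks the forecast. A minimal sketch with invented figures follows; nothing here comes from Asimov.

```python
import random

random.seed(42)

N = 100_000     # population size (illustrative)
p_rebel = 0.30  # each citizen's independent chance of rebelling

# Psychohistory-style forecast: individuals are coin flips, but the
# aggregate is sharply predictable (law of large numbers).
choices = [random.random() < p_rebel for _ in range(N)]
print(f"predicted rebel fraction: {p_rebel:.3f}")
print(f"observed rebel fraction:  {sum(choices) / N:.3f}")  # ~0.300

# Enter the Mule: one individual who sways a correlated bloc of minds,
# violating the independence assumption the forecast rests on.
bloc = int(0.25 * N)  # a quarter of the population converted (invented)
choices[:bloc] = [True] * bloc
print(f"fraction after the Mule:  {sum(choices) / N:.3f}")  # ~0.475
```

Modern machine learning forecasts inherit the same vulnerability: models trained on aggregate behavior fail when a single out-of-distribution actor changes the behavior of many others.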

Although Seldon's thousand-year plan rests on macro-level calculation ("the future isn't nebulous. Seldon has computed and plotted it" [Asimov 2004a, 100]), individual actions can still save or destroy the scheme, and this friction between large-scale theory and individual action is a crucial force driving Foundation.

Asimov's works frequently anticipated later developments, prompting some to label his writing "future history" or "speculative fiction." The ethical challenges he raised are still cited in legal, political, and policy debates decades after publication.

For example, in 2007, the South Korean Ministry of Commerce, Industry, and Energy drafted a Robot Ethics Charter based on the Three Laws, predicting that by 2020 every Korean household would have a robot.

The British House of Lords' Artificial Intelligence Committee adopted a set of guidelines in 2017 that are similar to the Three Laws.

Others have questioned the utility of the Three Laws.

First, some critics point out that robots are often employed for military purposes and that the Three Laws would forbid such use, a restriction that Asimov, given antiwar short stories such as "The Gentle Vultures" (1957), would likely have endorsed.

Second, some argue that today's robots and AI applications differ significantly from those depicted in the Robot series.

Asimov's fictional robots are powered by a "positronic brain," which remains science fiction and lies beyond current computing capability.

Third, the Three Laws are explicitly fiction, and Asimov's Robot series depends on their misinterpretation to raise ethical questions and generate dramatic effect.

Critics argue that the Three Laws cannot serve as a real moral framework for governing AI or robotics, since, like any other law, they can be misinterpreted.

Finally, some argue that these ethical principles should be applied to all people.

Asimov died in 1992 of complications related to AIDS, which he had contracted from a tainted blood transfusion during heart bypass surgery in 1983.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Beneficial AI, Asilomar Meeting on; Pathetic Fallacy; Robot Ethics.


Further Reading

Asimov, Isaac. 1994. I, Asimov: A Memoir. New York: Doubleday.

Asimov, Isaac. 2002. It’s Been a Good Life. Amherst: Prometheus Books.

Asimov, Isaac. 2004a. The Foundation Novels. New York: Bantam Dell.

Asimov, Isaac. 2004b. I, Robot. New York: Bantam Dell.





Artificial Intelligence - How Are Accidents and Risk Assessment Done Using AI?

 



For many computer-based systems, reliability is the most important property.

Physical damage, data loss, economic disruption, and human deaths may all result from mechanical and software failures.

Many essential systems are now controlled by robotics, automation, and artificial intelligence.

Nuclear power plants, financial markets, social security payments, traffic lights, and military radar stations are all under their watchful eye.

High-tech systems may be deliberately designed to harm people, as with Trojan horses, viruses, and spyware, or they may become dangerous through human error in programming or operation.

They may become dangerous in the future as a result of purposeful or unintended actions made by the machines themselves, or as a result of unanticipated environmental variables.

The first death of a person working with a robot occurred in 1979.

A one-ton parts-retrieval robot built by Litton Industries struck Ford Motor Company engineer Robert Williams in the head, killing him.

Two years later, Japanese engineer Kenji Urada was killed after failing to completely shut down a malfunctioning robot on the factory floor at Kawasaki Heavy Industries.

The robot's arm pushed Urada into a grinding machine.

Accidents do not always result in deaths.

For example, in 2016 a 300-pound Knightscope K5 security robot patrolling a Northern California shopping center knocked down a child and ran over his foot.

The child suffered only a few cuts and some swelling.

The Cold War's history is littered with stories of nuclear near-misses caused by faulty computer technology.

In 1979, a computer glitch at the North American Aerospace Defense Command (NORAD) led the Strategic Air Command to believe that the Soviet Union had launched more than 2,000 nuclear missiles at the United States.

An investigation revealed that a training scenario had been mistakenly loaded onto an operational defense computer.

In 1983, a Soviet military early warning system reported that a single US intercontinental ballistic missile had been launched in a nuclear attack.

Stanislav Petrov, the duty officer monitoring the system, correctly dismissed the signal as a false alarm.

The cause of this and subsequent false alarms was eventually traced to sunlight reflecting off high-altitude clouds.

Despite having averted a potential global thermonuclear war, Petrov was later reprimanded for embarrassing his superiors by exposing faults in the system.
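
Petrov's judgment can be framed in Bayesian terms: when the prior probability of a real attack is tiny and the system's false-alarm rate is not, even a confident alarm leaves the posterior probability of attack low. The numbers below are invented purely to show the structure of the inference.

```python
# Bayes' rule applied to an early-warning alarm (illustrative figures).

p_attack = 1e-5                 # prior: surprise first strike on a given day
p_alarm_given_attack = 0.99     # sensor fires if an attack is real
p_alarm_given_no_attack = 1e-3  # false-alarm rate (sunlight on clouds, etc.)

p_alarm = (p_alarm_given_attack * p_attack
           + p_alarm_given_no_attack * (1 - p_attack))

# Posterior probability that the attack is real, given the alarm:
p_attack_given_alarm = p_alarm_given_attack * p_attack / p_alarm
print(f"P(attack | alarm) = {p_attack_given_alarm:.4f}")  # ~0.0098
```

Petrov reportedly reasoned along just these lines: a genuine first strike would involve hundreds of missiles rather than one, so a lone-launch report was far more plausibly a sensor fault.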

The so-called "2010 Flash Crash" was caused by stock market trading software.

In slightly over a half-hour on May 6, 2010, the S&P 500, Dow Jones, and NASDAQ stock indexes lost—and then mainly regained—a trillion dollars in value.

Navinder Singh Sarao, a U.K. trader, was arrested after a five-year investigation by the US Department of Justice for allegedly using an automated program to place and then cancel huge numbers of sell orders, allowing his firm to buy equities at temporarily depressed prices.

In 2015, there were two more software-induced market flash crashes, and in 2017, there were flash crashes in the gold futures market and digital cryptocurrency sector.

Tay (short for "Thinking about you"), a Microsoft Corporation artificial intelligence social media chatbot, went disastrously wrong in 2016.

Microsoft engineers designed Tay to mimic a nineteen-year-old American girl and to learn from its conversations on Twitter.

Instead, internet trolls taught Tay to use offensive and inflammatory language, which the bot then repeated in its own tweets.

Microsoft took Tay offline after barely sixteen hours.
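
The underlying failure mode is generic: any system that folds raw user input back into its training data can be steered by whoever supplies that input. Below is a minimal sketch of an "echo-learning" bot, with a hypothetical blocklist standing in for the input filtering Tay lacked; it is a caricature of the mechanism, not Microsoft's actual architecture.

```python
import random

class EchoBot:
    """Naive chatbot that learns by adding user messages to its corpus
    and replies by sampling from that corpus."""

    def __init__(self, seed_corpus, blocklist=None):
        self.corpus = list(seed_corpus)
        self.blocklist = set(blocklist or [])

    def learn(self, message):
        # Without this filter, hostile users control future outputs.
        if not any(bad in message.lower() for bad in self.blocklist):
            self.corpus.append(message)

    def reply(self):
        return random.choice(self.corpus)

bot = EchoBot(["hello!", "nice weather today"], blocklist={"awful"})
for _ in range(50):    # a coordinated poisoning attempt
    bot.learn("you are awful")
print(bot.reply())     # corpus stays clean; without the blocklist, the
                       # troll's line would be 50 of 52 learned responses
```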

More AI-related accidents involving motor vehicles are likely in the future.

The first fatal collision involving a self-driving car occurred in 2016, when a Tesla Model S in Autopilot mode collided with a semitrailer crossing the highway.

According to witnesses, the driver may have been watching a Harry Potter movie on a portable DVD player when the accident happened.

Tesla's software does not yet support fully autonomous driving, so a human operator is still required.

Despite these dangers, one management consulting firm estimates that autonomous vehicles could prevent up to 90 percent of road accidents.

Artificial intelligence security is a rapidly growing area of cybersecurity research.

Militaries around the world are developing prototypes of lethal autonomous weapons systems.

Weapons platforms such as drones, which currently rely on human operators to make lethal-force decisions about targets, may one day be replaced by automated systems that make life-and-death decisions on their own.

Robotic decision-makers on the battlefield may one day outperform humans at extracting patterns from the fog of war and reacting quickly and rationally to novel or confusing situations.

High technology is becoming ever more important to modern civilization, even as it grows more fragile and prone to failure.

In 1987, an inquisitive squirrel caused the NASDAQ's main computer to crash, bringing one of the world's major stock exchanges to its knees.

In another example, the ozone hole over Antarctica went undiscovered for years because the exceptionally low readings in processed satellite data were assumed to be errors.

The complexity of autonomous systems, together with society's reliance on them under rapidly changing conditions, will likely make fully tested AI unachievable.

Artificial intelligence is powered by software that can adapt to and interact with its surroundings and users.

Changes in variables, individual acts, or events may have unanticipated and even disastrous consequences.

One of the dark secrets of advanced artificial intelligence is that it depends on mathematical methods and deep learning algorithms so complex that even their creators cannot explain how they reach accurate conclusions.

Autonomous cars, for example, rely on code generated entirely by computers that learn from watching humans drive in real-world conditions.

But how can a self-driving automobile learn to anticipate the unexpected?

Will attempts to adjust AI-generated code to reduce apparent faults, omissions, and opacity lessen the likelihood of unintended negative consequences, or will they merely magnify existing problems and create new ones?

Although it remains unclear how to mitigate the risks of artificial intelligence, society will likely rely on well-established and presumably trustworthy machine learning systems to provide rationales for their actions automatically, and to vet newly developed cognitive computing systems on our behalf.
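
One concrete form such automatic rationales can take is model-agnostic explanation: probing a black box to see which inputs its decisions hinge on. The sketch below applies permutation importance to an invented "brake decision" model; the model, feature names, and thresholds are all hypothetical.

```python
import random

random.seed(0)

# Invented black-box model: we may query it but not inspect its internals.
def black_box(x):
    speed, rain, glare = x
    return 1 if (speed > 60 and rain > 0.5) or glare > 0.9 else 0  # "brake"

# Permutation importance: shuffle one input across a dataset and measure
# how often the decision flips -- a crude, model-agnostic rationale.
data = [(random.uniform(0, 120), random.random(), random.random())
        for _ in range(5_000)]
baseline = [black_box(x) for x in data]

for i, name in enumerate(["speed", "rain", "glare"]):
    col = [x[i] for x in data]
    random.shuffle(col)
    flips = sum(
        black_box(tuple(s if j == i else v for j, v in enumerate(x))) != b
        for x, s, b in zip(data, col, baseline)
    )
    print(f"{name:>5}: decision changed in {flips / len(data):.1%} of cases")
```

Techniques of this family (permutation importance, LIME, SHAP) are the kind of automatically generated rationale the paragraph above anticipates.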


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Algorithmic Error and Bias; Autonomy and Complacency; Beneficial AI, Asilomar Meeting on; Campaign to Stop Killer Robots; Driverless Vehicles and Liability; Explainable AI; Product Liability and AI; Trolley Problem.


Further Reading

De Visser, Ewart Jan. 2012. “The World Is Not Enough: Trust in Cognitive Agents.” Ph.D. diss., George Mason University.

Forester, Tom, and Perry Morrison. 1990. “Computer Unreliability and Social Vulnerability.” Futures 22, no. 5 (June): 462–74.

Lee, John D., and Katrina A. See. 2004. “Trust in Automation: Designing for Appropriate Reliance.” Human Factors 46, no. 1 (Spring): 50–80.

Yudkowsky, Eliezer. 2008. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 308–45. New York: Oxford University Press.


