
Artificial Intelligence - What Is The Stop Killer Robots Campaign?

 



The Campaign to Stop Killer Robots is a nonprofit organization devoted to mobilizing and campaigning against the development and deployment of lethal autonomous weapons systems (LAWS).

The campaign's central concern is that armed robots making life-or-death decisions undercut the legal and ethical restraints on violence in human conflicts.

Advocates for LAWS argue that these technologies are consistent with existing weapons and regulations, such as cruise missiles, which are programmed and fired by humans to seek out and destroy a specific target.

Advocates also contend that robots are completely dependent on people, that they are bound by their design and must perform the behaviors assigned to them, and that with appropriate monitoring they may save lives by substituting for humans in hazardous situations.


The Campaign to Stop Killer Robots rejects responsible use as a viable option, citing fears that the development of LAWS could trigger a new arms race.


The campaign underlines the danger of losing human control over the use of lethal force in situations where armed robots identify and eliminate a threat before human intervention is feasible.

Human Rights Watch, an international nongovernmental organization (NGO) that promotes fundamental human rights and investigates violations of those rights, organized and managed the campaign, which was officially launched on April 22, 2013, in London, England.


Many member groups make up the Campaign to Stop Killer Robots, including the International Committee for Robot Arms Control and Amnesty International.


The campaign is led by a steering committee and a global coordinator.

As of 2018, the steering committee consisted of eleven nongovernmental organizations.

Mary Wareham, who previously led international efforts to ban land mines and cluster bombs, serves as the campaign's global coordinator.

Efforts to ban armed robots, like earlier efforts to ban land mines and cluster bombs, focus on their potential to inflict needless suffering and indiscriminate harm on humans.


The United Nations Convention on Certain Conventional Weapons (CCW), which first entered into force in 1983, governs the international prohibition and restriction of such weapons.




The CCW has yet to agree on a ban on armed robots and lacks any mechanism for enforcing agreed-upon restrictions, so the Campaign to Stop Killer Robots calls for LAWS to be brought within the CCW's scope.

The Campaign to Stop Killer Robots also promotes the adoption of new international treaties to implement more preemptive restrictions.

The Campaign to Stop Killer Robots offers tools for educating and mobilizing the public, including multimedia databases, campaign reports, and a mailing list, in addition to lobbying governing authorities for treaty and convention prohibitions.

The Campaign also seeks the participation of technology businesses, asking them to voluntarily refuse to participate in the development of LAWS.

Through its @BanKillerRobots account on Twitter, the Campaign tracks and publicizes the names of companies that have pledged not to engage in the development or marketing of intelligent weapons.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Autonomous Weapons Systems, Ethics of; Battlefield AI and Robotics; Lethal Autonomous Weapons Systems.


Further Reading


Baum, Seth. 2015. “Stopping Killer Robots and Other Future Threats.” Bulletin of the Atomic Scientists, February 22, 2015. https://thebulletin.org/2015/02/stopping-killer-robots-and-other-future-threats/.

Campaign to Stop Killer Robots. 2020. https://www.stopkillerrobots.org/.

Carpenter, Charli. 2016. “Rethinking the Political / -Science- / Fiction Nexus: Global Policy Making and the Campaign to Stop Killer Robots.” Perspectives on Politics 14, no. 1 (March): 53–69.

Docherty, Bonnie. 2012. Losing Humanity: The Case Against Killer Robots. New York: Human Rights Watch.

Garcia, Denise. 2015. “Killer Robots: Why the US Should Lead the Ban.” Global Policy 6, no. 1 (February): 57–63.


Artificial Intelligence - What Are AI Berserkers?

 


Berserkers are intelligent killing robots first described by science fiction and fantasy novelist Fred Saberhagen (1930–2007) in his 1962 short story "Without a Thought." Berserkers later appeared as recurring antagonists in many more of Saberhagen's stories and novels.

Berserkers are a sentient, self-replicating race of space-faring robots with the mission of annihilating all life.

They were built as an ultimate doomsday weapon in a long-forgotten interplanetary conflict between two extraterrestrial civilizations (i.e., intended more as a threat or deterrent than for actual use).

The facts of how the Berserkers were unleashed are lost to time, since they seem to have destroyed their creators as well as their foes, and they have been ravaging the Milky Way galaxy ever since.

They come in a variety of sizes, from human-scale units to heavily armored planetoids (cf. the Death Star), and are equipped with a variety of weapons capable of sterilizing planets.

Any sentient species that fights back, such as humans, is a priority for the Berserkers.

They construct factories in order to replicate and improve themselves, but their basic objective of eliminating life remains unchanged.

It's uncertain how far they evolve; some individual units end up questioning or even changing their purpose, while others develop strategic brilliance (e.g., Brother Assassin, "Mr. Jester," Rogue Berserker, Shiva in Steel).

While the Berserkers' ultimate purpose of annihilating all life is evident, their tactical activities are uncertain owing to unpredictability in their cores caused by radioactive decay.

Their name is derived from Norse mythology's Berserkers, powerful human warriors who battled in a fury.

Berserkers depict a worst-case scenario for artificial intelligence: killing machines that think, learn, and reproduce wildly and without emotion.

They demonstrate the deadly hubris of giving AI powerful weapons, a destructive purpose, and unrestrained self-replication, allowing it to escape its creators' comprehension and control.

If Berserkers are ever developed and released, they may represent an inexhaustible danger to living creatures over enormous swaths of space and time.

They are exceedingly hard to eradicate once unbottled, owing to their superior defenses and weaponry, their widespread distribution, their ability to repair and replicate themselves, their autonomous functioning (i.e., without centralized control), their capacity to learn and adapt, and their limitless patience to lie in wait.

The discovery of Berserkers is so horrifying in Saberhagen's books that human civilizations are terrified of constructing their own AI for fear that it may turn against its creators.

Some astute humans, on the other hand, devise a fascinating Berserker counter-weapon: Qwib-Qwibs, self-replicating robots programmed to destroy all Berserkers rather than all life ("Itself Surprised" by Roger Zelazny).

Humans have also deployed cyborgs as an anti-Berserker strategy, pushing the boundaries of what constitutes biological intelligence (Berserker Man, Berserker Prime, Berserker Kill).

Berserkers also exemplify artificial intelligence's potential for inscrutability and strangeness.

Even though Berserkers can communicate with one another, their vast brains are generally unintelligible to the sentient organic lifeforms fleeing or battling them, and they are difficult to study owing to their proclivity to self-destruct if captured.

What can be deduced from their reasoning is that they see life as a plague, a material illness that must be eradicated.

Consequently, the Berserkers lack a thorough understanding of biological intelligence and have never been able to adequately duplicate organic life, despite several attempts.

They do, however, sometimes enlist human defectors (dubbed "goodlife") to aid the Berserkers in their struggle against "badlife" (i.e., any life that resists extermination).

Nonetheless, Berserkers and humans think in almost irreconcilable ways, hindering attempts to reach a common understanding between life and nonlife.

The apparent contrasts between human and machine intelligence are at the heart of much of the conflict in the tales (e.g., artistic appreciation, empathy for animals, a sense of humor, a tendency to make mistakes, the use of acronyms as mnemonics, and even fake encyclopedia entries designed to detect plagiarism).

Berserkers have been known to be defeated by non-intelligent life forms such as plants and mantis shrimp ("Pressure" and "Smasher").

Berserkers may be seen as a specific instance of the von Neumann probe, conceived by mathematician and physicist John von Neumann (1903–1957): self-replicating space-faring robots that might be deployed across the galaxy to explore it efficiently. In the Berserker tales, the Turing Test, developed by mathematician and computer scientist Alan Turing (1912–1954), is both explored and upended.

In "Inhuman Error," human castaways compete with a Berserker to persuade a rescue crew that they are human, while in "Without a Thought," a Berserker tries to figure out whether its game opponent is human.

The Fermi paradox—the concept that if intelligent extraterrestrial civilizations exist, we should have heard from them by now—is also explained by Berserkers.

It's possible that extraterrestrial civilizations haven't contacted Earth because they were destroyed by Berserker-like robots or are hiding from them.

Berserkers, or anything like them, have featured in a number of science fiction books in addition to Saberhagen's (e.g., works by Greg Bear, Gregory Benford, David Brin, Ann Leckie, and Martha Wells; the Terminator series of movies; and the Mass Effect series of video games).

All of these instances demonstrate how the existential risks posed by AI may be explored in the laboratory of fiction.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

de Garis, Hugo; Superintelligence; The Terminator.


Further Reading


Saberhagen, Fred. 2015a. Berserkers: The Early Tales. Albuquerque: JSS Literary Productions.

Saberhagen, Fred. 2015b. Berserkers: The Later Tales. Albuquerque: JSS Literary Productions.

Saberhagen’s Worlds of SF and Fantasy. http://www.berserker.com.

The TAJ: Official Fan site of Fred Saberhagen’s Berserker® Universe. http://www.berserkerfan.org.




Artificial Intelligence - How Are Accidents and Risk Assessment Done Using AI?

 



Reliability is among the most significant features of many computer-based systems.

Physical damage, data loss, economic disruption, and human deaths may all result from mechanical and software failures.

Many essential systems are now controlled by robotics, automation, and artificial intelligence.

Nuclear power plants, financial markets, social security payments, traffic lights, and military radar stations are all under their watchful eye.

High-tech systems may be deliberately designed to be hazardous to people, as with Trojan horses, viruses, and spyware, or they can become dangerous through errors in human programming or operation.

They may become dangerous in the future as a result of purposeful or unintended actions made by the machines themselves, or as a result of unanticipated environmental variables.

The first death of a person working with a robot occurred in 1979, when a one-ton parts-retrieval robot built by Litton Industries struck Ford Motor Company engineer Robert Williams in the head.

Two years later, Japanese engineer Kenji Urada was killed after failing to completely switch off a malfunctioning robot on the production floor at Kawasaki Heavy Industries; the robot's arm shoved him into a grinding machine.

Accidents do not always result in deaths.

For example, in 2016 a 300-pound Knightscope K5 security robot on patrol at a Northern California shopping center knocked down a child and ran over his foot.

The child sustained only a few cuts and some swelling.

The Cold War's history is littered with stories of nuclear near-misses caused by faulty computer technology.

In 1979, a computer glitch at the North American Aerospace Defense Command (NORAD) misled the Strategic Air Command into believing that the Soviet Union had fired over 2,000 nuclear missiles towards the US.

An investigation revealed that a training scenario had been loaded onto an active defense computer by mistake.

In 1983, a Soviet military early-warning system reported that a single US intercontinental ballistic missile had been launched in a nuclear attack.

Stanislav Petrov, the missile defense system's operator, correctly discounted the signal as a false alarm.

The cause of this and subsequent false alarms was ultimately traced to sunlight reflecting off high-altitude clouds.

Despite having averted a global thermonuclear war, Petrov was eventually reprimanded for embarrassing his superiors by exposing the system's faults.

The so-called "2010 Flash Crash" was caused by stock market trading software.

In slightly over a half-hour on May 6, 2010, the S&P 500, Dow Jones, and NASDAQ stock indexes lost—and then mainly regained—a trillion dollars in value.

Navinder Singh Sarao, a U.K. trader, was arrested after a five-year investigation by the US Department of Justice for allegedly using an automated program to place and then cancel huge numbers of sell orders, allowing his firm to buy equities at temporarily depressed prices.

In 2015, there were two more software-induced market flash crashes, and in 2017, there were flash crashes in the gold futures market and digital cryptocurrency sector.

Tay (short for "thinking about you"), a Microsoft Corporation artificial intelligence social media chatterbot, went disastrously wrong in 2016.

Tay was created by Microsoft engineers to imitate a nineteen-year-old American girl and to learn from Twitter discussions.

Instead, internet trolls taught Tay harsh and abusive language, which it then repeated in its own tweets.

After barely sixteen hours, Microsoft deleted Tay's account.

More AI-related accidents in motor vehicle operation may occur in the future.

In 2016, the first fatal collision involving a self-driving car happened when a Tesla Model S in autopilot mode collided with a semi-trailer crossing the highway.

The motorist may have been viewing a Harry Potter movie on a portable DVD player when the accident happened, according to witnesses.

Tesla's software does not yet support completely autonomous driving, so a human operator is required.

Despite these dangers, one management consulting firm claims that autonomous automobiles could prevent up to 90 percent of road accidents.

Artificial intelligence security is rapidly growing as a topic of cybersecurity study.

Militaries around the globe are working on prototypes of lethal autonomous weapons systems.

Automated weapons such as drones, which currently rely on a human operator to make lethal-force decisions about targets, might be replaced by systems that make life-and-death decisions on their own.

Robotic decision-makers on the battlefield may one day outperform humans in extracting patterns from the fog of war and reacting quickly and logically to novel or challenging circumstances.

High technology is becoming more and more important in modern civilization, yet it is also becoming more fragile and prone to failure.

In 1987, an inquisitive squirrel caused NASDAQ's primary computer to crash, bringing one of the world's major stock exchanges to a standstill.

In another example, the ozone hole above Antarctica went undetected for years because the exceptionally low levels recorded in processed satellite data were assumed to be instrument errors.

It's likely that the complexity of autonomous systems, as well as society's reliance on them under quickly changing circumstances, will make completely tested AI unachievable.

Artificial intelligence is powered by software that can adapt to and interact with its surroundings and users.

Changes in variables, individual acts, or events may have unanticipated and even disastrous consequences.

One of the dark secrets of sophisticated artificial intelligence is that it relies on mathematical methods and deep learning algorithms so complex that even its creators cannot explain how it reaches accurate conclusions.

Autonomous cars, for example, rely on instructions written exclusively by computers that learn by observing human drivers in real-world situations.

But how can a self-driving automobile learn to anticipate the unexpected?

Will attempts to adjust AI-generated code to reduce apparent faults, omissions, and impenetrability lessen the likelihood of unintended negative consequences, or will they merely magnify existing problems and produce new ones?

Although it is unclear how to mitigate the risks of artificial intelligence, it is likely that society will rely on well-established and presumably trustworthy machine-learning systems to automatically provide rationales for their actions and to examine newly developed cognitive computing systems on our behalf.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Algorithmic Error and Bias; Autonomy and Complacency; Beneficial AI, Asilomar Meeting on; Campaign to Stop Killer Robots; Driverless Vehicles and Liability; Explainable AI; Product Liability and AI; Trolley Problem.


Further Reading

De Visser, Ewart Jan. 2012. “The World Is Not Enough: Trust in Cognitive Agents.” Ph.D. diss., George Mason University.

Forester, Tom, and Perry Morrison. 1990. “Computer Unreliability and Social Vulnerability.” Futures 22, no. 5 (June): 462–74.

Lee, John D., and Katrina A. See. 2004. “Trust in Automation: Designing for Appropriate Reliance.” Human Factors 46, no. 1 (Spring): 50–80.

Yudkowsky, Eliezer. 2008. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 308–45. New York: Oxford University Press.


