
Artificial Intelligence - Lethal Autonomous Weapons Systems.

  




Lethal Autonomous Weapons Systems (LAWS), also known as "lethal autonomous weapons," "robotic weapons," or "killer robots," are unmanned robotic systems that can select and engage targets autonomously and determine whether or not to employ lethal force.

While human-like robots waging wars or using lethal force against people are common in popular culture (ED-209 in RoboCop, T-800 in The Terminator, etc.), fully autonomous robots are still under development.

LAWS raise serious ethical issues, which are increasingly being contested by AI specialists, NGOs, and the international community.

While the concept of autonomy varies across the debate over LAWS, it is often defined as "the capacity to select and engage a target without further human interference after being commanded to do so" (Arkin 2017).


However, according to their degree of autonomy, LAWS are typically grouped into three categories:

1. Weapons with a person in the loop: These weapons can only identify targets and deliver force in response to a human order.

2. Weapons with a person on the loop: These weapons may choose targets and administer force while being monitored by a human supervisor who can overrule their actions.

3. Human-out-of-the-loop weapons: they can choose targets and deliver force without any human involvement or input.

All three of these categories of unmanned weapons fall under the term LAWS.


The phrase "fully autonomous weapons" applies to both human-out-of-the-loop weapons and human-on-the-loop weapons (weapons with supervised autonomy) when the supervision is limited (for example, when their response time cannot be matched by a human operator).

Robotic weapons aren't a new concept.

Anti-tank mines, for example, have been widely used since World War II (1939–1945); once armed by a human, they engage targets on their own.

Furthermore, LAWS cover a wide range of unmanned weapons with varying degrees of autonomy and lethality, ranging from ground mines to remote-controlled Unmanned Combat Aerial Vehicles (UCAVs), also known as combat drones, and fire-and-forget missiles.

To date, the only fully autonomous weapons in use are "defensive" systems (such as landmines).

Neither completely "offensive" autonomous lethal weapons nor machine learning-based LAWS have been deployed yet.

Even though military research is often kept secret, it is known that a number of nations (including the United States, China, Russia, the United Kingdom, Israel, and South Korea) are significantly investing in military AI applications.

The international AI arms race, which began in the early 2010s, has resulted in a rapid pace of progress in this sector, with fully autonomous lethal weapons on the horizon.

There are numerous obvious forerunners of such weapons.

The MK 15 Phalanx CIWS, for example, is a close-in weapon system capable of autonomously performing search, detection, evaluation, tracking, engagement, and kill assessment duties.

It is primarily used by the US Navy.

Another example is Israel's Harpy, a self-destructing anti-radar "fire-and-forget" drone that is dispatched without a specified target and flies a search pattern before attacking targets.

The deployment of LAWS has the potential to revolutionize combat in the same way as gunpowder and nuclear weapons did earlier.

It would eliminate the distinction between fighters and weaponry, and it would make battlefield delimitation more difficult.

However, LAWS may be linked to a variety of military advantages.

Their employment would undoubtedly be a force multiplier, reducing the number of human warriors on the battlefield.

As a result, military lives would be saved.

Because of their quicker reaction times, their capacity to undertake maneuvers that human fighters cannot (due to human physical restrictions), and their ability to make more efficient judgments (from a military viewpoint), LAWS may be superior to many conventional weapons in terms of force projection.

The use of LAWS, on the other hand, involves significant ethical and political difficulties.

In addition to violating the "Three Laws of Robotics," the deployment of LAWS might lead to the normalization of deadly force, since armed confrontations would involve fewer and fewer human fighters.

Some argue that LAWS are a danger to mankind in this way.

Concerns have also been raised about the use of LAWS by non-state actors and by nations in non-international armed conflicts.

Delegating life-or-death choices to computers might be seen as a violation of human dignity.

Furthermore, the capacity of LAWS to comply with the norms of international humanitarian law, particularly the rules of proportionality and military necessity, is frequently contested.

Despite their lack of compassion, others claim that LAWS would not act on emotions such as rage, which can lead to deliberate suffering such as torture or rape.

Given the difficulty of avoiding war crimes, as shown by countless incidents in previous armed conflicts, it is even possible to claim that LAWS might commit fewer crimes than human warriors.

The effect of LAWS deployment on noncombatants is also a hot topic of debate.

Some argue that the adoption of LAWS will result in fewer civilian losses (Arkin 2017), since AI may be more efficient in decision-making than human warriors.

Some detractors, however, argue that there is a greater chance of bystanders getting caught in the crossfire.

Furthermore, the capacity of LAWS to adhere to the principle of distinction is a hot topic, since differentiating fighters from civilians may be particularly difficult, especially in non-international armed conflicts and asymmetric warfare.

Because they are not moral actors, LAWS cannot be held liable for any of their conduct.

This lack of responsibility may cause further suffering to war victims.

It may also inspire war crimes to be committed.

However, it is debatable whether moral culpability lies with the authority that chose to deploy LAWS or with the persons who designed or built them.

LAWS have attracted a great deal of scientific and political interest over the past ten years.

Eighty-seven non-governmental organizations have joined the group that began the "Stop Killer Robots" campaign in 2012.

Civil society mobilizations have emerged from its campaign for a preemptive prohibition on the creation, manufacturing, and use of LAWS.

A statement signed by over 4,000 AI and robotics academics in 2016 called for a ban on LAWS.

Over 240 technology businesses and organizations promised not to engage in or promote the creation, manufacturing, exchange, or use of LAWS in 2018.

Because current international law may not effectively handle the challenges created by LAWS, the UN's Convention on Certain Conventional Weapons launched a consultation process on the subject.

It formed a Group of Governmental Experts (GGE) in 2016.

Due to a lack of consensus and the resistance of certain nations (especially the United States, Russia, South Korea, and Israel), the GGE has yet to establish an international agreement to outlaw LAWS.

However, twenty-six UN member nations have backed the request for a ban on LAWS, and the European Parliament passed a resolution in June 2018 asking for "an international prohibition on weapon systems that lack human supervision over the use of force." Because there is no example of a technical invention that has not been employed, LAWS will almost certainly be used in the future of conflict.

Nonetheless, there is widespread agreement that humans should be kept "in the loop" and that the use of LAWS should be governed by international and national laws.

However, as the deployment of nuclear and chemical weapons, as well as anti-personnel landmines, has shown, a worldwide legal prohibition on the use of LAWS is unlikely to be respected by all governments and non-state groups.

Artificial Intelligence - Mac Hack.

Mac Hack IV, a 1967 chess program built by Richard Greenblatt, gained notoriety for being the first computer chess program to compete in a chess tournament and to play credibly against humans, achieving a USCF rating of 1,400 to 1,500.

Greenblatt's software, written in the macro assembly language MIDAS, operated on a DEC PDP-6 computer with a clock speed of 200 kilohertz.

While a graduate student at MIT's Artificial Intelligence Laboratory, he built the software as part of Project MAC.

"Chess is the drosophila [fruit fly] of artificial intelligence," according to Russian mathematician Alexander Kronrod; it was the field's chosen experimental organism (quoted in McCarthy 1990, 227).

Creating a champion chess program has been a cherished goal in artificial intelligence since 1950, when Claude Shannon first described chess play as a task for computer programming.

Chess and games in general involve difficult but well-defined problems with well-defined rules and objectives.

Chess has long been seen as a prime illustration of human-like intelligence.

Chess is a well-defined example of human decision-making in which movements must be chosen with a specific purpose in mind, with limited knowledge and uncertainty about the result.

The processing capability of computers in the mid-1960s severely restricted the depth to which a chess move and its alternative responses could be studied, since the number of possible configurations rises exponentially with each successive reply.

The greatest human players have been shown to examine a small number of moves in greater depth rather than a large number of moves at shallower depth.

Greenblatt aimed to recreate the methods used by good players to locate significant game tree branches.

He created Mac Hack to reduce the number of nodes analyzed while choosing moves by using a minimax search of the game tree along with alpha-beta pruning and heuristic components.

In this regard, Mac Hack's style of play was more human-like than that of later chess computers (such as Deep Thought and Deep Blue), which use the sheer force of high processing speeds to study tens of millions of branches of the game tree before making a move.
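
As a rough illustration of this approach, the Python sketch below shows how minimax search with alpha-beta pruning avoids analyzing branches that cannot affect the final choice; evaluate, legal_moves, and apply_move are hypothetical stand-ins for a real chess engine's position evaluation, move generation, and move application, not Mac Hack's actual code.

def alphabeta(state, depth, alpha, beta, maximizing,
              evaluate, legal_moves, apply_move):
    """Minimax search with alpha-beta pruning over a generic game tree."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)          # heuristic value of a leaf position
    if maximizing:
        best = float("-inf")
        for move in moves:
            best = max(best, alphabeta(apply_move(state, move), depth - 1,
                                       alpha, beta, False,
                                       evaluate, legal_moves, apply_move))
            alpha = max(alpha, best)
            if alpha >= beta:           # opponent will avoid this branch: prune
                break
        return best
    else:
        best = float("inf")
        for move in moves:
            best = min(best, alphabeta(apply_move(state, move), depth - 1,
                                       alpha, beta, True,
                                       evaluate, legal_moves, apply_move))
            beta = min(beta, best)
            if alpha >= beta:           # no better outcome possible here: prune
                break
        return best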

In a contest hosted by MIT mathematician Seymour Papert in 1967, Mac Hack defeated MIT philosopher Hubert Dreyfus and gained substantial renown among artificial intelligence researchers.

The RAND Corporation published a mimeographed version of Dreyfus's paper, Alchemy and Artificial Intelligence, in 1965, which criticized artificial intelligence researchers' claims and aspirations.

Dreyfus claimed that no computer could ever acquire intelligence since human reason and intelligence are not totally rule-bound, and hence a computer's data processing could not imitate or represent human cognition.

In a part of the paper titled "Signs of Stagnation," Dreyfus highlighted attempts to construct chess-playing computers, among his many critiques of AI.

Mac Hack's victory against Dreyfus was first seen as vindication by the AI community.

Artificial Intelligence - Machine Learning Regressions.

Machine learning regressions are regression analyses carried out with machine learning techniques.

"Machine learning," a phrase originated by Arthur Samuel in 1959, is a kind of artificial intelligence that produces results without requiring explicit programming.

Instead, the system learns from a database on its own and improves over time.

Machine learning techniques have a wide range of applications (e.g., computer vision, natural language processing, autonomous gaming agents, classification, and regressions) and are used in practically every sector due to their resilience and simplicity of implementation (e.g., tech, finance, research, education, gaming, and navigation).

Despite their vast range of applications, machine learning algorithms may be broadly classified into three learning types: supervised, unsupervised, and reinforcement.

Supervised learning is exemplified by machine learning regressions.

They use algorithms that have been trained on data with labeled continuous numerical outputs.

The quantity of training data and the validation criteria required to suitably train and verify a regression algorithm depend on the problem being addressed.

For data with comparable input structures, the newly developed predictive models give inferred outputs.

These aren't static models.

They may be updated regularly with new training data or by supplying the actual correct outputs for previously unlabeled inputs.

Despite machine learning methods' generalizability, no single algorithm is optimal for all regression problems.

When choosing the best machine learning regression method for the problem at hand, there are many factors to consider (e.g., programming language, available libraries, algorithm types, data size, and data structure).

There are machine learning programs that employ single or multivariable linear regression approaches, much like other classic statistical methods.

These models represent the connections between a single or several independent feature variables and a dependent target variable.

The models provide linear representations of the combined input variables as their output.

These models are only applicable to small, relatively simple data sets.
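
As a minimal sketch, a multivariable linear regression of this kind might look like the following, here using the scikit-learn library (one of many possible choices); the feature matrix X and targets y are made-up stand-ins for real training data.

import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in data: two feature variables, one continuous target.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0], [5.0, 5.0]])
y = np.array([5.1, 4.9, 11.2, 10.8, 15.0])

model = LinearRegression()
model.fit(X, y)                       # learn one coefficient per feature plus an intercept

print(model.coef_, model.intercept_)  # linear representation of the combined inputs
print(model.predict([[6.0, 6.0]]))    # inferred output for a comparable input structure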

Polynomial regressions may be used with nonlinear data.

This necessitates that the programmer already knows the structure of the data, which is often the very thing machine learning models are used to discover.

These methods are unlikely to be appropriate for most real-world data, but they give a basic starting point and might provide users with models that are straightforward to understand.
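
A minimal polynomial regression sketch using NumPy on made-up single-variable data is shown below; note that the polynomial degree must be chosen by the programmer, which is exactly the prior knowledge of the data structure mentioned above.

import numpy as np

# Made-up nonlinear data: y roughly follows a quadratic curve in x.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.9, 9.2, 19.1, 33.0, 51.2])

coeffs = np.polyfit(x, y, deg=2)   # programmer chooses the polynomial degree
predict = np.poly1d(coeffs)        # callable model built from the fitted coefficients

print(coeffs)                      # fitted quadratic coefficients
print(predict(6.0))                # inferred output for a new input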

Decision trees, as the name implies, are tree-like structures that map the input features/attributes of programs to determine the eventual output goal.

A decision tree algorithm starts with the root node (i.e., an input variable), and the answer to that node's condition splits into edges.

A leaf is defined as an edge that no longer divides; an internal edge is defined as one that continues to split.

For example, age, weight, and family diabetic history might be used as input factors in a dataset of diabetic and nondiabetic patients to estimate the likelihood of a new patient developing diabetes.

The age variable might be used as the root node (e.g., age ≥ 40), with the dataset divided into those who are 40 or older and those who are 39 and younger.

If the next internal node after selecting the 40-and-older branch asks whether a parent has or had diabetes, and the leaf for affirmative responses estimates a 60 percent likelihood of this patient developing diabetes, the model returns that leaf as the final output.

This is a very basic decision tree that demonstrates the decision-making process.

Thousands of nodes may readily be found in a decision tree.
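
The diabetes example above might be sketched with scikit-learn's decision tree classifier as follows; the patient records are invented stand-ins for illustration only, and a real model would need far more data.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in dataset: [age, weight_kg, parent_had_diabetes (0/1)] per patient.
X = np.array([[52, 90, 1], [47, 82, 1], [61, 95, 0], [33, 70, 0],
              [28, 64, 0], [44, 88, 1], [55, 77, 1], [39, 68, 0]])
y = np.array([1, 1, 1, 0, 0, 1, 0, 0])      # 1 = diabetic, 0 = nondiabetic

tree = DecisionTreeClassifier(max_depth=3)  # shallow tree kept readable on purpose
tree.fit(X, y)

new_patient = np.array([[45, 85, 1]])
print(tree.predict_proba(new_patient))      # estimated likelihood of each class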

Random forest algorithms are essentially ensembles of decision trees.

They are made up of hundreds of decision trees, the ultimate outputs of which are the averaged outputs of the individual trees.

Although decision trees and random forests are excellent at learning very complex data structures, they are prone to overfitting.

With adequate pruning (e.g., setting minimum sample limits for splits and leaves) and large enough random forests, overfitting may be reduced.
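
A brief sketch of a random forest with pruning-style limits, again using scikit-learn on synthetic data, is given below; n_estimators, min_samples_split, and min_samples_leaf are the kinds of settings the preceding paragraph refers to.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in data: three features, one continuous target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] * 2.0 - X[:, 1] + rng.normal(scale=0.1, size=200)

forest = RandomForestRegressor(
    n_estimators=300,       # hundreds of trees whose outputs are averaged
    min_samples_split=4,    # pruning-style limits that curb overfitting
    min_samples_leaf=2,
)
forest.fit(X, y)
print(forest.predict(X[:3]))   # averaged predictions of the individual trees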

Machine learning techniques inspired by the neural connections of the human brain are known as neural networks.


Neurons are the basic unit of neural network algorithms, much as they are in the human brain, and they are organized into numerous layers.

The input layer contains the input variables, the hidden layers contain the layers of neurons (there may be numerous hidden layers), and the output layer contains the final neuron.

A single neuron in a feedforward process 

(a) takes the input feature variables, 

(b) multiplies the feature values by a weight, 

(c) adds the resultant feature products, together with a bias variable, and 

(d) passes the sums through an activation function, most often a sigmoid function.


The weights and biases of each neuron are adjusted using partial derivative computations propagated backward through the network's neurons and layers.

Backpropagation is the term for this practice.


The output of the activation function of a single neuron is distributed to all neurons in the next hidden layer or final output layer.

As a result, the final neuron's output is the predicted value.
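
The feedforward steps (a) through (d) for a single neuron can be sketched in a few lines of Python; the input values, weights, and bias below are arbitrary illustrations rather than learned parameters.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron_forward(inputs, weights, bias):
    """Feedforward pass of a single neuron: weight, sum, add bias, activate."""
    weighted_sum = np.dot(inputs, weights) + bias   # steps (b) and (c)
    return sigmoid(weighted_sum)                    # step (d)

# Made-up values for illustration.
x = np.array([0.5, -1.2, 3.0])   # step (a): input feature variables
w = np.array([0.8, 0.1, -0.4])
b = 0.25

activation = neuron_forward(x, w, b)
print(activation)   # value passed on to every neuron in the next layer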

Because neural networks are exceptionally adept at learning exceedingly complicated variable associations, programmers may spend less time reconstructing their data.

Neural network models, on the other hand, are difficult to interpret due to their complexity, and the intervariable relationships are largely hidden.

When used on extremely big datasets, neural networks operate best.

They need meticulous hyper-tuning and considerable processing capacity.

For data scientists attempting to comprehend massive datasets, machine learning has become the standard technique.

Machine learning systems are always being improved in terms of accuracy and usability by researchers.

Machine learning algorithms, on the other hand, are only as useful as the data used to train the model.

Poor data produces dramatically erroneous outcomes, while biased data combined with a lack of knowledge deepens societal disparities.

 


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Autonomous Weapons Systems, Ethics of; Battlefield AI and Robotics.


Further Reading:



Arkin, Ronald. 2017. “Lethal Autonomous Systems and the Plight of the Non-Combatant.” In The Political Economy of Robots, edited by Ryan Kiggins, 317–26. Basingstoke, UK: Palgrave Macmillan.

Heyns, Christof. 2013. Report of the Special Rapporteur on Extrajudicial, Summary, or Arbitrary Executions. Geneva, Switzerland: United Nations Human Rights Council. http://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A-HRC-23-47_en.pdf.

Human Rights Watch. 2012. Losing Humanity: The Case against Killer Robots. https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots.

Krishnan, Armin. 2009. Killer Robots: Legality and Ethicality of Autonomous Weapons. Aldershot, UK: Ashgate.

Roff, Heather. M. 2014. “The Strategic Robot Problem: Lethal Autonomous Weapons in War.” Journal of Military Ethics 13, no. 3: 211–27.

Simpson, Thomas W., and Vincent C. Müller. 2016. “Just War and Robots’ Killings.” Philosophical Quarterly 66, no. 263 (April): 302–22.

Singer, Peter. 2009. Wired for War: The Robotics Revolution and Conflict in the 21st Century. New York: Penguin.

Sparrow, Robert. 2007. “Killer Robots.” Journal of Applied Philosophy 24, no. 1: 62–77. 


Artificial Intelligence - What Is The Stop Killer Robots Campaign?

 



The Campaign to Stop Killer Robots is a non-profit organization devoted to mobilizing and campaigning against the development and deployment of lethal autonomous weapons systems (LAWS).

The campaign's main issue is that armed robots making life-or-death decisions undercut legal and ethical restraints on violence in human conflicts.

Advocates for LAWS argue that these technologies are compatible with current weapons and regulations, such as cruise missiles that are planned and fired by humans to hunt out and kill a specific target.

Advocates also say that robots are completely reliant on people, that they are bound by their design and must perform the behaviors that have been assigned to them, and that with appropriate monitoring, they may save lives by substituting humans in hazardous situations.


The Campaign to Stop Killer Robots dismisses responsible usage as a viable option, stating fears that the development of LAWS could result in a new arms race.


The campaign underlines the danger of losing human control over the use of lethal force in situations where armed robots identify and eliminate a threat before human intervention is feasible.

Human Rights Watch, an international nongovernmental organization (NGO) that promotes fundamental human rights and investigates violations of those rights, organized and managed the campaign, which was officially launched on April 22, 2013, in London, England.


Many member groups make up the Campaign to Stop Killer Robots, including the International Committee for Robot Arms Control and Amnesty International.


A steering group and a worldwide coordinator are in charge of the campaign's leadership.

As of 2018, the steering committee consists of eleven non-governmental organizations.

Mary Wareham, who formerly headed international efforts to ban land mines and cluster bombs, is the campaign's worldwide coordinator.

Efforts to ban armed robots, like those to ban land mines and cluster bombs, concentrate on their potential to inflict needless suffering and indiscriminate damage to humans.


The United Nations Convention on Certain Conventional Weapons (CCW), which originally went into force in 1983, coordinates the worldwide ban of weapons.




Because the CCW has yet to agree on a ban on armed robots, and because the CCW lacks any mechanism for enforcing agreed-upon restrictions, the Campaign to Stop Killer Robots calls for the inclusion of LAWS in the CCW.

The Campaign to Stop Killer Robots also promotes the adoption of new international treaties to implement more preemptive restrictions.

The Campaign to Stop Killer Robots offers tools for educating and mobilizing the public, including multimedia databases, campaign reports, and a mailing list, in addition to lobbying governing authorities for treaty and convention prohibitions.

The Campaign also seeks the participation of technology businesses, requesting that they voluntarily refuse to participate in the creation of LAWS.

The @BanKillerRobots account on Twitter is where the Campaign keeps track of and broadcasts the names of companies that have pledged not to engage in the creation or marketing of intelligent weapons.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Autonomous Weapons Systems, Ethics of; Battlefield AI and Robotics; Lethal Autonomous Weapons Systems.


Further Reading


Baum, Seth. 2015. “Stopping Killer Robots and Other Future Threats.” Bulletin of the Atomic Scientists, February 22, 2015. https://thebulletin.org/2015/02/stopping-killer-robots-and-other-future-threats/.

Campaign to Stop Killer Robots. 2020. https://www.stopkillerrobots.org/.

Carpenter, Charli. 2016. “Rethinking the Political / -Science- / Fiction Nexus: Global Policy Making and the Campaign to Stop Killer Robots.” Perspectives on Politics 14, no. 1 (March): 53–69.

Docherty, Bonnie. 2012. Losing Humanity: The Case Against Killer Robots. New York: Human Rights Watch.

Garcia, Denise. 2015. “Killer Robots: Why the US Should Lead the Ban.” Global Policy 6, no. 1 (February): 57–63.


Artificial Intelligence - AI And Robotics In The Battlefield.

 



Because of the growth of artificial intelligence (AI) and robots and their application to military matters, generals on the contemporary battlefield are seeing a possible tactical and strategic revolution.

Unmanned aerial vehicles (UAVs), also known as drones, and other robotic devices played a key role in the wars in Afghanistan (2001–) and Iraq (2003–2011).

It is possible that future conflicts will be waged without the participation of humans.

Without human control or guidance, autonomous robots will fight in war on land, in the air, and beneath the water.

While this vision remains in the realm of science fiction, battlefield AI and robotics raise a slew of practical, ethical, and legal issues that military leaders, technologists, jurists, and philosophers must address.

When many people think about AI and robotics on the battlefield, the first image that springs to mind is "killer robots," armed machines that indiscriminately destroy everything in their path.

There are, however, a variety of applications for battlefield AI that do not include killing.

In recent wars, the most notable application of such technology has been peaceful in character.

UAVs are often employed for surveillance and reconnaissance.

Other robots, such as iRobot's PackBot (from the same firm that makes the vacuum-cleaning Roomba), are employed to locate and assess improvised explosive devices (IEDs), making their safe disposal easier.

Robotic devices can navigate treacherous terrain, such as Afghanistan's caves and mountain crags, as well as areas too dangerous for humans, such as under a vehicle suspected of being rigged with an IED.

Unmanned Underwater Vehicles (UUVs) are also used to detect mines underwater.

IEDs and explosives are so common on today's battlefields that these robotic gadgets are priceless.

Another potential life-saving capacity of battlefield robots that has yet to be realized is in the realm of medicine.

Robots can safely collect injured troops on the battlefield in areas that are inaccessible to their human counterparts, without jeopardizing their own lives.

Robots may also transport medical supplies and medications to troops on the battlefield, as well as conduct basic first aid and other emergency medical operations.

AI and robots have the greatest potential to change the battlefield—whether on land, sea, or in the air—in the arena of deadly power.

The Aegis Combat System (ACS) is an example of an autonomous system used by several militaries across the globe aboard destroyers and other naval combat vessels.

Through radar and sonar, the system can detect approaching threats, such as missiles from the surface or air, mines, or torpedoes from the water.

The system is equipped with a powerful computer system and can use its own munitions to eliminate identified threats.

Despite the fact that Aegis is activated and supervised manually, it has the potential to operate autonomously in order to counter threats faster than humans could.

In addition to partly automated systems like the ACS and UAVs, completely autonomous military robots capable of making judgments and acting on their own may be developed in the future.

The most significant feature of AI-powered robotics is the development of lethal autonomous weapons (LAWs), sometimes known as "killer robots." Robot autonomy exists on a sliding scale.

At one extreme of the spectrum are robots that are designed to operate autonomously, but only in reaction to a specific stimulus and in one direction.

This degree of autonomy is shown by a mine that detonates autonomously when stepped on.

Remotely operated machines, which are unmanned yet controlled remotely by a person, are also available at the lowest end of the range.

Semiautonomous systems occupy the midpoint of the spectrum.

These systems may be able to work without the assistance of a person, but only to a limited extent.

A robot commanded to launch, go to a certain area, and then return at a specific time is an example of such a system.

The machine does not make any "decisions" on its own in this situation.

Semiautonomous devices may also be configured to accomplish part of a task before waiting for further inputs before moving on to the next step.

Full autonomy is the last step.

Fully autonomous robots are designed with a purpose and are capable of achieving it entirely on their own.

This might include the capacity to use deadly force without direct human guidance in warfare circumstances.

Robotic gadgets that are lethally equipped, AI-enhanced, and totally autonomous have the ability to radically transform the current warfare.

Armies would expand with military ground forces made up of both humans and robots, or entirely of robots with no humans at all.

Small, armed UAVs would not be constrained by the requirement for human operators, and they might be assembled in massive swarms to overwhelm bigger, but less mobile troops.

Such technological advancements will entail equally dramatic shifts in tactics, strategy, and even the notion of combat.

This technology will become less expensive as it becomes more widely accessible.

This might disturb the present military power balance.

Even minor governments, and maybe even non-state organizations like terrorist groups, may be able to develop their own robotic army.

Fully autonomous LAWs bring up a slew of practical, ethical, and legal issues.

One of the most pressing practical considerations is safety.

A completely autonomous robot with deadly armament that malfunctions might represent a major threat to everyone who comes in contact with it.

Fully autonomous missiles might theoretically wander off course and kill innocent people due to a mechanical failure.

Unpredictable technological faults and malfunctions may occur in any kind of apparatus.

Such issues offer a severe safety concern to individuals who deploy deadly robotic gadgets as well as unwitting bystanders.

Even if there are no outright malfunctions, limitations in programming may result in potentially disastrous errors.

Programming robots to discriminate between combatants and noncombatants, for example, is a big challenge, and it's simple to envisage misidentification leading to unintentional fatalities.

The greatest concern, though, is that robotic AI may grow too quickly and become independent of human control.

Sentient robots might turn their armament on humans, like in popular science fiction movies and literature, and in fulfillment of eminent scientist Stephen Hawking's grim forecast that the development of AI could end in humanity's annihilation.

LAWs may also lead to major legal issues.

The rules of war apply to human beings.

Robots cannot be held accountable for prospective law crimes, whether criminally, civilly, or in any other manner.

As a result, there's a chance that war crimes or other legal violations may go unpunished.

Here are some serious issues to consider: Can the programmer or engineer of a robot be held liable for the machine's actions? Could a person who gave the robot its "command" be held liable for the robot's unpredictability or blunders on a mission that was otherwise self-directed? Such considerations must be thoroughly considered before any completely autonomous deadly equipment is deployed.

Aside from legal issues of duty, a slew of ethical issues must be addressed.

The conduct of war necessitates split-second moral judgments.

Will self-driving robots be able to tell the difference between a kid and a soldier, or between a wounded and helpless soldier and an active combatant? Will a robotic military force always be seen as a cold, brutal, and merciless army of destruction, or can a robot be designed to behave kindly when the situation demands it? Because combat is riddled with moral dilemmas, LAWs involved in war will always be confronted with them.

Experts question whether lethal autonomous robots can ever be trusted to do the right thing.

Moral action requires not just rationality—which robots may be capable of—but also emotions, empathy, and wisdom.

These latter qualities are much more difficult to implement in code.

Many individuals have called for an absolute ban on research in this field because of the legal, ethical, and practical problems posed by ever more powerful AI-powered robotic military hardware.

Others, on the other hand, believe that scientific advancement cannot be halted.

Rather than prohibiting such study, they argue that scientists and society as a whole should seek realistic answers to the difficulties.

Some argue that keeping continual human supervision and control over robotic military units may address many of the ethical and legal issues.

Others argue that direct supervision is unlikely in the long term because human intellect will be unable to match the pace with which computers think and act.

As the side that gives its robotic troops more autonomy gains an overwhelming advantage over those who strive to preserve human control, there will be an inevitable trend toward more and more autonomy.

They warn that fully autonomous forces will always triumph.

Despite the fact that it is still in its early stages, the introduction of more complex AI and robotic equipment to the battlefield has already resulted in significant change.

AI and robotics on the battlefield have the potential to drastically transform the future of warfare.

It remains to be seen if and how this technology's technical, practical, legal, and ethical limits can be addressed.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Autonomous Weapons Systems, Ethics of; Lethal Autonomous Weapons Systems.


Further Reading

Borenstein, Jason. 2008. “The Ethics of Autonomous Military Robots.” Studies in Ethics, Law, and Technology 2, no. 1: n.p. https://www.degruyter.com/view/journals/selt/2/1/article-selt.2008.2.1.1036.xml.xml.

Morris, Zachary L. 2018. “Developing a Light Infantry-Robotic Company as a System.” Military Review 98, no. 4 (July–August): 18–29.

Scharre, Paul. 2018. Army of None: Autonomous Weapons and the Future of War. New York: W. W. Norton.

Singer, Peter W. 2009. Wired for War: The Robotics Revolution and Conflict in the 21st Century. London: Penguin.

Sparrow, Robert. 2007. “Killer Robots.” Journal of Applied Philosophy 24, no. 1: 62–77.



Artificial Intelligence - Ethics Of Autonomous Weapons Systems.

 



Autonomous weapons systems (AWS) are armaments that are designed to make judgments without the constant input of their programmers.

Navigation, target selection, and when to attack opposing fighters are just a few of the decisions that must be made.

Because of the imminence of this technology, numerous ethical questions and arguments have arisen regarding whether it should be developed and how it should be utilized.

The technology's seeming inevitability prompted Human Rights Watch to launch a campaign in 2013 called "Stop Killer Robots," which pushes for universal bans on their usage.

This movement remains active today.

Other academics and military strategists point to AWS' strategic and resource advantages as reasons for continuing to develop and use them.

A discussion of whether it is desirable or feasible to construct an international agreement on their development and/or usage is central to this argument.

Those who advocate for further technological advancement in these areas focus on the advantages that a military power can gain from using AWS.

These technologies have the potential to reduce collateral damage and battle casualties, minimize needless risk, make military operations more efficient, lessen the psychological harm of war to troops, and reduce the number of humans required in armies.

In other words, they concentrate on the advantages of the weapon to the military that will use it.

The essential assumption in these discussions is that the military's aims are morally worthwhile in and of themselves.

AWS may result in fewer civilian deaths, since the systems can make judgments faster than humans; however, this is not guaranteed, as the decision-making procedures of AWS may instead result in higher civilian fatalities.

However, if they can avoid civilian fatalities and property damage more effectively than conventional fighting, they are more efficient and hence preferable.

In times of conflict, they might also improve efficiency by minimizing resource waste.

Transportation of people and the resources required to keep them alive is a time-consuming and challenging part of battle.

AWS provides a solution to complex logistical issues.

Drones and other autonomous systems don't need rain gear, food, drink, or medical attention, making them less cumbersome and perhaps more successful in completing their objectives.

AWS are considered as eliminating waste and offering the best possible outcome in a combat situation in these and other ways.

The employment of AWS in military operations is inextricably linked to Just War Theory.

Just War Theory examines whether it is morally acceptable or essential for a military force to engage in war, as well as what activities are ethically justifiable during wartime.

If an autonomous system may be used in a military strike, it can only be done if the attack is justifiable in the first place.

According to this viewpoint, the manner in which one is killed is less essential than the justification for one's death.

Those who believe AWS is unethical concentrate on the hazards that such technology entails.

These include scenarios in which enemy combatants capture the weaponry and use it against the military power that deployed it, as well as scenarios involving increased (and uncontrollable) collateral damage, reduced retaliation capability (against enemy combatant aggressors), and loss of human dignity.

One key concern is whether being murdered by a computer without a person as the final decision-maker is consistent with human dignity.

There appears to be something demeaning about being murdered by an AWS that has had minimal human interaction.

Another key worry is the risk aspect, which includes the danger to the user of the technology that if the AWS is taken down (either because to a malfunction or an enemy assault), it will be seized and used against the owner.

Those who oppose the use of AWS are likewise concerned about the concept of just war.

The targeting of civilians by military agents is expressly prohibited under Just War Theory; the only lawful military targets are other military bases or personnel.

However, the introduction of autonomous weapons may imply that a state, particularly one without access to AWS, may be unable to react to military attacks launched by AWS.

In a scenario where one side has access to AWS but the other does not, the side without the weapons will inevitably be without a legal military target, forcing them to either target nonmilitary (civilian) targets or not react at all.

Neither alternative is feasible in terms of ethics or practicality.

Because automated weaponry is widely assumed to be on the horizon, another ethical consideration is how to regulate its use.

Because of the United States' extensive use of remote control drones in the Middle East, this debate has gotten a lot of attention.

Some advocate for a worldwide ban on the technology; although this is often seen as foolish and hence impractical, these advocates frequently point to the UN's restriction against blinding lasers, which has been ratified by 108 countries.

Others want to create an international convention that controls the proper use of these technologies, with consequences and punishments for nations that break these standards, rather than a full prohibition.

There is currently no such agreement, and each state must decide how to govern the usage of these technologies on its own.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Battlefield AI and Robotics; Campaign to Stop Killer Robots; Lethal Autonomous Weapons Systems; Robot Ethics.



Further Reading

Arkin, Ronald C. 2010. “The Case for Ethical Autonomy in Unmanned Systems.” Journal of Military Ethics 9, no. 4: 332–41.

Bhuta, Nehal, Susanne Beck, Robin Geiss, Hin-Yan Liu, and Claus Kress, eds. 2016. Autonomous Weapons Systems: Law, Ethics, Policy. Cambridge, UK: Cambridge University Press.

Killmister, Suzy. 2008. “Remote Weaponry: The Ethical Implications.” Journal of Applied Philosophy 25, no. 2: 121–33.

Leveringhaus, Alex. 2015. “Just Say ‘No!’ to Lethal Autonomous Robotic Weapons.” Journal of Information, Communication, and Ethics in Society 13, no. 3–4: 299–313.

Sparrow, Robert. 2016. “Robots and Respect: Assessing the Case Against Autonomous Weapon Systems.” Ethics & International Affairs 30, no. 1: 93–116.




