Artificial Intelligence - Lethal Autonomous Weapons Systems.

  




Lethal Autonomous Weapons Systems (LAWS), also known as "lethal autonomous weapons," "robotic weapons," or "killer robots," are unmanned robotic systems that can select and engage targets autonomously and determine whether or not to employ lethal force.

While human-like robots waging war or using lethal force against people are common in popular culture (ED-209 in RoboCop, the T-800 in The Terminator, etc.), fully autonomous robots are still under development.

LAWS raise serious ethical issues, which are increasingly being contested by AI specialists, NGOs, and the international community.

While the concept of autonomy varies depending on the debate over LAWS, it is often defined as "the capacity to select and engage a target without further human interference after being commanded to do so" (Arkin 2017).


Depending on their degree of autonomy, LAWS are typically sorted into three categories: 

1. Human-in-the-loop weapons: These weapons can select targets and deliver force only in response to a human command.

2. Human-on-the-loop weapons: These weapons may select targets and deliver force under the oversight of a human supervisor who can override their actions.

3. Human-out-of-the-loop weapons: These weapons can select targets and deliver force without any human involvement or input.

All three categories of unmanned weapons fall under LAWS.


The phrase "fully autonomous weapons" applies to human-out-of-the-loop weapons, and also to human-on-the-loop weapons (weapons with supervised autonomy) when that supervision is limited in practice (for example, when the system's reaction time cannot be matched by a human operator).

Robotic weapons aren't a new concept.

Anti-tank mines, for example, have been widely used since World War II (1939–1945); once armed by a human, they engage targets on their own.

Furthermore, LAWS cover a wide range of unmanned weapons with varying degrees of autonomy and lethality, ranging from ground mines to remote-controlled Unmanned Combat Aerial Vehicles (UCAVs), also known as combat drones, and fire-and-forget missiles.

To date, the only fully autonomous weapons in use are "defensive" systems (such as landmines).

Neither completely "offensive" autonomous lethal weapons nor machine learning-based LAWS have been deployed yet.

Even though military research is often kept secret, it is known that a number of nations (including the United States, China, Russia, the United Kingdom, Israel, and South Korea) are significantly investing in military AI applications.

The international AI arms race, which began in the early 2010s, has driven rapid progress in this sector, putting fully autonomous lethal weapons on the horizon.

There are numerous obvious forerunners of such weapons.

The MK 15 Phalanx CIWS, for example, is a close-in weapon system capable of autonomously performing search, detection, evaluation, tracking, engagement, and kill assessment duties.

It is primarily used by the US Navy.

Another example is Israel's Harpy, a self-destructing anti-radar "fire-and-forget" drone that is dispatched without a specified target and flies a search pattern before attacking targets.

The deployment of LAWS has the potential to revolutionize combat in the same way as gunpowder and nuclear weapons did earlier.

It would blur the distinction between fighters and weaponry, and it would make battlefield delimitation more difficult.

However, LAWS may be linked to a variety of military advantages.

Their employment would undoubtedly be a force multiplier, reducing the number of human warriors on the battlefield.

As a result, military lives would be saved.

Thanks to their quicker reaction time, their capacity to perform maneuvers that human fighters cannot (due to human physical limits), and their ability to make more efficient decisions (from a military viewpoint), LAWS may be superior to many conventional weapons in terms of force projection.

The use of LAWS, on the other hand, raises significant ethical and political difficulties.

In addition to violating the "Three Laws of Robotics," the deployment of LAWS might normalize the use of deadly force, since armed confrontations would involve fewer and fewer human fighters.

Some argue that LAWS are a danger to mankind in this way.

Concerns have also been raised about the use of LAWS by non-state actors and by states in non-international armed conflicts.

Delegating life-or-death choices to computers might be seen as a violation of human dignity.

Furthermore, the capacity of LAWS to comply with the norms of international humanitarian law, particularly the rules of proportionality and military necessity, is frequently contested.

Others note that, despite their lack of compassion, LAWS would not act on emotions such as rage, which in human fighters can lead to deliberate cruelty such as torture or rape.

Given the considerable difficulty of preventing war crimes, as countless incidents in past armed conflicts show, it is even possible to argue that LAWS might commit fewer crimes than human warriors.

The effect of LAWS deployment on noncombatants is also a hot topic of debate.

Some argue that the adoption of LAWS will result in fewer civilian losses (Arkin 2017), since AI may be more efficient in decision-making than human warriors.

Some detractors, however, argue that there is a greater chance of bystanders getting caught in the crossfire.

Furthermore, the capacity of LAWS to adhere to the principle of distinction is heavily contested, since differentiating fighters from civilians may be particularly difficult, especially in non-international armed conflicts and asymmetric warfare.

Because they are not moral actors, LAWS cannot be held liable for any of their conduct.

This lack of responsibility may cause further suffering to war victims.

It may also inspire war crimes to be committed.

However, it is debatable whether moral culpability lies with the authority that chose to deploy the LAWS or with the persons who designed or built it.

LAWS have attracted considerable scientific and political attention over the past ten years.

Eighty-seven nongovernmental organizations have joined the coalition behind the "Stop Killer Robots" campaign, launched in 2012.

Its call for a preemptive prohibition on the development, production, and use of LAWS has spurred civil society mobilization.

A statement signed by over 4,000 AI and robotics academics in 2016 called for a ban on LAWS.

In 2018, over 240 technology companies and organizations pledged not to take part in or support the development, manufacture, trade, or use of LAWS.

Because current international law may not effectively handle the challenges created by LAWS, the UN's Convention on Certain Conventional Weapons launched a consultation process on the subject.

It formed a Group of Governmental Experts (GGE) in 2016.

Due to a lack of consensus and the resistance of certain nations (especially the United States, Russia, South Korea, and Israel), the GGE has yet to produce an international agreement outlawing LAWS.

However, twenty-six UN member states have backed the call for a ban on LAWS, and the European Parliament passed a resolution in June 2018 calling for "an international prohibition on weapon systems that lack human supervision over the use of force."

Because there is no example of a technological invention that has gone unused, LAWS will almost certainly be part of the future of conflict.

Nonetheless, there is widespread agreement that humans should be kept "in the loop" and that the use of LAWS should be governed by international and national laws.

However, as the deployment of nuclear and chemical weapons, as well as anti-personnel landmines, has shown, a worldwide legal prohibition on the use of LAWS is unlikely to be respected by all governments and non-state groups.

Artificial Intelligence - Mac Hack.


Mac Hack IV, a 1967 chess program written by Richard Greenblatt, gained notoriety for being the first computer chess program to enter a chess tournament and to play credibly against humans, earning a USCF rating of 1,400 to 1,500.

Greenblatt's program, written in the macro assembly language MIDAS, ran on a DEC PDP-6 computer with a clock speed of 200 kilohertz.

While a graduate student at MIT's Artificial Intelligence Laboratory, he built the software as part of Project MAC.

"Chess is the drosophila [fruit fly] of artificial intelligence," according to Russian mathematician Alexander Kronrod: the field's chosen experimental organism (quoted in McCarthy 1990, 227).

Creating a champion chess program has been a cherished goal of artificial intelligence since 1950, when Claude Shannon first described chess play as a task for computer programmers.

Chess, and games in general, pose difficult but well-defined problems with well-defined rules and objectives.

Chess has long been seen as a prime illustration of human-like intelligence.

Chess is a well-defined example of human decision-making in which moves must be chosen with a specific purpose in mind, under limited knowledge and uncertainty about the outcome.

The processing power of computers in the mid-1960s severely restricted the depth to which a chess move and its alternative replies could be examined, since the number of possible configurations grows exponentially with each successive reply.

The best human players have been shown to examine a small number of moves in great depth rather than a large number of moves shallowly.

Greenblatt aimed to recreate the methods used by good players to locate significant game tree branches.

He designed Mac Hack to reduce the number of nodes examined when choosing moves, using a minimax search of the game tree together with alpha-beta pruning and heuristic components.

In this regard, Mac Hack's style of play was more human-like than that of later chess computers (such as Deep Thought and Deep Blue), which use the sheer force of high processing speeds to examine tens of millions of branches of the game tree before making moves.
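The search technique described above can be sketched compactly. This is a minimal illustration of minimax with alpha-beta pruning over an abstract game tree, in the spirit of Mac Hack's search; the hand-built nested list stands in for real positions and a real evaluation function, so it is not a chess engine.

```python
# Minimax with alpha-beta pruning over an abstract game tree.
# Leaves are numeric evaluations; internal nodes are lists of children.

def alphabeta(node, depth, alpha, beta, maximizing):
    # At a leaf (or the depth cutoff), return the heuristic evaluation.
    if depth == 0 or not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # prune: the minimizer will avoid this branch
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:   # prune: the maximizer will avoid this branch
                break
        return value

# A tiny two-ply tree: the maximizer picks the branch whose minimum is largest.
tree = [[3, 5], [2, 9], [0, 7]]
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))  # -> 3
```

Pruning pays off whenever a branch is already provably worse than one seen earlier: in the example, the second subtree is abandoned after seeing the value 2, without evaluating the 9.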

In a contest hosted by MIT mathematician Seymour Papert in 1967, Mac Hack defeated MIT philosopher Hubert Dreyfus and gained substantial renown among artificial intelligence researchers.

The RAND Corporation published a mimeographed version of Dreyfus's paper, Alchemy and Artificial Intelligence, in 1965, which criticized artificial intelligence researchers' claims and aspirations.

Dreyfus argued that no computer could ever acquire intelligence, because human reason and intelligence are not entirely rule-bound, and hence a computer's data processing could not imitate or represent human cognition.

In a section of the paper titled "Signs of Stagnation," Dreyfus highlighted attempts to construct chess-playing computers among his many critiques of AI.

Mac Hack's victory against Dreyfus was first seen as vindication by the AI community.


 


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Autonomous Weapons Systems, Ethics of; Battlefield AI and Robotics.


Further Reading:



Arkin, Ronald. 2017. “Lethal Autonomous Systems and the Plight of the Non-Combatant.” In The Political Economy of Robots, edited by Ryan Kiggins, 317–26. Basingstoke, UK: Palgrave Macmillan.

Heyns, Christof. 2013. Report of the Special Rapporteur on Extrajudicial, Summary, or Arbitrary Executions. Geneva, Switzerland: United Nations Human Rights Council. http://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A-HRC-23-47_en.pdf.

Human Rights Watch. 2012. Losing Humanity: The Case against Killer Robots. https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots.

Krishnan, Armin. 2009. Killer Robots: Legality and Ethicality of Autonomous Weapons. Aldershot, UK: Ashgate.

Roff, Heather. M. 2014. “The Strategic Robot Problem: Lethal Autonomous Weapons in War.” Journal of Military Ethics 13, no. 3: 211–27.

Simpson, Thomas W., and Vincent C. Müller. 2016. “Just War and Robots’ Killings.” Philosophical Quarterly 66, no. 263 (April): 302–22.

Singer, Peter. 2009. Wired for War: The Robotics Revolution and Conflict in the 21st Century. New York: Penguin.

Sparrow, Robert. 2007. “Killer Robots.” Journal of Applied Philosophy 24, no. 1: 62–77. 


Artificial Intelligence - Machine Learning Regressions.

 


"Machine learning," a phrase coined by Arthur Samuel in 1959, is a kind of artificial intelligence that produces results without explicit programming.

Instead, the system learns from data on its own and improves over time.

Machine learning techniques have a wide range of applications (e.g., computer vision, natural language processing, autonomous gaming agents, classification, and regressions) and are used in practically every sector due to their resilience and simplicity of implementation (e.g., tech, finance, research, education, gaming, and navigation).

Despite their vast range of applications, machine learning algorithms can be broadly classified into three learning types: supervised, unsupervised, and reinforcement.

Machine learning regressions are an example of supervised learning.

They use algorithms that have been trained on data with labeled continuous numerical outputs.

The quantity of training data and the validation criteria needed to suitably train and verify a regression algorithm depend on the problem being addressed.

Once trained, the resulting predictive models give inferred outputs for new data with comparable input structures.

These aren't static models.

They may be updated regularly with new training data or by supplying the actual correct outputs for previously unlabeled inputs.

Despite the generalizability of machine learning methods, no single program is optimal for all regression problems.

When choosing the best machine learning regression method for the situation at hand, there are many factors to consider (e.g., programming languages, available libraries, algorithm types, data size, and data structure).





Some machine learning programs employ single- or multivariable linear regression approaches, much like classic statistical methods.

These models represent the relationships between one or several independent feature variables and a dependent target variable.

The models provide linear representations of the combined input variables as their output.

These models are only suitable for small, noncomplex data sets.
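The single-variable case can be written out in a few lines. This is a minimal sketch of ordinary least squares in pure Python for illustration; in practice one would reach for a library, and the training data here is invented.

```python
# Single-variable linear regression by ordinary least squares.

def fit_linear(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept from the means.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy training data that happens to lie exactly on y = 2x + 1.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]
slope, intercept = fit_linear(xs, ys)
print(slope, intercept)           # -> 2.0 1.0
print(slope * 5.0 + intercept)    # inferred output for a new input: 11.0
```

The fitted line is the "linear representation of the combined input variables" the text describes; new inputs with the same structure are mapped through it directly.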

Polynomial regressions may be used with nonlinear data.

This requires the programmer to know the data's structure, which is often the very thing machine learning models are used to discover in the first place.

These methods are unlikely to be appropriate for most real-world data, but they give a basic starting point and might provide users with models that are straightforward to understand.

Decision trees, as the name implies, are tree-like structures that map a program's input features/attributes to determine the eventual output.

A decision tree algorithm starts with the root node (i.e., an input variable); the answer to that node's condition splits into edges leading to further nodes.

A node that no longer splits is a leaf; a node that continues to split is an internal node.

For example, age, weight, and family diabetic history might be used as input factors in a dataset of diabetic and nondiabetic patients to estimate the likelihood of a new patient developing diabetes.

The age variable might serve as the root node (e.g., splitting at age 40), with the dataset divided into patients aged 40 and older and those 39 and younger.

If the next internal node after taking the 40-and-older branch asks whether a parent has or had diabetes, and the affirmative leaf estimates a 60 percent likelihood of this patient developing diabetes, the model returns that leaf as the final output.

This is a very basic decision tree that demonstrates the decision-making process.

Thousands of nodes may readily be found in a decision tree.
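The diabetes example above can be written as a hand-built decision function. The age-40 split and the 60 percent leaf come from the text; the probabilities for the other two leaves are invented for illustration and are not real clinical values.

```python
# A hand-built sketch of the two-level diabetes decision tree described above.

def diabetes_risk(age, parent_diabetic):
    # Root node: split on age at 40.
    if age >= 40:
        # Internal node: family history of diabetes.
        if parent_diabetic:
            return 0.60   # leaf from the text: 60% likelihood
        return 0.30       # assumed value for the other leaf
    return 0.10           # assumed value for the younger branch

print(diabetes_risk(52, True))    # -> 0.6
print(diabetes_risk(35, False))   # -> 0.1
```

A trained tree learns such thresholds and leaf values from data rather than having them hard-coded; this sketch only shows the decision-making shape.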

Random forest algorithms are essentially ensembles of decision trees.

They are made up of hundreds of decision trees, and their final outputs are the averaged outputs of the individual trees.

Although decision trees and random forests are excellent at learning very complex data structures, they are prone to overfitting.

With adequate pruning (e.g., setting minimum sample counts for splits and leaves) and large enough random forests, overfitting can be reduced.
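The averaging step at the heart of a random forest is simple to show. In this sketch the "trees" are stand-in functions (invented for illustration), since the point is only that the ensemble's prediction is the mean of the individual trees' outputs.

```python
# A random forest's regression output is the average of its trees' outputs.

def forest_predict(trees, x):
    return sum(tree(x) for tree in trees) / len(trees)

# Three toy "trees", each mapping an input to a numeric prediction.
trees = [lambda x: x + 1, lambda x: x + 3, lambda x: x - 1]
print(forest_predict(trees, 10.0))  # -> 11.0
```

Because each real tree is trained on a different random subsample of the data and features, their individual overfitting errors tend to cancel in the average.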

Machine learning techniques inspired by the neural connections of the human brain are known as neural networks.


Neurons are the basic unit of neural network algorithms, much as they are in the human brain, and they are organized into numerous layers.

The input layer contains the input variables, the hidden layers include the layers of neurons (there may be numerous hidden levels), and the output layer contains the final neuron.

A single neuron in a feedforward process 

(a) takes the input feature variables, 

(b) multiplies the feature values by a weight, 

(c) adds the resultant feature products, together with a bias variable, and 

(d) passes the sums through an activation function, most often a sigmoid function.
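Steps (a) through (d) amount to a few lines of arithmetic. This sketch computes one neuron's output with illustrative weights and bias (arbitrary values, not trained parameters).

```python
import math

# One feedforward neuron: weighted sum of inputs plus a bias,
# passed through a sigmoid activation.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias  # steps (a)-(c)
    return sigmoid(z)                                       # step (d)

# With these toy values the weighted sum is 0.0, so the output is sigmoid(0) = 0.5.
print(neuron([1.0, 2.0], [0.5, -0.25], 0.0))  # -> 0.5
```

Stacking many such neurons into layers, and feeding each layer's outputs forward as the next layer's inputs, gives the full network described below.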


Partial-derivative computations, propagated backward from the output through the network's layers, are used to adjust the weights and biases of each neuron.

Backpropagation is the term for this practice.


The output of the activation function of a single neuron is distributed to all neurons in the next hidden layer or final output layer.

As a result, the predicted value is the last neuron's output.

Because neural networks are exceptionally adept at learning complicated variable associations, programmers may spend less time preprocessing and restructuring their data.

Neural network models, on the other hand, are difficult to interpret due to their complexity, and the intervariable relationships are largely hidden.

When used on extremely big datasets, neural networks operate best.

They need meticulous hyperparameter tuning and considerable processing capacity.

For data scientists attempting to comprehend massive datasets, machine learning has become the standard technique.

Machine learning systems are always being improved in terms of accuracy and usability by researchers.

Machine learning algorithms, on the other hand, are only as useful as the data used to train the model.

Poor data produces dramatically erroneous outcomes, while biased data combined with a lack of knowledge deepens societal disparities.

 



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Algorithmic Bias and Error; Automated Machine Learning; Deep Learning; Explainable AI; Gender and AI.



Further Reading:


Garcia, Megan. 2016. “Racist in the Machine: The Disturbing Implications of Algorithmic Bias.” World Policy Journal 33, no. 4 (Winter): 111–17.

Géron, Aurelien. 2019. Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. Sebastopol, CA: O’Reilly.



Artificial Intelligence - Who Is Demis Hassabis (1976–)?




Demis Hassabis lives in the United Kingdom and works as a computer game programmer, cognitive scientist, and artificial intelligence specialist.

He is a cofounder of DeepMind, the company that created the AlphaGo deep learning engine.

Hassabis is well-known for being a skilled game player.

His passion for video games paved the way for his career as an artificial intelligence researcher and computer game entrepreneur.

Hassabis' parents noticed his chess prowess at a young age.

At the age of thirteen, he had achieved the status of chess master.

He is also a World Team Champion in the strategic board game Diplomacy, a World Series of Poker Main Event participant, and a many-time World Pentamind and World Decamentathlon Champion at the London Mind Sports Olympiad.

Hassabis began working at Bullfrog Games in Guildford, England, with renowned game designer Peter Molyneux when he was seventeen years old.

Bullfrog was notable for creating a variety of popular computer "god games." A god game is a computer-generated life simulation in which the user has power and influence over semiautonomous people in a diverse world.

Molyneux's Populous, published in 1989, is generally regarded as the first god game.

Hassabis co-designed and programmed Theme Park, a simulation management game published by Bullfrog in 1994.

Hassabis left Bullfrog Games to pursue a degree at Queens' College, Cambridge.

In 1997, he earned a bachelor's degree in computer science.

Following graduation, Hassabis rejoined Molyneux at Lionhead Studios, a new gaming studio.

Hassabis worked on the artificial intelligence for the game Black & White, another god game in which the user reigned over a virtual island inhabited by different tribes, for a short time.

Hassabis departed Lionhead after a year to launch his own video game studio, Elixir Studios.

Hassabis signed arrangements with major publishers such as Microsoft and Vivendi Universal.

Before closing in 2005, Elixir created a variety of games, including the diplomatic strategy simulation game Republic: The Revolution and the real-time strategy game Evil Genius.

Republic's artificial intelligence is modeled after Elias Canetti's 1960 book Crowds and Power, which explores how and why crowds follow rulers' power (which Hassabis boiled down to force, money, and influence).

Republic required the daily programming efforts of twenty-five programmers over the course of four years.

Hassabis thought that the AI in the game would be valuable to academics.

Hassabis took a break from game creation to pursue additional studies at University College London (UCL).

In 2009, he received his PhD in Cognitive Neuroscience.

In his research of individuals with hippocampal injury, Hassabis revealed links between memory loss and poor imagination.

These findings revealed that the brain's memory systems may splice together recalled fragments of previous experiences to imagine hypothetical futures.

Hassabis continued his academic studies at the Gatsby Computational Neuroscience Unit at UCL and as a Wellcome Trust fellow for another two years.

He was also a visiting researcher at MIT and Harvard University.

Hassabis' cognitive science study influenced subsequent work on unsupervised learning, memory and one-shot learning, and imagination-based planning utilizing generic models in artificial intelligence.

With Shane Legg and Mustafa Suleyman, Hassabis cofounded the London-based AI start-up DeepMind Technologies in 2011.

The organization was focused on interdisciplinary science, bringing together premier academics and concepts from machine learning, neuroscience, engineering, and mathematics.

The mission of DeepMind was to create scientific breakthroughs in artificial intelligence and develop new artificial general-purpose learning capabilities.

Hassabis has compared the project to the Apollo Program for AI.

DeepMind was tasked with developing a computer capable of defeating human opponents in the abstract strategic board game Go.

Hassabis didn't want to build an expert system, a brute-force computer preprogrammed with Go-specific algorithms and heuristics.

Rather than a single-purpose chess-playing system like Deep Blue, he intended to construct a computer that adapted to playing games in ways comparable to human chess champion Garry Kasparov.

He sought to build a machine that could learn to deal with new issues and have universality, which he defined as the ability to do a variety of jobs.

The reinforcement learning architecture was used by the company's AlphaGo artificial intelligence agent, which was built to compete against Lee Sedol, an eighteen-time world champion Go player.

Through reinforcement learning, agents in an environment (in this case, the Go board) aim to attain a certain objective (winning the game).

The agents have perceptual inputs (such as visual input) as well as a statistical model based on environmental data.

The agent creates plans and goes through simulations of actions that will modify the model in order to accomplish the objective while collecting perceptual input and developing a representation of its surroundings.

The agent is always attempting to choose behaviors that will get it closer to its goal.

Hassabis argues that resolving all of the issues of goal-oriented agents in a reinforcement learning framework would be adequate to fulfill artificial general intelligence's promise.

He claims that biological systems work in a similar manner.

The dopamine system in human brains is responsible for implementing a reinforcement learning framework.

To master the game of Go, it usually takes a lifetime of study and practice.

Go includes a significantly broader search area than chess.

There are more potential Go positions on the board than there are atoms in the cosmos.

It is also thought to be nearly impossible to develop an evaluation function covering a significant portion of those positions in order to determine where the next stone should be placed.

Each game is essentially unique, and exceptional players describe their decisions as being guided by intuition rather than logic.

AlphaGo addressed these obstacles by leveraging data gathered from thousands of strong amateur games played by human Go players to train a neural network.

After that, AlphaGo played millions of games against itself, predicting how probable each side was to win based on the present board positions.

In this way, no handcrafted evaluation criteria were required.
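The idea of valuing a position through self-play rather than through a handcrafted evaluation function can be illustrated at toy scale. The game below (a simple take-away game, not Go) and all parameters are assumptions for illustration; AlphaGo's actual networks and search are far more elaborate:

```python
import random

def random_playout(stones, rng):
    """Play moves uniformly at random; return True if the player to move
    at the start takes the last stone (and thus wins)."""
    to_move_wins = False
    turn = 0  # 0 = the player to move at the start
    while stones > 0:
        take = rng.randint(1, min(3, stones))  # remove 1-3 stones
        stones -= take
        if stones == 0:
            to_move_wins = (turn == 0)
        turn ^= 1
    return to_move_wins

def estimate_win_prob(stones, n_games=10_000, seed=0):
    """Estimate a position's value as the win rate over random self-play."""
    rng = random.Random(seed)
    wins = sum(random_playout(stones, rng) for _ in range(n_games))
    return wins / n_games

print(estimate_win_prob(1))  # the player to move always wins: 1.0
print(estimate_win_prob(2))  # roughly 0.5 under random play
```

The position's estimated value emerges from game outcomes alone, with no rule about which positions are "good" ever written down.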

In Seoul, South Korea, in 2016, AlphaGo beat Go champion Lee Sedol (four games to one).

The way AlphaGo plays is considered cautious.

It favors diagonal stone placements known as "shoulder hits" and plays to maximize its probability of winning rather than its margin of victory, putting less apparent focus on territorial gains on the board.

AlphaGo has subsequently been generalized into AlphaZero, a successor designed to learn any two-player game of this kind.

Without any human training data or sample games, AlphaZero learns from scratch.

It only learns from random play.

After just four hours of training, AlphaZero defeated Stockfish, one of the best free and open-source chess engines (28 games to 0, with 72 draws).

While playing chess, AlphaZero prefers the mobility of its pieces over their material value, which results in a creative style of play (as in its Go play).

Another task the company took on was developing a versatile, adaptable, and durable AI that could teach itself to play more than fifty Atari video games just by looking at the pixels and scores on a video screen.

For this challenge, Hassabis introduced deep reinforcement learning, which combines reinforcement learning with deep learning.

Deep neural networks combine an input layer of observations, layers of weighted connections, and backpropagation to produce reliable perceptual recognition.
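Those three ingredients, an input layer, weights, and backpropagation, can be shown at their smallest scale: a single sigmoid neuron trained by gradient descent to compute the OR function. The data, learning rate, and training schedule are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data: the OR function over two binary inputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # input weights
b = 0.0         # bias
lr = 1.0        # learning rate

for epoch in range(1000):
    for (x1, x2), target in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # Backpropagation for one neuron: the error signal is propagated
        # back through the weights (gradient of the cross-entropy loss).
        err = y - target
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(preds)  # the neuron has learned OR: [0, 1, 1, 1]
```

A deep network stacks many such units in layers and propagates the error signal back through all of them.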

In the case of the Atari challenge, the network was trained using the 20,000 pixel values displayed on the videogame screen at any given time.

Under deep learning, reinforcement learning takes the machine from the point where it perceives and recognizes a given input to the point where it can take meaningful action toward a goal.

In the Atari challenge, the computer learned how to win over hundreds of hours of playtime, choosing among eighteen distinct joystick actions at each time-step.

To put it another way, a deep reinforcement learning machine is an end-to-end learning system capable of analyzing perceptual inputs, devising a strategy, and executing that strategy from start to finish.

DeepMind was purchased by Google in 2014.

Hassabis continues to work at Google with DeepMind's deep learning technology.

One of these efforts applies deep learning to optical coherence tomography scans for eye disorders.

DeepMind's AI system can swiftly and reliably make diagnoses from eye scans, triaging patients and proposing how they should be referred for further treatment.

AlphaFold is a system combining machine learning, physics, and structural biology that predicts three-dimensional protein structures solely from a protein's genetic sequence.

AlphaFold took first place in the 2018 "world championship" for Critical Assessment of Techniques for Protein Structure Prediction, successfully predicting the most accurate structure for 25 of 43 proteins.

AlphaStar is currently mastering the real-time strategy game StarCraft II. 



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Deep Learning.



Further Reading:


“Demis Hassabis, Ph.D.: Pioneer of Artificial Intelligence.” 2018. Biography and interview. American Academy of Achievement. https://www.achievement.org/achiever/demis-hassabis-ph-d/.

Ford, Martin. 2018. Architects of Intelligence: The Truth about AI from the People Building It. Birmingham, UK: Packt Publishing Limited.

Gibney, Elizabeth. 2015. “DeepMind Algorithm Beats People at Classic Video Games.” Nature 518 (February 26): 465–66.

Gibney, Elizabeth. 2016. “Google AI Algorithm Masters Ancient Game of Go.” Nature 529 (January 27): 445–46.

Proudfoot, Kevin, Josh Rosen, Gary Krieg, and Greg Kohs. 2017. AlphaGo. Roco Films.


Artificial Intelligence - Intelligent Tutoring Systems.

  



Intelligent tutoring systems are artificial intelligence-based instructional systems that adapt instruction based on a variety of learner variables, such as dynamic measures of students' ongoing knowledge growth, personal interest, motivation to learn, affective states, and aspects of how they self-regulate their learning.

For a variety of problem areas, such as STEM, computer programming, language, and culture, intelligent tutoring systems have been created.

Complex problem-solving activities, collaborative learning activities, inquiry learning or other open-ended learning activities, learning through conversations, game-based learning, and working with simulations or virtual reality environments are among the many types of instructional activities they support.

Intelligent tutoring systems arose from a field of study known as AI in Education (AIED).

MATHia® (previously Cognitive Tutor), SQL-Tutor, ALEKS, and Reasoning Mind's Genie system are among the commercially successful and widely used intelligent tutoring systems.

According to six comprehensive meta-analyses, intelligent tutoring systems are frequently more effective than conventional forms of instruction.

This effectiveness may be due to a number of factors.

First, intelligent tutoring systems give adaptive help within problems, allowing classroom instructors to scale one-on-one tutoring beyond what they could do without the technology.

Second, they allow adaptive problem selection based on individual students' current knowledge.

Third, cognitive task analysis, cognitive theory, and learning sciences ideas are often used in intelligent tutoring systems.

Fourth, the employment of intelligent tutoring tools in so-called blended classrooms may result in favorable cultural adjustments by allowing teachers to spend more time working one-on-one with pupils.

Fifth, increasingly sophisticated tutoring systems are iteratively improved using data-driven approaches from the field of educational data mining.

Finally, Open Learner Models (OLMs), which are visual representations of the system's internal student model, are often used in intelligent tutoring systems.

OLMs have the potential to assist learners in productively reflecting on their current level of learning.

Model-tracing tutors, constraint-based tutors, example-tracing tutors, and ASSISTments are some of the most common intelligent tutoring system paradigms.

These paradigms vary in how they are created, as well as in tutoring behaviors and underlying representations of domain knowledge, student knowledge, and pedagogical knowledge.

Intelligent tutoring systems use a number of AI approaches for domain reasoning (e.g., producing future steps in a problem given a student's partial answer), for assessing student solutions and partial solutions, and for student modeling (i.e., dynamically estimating and maintaining a range of learner variables).

To increase systems' student modeling skills, a range of data mining approaches (including Bayesian models, hidden Markov models, and logistic regression models) are increasingly being applied.
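One widely used student model of the hidden Markov kind mentioned here is Bayesian Knowledge Tracing, which maintains the probability that a student has mastered a skill and updates it after each observed answer. The parameter values below are illustrative assumptions, not those of any particular tutoring system:

```python
P_INIT = 0.2    # prior probability the student already knows the skill
P_LEARN = 0.15  # probability of learning the skill at each opportunity
P_GUESS = 0.2   # probability of a correct answer without knowing the skill
P_SLIP = 0.1    # probability of an incorrect answer despite knowing it

def bkt_update(p_know, correct):
    """Update P(student knows the skill) after observing one answer."""
    if correct:
        evidence = p_know * (1 - P_SLIP) + (1 - p_know) * P_GUESS
        posterior = p_know * (1 - P_SLIP) / evidence
    else:
        evidence = p_know * P_SLIP + (1 - p_know) * (1 - P_GUESS)
        posterior = p_know * P_SLIP / evidence
    # Account for the chance the student learned the skill on this step.
    return posterior + (1 - posterior) * P_LEARN

p = P_INIT
for answer in [True, True, False, True, True]:
    p = bkt_update(p, answer)
    print(f"P(know) = {p:.3f}")
```

After a mostly correct sequence, the estimated mastery probability rises well above the prior, and a tutor can use it to decide when to move on to the next skill.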

Machine learning approaches, such as reinforcement learning, are utilized to build instructional policies to a lesser extent.

Researchers are looking at concepts for the smart classroom of the future that go beyond the capabilities of present intelligent tutoring technologies.

AI systems, in their visions, typically collaborate with instructors and students to provide excellent learning experiences for all pupils.

Rather than designing intelligent tutoring systems to handle all aspects of adaptation, recent research points to promising approaches that adaptively share the regulation of learning processes across students, teachers, and AI systems, for example, by providing teachers with real-time analytics from an intelligent tutoring system to draw their attention to learners who may need additional support.



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Natural Language Processing and Speech Understanding; Workplace Automation.



Further Reading:




Aleven, Vincent, Bruce M. McLaren, Jonathan Sewall, Martin van Velsen, Octav Popescu, Sandra Demi, Michael Ringenberg, and Kenneth R. Koedinger. 2016. “Example-Tracing Tutors: Intelligent Tutor Development for Non-Programmers.” International Journal of Artificial Intelligence in Education 26, no. 1 (March): 224–69.

Aleven, Vincent, Elizabeth A. McLaughlin, R. Amos Glenn, and Kenneth R. Koedinger. 2017. “Instruction Based on Adaptive Learning Technologies.” In Handbook of Research on Learning and Instruction, Second edition, edited by Richard E. Mayer and Patricia Alexander, 522–60. New York: Routledge.

du Boulay, Benedict. 2016. “Recent Meta-Reviews and Meta-Analyses of AIED Systems.” International Journal of Artificial Intelligence in Education 26, no. 1: 536–37.

du Boulay, Benedict. 2019. “Escape from the Skinner Box: The Case for Contemporary Intelligent Learning Environments.” British Journal of Educational Technology, 50, no. 6: 2902–19.

Heffernan, Neil T., and Cristina Lindquist Heffernan. 2014. “The ASSISTments Ecosystem: Building a Platform that Brings Scientists and Teachers Together for Minimally Invasive Research on Human Learning and Teaching.” International Journal of Artificial Intelligence in Education 24, no. 4: 470–97.

Koedinger, Kenneth R., and Albert T. Corbett. 2006. “Cognitive Tutors: Technology Bringing Learning Sciences to the Classroom.” In The Cambridge Handbook of the Learning Sciences, edited by Robert K. Sawyer, 61–78. New York: Cambridge University Press.

Mitrovic, Antonija. 2012. “Fifteen Years of Constraint-Based Tutors: What We Have Achieved and Where We Are Going.” User Modeling and User-Adapted Interaction 22, no. 1–2: 39–72.

Nye, Benjamin D., Arthur C. Graesser, and Xiangen Hu. 2014. “AutoTutor and Family: A Review of 17 Years of Natural Language Tutoring.” International Journal of Artificial Intelligence in Education 24, no. 4: 427–69.

Pane, John F., Beth Ann Griffin, Daniel F. McCaffrey, and Rita Karam. 2014. “Effectiveness of Cognitive Tutor Algebra I at Scale.” Educational Evaluation and Policy Analysis 36, no. 2: 127–44.

Schofield, Janet W., Rebecca Eurich-Fulcer, and Chen L. Britt. 1994. “Teachers, Computer Tutors, and Teaching: The Artificially Intelligent Tutor as an Agent for Classroom Change.” American Educational Research Journal 31, no. 3: 579–607.

VanLehn, Kurt. 2016. “Regulative Loops, Step Loops, and Task Loops.” International Journal of Artificial Intelligence in Education 26, no. 1: 107–12.


Artificial Intelligence - Intelligent Transportation.

  



The use of advanced technology, artificial intelligence, and control systems to manage highways, cars, and traffic is known as intelligent transportation.

Traditional American highway engineering disciplines such as driver routing, junction management, traffic distribution, and system-wide command and control inspired the notion.

Because it attempts to embed monitoring equipment in pavements, signaling systems, and individual automobiles in order to decrease congestion and increase safety, intelligent transportation has significant privacy and security considerations.

Highway engineers of the 1950s and 1960s were commonly referred to as "communications engineers," since they used information in the form of signs, signals, and statistics to govern vehicle and highway interactions and traffic flow.

During these decades, computers were primarily employed to simulate crossings and calculate highway capacity.

S. Y. Wong's Traffic Simulator, which used the facilities of the Institute for Advanced Study (IAS) computer in Princeton, New Jersey, to study traffic engineering, is one of the early applications of computing technology in this respect.

To represent road systems, traffic regulations, driver behavior, and weather conditions, Wong's mid-1950s simulator adapted computational tools previously established to investigate electrical networks.

Dijkstra's Algorithm, named for computer scientist Edsger Dijkstra, was a pioneering use of information technology to automatically construct and map least distance routes.

In 1959, Dijkstra created an algorithm that finds the shortest path between a starting point and a destination point on a map.

Online mapping systems still use Dijkstra's routing method, and it has significant economic utility in traffic management planning.
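A compact modern rendering of Dijkstra's algorithm uses a priority queue; the small road network below is a made-up example:

```python
import heapq

def dijkstra(graph, source):
    """Return the least travel cost from source to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical road segments as (neighbor, travel time in minutes).
roads = {
    "A": [("B", 5), ("C", 2)],
    "C": [("B", 1), ("D", 7)],
    "B": [("D", 3)],
}
print(dijkstra(roads, "A"))  # least costs: A=0, B=3, C=2, D=6
```

Note that the direct road from A to B (5 minutes) loses to the detour through C (3 minutes), which is exactly the kind of routing decision mapping systems make at scale.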

Other algorithms and automated devices for guidance control, traffic signaling, and ramp metering were developed by the automotive industry during the 1960s.

Many of these devices, such as traffic right-of-way signals coupled to transistorized fixed-time control boxes, synchronized signals, and traffic-actuated vehicle pressure detectors, became commonplace among the public.

Despite this, traffic control system simulation in experimental labs has remained an essential use of information technology in transportation.

Despite the engineers' best efforts, the rising popularity of vehicles and long-distance travel stressed the national highway system in the 1960s, resulting in a "crisis" in surface transportation network operations.

By the mid-1970s, engineers were considering information technology as a viable alternative to traditional signaling, road expansion, and grade separation approaches for decreasing traffic congestion and increasing safety.

Much of this work was focused on the individual driver, who was supposed to be able to utilize data to make real-time choices that would make driving more enjoyable.

With onboard instrument panels and diagnostics, computing technology promised to make navigation simpler and maximize safety while lowering journey times, particularly when combined with other technologies like as radar, the telephone, and television cameras.

Driving became more information-rich as computer chip prices dropped in the 1980s.

Electronic fuel gauges and oil level indicators, as well as digital speedometer readouts and other warnings, were added to high-end automobile models.

Most states' television broadcasts started delivering pre-trip travel information and weather briefings in the 1990s, based on data and video collected automatically by roadside sensing stations and video cameras.

These summaries were made accessible at roadside way stations, where passengers could get live text-based weather forecasts and radar pictures on public computer displays, as well as text on pagers and mobile phones.

Few of these innovations had a significant influence on personal privacy or liberty.

However, in 1991, Congress approved the Intermodal Surface Transportation Efficiency Act (ISTEA or "Ice Tea"), which allocated $660 million for the creation of the country's Intelligent Vehicle Highway System (IVHS); the act was coauthored by Norman Mineta, who later served as Secretary of Transportation.

Improved safety, decreased congestion, greater mobility, energy efficiency, economic productivity, increased usage of public transit, and environmental cleanup are among the aims set out in the act.

All of these objectives would be achieved via the effective use of information technology to facilitate transportation in the aggregate as well as on a vehicle-by-vehicle basis.

Hundreds of projects were funded to provide new infrastructure and possibilities for travel and traffic management, public transit management, electronic toll payment, commercial fleet management, emergency management, and vehicle safety, among other things.

While some applications of intelligent transportation technology remained underutilized in the 1990s—for example, carpool matching—other applications became virtually standard on American highways: for example, onboard safety monitoring and precrash deployment of airbags in cars, or automated weigh stations, roadside safety inspections, and satellite Global Positioning System (GPS) tracking for tractor-trailers.

Private enterprise had joined the government in augmenting several of these services by the mid-1990s.

OnStar, a factory-installed telematics system that utilizes GPS and cell phone communications to give route guidance, summon emergency and roadside assistance services, track stolen cars, remotely diagnose mechanical faults, and access locked doors, is included in all General Motors vehicles.

Automobile manufacturers also started experimenting with infrared sensors coupled to expert systems for autonomous collision avoidance, as well as developing technology that allows automobiles to be "platooned" into huge groups of closely spaced vehicles to optimize highway lane capacity.

Electronic toll and traffic management (ETTM), launched in the 1990s, was perhaps the most widely used implementation of intelligent transportation technology.

ETTM enabled drivers who placed a radio transponder in their cars to pay highway tolls without having to slow down.

Florida, New York State, New Jersey, Michigan, Illinois, and California were all using ETTM systems by 1995.

Since then, ETTM has expanded to a number of additional states as well as internationally.

Because of its potential for government intrusion, intelligent transportation initiatives have sparked debate.

Hong Kong's government deployed electronic road pricing (ERP) in the mid-1980s, with radar transmitter-receivers triggered when cars went through tolled tunnels or highway checkpoints.

The system's bills supplied drivers with a full record of where they had driven and when they had been there.

The system was put on hold before the British turned Hong Kong over to the Chinese in 1997, due to concerns about potential human rights violations.

On the other hand, the basic purpose of political surveillance is sometimes broadened to include transportation goals.

The UK government, for example, placed street-based closed-circuit television cameras (CCTV) in a "Ring of Steel" around London's financial sector in 1993 to defend against Irish Republican Army terror bombs.

Ten years later, in 2003, infrared illuminators from the firm Extreme CCTV enhanced the monitoring of downtown London, enabling the "capture" of license plate numbers on automobiles.

A daily usage tax was imposed on drivers entering the crowded downtown area.

Vehicles with a unique identifier, such as the Vehicle Identification Number (VIN) or an electronic tag, may be tracked using technologies such as GPS and electronic tollbooth payment software.

This opens the door to continuous monitoring and tracking of driving choices, as well as the prospect of permanent movement records.

Individual toll crossing locations and timings, the car's average speed, and photos of all passengers are routinely obtained by intelligent transportation surveillance.

In the early 2000s, state transportation bureaus in Florida and California utilized comparable data to send out surveys to individual drivers who used certain roads.

Several state motor vehicle agencies have also contemplated establishing "dual-use" intelligent transportation databanks to supply or sell traffic and driver-related data to law enforcement and marketers.

Artificial intelligence methods are becoming an increasingly important part of intelligent transportation planning, especially as large amounts of data from actual driving experiences are now being collected.

They are increasingly used to control vehicles, predict traffic congestion, and meter traffic, as well as to reduce accident rates and fatalities.

Artificial neural networks, genetic algorithms, fuzzy logic, and expert systems are among the AI techniques already in use in various intelligent transportation applications, both singly and in combination.

These methods are being used to develop new vehicle control systems for autonomous and semiautonomous driving, automatic braking control, and real-time energy consumption and emissions monitoring.

Surtrac, for example, is a scalable, adaptive traffic control system created at Carnegie Mellon University that combines theoretical modeling with artificial intelligence algorithms.

The amount of traffic on particular roadways and intersections may change dramatically throughout the day.

Traditional automated traffic control technology adapts to established patterns on a set timetable or depends on traffic control observations from a central location.

Thanks to adaptive traffic management, intersections can communicate with one another, and automobiles may even share their user-programmed travel routes.
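A highly simplified sketch of the adaptive idea: each intersection splits its green time in proportion to the queues currently sensed on each approach, rather than following a fixed timetable. The function, queue lengths, and timing values below are invented for illustration; real systems such as Surtrac use far richer models:

```python
def green_splits(queues, cycle_seconds=60, min_green=5):
    """Allocate one signal cycle across approaches, proportional to demand."""
    total = sum(queues.values())
    if total == 0:
        share = cycle_seconds / len(queues)
        return {a: share for a in queues}
    # Guarantee a minimum green per approach, then split the remainder
    # in proportion to the sensed queue lengths.
    spare = cycle_seconds - min_green * len(queues)
    return {a: min_green + spare * q / total for a, q in queues.items()}

# Sensed queue lengths (vehicles) on four approaches of one intersection.
sensed = {"north": 12, "south": 4, "east": 2, "west": 2}
plan = green_splits(sensed)
for approach, seconds in plan.items():
    print(f"{approach}: {seconds:.1f} s green")
```

Here the heavily loaded northern approach receives most of the cycle, and the plan can be recomputed every cycle as conditions change.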

Vivacity Labs in the United Kingdom employs video sensors at junctions and AI technology to monitor and anticipate traffic conditions in real time during an individual motorist's trip, as well as perform mobility assessments for enterprises and local government bodies at the city scale.

Future paths in intelligent transportation research and development may be determined by fuel costs and climate change consequences.

When oil costs are high, rules may encourage sophisticated traveler information systems that alert drivers to the best routes and departure times, as well as to expected (and costly) idling and wait periods.

If traffic grows more crowded, cities may use smart city technologies like real-time traffic and parking warnings, automated incident detection and vehicle recovery, and linked surroundings to govern human-piloted vehicles, autonomous automobiles, and mass transit systems.

More cities throughout the globe are going to use dynamic cordon pricing, which entails calculating and collecting fees to enter or drive in crowded regions.

Vehicle occupancy detection monitors and vehicle categorization detectors are examples of artificial intelligence systems that enable congestion charging.

 



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Driverless Cars and Trucks; Trolley Problem.


Further Reading:



Alpert, Sheri. 1995. “Privacy and Intelligent Highway: Finding the Right of Way.” Santa Clara Computer and High Technology Law Journal 11: 97–118.

Blum, A. M. 1970. “A General-Purpose Digital Traffic Simulator.” Simulation 14, no. 1: 9–25.

Diebold, John. 1995. Transportation Infostructures: The Development of Intelligent Transportation Systems. Westport, CT: Greenwood Publishing Group.

Garfinkel, Simson L. 1996. “Why Driver Privacy Must Be a Part of ITS.” In Converging Infrastructures: Intelligent Transportation and the National Information Infrastructure, edited by Lewis M. Branscomb and James H. Keller, 324–40. Cambridge, MA: MIT Press.

High-Tech Highways: Intelligent Transportation Systems and Policy. 1995. Washington, DC: Congressional Budget Office.

Machin, Mirialys, Julio A. Sanguesa, Piedad Garrido, and Francisco J. Martinez. 2018. “On the Use of Artificial Intelligence Techniques in Intelligent Transportation Systems.” In IEEE Wireless Communications and Networking Conference Workshops (WCNCW), 332–37. Piscataway, NJ: IEEE.

Rodgers, Lionel M., and Leo G. Sands. 1969. Automobile Traffic Signal Control Systems. Philadelphia: Chilton Book Company.

Wong, S. Y. 1956. “Traffic Simulator with a Digital Computer.” In Proceedings of the Western Joint Computer Conference, 92–94. New York: American Institute of Electrical Engineers.



