
Artificial Intelligence - General and Narrow Categories Of AI.






There are two broad categories of artificial intelligence: general (also called strong or complete) AI and narrow (also called weak or specialized) AI.

General AI, the kind often depicted in science fiction, does not yet exist in the real world.

Machines with general intelligence would be capable of completing every intellectual task that humans can.

Such a system would also think in abstract terms, establish connections, and communicate innovative ideas in the same manner that people do, displaying the ability to reason abstractly and solve problems.



Such a computer would be capable of thinking, planning, and recalling information from the past.

While the aim of general AI has yet to be achieved, there are more and more instances of narrow AI.

These are machines that perform at human (or even superhuman) levels on certain tasks.

Computers that have learned to play complicated games exhibit abilities, techniques, and behaviors comparable to, if not superior to, those of the most skilled human players.

AI systems have also been developed that can translate between languages in real time, interpret and respond to natural speech (both spoken and written), and recognize images (identifying and sorting photos or images based on their content).

However, the ability to generalize knowledge or skills is still largely a human accomplishment.

Nonetheless, there is a lot of work being done in the field of general AI right now.

It will be difficult to determine when a computer develops human-level intelligence.

Several tests, some serious and some humorous, have been suggested to determine whether a computer has reached the level of general AI.

The Turing Test is arguably the most renowned of these examinations.

In this test, a machine and a person each hold a conversation with a human judge who cannot see either of them.

The human judge must figure out which conversational partner is a machine and which is a human.

The machine passes the test if it can fool the human judge a prescribed percentage of the time.

The Coffee Test is a more fantastical test in which a machine enters a typical household and brews coffee.



It has to find the coffee machine, locate the coffee, add water, brew the coffee, and pour it into a cup.

Another is the Flat Pack Furniture Test, which involves a machine receiving, unpacking, and assembling a piece of furniture based only on the instructions supplied.

Some scientists, as well as many science fiction writers and fans, believe that once intelligent machines reach a tipping point, they will be able to improve exponentially.

AI-based beings that far exceed human capabilities might be one conceivable result.

The point at which AI takes control of its own self-improvement is known as the Singularity, or artificial superintelligence (ASI).

If ASI is achieved, it will have unforeseeable consequences for human society.

Some pundits worry that ASI would jeopardize humanity's safety and dignity.

Whether the Singularity will ever happen, and how dangerous it might be, remain matters of dispute.

Narrow AI applications are becoming more popular across the globe.

Machine learning (ML) is at the heart of most new applications, and most AI examples in the news are connected to this subset of technology.

Traditional or conventional algorithms are not the same as machine learning programs.

In programs that cannot learn, a computer programmer actively adds code to account for every action of an algorithm.

All of the decisions made along the process are governed by the programmer's guidelines.

This necessitates the programmer imagining and coding for every possible circumstance that an algorithm may face.

This kind of program code is bulky and often inadequate, especially if it must be updated frequently to account for new or unanticipated scenarios.

The utility of hard-coded algorithms approaches its limit in cases where the criteria for optimum judgments are unclear or impossible for a human programmer to foresee.

Machine learning is the process of training a computer to detect and identify patterns via examples rather than predefined rules.



This is achieved, according to Google engineer Jason Mayes, by reviewing very large quantities of training data or by some other form of programmed learning step.

New patterns may be extracted by processing the training data.

The system may then classify new, previously unseen data based on the patterns it has already found.

Machine learning allows an algorithm to recognize patterns or rules underlying decision-making processes on its own.

Machine learning also allows a system's output to improve over time as it gains more experience (Mayes 2017).

A human programmer continues to play a vital role in this learning process, influencing results by making choices like developing the exact learning algorithm, selecting the training data, and choosing other design elements and settings.

Machine learning is powerful once it's up and running because it can adapt and enhance its ability to categorize new data without the need for direct human interaction.

In other words, the output quality increases as the system gains experience.
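The contrast between hand-coding every rule and learning patterns from examples can be sketched in a few lines of Python. The nearest-centroid classifier below is an illustrative toy chosen for brevity, not an algorithm named in the text, and all data are invented.

```python
# Minimal sketch of learning from examples rather than hand-coded rules.
# The "training" step extracts one pattern (a centroid) per label from
# example data; prediction then classifies unseen points by similarity.

def train(examples):
    """examples: list of (features, label). Returns one centroid per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Classify a new point by its nearest learned centroid."""
    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist2)

# Invented example data: two clusters the system has never been told about.
training_data = [
    ([1.0, 1.2], "small"), ([0.8, 1.0], "small"),
    ([5.0, 5.5], "large"), ([5.2, 4.8], "large"),
]
model = train(training_data)
print(predict(model, [0.9, 1.1]))  # -> small
print(predict(model, [5.1, 5.0]))  # -> large
```

No rule for "small" or "large" was ever written by the programmer; the decision boundary is recovered from the examples, and adding more examples refines it.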

Artificial intelligence is a broad word that refers to the science of making computers intelligent.

Scientists describe AI as a computer system that can collect data and use it to make judgments or solve problems.

Another popular scientific definition of AI is "a software program paired with hardware that can receive (or sense) inputs from the world around it, evaluate and analyze those inputs, and create outputs and suggestions without the assistance of a person." When programmers claim an AI system can learn, they're referring to the program's ability to change its own processes in order to provide more accurate outputs or predictions.

AI-based systems are now being developed and used in practically every industry, from agriculture to space exploration, and in applications ranging from law enforcement to online banking.

The methods and techniques used in computer science are always evolving, extending, and improving.

Related machine learning techniques, such as reinforcement learning and neural networks, are important components of cutting-edge artificial intelligence systems.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 

Embodiment, AI and; Superintelligence; Turing, Alan; Turing Test.


Further Reading:


Kelnar, David. 2016. “The Fourth Industrial Revolution: A Primer on Artificial Intelligence (AI).” Medium, December 2, 2016. https://medium.com/mmc-writes/the-fourth-industrial-revolution-a-primer-on-artificial-intelligence-ai-ff5e7fffcae1.

Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

Mayes, Jason. 2017. Machine Learning 101. https://docs.google.com/presentation/d/1kSuQyW5DTnkVaZEjGYCkfOxvzCqGEFzWBy4e9Uedd9k/htmlpresent.

Müller, Vincent C., and Nick Bostrom. 2016. “Future Progress in Artificial Intelligence: A Survey of Expert Opinion.” In Fundamental Issues of Artificial Intelligence, edited by Vincent C. Müller, 553–71. New York: Springer.

Russell, Stuart, and Peter Norvig. 2003. Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice Hall.

Samuel, Arthur L. 1988. “Some Studies in Machine Learning Using the Game of Checkers I.” In Computer Games I, 335–65. New York: Springer.



Artificial Intelligence - Who Is Anne Foerst?

 


 

Anne Foerst (1966–) is a Lutheran minister, theologian, author, and computer science professor at St. Bonaventure University in Allegany, New York.



In 1996, Foerst earned a doctorate in theology from the Ruhr-University of Bochum in Germany.

She has worked as a research associate at Harvard Divinity School, a project director at MIT, and a research scientist at the Massachusetts Institute of Technology's Artificial Intelligence Laboratory.

She supervised the God and Computers Project at MIT, which encouraged people to talk about existential questions raised by scientific research.



Foerst has written several scientific and popular pieces on the need for improved conversation between religion and science, as well as shifting concepts of personhood in the light of robotics research.



God in the Machine, published in 2004, details her work as a theological counselor to the MIT Cog and Kismet robotics teams.

Foerst's study has been influenced by her work as a hospital counselor, her years at MIT collecting ethnographic data, and the writings of German-American Lutheran philosopher and theologian Paul Tillich.



As a medical counselor, she started to rethink what it meant to be a "normal" human being.


Foerst was inspired to investigate the circumstances under which individuals are believed to be persons after seeing variations in physical and mental capabilities in patients.

In her work, Foerst distinguishes between the terms "human" and "person," with human referring to members of our biological species and person referring to one who has been granted a form of social inclusion that can also be revoked.



Foerst uses the Holocaust as an illustration of how personhood must be conferred but may also be revoked.


As a result, personhood is always vulnerable.

Using this schema of personhood, something people bestow on one another, Foerst can explore the inclusion of robots as persons.


Tillich's ideas on sin, alienation, and relationality are extended to the connections between humans and robots, as well as robots and other robots, in her work on robots as potential people.


  • People become alienated, according to Tillich, when they ignore opposing polarities in their life, such as the need for safety and novelty or freedom.
  • People reject reality, which is fundamentally ambiguous, when they refuse to recognize and interact with these opposing forces, cutting out or neglecting one side in order to concentrate entirely on the other.
  • People are alienated from their lives, from the people around them, and (for Tillich) from God if they do not accept the complicated conflicts of existence.


In AI research, these opposing poles of danger and opportunity appear as the threat of reducing all things to objects or data that can be measured and analyzed, and the opportunity to enhance people's capacity to form connections and confer identity.



Foerst has attempted to establish a dialogue between theology and other structured fields of inquiry, following Tillich's paradigm.


Although her work has been welcomed in labs and classrooms, it has also met with skepticism and pushback from some who are concerned that she is bringing counterfactual notions into the realm of science.

These concerns are crucial data for Foerst, who argues for a mutualistic approach in which AI researchers and theologians accept strongly held preconceptions about the universe and the human condition in order to have fruitful discussions.

Many valuable discoveries come from these dialogues, according to Foerst's study, as long as the parties have the humility to admit that neither side has a perfect grasp of the universe or human existence.



Foerst's work on AI is marked by humility, as she claims that researchers are startled by the vast complexity of the human person while seeking to duplicate human cognition, function, and form in the figure of the robot.


The way people are socially rooted, socially conditioned, and socially accountable adds to the complexity of any particular person.

Because human beings' embedded complexity is intrinsically physical, Foerst emphasizes the significance of an embodied approach to AI.

Foerst explored this embodied technique while at MIT, where having a physical body capable of interaction is essential for robotic research and development.


When addressing the evolution of artificial intelligence (AI), Foerst emphasizes a clear distinction between robots and computers in her work.


Robots have bodies, and those bodies are an important aspect of their learning and interaction abilities.

Although supercomputers can accomplish amazing analytic jobs and participate in certain forms of communication, they lack the ability to learn through experience and interact with others.

Foerst is dismissive of research that assumes intelligent computers may be created by re-creating the human brain.

Rather, she contends that bodies are an important part of intellect.


Foerst proposes raising robots in a way similar to human child-rearing, giving robots opportunities to interact with and learn from the environment.


This process is costly and time-consuming, just as it is for human children, and Foerst reports that funding for creative and time-intensive AI research has vanished, replaced by results-driven and military-focused research that justifies itself through immediate applications, especially since the terrorist attacks of September 11, 2001.

Foerst's work incorporates a broad variety of sources, including religious texts, popular films and television programs, science fiction, and examples from the disciplines of philosophy and computer science.



Loneliness, according to Foerst, is a fundamental motivator for humans' desire of artificial life.


Both fictional imaginings of the construction of a mechanical companion species and actual robotics and AI research are driven by feelings of alienation, which Foerst ties to the theological position of a lost contact with God.


Academic critics of Foerst believe that she has replicated a paradigm initially proposed by German theologian and scholar Rudolf Otto in his book The Idea of the Holy (1917).


The experience of the divine, according to Otto, may be found in a moment of simultaneous attraction and dread, which he calls the numinous.

Critics contend that Foerst used this concept when she claimed that humans sense attraction and dread in the figure of the robot.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.


See also: 


Embodiment, AI and; Nonhuman Rights and Personhood; Pathetic Fallacy; Robot Ethics; Spiritual Robots.


Further Reading:


Foerst, Anne. 2005. God in the Machine: What Robots Teach Us About Humanity and God. New York: Plume.

Geraci, Robert M. 2007. “Robots and the Sacred in Science Fiction: Theological Implications of Artificial Intelligence.” Zygon 42, no. 4 (December): 961–80.

Gerhart, Mary, and Allan Melvin Russell. 2004. “Cog Is to Us as We Are to God: A Response to Anne Foerst.” Zygon 33, no. 2: 263–69.

Groks Science Radio Show and Podcast with guest Anne Foerst. Audio available online at http://ia800303.us.archive.org/3/items/groks146/Groks122204_vbr.mp3. Transcript available at https://grokscience.wordpress.com/transcripts/anne-foerst/.

Reich, Helmut K. 2004. “Cog and God: A Response to Anne Foerst.” Zygon 33, no. 2: 255–62.



Artificial Intelligence - What Is Swarm Intelligence and Distributed Intelligence?



Distributed intelligence is the obvious next step from developing single autonomous agents: building groups of distributed autonomous agents that coordinate themselves.

A multi-agent system is made up of many agents.

Communication is a prerequisite for cooperation.

The fundamental concept is to allow for distributed problem-solving rather than employing a collection of agents as a simple parallelization of the single-agent technique.

Agents effectively cooperate, exchange information, and assign duties to one another.

Sensor data, for example, is exchanged to learn about the current condition of the environment, and an agent is given a task based on who is in the best position to complete that job at the time.

Agents might be software or embodied agents in the form of robots, resulting in a multi-robot system.

RoboCup Soccer (Kitano et al. 1997) is an example of this, in which two teams of robots compete in soccer.

Typical challenges include detecting the ball cooperatively and sharing that knowledge, as well as assigning tasks, such as who will go after the ball next.



Agents may have a complete global perspective or simply a partial picture of the surroundings.

The agent's and the entire approach's complexity may be reduced by restricting information to the local area.

Regardless of their local perspective, agents may communicate, disseminate, and transmit information across the agent group, resulting in a distributed collective vision of global situations.





Three distinct concepts may be used to construct distributed intelligence: scalable decentralized systems, non-scalable decentralized systems, and decentralized systems with central components.

Without a master-slave hierarchy or a central control element, all agents in scalable decentralized systems function in equal roles.

Because the system only allows for local agent-to-agent communication, there is no need for all agents to coordinate with each other.

This allows for potentially huge system sizes.

All-to-all communication is an important aspect of the coordination mechanism in non-scalable decentralized systems, but it may become a bottleneck in systems with too many agents.

A typical RoboCup-Soccer system, for example, requires all robots to cooperate with all other robots at all times.

Finally, in decentralized systems with central components, the agents may interact with one another through a central server (e.g., cloud) or be coordinated by a central control.

It is feasible to mix the decentralized and central approaches by delegating basic tasks to the agents, who will complete them independently and locally, while more difficult activities will be managed centrally.

Vehicular ad hoc networks are an example use case (Liang et al. 2015).

Each agent is self-contained, yet collaboration aids in traffic coordination.

For example, intelligent automobiles may build dynamic multi-hop networks to notify others about an accident that is still hidden from view.

For a safer and more efficient traffic flow, cars may coordinate passing moves.

All of this may be accomplished by worldwide communication with a central server or, depending on the stability of the connection, through local car-to-car communication.

Natural swarm systems and artificial, designed distributed systems are combined in swarm intelligence research.

Extracting fundamental principles from decentralized biological systems and translating them into design principles for decentralized engineering systems is a core notion in swarm intelligence (scalable decentralized systems as defined above).

Swarm intelligence was inspired by flocks, swarms, and herds' collective activities.

Social insects such as ants, honeybees, wasps, and termites are a good example.

These swarm systems are built on self-organization and work in a fundamentally decentralized manner.

Crystallization, pattern creation in embryology, and synchronization in swarms are examples of self-organization, which is a complex interaction of positive (deviations are encouraged) and negative feedback (deviations are damped).

In swarm intelligence, four key features of systems are investigated:

  • The system is made up of a large number of autonomous agents that are homogeneous in terms of their capabilities and behaviors.
  • Each agent follows a set of relatively simple rules compared to the task's complexity.
  • Each agent acts only on local information and local interactions.
  • The resulting system behavior is heavily reliant on agent interaction and collaboration.

Reynolds (1987) produced a seminal paper detailing flocking behavior in birds based on three basic local rules: alignment (align direction of movement with neighbors), cohesion (remain close to your neighbors), and separation (keep a minimum distance from any agent).

As a consequence, a self-organizing flocking behavior emerges that mimics real-life flocks.
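A single update step under these three rules can be sketched as follows. The vector format, weights, and separation radius are illustrative assumptions, not Reynolds's original parameters.

```python
# One flocking update for a single agent, following Reynolds's three rules:
# alignment, cohesion, and separation. 2D positions/velocities as tuples.

def flocking_step(pos, vel, neighbors, sep_dist=1.0,
                  w_align=0.05, w_cohere=0.01, w_separate=0.1):
    """neighbors: list of (position, velocity) pairs within the agent's view.
    Returns the agent's new velocity."""
    if not neighbors:
        return vel
    n = len(neighbors)
    # Alignment: steer toward the neighbors' average heading.
    avg_vx = sum(v[0] for _, v in neighbors) / n
    avg_vy = sum(v[1] for _, v in neighbors) / n
    # Cohesion: steer toward the neighbors' center of mass.
    cx = sum(p[0] for p, _ in neighbors) / n
    cy = sum(p[1] for p, _ in neighbors) / n
    # Separation: steer away from neighbors closer than sep_dist.
    sx = sy = 0.0
    for p, _ in neighbors:
        dx, dy = pos[0] - p[0], pos[1] - p[1]
        if (dx * dx + dy * dy) ** 0.5 < sep_dist:
            sx += dx
            sy += dy
    return (vel[0] + w_align * (avg_vx - vel[0])
                   + w_cohere * (cx - pos[0]) + w_separate * sx,
            vel[1] + w_align * (avg_vy - vel[1])
                   + w_cohere * (cy - pos[1]) + w_separate * sy)
```

Note that the agent consults only its neighbors; no global flock state is needed, which is exactly what makes the behavior self-organizing.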

By depending only on local interactions between agents, a high level of resilience may be achieved.

Any agent, at any moment, has only a limited understanding of the system's global state (swarm-level state) and relies on communication with nearby agents to complete its duty.

Because the swarm's knowledge is distributed, there is rarely a single point of failure.

A perfectly homogeneous swarm has a high degree of redundancy; that is, all agents have the same capabilities, and any agent can therefore be replaced by any other.

By depending only on local interactions between agents, a high level of scalability may be obtained.

Due to the dispersed data storage architecture, there is less requirement to synchronize or maintain data coherent.

Because the communication and coordination overhead for each agent is dictated by the size of its neighborhood, the same algorithms may be employed for systems of nearly any scale.

Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) are two well-known examples of swarm intelligence in engineered systems from the optimization discipline.

Both are metaheuristics, which means they may be used to solve a wide range of optimization problems.

Ants and their use of pheromones to locate the shortest pathways inspired ACO.

The optimization problem must be represented as a graph.

A swarm of virtual ants travels from node to node, choosing which edge to take next based on how many other ants have used it before (tracked through pheromone, implementing positive feedback) and a heuristic value, such as edge length (greedy search).

Evaporation of pheromones balances the exploration-exploitation trade-off (negative feedback).

The traveling salesman problem, vehicle routing, and network routing are all examples of ACO applications.
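The two feedback mechanisms just described, pheromone-biased edge choice and evaporation with deposit, can be sketched as follows. The data structures and parameter values are invented for illustration and are not a complete ACO implementation.

```python
# Core ACO mechanics: probabilistic edge selection plus pheromone update.
import random

def choose_edge(edges, pheromone, alpha=1.0, beta=2.0):
    """edges: {next_node: edge_length}. Pick the next node with probability
    proportional to pheromone^alpha * (1/length)^beta (positive feedback
    plus greedy heuristic)."""
    weights = {n: (pheromone[n] ** alpha) * ((1.0 / d) ** beta)
               for n, d in edges.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for node, w in weights.items():
        r -= w
        if r <= 0:
            return node
    return node  # numerical fallback: return the last candidate

def update_pheromone(pheromone, tours, evaporation=0.5):
    """Evaporate pheromone on all edges (negative feedback), then deposit
    an amount inversely proportional to each tour's length (positive
    feedback). tours: list of (edge_list, tour_length)."""
    for edge in pheromone:
        pheromone[edge] *= (1.0 - evaporation)
    for tour_edges, length in tours:
        for edge in tour_edges:
            pheromone[edge] += 1.0 / length
    return pheromone
```

Shorter tours deposit more pheromone per edge, so over repeated iterations the colony concentrates on short paths while evaporation prevents premature lock-in.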

Flocking is a source of inspiration for PSO.

Agents navigate the search space using velocity vectors that are influenced by the global and local best-known solutions (positive feedback), the agent's previous path, and a random component.
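A minimal PSO loop is sketched below, minimizing a one-dimensional function as a stand-in for a real optimization problem. The particle count, iteration budget, and coefficients are typical textbook choices, not values from the text.

```python
# Minimal particle swarm optimization in one dimension.
import random

def pso_minimize(f, n_particles=20, iters=100, bounds=(-10.0, 10.0),
                 inertia=0.7, c_personal=1.5, c_social=1.5):
    xs = [random.uniform(*bounds) for _ in range(n_particles)]  # positions
    vs = [0.0] * n_particles                                    # velocities
    pbest = xs[:]                         # each particle's best-known position
    gbest = min(pbest, key=f)             # swarm's best-known position
    for _ in range(iters):
        for i in range(n_particles):
            # Velocity blends momentum, pull toward the personal best,
            # and pull toward the global best, each randomly weighted.
            vs[i] = (inertia * vs[i]
                     + c_personal * random.random() * (pbest[i] - xs[i])
                     + c_social * random.random() * (gbest - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest + [gbest], key=f)
    return gbest

best = pso_minimize(lambda x: (x - 3.0) ** 2)  # minimum is at x = 3
```

Each particle needs only the shared best-known solution and its own history, which is why the scheme parallelizes so naturally.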

While both ACO and PSO conceptually function in a completely distributed manner, they do not need parallel computing to be deployed.

They may, however, be parallelized with ease.

Swarm robotics is the application of swarm intelligence to embodied systems, while ACO and PSO are software-based methods.

Swarm robotics applies the concept of self-organizing systems based on local information to multi-robot systems with a high degree of resilience and scalability.

Following the example of social insects, the goal is to make each individual robot relatively basic in comparison to the task complexity while yet allowing them to collaborate to perform complicated problems.

A swarm robot can only communicate with other swarm robots since it can only function on local information.

The applied control algorithms are designed to allow maximum scalability given a fixed swarm density (i.e., a constant number of robots per area).

The same control methods should perform effectively regardless of system size, whether the swarm is grown or shrunk by adding or removing robots.

A super-linear performance improvement is often found, meaning that doubling the size of the swarm improves the swarm's performance by more than a factor of two.

As a result, each robot is more productive than previously.

Swarm robotics systems have been demonstrated to be effective for a wide range of activities, including aggregation and dispersion behaviors, as well as more complicated tasks like item sorting, foraging, collective transport, and collective decision-making.

Rubenstein et al. (2014) conducted the largest swarm robotics experiment to date, using 1,024 miniature mobile robots to demonstrate self-assembly by arranging the robots into predefined shapes.

The majority of the tests were conducted in the lab, but new research has taken swarm robots to the field.

Duarte et al. (2016), for example, built a swarm of autonomous surface watercraft that cruise the ocean together.

Modeling the relationship between individual behavior and swarm behavior, creating advanced design principles, and deriving assurances of system attributes are all major issues in swarm intelligence.

The micro-macro problem is the challenge of deriving the resulting swarm behavior from a given individual behavior, and vice versa.

It has proven to be a difficult problem that manifests itself both in mathematical modeling and, as an engineering difficulty, in the robot controller design process.

The creation of sophisticated methods for designing swarm behavior is not only crucial to swarm intelligence research but has also proved to be very difficult.

Similarly, due to the combinatorial explosion of action-to-agent assignments, multi-agent learning and evolutionary swarm robotics (i.e., application of evolutionary computation techniques to swarm robotics) do not scale well with task complexity.

Despite the benefits of robustness and scalability, obtaining strong guarantees for swarm intelligence systems is challenging.

Swarm systems' availability and reliability can only be assessed experimentally in general. 


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


AI and Embodiment.


Further Reading:


Bonabeau, Eric, Marco Dorigo, and Guy Theraulaz. 1999. Swarm Intelligence: From Natural to Artificial System. New York: Oxford University Press.

Duarte, Miguel, Vasco Costa, Jorge Gomes, Tiago Rodrigues, Fernando Silva, Sancho Moura Oliveira, Anders Lyhne Christensen. 2016. “Evolution of Collective Behaviors for a Real Swarm of Aquatic Surface Robots.” PloS One 11, no. 3: e0151834.

Hamann, Heiko. 2018. Swarm Robotics: A Formal Approach. New York: Springer.

Kitano, Hiroaki, Minoru Asada, Yasuo Kuniyoshi, Itsuki Noda, Eiichi Osawa, Hitoshi Matsubara. 1997. “RoboCup: A Challenge Problem for AI.” AI Magazine 18, no. 1: 73–85.

Liang, Wenshuang, Zhuorong Li, Hongyang Zhang, Shenling Wang, Rongfang Bie. 2015. “Vehicular Ad Hoc Networks: Architectures, Research Issues, Methodologies, Challenges, and Trends.” International Journal of Distributed Sensor Networks 11, no. 8: 1–11.

Reynolds, Craig W. 1987. “Flocks, Herds, and Schools: A Distributed Behavioral Model.” Computer Graphics 21, no. 4 (July): 25–34.

Rubenstein, Michael, Alejandro Cornejo, and Radhika Nagpal. 2014. “Programmable Self-Assembly in a Thousand-Robot Swarm.” Science 345, no. 6198: 795–99.




Artificial Intelligence - Who Is Rodney Brooks?

 


Rodney Brooks (1954–) is a business and policy adviser, as well as a computer science researcher.

He is a recognized expert in the fields of computer vision, artificial intelligence, robotics, and artificial life.

Brooks is well-known for his work in artificial intelligence and behavior-based robotics.

His iRobot Roomba autonomous robotic vacuum cleaners are among the most widely used home robots in America.

Brooks is well-known for his support of a bottom-up approach to computer science and robotics, an insight he arrived at during a long, uninterrupted visit to his wife's relatives in Thailand.

Brooks claims that situatedness, embodiment, and perception are just as crucial as cognition in describing the dynamic actions of intelligent beings.

This method is currently known as behavior-based artificial intelligence or action-based robotics.

Brooks' approach to intelligence, which avoids explicitly planned reasoning, contrasts with the symbolic reasoning and representation method that dominated artificial intelligence research over the first few decades.

Much of the early advances in robotics and artificial intelligence, according to Brooks, was based on the formal framework and logical operators of Alan Turing and John von Neumann's universal computer architecture.

He argued that these artificial systems had strayed far from the biological systems they were supposed to reflect.

Low-speed, massively parallel processing and adaptive interaction with their surroundings were essential for living creatures.

These were not, in his opinion, elements of traditional computer design, but rather components of what Brooks termed, in the mid-1980s, "subsumption architecture."

According to Brooks, behavior-based robots are placed in real-world contexts and learn effective behaviors from them.

They need to be embodied in order to be able to interact with the environment and get instant feedback from their sensory inputs.

Specific conditions, signal changes, and real-time physical interactions are usually the source of intelligence.

Intelligence may be difficult to define functionally since it comes through a variety of direct and indirect interactions between different robot components and the environment.

As a professor at the Massachusetts Institute of Technology's Artificial Intelligence Laboratory, Brooks developed numerous notable mobile robots based on the subsumption architecture.

Allen, the first of these behavior-based robots, was outfitted with sonar range and motion sensors.

Three tiers of control were present on the robot.

The first, most rudimentary layer gave it the capacity to avoid static and dynamic obstacles.

The second implemented a random-walk algorithm that let the robot occasionally change course.

The third behavioral layer monitored distant locations that might serve as goals while the other two control layers were inactive.
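Allen's three layers suggest the general flavor of subsumption-style control, which can be sketched as a layered arbitration in Python. The layer logic, sensor dictionary, and command names below are invented for illustration, and this is a much-simplified priority loop: real subsumption networks wire suppression and inhibition between augmented finite-state machines rather than polling layers in sequence.

```python
# Illustrative sketch of layered control loosely modeled on Allen's
# three behaviors: obstacle avoidance, random wandering, goal seeking.

def avoid_layer(sensors):
    """Lowest layer: turn away from any obstacle that is too close."""
    if sensors.get("obstacle_distance", 999.0) < 1.0:
        return "turn_away"
    return None

def wander_layer(sensors):
    """Middle layer: occasionally change heading at random."""
    if sensors.get("wander_trigger", False):
        return "random_turn"
    return None

def seek_layer(sensors):
    """Top layer: head toward a distant goal when one is visible."""
    if sensors.get("goal_visible", False):
        return "head_to_goal"
    return None

def subsumption_control(sensors):
    # Safety-critical lower layers take precedence: obstacle avoidance
    # overrides wandering, which overrides goal seeking.
    for layer in (avoid_layer, wander_layer, seek_layer):
        command = layer(sensors)
        if command is not None:
            return command
    return "move_forward"
```

The key property the sketch preserves is that each layer is a complete, simple behavior reacting directly to sensor input, with no central world model or planner.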

Another robot, Herbert, used a dispersed array of 8-bit microprocessors and 30 infrared proximity sensors to avoid obstructions, navigate low walls, and gather empty drink cans scattered across several workplaces.

Genghis was a six-legged robot that could walk across rugged terrain and had four onboard microprocessors, 22 sensors, and 12 servo motors.

Genghis was able to stand, balance, and maintain itself, as well as climb stairs and follow humans.

With the support of Anita Flynn, an MIT Mobile Robotics Group research scientist, Brooks started imagining scenarios in which behavior-based robots might assist in the exploration of the surfaces of other planets.

The two roboticists argued in their 1989 essay "Fast, Cheap, and Out of Control," published by the British Interplanetary Society, that space organizations like the Jet Propulsion Laboratory should reconsider plans for expensive, large, and slow-moving mission rovers and instead consider using larger sets of small mission rovers to save money and avoid risk.

Brooks and Flynn came to the conclusion that their autonomous robot technology could be constructed and tested swiftly by space agencies, and that it could serve dependably on other planets even when it was out of human control.

When the Sojourner rover arrived on Mars in 1997, it had some behavior-based autonomous robot capabilities, despite its size and unique design.

Brooks and a new Humanoid Robotics Group at the MIT Artificial Intelligence Laboratory started working on Cog, a humanoid robot, in the 1990s.

The term "cog" had two meanings: it referred to the teeth on gears as well as the word "cognitive." Cog had a number of objectives, many of which were aimed at encouraging social communication between the robot and a human.

Cog had a human-like visage and a lot of motor mobility in his head, trunk, arms, and legs when he was created.

Cog was equipped with sensors that allowed him to see, hear, touch, and speak.

Cynthia Breazeal, the group researcher who designed Cog's mechanics and control system, used the lessons learned from human interaction with the robot to create Kismet, a new robot in the lab.

Kismet is an affective robot that is capable of recognizing, interpreting, and replicating human emotions.

The meeting of Cog and Kismet is a watershed moment in the history of artificial emotional intelligence.

Rodney Brooks, cofounder and chief technology officer of iRobot Corporation, has sought commercial and military applications of his robotics research in recent decades.

PackBot, a robot commonly used to detect and defuse improvised explosive devices in Iraq and Afghanistan, was developed with a grant from the Defense Advanced Research Projects Agency (DARPA) in 1998.

After the terrorist attacks of September 11, 2001, PackBot was used to search the site of the World Trade Center; it was later deployed to examine damage at the Fukushima Daiichi nuclear power plant in Japan following the 2011 earthquake and tsunami.

Brooks and others at iRobot created a toy robot that was sold by Hasbro in 2000.

The end product, My Real Baby, is a realistic doll that can cry, fuss, sleep, laugh, and show hunger.

The Roomba cleaning robot was created by the iRobot Corporation.

Released in 2002, the Roomba is a disc-shaped vacuum cleaner featuring roller wheels, brushes, filters, and a squeegee vacuum.

The Roomba, like Brooks's other behavior-based robots, uses sensors to detect obstacles and avoid hazards such as falling down stairs.

For self-charging and room mapping, newer versions use infrared beams and photocell sensors.
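The hazard- and obstacle-avoiding behaviors described above can be sketched as a simple priority scheme in the spirit of Brooks's subsumption architecture, where higher-priority behaviors override lower ones. The sensor names and actions below are illustrative, not the actual Roomba firmware:

```python
# Minimal subsumption-style control loop: each behavior inspects the
# sensors, and the highest-priority behavior that fires wins.

def cliff_detected(sensors):
    return sensors.get("cliff", False)

def obstacle_detected(sensors):
    return sensors.get("bump", False)

def choose_action(sensors):
    """Higher-priority behaviors subsume (override) lower-priority ones."""
    if cliff_detected(sensors):      # highest priority: don't fall down stairs
        return "back_up_and_turn"
    if obstacle_detected(sensors):   # next: steer around obstacles
        return "turn_away"
    return "drive_forward"           # default wandering behavior

print(choose_action({"cliff": True, "bump": True}))  # back_up_and_turn
print(choose_action({"bump": True}))                 # turn_away
print(choose_action({}))                             # drive_forward
```

The key design idea, following Brooks, is that there is no central world model: each layer reacts directly to sensor readings, and safety-critical layers simply suppress the output of the layers beneath them.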

By 2019, iRobot had sold over 25 million robots worldwide.

Brooks is also Rethink Robotics' cofounder and chief technology officer.

Founded in 2008 as Heartland Robotics, the company creates low-cost industrial robots.

Baxter, Rethink's first robot, can do basic repetitive activities including loading, unloading, assembling, and sorting.

Baxter features a computer screen displaying an animated human face.

Baxter has integrated sensors and cameras that allow it to detect and avoid collisions when people are nearby, a critical safety feature.

Baxter may therefore be used in ordinary industrial settings without the need for a safety cage.

Unskilled personnel can quickly train the robot simply by moving its arms through the desired motions.

Baxter remembers these gestures and adapts them to other jobs.

The controls on its arms may be used to make fine motions.
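This style of training by demonstration can be thought of as recording a sequence of arm poses while a person guides the robot, then replaying them. The class and method names in this sketch are invented for illustration and are not Rethink's actual API:

```python
# Hypothetical sketch of kinesthetic "teaching by demonstration":
# the operator moves the arm through waypoints, the robot records
# them, and the recorded trajectory can later be replayed.

class TeachableArm:
    def __init__(self):
        self.recorded = []

    def record_pose(self, joint_angles):
        """Store a joint-space pose captured while a person guides the arm."""
        self.recorded.append(tuple(joint_angles))

    def replay(self):
        """Return the recorded trajectory for playback."""
        return list(self.recorded)

arm = TeachableArm()
arm.record_pose([0.0, 0.5, 1.0])   # operator guides arm to pick position
arm.record_pose([0.2, 0.1, 0.8])   # operator guides arm to place position
print(arm.replay())
```

A real system would also interpolate between recorded waypoints and adapt them to new part locations, but the core interaction is the same: demonstration replaces programming.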

Sawyer is a smaller version of Rethink's Baxter collaborative robot, marketed for performing hazardous or tedious industrial jobs in confined spaces.

Brooks has often said that scientists are still unable to solve the hard problems of consciousness.

He claims that artificial intelligence and artificial life researchers have overlooked an essential aspect of living systems, one that keeps the gap between the nonliving and living worlds wide.

This remains true even though all living things in our world are made of nonliving atoms.

Brooks speculates that some of the AI and ALife researchers' parameters are incorrect, or that current models are too simple.

It's also possible that researchers are still lacking in raw computing power.

However, Brooks suspects there may be something about biological life and subjective experience, some component or property, that is currently undetectable or hidden from scientific view.

Brooks attended Flinders University in Adelaide, South Australia, to study pure mathematics.

At Stanford University, he earned his PhD under the supervision of Thomas Binford, an American computer scientist and pioneer of computer vision.

He extended his dissertation and published it as Model-Based Computer Vision (1984).

From 1997 until 2007, he was Director of the MIT Artificial Intelligence Laboratory, which was renamed the Computer Science & Artificial Intelligence Laboratory (CSAIL) in 2003.

Brooks has received numerous distinctions and prizes for his contributions to artificial intelligence and robotics.

He is a member of both the American Academy of Arts and Sciences and the Association for Computing Machinery.

Brooks has won the IEEE Robotics and Automation Award as well as the Joseph F. Engelberger Robotics Award for Leadership.

He is now the vice chairman of the Toyota Research Institute's advisory board.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Embodiment, AI and; Tilden, Mark.



Further Reading

Brooks, Rodney A. 1984. Model-Based Computer Vision. Ann Arbor, MI: UMI Research Press.

Brooks, Rodney A. 1990. “Elephants Don’t Play Chess.” Robotics and Autonomous Systems 6, no. 1–2 (June): 3–15.

Brooks, Rodney A. 1991. “Intelligence without Reason.” AI Memo No. 1293. Cambridge, MA: MIT Artificial Intelligence Laboratory.

Brooks, Rodney A. 1999. Cambrian Intelligence: The Early History of the New AI. Cambridge, MA: MIT Press.

Brooks, Rodney A. 2002. Flesh and Machines: How Robots Will Change Us. New York: Pantheon.

Brooks, Rodney A., and Anita M. Flynn. 1989. “Fast, Cheap, and Out of Control.” Journal of the British Interplanetary Society 42 (December): 478–85.
