Artificial Intelligence - Who Is Anne Foerst?

 


 

Anne Foerst (1966–) is a Lutheran minister, theologian, author, and computer science professor at St. Bonaventure University in Allegany, New York.



In 1996, Foerst earned a doctorate in theology from Ruhr University Bochum in Germany.

She has worked as a research associate at Harvard Divinity School and as a project director and research scientist at the Massachusetts Institute of Technology's Artificial Intelligence Laboratory.

She supervised the God and Computers Project at MIT, which encouraged people to discuss the existential questions raised by scientific research.



Foerst has written several scientific and popular pieces on the need for improved conversation between religion and science, as well as on shifting concepts of personhood in light of robotics research.



God in the Machine, published in 2004, details her work as a theological counselor to the MIT Cog and Kismet robotics teams.

Foerst's research has been influenced by her work as a hospital counselor, her years collecting ethnographic data at MIT, and the writings of the German-American Lutheran philosopher and theologian Paul Tillich.



As a hospital counselor, she began to rethink what it means to be a "normal" human being.


Observing wide variation in patients' physical and mental capabilities inspired Foerst to investigate the circumstances under which individuals are regarded as persons.

In her work, Foerst distinguishes between the terms "human" and "person," with human referring to members of our biological species and person referring to a being who has been granted a form of revocable social inclusion.



Foerst uses the Holocaust as an illustration of how personhood must be conferred but may also be revoked.


As a result, personhood is always vulnerable.

Using this schema of personhood as something people bestow on one another, Foerst is able to explore the inclusion of robots as persons.


In her work on robots as potential persons, Tillich's ideas on sin, alienation, and relationality are extended to the connections between humans and robots, as well as between robots and other robots.


  • People become alienated, according to Tillich, when they ignore opposing polarities in their lives, such as the need for safety and the need for novelty or freedom.
  • People reject reality, which is fundamentally ambiguous, when they refuse to recognize and interact with these opposing forces, cutting out or neglecting one side in order to concentrate entirely on the other.
  • People are alienated from their lives, from the people around them, and (for Tillich) from God if they do not accept the complicated conflicts of existence.


AI research therefore contains the opposing poles of danger and opportunity: the danger of reducing all things to objects or data that can be measured and studied, and the opportunity to enhance people's capacity to form relationships and confer identity.



Foerst has attempted to establish a dialogue between theology and other structured fields of inquiry, following Tillich's paradigm.


Although Foerst's work has been warmly welcomed in labs and classrooms, it has also met with skepticism and pushback from some who are concerned that she is bringing counterfactual notions into the realm of science.

These concerns are crucial data for Foerst, who argues for a mutualistic approach in which AI researchers and theologians acknowledge their strongly held preconceptions about the universe and the human condition in order to have fruitful discussions.

Many valuable discoveries come from these dialogues, according to Foerst, as long as the parties have the humility to admit that neither side has a perfect grasp of the universe or of human existence.



Foerst's work on AI is marked by humility, as she observes that researchers, in seeking to duplicate human cognition, function, and form in the figure of the robot, are startled by the vast complexity of the human person.


The way people are socially rooted, socially conditioned, and socially accountable adds to the complexity of any particular person.

Because human beings' embedded complexity is intrinsically physical, Foerst emphasizes the significance of an embodied approach to AI.

Foerst explored this embodied approach while at MIT, where having a physical body capable of interaction is essential for robotic research and development.


When addressing the evolution of artificial intelligence (AI) in her work, Foerst emphasizes a clear distinction between robots and computers.


Robots have bodies, and those bodies are an important aspect of their learning and interaction abilities.

Although supercomputers can accomplish amazing analytic jobs and participate in certain forms of communication, they lack the ability to learn through experience and interact with others.

Foerst is dismissive of research that assumes intelligent computers may be created by re-creating the human brain.

Rather, she contends that bodies are an essential part of intelligence.


Foerst proposes raising robots in a way similar to human child-rearing, in which robots are given opportunities to interact with and learn from the environment.


This process is costly and time-consuming, just as it is for human children. Foerst reports that funding for such creative and time-intensive AI research has vanished, especially since the terrorist attacks of September 11, 2001, replaced by results-driven and military-focused research that justifies itself through immediate applications.

Foerst's work incorporates a broad variety of sources, including religious texts, popular films and television programs, science fiction, and examples from the disciplines of philosophy and computer science.



Loneliness, according to Foerst, is a fundamental motivator of humans' desire for artificial life.


Both fictional imaginings of a constructed mechanical companion species and actual robotics and AI research are driven by feelings of alienation, which Foerst ties to the theological notion of a lost connection with God.


Academic critics of Foerst argue that she has replicated a paradigm originally proposed by the German theologian and scholar Rudolf Otto in his book The Idea of the Holy (1917).


The experience of the divine, according to Otto, may be found in a moment of attraction and dread, which he calls the numinous.

Critics contend that Foerst used this concept when she claimed that humans sense attraction and dread in the figure of the robot.


Jai Krishna Ponnappan




See also: 


Embodiment, AI and; Nonhuman Rights and Personhood; Pathetic Fallacy; Robot Ethics; Spiritual Robots.


Further Reading:


Foerst, Anne. 2005. God in the Machine: What Robots Teach Us About Humanity and God. New York: Plume.

Geraci, Robert M. 2007. “Robots and the Sacred in Science Fiction: Theological Implications of Artificial Intelligence.” Zygon 42, no. 4 (December): 961–80.

Gerhart, Mary, and Allan Melvin Russell. 1998. “Cog Is to Us as We Are to God: A Response to Anne Foerst.” Zygon 33, no. 2: 263–69.

Groks Science Radio Show and Podcast with guest Anne Foerst. Audio available online at http://ia800303.us.archive.org/3/items/groks146/Groks122204_vbr.mp3. Transcript available at https://grokscience.wordpress.com/transcripts/anne-foerst/.

Reich, Helmut K. 1998. “Cog and God: A Response to Anne Foerst.” Zygon 33, no. 2: 255–62.



Artificial Intelligence - Who Is Martin Ford?


 


Martin Ford (active from 2009 until the present) is a futurist and author who focuses on artificial intelligence, automation, and the future of employment.


Rise of the Robots, his 2015 book, was named the Financial Times and McKinsey Business Book of the Year and became a New York Times bestseller.



Artificial intelligence, according to Ford, is the "next killer app" in the American economy.


Ford highlights in his writings that most economic sectors in the United States are becoming more mechanized.


  • The transportation business is being turned upside down by self-driving vehicles and trucks.
  • Self-checkout is transforming the retail industry.
  • The hotel business is being transformed by food preparation robots.


According to him, each of these developments will have a significant influence on the American workforce.



Not only will robots disrupt blue-collar labor, but they will also pose a danger to white-collar employees and professionals in fields such as medicine, media, and finance.


  • According to Ford, the majority of this work is similarly routine and can be automated.
  • In particular, middle management is in jeopardy.
  • According to Ford, there will be no link between human education and training and automation vulnerability in the future, just as worker productivity and remuneration have become unrelated phenomena.

Artificial intelligence will alter knowledge and information work as sophisticated algorithms, machine-learning tools, and clever virtual assistants are incorporated into operating systems, business software, and databases.


Ford's viewpoint has been strengthened by a 2013 study by Carl Benedikt Frey and Michael Osborne of the Oxford Martin Programme on the Impacts of Future Technology and the Oxford University Engineering Sciences Department.

Frey and Osborne's study, done with the assistance of machine-learning algorithms, indicated that about half of 702 categories of American employment may be automated in the next ten to twenty years.



Ford points out that while automation precipitates primary job losses in areas susceptible to computerization, it will also cause a secondary wave of job destruction in sectors sustained by those jobs, even if those sectors are themselves resistant to automation.


  • Ford suggests that capitalism will not go away in the process, but it will need to adapt if it is to survive.
  • Job losses will not be immediately staunched by new technology jobs in the highly automated future.

Ford has advocated a universal basic income—or “citizens dividend”—as one way to help American workers transition to the economy of the future.


  • Without consumers making wages, he asserts, there simply won’t be markets for the abundant goods and services that robots will produce.
  • And those displaced workers would no longer have access to homeownership or a college education.
  • A universal basic income could be guaranteed by placing value added taxes on automated industries.
  • The wealthy owners in these industries would agree to this tax out of necessity and survival.



Further financial incentives, he argues, should be targeted at individuals who are working to enhance human culture, values, and wisdom, engaged in earning new credentials or innovating outside the mainstream automated economy.


  • Political and sociocultural changes will be necessary as well.
  • Automation and artificial intelligence, he says, have exacerbated economic inequality and given extraordinary power to special interest groups in places like Silicon Valley.
  • He also suggests that Americans will need to rethink the purpose of employment as they are automated out of jobs.



Work, Ford believes, will not primarily be about earning a living, but rather about finding purpose and meaning and community.


  • Education will also need to change.
  • As the number of high-skill jobs is depleted, fewer and fewer highly educated students will find work after graduation.



Ford has been criticized for assuming that hardly any job will remain untouched by computerization and robotics.


  • It may be that some occupational categories are particularly resistant to automation, for instance, the visual and performing arts, counseling psychology, politics and governance, and teaching.
  • It may also be the case that human energies currently focused on manufacture and service will be replaced by work pursuits related to entrepreneurship, creativity, research, and innovation.



Ford speculates that it will not be possible for all of the employed Americans in the manufacturing and service economy to retool and move to what is likely to be a smaller, shallower pool of jobs.



In The Lights in the Tunnel: Automation, Accelerating Technology, and the Economy of the Future (2009), Ford introduced the metaphor of “lights in a tunnel” to describe consumer purchasing power in the mass market.


A billion individual consumers are represented as points of light that vary in intensity corresponding to purchasing power.

An overwhelming number of lights are of middle intensity, corresponding to the middle classes around the world.

  • Companies form the tunnel. Five billion other people, mostly poor, exist outside the tunnel.
  • In Ford’s view, automation technologies threaten to dim the lights and collapse the tunnel.
  • Automation poses dangers to markets, manufacturing, capitalist economics, and national security.



In Rise of the Robots: Technology and the Threat of a Jobless Future (2015), Ford focused on the differences between the current wave of automation and prior waves.


  • He also commented on disruptive effects of information technology in higher education, white-collar jobs, and the health-care industry.
  • He made a case for a new economic paradigm grounded in the basic income, incentive structures for risk-taking, and environmental sensitivity, and he described scenarios where inaction might lead to economic catastrophe or techno-feudalism.


Ford’s book Architects of Intelligence: The Truth about AI from the People Building It (2018) includes interviews and conversations with two dozen leading artificial intelligence researchers and entrepreneurs.


  • The focus of the book is the future of artificial general intelligence and predictions about how and when human-level machine intelligence will be achieved.



Ford holds an undergraduate degree in Computer Engineering from the University of Michigan.

He earned an MBA from the UCLA Anderson School of Management.

He is the founder and chief executive officer of the software development company Solution-Soft located in Santa Clara, California.



Jai Krishna Ponnappan





See also: 


Brynjolfsson, Erik; Workplace Automation.


Further Reading:


Ford, Martin. 2009. The Lights in the Tunnel: Automation, Accelerating Technology, and the Economy of the Future. Charleston, SC: Acculant.

Ford, Martin. 2013. “Could Artificial Intelligence Create an Unemployment Crisis?” Communications of the ACM 56, no. 7 (July): 37–39.

Ford, Martin. 2016. Rise of the Robots: Technology and the Threat of a Jobless Future. New York: Basic Books.

Ford, Martin. 2018. Architects of Intelligence: The Truth about AI from the People Building It. Birmingham, UK: Packt Publishing.



ISRO's Chandrayaan-3 Lunar Mission Launch Date






    The Indian Space Research Organization (ISRO), which has a busy year ahead of it, has made significant progress on Chandrayaan-3, the country's third moon mission. 

    The team is getting closer to integrated testing after successfully completing numerous associated hardware and special tests. 





    Chandrayaan-3 has undergone a number of evaluations, enhancements, and strengthening. 


    Some of them are based on the problems already experienced on Chandrayaan-2. However, the problems discovered thus far may not be the only ones.

    The open questions are many, and many of them must be anticipated, which may require further revisions. The hardware is also being developed.

    While Somanath is set to undertake a formal evaluation of the project later this month, Jitendra Singh, minister of state for space, has said that Chandrayaan-3 is being developed based on the lessons learned from Chandrayaan-2 and proposals offered by national-level specialists. 





    "The Chandrayaan-3 launch date is set for August 2022." ~ Somanath, ISRO Chairman.


    Testing and construction of the lander and other equipment that will be part of Chandrayaan-3 are proceeding at several ISRO centers, according to many scientists involved with the project, while design alterations are nearing completion. 




    ISRO conceived Chandrayaan-3 after failing to soft-land the Vikram lander on the lunar surface, even though it still had a fully operational Chandrayaan-2 orbiter circling the Moon.

    While the mission was originally scheduled for late 2020 or early 2021, the Department of Space (DoS) announced earlier this year that the launch will be delayed until 2022 due to Covid-19. 


    "...Because we need a certain launch window, we must work toward a deadline, failing which the launch will be postponed until the next year." 

    "However, high management has made it plain that all stages in the procedure must be completed before the mission can be launched," a source added. 





    "Chandrayaan-3 design adjustments, integration, and testing have witnessed significant progress; by the middle of the year, the mission may be launched," stated then-ISRO chairman K. Sivan in his New Year speech on January 3.







    Chandrayaan-3 would have significant design differences from the previous mission.





    1. The most notable of these is the decision to remove the fifth engine, which was added at the last minute to Vikram (Chandrayaan-2's lander).
    2. The lander for this mission will have only four engines, and the supervising committee has proposed a slight alteration to the lander's legs, as well as the addition of a laser Doppler velocimeter (LDV) for improved velocity measurement during landing.
    3. Upgrades in software and algorithms, leg strengthening, and improved power and communication systems are among the suggested changes for Chandrayaan-3, addressing deficiencies identified in Chandrayaan-2.




    The fact that one GSLV-Mk3 mission is named in the Union Budget 2022–23, among other things, is seen as a source of hope for Chandrayaan-3.


    "The realization of Chandrayaan-3 is in process, based on the lessons learned from Chandrayaan-2 and proposals offered by national level specialists." 




    Much of the associated hardware and many of the special tests have been completed satisfactorily.

    Mr. Singh responded to Ravneet Singh and Subburaman Thirunavukkarasar's question about the mission's delay by saying, "The launch is slated for August 2022." 

    The Minister attributed the delays to "pandemic-related" disruptions and project "reprioritization."


    The ISRO's (Indian Space Research Organization) most recent big satellite launches were the Earth Observation Satellite-3 in August and the Amazonia satellite in February. 




    • Up till December, the ISRO has scheduled 19 missions, including eight launch vehicle missions, seven spacecraft missions, and four technology demonstration flights. 
    • This financial year, ISRO has been given a budget of Rs 13,700 crore, approximately Rs 1,000 crore higher than the previous year. 
    • Despite the fact that multiple missions are scheduled this year, the projected expenditure is lower than the Rs 13,949 crore originally allocated last year. 
    • Minister Jitendra Singh informed the Lok Sabha on Wednesday that India wants to launch the Chandrayaan-3 mission in August. 



    Despite the fact that the government previously declared that the mission will take place in 2022, this is the first time a particular month has been confirmed. 



    • The Chandrayaan-3 mission is a follow-up to Chandrayaan-2, which attempted to land a rover near the lunar south pole in 2019. 
    • It was launched on the GSLV-Mk 3, the country's most powerful geosynchronous launch vehicle. 



    However, instead of a safe landing, lander Vikram crashed on the moon's surface on September 7, 2019, preventing rover Pragyaan from reaching the surface. 



    It would have been the first time a nation landed a rover on the moon in its first attempt if the mission had been successful. 


    After Chandrayaan-2's soft-landing attempt failed, despite a successful orbital insertion, because of a last-minute malfunction in the landing guidance software, another lunar mission to demonstrate soft landing was proposed.




    Chandrayaan-3 will be a mission duplicate of Chandrayaan-2.



    The exception is that Chandrayaan-3 will include only a lander and a rover comparable to Chandrayaan-2's.


    • It will be devoid of an orbiter. In August 2022, the spacecraft will be launched. 
    • The rocket for the spacecraft's launch has been deemed ready and is awaiting the module. 



    ISRO launched Chandrayaan-2 with a GSLV Mk III launch vehicle, which included an orbiter, a lander, and a rover, in the second phase of the Chandrayaan mission to test soft landing on the Moon. 


    Earlier reports suggested that India and Japan would collaborate on a mission to the lunar south pole, with India supplying the lander and Japan providing both the rocket and the rover.

    Site sampling and lunar night survival technology may be included in the expedition. 


    The failure of the Vikram lander led to the development of a new mission to show the landing capabilities required for the Lunar Polar Exploration Mission, which is planned for 2024 in collaboration with Japan. 


    • The lander for Chandrayaan-3 will have only four throttleable engines, as opposed to the five 800-newton engines on Chandrayaan-2's Vikram, one of which was centrally mounted with a fixed thrust. 
    • A laser Doppler velocimeter (LDV) will also be installed on the Chandrayaan-3 lander. 
    • ISRO requested initial funding of Rs 75 crore (US$10 million) for the project in December 2019, of which Rs 60 crore (US$8.0 million) was for machinery, equipment, and other capital expenditure, and the remaining Rs 15 crore (US$2.0 million) for revenue expenditure. 

    ISRO chairman K. Sivan confirmed the project's existence and estimated the cost at approximately Rs 615 crore (US$82 million).



    ~ Jai Krishna Ponnappan.








    Artificial Intelligence - What Are Non-Player Characters And Emergent Gameplay?

     


    Emergent gameplay occurs when a player in a video game encounters complex scenarios that arise from their interactions with the game world and its characters.


    Players may fully immerse themselves in an intricate and realistic game environment and feel the consequences of their choices in today's video games.

    Players may personalize and build their character and story.

    In the Deus Ex series (2000), for example, one of the first emergent gameplay systems, players take on the role of a cyborg in a dystopian metropolis.

    They may change the physical appearance of their character as well as their skill sets, missions, and affiliations.

    Players may choose between militarized adaptations that allow for more aggressive play and stealthier options.

    The plot and experience are altered by the choices made on how to customize and play, resulting in unique challenges and results for each player.


    When players interact with other characters or items, emergent gameplay guarantees that the game environment reacts.



    Because of the many options, the story unfolds in surprising ways as the game world changes.

    Specific outcomes are not predetermined by the designer, and emergent gameplay can even take advantage of game flaws to generate actions in the game world, which some consider to be a form of emergence.

    Artificial intelligence has become more popular among game creators in order to have the game environment respond to player actions in a timely manner.

    Artificial intelligence aids the behavior of video characters and their interactions via the use of algorithms, basic rule-based forms that help in generating the game environment in sophisticated ways.

    "Game AI" refers to the usage of artificial intelligence in games.

    The most common use of AI algorithms is to construct non-player characters (NPCs), characters in the game world with whom the player interacts but whom the player does not control.


    In its most basic form, AI will use pre-scripted actions for the characters, who will then concentrate on reacting to certain events.


    Pre-scripted character behaviors performed by AI are fairly rudimentary, and NPCs are meant to respond to certain "case" events.

    The NPC will evaluate its current situation before responding in a range determined by the AI algorithm.

    Pac-Man (1980) is a good early and basic illustration of this.

    Pac-Man is controlled by the player through a labyrinth while being pursued by a variety of ghosts, who are the game's non-player characters.


    Players could only interact with ghosts (NPCs) by moving about; ghosts had limited replies and their own AI-programmed pre-scripted movement.




    The AI's planned reaction would occur when the ghost ran into a wall.

    It would then roll an AI-created die to determine whether the NPC would turn toward or away from the player.

    If the NPC decided to go after the player, the AI pre-scripted program would then detect the player's location and turn the ghost toward them.

    If the NPC decided not to go after the player, it would turn in an opposite or a random direction.

    This NPC interaction is very simple and limited; however, this was an early step in AI providing emergent gameplay.
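
    A minimal sketch of this kind of pre-scripted rule, written here in Python purely for illustration (the function name, probabilities, and grid coordinates are assumptions, not the original arcade code), might look like the following: the ghost only reconsiders its direction when it hits a wall, rolls a virtual die, and either turns toward the player's current position or wanders off at random.

        import random

        DIRECTIONS = ["up", "down", "left", "right"]

        def ghost_next_direction(ghost_pos, player_pos, hit_wall, chase_probability=0.5):
            """Pre-scripted ghost rule: only reconsider direction after hitting a wall."""
            if not hit_wall:
                return None  # keep moving in the current direction
            if random.random() < chase_probability:
                # Chase: turn toward the player's current location.
                dx = player_pos[0] - ghost_pos[0]
                dy = player_pos[1] - ghost_pos[1]
                if abs(dx) >= abs(dy):
                    return "right" if dx > 0 else "left"
                return "down" if dy > 0 else "up"
            # Otherwise wander off in a random direction.
            return random.choice(DIRECTIONS)

        # Example: a ghost at (3, 4) hits a wall while the player is at (7, 4).
        print(ghost_next_direction((3, 4), (7, 4), hit_wall=True))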



    Contemporary games provide a wider variety of options and a much larger set of possible interactions for the player.


    Players in contemporary role-playing games (RPGs) are given an incredibly high number of potential options, as exemplified by Fallout 3 (2008) and its sequels.

    Fallout is a role-playing game, where the player takes on the role of a survivor in a post-apocalyptic America.

    The story narrative gives the player a goal with no direction; as a result, the player is given the freedom to play as they see fit.

    The player can punch every NPC, or they can talk to them instead.

    In addition to this variety of actions by the player, there are also a variety of NPCs controlled through AI.

    Some of the NPCs are key NPCs, which means they have their own unique scripted dialogue and responses.

    This gives them a personality and, through the use of AI, a complexity that makes the game environment feel more real.


    When talking to key NPCs, the player is given options for what to say, and the Key NPCs will have their own unique responses.


    This differs from the background character NPCs, as the key NPCs are supposed to respond in a way that emulates interaction with a real personality.

    These are still pre-scripted responses to the player, but the NPC responses are emergent based on the possible combination of the interaction.

    As the player makes decisions, the NPC will examine each decision and decide how to respond in accordance with its script.

    The NPCs that the players help or hurt and the resulting interactions shape the game world.
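
    A hypothetical sketch of how such scripted but emergent responses can be organized is shown below; the dialogue lines, topics, and class name are invented for illustration and are not taken from Fallout 3. Every reply is pre-written, but which reply the player sees emerges from how they have treated the NPC so far.

        # Every reply is pre-scripted; the NPC's state selects among them.
        DIALOGUE_SCRIPT = {
            "greet": {
                "friendly": "Good to see a friendly face out here.",
                "hostile": "Keep your distance, stranger.",
            },
            "ask_for_help": {
                "friendly": "Sure, I can spare a few supplies.",
                "hostile": "After what you did? Not a chance.",
            },
        }

        class KeyNPC:
            def __init__(self):
                self.disposition = "friendly"  # updated by the player's actions

            def react_to(self, player_action):
                if player_action == "punch":
                    self.disposition = "hostile"
                elif player_action == "help":
                    self.disposition = "friendly"

            def respond(self, topic):
                return DIALOGUE_SCRIPT[topic][self.disposition]

        npc = KeyNPC()
        npc.react_to("punch")
        print(npc.respond("ask_for_help"))  # -> "After what you did? Not a chance."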

    Game AI can emulate personalities and present emergent gameplay in a narrative setting; however, AI is also involved in challenging the player in difficulty settings.


    A variety of pre-scripted AI can still be used to create difficulty.

    Pre-scripted AI is often made to produce suboptimal decisions for enemy NPCs in games where players fight.

    This helps make the game easier and also makes the NPCs seem more human.

    Suboptimal pre-scripted decisions make the enemy NPCs easier to handle.

    Optimal decisions however make the opponents far more difficult to handle.

    This can be seen in contemporary games like Tom Clancy’s The Division (2016), where players fight multiple NPCs.

    The enemy NPCs range from angry rioters to fully trained paramilitary units.

    The rioter NPCs offer an easier challenge as they are not trained in combat and make suboptimal decisions while fighting the player.

    The military trained NPCs are designed to have more optimal decision-making AI capabilities in order to increase the difficulty for the player fighting them.
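
    One simple way to realize this difference, sketched below under the assumption that candidate actions can be given numeric scores (the action names, scores, and skill values are invented), is to let low-skill NPCs deliberately pick weaker options part of the time, while high-skill NPCs almost always take the best-scoring action.

        import random

        def choose_action(scored_actions, skill):
            """scored_actions: list of (action, score); skill in [0, 1].
            Rioter-like NPCs get a low skill value, trained units a high one."""
            ranked = sorted(scored_actions, key=lambda pair: pair[1], reverse=True)
            if random.random() < skill or len(ranked) == 1:
                return ranked[0][0]              # optimal decision
            return random.choice(ranked[1:])[0]  # deliberately suboptimal decision

        options = [("flank_player", 0.9), ("take_cover", 0.6), ("stand_in_open", 0.1)]
        print(choose_action(options, skill=0.2))   # rioter: often suboptimal
        print(choose_action(options, skill=0.95))  # paramilitary unit: almost always optimal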



    Emergent gameplay has evolved to its full potential through use of adaptive AI.


    As with pre-scripted AI, the character examines a variety of variables and plans an action.

    However, unlike pre-scripted AI, which follows direct decisions, the adaptive AI character will make its own decisions.

    This can be done through computer-controlled learning.


    AI-created NPCs follow rules of interactions with the players.


    As players go through the game, the player interactions are analyzed, and some AI judgments become more weighted than others.

    This is done in order to provide distinct player experiences.

    Various player behaviors are actively examined, and modifications are made by the AI when designing future challenges.

    The purpose of the adaptive AI is to challenge the players to a degree that the game is fun while not being too easy or too challenging.

    Difficulty may still be changed if players seek a different challenge.

    This may be observed in the Left 4 Dead game (2008) series’ AI Director.

    Players navigate through a level, killing zombies and gathering resources in order to live.


    The AI Director chooses which zombies to spawn, where they will spawn, and what supplies will be spawned.

    The choice to spawn them is not made at random; rather, it is based on how well the players performed throughout the level.

    The AI Director makes its own decisions about how to respond; as a result, the AI Director adapts to the level's player success.

    The AI Director gives fewer resources and spawns more adversaries as the difficulty level rises.
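
    A toy version of such a director is sketched below; the performance measure, thresholds, and numbers are assumptions made for illustration rather than a description of Valve's actual AI Director. The point is only that the spawning decisions adapt to how well the players have been doing.

        class ToyDirector:
            """Toy adaptive director: tunes spawns to recent player performance."""

            def __init__(self, base_enemies=10, base_supplies=6):
                self.base_enemies = base_enemies
                self.base_supplies = base_supplies
                self.intensity = 1.0  # grows when players do well, shrinks when they struggle

            def observe_level(self, average_health, deaths):
                # Collapse observations into a rough performance score in [0, 1].
                performance = max(0.0, min(1.0, average_health - 0.2 * deaths))
                self.intensity += 0.4 * (performance - 0.5)
                self.intensity = max(0.5, min(2.0, self.intensity))

            def plan_next_section(self):
                return {
                    "enemies": round(self.base_enemies * self.intensity),
                    "supplies": round(self.base_supplies / self.intensity),
                }

        director = ToyDirector()
        director.observe_level(average_health=0.9, deaths=0)
        print(director.plan_next_section())  # strong team: more enemies, fewer supplies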


    Changes in emergent gameplay are influenced by advancements in simulation and game world design.


    As virtual reality technology develops, new technologies will continue to help in this progress.

    Virtual reality games provide an even more immersive gaming experience.

    Players may use their own hands and eyes to interact with the environment.

    Computers are growing more powerful, allowing for more realistic pictures and animations to be rendered.


    Adaptive AI demonstrates the capability of really autonomous decision-making, resulting in a truly participatory gaming experience.


    Game makers are continuing to build more immersive environments as AI improves to provide more lifelike behavior.

    These cutting-edge technologies and new AI will elevate emergent gameplay to new heights.

    Artificial intelligence has emerged as a crucial part of the video game industry's efforts to develop realistic and engrossing games.



    Jai Krishna Ponnappan









    Artificial Intelligence - What Is AI Embodiment Or Embodied Artificial Intelligence?

     



    Embodied Artificial Intelligence is a method for developing AI that is both theoretical and practical.

    It is difficult to fully trace its history due to its beginnings in different fields.

    Rodney Brooks's "Intelligence Without Representation," written in 1987 and published in 1991, is one claimed point of origin for the concept.


    Embodied AI is still a very new area, with some of the first references to it dating back to the early 2000s.


    Rather than focusing on either modeling the brain (connectionism/neural networks) or linguistic-level conceptual encoding (GOFAI, or the Physical Symbol System Hypothesis), the embodied approach to AI considers the mind (or intelligent behavior) to emerge from interaction between the body and the world.

    There are hundreds of different and sometimes contradictory approaches to interpreting the role of the body in cognition, the majority of which utilize the term "embodied."

    The idea that the physical body's shape is related to the structure and content of the mind is shared by all of these viewpoints.


    Despite the success of neural network or GOFAI (Good Old-Fashioned Artificial Intelligence, or classic symbolic artificial intelligence) techniques in building narrow expert systems, the embodied approach contends that general artificial intelligence cannot be accomplished in code alone.




    For example, in a tiny robot with four motors, each driving a separate wheel, and programming that directs the robot to avoid obstacles, the same code might create dramatically different observable behaviors if the wheels were relocated to various areas of the body or replaced with articulated legs.

    This is a basic explanation of why the shape of a body must be taken into account when designing robotic systems, and why embodied AI (rather than merely robotics) considers the dynamic interaction between the body and the surroundings to be the source of sometimes surprising emergent behaviors.
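
    The point can be made concrete with a deliberately body-agnostic controller; the following is an illustrative sketch, not code from any robot discussed here. Nothing in the function refers to where the sensors or motors are mounted, so the observable behavior that emerges depends entirely on the body the code happens to be running in.

        def avoid_obstacles(left_distance, right_distance, cruise_speed=1.0):
            """Each motor's speed is tied to the distance reported on the opposite side,
            so the robot steers away from whichever obstacle is nearer. The same rule
            could drive wheels at the corners, wheels on one edge, or articulated legs."""
            left_motor = cruise_speed * min(1.0, right_distance)
            right_motor = cruise_speed * min(1.0, left_distance)
            return left_motor, right_motor

        # Obstacle close on the left: the left motor keeps speed, the right one slows,
        # so a differential-drive body turns right; a body with a different layout may
        # do something entirely different with the very same commands.
        print(avoid_obstacles(left_distance=0.2, right_distance=1.5))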


    The case of passive dynamic walkers is an excellent illustration of this approach.

    The passive dynamic walker is a bipedal walking model that depends on the dynamic interaction of the leg design and the environment's structure.

    The gait is not generated by an active control system.

    The walker is propelled forward by gravity, inertia, and the shapes of its feet and legs, together with the incline it walks down.


    This strategy is based on the biological concept of stigmergy.

    • Stigmergy is based on the idea that signs or marks left by actions in the environment inspire future actions.




    AN APPROACH INFORMED BY ENGINEERING.



    Embodied AI is influenced by a variety of domains; engineering and philosophy are two frequent sources of influence.


    Rodney Brooks proposed the "subsumption architecture" in 1986, which is a method of generating complex behaviors by arranging lower-level layers of the system to interact with the environment in prioritized ways, tightly coupling perception and action, and attempting to eliminate the higher-level processing of other models.


    For example, the robot Genghis, now held by the Smithsonian, was created to traverse rugged terrain, a capability that was very challenging to design and engineer in other robots at the time.


    The success of this approach was primarily due to the design choice to divide the processing of various motors and sensors throughout the network rather than trying higher-level system integration to create a full representational model of the robot and its surroundings.

    To put it another way, there was no central processing region where all of the robot's parts sought to integrate data for the system.
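
    A highly simplified sketch of subsumption-style layering, far simpler than Brooks's actual architecture and using invented sensor names, is shown below. Each layer maps sensor readings directly to motor commands, and a higher-priority layer subsumes the output of the layers beneath it; there is no central world model.

        def wander(sensors):
            """Lowest layer: just keep moving forward."""
            return {"left_motor": 1.0, "right_motor": 1.0}

        def avoid(sensors):
            """Higher layer: overrides wandering when an obstacle is near."""
            if sensors["front_distance"] < 0.3:
                return {"left_motor": -0.5, "right_motor": 0.5}  # spin away
            return None  # no opinion; defer to lower layers

        # Layers listed from highest to lowest priority.
        LAYERS = [avoid, wander]

        def control_step(sensors):
            for layer in LAYERS:
                command = layer(sensors)
                if command is not None:  # the first layer with an opinion subsumes the rest
                    return command

        print(control_step({"front_distance": 0.2}))  # avoidance takes over
        print(control_step({"front_distance": 2.0}))  # wandering continues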


    Cog, a humanoid torso built by the MIT Humanoid Robotics Group in the 1990s, was an early effort at embodied AI.


    Cog was created to learn about the world by interacting with it physically.

    Cog, for example, could be seen learning how to apply force and weight to a drum while holding drumsticks for the first time, or learning to gauge the weight of a ball once it was placed in its hand.

    These early notions of letting the body conduct the learning are still at the heart of the embodied AI initiative.


    The Swiss Robots, created and constructed in the AI Lab at Zurich University, are perhaps one of the most prominent instances of embodied emergent intelligence.



    The Swiss Robots are small, simple robots with two motors (one on each side) and two infrared sensors (one on each side).

    The only high-level instruction in their programming was that if a sensor detected an object on one side, the robot should turn in the other direction.

    However, when combined with a certain body form and sensor location, this resulted in what seemed to be high-level cleaning or clustering behavior in certain situations.

    A similar strategy is used in many other robotics projects.
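
    The entire control rule of a Swiss-Robot-style agent can be sketched in a few lines (an illustrative reconstruction, not the original code). The clustering behavior appears nowhere in the rule itself; it emerges only when this rule runs on a body whose infrared sensors point outward at an angle, so that a block squarely in front may trigger neither sensor and simply gets pushed along until another block is sensed to one side.

        def swiss_robot_step(left_ir_triggered, right_ir_triggered, speed=1.0):
            """If a sensor detects something on one side, veer toward the other side;
            otherwise drive straight ahead. Returns (left_motor, right_motor)."""
            if left_ir_triggered:
                return speed, 0.2 * speed   # veer right, away from the object on the left
            if right_ir_triggered:
                return 0.2 * speed, speed   # veer left, away from the object on the right
            return speed, speed             # nothing sensed: keep pushing forward

        print(swiss_robot_step(left_ir_triggered=True, right_ir_triggered=False))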


    Shakey the Robot, developed by SRI International in the 1960s, is frequently credited as being the first mobile robot with thinking ability.


    Shakey was clumsy and sluggish, and it is often portrayed as the polar opposite of what embodied AI is attempting to achieve by moving away from higher-level thinking and processing.

    However, even in 1968, SRI's approach to embodiment was a clear forerunner of Brooks', since they were the first to assert that the finest reservoir of knowledge about the actual world is the real world itself.

    This notion, that the best model of the world is the world itself, has become a rallying cry against higher-level representation in embodied AI.

    Earlier robots, in contrast to the embodied AI software, were mostly preprogrammed and did not actively interface with their environs in the manner that this method does.


    Honda's ASIMO robot, for example, isn't an excellent illustration of embodied AI; instead, it's representative of other and older approaches to robotics.


    Work in embodied AI is exploding right now, with Boston Dynamics' robots serving as excellent examples (particularly the non-humanoid forms).

    Embodied AI is influenced by a number of philosophical ideas.

    Rodney Brooks, a roboticist, explicitly rejects philosophical influence on his technical concerns in a 1991 discussion of his subsumption architecture, while admitting that his arguments mirror Heidegger's.

    In several essential design aspects, his ideas match those of the phenomenologist Merleau-Ponty, demonstrating how earlier philosophical issues at least reflect, and likely shape, much of the design work in contemplating embodied AI.

    Because its methodology experiments toward an understanding of how awareness and intelligent behavior originate, questions that are themselves highly philosophical, this work in embodied robotics is deeply philosophical.

    Other clearly philosophical themes may be found in a few embodied AI projects as well.

    Rolf Pfeifer and Josh Bongard, for example, often draw on philosophical (and psychological) literature in their work, examining how these ideas intersect with their own approaches to developing intelligent machines.


    They discuss how these ideas may (and frequently do not) guide the development of embodied AI.


    This covers a broad spectrum of philosophical inspirations, such as George Lakoff and Mark Johnson's conceptual metaphor work, Shaun Gallagher's (2005) body image and phenomenology work, and even John Dewey's early American pragmatism.

    It's difficult to say how often philosophical concerns drive engineering concerns, but it's clear that the philosophy of embodiment is probably the most robust of the various disciplines within cognitive science to have done embodiment work, owing to the fact that theorizing took place long before the tools and technologies were available to actually realize the machines being imagined.

    This suggests that for roboticists interested in the strong AI project, that is, broad intellectual capacities and functions that mimic the human brain, there are likely still unexplored resources here.


    Jai Krishna Ponnappan




    See also: 


    Brooks, Rodney; Distributed and Swarm Intelligence; General and Narrow AI.


    Further Reading:


    Brooks, Rodney. 1986. “A Robust Layered Control System for a Mobile Robot.” IEEE Journal of Robotics and Automation 2, no. 1 (March): 14–23.

    Brooks, Rodney. 1990. “Elephants Don’t Play Chess.” Robotics and Autonomous Systems 6, no. 1–2 (June): 3–15.

    Brooks, Rodney. 1991. “Intelligence Without Representation.” Artificial Intelligence Journal 47: 139–60.

    Dennett, Daniel C. 1997. “Cog as a Thought Experiment.” Robotics and Autonomous Systems 20: 251–56.

    Gallagher, Shaun. 2005. How the Body Shapes the Mind. Oxford: Oxford University Press.

    Pfeifer, Rolf, and Josh Bongard. 2007. How the Body Shapes the Way We Think: A New View of Intelligence. Cambridge, MA: MIT Press.



