Artificial Intelligence - Machine Translation.

  



Machine translation is the process of using computer technology to automatically translate human languages.

From the 1950s through the 1970s, the US government viewed machine translation as a valuable instrument in diplomatic efforts to contain communism in the Soviet Union and the People's Republic of China.

More recently, machine translation has become both a standalone commercial offering and a tool for marketing goods and services in countries where language barriers would otherwise make them unavailable.

Machine translation is also one of the litmus tests for artificial intelligence progress.

Research in this area of artificial intelligence has advanced along three broad paradigms.

Rule-based expert systems and statistical approaches to machine translation are the earliest.

Neural machine translation and example-based machine translation (also called translation by analogy) are the more contemporary paradigms.

Within computational linguistics, automated language translation is now regarded as an academic specialization.

While there are multiple possible roots for the present discipline of machine translation, the notion of automated translation as an academic topic derives from a 1947 communication between crystallographer Andrew D. Booth of Birkbeck College (London) and Warren Weaver of the Rockefeller Foundation.

"I have a manuscript in front of me that is written in Russian, but I am going to assume that it is truly written in English and that it has been coded in some bizarre symbols," Weaver said in a preserved note to colleagues in 1949.

"To access the information contained in the text, all I have to do is peel away the code" (Warren Weaver, as cited in Arnold et al. 1994, 13).

Most commercial machine translation systems have a translation engine at their core.

The user's sentences are parsed several times by translation engines, each time applying algorithmic rules to transform the source sentence into the desired target language.

There are rules for both word-based and phrase-based transformation.

The parser's initial pass generally replaces words using a bilingual dictionary.

Additional processing rounds of the phrases use comparative grammatical rules that consider sentence structure, verb form, and suffixes.
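As a purely illustrative sketch of such a pipeline, the Python fragment below performs a dictionary-substitution pass followed by a single comparative grammar rule (adjective-noun reordering). The toy dictionary, the rule, and the English-to-Spanish pair are invented for illustration; no commercial engine works from tables this small.

```python
# A toy rule-based translation pass: dictionary substitution plus one
# structural rule (adjective-noun reordering), as described above. The
# dictionary and rule are invented placeholders, not a real engine's data.

DICTIONARY = {"the": "el", "red": "rojo", "house": "casa", "is": "es",
              "big": "grande"}
ADJECTIVES = {"rojo", "grande"}

def translate(sentence):
    # Pass 1: word-for-word substitution via the bilingual dictionary.
    words = [DICTIONARY.get(w, w) for w in sentence.lower().split()]
    # Pass 2: a comparative grammar rule -- in Spanish, most adjectives
    # follow the noun, so swap adjective-noun pairs produced by pass 1.
    i = 0
    while i < len(words) - 1:
        if words[i] in ADJECTIVES and words[i + 1] not in ADJECTIVES:
            words[i], words[i + 1] = words[i + 1], words[i]
            i += 2
        else:
            i += 1
    return " ".join(words)

print(translate("the red house is big"))   # "el casa rojo es grande"
```

Even this tiny example gets gender agreement wrong ("el casa rojo" rather than "la casa roja"), which hints at why real engines need many further rounds of grammatical rules and why rule-based output can degenerate into the "word salad" described below.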

Translation engines are evaluated on two measures: intelligibility and accuracy.

Machine translation isn't perfect.

Poor grammar in the source text, lexical and structural differences between languages, ambiguous usage, multiple meanings of words and idioms, and local variations in usage can all lead to "word salad" translations.

In 1959–60, MIT philosopher, linguist, and mathematician Yehoshua Bar-Hillel issued the harshest early criticism of machine translation.

In principle, according to Bar-Hillel, near-perfect machine translation is impossible.

He used the following short passage to demonstrate the issue: John was looking for his toy box.

He eventually found it.

The box was in the pen.

John was overjoyed.

The word "pen" poses a problem in this statement since it might refer to a child's playpen or a writing ballpoint pen.

Knowing the difference necessitates a broad understanding of the world, which a computer lacks.

When the National Academy of Sciences Automatic Language Processing Advisory Committee (ALPAC) released an extremely damaging report on the poor quality and high cost of machine translation in 1966, the initial rounds of US government funding dried up.

ALPAC concluded that the country already had an abundant supply of human translators capable of producing significantly better translations.

Many machine translation experts slammed the ALPAC report, pointing to machine efficiency in the preparation of first drafts and the successful rollout of a few machine translation systems.

In the 1960s and 1970s, there were only a few machine translation research groups.

The TAUM group in Canada, the Mel'cuk and Apresian groups in the Soviet Union, the GETA group in France, and the German Saarbrücken SUSY group were among the biggest.

SYSTRAN (System Translation), a private corporation financed by government contracts founded by Hungarian-born linguist and computer scientist Peter Toma, was the main supplier of automated translation technology and services in the United States.

In the 1950s, Toma became interested in machine translation while studying at the California Institute of Technology.

Around 1960, Toma moved to Georgetown University and started collaborating with other machine translation experts.

The Georgetown machine translation project and SYSTRAN's initial 1969 contract with the United States Air Force were both devoted to translating Russian into English.

That same year, at Wright-Patterson Air Force Base, the company's first machine translation programs were tested.

SYSTRAN software was used by the National Aeronautics and Space Administration (NASA) as a translation help during the Apollo-Soyuz Test Project in 1974 and 1975.

Shortly afterward, SYSTRAN was awarded a contract by the Commission of the European Communities (the predecessor of today's European Commission, or EC) to provide automated translation services, and the company has continued working with the EC ever since.

By the 1990s, the EC had seventeen different machine translation systems focused on different language pairs in use for internal communications.

In 1992, SYSTRAN began migrating its mainframe software to personal computers.

SYSTRAN Professional Premium for Windows was launched in 1995 by the company.

SYSTRAN continues to be the industry leader in machine translation.

Other notable systems include METEO, in use at the Canadian Meteorological Center in Montreal since 1977 to translate weather bulletins from English into French; ALPS, developed at Brigham Young University for Bible translation; SPANAM, the Pan American Health Organization's Spanish-to-English automatic translation system; and METAL, developed at the University of Texas at Austin.

In the late 1990s, machine translation became more readily accessible to the general public through web browsers.

Babel Fish, a web-based application created by a group of researchers at Digital Equipment Corporation (DEC) using SYSTRAN machine translation technology, was one of the earliest online language translation services.

The technology supported thirty-six translation pairs among thirteen languages.

Babel Fish began as an AltaVista web search engine tool before being sold to Yahoo! and then Microsoft.

The majority of online translation services still use rule-based and statistical machine translation.

Around 2016, SYSTRAN, Microsoft Translator, and Google Translate made the switch to neural machine translation.

Google Translate supports 103 languages.

Neural machine translation relies on predictive deep learning algorithms and artificial neural networks, connectionist systems modeled loosely on biological brains.

Machine translation based on neural networks is achieved in two steps.

The translation engine models its interpretation in the first phase based on the context of each source word within the entire sentence.

The artificial neural network then translates the entire word model into the target language in the second phase.

Put simply, the engine predicts the probability of word sequences and combinations within whole sentences, producing a fully integrated translation model.

The underlying algorithms use statistical models to learn language rules.
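As a rough illustration of this two-phase idea, the sketch below uses PyTorch to build a toy encoder-decoder: the encoder condenses the source sentence into a context representation, and the decoder assigns probabilities to target-word sequences conditioned on it. Every class name, dimension, and vocabulary size here is an invented placeholder; this is not the architecture of Google Translate, SYSTRAN, or OpenNMT.

```python
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    """A minimal encoder-decoder: encode the source sentence into a context
    vector, then predict target words conditioned on that context."""
    def __init__(self, src_vocab, tgt_vocab, dim=64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Phase 1: build a representation of the whole source sentence.
        _, context = self.encoder(self.src_emb(src_ids))
        # Phase 2: predict each target word given the context and the
        # previously seen target words (teacher forcing here).
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), context)
        return self.out(dec_out)   # (batch, tgt_len, tgt_vocab) logits

# Toy usage: random "sentences" of word ids, just to show the shapes.
model = TinySeq2Seq(src_vocab=1000, tgt_vocab=1200)
src = torch.randint(0, 1000, (2, 7))   # batch of 2 source sentences, 7 tokens
tgt = torch.randint(0, 1200, (2, 9))   # corresponding target prefixes
probs = model(src, tgt).softmax(dim=-1)  # per-position word probabilities
print(probs.shape)                       # torch.Size([2, 9, 1200])
```

A production system would train such a model on millions of sentence pairs and decode with beam search; this fragment only shows how word-sequence probabilities arise from the encoder-decoder structure.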

The Harvard SEAS natural language processing group, in collaboration with SYSTRAN, has launched OpenNMT, an open-source neural machine translation system.



Jai Krishna Ponnappan





See also: 


Cheng, Lili; Natural Language Processing and Speech Understanding.



Further Reading:


Arnold, Doug J., Lorna Balkan, R. Lee Humphreys, Siety Meijer, and Louisa Sadler. 1994. Machine Translation: An Introductory Guide. Manchester and Oxford: NCC Blackwell.

Bar-Hillel, Yehoshua. 1960. “The Present Status of Automatic Translation of Languages.” Advances in Computers 1: 91–163.

Garvin, Paul L. 1967. “Machine Translation: Fact or Fancy?” Datamation 13, no. 4: 29–31.

Hutchins, W. John, ed. 2000. Early Years in Machine Translation: Memoirs and Biographies of Pioneers. Philadelphia: John Benjamins.

Locke, William Nash, and Andrew Donald Booth, eds. 1955. Machine Translation of Languages. New York: Wiley.

Yngve, Victor H. 1964. “Implications of Mechanical Translation Research.” Proceedings of the American Philosophical Society 108 (August): 275–81.



Artificial Intelligence - What Is The Mac Hack IV Program?

 




Mac Hack IV, a 1967 chess program written by Richard Greenblatt, became famous as the first computer chess program to compete in a human chess tournament and to play credibly against people, earning a USCF rating of 1,400 to 1,500.

Greenblatt's software, written in the macro assembly language MIDAS, operated on a DEC PDP-6 computer with a clock speed of 200 kilohertz.

While a graduate student at MIT's Artificial Intelligence Laboratory, he built the software as part of Project MAC.

"Chess is the drosophila [fruit fly] of artificial intelligence," according to Russian mathematician Alexander Kronrod, the field's chosen experimental organ ism (Quoted in McCarthy 1990, 227).



Creating a championship chess program has been a cherished goal in artificial intelligence since 1950, when Claude Shannon first described chess play as a task for computer programmers.

Chess and games in general present difficult but well-defined problems with clear rules and objectives.

Chess has long been seen as a prime illustration of human-like intelligence.

Chess is a well-defined example of human decision-making in which moves must be chosen with a specific purpose in mind, under limited knowledge and uncertainty about the outcome.

The processing power of computers in the mid-1960s severely restricted the depth to which a chess move and its possible replies could be analyzed, since the number of board configurations grows exponentially with each successive reply.

The greatest human players have been shown to examine a small number of moves in great depth rather than a large number of moves at shallow depth.

Greenblatt aimed to recreate the methods used by good players to locate significant game tree branches.

He created Mac Hack to reduce the number of nodes analyzed while choosing moves by using a minimax search of the game tree along with alpha-beta pruning and heuristic components.

In this regard, Mac Hack's style of play was more human-like than that of more current chess computers (such as Deep Thought and Deep Blue), which use the sheer force of high processing rates to study tens of millions of branches of the game tree before making moves.
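For readers unfamiliar with the technique, here is a generic, minimal sketch of minimax search with alpha-beta pruning. It is not Greenblatt's original MIDAS code; the legal_moves, apply_move, and evaluate functions are hypothetical stand-ins for a real engine's move generator and heuristic evaluation.

```python
# Generic minimax with alpha-beta pruning: branches that cannot affect the
# final choice are cut off, reducing the number of nodes examined.

def alphabeta(state, depth, alpha, beta, maximizing, legal_moves, apply_move, evaluate):
    """Return the heuristic value of `state`, searching `depth` plies ahead."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)          # heuristic score at the search horizon
    if maximizing:
        value = float("-inf")
        for move in moves:
            value = max(value, alphabeta(apply_move(state, move), depth - 1,
                                         alpha, beta, False,
                                         legal_moves, apply_move, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                   # prune: opponent will avoid this branch
        return value
    else:
        value = float("inf")
        for move in moves:
            value = min(value, alphabeta(apply_move(state, move), depth - 1,
                                         alpha, beta, True,
                                         legal_moves, apply_move, evaluate))
            beta = min(beta, value)
            if beta <= alpha:
                break                   # prune
        return value

# Tiny demonstration on a hand-built game tree: internal nodes are lists of
# children, leaves are heuristic scores.
tree = [[3, 5], [2, 9], [0, 1]]
value = alphabeta(tree, depth=2,
                  alpha=float("-inf"), beta=float("inf"), maximizing=True,
                  legal_moves=lambda s: list(range(len(s))) if isinstance(s, list) else [],
                  apply_move=lambda s, m: s[m],
                  evaluate=lambda s: s if isinstance(s, (int, float)) else 0)
print(value)  # 3: the maximizer picks the branch whose worst case is best
```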

In a contest hosted by MIT mathematician Seymour Papert in 1967, Mac Hack defeated MIT philosopher Hubert Dreyfus and gained substantial renown among artificial intelligence researchers.

The RAND Corporation published a mimeographed version of Dreyfus's paper, Alchemy and Artificial Intelligence, in 1965, which criticized artificial intelligence researchers' claims and aspirations.

Dreyfus claimed that no computer could ever acquire intelligence since human reason and intelligence are not totally rule-bound, and hence a computer's data processing could not imitate or represent human cognition.

In a section of the paper titled "Signs of Stagnation," Dreyfus singled out attempts to construct chess-playing computers among his many critiques of AI.

Mac Hack's victory against Dreyfus was first seen as vindication by the AI community.



Jai Krishna Ponnappan





See also: 


Alchemy and Artificial Intelligence; Deep Blue.



Further Reading:



Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books.

Greenblatt, Richard D., Donald E. Eastlake III, and Stephen D. Crocker. 1967. “The Greenblatt Chess Program.” In AFIPS ’67: Proceedings of the November 14–16, 1967, Fall Joint Computer Conference, 801–10. Washington, DC: Thomson Book Company.

Marsland, T. Anthony. 1990. “A Short History of Computer Chess.” In Computers, Chess, and Cognition, edited by T. Anthony Marsland and Jonathan Schaeffer, 3–7. New York: Springer-Verlag.

McCarthy, John. 1990. “Chess as the Drosophila of AI.” In Computers, Chess, and Cognition, edited by T. Anthony Marsland and Jonathan Schaeffer, 227–37. New York: Springer-Verlag.

McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. San Francisco: W. H. Freeman.




Artificial Intelligence - Who Is Ray Kurzweil (1948–)?




Ray Kurzweil is a futurist and inventor from the United States.

He spent the first half of his career developing the first CCD flat-bed scanner, the first omni-font optical character recognition device, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed, large-vocabulary speech recognition machine.

He has earned several awards for his contributions to technology, including the Technical Grammy Award in 2015 and the National Medal of Technology.

Kurzweil is the cofounder and chancellor of Singularity University, as well as a director of engineering at Google, where he leads a team that works on artificial intelligence and natural language processing.

Singularity University is an unaccredited graduate school founded on the premise of tackling grand challenges such as renewable energy and space travel by developing a deep understanding of the opportunities presented by the current acceleration of technological progress.

The university, which is headquartered in Silicon Valley, has evolved to include one hundred chapters in fifty-five countries, delivering seminars, educational programs, and business acceleration programs.

While at Google, Kurzweil published the book How to Create a Mind (2012).

In his Pattern Recognition Theory of Mind, he claims that the neocortex is a hierarchical system of pattern recognizers.

Kurzweil claims that replicating this design in machines might lead to the creation of artificial superintelligence.

He believes that by doing so, he will be able to bring natural language comprehension to Google.

Kurzweil's popularity stems from his work as a futurist.

Futurists are those who specialize in or are interested in the near-to-long-term future and associated topics.

They use well-established methodologies like scenario planning to carefully examine forecasts and construct future possibilities.

Kurzweil is the author of five national best-selling books, including The Singularity Is Near (2005), a New York Times best-seller.

He has an extensive list of forecasts.

In his debut book, The Age of Intelligent Machines (1990), Kurzweil predicted the enormous growth of international internet use in the second half of the decade.

In his second highly influential book, The Age of Spiritual Machines (1999), where "spiritual" stands for "aware," he correctly predicted that computers would soon exceed humans at making the best investment decisions.

In the same book, Kurzweil prophesied that computers would one day "appear to have their own free will" and perhaps even have "spiritual experiences" (Kurzweil 1999, 6).

He also predicted that the barriers between humans and machines would dissolve to the point that people would essentially live forever as combined human-machine hybrids.

Scientists and philosophers have slammed Kurzweil's forecast of a sentient computer, claiming that awareness cannot be created by calculations.

Kurzweil tackles the phenomenon of the Technological Singularity in his third book, The Singularity Is Near.

The famed mathematician John von Neumann introduced the term singularity in this context.

In a 1950s chat with his colleague Stanislaw Ulam, von Neumann proposed that the ever-accelerating speed of technological progress "appears to be reaching some essential singularity in the history of the race beyond which human activities as we know them could not continue" (Ulam 1958, 5).

To put it another way, technological development would alter the course of human history.

Vernor Vinge, a computer scientist, math professor, and science fiction writer, rediscovered the word in 1993 and utilized it in his article "The Coming Technological Singularity." In Vinge's article, technological progress is more accurately defined as an increase in processing power.

Vinge investigates the idea of a self-improving artificial intelligence agent.

According to this theory, the artificial intelligent agent continues to update itself and grow technologically at an unfathomable pace, eventually resulting in the birth of a superintelligence—that is, an artificial intelligence that far exceeds all human intelligence.

In Vinge's apocalyptic vision, robots first become autonomous, then superintelligent, to the point where humans lose control of technology and machines seize control of their own fate.

Machines will rule the planet because technology is more intelligent than humans.

According to Vinge, the Singularity is the end of the human age.

Kurzweil presents an anti-dystopian view of the Singularity.

Kurzweil's core premise is that humans can develop something smarter than themselves; in fact, exponential advances in computer power make the creation of an intelligent machine all but inevitable, to the point that the machine will surpass humans in intelligence.

Kurzweil believes that machine intelligence and human intellect will converge at this moment.

The subtitle of The Singularity Is Near is When Humans Transcend Biology, which is no coincidence.

Kurzweil's overarching vision is based on discontinuity: no lesson from the past, or even the present, can aid humans in determining the way to the future.

This also explains why new types of education, such as Singularity University, are required.

Every sentimental look back to history, every memory of the past, renders humans more susceptible to technological change.

With the arrival of a new superintelligent, almost immortal race, history as a human construct will soon come to an end.

These immortals, the next phase in human development, are known as posthumans.

Kurzweil believes that posthumanity will be made up of sentient robots rather than people with mechanical bodies.

He claims that the future should be formed on the assumption that mankind is in the midst of an extraordinary period of technological advancement.

The Singularity, he believes, would elevate humanity beyond its wildest dreams.

While Kurzweil claims that artificial intelligence is now outpacing human intellect on certain activities, he also acknowledges that the moment of superintelligence, often known as the Technological Singularity, has not yet arrived.

He believes that individuals who embrace the new age of human-machine synthesis and are daring to go beyond evolution's boundaries would view humanity's future as positive. 




Jai Krishna Ponnappan





See also: 


General and Narrow AI; Superintelligence; Technological Singularity.



Further Reading:




Kurzweil, Ray. 1990. The Age of Intelligent Machines. Cambridge, MA: MIT Press.

Kurzweil, Ray. 1999. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Penguin.

Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

Ulam, Stanislaw. 1958. “Tribute to John von Neumann.” Bulletin of the American Mathematical Society 64, no. 3, pt. 2 (May): 1–49.

Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace, 11–22. Cleveland, OH: NASA Lewis Research Center.



 

Artificial Intelligence - Who Is Heather Knight?




Heather Knight is a robotics and artificial intelligence specialist best recognized for her work in the entertainment industry.

Her Collaborative Humans and Robots: Interaction, Sociability, Machine Learning, and Art (CHARISMA) Research Lab at Oregon State University aims to apply performing arts techniques to robots.

Knight identifies herself as a social roboticist, a person who develops non-anthropomorphic—and sometimes nonverbal—machines that interact with people.

She makes robots that act in ways that are modeled after human interpersonal communication.

These behaviors include speaking styles, greeting movements, open attitudes, and a variety of other context indicators that assist humans in establishing rapport with robots in ordinary life.

In the CHARISMA Lab, Knight examines social and political policies relating to robotics while working with social robots and so-called charismatic machines.

The Marilyn Monrobot interactive robot theatre company was founded by Knight.

The Robot Film Festival provides a venue for roboticists to demonstrate their latest inventions in a live setting, as well as films that are relevant to the evolving state of the art in robotics and robot-human interaction.

The Marilyn Monrobot firm arose from Knight's involvement with the Syyn Labs creative collective and her observations of Guy Hoffman, Director of the MIT Media Innovation Lab, on robots built for performance reasons.

Knight's production company specializes in robot comedy.

Knight argues that theatrical spaces are ideal for social robotics research because they not only encourage playfulness, requiring robot actors to express themselves and interact, but also provide creative constraints in which robots thrive, such as a fixed stage, trial-and-error learning, and repeat performances (with manipulated variations).

The use of robots in entertainment settings, according to Knight, is beneficial because it enriches human culture, imagination, and creativity.

At the TEDWomen conference in 2010, Knight debuted Data, a stand-up comedy robot.

Data is a Nao robot built by Aldebaran Robotics (now part of SoftBank Group).

Data performs a comedy performance (with roughly 200 pre-programmed jokes) while gathering input from the audience and fine-tuning its act in real time.

Its comedy software was developed at Carnegie Mellon University by Scott Satkin and Varun Ramakrishna.

Knight is presently collaborating with Ginger the Robot on a comedic project.

The development of algorithms for artificial social intelligence is also fueled by robot entertainment.

In other words, art is utilized to motivate the development of new technologies.

To evaluate audience responses, Data and Ginger use a microphone and a machine learning system to recognize the sounds the audience makes (laughter, chatter, clapping, etc.).

After each joke, the audience is given green and red cards to hold up.

Green cards indicate to the robots that the audience enjoys the joke.

Red cards are given out when jokes fall flat.
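The published accounts do not describe the robots' actual software, but the real-time fine-tuning described above can be sketched as a simple multi-armed bandit over jokes, with detected laughter or green/red cards supplying the reward signal. Everything below, including the JokeSelector class and the joke list, is a hypothetical illustration.

```python
import random

# A toy epsilon-greedy bandit: each joke's estimated payoff is updated from
# audience feedback, so the act drifts toward material that lands well.

class JokeSelector:
    def __init__(self, jokes, epsilon=0.1):
        self.jokes = list(jokes)
        self.epsilon = epsilon                      # exploration rate
        self.scores = {j: 0.0 for j in self.jokes}  # running mean reward
        self.counts = {j: 0 for j in self.jokes}

    def pick(self):
        # Mostly tell the joke with the best track record, but sometimes
        # explore a less-tested one.
        if random.random() < self.epsilon:
            return random.choice(self.jokes)
        return max(self.jokes, key=lambda j: self.scores[j])

    def feedback(self, joke, reward):
        # reward: 1.0 for laughter / green cards, 0.0 for silence / red cards.
        self.counts[joke] += 1
        n = self.counts[joke]
        self.scores[joke] += (reward - self.scores[joke]) / n  # incremental mean

selector = JokeSelector(["robot pun", "self-deprecating bit", "topical joke"])
joke = selector.pick()
selector.feedback(joke, reward=1.0)   # the audience laughed
```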

Knight has discovered that excellent robot humor doesn't have to disguise the fact that it's about a robot.

Rather, Data makes people laugh by drawing attention to its machine-specific issues and making self-deprecating remarks about its limits.

In order to create expressive, captivating robots, Knight has found improvisational acting and dancing skills to be quite useful.

In the process, she has adapted the classic robotics paradigm of Sense-Plan-Act, preferring a Sensing-Character-Enactment cycle that more closely resembles the process used in theatrical performance.

Knight is presently experimenting with ChairBots, which are hybrid robots made by gluing IKEA wooden chairs to Neato Botvacs (a brand of intelligent robotic vacuum cleaner).

The ChairBots are being tested in public places to see how a basic robot might persuade people to get out of the way using just rudimentary gestures as a mode of communication.

They've also been used to persuade prospective café customers to come in, locate a seat, and settle down.

Knight collaborated on the synthetic organic robot art piece Public Anemone for the SIGGRAPH computer graphics conference while pursuing degrees at the MIT Media Lab with Personal Robots group head Professor Cynthia Breazeal.

The installation consisted of a fiberglass cave filled with glowing creatures that moved and responded to music and people.

The cave's centerpiece robot, also known as "Public Anemone," swayed and interacted with visitors, bathed in a waterfall, watered a plant, and interacted with other cave attractions.

Knight collaborated with animatronics designer Dan Stiehl to create capacitive sensor-equipped artificial tube worms.

The tubeworm's fiberoptic tentacles drew into their tubes and changed color when a human observer reached into the cave, as though prompted by protective impulses.

The team behind Public Anemone defined the initiative as "a step toward fully embodied robot theatrical performance" and "an example of intelligent staging." Knight also helped with the mechanical design of the Smithsonian/Cooper-Hewitt Design Museum's "Cyberflora" kinetic robot flower garden display in 2003.

Her master's thesis at MIT focused on the Sensate Bear, a huggable robot teddy bear with full-body capacitive touch sensors that she used to investigate real-time algorithms incorporating social touch and nonverbal communication.

In 2016, Knight received her PhD from Carnegie Mellon University.

Her dissertation focused on expressive motion in robots with reduced degrees of freedom.

Humans do not require robots to closely resemble humans in appearance or behavior to be treated as close associates, according to Knight's research.

Humans, on the other hand, are quick to anthropomorphize robots and offer them autonomy.

Indeed, she claims, when robots become more human-like in appearance, people may feel uneasy or anticipate a far higher level of humanlike conduct.

Professor Matt Mason of the School of Computer Science and Robotics Institute advised Knight.

She was formerly a robotic artist in residence at X, the research lab of Google's parent company, Alphabet.

Knight has previously worked with Aldebaran Robotics and NASA's Jet Propulsion Laboratory as a research scientist and engineer.

While working as an engineer at Aldebaran Robotics, Knight created the touch sensing panel for the Nao autonomous family companion robot, as well as the infrared detection and emission capabilities in its eyes.

Syyn Labs won a UK Music Video Award for her work on the opening two minutes of the OK Go video "This Too Shall Pass," which contains a Rube Goldberg machine.

She is now assisting Clearpath Robotics in making its self-driving, mobile-transport robots more socially conscious. 





Jai Krishna Ponnappan





See also: 


RoboThespian; Turkle, Sherry.


Further Reading:



Biever, Celeste. 2010. “Wherefore Art Thou, Robot?” New Scientist 208, no. 2792: 50–52.

Breazeal, Cynthia, Andrew Brooks, Jesse Gray, Matt Hancher, Cory Kidd, John McBean, Dan Stiehl, and Joshua Strickon. 2003. “Interactive Robot Theatre.” Communications of the ACM 46, no. 7: 76–84.

Knight, Heather. 2013. “Social Robots: Our Charismatic Friends in an Automated Future.” Wired UK, April 2, 2013. https://www.wired.co.uk/article/the-inventor.

Knight, Heather. 2014. How Humans Respond to Robots: Building Public Policy through Good Design. Washington, DC: Brookings Institute, Center for Technology Innovation.



Artificial Intelligence - Who Is Hiroshi Ishiguro (1963–)?

  


Hiroshi Ishiguro is a well-known engineer who is most known for his life-like humanoid robots.

He thinks that the present information culture will eventually develop into a world populated by robot caregivers or helpmates.

Ishiguro also expects that studying artificial people will help us better understand how humans are conditioned to read and comprehend the actions and expressions of their own species.

Ishiguro seeks to explain concepts like relationship authenticity, autonomy, creativity, imitation, reciprocity, and robot ethics in terms of cognitive science.

Ishiguro's study aims to produce robots that are uncannily identical to humans in look and behavior.

He thinks that his robots will assist us in comprehending what it is to be human.

Sonzaikan is the Japanese name for this sense of a human's substantial presence, or spirit.

Success, according to Ishiguro, may be measured and evaluated in two ways.

The first is what he refers to as the complete Turing Test, in which an android passes if 70% of human spectators are unaware that they are seeing a robot until at least two seconds have passed.

The second metric for success, he claims, is the length of time a human stays actively engaged with a robot before discovering that the robot's cooperative eye tracking does not reflect true thinking.

Robovie was one of Ishiguro's earliest robots, launched in 2000.

Ishiguro intended to make a robot that didn't appear like a machine or a pet, but might be mistaken for a friend in everyday life.

Robovie may not seem to be human, but it can perform a variety of innovative human-like motions and interactive activities.

Eye contact, staring at items, pointing at things, nodding, swinging and folding arms, shaking hands, and saying hello and goodbye are all possible with Robovie.

Robovie was extensively featured in Japanese media, and Ishiguro became convinced that a robot's appearance, engagement, and conversation were vital to deeper, more nuanced connections between robots and humans.

In 2003, Ishiguro debuted Actroid to the general public for the first time.

Actroid, manufactured by Sanrio's Kokoro animatronics division, is an autonomous robot controlled by AI software developed at Osaka University's Intelligent Robotics Laboratory.

Actroid has a feminine look (in science fiction terms, a "gynoid") with skin constructed of incredibly realistic silicone.

Internal sensors and quiet air actuators at 47 points of physical articulation allow the robot to replicate human movement, breathing, and blinking, and it can even speak.

Movement is produced through sensor processing and data files carrying key values for each degree of freedom in the limbs and joints.

Five to seven degrees of freedom are typical for robot arms.

Arms, legs, torso, and neck of humanoid robots may have thirty or more degrees of freedom.

Programmers create Actroid scenarios in four steps: (1) collect recognition data from sensors activated by contact, (2) choose a motion module, (3) execute a specified series of movements and play an audio file, and (4) return to step 1.

Experiments using irregular, random, or contingent reactions to human contextual cues have been shown to help hold the human subject's attention, but they become far more effective when planned scenarios are included.

Motion modules are written in XML, a text-based markup language that is simple enough for even inexperienced programmers to understand.
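Since the published descriptions stop at this level of detail, the sketch below is only a hypothetical Python rendering of the four-step scenario loop and of what a motion module might look like; the XML schema, joint names, and sensor and playback routines are invented for illustration and are not the actual Kokoro/Osaka University software.

```python
import xml.etree.ElementTree as ET
import time

# Hypothetical motion module: key values per degree of freedom plus an audio cue.
MOTION_MODULE = """
<module name="greet">
  <pose joint="neck_pitch" value="0.2"/>
  <pose joint="right_arm_lift" value="0.7"/>
  <audio file="hello.wav"/>
</module>
"""

def read_sensors():
    # Step 1: collect recognition data from touch sensors (stubbed here).
    return {"touched": True, "location": "hand"}

def choose_module(sensor_data):
    # Step 2: pick a motion module based on the recognition data.
    return ET.fromstring(MOTION_MODULE) if sensor_data["touched"] else None

def execute(module):
    # Step 3: play back the key values for each degree of freedom, then audio.
    for pose in module.findall("pose"):
        print(f"set {pose.get('joint')} -> {pose.get('value')}")
    for audio in module.findall("audio"):
        print(f"play {audio.get('file')}")

for _ in range(3):               # Step 4: loop back to step 1 (three cycles here)
    data = read_sensors()
    module = choose_module(data)
    if module is not None:
        execute(module)
    time.sleep(0.5)
```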

Ishiguro debuted Repliee variants of the Actroid in 2005, which were supposed to be indistinguishable from a human female on first glance.

Repliee Q1Expo is an android replica of Ayako Fujii, a genuine Japanese newscaster.

Repliee androids are interactive; they can use voice recognition software to comprehend human conversations, answer verbally, maintain eye contact, and react quickly to human touch.

This is made possible by a sensor network made up of infrared motion detectors, cameras, microphones, identification tag readers, and floor sensors that is distributed and ubiquitous.

Artificial intelligence is used by the robot to assess whether the human is contacting the robot gently or aggressively.

Ishiguro also debuted Repliee R1, a kid version of the robot that looks identical to his then four-year-old daughter.

Actroids have recently been proven to be capable of imitating human limb and joint movement by observing and duplicating the movements.

Because much of the computer hardware that runs the artificial intelligence software is external to the robot, the Actroid is not capable of moving about on its own.

In research done at Ishiguro's lab, human volunteers' self-reported feelings and moods are captured while the robots perform activities.

The Actroid elicits a wide spectrum of emotions, from curiosity to disgust, acceptance to terror.

Ishiguro's research colleagues have also used real-time neuroimaging of human volunteers to better understand how human brains respond during human-android interactions.

As a result, Actroid serves as a testbed for determining why particular nonhuman agent acts fail to elicit the required cognitive reactions in humans.

The Geminoid robots were created in response to the fact that artificial intelligence lags far behind robotics when it comes to developing realistic interactions between humans and androids.

Ishiguro, in particular, admitted that it would be several years before a computer could have a lengthy, intensive spoken discussion with a person.

The Geminoid HI-1, which debuted in 2006, is a teleoperated (rather than totally autonomous) robot that looks similar to Ishiguro.

The name "gemininoid" is derived from the Latin word "twin." Hand fidgeting, blinking, and motions similar with human respiration are all possible for Geminoid.

Motion-capture technology is used to operate the android, which mimics Ishiguro's face and body motions.

The robot can imitate its creator's voice and communicate in a human-like manner.

Ishiguro plans to utilize the robot to teach students through remote telepresence one day.

When he is teleoperating the robot, he has observed that the sensation of immersion is so strong that his brain is fooled into producing phantom perceptions of actual contact when the android is poked.

The Geminoid-DK is a mechanical doppelgänger of Danish psychology professor Henrik Schärfe, launched in 2011.

While some viewers find the Geminoid's look unsettling, many others do not and simply communicate with the robot in a normal way.

In 2010, the Telenoid R1 was introduced as a teleoperated android robot.

Telenoid is 30 inches tall and amorphous, with just a passing resemblance to a human form.

The robot's objective is to transmit a human voice and gestures to a spectator who may use it as a communication or videoconferencing tool.

The Telenoid, like the other robots in Ishiguro's lab, looks to be alive: it simulates breathing and speech gestures and blinks.

However, in order to stimulate the imagination, the design deliberately limits the number of humanlike features.

In this manner, the Telenoid is analogous to a tangible, real-world avatar.

Its goal is to make more intimate, human-like interactions possible using telecommunications technology.

Ishiguro suggests that the robot might one day serve as a suitable stand-in for a teacher or partner who is otherwise only accessible from afar.

The Elfoid, a tiny version of the robot, can be grasped with one hand and carried in a pocket.

The autonomous persocom dolls that replace smartphones and other electronics in the immensely popular manga series Chobits foreshadowed the Actroid and Telenoid.

Ishiguro is a professor of systems innovation and the director of Osaka University's Intelligent Robotics Laboratory.

He's also a group leader at Kansai Science City's Advanced Telecommunications Research Institute (ATR) and a cofounder of the tech-transfer startup Vstone Ltd.

He thinks that future commercial enterprises will profit from the success of teleoperated robots in order to fund the continued development of his autonomous robots.

Erica, a humanoid robot that became a Japanese television news presenter in 2018, is his most recent creation.

Ishiguro studied oil painting extensively as a young man, pondering how to depict human resemblance on canvas while he worked.

In Hanao Mori's computer science lab at Yamanashi University, he got enthralled with robots.

At Osaka University, Ishiguro pursued his PhD in engineering under computer vision and image recognition pioneer Saburo Tsuji.

In studies done in Tsuji's lab, he worked on mobile robots capable of SLAM (simultaneous localization and mapping) using panoramic and omnidirectional video cameras.

This work led to his doctoral dissertation, which focused on tracking a human subject using active camera control and panning to acquire complete 360-degree views of the surroundings.

Ishiguro believed that his technology and applications may be utilized to provide a meaningful internal map of an interacting robot's surroundings.

An article based on his dissertation was rejected by its first reviewer.

Fine arts and technology, according to Ishiguro, are inexorably linked; art inspires new technologies, while technology enables for the creation and duplication of art.

Ishiguro has recently brought his robots to Seinendan, a theatre company founded by Oriza Hirata, in order to put what he's learned about human-robot communication into practice.

Ishiguro's field of cognitive science and AI, which he calls android science, has precedents in Disneyland's "Great Moments with Mr. Lincoln" animatronic stage show and the fictitious robot surrogates depicted in the Bruce Willis film Surrogates (2009).

In the Willis film, Ishiguro has a cameo appearance.



Jai Krishna Ponnappan





See also: 


Caregiver Robots; Nonhuman Rights and Personhood.



Further Reading:



Guizzo, Erico. 2010. “The Man Who Made a Copy of Himself.” IEEE Spectrum 47, no. 4 (April): 44–56.

Ishiguro, Hiroshi, and Fabio Dalla Libera, eds. 2018. Geminoid Studies: Science and Technologies for Humanlike Teleoperated Androids. New York: Springer.

Ishiguro, Hiroshi, and Shuichi Nishio. 2007. “Building Artificial Humans to Understand Humans.” Journal of Artificial Organs 10, no. 3: 133–42.

Ishiguro, Hiroshi, Tetsuo Ono, Michita Imai, Takeshi Maeda, Takayuki Kanda, and Ryohei Nakatsu. 2001. “Robovie: An Interactive Humanoid Robot.” International Journal of Industrial Robotics 28, no. 6: 498–503.

Kahn, Peter H., Jr., Hiroshi Ishiguro, Batya Friedman, Takayuki Kanda, Nathan G. Freier, Rachel L. Severson, and Jessica Miller. 2007. “What Is a Human? Toward Psychological Benchmarks in the Field of Human–Robot Interaction.” Interaction Studies 8, no. 3: 363–90.

MacDorman, Karl F., and Hiroshi Ishiguro. 2006. “The Uncanny Advantage of Using Androids in Cognitive and Social Science Research.” Interaction Studies 7, no. 3: 297–337.

Nishio, Shuichi, Hiroshi Ishiguro, and Norihiro Hagita. 2007a. “Can a Teleoperated Android Represent Personal Presence? A Case Study with Children.” Psychologia 50: 330–42.

Nishio, Shuichi, Hiroshi Ishiguro, and Norihiro Hagita. 2007b. “Geminoid: Teleoperated Android of an Existing Person.” In Humanoid Robots: New Developments, edited by Armando Carlos de Pina Filho, 343–52. Vienna, Austria: I-Tech.






Artificial Intelligence - Knowledge Engineering In Expert Systems.

  


Knowledge engineering (KE) is an artificial intelligence subject that aims to incorporate expert knowledge into a formal automated programming system in such a manner that the latter can produce the same or comparable results in problem solving as human experts when working with the same data set.

Knowledge engineering, more precisely, is a discipline that develops methodologies for constructing large knowledge-based systems (KBS), also known as expert systems, using appropriate methods, models, tools, and languages.

For knowledge elicitation, modern knowledge engineering uses the knowledge acquisition and documentation structuring (KADS) approach; hence, the development of knowledge-based systems is considered a modeling effort (i.e., knowledge engineering builds up computer models).

It's challenging to codify the knowledge acquisition process since human specialists' knowledge is a combination of skills, experience, and formal knowledge.

As a result, rather than directly transferring knowledge from human experts to the programming system, the experts' knowledge is modeled.

Simultaneously, direct simulation of the entire cognitive process of experts is extremely difficult.

The resulting computer models are expected to achieve results comparable to those of experts solving problems in the domain, rather than to match the experts' cognitive capabilities.

As a result, knowledge engineering focuses on modeling and problem-solving methods (PSM) that are independent of various representation formalisms (production rules, frames, etc.).

The problem solving method is a key component of knowledge engineering, and it refers to the knowledge-level specification of a reasoning pattern that can be used to complete a knowledge-intensive task.

Each problem-solving technique is a pattern that offers template structures for addressing a specific issue.

The terms "diagnostic," "classification," and "configuration" are often used to categorize problem-solving strategies based on their topology.

PSM "Cover-and-Differentiate" for diagnostic tasks and PSM "Propose-and-Reverse" for parametric design tasks are two examples.

Any problem-solving approach is predicated on the notion that the suggested method's logical adequacy corresponds to the computational tractability of the system implementation based on it.

The PSM heuristic classification—an inference pattern that defines the behavior of knowledge-based systems in terms of objectives and knowledge required to attain these goals—is often used in early instances of expert systems.

Inference actions and knowledge roles, as well as their relationships, are covered by this problem-solving strategy.

The relationships specify how domain knowledge is used in each inference action.

Observables, abstract observables, solution abstractions, and solutions are the knowledge roles, while the inference actions are abstract, heuristic match, and refine.

The PSM heuristic classification requires a hierarchically organized model of observables as well as of solutions for the "abstract" and "refine" steps, making it suited to acquiring static domain knowledge.
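To make the pattern concrete, here is a minimal, purely illustrative sketch of the heuristic-classification inference steps (abstract, heuristic match, refine). The toy medical-style domain content and all rule tables are invented placeholders, not a real expert system's knowledge base.

```python
# Heuristic classification: observables -> abstract observables ->
# solution abstractions -> concrete solutions.

ABSTRACTION_RULES = {            # observables -> abstract observables
    ("temperature", lambda v: v > 38.5): "fever",
    ("wbc_count", lambda v: v > 11000): "elevated_wbc",
}
HEURISTIC_MATCHES = {            # abstract observables -> solution abstractions
    frozenset({"fever", "elevated_wbc"}): "bacterial_infection",
    frozenset({"fever"}): "viral_infection",
}
REFINEMENTS = {                  # solution abstractions -> concrete solutions
    "bacterial_infection": ["strep_throat", "pneumonia"],
    "viral_infection": ["influenza", "common_cold"],
}

def classify(observables):
    # 1. Abstract: map raw observables to qualitative abstractions.
    abstractions = {label for (name, test), label in ABSTRACTION_RULES.items()
                    if name in observables and test(observables[name])}
    # 2. Heuristic match: find the most specific matching solution abstraction.
    best = None
    for pattern, solution_class in HEURISTIC_MATCHES.items():
        if pattern <= abstractions and (best is None or len(pattern) > len(best[0])):
            best = (pattern, solution_class)
    if best is None:
        return []
    # 3. Refine: expand the solution abstraction into candidate solutions.
    return REFINEMENTS[best[1]]

print(classify({"temperature": 39.2, "wbc_count": 13000}))  # ['strep_throat', 'pneumonia']
```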

In the late 1980s, knowledge engineering modeling methodologies shifted toward role limiting methods (RLM) and generic tasks (GT).

The idea of the "knowledge role" is utilized in role-limiting methods to specify how specific domain knowledge is employed in the problem-solving process.

RLM creates a wrapper around a PSM, describing it in general terms so that it can be reused.

However, since this technique covers only a single instance of a PSM, it is ineffective for problems that require the use of several methods.

Configurable role limiting methods (CRLM) are an extension of the role limiting methods concept, offering a predetermined collection of RLMs as well as a fixed scheme of knowledge categories.

Each member method may be used on a distinct subset of a job, but introducing a new method is challenging since it necessitates changes to established knowledge categories.

The generic task method includes a predefined scheme of knowledge kinds and an inference mechanism, as well as a general description of input and output.

The generic task is based on the "strong interaction problem hypothesis," which claims that domain knowledge's structure and representation may be totally defined by its application.

Each generic task makes use of knowledge and employs control mechanisms tailored to that knowledge.

Because the control techniques are more domain-specific, the actual knowledge acquisition employed in GT is more precise in terms of problem-solving step descriptions.

As a result, the design of specialized knowledge-based systems may be thought of as the instantiation of specified knowledge categories using domain-specific words.

The downside of GT is that it may not be possible to integrate a specified problem-solving approach with the optimum problem-solving strategy required to complete the assignment.

The task structure (TS) approach seeks to address GT's shortcomings by distinguishing between the job and the technique employed to complete it.

As a result, every task-structure based on that method postulates how the issue might be solved using a collection of generic tasks, as well as what knowledge has to be acquired or produced for these tasks.

Because of the requirement for several models, modeling frameworks were created to meet various parts of knowledge engineering methodologies.

The organizational model, task model, agent model, communication model, expertise model, and design model are the models of the most common engineering CommonKADS structure (which depends on KADS).

The organizational model explains the structure as well as the tasks that each unit performs.

The task model describes tasks in a hierarchical order.

Each agent's skills in task execution are specified by the agent model.

The communication model specifies how agents interact with one another.

The most significant is the expertise model, which employs several layers and focuses on representing domain-specific knowledge (the domain layer) as well as the inferences used in the reasoning process (the inference layer).

A task layer is also supported by the expertise model.

The latter is concerned with task decomposition.

The system architecture and computational mechanisms used to make the inference are described in the design model.

In CommonKADS, there is a clear distinction between domain-specific knowledge and generic problem-solving techniques, allowing various problems to be addressed by constructing a new instance of the domain layer and utilizing the PSM on a different domain.
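To make that separation concrete, here is a toy sketch (not taken from CommonKADS itself) in which one domain-independent problem-solving method is reused by swapping in a new domain layer; the "cover" method and both domain layers are invented placeholders.

```python
# The same PSM applied to two different domain-layer instances.

def cover(findings, domain_layer):
    """Domain-independent PSM: return hypotheses that explain all findings."""
    return [h for h, explains in domain_layer.items()
            if set(findings) <= set(explains)]

# Domain layer #1: car diagnosis.
car_domain = {"dead_battery": ["no_start", "dim_lights"],
              "empty_tank": ["no_start"]}
# Domain layer #2: printer diagnosis, reusing the identical PSM.
printer_domain = {"out_of_toner": ["faded_print"],
                  "driver_fault": ["no_output", "error_light"]}

print(cover(["no_start", "dim_lights"], car_domain))   # ['dead_battery']
print(cover(["faded_print"], printer_domain))          # ['out_of_toner']
```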

Several libraries of problem-solving algorithms are now available for use in development.

They are distinguished by several key characteristics: whether the library was created for a specific purpose or has a broader scope; whether it is formal, informal, or implemented; whether its PSMs are fine-grained or coarse-grained; and, lastly, its size.

Recently, some research has been carried out with the goal of unifying existing libraries by offering adapters that convert task-neutral PSM to task-specific PSM.

The MIKE (model-based and incremental knowledge engineering) method, which proposes integrating semiformal and formal specification and prototyping into the framework, grew out of the creation of CommonKADS.

As a result, MIKE divides the entire process of developing knowledge-based systems into a number of sub-activities, each of which focuses on a different aspect of system development.

The Protégé method makes use of PSMs and ontologies, with an ontology defined as an explicit specification of a shared conceptualization that holds in a particular context.

Although the ontologies used in Protégé might be of any form, the ones utilized are domain ontologies, which describe the common conceptualization of a domain, and method ontologies, which specify the ideas and relations used by problem solving techniques.

In addition to problem-solving techniques, the development of knowledge-based systems necessitates the creation of particular languages capable of defining the information needed by the system as well as the reasoning process that will use that knowledge.

The purpose of such languages is to give a clear and formal foundation for expressing knowledge models.

Furthermore, some of these formal languages may be executable, allowing simulation of knowledge model behavior on specified input data.

The knowledge was directly encoded in rule-based implementation languages in the early years.

This resulted in a slew of issues, including the impossibility of expressing some forms of knowledge, the difficulty of ensuring consistent representation of various types of knowledge, and a lack of detail.

Modern approaches to language development aim to target and formalize the conceptual models of knowledge-based systems, allowing users to precisely define the goals and process for obtaining models, as well as the functionality of interface actions and accurate semantics of the various domain knowledge elements.

The majority of these epistemological languages include primitives like constants, functions, and predicates, as well as certain mathematical operations.

Object-oriented or frame-based languages, for example, define a wide range of modeling primitives such as objects and classes.
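As a loose Python analogue of these modeling primitives (classes, slots, and predicates over them), consider the fragment below. It is not the syntax of any of the specific languages discussed in this entry; it only illustrates the kind of structure such epistemological languages make explicit.

```python
from dataclasses import dataclass

@dataclass
class Observable:            # a "class" (frame) with typed slots
    name: str
    value: float

@dataclass
class Finding:               # another frame relating an observable to a judgment
    observable: Observable
    abnormal: bool

def is_abnormal(obs: Observable, threshold: float) -> bool:
    """A predicate over frames, analogous to a logical predicate."""
    return obs.value > threshold

temp = Observable("temperature", 39.0)
finding = Finding(temp, is_abnormal(temp, 38.5))
print(finding)   # Finding(observable=Observable(name='temperature', value=39.0), abnormal=True)
```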

KARL, (ML)2, and DESIRE are the most common examples of specific languages.

KARL is a language that employs a Horn logic variation.

It was created as part of the MIKE project and combines two forms of logic to target the KADS expertise model: L-KARL and P-KARL.

The L-KARL is a frame logic version that may be used in inference and domain layers.

It's a mix of first-order logic and semantic data modeling primitives, in fact.

P-KARL is a task layer specification language that is also a dynamic logic in some versions.

For KADS expertise models, (ML)2 is a formalization language.

The language mixes first-order extended logic for domain layer definition, first-order meta logic for inference layer specification, and quantified dynamic logic for task layer specification.

The concept of compositional architecture is used in DESIRE (the design and specification of interconnected reasoning components).

It specifies the dynamic reasoning process using temporal logics.

Transactions describe the interaction between components in knowledge-based systems, and the control flow between any two objects is specified as a set of control rules.

A metadata description is attached to each item.

In a declarative approach, the meta level specifies the dynamic features of the object level.

The need to design large knowledge-based systems prompted the development of knowledge engineering, which entails creating a computer model with the same problem-solving capabilities as human experts.

Knowledge engineering views knowledge-based systems as operational systems that should display some desirable behavior, and provides modeling methodologies, tools, and languages to construct such systems.




Jai Krishna Ponnappan





See also: 


Clinical Decision Support Systems; Expert Systems; INTERNIST-I and QMR; MOLGEN; MYCIN.



Further Reading:


Schreiber, Guus. 2008. “Knowledge Engineering.” In Foundations of Artificial Intelligence, vol. 3, edited by Frank van Harmelen, Vladimir Lifschitz, and Bruce Porter, 929–46. Amsterdam: Elsevier.

Studer, Rudi, V. Richard Benjamins, and Dieter Fensel. 1998. “Knowledge Engineering: Principles and Methods.” Data & Knowledge Engineering 25, no. 1–2 (March): 161–97.

Studer, Rudi, Dieter Fensel, Stefan Decker, and V. Richard Benjamins. 1999. “Knowledge Engineering: Survey and Future Directions.” In XPS 99: German Conference on Knowledge-Based Systems, edited by Frank Puppe, 1–23. Berlin: Springer.


