
Artificial Intelligence - Speech Recognition And Natural Language Processing

 


Natural language processing (NLP) is a branch of artificial intelligence that involves mining human text and speech in order to generate or respond to human inquiries in a legible or natural manner.

To decode the ambiguities and opacities of genuine human language, NLP has needed advances in statistics, machine learning, linguistics, and semantics.

Chatbots will employ natural language processing to connect with humans across text-based and voice-based interfaces in the future.

Computer assistants will support interactions between people with varying abilities and needs.

By making search more natural, they will enable natural language searches of huge volumes of information, such as that found on the internet.

They may also incorporate useful ideas or nuggets of information into a variety of circumstances, including meetings, classes, and informal discussions.



They may even be able to "read" and react in real time to the emotions or moods of human speakers (so-called "sentiment analysis").

By 2025, the market for NLP hardware, software, and services might be worth $20 billion per year.

Speech recognition, often known as voice recognition, has a long history.

Harvey Fletcher, a physicist who pioneered research showing the link between voice energy, frequency spectrum, and the perception of sound by a listener, initiated research into automated speech recognition and transcription at Bell Labs in the 1930s.

Most voice recognition algorithms nowadays are based on his research.

Homer Dudley, another Bell Labs scientist, received patents by 1940 for a Voder voice synthesizer that imitated human vocalizations and a parallel bandpass vocoder that could take sound samples and put them through narrow band filters to identify their energy levels.

By putting the recorded energy levels through various filters, the latter device could convert them back into crude approximations of the original sounds.

Bell Labs researchers had found out how to make a system that could do more than mimic speech by the 1950s.

During that decade, digital technology had progressed to the point that the system could detect individual spoken word portions by comparing their frequencies and energy levels to a digital sound reference library.

In essence, the system made an informed guess about the word being spoken.

The pace of change was gradual.

Bell Labs systems could distinguish around 10 syllables uttered by a single person by the mid-1950s.

Researchers at MIT, IBM, Kyoto University, and University College London were working on recognition systems that employed statistics to detect words containing multiple phonemes toward the end of the decade.

Phonemes are sound units that are perceived as separate from one another by listeners.



Additionally, progress was being made on systems that could recognize the voice of many speakers.

Allen Newell headed the first professional automated speech recognition group, which was founded in 1971.

The research team split their time between acoustics, parametrics, phonemics, lexical ideas, sentence processing, and semantics, among other levels of knowledge generation.

Some of the issues examined by the group were investigated with funding from the Defense Advanced Research Projects Agency (DARPA) in the 1970s.

DARPA was interested in the technology because it might be used to handle massive amounts of spoken data generated by multiple government departments and transform that data into insights and strategic solutions to challenges.

Techniques such as dynamic time warping and continuous speech recognition made progress.

Computer technology progressed significantly, and numerous mainframe and minicomputer manufacturers started to perform research in natural language processing and voice recognition.

The Speech Understanding Research (SUR) project at Carnegie Mellon University was one of the DARPA-funded projects.



The SUR project, directed by Raj Reddy, produced numerous groundbreaking speech recognition systems, including Hearsay, Dragon, Harpy, and Sphinx.

Harpy is notable in that it employs the beam search approach, which has been a standard in such systems for decades.

Beam search is a heuristic search technique that examines a network by extending the most promising node among a small number of possibilities.

Beam search is an improved version of best-first search that uses less memory.

It's a greedy algorithm in the sense that it uses the problem-solving heuristic of making the locally best decision at each step in the hopes of obtaining a global best choice.
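
As a hedged illustration of the general idea (not Harpy's actual implementation), the following Python sketch keeps only the k highest-scoring partial hypotheses at each step; the successors, score, and is_goal functions are hypothetical placeholders supplied by the caller.

```python
import heapq

def beam_search(start, successors, score, is_goal, beam_width=3, max_steps=20):
    """Generic beam search: at each step keep only the beam_width
    highest-scoring partial hypotheses instead of the full frontier."""
    beam = [(score([start]), [start])]                 # (score, path) pairs
    for _ in range(max_steps):
        candidates = []
        for _, path in beam:
            if is_goal(path[-1]):
                return path                            # best finished hypothesis
            for nxt in successors(path[-1]):
                new_path = path + [nxt]
                candidates.append((score(new_path), new_path))
        if not candidates:
            break
        # Greedy pruning: discard everything outside the top beam_width.
        beam = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
    return max(beam, key=lambda c: c[0])[1]            # best hypothesis so far
```

In a speech recognizer, the hypotheses would be partial phoneme or word sequences, and the score would typically combine acoustic and language model probabilities.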

In general, graph search algorithms have served as the foundation for voice recognition research for decades, just as they have in the domains of operations research, game theory, and artificial intelligence.

By the 1980s and 1990s, data processing and algorithms had advanced to the point where researchers could use statistical models to identify whole strings of words, even phrases.

The Pentagon remained the field's leader, but IBM's work had progressed to the point where the corporation was on the verge of manufacturing a computerized voice transcription device for its corporate clients.

Bell Labs had developed sophisticated digital systems for automatic voice dialing of telephone numbers.

Other applications that seemed to be within reach were closed captioned transcription of television broadcasts and personal automatic reservation systems.

The comprehension of spoken language has dramatically improved.

The Air Travel Information System (ATIS) was the first commercial system to emerge from DARPA funding.

New obstacles arose, such as "disfluencies," or natural pauses, corrections, casual speech, interruptions, and verbal fillers like "oh" and "um" that organically formed from conversational speaking.

Every Windows 95 operating system came with the Speech Application Programming Interface (SAPI) in 1995.

SAPI (which comprised subroutine definitions, protocols, and tools) made it easier for programmers and developers to include speech recognition and voice synthesis into Windows programs.

In particular, SAPI gave other software developers the ability to construct and freely share their own speech recognition engines.

It gave NLP technology a big boost in terms of increasing interest and generating wider markets.

The Dragon line of voice recognition and dictation software programs is one of the most well-known mass-market NLP solutions.

The popular Dragon NaturallySpeaking program aims to provide automatic real-time, large-vocabulary, continuous-speech dictation with the use of a headset or microphone.

The software took fifteen years to create and was initially published in 1997.

It is still widely regarded as the gold standard for personal computing today.

One hour of digitally recorded speech takes the program roughly 4–8 hours to transcribe, although dictation on screen is virtually instantaneous.

Similar voice dictation software is packaged with smartphones, converting ordinary speech into text for use in text messages and emails.

The large amount of data accessible on the cloud, as well as the development of gigantic archives of voice recordings gathered from smartphones and electronic peripherals, has benefited industry tremendously in the twenty-first century.

Companies have been able to enhance acoustic and linguistic models for voice processing as a result of these massive training data sets.

To match observed and "classified" sounds, traditional speech recognition systems employed statistical learning methods.

Since the 1990s, Markov and hidden Markov models, together with reinforcement learning and pattern recognition algorithms, have increasingly been used in speech processing.

Because of the large amounts of data available for matching and the strength of deep learning algorithms, error rates have dropped dramatically in recent years.
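
As a minimal sketch of the hidden Markov model idea mentioned above (in Python, with two invented phoneme-like states and toy probabilities rather than a real acoustic model), Viterbi decoding picks the most probable hidden state sequence for a series of observed acoustic symbols.

```python
# Toy Viterbi decoding over a hidden Markov model: two invented phoneme-like
# hidden states, two observable acoustic symbols, all probabilities made up.
states = ["S1", "S2"]
start_p = {"S1": 0.6, "S2": 0.4}
trans_p = {"S1": {"S1": 0.7, "S2": 0.3}, "S2": {"S1": 0.4, "S2": 0.6}}
emit_p = {"S1": {"lo": 0.5, "hi": 0.5}, "S2": {"lo": 0.1, "hi": 0.9}}

def viterbi(observations):
    """Return (probability, path) for the most probable hidden state sequence."""
    # Each trellis entry maps state -> (prob of best path ending here, that path).
    trellis = [{s: (start_p[s] * emit_p[s][observations[0]], [s]) for s in states}]
    for obs in observations[1:]:
        step = {}
        for s in states:
            prob, prev = max(
                (trellis[-1][p][0] * trans_p[p][s] * emit_p[s][obs], p)
                for p in states
            )
            step[s] = (prob, trellis[-1][prev][1] + [s])
        trellis.append(step)
    return max(trellis[-1].values(), key=lambda t: t[0])

print(viterbi(["lo", "hi", "hi"]))   # (probability, most likely state sequence)
```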

Despite the fact that linguists argue that natural languages need flexibility and context to be effectively comprehended, these approximation approaches and probabilistic functions are exceptionally strong in deciphering and responding to human voice inputs.

The n-gram, a continuous sequence of n elements from a given sample of text or voice, is now the foundation of computational linguistics.

Depending on the application, the objects might be phonemes, syllables, letters, words, or base pairs.

N-grams are usually gathered from text or voice.

In terms of proficiency, no other method presently outperforms this one.
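
A hedged, toy illustration in Python (the sentence and counts are invented): n-grams are just sliding windows over a token sequence, and relative counts give a crude estimate of how likely one word is to follow another.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return all n-grams (as tuples) from a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the cat sat on the mat the cat slept".split()
bigram_counts = Counter(ngrams(tokens, 2))
unigram_counts = Counter(tokens)

# A simple bigram language model estimates P(next word | previous word)
# from relative counts, e.g. P(cat | the) = count(the cat) / count(the).
p_cat_given_the = bigram_counts[("the", "cat")] / unigram_counts["the"]

print(bigram_counts.most_common(3))
print(round(p_cat_given_the, 2))     # 2/3 in this toy corpus
```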

For their virtual assistants, Google and Bing have indexed the whole internet and incorporate user query data in their language models for voice search applications.

Today's systems are starting to identify new terms from their datasets on the fly, which is referred to as "lifelong learning" by humans, although this is still a novel technique.

Companies working in natural language processing will desire solutions that are portable (not reliant on distant servers), deliver near-instantaneous response, and provide a seamless user experience in the future.

Richard Socher, a deep learning specialist and the founder and CEO of the artificial intelligence start-up MetaMind, is working on a strong example of next-generation NLP.

Based on massive chunks of natural language information, the company's technology employs a neural networking architecture and reinforcement learning algorithms to provide responses to specific and highly broad inquiries.

Salesforce, the digital marketing powerhouse, just purchased the startup.

Text-to-speech analysis and advanced conversational interfaces in automobiles will be in high demand in the future, as will speech recognition and translation across cultures and languages, automatic speech understanding in noisy environments like construction sites, and specialized voice systems to control office and home automation processes and internet-connected devices.

Any of these applications for enhancing human speech will require the collection of massive data sets of natural language.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Natural Language Generation; Newell, Allen; Workplace Automation.


References & Further Reading:


Chowdhury, Gobinda G. 2003. “Natural Language Processing.” Annual Review of Information Science and Technology 37: 51–89.

Jurafsky, Daniel, and James H. Martin. 2014. Speech and Language Processing. Second edition. Upper Saddle River, NJ: Pearson Prentice Hall.

Mahavan, Radhika. n.d. “Natural Language Processing: Current Applications and Future Possibilities.” https://www.techemergence.com/nlp-current-applications-and-future-possibilities/.

Manning, Christopher D., and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. Cambridge, MA: MIT Press.

Metz, Cade. 2015. “AI’s Next Frontier: Machines That Understand Language.” Wired, June 24, 2015. https://www.wired.com/2015/06/ais-next-frontier-machines-understand-language/.

Nusca, Andrew. 2011. “Say Command: How Speech Recognition Will Change the World.” ZDNet, November 2, 2011. https://www.zdnet.com/article/say-command-how-speech-recognition-will-change-the-world/.





Artificial Intelligence - What Are Mobile Recommendation Assistants?

 




Mobile Recommendation Assistants, also known as Virtual Assistants, Intelligent Agents, or Virtual Personal Assistants, are a collection of software features that combine a conversational user interface with artificial intelligence to act on behalf of a user.

Working together, these features deliver what appears to the user to be an agent.

In this sense, an agent differs from a tool in that it has the ability to act autonomously and make choices with some degree of autonomy.

Many qualities may be built into the design of mobile recommendation assistants to improve the user's impression of agency.

Examples of such tactics include using visual avatars to represent the technology, incorporating elements of personality such as humor or informal/colloquial language, giving the assistant a voice and a proper name, and constructing a consistent manner of behaving.

A human user can use a mobile recommendation assistant to help them with a wide range of tasks, such as opening software applications, answering questions, performing tasks (operating other software/hardware), or engaging in conversational commerce or entertainment (telling stories, telling jokes, playing games, etc.).

Apple's Siri, Baidu's Xiaodu, Amazon's Alexa, Microsoft's Cortana, Google's Google Assistant, and Xiaomi's Xiao AI are among the mobile voice assistants now in development, each designed for certain companies, use cases, and user experiences.

A range of user interface modalities are used by mobile recommendation assistants.

Some are completely text-based and are referred to as chatbots.

Business to consumer (B2C) communication is the most common use case for a chatbot, and notable applications include online retail communication, insurance, banking, transportation, and restaurants.

Chatbots are increasingly being employed in medical and psychological applications, such as assisting users with behavior modification.

Similar apps are becoming more popular in educational settings to help students with language learning, studying, and exam preparation.

Facebook Messenger is a prominent example of a chatbot on social media.

While not all mobile recommendation assistants use voice-enabled interaction as an input modality (some, such as website chatbots, may depend entirely on text input), many contemporary examples do.

A mobile recommendation assistant builds on a number of predecessor technologies, including the voice-enabled user interface.

Early voice-enabled user interfaces were made feasible by a command syntax that was hand-coded as a collection of rules or heuristics in advance.

These rule-based systems allowed users to operate devices without using their hands by delivering voice directions.

IBM produced the first voice recognition program, which was exhibited during the 1962 World's Fair in Seattle.

The IBM Shoebox had a limited vocabulary of sixteen words and nine numbers.

By the 1990s, IBM and Microsoft's personal computers and software had basic speech recognition; Apple's Siri, which debuted on the iPhone 4s in 2011, was the first mobile application of a mobile assistant.

These early voice recognition systems were disadvantaged in comparison to conversational mobile agents in terms of user experience since they required a user to learn and adhere to a preset command language.

Rule-based voice interaction can also feel mechanical, falling short of the realistic humanlike conversation with computers that is a feature of current mobile recommendation assistants.

Instead, natural language processing (NLP) uses machine learning and statistical inference to learn rules from enormous amounts of linguistic data (corpora).

Decision trees and statistical modeling are used in natural language processing machine learning to understand requests made in a variety of ways that are typical of how people regularly communicate with one another.
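
As a hedged sketch of the statistical approach described above (assuming scikit-learn is available; the requests, intents, and phrasings are invented), a bag-of-words decision tree can map differently worded requests onto the same intent:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Invented training set: several phrasings per intent.
requests = [
    "turn on the lights", "lights on please", "switch the lights on",
    "what's the weather today", "will it rain tomorrow", "weather forecast please",
    "set an alarm for seven", "wake me up at seven", "alarm at seven please",
]
intents = ["lights"] * 3 + ["weather"] * 3 + ["alarm"] * 3

# Bag-of-words features feeding a decision tree classifier.
model = make_pipeline(CountVectorizer(), DecisionTreeClassifier(random_state=0))
model.fit(requests, intents)

print(model.predict(["please switch the lights on", "is it going to rain"]))
```

Production assistants use far richer models and training data, but the basic move is the same: learn from many example phrasings rather than hand-code a fixed command syntax.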

Advanced agents may have the capacity to infer a user's purpose in light of explicit preferences expressed via settings or other inputs, such as calendar entries.

Google's Voice Assistant uses a mix of probabilistic reasoning and natural language processing to construct a natural-sounding dialogue, which includes conversational components such as paralanguage ("uh", "uh-huh", "ummm").

To convey knowledge and attention, modern digital assistants use multimodal communication.

Paralanguage refers to communication components that don't have semantic content but are nonetheless important for conveying meaning in context.

These may be used to show purpose, collaboration in a dialogue, or emotion.

The aspects of paralanguage utilized in Google's voice assistant employing Duplex technology are termed vocal segregates or speech disfluencies; they are intended to not only make the assistant appear more human, but also to help the dialogue "flow" by filling gaps or making the listener feel heard.

Another key aspect of engagement is kinesics, which makes an assistant feel more like an engaged conversation partner.

Kinesics is the use of gestures, movements, facial expressions, and emotion to aid in the flow of communication.

The car firm NIO's virtual robot helper, Nome, is one recent example of the application of facial expression.

Nome is a digital voice assistant that sits above the central dashboard of NIO's ES8 in a spherical shell with an LCD screen.

It can swivel its "head" automatically to attend to various speakers and display emotions using facial expressions.

Another example is Dr. Cynthia Breazeal's commercial Jibo home robot from MIT, which achieves anthropomorphism through paralinguistic approaches.

In less anthropomorphic uses of kinesics, such as the graphical user interface elements of Apple's Siri, the illumination arrays on Amazon Alexa's physical interface Echo, or Xiaomi's Xiao AI, motion graphics or lighting animations are used to communicate states of communication such as listening, thinking, speaking, or waiting.

The rising intelligence of these systems, and the anthropomorphism (or, in some circumstances, zoomorphism or mechanomorphism) that comes with it, may pose ethical issues around user experience.

The need for more anthropomorphic systems derives from the positive user experience of humanlike agentic systems whose communicative behaviors are more closely aligned with familiar interactions like conversation, which are made feasible by natural language and paralinguistics.

Natural conversation systems have the fundamental advantage of not requiring the user to learn new syntax or semantics in order to successfully convey orders and wants.

These more humanistic human machine interfaces may employ a user's familiar mental model of communication, which they gained through interacting with other people.

As machine systems come closer to human-to-human contact, transparency and security become difficulties, because a user's judgments about a machine's behavior are shaped by expectations drawn from human-to-human communication.

The establishment of comfort and rapport may obscure the differences between virtual assistant cognition and assumed motivation.

Many systems may be outfitted with motion sensors, proximity sensors, cameras, microphones, and other devices that resemble, replicate, or even surpass human capabilities in terms of cognition (the assistant's intellect and perceptive capacity).

While these can be used to facilitate some humanlike interaction by improving perception of the environment, they can also be used to record, document, analyze, and share information that is opaque to a user when their mental model and the machine's interface do not communicate the machine's operation at a functional level.

After a user interaction, a digital assistant's visual avatar may shut its eyes or vanish, but there is no reason to assume that such behavior means the microphone and camera have stopped recording.

As digital assistants become more incorporated into human users' daily lives, data privacy issues are becoming more prominent.

Transparency becomes a significant problem to solve when specifications, manufacturer data collecting aims, and machine actions are potentially mismatched with user expectations.

Finally, when it comes to data storage, personal information, and sharing methods, security becomes a concern, as hacking, disinformation, and other types of abuse threaten to undermine faith in technology systems and organizations.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Chatbots and Loebner Prize; Mobile Recommendation Assistants; Natural Language Processing and Speech Understanding.


References & Further Reading:


Lee, Gary G., Hong Kook Kim, Minwoo Jeong, and Ji-Hwan Kim, eds. 2015. Natural Language Dialog Systems and Intelligent Assistants. Berlin: Springer.

Leviathan, Yaniv, and Yossi Matias. 2018. “Google Duplex: An AI System for Accomplishing Real-world Tasks Over the Phone.” Google AI Blog. https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html.

Viken, Alexander. 2009. “The History of Personal Digital Assistants, 1980–2000.” Agile Mobility, April 10, 2009.

Waddell, Kaveh. 2016. “The Privacy Problem with Digital Assistants.” The Atlantic, May 24, 2016. https://www.theatlantic.com/technology/archive/2016/05/the-privacy-problem-with-digital-assistants/483950/.

Artificial Intelligence - Machine Translation.

  



Machine translation is the process of using computer technology to automatically translate human languages.

The US administration saw machine translation as a valuable instrument in diplomatic attempts to restrict communism in the USSR and the People's Republic of China from the 1950s through the 1970s.

Machine translation has lately become a tool for marketing goods and services in countries where they would otherwise be unavailable due to language limitations, as well as a standalone offering.

Machine translation is also one of the litmus tests for artificial intelligence progress.

Research in this area of artificial intelligence advances along several broad paradigms.

Rule-based expert systems and statistical approaches to machine translation are the earliest.

Neural machine translation and example-based machine translation (or translation by analogy) are two more contemporary paradigms.

Within computational linguistics, automated language translation is now regarded as an academic specialization.

While there are multiple possible roots for the present discipline of machine translation, the notion of automated translation as an academic topic derives from a 1947 communication between crystallographer Andrew D. Booth of Birkbeck College (London) and Warren Weaver of the Rockefeller Foundation.

"I have a manuscript in front of me that is written in Russian, but I am going to assume that it is truly written in English and that it has been coded in some bizarre symbols," Weaver said in a preserved note to colleagues in 1949.

To access the information contained in the text, all I have to do is peel away the code" (Warren Weaver, as cited in Arnold et al. 1994, 13).

Most commercial machine translation systems have a translation engine at their core.

The user's sentences are parsed several times by translation engines, each time applying algorithmic rules to transform the source sentence into the desired target language.

There are rules for word-based and phrase-based transformation.

The initial objective of the parser software is generally to replace words using a two-language dictionary.

Additional processing rounds of the phrases use comparative grammatical rules that consider sentence structure, verb form, and suffixes.
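
The following toy Python sketch mirrors that multi-pass, rule-based flow under heavy simplification: one invented reordering rule plus a four-word bilingual dictionary stand in for the many rounds of rules a commercial engine would apply.

```python
# Toy rule-based translation pass: one reordering rule plus dictionary
# substitution, standing in for the many rounds of rules a real engine applies.
# The vocabulary and the adjective set are invented for illustration.
en_to_fr = {"the": "le", "cat": "chat", "black": "noir", "sleeps": "dort"}
adjectives = {"black"}

def translate(sentence):
    words = sentence.lower().split()
    # Pass 1: structural rule, adjective-noun becomes noun-adjective.
    reordered, i = [], 0
    while i < len(words):
        if words[i] in adjectives and i + 1 < len(words):
            reordered += [words[i + 1], words[i]]
            i += 2
        else:
            reordered.append(words[i])
            i += 1
    # Pass 2: word-for-word substitution from a two-language dictionary.
    return " ".join(en_to_fr.get(w, w) for w in reordered)

print(translate("the black cat sleeps"))   # -> "le chat noir dort"
```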

The intelligibility and accuracy of translation engines are measured.

Machine translation isn't perfect.

Poor grammar in the source text, lexical and structural differences between languages, ambiguous usage, multiple meanings of words and idioms, and local variations in usage can all lead to "word salad" translations.

In 1959–60, MIT philosopher, linguist, and mathematician Yehoshua Bar-Hillel issued the harshest early criticism of machine translation of language.

In principle, according to Bar-Hillel, near-perfect machine translation is impossible.

He used the following passage to demonstrate the issue: John was on the prowl for his toy box. He eventually discovered it. The box was in the pen. John was overjoyed.

The word "pen" poses a problem in this statement since it might refer to a child's playpen or a writing ballpoint pen.

Knowing the difference necessitates a broad understanding of the world, which a computer lacks.

When the National Academy of Sciences Automatic Language Processing Advisory Committee (ALPAC) released an extremely damaging report about the poor quality and high cost of machine translation in 1964, the initial rounds of US government funding eroded.

ALPAC came to the conclusion that the country already had an abundant supply of human translators capable of producing significantly greater translations.

Many machine translation experts slammed the ALPAC report, pointing to machine efficiency in the preparation of first drafts and the successful rollout of a few machine translation systems.

In the 1960s and 1970s, there were only a few machine translation research groups.

The TAUM group in Canada, the Mel'cuk and Apresian groups in the Soviet Union, the GETA group in France, and the German Saarbrücken SUSY group were among the biggest.

SYSTRAN (System Translation), a private corporation financed by government contracts founded by Hungarian-born linguist and computer scientist Peter Toma, was the main supplier of automated translation technology and services in the United States.

In the 1950s, Toma became interested in machine translation while studying at the California Institute of Technology.

Around 1960, Toma moved to Georgetown University and started collaborating with other machine translation experts.

The Georgetown machine translation project, as well as SYSTRAN's initial contract with the United States Air Force in 1969, were both devoted to translating Russian into English.

That same year, at Wright-Patterson Air Force Base, the company's first machine translation programs were tested.

SYSTRAN software was used by the National Aeronautics and Space Administration (NASA) as a translation help during the Apollo-Soyuz Test Project in 1974 and 1975.

Shortly after, SYSTRAN was awarded a contract by the Commission of the European Communities to offer automated translation services, and the company has subsequently amalgamated with the European Commission (EC).

By the 1990s, the EC had seventeen different machine translation systems focused on different language pairs in use for internal communications.

In 1992, SYSTRAN began migrating its mainframe software to personal computers.

SYSTRAN Professional Premium for Windows was launched in 1995 by the company.

SYSTRAN continues to be the industry leader in machine translation.

Other notable machine translation systems include METEO, in use by the Canadian Meteorological Center in Montreal since 1977 to translate weather bulletins from English to French; ALPS, developed by Brigham Young University for Bible translation; SPANAM, the Pan American Health Organization's Spanish-to-English automatic translation system; and METAL, developed at the University of Toronto.

In the late 1990s, machine translation became more readily accessible to the general public through web browsers.

Babel Fish, a web-based application created by a group of researchers at Digital Equipment Corporation (DEC) using SYSTRAN machine translation technology, was one of the earliest online language translation services.

Thirty-six translation pairs between thirteen languages were supported by the technology.

Babel Fish began as an AltaVista web search engine tool before being sold to Yahoo! and then Microsoft.

The majority of online translation services still use rule-based and statistical machine translation.

Around 2016, SYSTRAN, Microsoft Translator, and Google Translate made the switch to neural machine translation.

Google Translate supports 103 languages.

Predictive deep learning algorithms, artificial neural networks, or connectionist systems modeled after biological brains are used in neural machine translation.

Machine translation based on neural networks is achieved in two steps.

The translation engine models its interpretation in the first phase based on the context of each source word within the entire sentence.

The artificial neural network then translates the entire word model into the target language in the second phase.

Simply said, the engine predicts the probability of word sequences and combinations inside whole sentences, resulting in a fully integrated translation model.

The underlying algorithms use statistical models to learn language rules.
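
A minimal encoder-decoder sketch of these two phases, assuming PyTorch and toy vocabulary sizes (this is an illustrative stand-in, not SYSTRAN's or Google's production architecture):

```python
import torch
import torch.nn as nn

# Hypothetical toy sizes; a real system learns vocabularies from parallel corpora.
SRC_VOCAB, TGT_VOCAB, EMB, HID = 100, 120, 32, 64

class TinyNMT(nn.Module):
    """Minimal encoder-decoder: the encoder summarizes each source word in the
    context of the whole sentence; the decoder predicts a probability
    distribution over target words, one position at a time."""
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(SRC_VOCAB, EMB)
        self.tgt_emb = nn.Embedding(TGT_VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, TGT_VOCAB)

    def forward(self, src_ids, tgt_ids):
        _, context = self.encoder(self.src_emb(src_ids))        # phase 1: encode
        dec_states, _ = self.decoder(self.tgt_emb(tgt_ids), context)
        return self.out(dec_states)                             # phase 2: decode

model = TinyNMT()
src = torch.randint(0, SRC_VOCAB, (1, 7))    # one source sentence of 7 token ids
tgt = torch.randint(0, TGT_VOCAB, (1, 5))    # 5 target tokens generated so far
probs = torch.softmax(model(src, tgt), dim=-1)   # P(next target word) per step
print(probs.shape)                               # torch.Size([1, 5, 120])
```

In practice the decoder is run step by step at translation time, often with beam search over the predicted word probabilities.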

The Harvard SEAS natural language processing group, in collaboration with SYSTRAN, has launched OpenNMT, an open-source neural machine translation system.



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Cheng, Lili; Natural Language Processing and Speech Understanding.



Further Reading:


Arnold, Doug J., Lorna Balkan, R. Lee Humphreys, Seity Meijer, and Louisa Sadler. 1994. Machine Translation: An Introductory Guide. Manchester and Oxford: NCC Blackwell.

Bar-Hillel, Yehoshua. 1960. “The Present Status of Automatic Translation of Languages.” Advances in Computers 1: 91–163.

Garvin, Paul L. 1967. “Machine Translation: Fact or Fancy?” Datamation 13, no. 4: 29–31.

Hutchins, W. John, ed. 2000. Early Years in Machine Translation: Memoirs and Biographies of Pioneers. Philadelphia: John Benjamins.

Locke, William Nash, and Andrew Donald Booth, eds. 1955. Machine Translation of Languages. New York: Wiley.

Yngve, Victor H. 1964. “Implications of Mechanical Translation Research.” Proceedings of the American Philosophical Society 108 (August): 275–81.



Artificial Intelligence - Intelligent Tutoring Systems.

  



Intelligent tutoring systems are artificial intelligence-based instructional systems that adapt instruction based on a variety of learner variables, such as dynamic measures of students' ongoing knowledge growth, personal interest, motivation to learn, affective states, and aspects of how they self-regulate their learning.

For a variety of problem areas, such as STEM, computer programming, language, and culture, intelligent tutoring systems have been created.

Complex problem-solving activities, collaborative learning activities, inquiry learning or other open-ended learning activities, learning through conversations, game-based learning, and working with simulations or virtual reality environments are among the many types of instructional activities they support.

Intelligent tutoring systems arose from a field of study known as AI in Education (AIED).

MATHia® (previously Cognitive Tutor), SQL-Tutor, ALEKS, and Reasoning Mind's Genie system are among the commercially successful and widely used intelligent tutoring systems.

Intelligent tutoring systems are frequently more successful than conventional kinds of training, according to six comprehensive meta-analyses.

This effectiveness might be due to a number of factors.

First, intelligent tutoring systems give adaptive support within problems, allowing classroom instructors to scale one-on-one tutoring beyond what they could do without it.

Second, they allow adaptive problem selection based on individual students' understanding.

Third, cognitive task analysis, cognitive theory, and learning sciences ideas are often used in intelligent tutoring systems.

Fourth, the employment of intelligent tutoring tools in so-called blended classrooms may result in favorable cultural adjustments by allowing teachers to spend more time working one-on-one with pupils.

Fifth, more sophisticated tutoring systems are repeatedly developed using new approaches from the area of educational data mining, based on data.

Finally, Open Learner Models (OLMs), which are visual representations of the system's internal student model, are often used in intelligent tutoring systems.

OLMs have the potential to assist learners in productively reflecting on their current level of learning.

Model-tracing tutors, constraint-based tutors, example-tracing tutors, and ASSISTments are some of the most common intelligent tutoring system paradigms.

These paradigms vary in how they are created, as well as in tutoring behaviors and underlying representations of domain knowledge, student knowledge, and pedagogical knowledge.

Intelligent tutoring systems use a number of AI approaches for domain reasoning (e.g., producing next steps in a problem given a student's partial answer), for assessing student solutions and partial solutions, and for student modeling (i.e., dynamically estimating and maintaining a range of learner variables).

To increase systems' student modeling skills, a range of data mining approaches (including Bayesian models, hidden Markov models, and logistic regression models) are increasingly being applied.

To a lesser extent, machine learning approaches such as reinforcement learning are used to derive instructional policies.
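
One common Bayesian approach to student modeling is Bayesian Knowledge Tracing; the sketch below (in Python, with invented slip, guess, and learning parameters) updates an estimate of skill mastery after each observed answer:

```python
def bkt_update(p_known, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One Bayesian Knowledge Tracing step: update the estimated probability
    that a student has mastered a skill after observing one answer."""
    if correct:
        evidence = p_known * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_known) * p_guess)
    else:
        evidence = p_known * p_slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
    # Allow for the chance the student learned the skill at this opportunity.
    return posterior + (1 - posterior) * p_learn

p_mastery = 0.3                              # prior probability of mastery
for answer in [True, True, False, True]:     # observed correctness of responses
    p_mastery = bkt_update(p_mastery, answer)
    print(round(p_mastery, 3))
```

A tutoring system can use such an estimate to decide when a student has practiced a skill enough and which problem to select next.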

Researchers are looking at concepts for the smart classroom of the future that go beyond the capabilities of present intelligent tutoring technologies.

In these visions, AI systems typically collaborate with teachers and students to provide excellent learning experiences for all pupils.

Recent research suggests that promising approaches adaptively share the regulation of learning processes across students, teachers, and AI systems, rather than designing intelligent tutoring systems to handle all aspects of adaptation. For example, teachers can be given real-time analytics from an intelligent tutoring system to draw their attention to learners who may need additional support.



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Natural Language Processing and Speech Understanding; Workplace Automation.



Further Reading:




Aleven, Vincent, Bruce M. McLaren, Jonathan Sewall, Martin van Velsen, Octav Popescu, Sandra Demi, Michael Ringenberg, and Kenneth R. Koedinger. 2016. “Example-Tracing Tutors: Intelligent Tutor Development for Non-Programmers.” International Journal of Artificial Intelligence in Education 26, no. 1 (March): 224–69.

Aleven, Vincent, Elizabeth A. McLaughlin, R. Amos Glenn, and Kenneth R. Koedinger. 2017. “Instruction Based on Adaptive Learning Technologies.” In Handbook of Research on Learning and Instruction, Second edition, edited by Richard E. Mayer and Patricia Alexander, 522–60. New York: Routledge.

du Boulay, Benedict. 2016. “Recent Meta-Reviews and Meta-Analyses of AIED Systems.” International Journal of Artificial Intelligence in Education 26, no. 1: 536–37.

du Boulay, Benedict. 2019. “Escape from the Skinner Box: The Case for Contemporary Intelligent Learning Environments.” British Journal of Educational Technology, 50, no. 6: 2902–19.

Heffernan, Neil T., and Cristina Lindquist Heffernan. 2014. “The ASSISTments Ecosystem: Building a Platform that Brings Scientists and Teachers Together for Minimally Invasive Research on Human Learning and Teaching.” International Journal of Artificial Intelligence in Education 24, no. 4: 470–97.

Koedinger, Kenneth R., and Albert T. Corbett. 2006. “Cognitive Tutors: Technology Bringing Learning Sciences to the Classroom.” In The Cambridge Handbook of the Learning Sciences, edited by Robert K. Sawyer, 61–78. New York: Cambridge University Press.

Mitrovic, Antonija. 2012. “Fifteen Years of Constraint-Based Tutors: What We Have Achieved and Where We Are Going.” User Modeling and User-Adapted Interaction 22, no. 1–2: 39–72.

Nye, Benjamin D., Arthur C. Graesser, and Xiangen Hu. 2014. “AutoTutor and Family: A Review of 17 Years of Natural Language Tutoring.” International Journal of Artificial Intelligence in Education 24, no. 4: 427–69.

Pane, John F., Beth Ann Griffin, Daniel F. McCaffrey, and Rita Karam. 2014. “Effectiveness of Cognitive Tutor Algebra I at Scale.” Educational Evaluation and Policy Analysis 36, no. 2: 127–44.

Schofield, Janet W., Rebecca Eurich-Fulcer, and Chen L. Britt. 1994. “Teachers, Computer Tutors, and Teaching: The Artificially Intelligent Tutor as an Agent for Classroom Change.” American Educational Research Journal 31, no. 3: 579–607.

VanLehn, Kurt. 2016. “Regulative Loops, Step Loops, and Task Loops.” International Journal of Artificial Intelligence in Education 26, no. 1: 107–12.


Artificial Intelligence - What Is The ELIZA Software?

 



ELIZA is a conversational computer software created by German-American computer scientist Joseph Weizenbaum at Massachusetts Institute of Technology between 1964 and 1966.


Weizenbaum worked on ELIZA as part of a groundbreaking artificial intelligence research team on the DARPA-funded Project MAC (Mathematics and Computation), which was directed by Marvin Minsky.

Weizenbaum called ELIZA after Eliza Doolittle, a fictitious character in the play Pygmalion who learns correct English; that play had recently been made into the successful film My Fair Lady in 1964.


ELIZA was created with the goal of allowing a person to communicate with a computer system in plain English.


Weizenbaum became an AI skeptic as a result of ELIZA's popularity among users.

When communicating with ELIZA, users may input any statement into the system's open-ended interface.

ELIZA will often answer by asking a question, much like a Rogerian psychologist attempting to delve deeper into the patient's core ideas.

The application recycles portions of the user's comments while the user continues their chat with ELIZA, providing the impression that ELIZA is genuinely listening.


In fact, Weizenbaum had developed ELIZA around a tree-like decision structure.


The user's statements are first filtered for important terms.

The terms are ordered in order of significance if more than one keyword is discovered.

For example, if a user writes in "I suppose everybody laughs at me," the term "everybody," not "I," is the most crucial for ELIZA to reply to.

In order to generate a response, the computer uses a collection of algorithms to create a suitable sentence structure around those key phrases.

Alternatively, if the user's input phrase does not include any words found in ELIZA's database, the software finds a content-free comment or repeats a previous answer.
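
A tiny Python sketch in the spirit of that description (the keywords, templates, and ranking are invented; Weizenbaum's actual script was far richer and included pronoun transformations):

```python
import random
import re

# Keywords in priority order, each with response templates; "{0}" is filled
# with the rest of the user's sentence after the keyword. All invented.
SCRIPT = [
    ("everybody", ["Who in particular are you thinking of?",
                   "Surely not everybody {0}."]),
    ("mother",    ["Tell me more about your family."]),
    ("i",         ["Why do you say you {0}?"]),
]
FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

def eliza_reply(statement):
    text = statement.lower().strip(".!?")
    for keyword, templates in SCRIPT:            # most significant keyword first
        match = re.search(r"\b" + keyword + r"\b(.*)", text)
        if match:
            rest = match.group(1).strip()
            return random.choice(templates).format(rest)
    return random.choice(FALLBACKS)              # content-free fallback

print(eliza_reply("I suppose everybody laughs at me"))
```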


ELIZA was created by Weizenbaum to investigate the meaning of machine intelligence.


Weizenbaum derived his inspiration from a comment made by MIT cognitive scientist Marvin Minsky, according to a 1962 article in Datamation.

"Intelligence was just a characteristic human observers were willing to assign to processes they didn't comprehend, and only for as long as they didn't understand them," Minsky had claimed (Weizenbaum 1962).

If such was the case, Weizenbaum concluded, artificial intelligence's main goal was to "fool certain onlookers for a while" (Weizenbaum 1962).


ELIZA was created to accomplish precisely that by giving users reasonable answers while concealing how little the software genuinely understands in order to keep the user's faith in its intelligence alive for a bit longer.


Weizenbaum was taken aback by how successful ELIZA became.

ELIZA's Rogerian script became popular as a program renamed DOCTOR at MIT and, by the late 1960s, had spread to other university campuses, where the program was reconstructed from Weizenbaum's 1966 description published in the journal Communications of the Association for Computing Machinery.

The application deceived (too) many users, even those who were well-versed in its methods.


Most notably, some users grew so engrossed with ELIZA that they demanded that others leave the room so they could have a private session with "the" DOCTOR.


But it was the psychiatric community's reaction that made Weizenbaum very dubious of current artificial intelligence ambitions in general, and promises of computer comprehension of natural language in particular.

Kenneth Colby, a Stanford University psychiatrist with whom Weizenbaum had previously cooperated, created PARRY about the same time that Weizenbaum released ELIZA.


Colby, unlike Weizenbaum, thought that programs like PARRY and ELIZA were beneficial to psychology and public health.


They aided the development of diagnostic tools, enabling one psychiatric computer to treat hundreds of patients, according to him.

Weizenbaum's worries and emotional plea to the community of computer scientists were eventually conveyed in his book Computer Power and Human Reason (1976).

In this book, hotly discussed at the time, Weizenbaum railed against those who ignored the basic distinctions between man and machine, arguing that "there are some things that computers ought not to execute, regardless of whether computers can be persuaded to do them" (Weizenbaum 1976, x).


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Chatbots and Loebner Prize; Expert Systems; Minsky, Marvin; Natural Language Processing and Speech Understanding; PARRY; Turing Test.


Further Reading:


McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence, 251–56, 308–28. San Francisco: W. H. Freeman and Company.

Weizenbaum, Joseph. 1962. “How to Make a Computer Appear Intelligent: Five in a Row Offers No Guarantees.” Datamation 8 (February): 24–26.

Weizenbaum, Joseph. 1966. “ELIZA: A Computer Program for the Study of Natural Language Communication between Man and Machine.” Communications of the ACM 9, no. 1 (January): 36–45.

Weizenbaum, Joseph. 1976. Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W.H. Freeman and Company



Artificial Intelligence - Climate Change Crisis And AI.

 




Artificial intelligence is a double-edged sword when it comes to climate change and the environment.


Artificial intelligence is being used by scientists to detect, adapt to, and react to ecological concerns.

Civilization is becoming exposed to new environmental hazards and vulnerabilities as a result of the same technologies.

Much has been written on the importance of information technology in green economy solutions.

Data from natural and urban ecosystems is collected and analyzed using intelligent sensing systems and environmental information systems.

Machine learning is being applied in the development of sustainable infrastructure, citizen detection of environmental perturbations and deterioration, contamination detection and remediation, and the redefining of consumption habits and resource recycling.



Planet hacking is a term used to describe such operations.


Precision farming is one example of planet hacking.

Artificial intelligence is used in precision farming to diagnose plant illnesses and pests, as well as detect soil nutrition issues.

Agricultural yields are increased while water, fertilizer, and chemical pesticides are used more efficiently thanks to sensor technology directed by AI.

Controlled farming approaches offer more environmentally friendly land management and (perhaps) biodiversity conservation.

Another example is IBM Research's collaboration with the Chinese government to minimize pollution in the nation via the Green Horizons program.

Green Horizons is a ten-year effort that began in July 2014 with the goal of improving air quality, promoting renewable energy integration, and promoting industrial energy efficiency.

To provide air quality reports and track pollution back to its source, IBM is using cognitive computing, decision support technologies, and sophisticated sensors.

Green Horizons has grown to include global initiatives such as collaborations with Delhi, India, to link traffic congestion patterns with air pollution; Johannesburg, South Africa, to fulfill air quality objectives; and British wind farms, to estimate turbine performance and electricity output.

According to the National Renewable Energy Laboratory at the University of Maryland, AI-enabled automobiles and trucks are predicted to save a significant amount of gasoline, maybe in the region of 15% less use.


Smart cars eliminate inefficient combustion caused by stop-and-go and speed-up and slow-down driving behavior, resulting in increased fuel efficiency (Brown et al. 2014).


Intelligent driver input is merely the first step toward a more environmentally friendly automobile.

According to the Society of Automotive Engineers and the National Renewable Energy Laboratory, linked automobiles equipped with vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication might save up to 30% on gasoline (Gonder et al. 2012).

Smart trucks and robotic taxis will be grouped together to conserve fuel and minimize carbon emissions.

Environmental robots (ecobots) are projected to make significant advancements in risk monitoring, management, and mitigation.

At nuclear power plants, service robots are in use.

Two iRobot PackBots were sent to Japan's Fukushima nuclear power plant to measure radioactivity.

Treebot is a dexterous tree-climbing robot that is meant to monitor arboreal environments that are too difficult for people to access.

The Guardian, a robot created by the same person who invented the Roomba, is being developed to hunt down and remove invasive lionfish that endanger coral reefs.

A similar service is being provided by the COTSbot, which employs visual recognition technology to wipe away crown-of-thorn starfish.

Artificial intelligence is assisting in the discovery of a wide range of human civilization's effects on the natural environment.

Cornell University's highly multidisciplinary Institute for Computer Sustainability brings together professional scientists and citizens to apply new computing techniques to large-scale environmental, social, and economic issues.

Birders are partnering with the Cornell Lab of Ornithology to submit millions of observations of bird species throughout North America, to provide just one example.

An app named eBird is used to record the observations.

To monitor migratory patterns and anticipate bird population levels across time and space, computational sustainability approaches are applied.

Wildbook, iNaturalist, Cicada Hunt, and iBats are some of the other crowdsourced nature observation apps.

Several applications are linked to open-access databases and big data initiatives, such as the Global Biodiversity Information Facility, which will include 1.4 billion searchable entries by 2020.


By modeling future climate change, artificial intelligence is also being utilized to assist human populations understand and begin dealing with environmental issues.

A multidisciplinary team from the Montreal Institute for Learning Algorithms, Microsoft Research, and ConscientAI Labs is using street view imagery of extreme weather events and generative adversarial networks—in which two neural networks are pitted against one another—to create realistic images depicting the effects of bushfires and sea level rise on actual neighborhoods.

Human behavior and lifestyle changes may be influenced by emotional reactions to photos.

Virtual reality simulations of contaminated ocean ecosystems are being developed by Stanford's Virtual Human Interaction Lab in order to increase human empathy and modify behavior in coastal communities.


Information technology and artificial intelligence, on the other hand, play a role in the climate catastrophe.


The pollution created by the production of electronic equipment and software is one of the most pressing concerns.

These are often seen as clean industries, however they often use harsh chemicals and hazardous materials.

With twenty-three active Superfund sites, California's Silicon Valley is one of the most contaminated areas in the country.

Many of these hazardous waste dumps were developed by computer component makers.

Trichloroethylene, a solvent used in semiconductor cleaning, is one of the most common soil pollutants.

Information technology uses a lot of energy and contributes a lot of greenhouse gas emissions.

Solar-powered data centers and battery storage are increasingly being used to power cloud computing data centers.


In recent years, a number of cloud computing facilities have been developed around the Arctic Circle to take use of the inherent cooling capabilities of the cold air and ocean.


The so-called Node Pole, situated in Sweden's northernmost county, is a favored location for such building.

In 2020, a data center project in Reykjavik, Iceland, will run entirely on renewable geothermal and hydroelectric energy.

Recycling is also a huge concern, since life cycle engineering is just now starting to address the challenges of producing environmentally friendly computers.

Toxic electronic trash is difficult to dispose of in the United States, thus a considerable portion of all e-waste is sent to Asia and Africa.

Every year, some 50 million tons of e-waste are produced throughout the globe (United Nations 2019).

Jack Ma of the international e-commerce company Alibaba claimed at the World Economic Forum annual gathering in Davos, Switzerland, that artificial intelligence and big data were making the world unstable and endangering human life.

Artificial intelligence research's carbon impact is just now being quantified with any accuracy.

While Microsoft and Pricewaterhouse Coopers reported that artificial intelligence could reduce carbon dioxide emissions by 2.4 gigatonnes by 2030 (the combined emissions of Japan, Canada, and Australia), researchers at the University of Massachusetts, Amherst discovered that training a model for natural language processing can emit the equivalent of 626,000 pounds of greenhouse gases.

This is over five times the carbon emissions produced by a typical automobile throughout the course of its lifespan, including original production.

Artificial intelligence has a massive influence on energy usage and carbon emissions right now, especially when models are tweaked via a technique called neural architecture search (Strubell et al. 2019).

It's unclear if next-generation technologies like quantum artificial intelligence, chipset designs, and unique machine intelligence processors (such as neuromorphic circuits) would lessen AI's environmental effect.


Artificial intelligence is also being utilized to extract additional oil and gas from beneath, but more effectively.


Oilfield services are becoming more automated, and businesses like Google and Microsoft are opening offices and divisions to cater to them.

Since the 1990s, Total S.A., a French multinational oil firm, has used artificial intelligence to enhance production and understand subsurface data.

Total partnered up with Google Cloud Advanced Solutions Lab professionals in 2018 to use modern machine learning techniques to technical data analysis difficulties in the exploration and production of fossil fuels.

Every geoscience engineer at the oil company will have access to an AI intelligent assistant, according to Google.

With artificial intelligence, Google is also assisting Anadarko Petroleum (bought by Occidental Petroleum in 2019) in analyzing seismic data to discover oil deposits, enhance production, and improve efficiency.


Working in the emerging subject of evolutionary robotics, computer scientists Joel Lehman and Risto Miikkulainen claim that in the case of a future extinction catastrophe, superintelligent robots and artificial life may swiftly breed and push out humans.


In other words, robots may enter the continuing war between plants and animals.

To investigate evolvability in artificial and biological populations, Lehman and Miikkulainen created computer models to replicate extinction events.

The study is mostly theoretical, but it may assist engineers comprehend how extinction events could impact their work; how the rules of variation apply to evolutionary algorithms, artificial neural networks, and virtual organisms; and how coevolution and evolvability function in ecosystems.

As a result of such conjecture, Emerj Artificial Intelligence Research's Daniel Faggella notably questioned if the "environment matter[s] after the Singularity" (Faggella 2019).

Ian McDonald's River of Gods (2004) is a notable science fiction novel about climate change and artificial intelligence.

The book's events take place in 2047 in the Indian subcontinent.

A.I. Artificial Intelligence (2001) by Steven Spielberg is set on a twenty-second-century Earth plagued by global warming and rising sea levels.

Humanoid robots are seen as important to the economy since they do not deplete limited resources.

Transcendence, a 2014 science fiction film starring Johnny Depp as an artificial intelligence researcher, portrays the cataclysmic danger of sentient computers as well as its unclear environmental effects.



~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 


Chatbots and Loebner Prize; Gender and AI; Mobile Recommendation Assistants; Natural Language Processing and Speech Understanding.




