
Artificial Intelligence - Who Was Raj Reddy Or Dabbala Rajagopal "Raj" Reddy?

 


 


Dabbala Rajagopal "Raj" Reddy (1937–) is an Indian-American computer scientist who has made important contributions to artificial intelligence and has won the Turing Award.

He holds the Moza Bint Nasser Chair as University Professor of Computer Science and Robotics at Carnegie Mellon University's School of Computer Science.

He has served on the faculties of Stanford and Carnegie Mellon, two of the world's leading universities for artificial intelligence research.

In the United States and in India, he has received honors for his contributions to artificial intelligence.

In 2001, the Indian government bestowed upon him the Padma Bhushan Award (the third highest civilian honor).

In 1984, he was also awarded the Legion of Honor, France's highest honor, established in 1802 by Napoleon Bonaparte.

In 1958, Reddy obtained his bachelor's degree from the University of Madras' Guindy Engineering College, and in 1960, he received his master's degree from the University of New South Wales in Australia.

He then came to the United States, receiving his doctorate in computer science from Stanford University in 1966.

He was the first member of his family to earn a university degree, as was typical of many rural Indian households at the time.

After working in industry as an Applied Science Representative at IBM Australia from 1960 to 1963, he entered academia in 1966 as an Assistant Professor of Computer Science at Stanford University, where he remained until 1969.

He joined Carnegie Mellon as an Associate Professor of Computer Science in 1969 and has remained on its faculty ever since.

He rose up the ranks at Carnegie Mellon, eventually becoming a full professor in 1973 and a university professor in 1984.

In 1991, he was appointed dean of the School of Computer Science, a post he held until 1999.

Many schools and institutions were founded as a result of Reddy's efforts.

In 1979, he launched the Robotics Institute and served as its first director, a position he held until 1999.

He was a driving force behind the establishment of the Language Technologies Institute, the Human Computer Interaction Institute, the Center for Automated Learning and Discovery (now the Machine Learning Department), and the Institute for Software Research at CMU during his stint as dean.

From 1999 to 2001, Reddy served as co-chair of the President's Information Technology Advisory Committee (PITAC).

PITAC's functions were absorbed by the President's Council of Advisors on Science and Technology (PCAST) in 2005.

Reddy was the president of the American Association for Artificial Intelligence (AAAI) from 1987 to 1989.

The AAAI has since been renamed the Association for the Advancement of Artificial Intelligence, in recognition of the worldwide character of a research community that began with pioneers like Reddy.

The former logo, acronym (AAAI), and purpose have been retained.

Reddy's research focused on artificial intelligence, the study of giving intelligence to computers.

He worked on voice control for robots, speaker-independent speech recognition, and large-vocabulary continuous speech dictation.

Reddy and his collaborators have made significant contributions to computer analysis of natural scenes, task-oriented computer architectures, universal access to information (a project supported by UNESCO), and autonomous robotic systems.

With his coworkers, Reddy developed the Hearsay II, Dragon, Harpy, and Sphinx I/II speech recognition systems.

The blackboard model, one of the fundamental concepts to emerge from this work, has been widely adopted across many fields of AI.
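The blackboard idea can be sketched as a shared data store that independent "knowledge sources" read from and contribute to until nothing new can be added. The following is a minimal, hypothetical illustration of that control structure, not a rendering of Hearsay II itself; the knowledge sources and hypothesis names are invented for the example.

```python
# Minimal sketch of a blackboard architecture: independent knowledge
# sources examine a shared "blackboard" of hypotheses and post new ones
# until the system reaches quiescence. (Illustrative only; Hearsay II
# was far more elaborate.)

class Blackboard:
    def __init__(self, initial_facts):
        self.hypotheses = set(initial_facts)

    def post(self, hypothesis):
        """Add a hypothesis; return True only if it is new."""
        if hypothesis in self.hypotheses:
            return False
        self.hypotheses.add(hypothesis)
        return True

def phonetic_source(bb):
    """Toy knowledge source: turns 'audio' evidence into a word hypothesis."""
    if "audio:hel-oh" in bb.hypotheses:
        return bb.post("word:hello")
    return False

def syntax_source(bb):
    """Toy knowledge source: combines word hypotheses into a phrase."""
    if "word:hello" in bb.hypotheses:
        return bb.post("phrase:greeting")
    return False

def run(bb, sources):
    # Simple control loop: keep invoking sources until no source
    # contributes anything new.
    while any(source(bb) for source in sources):
        pass
    return bb.hypotheses

board = Blackboard({"audio:hel-oh"})
result = run(board, [phonetic_source, syntax_source])
print(sorted(result))
# → ['audio:hel-oh', 'phrase:greeting', 'word:hello']
```

Note how neither knowledge source calls the other: each reacts only to what is already on the blackboard, which is what lets heterogeneous components (acoustic, lexical, syntactic) cooperate on one problem.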

Reddy was also interested in employing technology for the benefit of society, and he served as Chief Scientist at the Centre Mondial Informatique et Ressource Humaine in France.

He aided the Indian government in the establishment of the Rajiv Gandhi University of Knowledge Technologies, which focuses on low-income rural youth.

He serves on the governing council of the International Institute of Information Technology (IIIT), Hyderabad.

IIIT is a non-profit public-private partnership (N-PPP) focused on technology and applied research.

He was on the board of directors of the Emergency Management and Research Institute, a nonprofit public-private partnership that offers public emergency medical services.

EMRI has also assisted with emergency management in neighboring Sri Lanka.

In addition, he was a member of the Health Care Management Research Institute (HMRI).

HMRI provides non-emergency health-care consultation to rural populations, particularly in Andhra Pradesh, India.

In 1994, Reddy and Edward A. Feigenbaum shared the Turing Award, the most prestigious honor in computer science, and Reddy became the first person of Indian/Asian descent to receive the award.

He received the IBM Research Ralph Gomory Fellow Award in 1991, the Okawa Foundation's Okawa Prize in 2004, the Honda Foundation's Honda Prize in 2005, and the Vannevar Bush Award from the United States National Science Board in 2006.

Reddy has been elected a fellow of the Institute of Electrical and Electronics Engineers (IEEE), the Acoustical Society of America, and the American Association for Artificial Intelligence, among other prestigious organizations.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Autonomous and Semiautonomous Systems; Natural Language Processing and Speech Understanding.


References & Further Reading:


Reddy, Raj. 1988. “Foundations and Grand Challenges of Artificial Intelligence.” AI Magazine 9, no. 4 (Winter): 9–21.

Reddy, Raj. 1996. “To Dream the Possible Dream.” Communications of the ACM 39, no. 5 (May): 105–12.






Artificial Intelligence - What Is The PARRY Computer Program?




PARRY (short for "paranoia") was the first computer program to imitate a psychiatric patient, created by Stanford University psychiatrist Kenneth Colby.

The psychiatrist-user communicates with PARRY in simple English.

PARRY's responses are intended to mirror the cognitive (mal)functioning of a paranoid patient.

In the late 1960s and early 1970s, Colby experimented with mental patient chatbots, which led to the development of PARRY.

Colby sought to illustrate that cognition is fundamentally a symbol manipulation process and that computer simulations may help psychiatric research.

Many technical aspects of PARRY were shared with Joseph Weizenbaum's ELIZA.

Both of these applications were conversational in nature, allowing the user to submit remarks in plain English.

PARRY's underlying algorithms, like ELIZA's, scanned input phrases for key terms in order to generate plausible answers.





PARRY, on the other hand, was given a backstory so that it could exhibit appropriately paranoid behaviors.

The fictitious Parry was a gambler who had gotten into a fight with a bookie.

Parry was paranoid enough to assume that the bookie would send the Mafia after him.

As a result, PARRY freely shared its delusional beliefs about the Mafia, as if it wished to enlist the user's assistance.

PARRY was also designed to be "sensitive to his parents, religion, and sex" (Colby 1975, 36).

On most other topics of conversation, the program was neutral.

If PARRY could not find a match in its database, it might respond with "I don't know," "Why do you ask that?" or by returning to an earlier subject (Colby 1975, 77).
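The keyword-scan-with-fallback mechanism described above can be sketched as follows. This is a toy illustration of the general technique, not Colby's actual rules; the keywords, canned responses, and persona details here are invented for the example.

```python
import random

# Toy illustration of keyword-driven response selection with stock
# fallbacks, in the general style of keyword-matching chatbots like
# PARRY (not Colby's actual implementation).

KEYWORD_RESPONSES = {
    "mafia": "The Mafia is out to get me. What do you know about the Mafia?",
    "bookie": "The bookie didn't pay me off, and I got into a fight with him.",
    "horses": "I went to the track to bet on the horses.",
}

FALLBACK_RESPONSES = [
    "I don't know.",
    "Why do you ask that?",
]

def reply(user_input):
    """Scan the input for a key term; fall back to a stock reply."""
    words = user_input.lower().split()
    for keyword, response in KEYWORD_RESPONSES.items():
        if keyword in words:
            return response
    return random.choice(FALLBACK_RESPONSES)

print(reply("Tell me about the mafia"))       # triggers the Mafia delusion
print(reply("What is your favorite color?"))  # no match: stock fallback
```

A real system would also track conversational state (such as rising fear or anger levels, which PARRY modeled), but the core loop of matching input keywords against a response database is captured here.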

Whereas ELIZA's achievements made Weizenbaum a skeptic of AI, PARRY's findings bolstered Colby's support for computer simulations in psychiatry.

Colby picked paranoia as the mental state to simulate because paranoid behavior is comparatively rigid and hence the easiest to observe and model.

Colby felt that human cognition was a process of symbol manipulation, as did artificial intelligence pioneers Herbert Simon and Allen Newell.

As a result, PARRY's cognitive functioning was designed to resemble that of a paranoid human being.

Colby emphasized that a psychiatrist conversing with PARRY learned something about human paranoia.

He saw PARRY as a tool to help novice psychiatrists get started in their careers.

PARRY's reactions might also be used to determine the most successful therapeutic discourse lines.

Colby hoped that systems like PARRY would assist confirm or refute psychiatric hypotheses while also bolstering the field's scientific credibility.

Colby used PARRY to test his shame-humiliation theory of paranoia.

In the 1970s, Colby performed a series of studies to see how effectively PARRY could simulate true paranoia.

Two of these examinations resembled the Turing Test.

To begin, practicing psychiatrists were asked to interview patients through a teletype terminal, an electromechanical typewriter used to send and receive typed messages over telecommunications lines.

The doctors were unaware that PARRY was one of the patients who took part in the interviews.

The transcripts of these interviews were then distributed to a group of 100 psychiatrists.

These psychiatrists were tasked with determining which version was created by a computer.

Of the psychiatrists who responded, twenty correctly identified PARRY and twenty did not.

Transcripts were also sent to 100 computer scientists; of the 67 who responded, 32 identified PARRY correctly and 35 did not.

According to Colby, the findings "are akin to tossing a coin" statistically, and PARRY was not exposed (Colby 1975, 92).






See also: 


Chatbots and Loebner Prize; ELIZA; Expert Systems; Natural Language Processing and Speech Understanding; Turing Test.


References & Further Reading:


Cerf, Vincent. 1973. “Parry Encounters the Doctor: Conversation between a Simulated Paranoid and a Simulated Psychiatrist.” Datamation 19, no. 7 (July): 62–65.

Colby, Kenneth M. 1975. Artificial Paranoia: A Computer Simulation of Paranoid Processes. New York: Pergamon Press.

Colby, Kenneth M., James B. Watt, and John P. Gilbert. 1966. “A Computer Method of Psychotherapy: Preliminary Communication.” Journal of Nervous and Mental Disease 142, no. 2 (February): 148–52.

McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence, 251–56, 308–28. San Francisco: W. H. Freeman and Company.

Warren, Jim. 1976. Artificial Paranoia: An NIMH Program Report. Rockville, MD: U.S. Department of Health, Education, and Welfare, Public Health Service, Alcohol, Drug Abuse, and Mental Health Administration, National Institute of Mental Health, Division of Scientific and Public Information, Mental Health Studies and Reports Branch.






Artificial Intelligence - Natural Language Generation Or NLG.

 




Natural Language Generation, or NLG, is the computational process by which information that cannot be easily comprehended by humans is converted into a message optimized for human comprehension; it is also the name of the AI field dedicated to researching and developing such systems.



In computer science and AI, the phrase "natural language" refers to what most people simply refer to as language, the mechanism by which humans interact with one another and, increasingly, with computers and robots.



Natural language stands in contrast to "machine language," or programming language, which was created for the purpose of programming and controlling computers.

NLG technology takes structured data as input, such as scores and statistics from a sporting event, and the message created from this data may take different forms (text or speech), such as a news report on the game.
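The data-to-text idea can be illustrated with a deliberately simple sketch: structured game statistics filled into a sentence template. Real NLG systems involve content selection, sentence planning, and surface realization rather than a single template, and the field and record names below are invented for the example.

```python
# Minimal data-to-text sketch: a structured record of sports statistics
# is rendered as a human-readable sentence via a template. (Illustrative
# only; production NLG pipelines are far more sophisticated.)

def game_report(data):
    """Render one game record as a sentence (ties ignored for brevity)."""
    if data["home_score"] > data["away_score"]:
        winner, loser = data["home"], data["away"]
        hi, lo = data["home_score"], data["away_score"]
    else:
        winner, loser = data["away"], data["home"]
        hi, lo = data["away_score"], data["home_score"]
    return f"{winner} defeated {loser} {hi}-{lo} on {data['date']}."

game = {
    "home": "Pittsburgh", "away": "Cleveland",
    "home_score": 24, "away_score": 17,
    "date": "Sunday",
}
print(game_report(game))
# → Pittsburgh defeated Cleveland 24-17 on Sunday.
```

The same record could just as easily be rendered as speech or in another genre (a headline, a recap paragraph), which is why NLG output is described as taking different forms from a single data source.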

The origins of NLG may be traced back to the mid-twentieth century, when computers were first introduced.

Entering data into early computers and then deciphering their output was complex, time-consuming, and required highly specialized skills.

These difficulties with machine input and output were seen by researchers and developers as communication issues.



Communication is also essential for gaining knowledge and information, as well as exhibiting intelligence.

The answer suggested by researchers was to work toward adapting human-machine communication to the most "natural" form of communication, that is, people's own languages.

Natural Language Processing is concerned with how machines can understand human language, while Natural Language Generation is concerned with creating messages tailored to people.

Some researchers in this field, like those working in artificial intelligence, are interested in developing systems that generate messages from data, while others are interested in studying the human process of language and message formation.

NLG is a subfield of Computational Linguistics, as well as being a branch of artificial intelligence.

The rapid expansion of NLG technologies has been facilitated by the proliferation of technology for producing, collecting, and linking enormous swaths of data, as well as advancements in processing power.



NLG has a wide range of applications in a variety of sectors, including journalism and media.

Large international and national news organizations throughout the globe have begun to incorporate automated news-writing tools based on NLG technology into their news production.

Journalists utilize the program in this context to create informative reports from diverse datasets, such as lists of local crimes, corporate earnings reports, and synopses of athletic events.

Companies and organizations may also utilize NLG systems to create automated summaries of their own or external data.

A related area of study is computational narrative: the development of automated narrative-generation systems that focus on producing fictional stories and characters for use in media and entertainment, such as video games, as well as in education and learning.



NLG is likely to improve further in the future, allowing future technologies to create more sophisticated and nuanced messages across a wider range of textual genres and conventions.

NLG's development and use are still in their early stages, thus it's unclear what the entire influence of NLG-based technologies will be on people, organizations, industries, and society.

Current concerns include whether NLG technologies will have a beneficial or detrimental impact on the workforce in the sectors where they are being implemented, as well as the legal and ethical ramifications of having computers, rather than people, generate both factual and fictional content.

There are also bigger philosophical questions around the connection between communication, language usage, and how humans have defined what it means to be human socially and culturally.





See also: 



Natural Language Processing and Speech Understanding; Turing Test; Workplace Automation.


References & Further Reading:


Guzman, Andrea L. 2018. “What Is Human-Machine Communication, Anyway?” In Human-Machine Communication: Rethinking Communication, Technology, and Ourselves, edited by Andrea L. Guzman, 1–28. New York: Peter Lang.

Lewis, Seth C., Andrea L. Guzman, and Thomas R. Schmidt. 2019. “Automation, Journalism, and Human-Machine Communication: Rethinking Roles and Relationships of Humans and Machines in News.” Digital Journalism 7, no. 4: 409–27.

Licklider, J. C. R. 1968. “The Computer as a Communication Device.” In In Memoriam: J. C. R. Licklider, 1915–1990, edited by Robert W. Taylor, 21–41. Palo Alto, CA: Systems Research Center.

Marconi, Francesco, Alex Siegman, and Machine Journalist. 2017. The Future of Augmented Journalism: A Guide for Newsrooms in the Age of Smart Machines. New York: Associated Press. https://insights.ap.org/uploads/images/the-future-of-augmented-journalism_ap-report.pdf.

Paris, Cecile L., William R. Swartout, and William C. Mann, eds. 1991. Natural Language Generation in Artificial Intelligence and Computational Linguistics. Norwell, MA: Kluwer Academic Publishers.

Riedl, Mark. 2017. “Computational Narrative Intelligence: Past, Present, and Future.” Medium, October 25, 2017. https://medium.com/@mark_riedl/computational-narrative-intelligence-past-present-and-future-99e58cf25ffa.




