
AI - Symbol Manipulation.

 



Symbol manipulation refers to the general information-processing capabilities of a digital stored-program computer.

From the 1960s through the 1980s, seeing the computer as fundamentally a symbol manipulator became the norm, leading to the scientific study of symbolic artificial intelligence, now known as Good Old-Fashioned AI (GOFAI).

By the 1960s, the spread of stored-program computers had sparked renewed interest in the programming flexibility of these machines.

Symbol manipulation became a comprehensive theory of intelligent behavior as well as a research guideline for AI.

The Logic Theorist, created by Herbert Simon, Allen Newell, and Cliff Shaw in 1956, was one of the first computer programs to mimic intelligent symbol manipulation.

The Logic Theorist was able to prove theorems from Bertrand Russell and Alfred North Whitehead's Principia Mathematica (1910–1913).

It was presented in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence (the Dartmouth Conference).


John McCarthy, a Dartmouth mathematics professor who coined the term "artificial intelligence," convened this symposium.


The Dartmouth Conference might be dubbed the genesis of AI since it was there that the Logic Theorist first appeared, and many of the participants went on to become pioneering AI researchers.

The features of symbol manipulation, understood as a generic process that underpins all types of intelligent problem-solving behavior, were thoroughly explicated only in the early 1960s, after Simon and Newell had built their General Problem Solver (GPS), and they provided a foundation for most of the early work in AI.

In 1961, Simon and Newell took their knowledge of AI and their work on GPS to a wider audience.


"A computer is not a number-manipulating device; it is a symbol-manipulating device," they wrote in Science, "and the symbols it manipulates may represent numbers, letters, phrases, or even nonnumerical, nonverbal patterns" (Newell and Simon 1961, 2012).





Reading "symbols or patterns presented by appropriate input devices, storing symbols in memory, copying symbols from one memory location to another, erasing symbols, comparing symbols for identity, detecting specific differences between their patterns, and behaving in a manner conditional on the results of its processes," Simon and Newell continued (Newell and Simon 1961, 2012).


The growth of symbol manipulation in the 1960s was also influenced by breakthroughs in cognitive psychology and symbolic logic prior to WWII.


Starting in the 1930s, experimental psychologists such as Edwin Boring at Harvard University began to move their discipline away from philosophical and behaviorist methods.





Boring challenged his colleagues to break the mind open and create testable explanations for diverse cognitive mental operations (an approach that was adopted by Kenneth Colby in his work on PARRY in the 1960s).

Simon and Newell also emphasized their debt to pre-World War II developments in formal logic and abstract mathematics in their historical addendum to Human Problem Solving—not because all thought is logical or follows the rules of deductive logic, but because formal logic treated symbols as tangible objects.

"The formalization of logic proved that symbols can be copied, compared, rearranged, and concatenated with just as much definiteness of procedure as [wooden] boards can be sawed, planed, measured, and glued [in a carpenter shop]," Simon and Newell noted (Newell and Simon 1973, 877).



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Expert Systems; Newell, Allen; PARRY; Simon, Herbert A.


References & Further Reading:


Boring, Edwin G. 1946. “Mind and Mechanism.” American Journal of Psychology 59, no. 2 (April): 173–92.

Feigenbaum, Edward A., and Julian Feldman. 1963. Computers and Thought. New York: McGraw-Hill.

McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. San Francisco: W. H. Freeman and Company.

Newell, Allen, and Herbert A. Simon. 1961. “Computer Simulation of Human Thinking.” Science 134, no. 3495 (December 22): 2011–17.

Newell, Allen, and Herbert A. Simon. 1972. Human Problem Solving. Englewood Cliffs, NJ: Prentice Hall.

Schank, Roger, and Kenneth Colby, eds. 1973. Computer Models of Thought and Language. San Francisco: W. H. Freeman and Company.


Artificial Intelligence - Who Was Herbert A. Simon?

 


Herbert A. Simon (1916–2001) was a multidisciplinary scholar who contributed significantly to artificial intelligence.


He is widely regarded as one of the twentieth century's most prominent social scientists.

His career at Carnegie Mellon University spanned five decades.

Early artificial intelligence research was driven by the idea of the computer as a symbol manipulator rather than a number cruncher.

Emil Post, who first wrote about this sort of computational model in 1943, is credited with inventing production systems: sets of rules over symbol strings that specify the conditions that must hold before a rule can be applied and the actions to be taken or conclusions to be drawn when it is.

Simon and his Carnegie Mellon colleague Allen Newell popularized these theories regarding symbol manipulation and production systems by praising their potential benefits for general-purpose reading, storing, and replicating, as well as comparing and contrasting various symbols and patterns.


Simon, Newell, and Cliff Shaw's Logic Theorist software was the first to employ symbol manipulation to construct "intelligent" behavior.


The Logic Theorist could independently prove theorems presented in Bertrand Russell and Alfred North Whitehead's Principia Mathematica (1910).

Perhaps most notably, the Logic Theorist program uncovered a shorter, more elegant demonstration of Theorem 2.85 in the Principia Mathematica, a report of which was rejected by the Journal of Symbolic Logic because it was coauthored by a machine.

Although it was theoretically conceivable to prove the Principia Mathematica's theorems in an exhaustively detailed and methodical manner, it was impractical in reality due to the time required.

Newell and Simon were fascinated by the rules of thumb humans use to solve difficult problems for which an exhaustive search for answers is impossible because of the massive amount of computation required.

They used the term "heuristics" to describe procedures that may solve problems but do not guarantee success.


A heuristic is a "rule of thumb" used to solve a problem that is too difficult or time consuming to address using an exhaustive search, a formula, or a step-by-step method.


Heuristic approaches are often contrasted with algorithmic methods in computer science, with the guarantee offered by the method being the main distinguishing feature.

According to this contrast, a heuristic program will provide excellent results in most cases, but not always, while an algorithmic program is a clear technique that guarantees a solution.

This is not, however, a technical difference.

In fact, a heuristic procedure that consistently yields the best result may no longer be deemed "heuristic"—alpha-beta pruning is an example of this.

Simon's heuristics are still used by programmers trying to solve problems that demand great amounts of time and/or memory.

The game of chess is one such example, in which an exhaustive search of all potential board configurations for the proper solution is beyond the human mind's or any computer's capabilities.
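As a rough illustration of this contrast (not Simon and Newell's own code), the sketch below pits an exhaustive search against a greedy heuristic on a toy game in which moves either add one or double a number and the goal is to reach a target value; the game, horizon, and rule of thumb are invented for the example.

```python
# Illustrative contrast between exhaustive search and a heuristic rule of thumb.
# The "game" is a toy: states are numbers, moves add 1 or double, and the goal
# is to reach a target value in as few moves as possible.

def moves(state):
    return [state + 1, state * 2]          # toy successor function

def exhaustive(state, target, depth=0):
    """Explore every line of play to a fixed horizon; cost grows exponentially."""
    if state == target:
        return depth
    if depth >= 12:                        # horizon keeps the example finite
        return None
    results = [exhaustive(s, target, depth + 1) for s in moves(state)]
    results = [r for r in results if r is not None]
    return min(results) if results else None

def heuristic_search(state, target, max_depth=6):
    """Greedy rule of thumb: always take the move that looks closest to the target.
    Fast and usually good, but not guaranteed to find the best (or any) solution."""
    depth = 0
    while state != target and depth < max_depth:
        state = min(moves(state), key=lambda s: abs(target - s))
        depth += 1
    return depth if state == target else None

print(exhaustive(1, 10))        # 4: guaranteed shortest answer, at exponential cost
print(heuristic_search(1, 10))  # 5: quick answer that happens to be suboptimal
```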


Indeed, for artificial intelligence research, Herbert Simon and Allen Newell referred to computer chess as the Drosophila or fruit fly.


Heuristics may also be used for problems that have no precise answer, as in medical diagnosis, where heuristics are applied to a collection of symptoms to determine the most probable diagnosis.

Production rules derive from a class of cognitive science models in which heuristic rules (productions) are applied to situations.

In practice, these rules reduce to "IF-THEN" statements that specify particular preconditions or antecedents, as well as the conclusions or consequences that those preconditions or antecedents justify.

"IF there are two X's in a row, THEN put an O to block," is a frequent example offered for the application of production rules to the tic-tac-toe game.

These IF-THEN statements are incorporated into the inference mechanisms of expert systems. A rule interpreter applies the production rules to a specific situation, represented in a context data structure or short-term working-memory buffer holding the information supplied about that situation, and draws conclusions or makes recommendations.
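The following minimal sketch (with invented rules and facts, not taken from any actual expert system) shows the idea: a rule interpreter repeatedly matches IF-THEN productions against facts held in a working-memory buffer and adds the conclusions they justify.

```python
# Minimal forward-chaining rule interpreter: IF-THEN productions are matched
# against facts in a working-memory buffer until no rule adds anything new.
# The rules and facts are illustrative, not drawn from a real expert system.

rules = [
    {"if": {"fever", "cough"}, "then": "possible respiratory infection"},
    {"if": {"possible respiratory infection", "chest pain"},
     "then": "recommend chest x-ray"},
]

working_memory = {"fever", "cough", "chest pain"}   # situation-specific data

changed = True
while changed:                      # keep cycling until nothing new is concluded
    changed = False
    for rule in rules:
        if rule["if"] <= working_memory and rule["then"] not in working_memory:
            working_memory.add(rule["then"])        # assert the rule's conclusion
            changed = True

print(working_memory)
```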


Production rules were crucial in the development of artificial intelligence as a discipline.


Joshua Lederberg, Edward Feigenbaum, and other Stanford University collaborators later built on this fundamental insight to develop DENDRAL, an expert system for identifying molecular structure, in the 1960s.

These production rules were developed in DENDRAL after discussions between the system's developers and mass spectrometry specialists.

Edward Shortliffe, Bruce Buchanan, and Edward Feigenbaum used production rules to create MYCIN in the 1970s.

MYCIN contained over 600 IF-THEN statements, each reflecting domain-specific knowledge about the diagnosis and treatment of infectious disease.

PUFF, EXPERT, PROSPECTOR, R1, and CLAVIER were among the several production rule systems that followed.


Simon, Newell, and Shaw demonstrated how heuristics could overcome the drawbacks of classical algorithms, which guarantee answers but may require exhaustive searches or heavy computation to find them.


An algorithm is a procedure for solving a problem in a finite, well-defined sequence of steps.

Sequential operations, conditional operations, and iterative operations are the three kinds of fundamental instructions required to create computable algorithms.

Sequential operations perform tasks in a step-by-step manner.

The algorithm moves on to the next task only when the current step is completed.

Conditional operations are made up of instructions that ask questions and then choose the next step dependent on the response.

One kind of conditional operation is the "IF-THEN" expression.

Iterative operations run "loops" of instructions.

These statements tell the task flow to go back and repeat a previous series of statements in order to solve a problem.

Algorithms are often compared to cookbook recipes, in which a certain order and execution of actions in the manufacture of a product—in this example, food—are dictated by a specific sequence of set instructions.
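The toy example below (the prices and discount threshold are invented) shows the three kinds of instruction together: steps run in sequence, a loop repeats a block, and an IF-THEN choice decides the next action.

```python
# The three building blocks of a computable algorithm, shown on a toy task:
# total up a list of purchases and apply a discount above a threshold.

prices = [12.0, 7.5, 30.0]

# Sequential operations: steps performed one after another.
total = 0.0
count = 0

# Iterative operations: a loop that repeats a block of statements.
for price in prices:
    total = total + price
    count = count + 1

# Conditional operation: an IF-THEN choice of the next step.
if total > 40.0:
    total = total * 0.9        # apply a 10% discount

print(count, round(total, 2))  # 3 44.55
```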


Newell, Shaw, and Simon created list processing for the Logic Theorist software in 1956.


List processing is a programming technique for allocating dynamic storage.

It is mostly used in symbol-manipulation applications such as compiler development, visual or linguistic data processing, and artificial intelligence.

Allen Newell, J. Clifford Shaw, and Herbert A. Simon are credited with creating the first list-processing software, with large, sophisticated, and flexible memory structures that did not depend on consecutive machine memory locations.

List processing techniques are used in a number of higher-order languages.

IPL and LISP, two artificial intelligence languages, are the most well-known.
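To suggest what list processing involves, here is a minimal Python sketch of linked cells, each holding a symbol and a pointer to the next cell, so that lists can grow and be rearranged without relying on consecutive storage; the cell class and helper functions are illustrative, not IPL or LISP code.

```python
# A minimal linked-list ("cons cell") sketch: each cell holds a symbol and a
# reference to the next cell, so structures can grow and be spliced freely
# rather than living in consecutive storage. Illustrative only, not IPL/LISP.

class Cell:
    def __init__(self, symbol, rest=None):
        self.symbol = symbol   # the stored symbol
        self.rest = rest       # reference to the next cell

def push(symbol, lst):
    """Add a new cell to the front of a list without moving existing cells."""
    return Cell(symbol, lst)

def to_python_list(lst):
    out = []
    while lst is not None:
        out.append(lst.symbol)
        lst = lst.rest
    return out

lst = push("A", push("B", push("C", None)))
lst.rest = push("X", lst.rest)          # splice a new cell in after the head
print(to_python_list(lst))              # ['A', 'X', 'B', 'C']
```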


Simon and Newell's General Problem Solver (GPS), published in the early 1960s, thoroughly explicated the essential properties of symbol manipulation as a general process underlying all types of intelligent problem-solving behavior.


GPS formed the foundation for decades of early AI research.

The General Problem Solver is a program that employs means-ends analysis and planning to arrive at a solution.

GPS was created with the goal of separating the problem-solving process from knowledge specific to the situation at hand, allowing it to be applied to a wide range of problems.
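A very small sketch of means-ends analysis in the GPS spirit follows; the states, operators, and difference measure are invented for illustration. At each step the program computes the difference between the current state and the goal and applies an operator relevant to that difference.

```python
# Means-ends analysis in miniature: repeatedly measure the difference between
# the current state and the goal, then apply an operator that reduces it.
# States, operators, and the difference measure are invented for illustration.

goal = {"at_airport": True, "has_ticket": True}
state = {"at_airport": False, "has_ticket": False}

# Each operator removes one kind of difference.
operators = {
    "buy_ticket": ("has_ticket", True),
    "take_taxi":  ("at_airport", True),
}

def differences(state, goal):
    return {k for k, v in goal.items() if state.get(k) != v}

plan = []
while differences(state, goal):
    diff = next(iter(differences(state, goal)))
    # pick an operator relevant to this difference (the core of means-ends analysis)
    name, (key, value) = next((n, e) for n, e in operators.items() if e[0] == diff)
    state[key] = value
    plan.append(name)

print(plan)   # e.g. ['take_taxi', 'buy_ticket']
```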

Simon was also an economist, a political scientist, and a cognitive psychologist.


Simon is known for the notions of bounded rationality, satisficing, and power law distributions in complex systems, in addition to his important contributions to organizational theory, decision-making, and problem-solving.


Computer and data scientists are interested in all three themes.

Human reasoning is inherently constrained, according to bounded rationality.

Humans lack the time or knowledge required to make ideal judgments; problems are difficult, and the mind has cognitive limitations.

Satisficing describes a decision-making process that produces a solution that "satisfies" and "suffices," rather than the optimal answer.

Customers use satisficing in market conditions when they choose things that are "good enough," meaning sufficient or acceptable.
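A tiny sketch of satisficing versus optimizing (the options and aspiration level are invented): the satisficer returns the first option that meets a threshold rather than scanning everything for the best.

```python
# Satisficing vs. optimizing on a toy choice problem. The option scores and
# the aspiration level (threshold) are invented for illustration.

options = [("A", 6), ("B", 8), ("C", 9), ("D", 7)]   # (name, quality score)

def satisfice(options, aspiration=7):
    """Return the first option that is 'good enough', examining no further."""
    for name, score in options:
        if score >= aspiration:
            return name
    return None

def optimize(options):
    """Scan every option and return the best one."""
    return max(options, key=lambda o: o[1])[0]

print(satisfice(options))   # 'B': good enough, found after two comparisons
print(optimize(options))    # 'C': the optimum, but requires examining all options
```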


Simon described how power law distributions arise from preferential attachment mechanisms in his study of complex organizations.


Power laws, also known as scaling laws, arise when a relative change in one variable produces a proportional change in another.

A square is a simple illustration; when the length of a side doubles, the square's area quadruples.

Power laws may be found in biological systems, fractal patterns, and wealth distributions, among other things.

Preferential attachment processes explain why the affluent grow wealthier in income and wealth distributions: wealth accrues to individuals in proportion to their current wealth, so those with more wealth receive proportionately more income, and hence greater overall wealth, than those with less.

When graphed, such distributions often create so-called long tails.

These long-tailed distributions are being employed to explain crowdsourcing, microfinance, and online marketing, among other things.
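A small simulation (population size and number of rounds are arbitrary illustration values) of a preferential attachment process: each new unit of wealth goes to an individual with probability proportional to what they already hold, and the resulting distribution develops the heavy, long-tailed shape described above.

```python
# A toy preferential-attachment simulation: each new unit of wealth is given
# to an individual with probability proportional to current wealth, so the
# rich tend to get richer and the distribution develops a long tail.

import random

random.seed(0)
wealth = [1.0] * 100                      # everyone starts equal

for _ in range(5000):                     # allocate 5000 units of new wealth
    winner = random.choices(range(len(wealth)), weights=wealth)[0]
    wealth[winner] += 1.0

wealth.sort(reverse=True)
print([round(w) for w in wealth[:5]])            # a few individuals hold a large share
print(round(sum(wealth[:10]) / sum(wealth), 2))  # share held by the top 10%
```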



Simon was born in Milwaukee, Wisconsin, to a Jewish electrical engineer with multiple patents who came from Germany in the early twentieth century.


His mother was a musical prodigy. Simon grew interested in the social sciences after reading books on psychology and economics written by an uncle.

He said that two works inspired his early thinking on these subjects: Norman Angell's The Great Illusion (1909) and Henry George's Progress and Poverty (1879).



Simon obtained his doctorate in organizational decision-making from the University of Chicago in 1943.

Rudolf Carnap, Harold Lasswell, Charles Merriam, Nicolas Rashevsky, and Henry Schultz were among his teachers.

He started his career as a political science professor at the Illinois Institute of Technology, where he taught and conducted research.

In 1949, he transferred to Carnegie Mellon University, where he stayed until 2001.

He progressed through the ranks of the Department of Industrial Management to become its chair.

He wrote twenty-seven books and numerous published articles.

In 1959, he was elected a member of the American Academy of Arts and Sciences.

In 1975, Simon was awarded the coveted Turing Award, and in 1978, he was awarded the Nobel Prize in Economics.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Dartmouth AI Conference; Expert Systems; General Problem Solver; Newell, Allen.


References & Further Reading:


Crowther-Heyck, Hunter. 2005. Herbert A. Simon: The Bounds of Reason in Modern America. Baltimore: Johns Hopkins Press.

Newell, Allen, and Herbert A. Simon. 1956. The Logic Theory Machine: A Complex Information Processing System. Santa Monica, CA: The RAND Corporation.

Newell, Allen, and Herbert A. Simon. 1976. “Computer Science as Empirical Inquiry: Symbols and Search.” Communications of the ACM 19, no. 3: 113–26.

Simon, Herbert A. 1996. Models of My Life. Cambridge, MA: MIT Press.



AI Terms Glossary - ACORN

 


ACORN was a hybrid rule-based and Bayesian system for advising emergency department physicians on the treatment of patients with chest pain.

It was created and put into use in the mid-1980s.



Artificial Intelligence - What Is The PARRY Computer Program?




PARRY (short for paranoia), created by Stanford University psychiatrist Kenneth Colby, was the first computer program to imitate a mental patient.

The psychiatrist-user communicates with PARRY in simple English.

PARRY's responses are intended to mirror the cognitive (mal)functioning of a paranoid patient.

In the late 1960s and early 1970s, Colby experimented with mental patient chatbots, which led to the development of PARRY.

Colby sought to illustrate that cognition is fundamentally a symbol manipulation process and that computer simulations may help psychiatric research.

Many technical aspects of PARRY were shared with Joseph Weizenbaum's ELIZA.

Both of these applications were conversational in nature, allowing the user to submit remarks in plain English.

PARRY's underlying algorithms, like ELIZA's, examined inputted phrases for essential terms to create plausible answers.





PARRY, on the other hand, was given a personal history in order to imitate appropriate paranoid behaviors.

The fictitious Parry was a gambler who had gotten into a dispute with a bookie.

Parry was paranoid enough to assume that the bookie would send the Mafia after him.

As a result, PARRY freely shared its delusional beliefs about the Mafia, as if hoping to enlist the user's assistance.

PARRY was also born with the ability to be "sensitive to his parents, religion, and sex" (Colby 1975, 36).

On most other topics of conversation, the program was neutral.

If PARRY couldn't find a match in its database, it might respond with "I don't know," "Why do you ask that?" or by returning to an earlier subject (Colby 1975, 77).
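A toy sketch of the keyword-matching approach described above follows; the keywords, canned replies, and fallback lines are invented for illustration and are not Colby's actual rules. The program scans the input for key terms and otherwise falls back on stock responses.

```python
# A toy keyword-matching responder in the spirit of the description above.
# Keywords, canned replies, and fallback lines are invented for illustration;
# they are not Colby's actual rules.

import random

keyword_replies = {
    "bookie": "The bookie cheated me, and now I have to watch my back.",
    "mafia":  "The Mafia has people everywhere. They know what I did.",
    "horses": "I used to bet on the horses, until the trouble started.",
}

fallbacks = ["I don't know.", "Why do you ask that?", "Let's get back to the bookie."]

def respond(user_input):
    text = user_input.lower()
    for keyword, reply in keyword_replies.items():   # scan input for key terms
        if keyword in text:
            return reply
    return random.choice(fallbacks)                  # no match: stock response

print(respond("Tell me about the Mafia."))
print(respond("How is the weather today?"))
```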

Whereas ELIZA's achievements made Weizenbaum a skeptic of AI, PARRY's findings bolstered Colby's support for computer simulations in psychiatry.

Colby picked paranoia as the mental state to mimic because it involves the least fluid behavior and hence is the simplest to model.

Colby felt that human cognition was a process of symbol manipulation, as did artificial intelligence pioneers Herbert Simon and Allen Newell.

PARRY's cognitive functioning resembled that of a paranoid human being as a result of this.

Colby emphasized that a psychiatrist conversing with PARRY learned something about human paranoia.

He saw PARRY as a tool to help novice psychiatrists get started in their careers.

PARRY's reactions might also be used to identify the most successful lines of therapeutic dialogue.

Colby hoped that systems like PARRY would help confirm or refute psychiatric hypotheses while also bolstering the field's scientific credibility.

Colby used PARRY to test his shame-humiliation theory of paranoia.

In the 1970s, Colby performed a series of studies to see how effectively PARRY could simulate true paranoia.

Two of these examinations resembled the Turing Test.

To begin, practicing psychiatrists were instructed to interview patients using a teletype terminal, an antiquated electromechanical typewriter that was used to send and receive typed messages over telecommunications.

The doctors were unaware that PARRY was one of the patients who took part in the interviews.

The transcripts of these interviews were then distributed to a group of 100 psychiatrists.

These psychiatrists were tasked with determining which transcripts were created by a computer.

Twenty psychiatrists correctly identified PARRY, while another twenty did not.

A total of 100 computer scientists received transcripts.

Of the sixty-seven computer scientists who responded, thirty-two were correct and thirty-five were not.

According to Colby, the findings "are akin to tossing a coin" statistically, and PARRY was not exposed (Colby 1975, 92).



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Chatbots and Loebner Prize; ELIZA; Expert Systems; Natural Language Processing and Speech Understanding; Turing Test.


References & Further Reading:


Cerf, Vincent. 1973. “Parry Encounters the Doctor: Conversation between a Simulated Paranoid and a Simulated Psychiatrist.” Datamation 19, no. 7 (July): 62–65.

Colby, Kenneth M. 1975. Artificial Paranoia: A Computer Simulation of Paranoid Processes. New York: Pergamon Press.

Colby, Kenneth M., James B. Watt, and John P. Gilbert. 1966. “A Computer Method of Psychotherapy: Preliminary Communication.” Journal of Nervous and Mental Disease 142, no. 2 (February): 148–52.

McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence, 251–56, 308–28. San Francisco: W. H. Freeman and Company.

Warren, Jim. 1976. Artificial Paranoia: An NIMH Program Report. Rockville, MD: U.S. Department of Health, Education, and Welfare, Public Health Service, Alcohol, Drug Abuse, and Mental Health Administration, National Institute of Mental Health, Division of Scientific and Public Information, Mental Health Studies and Reports Branch.






Artificial Intelligence - What Is The MYCIN Expert System?




MYCIN is an interactive expert system for infectious illness diagnosis and treatment developed by computer scientists Edward Feigenbaum (1936–) and Bruce Buchanan at Stanford University in the 1970s.

MYCIN was Feigenbaum's second expert system (after DENDRAL), but it was the first to be commercially accessible as a standalone software package.

By the 1980s, TeKnowledge, the software company cofounded by Feigenbaum and other partners, offered EMYCIN as the most successful expert system shell for this purpose.

MYCIN was developed by Feigenbaum's Heuristic Programming Project (HPP) in collaboration with Stanford Medical School's Infectious Diseases Group (IDG).

The expert clinical physician was IDG's Stanley Cohen.

In the early 1970s, Feigenbaum and Buchanan had read reports of antibiotics being prescribed incorrectly because of misdiagnoses.

MYCIN was created to assist a human expert in making the best judgment possible.

MYCIN started out as a consultation tool.

After the physician entered the results of a patient's blood tests, bacterial cultures, and other data, MYCIN supplied a diagnosis along with the appropriate antibiotics and dosage.



MYCIN also served as an explanation system.

In simple English, the physician-user could ask MYCIN to expand on a particular inference.

Finally, MYCIN had a knowledge-acquisition program that was used to keep the system's knowledge base up to date.

Feigenbaum and his collaborators introduced two additional features to MYCIN after gaining experience with DENDRAL.

First, MYCIN's inference engine included a rule interpreter.

This enabled "goal-directed backward chaining" to be used to achieve diagnostic findings (Cendrowska and Bramer 1984, 229).

At each phase of the procedure, MYCIN set itself the objective of determining a useful clinical parameter from the patient data submitted.

The inference engine looked for a set of rules that applied to the parameter in question.

MYCIN typically required more information when evaluating the premise of one of the rules in this parameter set.

The system's next subgoal was to get that data.

MYCIN might try out new rules or ask the physician for further information.

This process was repeated until MYCIN had enough data on numerous factors to make a diagnosis.
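A compact sketch of goal-directed backward chaining in this style follows; the rules, parameter names, and the "ask the physician" step are invented for illustration and are not MYCIN's actual rule base. To establish a goal parameter, the interpreter looks for rules that conclude it and recursively tries to establish their premises, asking the user when no rule applies.

```python
# Minimal goal-directed backward chaining in the style described above.
# Rules and facts are invented for illustration; this is not MYCIN's rule base.

rules = [
    {"if": ["gram_negative", "rod_shaped"], "then": "likely_enterobacteriaceae"},
    {"if": ["likely_enterobacteriaceae", "hospital_acquired"],
     "then": "consider_gentamicin"},
]

known = {"gram_negative": True, "rod_shaped": True}   # data already supplied

def established(goal):
    """Try to establish a goal parameter, chaining backward through the rules."""
    if goal in known:
        return known[goal]
    for rule in rules:
        if rule["then"] == goal and all(established(p) for p in rule["if"]):
            known[goal] = True
            return True
    # no applicable rule: ask the physician-user for further information
    answer = input(f"Is '{goal}' true? (y/n) ").strip().lower() == "y"
    known[goal] = answer
    return answer

print(established("consider_gentamicin"))
```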

The certainty factor was MYCIN's second unique feature.

These factors should not be seen "as conditional probabilities, [though] they are loosely grounded on probability theory," according to William van Melle, then a doctoral student working on MYCIN for his thesis project (van Melle 1978, 314).

MYCIN assigned the execution of its production rules a value between –1 and +1, depending on how strongly the system believed in their correctness.

MYCIN's diagnoses also included these certainty factors, allowing the physician-user to make the final decision.
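As a sketch of how certainty factors in the range [-1, 1] are commonly described as being combined (the combining formulas below follow the MYCIN-style rule usually given in AI textbooks; the evidence values are invented), note how supporting evidence reinforces a conclusion while conflicting evidence partially cancels it.

```python
# Textbook-style combination of certainty factors in the range [-1, 1].
# Formulas follow the MYCIN-style rule commonly presented in AI textbooks;
# the evidence values below are invented for illustration.

def combine(cf1, cf2):
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two pieces of moderately supportive evidence reinforce each other...
print(round(combine(0.6, 0.4), 2))    # 0.76
# ...while conflicting evidence partially cancels.
print(round(combine(0.6, -0.4), 2))   # 0.33
```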

The software package, known as EMYCIN, was released in 1976 and comprised an inference engine, user interface, and short-term memory.

It contained no domain knowledge of its own.

("E" stood for "Empty" at first, then "Essential.") Customers of EMYCIN were required to link their own knowledge base to the system.

Faced with high demand for EMYCIN packages and high interest in MOLGEN (Feigenbaum's third expert system), HPP decided to form IntelliCorp and TeKnowledge, the first two expert system firms.

TeKnowledge was eventually founded by a group of roughly twenty individuals, including all of the previous HPP students who had developed expert systems.

EMYCIN was and continues to be their most popular product.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Expert Systems; Knowledge Engineering


References & Further Reading:


Cendrowska, J., and M. A. Bramer. 1984. “A Rational Reconstruction of the MYCIN Consultation System.” International Journal of Man-Machine Studies 20 (March): 229–317.

Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. Princeton, NJ: Princeton University Press.

Feigenbaum, Edward. 2000. “Oral History.” Charles Babbage Institute, October 13, 2000.

van Melle, William. 1978. “MYCIN: A Knowledge-based Consultation Program for Infectious Disease Diagnosis.” International Journal of Man-Machine Studies 10 (May): 313–22.







Artificial Intelligence - What Is The MOLGEN Expert System?

 



MOLGEN is an expert system, developed between 1975 and 1980, that helped molecular biologists and geneticists plan experiments.

It was the third expert system (after DENDRAL and MYCIN) to come out of Edward Feigenbaum's Heuristic Programming Project (HPP) at Stanford University.

MOLGEN, like MYCIN before it, attracted hundreds of users outside of Stanford.

MOLGEN was originally made accessible to artificial intelligence researchers, molecular biologists, and geneticists via time-sharing on the GENET network in the 1980s.

Feigenbaum founded IntelliCorp in the late 1980s to offer a stand-alone software version of MOLGEN.

Scientific advancements in chromosomes and genes sparked an information boom in the early 1970s.

In 1971, Stanford University scientist Paul Berg performed the first gene splicing studies.

Two years later, Stanford geneticist Stanley Cohen and University of California at San Francisco biochemist Herbert Boyer succeeded in inserting recombinant DNA into an organism; the host organism (a bacterium) then spontaneously replicated the foreign rDNA structure in its progeny.

Because of these developments, Stanford molecular biologist Joshua Lederberg told Feigenbaum that now was the right time to construct an expert system in Lederberg's area of expertise, molecular biology.

(Lederberg and Feigenbaum had previously collaborated on DENDRAL, the first expert system.) The two agreed that MOLGEN could accomplish for recombinant DNA research and genetic engineering what DENDRAL had done for mass spectrometry.

Both expert systems were created with emerging scientific fields in mind.

This enabled MOLGEN (and DENDRAL) to absorb the most up-to-date scientific information and contribute to the advancement of their respective fields.

Mark Stefik and Peter Friedland developed MOLGEN's programs as their thesis projects at HPP, with Feigenbaum as principal investigator.

MOLGEN was designed to follow a "skeletal blueprint" (Friedland and Iwasaki 1985, 161).

MOLGEN planned a new experiment in the manner of a human expert, beginning with a design approach that had previously proven effective for a comparable problem.

MOLGEN then made hierarchical, step-by-step changes to the plan.

The combination of skeletal blueprints and MOLGEN's large knowledge base in molecular biology allowed the program to choose the most promising new experiments.
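A schematic sketch of skeletal-plan refinement follows; the plan steps and lab-specific substitutions are invented, not MOLGEN's actual knowledge. An abstract plan that worked for similar problems is kept, and each abstract step is refined into a concrete operation drawn from a knowledge base.

```python
# Schematic skeletal-plan refinement: start from an abstract plan that worked
# for similar problems and refine each abstract step into a concrete operation
# drawn from a knowledge base. Steps and substitutions are invented examples.

skeletal_plan = ["isolate_dna", "cut_dna", "insert_fragment", "verify_result"]

# Knowledge base mapping abstract steps to concrete lab procedures,
# keyed by a feature of the problem at hand (here, the target organism).
knowledge_base = {
    "isolate_dna":     {"bacterium": "alkaline lysis miniprep"},
    "cut_dna":         {"bacterium": "digest with EcoRI restriction enzyme"},
    "insert_fragment": {"bacterium": "ligate fragment into plasmid vector"},
    "verify_result":   {"bacterium": "run gel electrophoresis"},
}

def refine(plan, organism):
    """Replace each abstract step with a concrete procedure for this problem."""
    return [knowledge_base[step][organism] for step in plan]

print(refine(skeletal_plan, "bacterium"))
```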

By 1980, MOLGEN contained 300 lab procedures and strategies, as well as current data on forty genes, phages, plasmids, and nucleic acid structures.

Friedland and Stefik supplied MOLGEN with a set of algorithms based on the molecular biology knowledge of Stanford University's Douglas Brutlag, Larry Kedes, John Sninsky, and Rosalind Grymes.

Among them were SEQ (for nucleic acid sequence analysis), GA1 (for generating enzyme maps of DNA structures), and SAFE (for selecting the enzymes most suitable for gene excision).

Beginning in February 1980, MOLGEN was made available to the molecular biology community outside of Stanford.

Under an account named GENET, the system was linked to SUMEX AIM (Stanford University Medical Experimental Computer for Artificial Intelligence in Medicine).

GENET quickly attracted hundreds of users across the United States.

Academic scholars, experts from commercial giants like Monsanto, and researchers from modest start-ups like Genentech were among the frequent visitors.

The National Institutes of Health (NIH), which was SUMEX AIM's primary supporter, finally concluded that business customers could not have unfettered access to cutting-edge technology produced with public funds.

Instead, the National Institutes of Health encouraged Feigenbaum, Brutlag, Kedes, and Friedland to form IntelliGenetics, a company that caters to business biotech customers.

IntelliGenetics created BIONET with the support of a $5.6 million NIH grant over five years to sell or rent MOLGEN and other GENET applications.

For a $400 yearly charge, 900 labs throughout the globe had access to BIONET by the end of the 1980s.

Companies that did not wish to put their data on BIONET could purchase a software package from IntelliGenetics.

MOLGEN's software did not sell well as a stand-alone product, and in the mid-1980s IntelliGenetics withdrew its genetics material and retained only the underlying Knowledge Engineering Environment (KEE).

IntelliGenetics' AI division, which marketed the new KEE shell, changed its name to IntelliCorp.

Two more public offerings followed, but growth finally slowed.

According to Feigenbaum, the commercial success of MOLGEN's shell was hampered by its implementation in LISP; although LISP was favored by pioneering computer scientists working on mainframe computers, it did not inspire the same level of interest in the corporate minicomputer sector.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.






See also: 


DENDRAL; Expert Systems; Knowledge Engineering.


References & Further Reading:


Feigenbaum, Edward. 2000. Oral History. Minneapolis, MN: Charles Babbage Institute.

Friedland, Peter E., and Yumi Iwasaki. 1985. “The Concept and Implementation of  Skeletal Plans.” Journal of Automated Reasoning 1: 161–208.

Friedland, Peter E., and Laurence H. Kedes. 1985. “Discovering the Secrets of DNA.” Communications of the ACM 28 (November): 1164–85.

Lenoir, Timothy. 1998. “Shaping Biomedicine as an Information Science.” In Proceedings of the 1998 Conference on the History and Heritage of Science Information Systems, edited by Mary Ellen Bowden, Trudi Bellardo Hahn, and Robert V. Williams, 27–46. Pittsburgh, PA: Conference on the History and Heritage of Science Information Systems.

Watt, Peggy. 1984. “Biologists Map Genes On-Line.” InfoWorld 6, no. 19 (May 7): 43–45.







Artificial Intelligence - Who Was John McCarthy?

 


John McCarthy (1927–2011) was an American computer scientist and mathematician best known for helping to establish the field of artificial intelligence in the late 1950s and for championing the use of formal logic in AI research.

McCarthy was a creative thinker who earned multiple accolades for his contributions to programming languages and operating systems research.

Throughout McCarthy's life, however, artificial intelligence and "formalizing common sense" remained his primary research interest (McCarthy 1990).

As a graduate student, McCarthy first encountered the ideas that would lead him to AI at the 1948 Hixon Symposium on "Cerebral Mechanisms in Behavior."

The symposium took place at the California Institute of Technology, where McCarthy had just finished his undergraduate studies and was now enrolled in a graduate mathematics program.

In the United States, machine intelligence had become a subject of substantial academic interest under the wide term of cybernetics by 1948, and many renowned cyberneticists, notably Princeton mathematician John von Neumann, were in attendance at the symposium.

McCarthy moved to Princeton's mathematics department a year later, where he discussed some early ideas inspired by the symposium with von Neumann.

McCarthy never published the work, despite von Neumann's urging, since he believed cybernetics could not solve his problems concerning human knowing.

McCarthy finished a PhD on partial differential equations at Princeton.

He stayed at Princeton as an instructor after graduating in 1951, and in the summer of 1952, he had the chance to work at Bell Labs with cyberneticist and inventor of information theory Claude Shannon, whom he persuaded to collaborate on an edited collection of writings on machine intelligence.

Automata Studies received contributions from a variety of fields, ranging from pure mathematics to neuroscience.

McCarthy, on the other hand, felt that the published studies did not devote enough attention to the important subject of how to develop intelligent machines.

McCarthy joined the mathematics department at Stanford in 1953 but was let go two years later, perhaps, he speculated, because he spent too much time thinking about intelligent machines and not enough time on his mathematical research.

In 1955, he accepted a position at Dartmouth, just as IBM was preparing to establish the New England Computation Center at MIT.

The New England Computation Center gave Dartmouth access to an IBM computer that was installed at MIT and made accessible to a group of New England colleges.

McCarthy met IBM researcher Nathaniel Rochester through the IBM initiative, and Rochester recruited McCarthy to IBM in the summer of 1955 to work with his research group.

McCarthy persuaded Rochester of the need for more research on machine intelligence, and with Rochester, Shannon, and Marvin Minsky, a graduate student at Princeton, he submitted a proposal to the Rockefeller Foundation for a "Summer Research Project on Artificial Intelligence," which included the first known use of the phrase "artificial intelligence." Although the Dartmouth project is usually regarded as a watershed moment in the development of AI, the conference did not go as McCarthy had envisioned.

Because the proposal was for such a novel field of research and came from a relatively young professor, the Rockefeller Foundation supported it at only half the proposed budget, doing so largely because Shannon's reputation carried substantial weight with the Foundation.

Furthermore, since the event took place over many weeks in the summer of 1956, only a handful of the invitees were able to attend for the whole period.

As a consequence, the Dartmouth conference was a fluid affair with an ever-changing and unpredictably diverse guest list.

Despite its chaotic implementation, the meeting was crucial in establishing AI as a distinct area of research.

While still at Dartmouth in 1957, McCarthy won a Sloan grant to spend a year at MIT, closer to IBM's New England Computation Center.

In 1958, McCarthy was offered and accepted a post in the Electrical Engineering department at MIT.

Later, he was joined by Minsky, who worked in the mathematics department.

McCarthy and Minsky suggested the construction of an official AI laboratory to Jerome Wiesner, head of MIT's Research Laboratory of Electronics, in 1958.

McCarthy and Minsky agreed on the condition that Wiesner let six freshly accepted graduate students into the laboratory, and the "artificial intelligence project" started teaching its first generation of students.

McCarthy published his first article on artificial intelligence in the same year.

In his book "Programs with Common Sense," he described a computer system he named the Advice Taker that would be capable of accepting and understanding instructions in ordinary natural language from nonexpert users.

McCarthy would later describe the Advice Taker as the start of a research program aimed at "formalizing common sense." McCarthy believed that everyday commonsense notions, such as knowing that if you don't have a phone number you will need to look it up before calling, could be written as formal logical statements and fed into a computer, enabling the machine to reach the same conclusions as humans.

Such formalization of common knowledge, McCarthy felt, was the key to artificial intelligence.

McCarthy's paper, presented at the UK National Physical Laboratory's "Symposium on Mechanisation of Thought Processes," helped establish the symbolic program of AI research.

McCarthy's research was focused on AI by the late 1950s, although he was also involved in a range of other computing-related topics.

In 1957, he was appointed to a group of the Association for Computing Machinery charged with developing the ALGOL programming language, which would go on to become the de facto language of academic research for the next several decades.

He created the LISP programming language for AI research in 1958, and its successors are widely used in business and academia today.

In addition to his work on programming languages, McCarthy contributed to research on computer operating systems through the development of time-sharing systems.

Early computers were large and costly, and they could only be operated by one person at a time.

From his first interaction with computers at IBM in 1955, McCarthy recognized the need for many users across a major institution, such as a university or hospital, to be able to use the organization's computer systems concurrently from terminals in their own offices.

McCarthy pushed for study on similar systems at MIT, serving on a university committee that looked into the issue and ultimately assisting in the development of MIT's Compatible Time-Sharing System (CTSS).

Although McCarthy left MIT before the CTSS work was completed, his advocacy with J. C. R. Licklider, a future office head at the Advanced Research Projects Agency (the predecessor to DARPA), while McCarthy was a consultant at Bolt Beranek and Newman in Cambridge, was instrumental in helping MIT secure significant federal support for computing research.

In 1962, Stanford professor George Forsythe recruited McCarthy to join what would become the second department of computer science in the United States, after Purdue's.

McCarthy insisted on coming only as a full professor, a condition he believed would be too much for Forsythe to secure for so young a researcher.

Forsythe was able to persuade Stanford to grant McCarthy a full chair, and he moved to Stanford in 1965 to establish the Stanford AI laboratory.

Until his retirement in 2000, McCarthy oversaw research at Stanford on AI topics such as robotics, expert systems, and chess.

McCarthy grew up in a family in which both parents were ardent members of the Communist Party, and he had a lifelong interest in Russian affairs.

He maintained numerous professional relationships with Soviet cyberneticists and AI researchers, traveling and lecturing in the Soviet Union in the mid-1960s, and even arranged a chess match between a Stanford chess program and a Soviet counterpart in 1965, which the Soviet program won.

He developed many foundational concepts in symbolic AI theory while at Stanford, such as circumscription, which expresses the idea that a computer must be allowed to make reasonable assumptions about problems presented to it; otherwise, even simple scenarios would have to be specified in such exacting logical detail that the task would be all but impossible.

McCarthy's accomplishments were acknowledged with various prizes, including the 1971 Turing Award, the 1988 Kyoto Prize, election to the National Academy of Sciences in 1989, the 1990 National Medal of Science, and the 2003 Benjamin Franklin Medal.

McCarthy was a brilliant thinker who continually imagined new technologies, such as a space elevator for economically lifting material into orbit and a system of carts suspended from wires to improve urban transportation.

In a 2008 interview, McCarthy was asked what he felt the most significant topics in computing now were, and he answered without hesitation, "Formalizing common sense," the same endeavor that had inspired him from the start.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Cybernetics and AI; Expert Systems; Symbolic Logic.


References & Further Reading:


Hayes, Patrick J., and Leora Morgenstern. 2007. “On John McCarthy’s 80th Birthday, in Honor of His Contributions.” AI Magazine 28, no. 4 (Winter): 93–102.

McCarthy, John. 1990. Formalizing Common Sense: Papers, edited by Vladimir Lifschitz. Norwood, NJ: Ablex.

Morgenstern, Leora, and Sheila A. McIlraith. 2011. “John McCarthy’s Legacy.” Artificial Intelligence 175, no. 1 (January): 1–24.

Nilsson, Nils J. 2012. “John McCarthy: A Biographical Memoir.” Biographical Memoirs of the National Academy of Sciences. http://www.nasonline.org/publications/biographical-memoirs/memoir-pdfs/mccarthy-john.pdf.


