
AI Glossary - Agenda Based Systems

 


In an agenda-based system, an agenda, or job list, controls the inference process.

The agenda decomposes the system's work into discrete, modular stages.

Each entry on the job list represents a particular task to be completed during problem solving.
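
To make the idea concrete, here is a minimal sketch in Python of an agenda-driven control loop. The task names and priorities are invented for illustration; the point is that a priority queue of tasks, rather than a fixed program flow, decides what the system does next.

```python
import heapq

# Minimal sketch of agenda-driven control: tasks are queued with a
# priority, the control loop always runs the most promising task next,
# and a running task may schedule follow-up tasks of its own.

class Agenda:
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so equal priorities stay ordered

    def add(self, priority, task, *args):
        # heapq is a min-heap, so lower numbers run first.
        heapq.heappush(self._heap, (priority, self._counter, task, args))
        self._counter += 1

    def run(self):
        while self._heap:
            _, _, task, args = heapq.heappop(self._heap)
            task(self, *args)

def expand_concept(agenda, concept):
    print("expanding", concept)
    agenda.add(5, evaluate_concept, concept)  # schedule a follow-up task

def evaluate_concept(agenda, concept):
    print("evaluating", concept)

agenda = Agenda()
agenda.add(1, expand_concept, "prime-numbers")  # a high-priority task
agenda.run()
```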


Related Terms:


AM, DENDRAL.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


Be sure to refer to the complete & active AI Terms Glossary here.

You may also want to read more about Artificial Intelligence here.


Artificial Intelligence - Who Was Herbert A. Simon?

 


Herbert A. Simon (1916–2001) was a multidisciplinary scholar who contributed significantly to artificial intelligence.


He is widely regarded as one of the twentieth century's most prominent social scientists.

His contributions at Carnegie Mellon University spanned five decades.

Early artificial intelligence research was driven by the idea of the computer as a symbol manipulator rather than a number cruncher.

Emil Post, who first wrote about this sort of computational model in 1943, is credited with inventing production systems: sets of rules for symbol strings that specify the conditions which must hold before a rule can be applied, together with the actions to be taken or conclusions to be drawn when it is.

Simon and his Carnegie Mellon colleague Allen Newell popularized these theories regarding symbol manipulation and production systems by praising their potential benefits for general-purpose reading, storing, and replicating, as well as comparing and contrasting various symbols and patterns.


Simon, Newell, and Cliff Shaw's Logic Theorist software was the first to employ symbol manipulation to construct "intelligent" behavior.


The Logic Theorist could independently prove theorems presented in Bertrand Russell and Alfred North Whitehead's Principia Mathematica (1910).

Perhaps most notably, the Logic Theorist uncovered a shorter, more elegant proof of Theorem 2.85 of the Principia Mathematica, which the Journal of Symbolic Logic nevertheless declined to publish because it was coauthored by a machine.

Although it was theoretically conceivable to prove the Principia Mathematica's theorems in an exhaustively detailed and methodical manner, it was impractical in reality due to the time required.

Newell and Simon were fascinated by the human rules of thumb for solving difficult problems for which an extensive search for answers was impossible because of the massive amount of processing required.

They used the term "heuristics" to describe procedures that often solve problems but do not guarantee success.


A heuristic is a "rule of thumb" used to solve a problem that is too difficult or time consuming to address using an exhaustive search, a formula, or a step-by-step method.


In computer science, heuristic approaches are often contrasted with algorithmic methods, the key differentiating element being whether a result is guaranteed.

According to this contrast, a heuristic program will provide excellent results in most cases, but not always, while an algorithmic program is a clear technique that guarantees a solution.

This is not, however, a technical difference.

In fact, a heuristic procedure that consistently yields the best result may no longer be deemed "heuristic"—alpha-beta pruning is an example of this.

Simon's heuristics are still utilized by programmers who are trying to solve issues that demand a lot of time and/or memory.

The game of chess is one such example, in which an exhaustive search of all potential board configurations for the proper solution is beyond the human mind's or any computer's capabilities.
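
To illustrate the idea, here is a hedged Python sketch of a depth-limited game search guided by a heuristic evaluation function. The `moves`, `apply_move`, and `evaluate` interface and the toy game are invented for illustration; a real chess program would plug in far richer versions of each.

```python
# Depth-limited minimax with a heuristic evaluation standing in for the
# exhaustive search that full game trees make impractical.

def best_move(state, moves, apply_move, evaluate, depth=3):
    def score(s, d, maximizing):
        options = moves(s)
        if d == 0 or not options:
            return evaluate(s)  # heuristic estimate, not a guaranteed value
        results = [score(apply_move(s, m), d - 1, not maximizing)
                   for m in options]
        return max(results) if maximizing else min(results)

    return max(moves(state),
               key=lambda m: score(apply_move(state, m), depth - 1, False))

# Toy demo: players alternately add 1 or 2 to a counter up to 10;
# the heuristic simply prefers even counters for the maximizer.
moves = lambda s: [1, 2] if s < 10 else []
apply_move = lambda s, m: s + m
evaluate = lambda s: 1 if s % 2 == 0 else -1
print(best_move(0, moves, apply_move, evaluate))
```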


Indeed, Herbert Simon and Allen Newell referred to computer chess as the Drosophila, or fruit fly, of artificial intelligence research.


Heuristics may also be used on problems that have no single precise answer, as in medical diagnosis, where heuristics are applied to a collection of symptoms to determine the most probable diagnosis.

Production rules are derived from a class of cognitive science models that apply heuristic principles to productions (situations).

In practice, these rules boil down to "IF-THEN" statements that specify preconditions or antecedents, as well as the conclusions or consequences that those preconditions justify.

"IF there are two X's in a row, THEN put an O to block," is a frequent example offered for the application of production rules to the tic-tac-toe game.

These IF-THEN statements are incorporated into expert systems' inference mechanisms so that a rule interpreter can apply production rules to specific situations lodged in the context data structure or short-term working memory buffer containing information supplied about that situation and draw conclusions or make recommendations.
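
A minimal forward-chaining rule interpreter can be sketched in a few lines of Python. The facts and the single rule below are invented to mirror the tic-tac-toe example above; conclusions are simply added back into working memory as new facts.

```python
# Working memory is a set of facts; each production pairs a set of
# preconditions with a conclusion, in the spirit of IF-THEN rules.

working_memory = {"two_Xs_in_a_row"}

rules = [
    # IF there are two X's in a row, THEN place an O to block.
    ({"two_Xs_in_a_row"}, "place_O_to_block"),
]

fired = True
while fired:  # keep cycling until no rule adds anything new
    fired = False
    for conditions, conclusion in rules:
        if conditions <= working_memory and conclusion not in working_memory:
            working_memory.add(conclusion)  # the conclusion becomes a fact
            fired = True

print(working_memory)  # includes 'place_O_to_block'
```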


Production rules were crucial in the development of artificial intelligence as a discipline.


Joshua Lederberg, Edward Feigenbaum, and their colleagues at Stanford University built on this fundamental insight in the 1960s to develop DENDRAL, an expert system for determining molecular structure.

DENDRAL's production rules were developed through discussions between the system's designers and mass spectrometry specialists.

Edward Shortliffe, Bruce Buchanan, and Edward Feigenbaum used production rules to create MYCIN in the 1970s.

MYCIN contained over 600 IF-THEN statements, each encoding domain-specific knowledge about the diagnosis and treatment of microbial infections.

PUFF, EXPERT, PROSPECTOR, R1, and CLAVIER were among the several production rule systems that followed.


Simon, Newell, and Shaw demonstrated how heuristics can overcome the drawbacks of classical algorithms, which guarantee solutions but may require extensive search or heavy computation to find them.


An algorithm is a procedure for solving a problem in a finite, well-defined sequence of steps.

Sequential operations, conditional operations, and iterative operations are the three kinds of fundamental instructions required to create computable algorithms.

Sequential operations perform tasks in a step-by-step manner.

The algorithm only moves on to the next job when each step is completed.

Conditional operations are made up of instructions that ask questions and then choose the next step depending on the response.

One kind of conditional operation is the "IF-THEN" expression.

Iterative operations run "loops" of instructions.

These statements tell the task flow to go back and repeat a previous series of statements in order to solve an issue.

Algorithms are often compared to cookbook recipes, in which a certain order and execution of actions in the manufacture of a product—in this example, food—are dictated by a specific sequence of set instructions.
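
All three kinds of instruction can be seen in one tiny, self-contained Python example:

```python
# Summing the even numbers in a list exercises all three instruction
# types: sequential steps, an iterative loop, and a conditional test.

def sum_of_evens(numbers):
    total = 0                # sequential: steps execute in order
    for n in numbers:        # iterative: a loop over the input
        if n % 2 == 0:       # conditional: branch on a question
            total += n       # sequential again: update and continue
    return total

print(sum_of_evens([1, 2, 3, 4]))  # 6
```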


Newell, Shaw, and Simon created list processing for the Logic Theorist software in 1956.


List processing is a programming technique for allocating dynamic storage.

It's mostly utilized in symbol manipulation computer applications like compiler development, visual or linguistic data processing, and artificial intelligence, among others.

Allen Newell, J. Clifford Shaw, and Herbert A. Simon are credited with creating the first list processing software, with large, sophisticated, and flexible memory structures that did not depend on the machine's fixed memory layout.

List processing techniques are used in a number of higher-order languages.

IPL and LISP, two artificial intelligence languages, are the most well-known.
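
The flavor of list processing can be sketched in Python by imitating LISP's linked lists, where each cell holds a symbol and a pointer to the rest of the list, so storage grows dynamically one cell at a time. This is an illustration of the idea, not the historical IPL or LISP machinery.

```python
# LISP-style cons cells modeled as Python tuples.

def cons(head_value, tail_cell):
    return (head_value, tail_cell)

def head(cell):
    return cell[0]

def tail(cell):
    return cell[1]

# The symbol list (A B C), built back to front.
symbols = cons("A", cons("B", cons("C", None)))

# Walk the list, as a symbol-manipulation program would.
cell = symbols
while cell is not None:
    print(head(cell))
    cell = tail(cell)
```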


Simon and Newell's General Problem Solver (GPS), introduced in the early 1960s, thoroughly describes the essential properties of symbol manipulation as a general process that underpins all forms of intelligent problem-solving behavior.


GPS formed the foundation for decades of early AI research.

The General Problem Solver is a program that arrives at solutions through means-ends analysis and planning.

GPS was created with the goal of separating the problem-solving process from knowledge specific to the situation at hand, allowing it to be applied to a wide range of problems.
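
A hedged sketch of means-ends analysis in the GPS style follows: compare the current state with the goal, pick an operator that reduces the difference, recursively achieve that operator's preconditions as subgoals, and repeat. The states and operators are invented for illustration.

```python
# Each operator: (name, preconditions, facts added, facts deleted).
operators = [
    ("drive_to_store", {"at_home"}, {"at_store"}, {"at_home"}),
    ("buy_groceries", {"at_store"}, {"groceries_bought"}, set()),
    ("drive_home", {"at_store"}, {"at_home"}, {"at_store"}),
]

def achieve(goal, state, plan, depth=10):
    if goal <= state:
        return state                       # goal already satisfied
    if depth == 0:
        raise RuntimeError("search too deep")
    for name, pre, add, delete in operators:
        if add & (goal - state):           # operator reduces the difference
            state = achieve(pre, state, plan, depth - 1)  # subgoal: preconditions
            state = (state - delete) | add                # apply the operator
            plan.append(name)
            return achieve(goal, state, plan, depth - 1)
    raise RuntimeError("no relevant operator")

plan = []
achieve({"at_home", "groceries_bought"}, {"at_home"}, plan)
print(plan)  # ['drive_to_store', 'buy_groceries', 'drive_home']
```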

Simon was also an economist, a political scientist, and a cognitive psychologist.


Simon is known for the notions of bounded rationality, satisficing, and power law distributions in complex systems, in addition to his important contributions to organizational theory, decision-making, and problem-solving.


Computer and data scientists are interested in all three themes.

Human reasoning is inherently constrained, according to bounded rationality.

Humans lack the time or knowledge required to make ideal judgments; problems are difficult, and the mind has cognitive limitations.

Satisficing describes a decision-making process that produces a solution which "satisfies" and "suffices," rather than the optimal one.

Customers use satisficing in market conditions when they choose things that are "good enough," meaning sufficient or acceptable.
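
The contrast between satisficing and optimizing is easy to express in code. In this illustrative Python sketch, the satisficer stops at the first option that meets an aspiration level, while the optimizer must score every option before choosing.

```python
products = [("A", 6.1), ("B", 8.2), ("C", 9.7), ("D", 8.9)]  # (name, rating)

def satisfice(options, aspiration=8.0):
    for name, rating in options:
        if rating >= aspiration:   # good enough: stop searching
            return name
    return None

def optimize(options):
    return max(options, key=lambda item: item[1])[0]  # exhaustive comparison

print(satisfice(products))  # 'B', sufficient and acceptable
print(optimize(products))   # 'C', best, but costlier to find
```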


Simon described how power law distributions arise from preferential attachment mechanisms in his study of complex organizations.


When a relative change in one variable induces a proportionate change in another, power laws, also known as scaling laws, come into play.

A square is a simple illustration; when the length of a side doubles, the square's area quadruples.

Power laws may be found in biological systems, fractal patterns, and wealth distributions, among other things.

Preferential attachment processes explain why the rich grow richer in income and wealth distributions: new wealth is distributed according to individuals' current wealth, so those with more wealth receive proportionately more income, and hence greater overall wealth, than those with less.

When graphed, such distributions often create so-called long tails.

These long-tailed distributions are being employed to explain crowdsourcing, microfinance, and online marketing, among other things.
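
A few lines of Python suffice to simulate a preferential attachment process and watch a long-tailed distribution emerge. The population size and number of rounds are arbitrary choices for illustration.

```python
import random

# Each unit of new wealth goes to an individual with probability
# proportional to the wealth they already hold: the rich get richer.

random.seed(42)
wealth = [1.0] * 100                     # everyone starts equal

for _ in range(10_000):                  # distribute new income
    winner = random.choices(range(len(wealth)), weights=wealth)[0]
    wealth[winner] += 1.0

wealth.sort(reverse=True)
print(wealth[:5])    # a few large fortunes...
print(wealth[-5:])   # ...and a long tail of small ones
```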



Simon was born in Milwaukee, Wisconsin, to a Jewish electrical engineer with multiple patents who had emigrated from Germany in the early twentieth century.


His mother was a musical prodigy. Simon grew interested in the social sciences after reading books on psychology and economics written by an uncle.

He has said that two works inspired his early thinking on the subjects: Norman Angell's The Great Illusion (1909) and Henry George's Progress and Poverty (1879).



Simon obtained his doctorate in organizational decision-making from the University of Chicago in 1943.

Rudolf Carnap, Harold Lasswell, Charles E. Merriam, Nicolas Rashevsky, and Henry Schultz were among his instructors.

He started his career as a political science professor at the Illinois Institute of Technology, where he taught and conducted research.

In 1949, he moved to Carnegie Mellon University (then the Carnegie Institute of Technology), where he remained until 2001.

He progressed through the ranks of the Department of Industrial Management to become its chair.

He wrote twenty-seven books and numerous published articles.

In 1959, he was elected a member of the American Academy of Arts and Sciences.

In 1975, Simon was awarded the coveted Turing Award, and in 1978, he was awarded the Nobel Prize in Economics.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Dartmouth AI Conference; Expert Systems; General Problem Solver; Newell, Allen.


References & Further Reading:


Crowther-Heyck, Hunter. 2005. Herbert A. Simon: The Bounds of Reason in Modern America. Baltimore: Johns Hopkins Press.

Newell, Allen, and Herbert A. Simon. 1956. The Logic Theory Machine: A Complex Information Processing System. Santa Monica, CA: The RAND Corporation.

Newell, Allen, and Herbert A. Simon. 1976. “Computer Science as Empirical Inquiry: Symbols and Search.” Communications of the ACM 19, no. 3: 113–26.

Simon, Herbert A. 1996. Models of My Life. Cambridge, MA: MIT Press.



Artificial Intelligence - What Is The MYCIN Expert System?




MYCIN is an interactive expert system for infectious illness diagnosis and treatment developed by computer scientists Edward Feigenbaum (1936–) and Bruce Buchanan at Stanford University in the 1970s.

MYCIN was Feigenbaum's second expert system (after DENDRAL), but it was the first to be commercially accessible as a standalone software package.

TeKnowledge, the software business cofounded by Feigenbaum and other partners, offered EMYCIN as the most successful expert shell for this purpose by the 1980s.

MYCIN was developed by Feigenbaum's Heuristic Programming Project (HPP) in collaboration with Stanford Medical School's Infectious Diseases Group (IDG).

The expert clinical physician was IDG's Stanley Cohen.

Feigenbaum and Buchanan had read stories of antibiotics being prescribed wrongly owing to misdiagnoses in the early 1970s.

MYCIN was created to assist a human expert in making the best judgment possible.

MYCIN started out as a consultation tool.

After the physician entered the results of a patient's blood tests, bacterial cultures, and other data, MYCIN supplied a diagnosis along with the appropriate antibiotics and dosage.



MYCIN also served as an explanation system.

The physician-user could ask MYCIN, in plain English, to elaborate on a particular inference.

Finally, MYCIN had a knowledge-acquisition software that was used to keep the system's knowledge base up to date.

Feigenbaum and his collaborators introduced two additional features to MYCIN after gaining experience with DENDRAL.

First, MYCIN's inference engine included a rule interpreter.

This enabled diagnostic conclusions to be reached through "goal-directed backward chaining" (Cendrowska and Bramer 1984, 229).

At each phase of the consultation, MYCIN set itself the goal of establishing a useful clinical parameter that matched the patient data submitted.

The inference engine looked for a set of rules that applied to the parameter in question.

MYCIN typically required more information when evaluating the premise of one of the rules in this parameter set.

The system's next subgoal was to get that data.

MYCIN might try further rules or ask the physician for more information.

This process was repeated until MYCIN had enough data on numerous factors to make a diagnosis.
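
A heavily simplified sketch of this goal-directed backward chaining might look like the following Python fragment. The rules and clinical parameters are invented for illustration and bear no resemblance to MYCIN's actual knowledge base; the point is the control flow, which pursues subgoals and falls back to asking the user.

```python
rules = [
    # (premises, conclusion)
    (["gram_negative", "rod_shaped"], "organism_is_e_coli"),
    (["organism_is_e_coli"], "prescribe_ampicillin"),
]

known = {"gram_negative": True}   # data already entered by the physician

def establish(goal):
    if goal in known:
        return known[goal]
    for premises, conclusion in rules:        # rules that conclude the goal
        if conclusion == goal and all(establish(p) for p in premises):
            known[goal] = True
            return True
    # No applicable rule: fall back to asking the physician-user.
    answer = input(f"Is '{goal}' true? (y/n) ").strip().lower() == "y"
    known[goal] = answer
    return answer

if establish("prescribe_ampicillin"):
    print("Recommendation: ampicillin")
```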

The certainty factor was MYCIN's second unique feature.

According to William van Melle, then a doctoral student working on MYCIN for his thesis project, these factors should not be seen "as conditional probabilities, [though] they are loosely grounded on probability theory" (van Melle 1978, 314).

MYCIN assigned each fired production rule a value between -1 and +1, depending on how strongly the system believed in the correctness of its conclusion.

MYCIN reported these certainty factors alongside its diagnoses, allowing the physician-user to make the final decision.
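
The bookkeeping behind certainty factors can be sketched briefly. The combining function below follows the standard EMYCIN-style formula for merging two certainty factors that support the same conclusion; the example values are invented.

```python
def combine(cf1, cf2):
    """Merge evidence from two rules supporting the same conclusion."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two rules independently support the same diagnosis:
print(combine(0.6, 0.4))    # 0.76, more confident than either alone
print(combine(0.6, -0.3))   # about 0.43, conflicting evidence pulls it down
```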

The software package, known as EMYCIN, was released in 1976 and comprised an inference engine, user interface, and short-term memory.

It contained no knowledge of its own.

(The "E" stood for "Empty" at first, and later for "Essential.") Customers of EMYCIN were required to link their own knowledge base to the system.

Faced with high demand for EMYCIN packages and high interest in MOLGEN (Feigenbaum's third expert system), HPP decided to form IntelliCorp and TeKnowledge, the first two expert system firms.

TeKnowledge was eventually founded by a group of roughly twenty individuals, including all of the previous HPP students who had developed expert systems.

EMYCIN was and continues to be their most popular product.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Expert Systems; Knowledge Engineering


References & Further Reading:


Cendrowska, J., and M. A. Bramer. 1984. “A Rational Reconstruction of the MYCIN Consultation System.” International Journal of Man-Machine Studies 20 (March): 229–317.

Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books.

Feigenbaum, Edward. 2000. Oral History, October 13, 2000. Minneapolis, MN: Charles Babbage Institute.

van Melle, William. 1978. “MYCIN: A Knowledge-based Consultation Program for Infectious Disease Diagnosis.” International Journal of Man-Machine Studies 10 (May): 313–22.







Artificial Intelligence - What Is The MOLGEN Expert System?

 



MOLGEN is an expert system, developed between 1975 and 1980, that helped molecular biologists and geneticists plan experiments.

It was the third expert system (after DENDRAL and MYCIN) to come out of Edward Feigenbaum's Heuristic Programming Project (HPP) at Stanford University.

MOLGEN, like MYCIN before it, attracted hundreds of users outside of Stanford.

MOLGEN was originally made accessible to artificial intelligence researchers, molecular biologists, and geneticists via time-sharing on the GENET network in the 1980s.

Feigenbaum founded IntelliCorp in the late 1980s to offer a stand-alone software version of MOLGEN.

Scientific advancements in chromosomes and genes sparked an information boom in the early 1970s.

In 1971, Stanford University scientist Paul Berg performed the first gene splicing studies.

Stanford geneticist Stanley Cohen and University of California at San Francisco biochemist Herbert Boyer succeeded in inserting recombinant DNA into an organism two years later; the host organism (a bacterium) then spontaneously replicated the foreign rDNA structure in its progeny.

Because of these developments, Stanford molecular biologist Joshua Lederberg told Feigenbaum that the time was right to construct an expert system in Lederberg's own field of molecular biology.

(Lederberg and Feigenbaum had previously collaborated on DENDRAL, the first expert system.) The two agreed that MOLGEN could accomplish for recombinant DNA research and genetic engineering what DENDRAL had done for mass spectrometry.

Both expert systems were created with developing scientific topics in mind.

This enabled MOLGEN (and DENDRAL) to absorb the most up-to-date scientific knowledge and contribute to the advancement of their respective fields.

Mark Stefik and Peter Friedland developed programs for MOLGEN as their thesis projects at HPP, with Feigenbaum as the principal investigator.

MOLGEN was designed to follow a "skeletal plan" (Friedland and Iwasaki 1985, 161).

MOLGEN prepared a new experiment in the manner of a human expert, beginning with a design approach that had previously proven effective for a comparable issue.

MOLGEN then made hierarchical, step-by-step changes to the plan.

The combination of skeletal plans and MOLGEN's large knowledge base in molecular biology allowed the program to choose the most promising new experiments.
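
In spirit, skeletal planning amounts to refining an abstract plan that worked on similar problems into concrete steps. The following Python sketch is only an illustration of that refinement idea; the plan steps and choices are invented, not MOLGEN's.

```python
# An abstract plan proven useful on comparable problems...
skeletal_plan = ["isolate_dna", "cut_dna", "insert_fragment", "screen"]

# ...is refined step by step using the domain knowledge base.
refinements = {
    "isolate_dna": ["lyse_cells", "purify_dna"],
    "cut_dna": ["digest_with_EcoRI"],
    "insert_fragment": ["ligate_into_plasmid"],
    "screen": ["select_antibiotic_resistant_colonies"],
}

concrete_plan = [step
                 for abstract in skeletal_plan
                 for step in refinements[abstract]]
print(concrete_plan)
```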

By 1980, MOLGEN contained 300 laboratory procedures and strategies, as well as current data on forty genes, phages, plasmids, and nucleic acid structures.

Friedland and Stefik equipped MOLGEN with a set of programs based on the molecular biology knowledge of Stanford University's Douglas Brutlag, Larry Kedes, John Sninsky, and Rosalind Grymes.

Among them were SEQ (for nucleic acid sequence analysis), GA1 (for generating enzyme maps of DNA structures), and SAFE (for selecting the enzymes most suitable for gene excision).

Beginning in February 1980, MOLGEN was made available to the molecular biology community outside of Stanford.

Under an account named GENET, the system was linked to SUMEX AIM (Stanford University Medical Experimental Computer for Artificial Intelligence in Medicine).

GENET quickly attracted hundreds of users across the United States.

Academic scholars, experts from commercial giants like Monsanto, and researchers from modest start-ups like Genentech were among the frequent visitors.

The National Institutes of Health (NIH), SUMEX AIM's primary supporter, eventually concluded that business customers could not have unfettered access to cutting-edge technology produced with public funds.

Instead, the National Institutes of Health encouraged Feigenbaum, Brutlag, Kedes, and Friedland to form IntelliGenetics, a company that caters to business biotech customers.

IntelliGenetics created BIONET with the support of a $5.6 million NIH grant over five years to sell or rent MOLGEN and other GENET applications.

By the end of the 1980s, 900 labs around the world had access to BIONET for a $400 annual fee.

Companies that did not wish to put their data on BIONET could purchase a software package from IntelliGenetics.

MOLGEN's software did not sell well as a stand-alone product, and in the mid-1980s IntelliGenetics withdrew the genetics material and retained only the underlying Knowledge Engineering Environment (KEE).

IntelliGenetics' AI division, which marketed the new KEE shell, changed its name to IntelliCorp.

Two more public offerings followed, but growth finally slowed.

According to Feigenbaum, the commercial success of MOLGEN's shell was hampered by its implementation in LISP; although LISP was the choice of pioneering computer scientists working on mainframe computers, it did not inspire the same level of interest in the corporate minicomputer sector.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.






See also: 


DENDRAL; Expert Systems; Knowledge Engineering.


References & Further Reading:


Feigenbaum, Edward. 2000. Oral History. Minneapolis, MN: Charles Babbage Institute.

Friedland, Peter E., and Yumi Iwasaki. 1985. “The Concept and Implementation of Skeletal Plans.” Journal of Automated Reasoning 1: 161–208.

Friedland, Peter E., and Laurence H. Kedes. 1985. “Discovering the Secrets of DNA.” Communications of the ACM 28 (November): 1164–85.

Lenoir, Timothy. 1998. “Shaping Biomedicine as an Information Science.” In Proceedings of the 1998 Conference on the History and Heritage of Science Information Systems, edited by Mary Ellen Bowden, Trudi Bellardo Hahn, and Robert V. Williams, 27–46. Pittsburgh, PA: Conference on the History and Heritage of Science Information Systems.

Watt, Peggy. 1984. “Biologists Map Genes On-Line.” InfoWorld 6, no. 19 (May 7): 43–45.







Artificial Intelligence - What Are Expert Systems?

 






Expert systems are used to solve problems that would normally be addressed by human experts.


In the early decades of artificial intelligence research, they emerged as one of the most promising application strategies.

The core concept is to convert an expert's knowledge into a computer-based knowledge system.




Dan Patterson, a statistician and computer scientist at the University of Texas at El Paso, identifies several distinguishing properties of expert systems:


• They make decisions based on knowledge rather than facts.

• The task of representing heuristic knowledge in expert systems is daunting.

• Knowledge and the program are generally separated so that the same program can operate on different knowledge bases.

• Expert systems should be able to explain their decisions, represent knowledge symbolically, and have and use meta knowledge, that is, knowledge about knowledge.





(Patterson 2008)

Expert systems generally reflect domain-specific knowledge.


The subject of medical research was a frequent test application for expert systems.

Expert systems were created as a tool to assist medical doctors in their work.

Symptoms were usually communicated by the patient in the form of replies to inquiries.

Based on its knowledge base, the system would next attempt to identify the ailment and, in certain cases, recommend relevant remedies.

MYCIN, a Stanford University-developed expert system for detecting bacterial infections and blood disorders, is one example.




Another well-known application, in the realm of engineering design, tries to capture the heuristic knowledge of the design process for motors and generators.


The expert system assists in the initial design phase, when choices such as the number of poles and whether to use AC or DC are made (Hoole et al. 2003).

The knowledge base and the inference engine are the two components that make up the core framework of expert systems.




The knowledge base holds the expert's knowledge, while the inference engine uses it to make decisions.

In this way, the knowledge is isolated from the software that manipulates it.

Knowledge must first be gathered, then comprehended, categorized, and stored in order to create expert systems.

It is then retrieved to solve problems according to predetermined criteria.

The four main processes in the design of an expert system, according to Thomson Reuters chief scientist Peter Jackson, are obtaining information, representing that knowledge, directing reasoning via an inference engine, and explaining the expert system's answer (Jackson 1999).

The biggest challenge in building an expert system is acquiring domain knowledge.

It can be difficult to obtain knowledge from human specialists.


Many factors contribute to the difficulty of acquiring knowledge, but the complexity of encoding heuristic and experiential knowledge is perhaps the most important.



The knowledge acquisition process is divided into five phases, according to Hayes-Roth et al. (1983).

These phases are: identification, recognizing the problem and the data that must be used to arrive at a solution; conceptualization, understanding the key concepts and the relationships among the data; formalization, understanding the relevant search space; implementation, turning the formalized knowledge into a software program; and testing the rules for completeness and accuracy.


  • Production (rule-based) or non-production systems may be used to represent domain knowledge.
  • In rule-based systems, knowledge is represented by rules in the form of IF-THEN-ELSE expressions.



The inference process is carried out by iteratively going over the rules, either through a forward or backward chaining technique.



  • Forward chaining asks what happens next when the conditions and rules are known to be true; backward chaining starts from a goal and works back to the rules known to be true, asking why this occurred.
  • Forward chaining (also known as data-driven inference) assesses the left side of the rule first: the conditions are verified, and the rules are executed left to right.
  • Backward chaining (also known as goal-driven inference) evaluates the rules from the right side: the outcomes are verified first.
  • CLIPS, a public-domain expert system tool created at NASA's Johnson Space Center, implements the forward chaining method. MYCIN is an expert system that chains backward. Both strategies are sketched below.
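
The two strategies can be contrasted on the same rule set in a short Python sketch; the facts and rules are invented for illustration.

```python
# Rules are (premises, conclusion) pairs.
rules = [({"rainy"}, "wet_ground"), ({"wet_ground"}, "slippery")]

def forward(facts):
    """Data-driven: fire rules until nothing new can be concluded."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for pre, concl in rules:
            if pre <= facts and concl not in facts:
                facts.add(concl)
                changed = True
    return facts

def backward(goal, facts):
    """Goal-driven: work from the goal back to known facts."""
    if goal in facts:
        return True
    return any(concl == goal and all(backward(p, facts) for p in pre)
               for pre, concl in rules)

print(forward({"rainy"}))               # {'rainy', 'wet_ground', 'slippery'}
print(backward("slippery", {"rainy"}))  # True
```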



Associative/semantic networks, frame representations, decision trees, and neural networks may be used in expert system designs based on nonproduction architectures.


Nodes form an associative/semantic network, which may be used to represent hierarchical knowledge.

  • An example of a system based on an associative network is CASNET.
  • The most well-known use of CASNET was the development of an expert system for glaucoma diagnosis and therapy.

In frame architectures, frames are structured sets of closely related knowledge.


  • PIP (Present Illness Program) is an example of a frame-based architecture.
  • MIT and Tufts-New England Clinical Center developed PIP to generate hypotheses regarding renal illness.

Top-down knowledge is represented via decision tree structures.


Blackboard system designs are complex systems in which the inference process's direction may be changed during runtime.


DARPA's HEARSAY, a domain-independent expert system, exemplifies the blackboard system architecture.


  • In neural network topologies, knowledge is distributed across the network in the form of nodes.
  • Case-based reasoning attempts to analyze and solve a problem using previously solved examples.
  • A loose connection may be formed between case-based reasoning and judicial law, in which the decision of a comparable but previous case is used to solve a current legal matter.
  • Case-based reasoning is often implemented as a frame, which necessitates a more involved matching and retrieval procedure.



There are three ways to construct the knowledge base.


  • Knowledge may be elicited through an interview with a computer using interactive tools. This technique is exemplified by the computer-graphics-based OPAL software, which enabled clinicians with no prior computer training to construct expert medical knowledge bases for the care of cancer patients.
  • Text scanning algorithms that read books into memory are a second alternative to human knowledge base creation.
  • Machine learning algorithms that build competence on their own, with or without supervision from a human expert, are a third alternative still under development.




DENDRAL, a project begun at Stanford University in 1965, is an early example of such a machine learning architecture.


DENDRAL was created in order to study the molecular structure of organic molecules.


  • While DENDRAL followed a set of rules to complete its work, META-DENDRAL created its own rules.
  • META-DENDRAL chose the important data points to observe with the aid of a human chemist.




Expert systems may be built in a variety of ways.


  • User-friendly graphical user interfaces are used in interactive development environments to assist programmers as they code.
  • Special languages may be used in the construction of expert systems.
  • Prolog (Logic Programming) and LISP (List Processing) are two of the most common options.
  • Because Prolog is built on predicate logic, it belongs to the logic programming paradigm.
  • One of the first programming languages for artificial intelligence applications was LISP.



Expert system shells are often used by programmers.



A shell provides a platform for knowledge to be programmed into the system.


  • The shell is a layer without a knowledge base, as the name indicates.
  • The Java Expert System Shell (JESS) is a robust expert shell built in Java.


Many efforts have been made to blend disparate paradigms to create hybrid systems.


  • One approach seeks to combine logic-based and object-oriented systems.
  • Object orientation, despite its lack of a rigorous mathematical basis, is very useful in modeling real-world circumstances.

  • Knowledge is represented as objects that encompass both the data and the ways for working with it.
  • Object-oriented systems are more accurate models of real-world things than procedural programming.
  • The Object Inference Knowledge Specification Language (OI-KSL) is one such approach (Mascrenghe et al. 2002).



Although other languages, such as Visual Prolog, have merged object-oriented programming, OI-KSL takes a different approach.


Backtracking in Visual Prolog occurs inside the objects; that is, the methods backtrack.

In OI-KSL, backtracking is taken to a whole new level: the objects themselves are backtracked.

To cope with uncertainties in the given data, probability theory, heuristics, and fuzzy logic are sometimes utilized.

One example was a fuzzy electric lighting system, a Prolog implementation of fuzzy logic in which the quantity of natural light determined the voltage supplied to the electric bulb (Mascrenghe 2002).

This allowed the system to reason in the face of uncertainty and with little data.
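
The flavor of that fuzzy reasoning is easy to sketch in Python (rather than Prolog). The membership thresholds and the voltage rule below are invented for illustration.

```python
def dark_membership(lux):
    """Degree (0..1) to which the room counts as 'dark'."""
    if lux <= 100:
        return 1.0
    if lux >= 500:
        return 0.0
    return (500 - lux) / 400          # linear ramp between the thresholds

def bulb_voltage(lux, max_volts=230.0):
    # Rule: IF the room is dark THEN the voltage is high,
    # weighted by the degree of membership in 'dark'.
    return dark_membership(lux) * max_volts

for lux in (50, 300, 600):
    print(lux, round(bulb_voltage(lux), 1))
```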


Interest in expert systems began to wane in the late 1990s, owing in part to unrealistic expectations for the technology and the high cost of maintenance.

Expert systems were unable to deliver on their promises.



Even today, technology generated in expert systems research is used in various fields like data science, chatbots, and machine intelligence.


  • Expert systems are designed to capture the collective knowledge that mankind has accumulated through millennia of learning, experience, and practice.



~ Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.


See also: 


Clinical Decision Support Systems; Computer-Assisted Diagnosis; DENDRAL; Expert Systems.



References & Further Reading:


Hayes-Roth, Frederick, Donald A. Waterman, and Douglas B. Lenat, eds. 1983. Building Expert Systems. Teknowledge Series in Knowledge Engineering, vol. 1. Reading, MA: Addison Wesley.

Hoole, S. R. H., A. Mascrenghe, K. Navukkarasu, and K. Sivasubramaniam. 2003. “An Expert Design Environment for Electrical Devices and Its Engineering Assistant.” IEEE Transactions on Magnetics 39, no. 3 (May): 1693–96.

Jackson, Peter. 1999. Introduction to Expert Systems. Third edition. Reading, MA: Addison-Wesley.

Mascrenghe, A. 2002. “The Fuzzy Electric Bulb: An Introduction to Fuzzy Logic with Sample Implementation.” PC AI 16, no. 4 (July–August): 33–37.

Mascrenghe, A., S. R. H. Hoole, and K. Navukkarasu. 2002. “Prototype for a New Electromagnetic Knowledge Specification Language.” In CEFC Digest. Perugia, Italy: IEEE.

Patterson, Dan W. 2008. Introduction to Artificial Intelligence and Expert Systems. New Delhi, India: PHI Learning.

Rich, Elaine, Kevin Knight, and Shivashankar B. Nair. 2009. Artificial Intelligence. New Delhi, India: Tata McGraw-Hill.


