
Artificial Intelligence - Who Was Herbert A. Simon?

 


Herbert A. Simon (1916–2001) was a multidisciplinary scholar who contributed significantly to artificial intelligence.


He is widely regarded as one of the twentieth century's most prominent social scientists.

His career at Carnegie Mellon University spanned five decades.

Early artificial intelligence research was driven by the idea of the computer as a symbol manipulator rather than a number cruncher.

Emil Post, who first wrote about this sort of computational model in 1943, is credited with inventing production systems: sets of rules over symbol strings that specify the conditions that must hold before a rule can be applied and the actions to be taken or conclusions to be drawn when it is.

Simon and his Carnegie Mellon colleague Allen Newell popularized these ideas about symbol manipulation and production systems, emphasizing their potential for general-purpose reading, storing, and copying of symbols and patterns, as well as for comparing and contrasting them.


Simon, Newell, and Cliff Shaw's Logic Theorist software was the first to employ symbol manipulation to construct "intelligent" behavior.


The Logic Theorist could independently prove theorems presented in Bertrand Russell and Alfred North Whitehead's Principia Mathematica (1910).

Perhaps most notably, the Logic Theorist program discovered a shorter, more elegant proof of Theorem 2.85 in the Principia Mathematica, which the Journal of Symbolic Logic nevertheless declined to publish because it was coauthored by a machine.

Although it was theoretically conceivable to prove the Principia Mathematica's theorems in an exhaustively detailed and methodical manner, it was impractical in reality due to the time required.

Newell and Simon were fascinated by the rules of thumb humans use to solve difficult problems for which an exhaustive search for answers is impossible because of the massive amount of processing required.

They used the term "heuristics" to describe procedures that may solve problems but do not guarantee success.


A heuristic is a "rule of thumb" used to solve a problem that is too difficult or time-consuming to address using an exhaustive search, a formula, or a step-by-step method.


In computer science, heuristic approaches are often contrasted with algorithmic methods, with the guarantee attached to the result being the key point of difference.

According to this contrast, a heuristic program produces good results in most cases, but not always, whereas an algorithmic program is a well-defined procedure that guarantees a solution.

This is not, however, a technical difference.

In fact, a heuristic procedure that consistently yields the best result may no longer be deemed "heuristic"; alpha-beta pruning is an example of this.

Simon's heuristics are still used by programmers trying to solve problems that demand a great deal of time and/or memory.

The game of chess is one such example, in which an exhaustive search of all potential board configurations for the proper solution is beyond the human mind's or any computer's capabilities.


Indeed, for artificial intelligence research, Herbert Simon and Allen Newell referred to computer chess as the Drosophila or fruit fly.
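
To make the idea concrete, the short Python sketch below scores a chess position with a simple material-counting rule of thumb instead of searching every continuation; the piece values and example position are illustrative assumptions, not part of Simon and Newell's work.

```python
# A toy chess heuristic: score a position by counting material rather than
# searching every continuation. Piece values follow a common convention.

PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9}

def material_heuristic(white_pieces, black_pieces):
    """Positive favors White, negative favors Black; only a rule of thumb."""
    white = sum(PIECE_VALUES.get(p, 0) for p in white_pieces)
    black = sum(PIECE_VALUES.get(p, 0) for p in black_pieces)
    return white - black

# White has queen + rook + 3 pawns; Black has 2 rooks + 4 pawns.
print(material_heuristic(["q", "r", "p", "p", "p"], ["r", "r", "p", "p", "p", "p"]))
```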


Heuristics may also be used for problems that have no exact solution, such as medical diagnosis, where heuristics are applied to a collection of symptoms to determine the most probable diagnosis.

Production rules grew out of a class of cognitive science models in which heuristic knowledge is expressed as productions, rules that fire in particular situations.

In practice, these rules reduce to "IF-THEN" statements that express specific preconditions or antecedents along with the conclusions or consequences those preconditions justify.

"IF there are two X's in a row, THEN put an O to block," is a frequent example offered for the application of production rules to the tic-tac-toe game.

These IF-THEN statements are built into an expert system's inference mechanism so that a rule interpreter can apply the production rules to a specific situation. Facts about that situation are held in a context data structure, or short-term working-memory buffer, from which the interpreter draws conclusions or makes recommendations.
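
The following minimal Python sketch illustrates how such a rule interpreter might repeatedly match production rules against a working memory of facts; the rules and facts are invented for illustration and are not drawn from any actual expert system.

```python
# A minimal production-rule interpreter; the rules and facts are illustrative.

rules = [
    # IF every premise is present in working memory, THEN add the conclusion.
    ({"two X's in a row", "third square empty"}, "place an O to block"),
    ({"fever", "positive blood culture"}, "suspect bacterial infection"),
]

def run_interpreter(working_memory):
    """Fire any rule whose premises are all satisfied, until nothing changes."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= working_memory and conclusion not in working_memory:
                working_memory.add(conclusion)   # draw the rule's conclusion
                changed = True
    return working_memory

print(run_interpreter({"fever", "positive blood culture"}))
```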


Production rules were crucial in the development of artificial intelligence as a discipline.


Joshua Lederberg, Edward Feigenbaum, and other Stanford University colleagues later drew on this fundamental insight to develop DENDRAL, an expert system for determining molecular structure, in the 1960s.

DENDRAL's production rules were developed through discussions between the system's developers and mass spectrometry specialists.

Edward Shortliffe, Bruce Buchanan, and Edward Feigenbaum used production rules to create MYCIN in the 1970s.

MYCIN contained over 600 IF-THEN statements, each encoding domain-specific knowledge about the diagnosis and treatment of microbial infections.

PUFF, EXPERT, PROSPECTOR, R1, and CLAVIER were among the several production rule systems that followed.


Simon, Newell, and Shaw demonstrated how heuristics can overcome the drawbacks of classical algorithms, which guarantee solutions but may require extensive search or heavy computation to find them.


An algorithm is a procedure for solving a problem in a finite, well-defined sequence of steps.

Sequential operations, conditional operations, and iterative operations are the three kinds of fundamental instructions required to create computable algorithms.

Sequential operations perform tasks in a step-by-step manner.

The algorithm moves on to the next task only when each step is complete.

Conditional operations are made up of instructions that ask questions and then choose the next step depending on the answer.

One kind of conditional operation is the "IF-THEN" expression.

Iterative operations run "loops" of instructions.

These statements tell the task flow to go back and repeat a previous series of statements in order to solve the problem.

Algorithms are often compared to cookbook recipes, in which a specific sequence of fixed instructions dictates the order and execution of the actions that produce a product, in this case food.
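
The toy Python routine below illustrates the three kinds of instructions in one place; the "recipe" steps and numbers are invented for illustration.

```python
# A toy "recipe" algorithm showing sequential, conditional, and iterative steps.

def bake_loaf(oven_temperature_c):
    # Sequential operations: steps carried out one after another.
    dough = "flour + water + yeast"
    dough = f"kneaded({dough})"

    # Conditional operation: an IF-THEN choice of the next step.
    if oven_temperature_c < 220:
        oven_temperature_c = 220   # preheat further before baking

    # Iterative operation: a loop that repeats a step until the job is done.
    minutes_baked = 0
    while minutes_baked < 30:
        minutes_baked += 1

    return f"baked({dough}, minutes={minutes_baked}, temp={oven_temperature_c})"

print(bake_loaf(180))
```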


Newell, Shaw, and Simon created list processing for the Logic Theorist software in 1956.


List processing is a programming technique for allocating dynamic storage.

It's mostly utilized in symbol manipulation computer applications like compiler development, visual or linguistic data processing, and artificial intelligence, among others.

Allen Newell, J. Clifford Shaw, and Herbert A. Simon are credited with creating the first list processing software, with large, sophisticated, and flexible memory structures that did not depend on consecutive blocks of machine memory.

List processing techniques are used in a number of higher-order languages.

IPL and LISP, two artificial intelligence languages, are the most well-known.
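
The brief Python sketch below imitates LISP-style list processing with cons cells, whose linked structure can grow dynamically without relying on consecutive memory; the helper names follow LISP convention, but the example itself is only illustrative.

```python
# A sketch of LISP-style list processing using cons cells.

def cons(head, tail):
    return (head, tail)          # a cell pointing to a value and to the rest

def car(cell):
    return cell[0]               # first element of the list

def cdr(cell):
    return cell[1]               # remainder of the list

# Build the list (A B C); each cell can live anywhere in memory.
lst = cons("A", cons("B", cons("C", None)))

def to_python_list(cell):
    out = []
    while cell is not None:
        out.append(car(cell))
        cell = cdr(cell)
    return out

print(to_python_list(lst))       # ['A', 'B', 'C']
```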


Simon and Newell's General Problem Solver (GPS), published in the early 1960s, thoroughly describes the essential properties of symbol manipulation as a general process underlying all kinds of intelligent problem-solving behavior.


GPS formed the foundation for decades of early AI research.

The General Problem Solver is a program that employs means-ends analysis and planning to arrive at a solution.

GPS was created with the goal of separating the problem-solving process from knowledge specific to the problem at hand, allowing it to be applied to a wide range of problems.
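
A minimal sketch of means-ends analysis in the spirit of GPS appears below: it compares the current state with the goal and repeatedly applies an operator that reduces the remaining difference. The state, goal, and operators are toy assumptions, not GPS's actual representation.

```python
# A minimal means-ends analysis sketch; the problem is invented for illustration.

goal = {"at_home": False, "at_office": True, "has_keys": True}

# Each operator removes a difference if its preconditions hold.
operators = [
    ("pick_up_keys", {"has_keys": False}, {"has_keys": True}),
    ("drive_to_office", {"has_keys": True, "at_home": True},
     {"at_home": False, "at_office": True}),
]

def differences(state, goal):
    return {k: v for k, v in goal.items() if state.get(k) != v}

def solve(state):
    plan = []
    while differences(state, goal):
        for name, pre, post in operators:
            relevant = any(k in differences(state, goal) for k in post)
            applicable = all(state.get(k) == v for k, v in pre.items())
            if relevant and applicable:
                state = {**state, **post}   # apply the operator
                plan.append(name)
                break
        else:
            return None   # no operator reduces the remaining difference
    return plan

print(solve({"at_home": True, "at_office": False, "has_keys": False}))
```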

Simon was also an economist, a political scientist, and a cognitive psychologist.


Simon is known for the notions of bounded rationality, satisficing, and power law distributions in complex systems, in addition to his important contributions to organizational theory, decision-making, and problem-solving.


Computer and data scientists are interested in all three themes.

Human reasoning is inherently constrained, according to bounded rationality.

Humans lack the time or knowledge required to make ideal judgments; problems are difficult, and the mind has cognitive limitations.

Satisficing describes a decision-making process that produces a solution that "satisfies" and "suffices," rather than the optimal one.

Customers use satisficing in market conditions when they choose things that are "good enough," meaning sufficient or acceptable.


In his work on complex organizations, Simon described how power law distributions arise from preferential attachment mechanisms.


Power laws, also known as scaling laws, describe relationships in which a relative change in one quantity produces a proportional relative change in another.

A square is a simple illustration; when the length of a side doubles, the square's area quadruples.

Power laws may be found in biological systems, fractal patterns, and wealth distributions, among other things.

Preferential attachment processes explain why the affluent grow wealthier in income and wealth distributions: income is distributed in proportion to individuals' current level of wealth, so those with more wealth receive proportionally more income, and hence greater overall wealth, than those with less.

When graphed, such distributions often create so-called long tails.

These long-tailed distributions are being employed to explain crowdsourcing, microfinance, and online marketing, among other things.
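
The small simulation below shows how a rich-get-richer (preferential attachment) process produces a long-tailed distribution; the population size, number of rounds, and random seed are arbitrary assumptions.

```python
# A small rich-get-richer simulation with arbitrary parameters.
import random

random.seed(0)
wealth = [1.0] * 100                    # everyone starts with one unit

for _ in range(10_000):
    # Each new unit of income goes to person i with probability
    # proportional to that person's current wealth.
    winner = random.choices(range(len(wealth)), weights=wealth, k=1)[0]
    wealth[winner] += 1.0

wealth.sort(reverse=True)
print("top 5 holdings:   ", [round(w) for w in wealth[:5]])
print("bottom 5 holdings:", [round(w) for w in wealth[-5:]])
```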



Simon was born in Milwaukee, Wisconsin, to a Jewish electrical engineer with multiple patents who came from Germany in the early twentieth century.


His mother was a musical prodigy. Simon grew interested in the social sciences after reading books on psychology and economics written by an uncle.

He has said that two works inspired his early thinking on the subjects: Norman Angell's The Great Illusion (1909) and Henry George's Progress and Poverty (1879).



Simon obtained his doctorate in organizational decision-making from the University of Chicago in 1943.

Rudolf Carnap, Harold Lasswell, Charles E. Merriam, Nicolas Rashevsky, and Henry Schultz were among his instructors.

He started his career as a political science professor at the Illinois Institute of Technology, where he taught and conducted research.

In 1949, he transferred to Carnegie Mellon University, where he stayed until 2001.

He progressed through the ranks of the Department of Industrial Management to become its chair.

He wrote twenty-seven books and numerous published articles.

In 1959, he was elected a member of the American Academy of Arts and Sciences.

In 1975, Simon was awarded the coveted Turing Award, and in 1978, he was awarded the Nobel Prize in Economics.


~ Jai Krishna Ponnappan




See also: 


Dartmouth AI Conference; Expert Systems; General Problem Solver; Newell, Allen.


References & Further Reading:


Crowther-Heyck, Hunter. 2005. Herbert A. Simon: The Bounds of Reason in Modern America. Baltimore: Johns Hopkins Press.

Newell, Allen, and Herbert A. Simon. 1956. The Logic Theory Machine: A Complex Information Processing System. Santa Monica, CA: The RAND Corporation.

Newell, Allen, and Herbert A. Simon. 1976. “Computer Science as Empirical Inquiry: Symbols and Search.” Communications of the ACM 19, no. 3: 113–26.

Simon, Herbert A. 1996. Models of My Life. Cambridge, MA: MIT Press.



Artificial Intelligence - What Is The MYCIN Expert System?




MYCIN is an interactive expert system for diagnosing and treating infectious diseases, developed by computer scientists Edward Feigenbaum (1936–) and Bruce Buchanan at Stanford University in the 1970s.

MYCIN was Feigenbaum's second expert system (after DENDRAL), but it was the first to be commercially accessible as a standalone software package.

By the 1980s, TeKnowledge, the software business cofounded by Feigenbaum and other partners, offered EMYCIN as the most successful expert system shell for this purpose.

MYCIN was developed by Feigenbaum's Heuristic Programming Project (HPP) in collaboration with Stanford Medical School's Infectious Diseases Group (IDG).

The expert clinical physician was IDG's Stanley Cohen.

In the early 1970s, Feigenbaum and Buchanan had read reports of antibiotics being prescribed incorrectly because of misdiagnoses.

MYCIN was created to assist a human expert in making the best judgment possible.

MYCIN started out as a consultation tool.

After the results of a patient's blood tests, bacterial cultures, and other data were entered, MYCIN supplied a diagnosis together with the appropriate antibiotics and dosage.



MYCIN also served as an explanation system.

The physician-user could ask MYCIN, in plain English, to explain a particular inference.

Finally, MYCIN included a knowledge-acquisition program that was used to keep the system's knowledge base up to date.

Feigenbaum and his collaborators introduced two additional features to MYCIN after gaining experience with DENDRAL.

First, MYCIN's inference engine included a rule interpreter.

This enabled "goal-directed backward chaining" to be used to achieve diagnostic findings (Cendrowska and Bramer 1984, 229).

At each phase of the procedure, MYCIN set itself the goal of establishing a clinically useful parameter that matched the patient data submitted.

The inference engine looked for a set of rules that applied to the parameter in question.

MYCIN typically required more information when evaluating the premise of one of the rules in this parameter set.

The system's next subgoal was to get that data.

MYCIN might try out additional rules or ask the physician for further information.

This process was repeated until MYCIN had enough data on numerous factors to make a diagnosis.
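
The Python sketch below gives a rough picture of goal-directed backward chaining of this kind: to establish a goal parameter, it first tries rules that conclude it, recursively establishing their premises as subgoals, and only then falls back to asking for more data. The rules, parameter names, and simulated physician answers are invented for illustration and are not MYCIN's actual rules.

```python
# A rough sketch of goal-directed backward chaining; not MYCIN's actual rules.

RULES = [
    # (parameter the rule concludes, premises that must all be established)
    ("organism_is_gram_negative", ["stain_is_gram_negative"]),
    ("likely_e_coli", ["organism_is_gram_negative", "culture_site_is_blood"]),
]

# Simulated physician answers for parameters no rule can conclude (assumption).
PHYSICIAN_ANSWERS = {"stain_is_gram_negative": True, "culture_site_is_blood": True}

def establish(goal, facts):
    """Establish a goal parameter: try applicable rules first, then ask for data."""
    if goal in facts:
        return True
    for conclusion, premises in RULES:
        if conclusion == goal and all(establish(p, facts) for p in premises):
            facts.add(conclusion)          # subgoal satisfied via a rule
            return True
    if PHYSICIAN_ANSWERS.get(goal):        # fall back to requesting more data
        facts.add(goal)
        return True
    return False

facts = set()
print("diagnosis supported:", establish("likely_e_coli", facts))
```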

The certainty factor was MYCIN's second unique feature.

According to William van Melle (then a doctoral student working on MYCIN for his thesis project), these factors should not be seen "as conditional probabilities, [though] they are loosely grounded on probability theory" (van Melle 1978, 314).

MYCIN assigned a value between –1 and +1 to the execution of each production rule, depending on how strongly the system believed in its correctness.

MYCIN's diagnoses also reported these certainty factors, allowing the physician-user to make the final decision.
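
As a rough illustration, the sketch below combines two certainty factors in the range –1 to +1 that bear on the same conclusion, using a combination rule commonly cited in the MYCIN/EMYCIN literature; the exact formula and the example values should be treated as assumptions here.

```python
# Combining certainty factors in [-1, +1] that bear on the same conclusion,
# following a combination rule commonly cited for MYCIN/EMYCIN (assumed here).

def combine_cf(cf1, cf2):
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)                     # mutual support
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)                     # mutual disbelief
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))   # conflicting evidence

# Two rules weakly supporting the same diagnosis reinforce each other...
print(combine_cf(0.4, 0.6))    # 0.76
# ...while conflicting evidence pulls the overall certainty toward zero.
print(combine_cf(0.7, -0.4))   # 0.5
```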

The software package, known as EMYCIN, was released in 1976 and comprised an inference engine, user interface, and short-term memory.

It contained no knowledge of its own.

("E" stood for "Empty" at first, then "Essential.") Customers of EMYCIN were required to link their own knowledge base to the system.

Faced with high demand for EMYCIN packages and high interest in MOLGEN (Feigenbaum's third expert system), HPP decided to form IntelliCorp and TeKnowledge, the first two expert system firms.

TeKnowledge was eventually founded by a group of roughly twenty individuals, including all of the previous HPP students who had developed expert systems.

EMYCIN was, and remained, their most popular product.


~ Jai Krishna Ponnappan




See also: 


Expert Systems; Knowledge Engineering


References & Further Reading:


Cendrowska, J., and M. A. Bramer. 1984. “A Rational Reconstruction of the MYCIN Consultation System.” International Journal of Man-Machine Studies 20 (March): 229–317.

Crevier, Daniel. 1993. AI: Tumultuous History of the Search for Artificial Intelligence. Princeton, NJ: Princeton University Press.

Feigenbaum, Edward. 2000. “Oral History.” Charles Babbage Institute, October 13, 2000.

van Melle, William. 1978. “MYCIN: A Knowledge-based Consultation Program for Infectious Disease Diagnosis.” International Journal of Man-Machine Studies 10 (May): 313–22.







Artificial Intelligence in Medicine.

 



Artificial intelligence aids health-care providers by assisting with tasks that require large-scale data management.

Artificial intelligence (AI) is revolutionizing how clinicians diagnose, treat, and predict outcomes in clinical settings.

One of the earliest effective applications of artificial intelligence in medicine came in the 1970s, when Scottish surgeon Alexander Gunn used computer analysis to help diagnose severe abdominal pain.

Artificial intelligence applications have risen in quantity and complexity since then, in line with advances in computer science.

Artificial neural networks, fuzzy expert systems, evolutionary computation, and hybrid intelligent systems are the most prevalent AI applications in medicine.

Artificial neural networks (ANNs) are brain-inspired systems that mimic how people learn and absorb information.

Warren McCulloch and Walter Pitts created the first artificial "neurons" in the mid-twentieth century.

Paul Werbos later gave artificial neural networks the capacity to perform backpropagation, the process of adjusting the weights of the neural layers in response to new examples.

ANNs are built up of linked processors known as "neurons" that process data in parallel.

In most cases, these neurons are divided into three layers: input, middle (or hidden), and output.

Each layer is fully connected to the one before it.

Individual neurons are connected by links, and each link is assigned a weight.

The technology "learns" by adjusting these weights.

ANNs make it feasible to create sophisticated tools capable of processing nonlinear data and generalizing from imprecise data sets.

Because of their capacity to spot patterns and interpret nonlinear data, ANNs have found widespread use in therapeutic contexts.

ANNs are utilized in radiology for image analysis, high-risk patient identification, and intensive care data analysis.

In instances where a variety of factors must be evaluated, ANNs are extremely beneficial for diagnosing and forecasting outcomes.
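
The toy Python example below traces a forward pass through a three-layer network of the kind described above; the architecture, weights, and inputs are invented for illustration and do not represent any clinical model.

```python
# A toy three-layer network (input, hidden, output) with hand-picked weights.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Weights connecting layers; learning would adjust these via backpropagation.
W_hidden = [[0.5, -0.3], [0.8, 0.2]]   # 2 inputs -> 2 hidden neurons
W_output = [0.7, -0.6]                  # 2 hidden neurons -> 1 output neuron

def forward(inputs):
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in W_hidden]
    return sigmoid(sum(w * h for w, h in zip(W_output, hidden)))

# e.g. two normalized measurements in, one risk-like score (0..1) out
print(forward([0.9, 0.1]))
```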

Fuzzy expert systems are artificial intelligence techniques that can operate in ambiguous situations.

In contrast to systems based on traditional logic, fuzzy systems are founded on the understanding that data processing often has to deal with ambiguity and vagueness.

Because medical information is typically complicated and imprecise, fuzzy expert systems are useful in health care.

Fuzzy systems can recognize, understand, manipulate, and use ambiguous information for a variety of purposes.

Fuzzy logic algorithms are being utilized to predict a variety of outcomes for patients with cancers including lung cancer and melanoma.

They have also been utilized to plan treatment for those who are dangerously unwell.
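
The short sketch below shows the basic idea of fuzzy membership and a single fuzzy rule evaluated with the common minimum operator for AND; the membership functions, thresholds, and symptom scores are invented assumptions, not a validated clinical tool.

```python
# A sketch of fuzzy membership and one fuzzy rule; all thresholds are invented.

def membership_high_fever(temp_c):
    """Degree (0..1) to which a temperature counts as a 'high fever'."""
    if temp_c <= 37.5:
        return 0.0
    if temp_c >= 40.0:
        return 1.0
    return (temp_c - 37.5) / (40.0 - 37.5)

def membership_severe_cough(score):
    """Degree (0..1) derived from a 0-10 symptom score."""
    return min(max((score - 3) / 7.0, 0.0), 1.0)

# Fuzzy rule: IF fever is high AND cough is severe THEN infection is likely.
# AND is commonly taken as the minimum of the membership degrees.
def infection_likelihood(temp_c, cough_score):
    return min(membership_high_fever(temp_c), membership_severe_cough(cough_score))

print(infection_likelihood(38.8, 8))   # a partial degree of truth, not a yes/no
```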

Algorithms inspired by natural evolutionary processes are used in evolutionary computing.

Evolutionary computing solves problems through trial and error, progressively optimizing the performance of candidate solutions.

These algorithms produce an initial set of solutions and then, with each subsequent generation, make small random adjustments and discard the failed intermediate solutions.

The solutions are thereby subjected to a form of mutation and natural selection.

As the fitness of the solutions improves, the result is an algorithm whose answers get better over time.

While there are many other types of these algorithms, the genetic algorithm is the most common one utilized in the field of medicine.

Genetic algorithms were created in the 1970s by John Holland and use fundamental evolutionary mechanisms to build solutions in complicated settings such as health care.

They are employed for a variety of clinical tasks, including diagnostics, medical imaging, scheduling, and signal processing.
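
The small Python example below implements a bare-bones genetic algorithm with selection, crossover, and mutation on bit strings; the fitness function and hyperparameters are arbitrary assumptions chosen only to show the mechanics.

```python
# A bare-bones genetic algorithm maximizing a toy fitness function.
import random

random.seed(1)

def fitness(bits):
    return sum(bits)            # toy objective: as many 1s as possible

def mutate(bits, rate=0.05):
    return [1 - b if random.random() < rate else b for b in bits]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(50):
    # Selection: keep the fitter half, discard the rest.
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    # Reproduction: offspring from crossover plus small random mutations.
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(15)]
    population = survivors + children

print("best fitness:", fitness(max(population, key=fitness)))
```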

Hybrid intelligent systems are AI technologies that combine several of the systems above to take advantage of their respective strengths.

Hybrid systems are better at imitating human logic and adapting to changing circumstances.

These systems, like the individual AI technologies listed above, are being applied in a variety of healthcare situations.

Currently, they are utilized to detect breast cancer, measure myocardial viability, and interpret digital mammograms.


~ Jai Krishna Ponnappan




See also: 


Clinical Decision Support Systems; Computer-Assisted Diagnosis; MYCIN; Precision Medicine Initiative.



References & Further Reading:


Baeck, Thomas, David B. Fogel, and Zbigniew Michalewicz, eds. 1997. Handbook of Evolutionary Computation. Boca Raton, FL: CRC Press.

Eiben, Agoston, and Jim Smith. 2003. Introduction to Evolutionary Computing. Berlin: Springer-Verlag.

Patel, Jigneshkumar L., and Ramesh K. Goyal. 2007. “Applications of Artificial Neural Networks in Medical Science.” Current Clinical Pharmacology 2, no. 3: 217–26.

Ramesh, Anavai N., Chandrasekhar Kambhampati, John R. T. Monson, and Patrick J. Drew. 2004. “Artificial Intelligence in Medicine.” Annals of the Royal College of Surgeons of England 86, no. 5: 334–38.


Artificial Intelligence - Knowledge Engineering In Expert Systems.

  


Knowledge engineering (KE) is a subfield of artificial intelligence that aims to incorporate expert knowledge into a formal automated programming system in such a way that the latter, working with the same data, can produce the same or comparable problem-solving results as human experts.

Knowledge engineering, more precisely, is a discipline that develops methodologies for constructing large knowledge-based systems (KBS), also known as expert systems, using appropriate methods, models, tools, and languages.

For knowledge elicitation, modern knowledge engineering uses the knowledge acquisition and documentation structuring (KADS) approach; hence, the development of knowledge-based systems is treated as a modeling effort (i.e., knowledge engineering builds computer models).

It's challenging to codify the knowledge acquisition process since human specialists' knowledge is a combination of skills, experience, and formal knowledge.

As a result, rather than directly transferring knowledge from human experts to the programming system, the experts' knowledge is modeled.

Simultaneously, direct simulation of the entire cognitive process of experts is extremely difficult.

The computer models that are designed are expected to achieve results comparable to those of experts solving problems in the domain, rather than to match the experts' cognitive capabilities.

As a result, knowledge engineering focuses on modeling and problem-solving methods (PSM) that are independent of various representation formalisms (production rules, frames, etc.).

The problem solving method is a key component of knowledge engineering, and it refers to the knowledge-level specification of a reasoning pattern that can be used to complete a knowledge-intensive task.

Each problem-solving technique is a pattern that offers template structures for addressing a specific issue.

The terms "diagnostic," "classification," and "configuration" are often used to categorize problem-solving strategies based on their topology.

PSM "Cover-and-Differentiate" for diagnostic tasks and PSM "Propose-and-Revise" for parametric design tasks are two examples.

Any problem-solving method rests on the assumption that its logical adequacy corresponds to the computational tractability of the system implementation based on it.

The PSM of heuristic classification, an inference pattern that describes the behavior of knowledge-based systems in terms of goals and the knowledge required to attain them, was often used in early expert systems.

Inference actions and knowledge roles, as well as their relationships, are covered by this problem-solving strategy.

The relationships specify how domain knowledge is used in each inference action.

The knowledge roles are observables, abstract observables, solution abstractions, and solutions, while the inference actions are abstract, heuristic match, and refine.

The PSM of heuristic classification requires hierarchically organized models of the observables and the solutions for the "abstract" and "refine" steps, which makes it suited to acquiring static domain knowledge.
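
The sketch below walks through the three inference actions of heuristic classification, abstracting raw observables, heuristically matching them to a solution abstraction, and refining that abstraction into candidate solutions; every table and threshold in it is invented for illustration.

```python
# A sketch of the heuristic-classification pattern: abstract -> match -> refine.
# All tables and thresholds below are invented for illustration.

def abstract(observables):
    # Raw observables -> abstract observables.
    abstractions = set()
    if observables.get("temperature_c", 37.0) > 38.0:
        abstractions.add("febrile")
    if observables.get("white_cell_count", 7.0) > 11.0:
        abstractions.add("elevated_wbc")
    return abstractions

HEURISTIC_MATCH = {
    frozenset({"febrile", "elevated_wbc"}): "infection",      # solution abstraction
}

REFINE = {
    "infection": ["bacterial infection", "viral infection"],  # candidate solutions
}

def classify(observables):
    abstractions = abstract(observables)
    solution_class = HEURISTIC_MATCH.get(frozenset(abstractions), "unknown")
    return REFINE.get(solution_class, [solution_class])

print(classify({"temperature_c": 39.1, "white_cell_count": 13.0}))
```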

In the late 1980s, knowledge engineering modeling methodologies shifted toward role limiting methods (RLM) and generic tasks (GT).

The idea of the "knowledge role" is utilized in role-limiting methods to specify how specific domain knowledge is employed in the problem-solving process.

RLM creates a wrapper around a PSM, describing it in general terms so that it can be reused.

However, since this technique covers only a single PSM, it is ineffective for problems that require the use of several methods.

Configurable role limiting methods (CRLM) are an extension of the role limiting methods concept, offering a predetermined collection of RLMs as well as a fixed scheme of knowledge categories.

Each member method may be applied to a distinct subset of a task, but introducing a new method is difficult because it requires changes to the established knowledge categories.

The generic task method includes a predefined scheme of knowledge kinds and an inference mechanism, as well as a general description of input and output.

The generic task is based on the "strong interaction problem hypothesis," which claims that the structure and representation of domain knowledge may be completely determined by its use.

Each generic task makes use of knowledge and employs control mechanisms tailored to that knowledge.

Because the control techniques are more domain-specific, the actual knowledge acquisition employed in GT is more precise in terms of problem-solving step descriptions.

As a result, the design of specialized knowledge-based systems may be thought of as the instantiation of specified knowledge categories using domain-specific words.

The downside of GT is that it may not be possible to combine a prescribed problem-solving approach with the optimal problem-solving strategy required to complete the task.

The task structure (TS) approach seeks to address GT's shortcomings by distinguishing between the task and the method employed to accomplish it.

As a result, every task structure based on this approach postulates how the problem might be solved using a collection of generic tasks, as well as what knowledge must be acquired or produced for those tasks.

Because of the requirement for several models, modeling frameworks were created to meet various parts of knowledge engineering methodologies.

The most common knowledge engineering framework, CommonKADS (which builds on KADS), comprises an organizational model, task model, agent model, communication model, expertise model, and design model.

The organizational model explains the structure as well as the tasks that each unit performs.

The task model describes tasks in a hierarchical order.

Each agent's skills in task execution are specified by the agent model.

The communication model specifies how agents interact with one another.

The most significant is the expertise model, which employs several layers: a domain layer that represents domain-specific knowledge and an inference layer that represents the inferences of the reasoning process.

The expertise model also supports a task layer, which is concerned with task decomposition.

The system architecture and computational mechanisms used to make the inference are described in the design model.

In CommonKADS, there is a clear separation between domain-specific knowledge and generic problem-solving methods, which allows new problems to be addressed by constructing a new instance of the domain layer and applying the PSM to that different domain.

Several libraries of problem-solving algorithms are now available for use in development.

They are distinguished by several key characteristics: whether the library was created for a specific purpose or has broader scope; whether it is formal, informal, or implemented; whether it uses fine-grained or coarse-grained PSMs; and, lastly, its size.

Recently, some research has been carried out with the goal of unifying existing libraries by offering adapters that convert task-neutral PSM to task-specific PSM.

The MIKE (model-based and incremental knowledge engineering) method, which proposes integrating semiformal and formal specification and prototyping into the framework, grew out of the creation of CommonKADS.

As a result, MIKE divides the entire process of developing knowledge-based systems into a number of sub-activities, each of which focuses on a different aspect of system development.

The Protégé method makes use of PSMs and ontologies, an ontology being defined as an explicit specification of a shared conceptualization that holds in a particular context.

Although the ontologies used in Protégé may be of any type, those employed are domain ontologies, which describe the shared conceptualization of a domain, and method ontologies, which specify the concepts and relations used by problem-solving methods.

In addition to problem-solving techniques, the development of knowledge-based systems necessitates the creation of particular languages capable of defining the information needed by the system as well as the reasoning process that will use that knowledge.

The purpose of such languages is to give a clear and formal foundation for expressing knowledge models.

Furthermore, some of these formal languages may be executable, allowing simulation of knowledge model behavior on specified input data.

The knowledge was directly encoded in rule-based implementation languages in the early years.

This led to a number of problems, including the inability to represent some forms of knowledge, the difficulty of ensuring consistent representation of different types of knowledge, and a lack of detail.

Modern approaches to language development aim to target and formalize the conceptual models of knowledge-based systems, allowing users to precisely define the goals and process for obtaining models, as well as the functionality of interface actions and accurate semantics of the various domain knowledge elements.

The majority of these epistemological languages include primitives like constants, functions, and predicates, as well as certain mathematical operations.

Object-oriented or frame-based languages, for example, define a wide range of modeling primitives such as objects and classes.

KARL, (ML)2, and DESIRE are the most common examples of specific languages.

KARL is a language that employs a variant of Horn logic.

It was created as part of the MIKE project and combines two forms of logic to target the KADS expertise model: L-KARL and P-KARL.

L-KARL is a variant of frame logic that can be used in the inference and domain layers.

It's a mix of first-order logic and semantic data modeling primitives, in fact.

P-KARL is a task layer specification language that is also a dynamic logic in some versions.

For KADS expertise models, (ML)2 is a formalization language.

The language mixes first-order extended logic for domain layer definition, first-order meta logic for inference layer specification, and quantified dynamic logic for task layer specification.

The concept of compositional architecture is used in DESIRE (the design and specification of interconnected reasoning components).

It specifies the dynamic reasoning process using temporal logics.

Transactions describe the interaction between components in knowledge-based systems, and control flow between any two objects is specified as a set of control rules.

A metadata description is attached to each item.

In a declarative approach, the meta level specifies the dynamic features of the object level.

The need to design large knowledge-based systems prompted the development of knowledge engineering, which entails creating a computer model with the same problem-solving capabilities as human experts.

Knowledge engineering views knowledge-based systems as operational systems that should display some desirable behavior, and provides modeling methodologies, tools, and languages to construct such systems.




~ Jai Krishna Ponnappan





See also: 


Clinical Decision Support Systems; Expert Systems; INTERNIST-I and QMR; MOLGEN; MYCIN.



Further Reading:


Schreiber, Guus. 2008. “Knowledge Engineering.” In Foundations of Artificial Intelligence, vol. 3, edited by Frank van Harmelen, Vladimir Lifschitz, and Bruce Porter, 929–46. Amsterdam: Elsevier.

Studer, Rudi, V. Richard Benjamins, and Dieter Fensel. 1998. “Knowledge Engineering: Principles and Methods.” Data & Knowledge Engineering 25, no. 1–2 (March): 161–97.

Studer, Rudi, Dieter Fensel, Stefan Decker, and V. Richard Benjamins. 1999. “Knowledge Engineering: Survey and Future Directions.” In XPS 99: German Conference on Knowledge-Based Systems, edited by Frank Puppe, 1–23. Berlin: Springer.


