
Artificial Intelligence - Predictive Policing.

 





Predictive policing refers to proactive police techniques that are based on software-generated projections, particularly projections of high-risk places and times.

Since the late 2000s, these tactics have been progressively used in the United States and in a number of other nations throughout the globe.

Predictive policing has sparked heated debates about its legality and effectiveness.

Deterrence work in policing has always depended on some type of prediction.





Furthermore, from its inception in the late 1800s, criminology has included the study of trends in criminal behavior and the prediction of at-risk persons.

As early as the late 1920s, predictions were used in the criminal justice system.

Since the 1970s, an increased focus on geographical components of crime research, particularly spatial and environmental characteristics (such as street lighting and weather), has helped to establish crime mapping as a useful police tool.





Since the 1980s, proactive policing techniques have progressively used "hot-spot policing," which focuses police resources (particularly patrols) in regions where crime is most prevalent.

Predictive policing is sometimes misunderstood to mean that it prevents crime before it happens, as in the science fiction film Minority Report (2002).

Unlike conventional crime analysis approaches, predictive policing tools depend on predictive models powered by software programs that statistically analyze police data and/or apply machine-learning algorithms.





Perry et al. (2013) identified three sorts of projections that these tools can make: 

(1) locations and times when crime is more likely to occur; 

(2) persons who are more likely to commit crimes; and 

(3) the identities of likely offenders and victims of crimes.


"Predictive policing," on the other hand, generally relates mainly to the first and second categories of predictions.






Two forms of modeling are available in predictive policing software tools.

Geospatial models show when and where crimes are likely to occur (down to a neighborhood or even a block), and they lead to the mapping of crime “hot spots.” Individual-based modeling is the second form.

Programs that perform this second sort of modeling use variables such as age, criminal history, and gang involvement to estimate the chance of a person becoming engaged in criminal activity, particularly a violent one.

These forecasts are often made in conjunction with the adoption of proactive police measures (Ridgeway 2013).

Geospatial modeling naturally goes together with police patrols and restrictions in crime “hot spots.”

In the case of individual-based modeling, individuals judged to have a high risk of becoming involved in criminal behavior are placed under observation or reported to the authorities.
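
To make the geospatial form concrete, here is a deliberately simplified sketch of hot-spot ranking: historical incidents are binned into grid cells, and the cells with the most recent incidents are flagged. The grid size, time window, incident coordinates, and cutoff are all hypothetical, and no commercial product's actual algorithm is reproduced here.

```python
from collections import Counter

# Hypothetical incident records: (x, y) location in meters and a day index.
incidents = [(120, 340, 1), (130, 350, 2), (900, 100, 2), (125, 345, 3), (910, 110, 3)]

CELL_SIZE = 150    # side length of a grid cell in meters (illustrative)
WINDOW_DAYS = 7    # only count incidents from the last week (illustrative)
TOP_K = 3          # number of cells to flag as "hot spots"

def hot_spots(incidents, today):
    """Rank grid cells by how many recent incidents fall inside them."""
    counts = Counter()
    for x, y, day in incidents:
        if today - day <= WINDOW_DAYS:
            cell = (x // CELL_SIZE, y // CELL_SIZE)  # bin the point into a cell
            counts[cell] += 1
    return counts.most_common(TOP_K)

print(hot_spots(incidents, today=3))
# [((0, 2), 3), ((6, 0), 2)] -- the cells with the most recent incidents
```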

Since the late 2000s, police agencies have progressively adopted software tools from technology companies that help them create projections and implement predictive policing methods.

With the deployment of PredPol in 2011, the Santa Cruz Police Department became the first in the United States to employ such a strategy.





This software tool, which was inspired by earthquake aftershock prediction techniques, offers daily (and occasionally hourly) maps of "hot zones." It was first restricted to property offenses, but it was subsequently expanded to encompass violent crimes.
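
In the family of aftershock-inspired, “self-exciting” models mentioned above, each past crime temporarily raises the expected rate of further crimes nearby. The sketch below illustrates only that general idea for a single location; the parameter values (mu, alpha, beta) are invented, and the functional form is not presented as the commercial product's.

```python
import math

def intensity(t, past_event_times, mu=0.2, alpha=0.5, beta=1.0):
    """Expected event rate at time t: a constant background rate plus exponentially
    decaying 'aftershock' contributions from each earlier event (illustrative parameters)."""
    boost = sum(alpha * math.exp(-beta * (t - ti)) for ti in past_event_times if ti < t)
    return mu + boost

# A burst of recent events raises the predicted rate for the near future...
print(intensity(10.0, past_event_times=[9.0, 9.5, 9.8]))   # roughly 1.1
# ...while the rate decays back toward the background level as time passes.
print(intensity(20.0, past_event_times=[9.0, 9.5, 9.8]))   # roughly 0.2
```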

More than sixty police agencies throughout the United States already employ PredPol.

In 2012, the New Orleans Police Department was one of the first to employ Palantir to perform predictive policing.

Since then, many more software programs have been created, including CrimeScan, which analyzes seasonal and weekday trends in addition to crime statistics, and Hunchlab, which employs machine learning techniques and adds weather patterns.

Some police agencies utilize software tools that enable individual-based modeling in addition to geographic modeling.

The Chicago Police Department, for example, has relied on the Strategic Subject List (SSL) since 2013, which is generated by an algorithm that assesses the likelihood of persons being engaged in a shooting as either perpetrators or victims.

Individuals with the highest risk ratings are referred to the police for preventative action.




Predictive policing has been used in countries other than the United States.


PredPol was originally used in the United Kingdom in the early 2010s, and the Crime Anticipation System, which was first utilized in Amsterdam, was made accessible to all Dutch police departments in May 2017.

Several concerns have been raised about the accuracy of predictions produced by software algorithms employed in predictive policing.

Some argue that software systems are more objective than human crime analysts and can anticipate where crime will occur more accurately.

Predictive policing, from this viewpoint, may lead to a more efficient allocation of police resources (particularly police patrols) and is cost-effective, especially when software is used instead of paying human crime data analysts.

Critics counter that software program forecasts embed systemic biases, since they depend on police data that is itself heavily skewed by two sorts of flaws.

First, crime records reflect law enforcement activity rather than criminal activity itself.

Arrests for marijuana possession, for example, provide information on the communities and people targeted by police in their anti-drug efforts.

Second, not all victims report crimes to the police, and not all crimes are documented in the same way.

Sexual crimes, child abuse, and domestic violence, for example, are generally underreported, and U.S. citizens are more likely than non-U.S. citizens to report a crime.

For all of these reasons, some argue that predictions produced by predictive police software algorithms may merely tend to repeat prior policing behaviors, resulting in a feedback loop: In areas where the programs foresee greater criminal activity, policing may be more active, resulting in more arrests.

To put it another way, predictive police software tools may be better at predicting future policing than future criminal activity.

Furthermore, others argue that predictive police forecasts are racially prejudiced, given how historical policing has been far from colorblind.

Furthermore, since race and location of residency in the United States are intimately linked, the use of predictive policing may increase racial prejudices against nonwhite communities.

However, evaluating the effectiveness of predictive policing is difficult, since doing so raises a number of methodological challenges.

In fact, there is no statistical proof that it has a more beneficial impact on public safety than previous or other police approaches.

Finally, others argue that predictive policing is unsuccessful at decreasing crime because police patrols merely displace criminal activity rather than eliminating it.

Predictive policing has sparked several debates.

The constitutionality of predictive policing's implicit preemptive action, for example, has been questioned, since the hot-spot policing that commonly comes with it may involve stop-and-frisks or the unjustified stopping, searching, and questioning of persons.

Predictive policing raises ethical concerns about how it may infringe on civil freedoms, particularly the legal notion of presumption of innocence.

Those placed on lists like the SSL, for example, should be able to contest their inclusion.

Furthermore, police agencies' lack of openness about how they use their data has been attacked, as has software firms' lack of transparency surrounding their algorithms and predictive models.

Because of this lack of openness, individuals are oblivious to why they are on lists like the SSL or why their area is often monitored.

Members of civil rights groups are becoming more concerned about the use of predictive policing technologies.

Predictive Policing Today: A Shared Statement of Civil Rights Concerns was published in 2016 by a coalition of seventeen organizations, highlighting the technology's racial biases, lack of transparency, and other serious flaws that lead to injustice, particularly for people of color and nonwhite neighborhoods.

In June 2017, four journalists sued the Chicago Police Department under the Freedom of Information Act, demanding that the department release all information on the algorithm used to create the SSL.

While police departments are increasingly implementing software programs that predict crime, their use may decline in the future due to their mixed results in terms of public safety.

In 2018, two police agencies, in Kent (United Kingdom) and New Orleans (Louisiana), terminated their contracts with predictive policing software companies.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Clinical Decision Support Systems; Computer-Assisted Diagnosis.








Artificial Intelligence - The Precision Medicine Initiative.

 





Precision medicine, or preventative and treatment measures that account for individual variability, is not a new concept.

For more than a century, blood type has been used to guide blood transfusions.




However, the recent development of large-scale biologic databases (such as the human genome sequence), powerful methods for characterizing patients (such as proteomics, metabolomics, genomics, diverse cellular assays, and even mobile health technology), and computational tools for analyzing large sets of data have significantly improved the prospect of expanding this application to more broad uses (Collins and Varmus 2015, 793).

The Precision Medicine Initiative (PMI), which was launched by President Barack Obama in 2015, is a long-term research endeavor including the National Institutes of Health (NIH) and a number of other public and commercial research organizations.

The initiative's goal, as stated, is to learn how a person's genetics, environment, and lifestyle can help determine viable disease prevention, treatment, and mitigation strategies.





It consists of both short- and long-term objectives.

The short-term objectives include advancing precision medicine in cancer research.

Scientists at the National Cancer Institute (NCI), for example, hope to use a better understanding of the genetics and biology of cancer to develop new, more effective treatments for its many forms.

The long-term objectives of PMI are to introduce precision medicine to all aspects of health and health care on a wide scale.

To that end, the National Institutes of Health (NIH) created the All of Us Research Program in 2018, which aims to enlist at least one million volunteers from across the country.



Participants will provide genetic information, biological samples, and other health-related information.

Contributors will be able to view their health information, as well as research that incorporates their data, throughout the study to promote open data sharing.

Researchers will utilize the information to look at a variety of illnesses in order to better forecast disease risk, understand how diseases develop, and develop better diagnostic and treatment options (Morrison 2019, 6).

The PMI is designed to provide doctors with the information and assistance they need to incorporate personalized medicine services into their practices in order to accurately focus therapy and enhance health outcomes.

It will also work to enhance patient access to their medical records and assist physicians in using electronic technologies to make health information more accessible, eliminate inefficiencies in health-care delivery, cut costs, and improve treatment quality (Madara 2016, 1).

While the initiative explicitly states that participants will not get a direct medical benefit as a result of their participation, it also states that their participation may lead to medical breakthroughs that will benefit future generations.



By extending evidence-based disease models to include individuals from historically underrepresented communities, the initiative will help generate substantially more effective health treatments that assure quality and equality in efforts both to prevent illness and to decrease premature mortality (Haskins 2018, 1).


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.




See also: 


Clinical Decision Support Systems; Computer-Assisted Diagnosis.



References & Further Reading:



Collins, Francis S., and Harold Varmus. 2015. “A New Initiative on Precision Medicine.” New England Journal of Medicine 372, no. 2 (February 26): 793–95.

Haskins, Julia. 2018. “Wanted: 1 Million People to Help Transform Precision Medicine: All of Us Program Open for Enrollment.” Nation’s Health 48, no. 5 (July 2018): 1–16.

Madara, James L. 2016. “AMA Statement on Precision Medicine Initiative.” February 25, 2016. Chicago, IL: American Medical Association.

Morrison, S. M. 2019. “Precision Medicine.” Lister Hill National Center for Biomedical Communications. U.S. National Library of Medicine. Bethesda, MD: National Institutes of Health, Department of Health and Human Services.




Artificial Intelligence - The INTERNIST-I And QMR Expert Systems.

 



INTERNIST-I and QMR (Quick Medical Reference) are two similar expert systems created in the 1970s at the University of Pittsburgh School of Medicine.

The INTERNIST-I system was created by Jack D. Myers, who worked with the university's Intelligent Systems Program head Randolph A. Miller, artificial intelligence pioneer Harry Pople, and infectious disease specialist Victor Yu to encode his internal medicine knowledge.

The expert system's microcomputer version is known as QMR.

It was created in the 1980s by Fred E. Masarie, Jr., Randolph A. Miller, and Jack D. Myers at the University of Pittsburgh School of Medicine's Section of Medical Informatics.

The two expert systems shared algorithms and are often referred to jointly as INTERNIST-I/QMR.

QMR may be used as a decision support tool, but it can also be used to evaluate physician opinions and recommend laboratory testing.

QMR may also be used as a teaching tool since it includes case scenarios.

INTERNIST-I was created in a medical school course presented at the University of Pittsburgh by Myers, Miller, Pople, and Yu.

The course, The Logic of Problem-Solving in Clinical Diagnosis, required fourth-year students to integrate laboratory and sign-and-symptom data drawn from published and unpublished clinicopathological reports and patient histories.

The technology was also used as a “quizmaster” to test the enrolled students. The team developed a ranking algorithm, a partitioning algorithm, exclusion functions, and other heuristic rules instead of using statistical artificial intelligence approaches.

The algorithm generated a prioritized list of likely diagnoses based on the submitted physician findings, as well as responses to follow-up questions.

INTERNIST-I may potentially suggest further lab testing.

By 1982, the project's directors estimated that fifteen person-years had been invested in the system.

The system eventually included taxonomic information on some 1,000 disorders and three-quarters of all known internal medicine diagnoses, making it very knowledge-intensive.

At the pinnacle of the Greek oracle approach to medical artificial intelligence, the University of Pittsburgh School of Medicine produced INTERNIST-I.

In the system's first generation, the user was regarded mostly as a passive spectator.

The system's creators hoped that it might take the role of doctors in locations where they were rare, such as manned space missions, rural communities, and nuclear submarines.

The technology, on the other hand, was time-consuming and difficult to use for paramedics and medical personnel.

Donald McCracken and Robert Akscyn of neighboring Carnegie Mellon University implemented INTERNIST-I in ZOG, an early knowledge management hypertext system, to address this challenge.

QMR enhanced INTERNIST-I's user-friendliness while promoting more active exploration of the case study knowledge set.

QMR also used weighted scales and a ranking algorithm to analyze a patient's signs and symptoms and relate them to diagnoses.

By researching the literature in the field, system designers were able to assess the evoking strength and frequency (or sensitivity) of case findings.

The foundation of QMR is a heuristic algorithm that assesses evoking strength and frequency and assigns numerical values to them.
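
As a much-simplified illustration of that idea (not the calibrated scales or scoring formula actually used in INTERNIST-I/QMR), each disease profile below pairs a finding with an invented evoking strength and frequency; observed findings add support, and findings the disease usually produces but the patient lacks subtract a penalty.

```python
# Hypothetical disease profiles: finding -> (evoking_strength, frequency), both on a 0-5 scale.
PROFILES = {
    "disease_A": {"fever": (2, 4), "rash": (4, 3), "joint_pain": (1, 2)},
    "disease_B": {"fever": (1, 5), "cough": (3, 5)},
}

def rank_diagnoses(findings_present):
    """Order diseases by a toy evoking-strength/frequency score."""
    scores = {}
    for disease, profile in PROFILES.items():
        score = 0.0
        for finding, (evoking, frequency) in profile.items():
            if finding in findings_present:
                score += evoking          # support from an observed finding
            else:
                score -= frequency / 2    # penalty for an expected but absent finding
        scores[disease] = score
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(rank_diagnoses({"fever", "rash"}))  # disease_A outranks disease_B here
```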

In the solution of diagnostic problems, QMR adds rules that enable the system to convey time-sensitive reasoning.

The capacity to create homologies between several related groups of symptoms was one feature of QMR that was not available in INTERNIST-I.

QMR included not just diagnoses that were probable, but also illnesses with comparable histories, signs and symptoms, and early laboratory findings.

The system's accuracy was tested on a regular basis by comparing QMR's output with case files published in The New England Journal of Medicine.

QMR, which was commercially offered to doctors by First DataBank in the 1980s and 1990s, required roughly ten hours of basic training.

Typical runs of the software on individual patient situations were done after hours in private clinics.

QMR's architects eventually recast the expert system as a hyperlinked electronic textbook rather than a clinical decision-maker.

The National Library of Medicine, the NIH Division of Research Resources, and the CAMDAT Foundation all provided funding for INTERNIST-I/QMR.

DXplain, Meditel, and Iliad were three comparable medical artificial intelligence decision aids of the era.

G. Octo Barnett and Stephen Pauker of the Massachusetts General Hospital/Harvard Medical School Laboratory of Computer Science created DXplain with funding help from the American Medical Association.

DXplain's knowledge base was derived from the American Medical Association's (AMA) book Current Medical Information and Terminology (CMIT), which described the causes, symptoms, and test results for over 3,000 disorders.

The diagnostic algorithm at the core of DXplain, like that of INTERNIST-I, used a scoring or ranking procedure as well as modified Bayesian conditional probability computations.

In the 1990s, DXplain became accessible on diskette for PC users.

Meditel was developed from an earlier computerized decision aid, the Meditel Pediatric System, by Albert Einstein Medical Center educator Herbert Waxman and physician William Worley of the University of Pennsylvania Department of Medicine in the mid-1970s.

Using Bayesian statistics and heuristic decision principles, Meditel aided in suggesting probable diagnosis.

Meditel was marketed as a doc-in-a-box software package for IBM personal computers in the 1980s by Elsevier Science Publishing Company.

Dr. Homer Warner and his partners nurtured Iliad, a third medical AI competitor, in the Knowledge Engineering Center of the Department of Medical Informatics at the University of Utah.

The federal government awarded Applied Medical Informatics a two-million-dollar grant in the early 1990s to link Iliad's diagnostic software directly to computerized databases of patient data.

Iliad's core audience was doctors and medical students, but in 1994 the company released Medical HouseCall, a consumer version of Iliad. 


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Clinical Decision Support Systems; Computer-Assisted Diagnosis.




Further Reading:


Bankowitz, Richard A. 1994. The Effectiveness of QMR in Medical Decision Support: Executive Summary and Final Report. Springfield, VA: U.S. Department of Commerce, National Technical Information Service.

Freiherr, Gregory. 1979. The Seeds of Artificial Intelligence: SUMEX-AIM. NIH Publication 80-2071. Washington, DC: National Institutes of Health, Division of Research Resources.

Lemaire, Jane B., Jeffrey P. Schaefer, Lee Ann Martin, Peter Faris, Martha D. Ainslie, and Russell D. Hull. 1999. “Effectiveness of the Quick Medical Reference as a Diagnostic Tool.” Canadian Medical Association Journal 161, no. 6 (September 21): 725–28.

Miller, Randolph A., and Fred E. Masarie, Jr. 1990. “The Demise of the Greek Oracle Model for Medical Diagnosis Systems.” Methods of Information in Medicine 29, no. 1: 1–2.

Miller, Randolph A., Fred E. Masarie, Jr., and Jack D. Myers. 1986. “Quick Medical Reference (QMR) for Diagnostic Assistance.” MD Computing 3, no. 5: 34–48.

Miller, Randolph A., Harry E. Pople, Jr., and Jack D. Myers. 1982. “INTERNIST-1: An Experimental Computer-Based Diagnostic Consultant for General Internal Medicine.” New England Journal of Medicine 307, no. 8: 468–76.

Myers, Jack D. 1990. “The Background of INTERNIST-I and QMR.” In A History of Medical Informatics, edited by Bruce I. Blum and Karen Duncan, 427–33. New York: ACM Press.

Myers, Jack D., Harry E. Pople, Jr., and Jack D. Myers. 1982. “INTERNIST: Can Artificial Intelligence Help?” In Clinical Decisions and Laboratory Use, edited by Donald P. Connelly, Ellis S. Benson, M. Desmond Burke, and Douglas Fenderson, 251–69. Minneapolis: University of Minnesota Press.

Pople, Harry E., Jr. 1976. “Presentation of the INTERNIST System.” In Proceedings of the AIM Workshop. New Brunswick, NJ: Rutgers University.






Artificial Intelligence - What Is A Group Symbol Associator?



In the early 1950s, Firmin Nash, director of the South West London Mass X-Ray Service, devised the Group Symbol Associator, a slide rule-like device that enabled a clinician to correlate a patient's symptoms against 337 predefined symptom-disease complexes and arrive at a diagnosis.

It modeled the cognitive processes of automated medical decision-making by using multi-key look-up from inverted files.

The Group Symbol Associator has been dubbed a "cardboard brain" by Derek Robinson, a professor at the Ontario College of Art & Design's Integrated Media Program.

The approach resembles the inverted scriptural concordance of Hugo de Santo Caro, a Dominican monk who finished his index in 1247.

Marsden Blois, an artificial intelligence in medicine professor at the University of California, San Francisco, rebuilt the Nash device in software in the 1980s.

Blois' diagnostic aid RECONSIDER, which is based on the Group Symbol Associator, performed as well as or better than other expert systems, according to his own testing.

Nash dubbed the Group Symbol Associator the "Logoscope" because it employed propositional calculus to analyze different combinations of medical symptoms.

The Group Symbol Associator is one of the earliest efforts to mechanize diagnostic reasoning, in this instance with an analog instrument that was later adapted to digital computers.

Along the margin of Nash's cardboard rule, disease groupings chosen from mainstream textbooks on differential diagnosis are noted.

Each patient symptom or property has its own cardboard symptom stick with lines opposing the locations of illnesses that share that property.

There were a total of 82 sign and symptom sticks in the Group Symbol Associator.

Sticks that correspond to the state of the patient are chosen and entered into the rule.



Diseases aligned with a larger number of symptom lines are considered candidate diagnoses.

Nash's slide rule is simply a matrix with illnesses as columns and properties as rows.

Wherever a property is expected for a given illness, a mark (such as an “X”) is entered into the matrix.

Rows that describe symptoms that the patient does not have are removed.

The most probable or "best match" diagnosis is shown by columns with a mark in every cell.
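
Viewed that way, the device's procedure is easy to sketch in code. The disease-by-finding table below is invented for illustration; the real instrument encoded 337 symptom-disease complexes and 82 symptom sticks.

```python
# Hypothetical 'matrix': each disease column lists the findings it predicts.
MATRIX = {
    "disease_X": {"fever", "cough", "fatigue"},
    "disease_Y": {"fever", "rash"},
    "disease_Z": {"rash", "joint_pain"},
}

def best_matches(patient_findings):
    """Keep only the rows (findings) the patient has, then return the columns
    (diseases) that carry a mark in every remaining row -- Nash's best matches."""
    return [disease for disease, predicted in MATRIX.items()
            if patient_findings <= predicted]  # every present finding is predicted

print(best_matches({"fever", "rash"}))  # ['disease_Y']
```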

When seen as a matrix, the Nash device reconstructs information in the same manner as peek-a-boo card retrieval systems did in the 1940s to manage knowledge stores.

The Group Symbol Associator is similar to Leo J. Brannick's analog computer for medical diagnosis, Martin Lipkin and James Hardy's McBee punch card system for diagnosing hematological diseases, Keeve Brodman's Cornell Medical Index Health Questionnaire, Vladimir K. Zworykin's symptom spectra analog computer, and other "peek-a-boo" card systems and devices.

The challenge that these devices are trying to solve is locating or mapping illnesses that are suited for the patient's mix of standardized features or attributes (signs, symptoms, laboratory findings, etc.).

Nash claimed to have condensed a physician's memory of hundreds of pages of typical diagnostic tables to a little machine around a yard long.

Nash claimed that his Group Symbol Associator obeyed the "rule of mechanical experience conservation," which he coined.



“Will man crumble under the weight of the wealth of experience he has to bear and pass on to the next generation if our books and brains are reaching relative inadequacy?” he wrote. “I don't believe so. Power equipment and labor-saving gadgets took on the physical strain. Now is the time to usher in the age of thought-saving technologies” (Nash 1960b, 240).

Nash's device did more than aid memory.

He asserted that the machine took part in the logical analysis of the diagnostic procedure.

"Not only does the Group Symbol Associator represent the final results of various diagnostic classificatory thoughts, but it also displays the skeleton of the whole process as a simultaneous panorama of spectral patterns that correlate with changing degrees of completeness," Nash said.

"For each diagnostic occasion, it creates a map or pattern of the issue and functions as a physical jig to guide the mental process" (Paycha 1959, 661).

On October 14, 1953, a patent application for the invention was filed with the Patent Office in London.

At the 1958 Mechanization of Thought Processes Conference at the National Physical Laboratory (NPL) in the Teddington region of London, Nash conducted the first public demonstration of the Group Symbol Associator.

The NPL meeting in 1958 is notable for being just the second to be held on the topic of artificial intelligence.

In the late 1950s, the Mark III Model of the Group Symbol Associator became commercially available.

Nash hoped that doctors would take the Mark III with them when they were away from their offices and books.

“The GSA is tiny, affordable to create, ship, and disseminate,” Nash noted. “It is simple to use and does not need any maintenance. Even in outposts, ships, and other places, a person might have one” (Nash 1960b, 241).

Nash also published examples of “logoscopic photograms,” based on xerography (dry photocopying), that obtained the same results as his hardware device.

Medical Data Systems of Nottingham, England, produced the Group Symbol Associator in large quantities.

Yamanouchi Pharmaceutical Company distributed the majority of the Mark V devices in Japan.

In 1959, Nash's principal rival, the French ophthalmologist François Paycha, described the practical limits of the Group Symbol Associator.

He pointed out that in the identification of corneal diseases, where there are roughly 1,000 differentiable disorders and 2,000 separate indications and symptoms, such a gadget would become highly cumbersome.

The instrument was examined in 1975 by R. W. Pain of the Royal Adelaide Hospital in South Australia, who found it to be accurate in just a quarter of instances.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Computer-Assisted Diagnosis.


Further Reading:


Eden, Murray. 1960. “Recapitulation of Conference.” IRE Transactions on Medical Electronics ME-7, no. 4 (October): 232–38.

Nash, F. A. 1954. “Differential Diagnosis: An Apparatus to Assist the Logical Faculties.” Lancet 1, no. 6817 (April 24): 874–75.

Nash, F. A. 1960a. “Diagnostic Reasoning and the Logoscope.” Lancet 2, no. 7166 (December 31): 1442–46.

Nash, F. A. 1960b. “The Mechanical Conservation of Experience, Especially in Medicine.” IRE Transactions on Medical Electronics ME-7, no. 4 (October): 240–43.

Pain, R. W. 1975. “Limitations of the Nash Logoscope or Diagnostic Slide Rule.” Medical Journal of Australia 2, no. 18: 714–15.

Paycha, François. 1959. “Medical Diagnosis and Cybernetics.” In Mechanisation of Thought Processes, vol. 2, 635–67. London: Her Majesty’s Stationery Office.


Artificial Intelligence - What Are Expert Systems?

 






Expert systems are used to solve problems that would normally be addressed by human experts.


In the early decades of artificial intelligence research, they emerged as one of the most promising application strategies.

The core concept is to convert an expert's knowledge into a computer-based knowledge system.




Dan Patterson, a statistician and computer scientist at the University of Texas at El Paso, identifies several distinguishing properties of expert systems:


• They make decisions based on knowledge rather than facts.

• The task of representing heuristic knowledge in expert systems is daunting.

• Knowledge and the program are generally separated so that the same program can operate on different knowledge bases.

• Expert systems should be able to explain their decisions, represent knowledge symbolically, and have and use meta knowledge, that is, knowledge about knowledge.





(Patterson 2008). Expert systems generally reflect domain-specific knowledge.


The subject of medical research was a frequent test application for expert systems.

Expert systems were created as a tool to assist medical doctors in their work.

Symptoms were usually communicated by the patient in the form of replies to inquiries.

Based on its knowledge base, the system would next attempt to identify the ailment and, in certain cases, recommend relevant remedies.

MYCIN, a Stanford University-developed expert system for detecting bacterial infections and blood disorders, is one example.




Another well-known application, in engineering design, tries to capture the heuristic knowledge of the design process for motors and generators.


The expert system assists in the initial design phase, when choices such as the number of poles and whether to use AC or DC are made (Hoole et al. 2003).

The knowledge base and the inference engine are the two components that make up the core framework of expert systems.




The inference engine utilizes the knowledge base to make choices, whereas the knowledge base holds the expert's expertise.

In this way, the knowledge is isolated from the software that manipulates it.

Knowledge must first be gathered, then comprehended, categorized, and stored in order to create expert systems.

It is then retrieved to answer queries according to predetermined criteria.

The four main processes in the design of an expert system, according to Thomson Reuters chief scientist Peter Jackson, are obtaining information, representing that knowledge, directing reasoning via an inference engine, and explaining the expert system's answer (Jackson 1999).

The biggest challenge in building an expert system was acquiring domain knowledge.

Obtaining information from human specialists can be challenging.


Many factors contribute to the difficulty of acquiring knowledge, but the complexity of encoding heuristic and experiential knowledge is perhaps the most important.



The knowledge acquisition process is divided into five phases, according to Hayes-Roth et al. (1983).

These are: identification, or recognizing the problem and the data needed to arrive at a solution; conceptualization, or understanding the key concepts and the relationships between the data; formalization, or understanding the relevant search space; implementation, or converting formalized knowledge into a software program; and testing the rules for completeness and accuracy.


  • Production (rule-based) or non-production systems may be used to represent domain knowledge.
  • In rule-based systems, knowledge is represented by rules in the form of IF-THEN-ELSE expressions.



The inference process is carried out by iteratively going over the rules, either through a forward or backward chaining technique.



  • Forward chaining starts from known facts and asks what follows from them; backward chaining starts from a goal and works back to the rules and facts that could establish it.
  • In forward chaining, the left-hand side of a rule is assessed first: the conditions are verified and, if satisfied, the rule fires (also known as data-driven inference); a minimal sketch follows this list.
  • In backward chaining, rules are evaluated from the right-hand side: the desired outcome is taken first and its preconditions are then verified (also known as goal-driven inference).
  • CLIPS, a public domain example of an expert system tool that implements the forward chaining method, was created at NASA's Johnson Space Center. MYCIN is an expert system that works backward.
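
Here is a minimal forward-chaining sketch in the spirit of (but far simpler than) production systems such as CLIPS; the rules and facts are invented for illustration.

```python
# Each rule: if every condition fact is known, assert the conclusion fact.
RULES = [
    ({"has_fever", "has_rash"}, "suspect_infection"),
    ({"suspect_infection", "recent_travel"}, "order_blood_test"),
]

def forward_chain(initial_facts):
    """Fire rules whose conditions are satisfied until no new facts can be added."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # data-driven: check conditions, then assert
                changed = True
    return facts

print(forward_chain({"has_fever", "has_rash", "recent_travel"}))
# includes 'suspect_infection' and 'order_blood_test'
```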



Associative/semantic networks, frame representations, decision trees, and neural networks may be used in expert system designs based on nonproduction architectures.


Nodes make up an associative/semantic network, which may be used to represent hierarchical knowledge. 

  • An example of a system based on an associative network is CASNET.
  • The most well-known use of CASNET was the development of an expert system for glaucoma diagnosis and therapy.

In frame architectures, frames are structured sets of closely related knowledge.


  • PIP (Present Illness Program) is an example of a frame-based architecture.
  • MIT and Tufts-New England Clinical Center developed PIP to generate hypotheses regarding renal illness.

Top-down knowledge is represented via decision tree structures.


Blackboard system designs are complex systems in which the inference process's direction may be changed during runtime.


A blackboard system architecture may be seen in DARPA's HEARSAY domain independent expert system.


  • Knowledge is spread throughout a neural network in the form of nodes in neural network topologies.
  • Case-based reasoning attempts to find answers to a problem by examining previously solved examples.
  • A loose connection may be formed between case-based reasoning and judicial law, in which the decision of a comparable but previous case is used to solve a current legal matter.
  • Case-based reasoning is often implemented as a frame, which necessitates a more involved matching and retrieval procedure.



There are three options for constructing the knowledge base.


  • Knowledge may be elicited via an interview with a computer using interactive tools. This technique is exemplified by the computer-graphics-based OPAL software, which enabled clinicians with no prior computer programming experience to construct expert medical knowledge bases for the care of cancer patients.
  • Text scanning algorithms that read books into memory are a second alternative to human knowledge base creation.
  • Machine learning algorithms that build competence on their own, with or without supervision from a human expert, are a third alternative still under development.




DENDRAL, a project started at Stanford University in 1965, is an early example of a machine learning architecture project.


DENDRAL was created in order to study the molecular structure of organic molecules.


  • While DENDRAL followed a set of rules to complete its work, META-DENDRAL created its own rules.
  • META-DENDRAL chose the important data points to observe with the aid of a human chemist.




Expert systems may be created in a variety of methods.


  • User-friendly graphical user interfaces are used in interactive development environments to assist programmers as they code.
  • Special languages may be used in the construction of expert systems.
  • Prolog (Logic Programming) and LISP (List Processing) are two of the most common options.
  • Because Prolog is built on predicate logic, it belongs to the logic programming paradigm.
  • One of the first programming languages for artificial intelligence applications was LISP.



Expert system shells are often used by programmers.



A shell provides a platform for knowledge to be programmed into the system.


  • The shell is a layer without a knowledge base, as the name indicates.
  • The Java Expert System Shell (JESS) is a powerful expert system shell written in Java.


Many efforts have been made to blend disparate paradigms to create hybrid systems.


  • One hybrid approach seeks to combine logic-based and object-oriented systems.
  • Object orientation, despite its lack of a rigorous mathematical basis, is very useful in modeling real-world circumstances.

  • Knowledge is represented as objects that encompass both the data and the ways for working with it.
  • Object-oriented systems are more accurate models of real-world things than procedural programming.
  • The Object Inference Knowledge Specification Language (OI-KSL) is one way (Mascrenghe et al. 2002).



Although other languages, such as Visual Prolog, have merged object-oriented programming, OI-KSL takes a different approach.


Backtracking in Visual Prolog occurs inside the objects; that is, the methods backtrack.

OI-KSL takes backtracking to a whole new level, with the object itself being backtracked.

To cope with uncertainties in the given data, probability theory, heuristics, and fuzzy logic are sometimes utilized.

A fuzzy electric lighting system was one example of a Prolog implementation of fuzzy logic, in which the quantity of natural light influenced the voltage that flowed to the electric bulb (Mascrenghe 2002).

This allowed the system to reason in the face of uncertainty and with little data.
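
A toy version of such a fuzzy controller can be sketched as follows; the membership breakpoints and voltages are hypothetical and are not taken from the cited system.

```python
def membership_dark(light_level):
    """Degree (0-1) to which the room counts as 'dark'; breakpoints are illustrative."""
    if light_level <= 200:
        return 1.0
    if light_level >= 800:
        return 0.0
    return (800 - light_level) / 600   # linear ramp between the breakpoints

def bulb_voltage(light_level):
    """Blend full voltage (dark) and zero voltage (bright) by membership degree."""
    dark = membership_dark(light_level)
    bright = 1.0 - dark
    return dark * 230.0 + bright * 0.0  # defuzzify with a weighted average

for lux in (100, 500, 900):
    print(lux, round(bulb_voltage(lux), 1))   # 100 -> 230.0, 500 -> 115.0, 900 -> 0.0
```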


Interest in expert systems started to wane in the late 1990s, owing in part to unrealistic expectations for the technology and the expensive cost of upkeep.

Expert systems were unable to deliver on their promises.



Even today, technology generated in expert systems research is used in various fields like data science, chatbots, and machine intelligence.


  • Expert systems are designed to capture the collective knowledge that mankind has accumulated through millennia of learning, experience, and practice.



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.


See also: 


Clinical Decision Support Systems; Computer-Assisted Diagnosis; DENDRAL; Expert Systems.



Further Reading:


Hayes-Roth, Frederick, Donald A. Waterman, and Douglas B. Lenat, eds. 1983. Building Expert Systems. Teknowledge Series in Knowledge Engineering, vol. 1. Reading, MA: Addison Wesley.

Hoole, S. R. H., A. Mascrenghe, K. Navukkarasu, and K. Sivasubramaniam. 2003. “An Expert Design Environment for Electrical Devices and Its Engineering Assistant.” IEEE Transactions on Magnetics 39, no. 3 (May): 1693–96.

Jackson, Peter. 1999. Introduction to Expert Systems. Third edition. Reading, MA: Addison-Wesley.

Mascrenghe, A. 2002. “The Fuzzy Electric Bulb: An Introduction to Fuzzy Logic with Sample Implementation.” PC AI 16, no. 4 (July–August): 33–37.

Mascrenghe, A., S. R. H. Hoole, and K. Navukkarasu. 2002. “Prototype for a New Electromagnetic Knowledge Specification Language.” In CEFC Digest. Perugia, Italy: IEEE.

Patterson, Dan W. 2008. Introduction to Artificial Intelligence and Expert Systems. New Delhi, India: PHI Learning.

Rich, Elaine, Kevin Knight, and Shivashankar B. Nair. 2009. Artificial Intelligence. New Delhi, India: Tata McGraw-Hill.


