
AI - SyNAPSE

 


 

Project SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) is a collaborative cognitive computing effort sponsored by the Defense Advanced Research Projects Agency (DARPA) to develop the architecture for a brain-inspired neurosynaptic computer core.

The project, which began in 2008, is a collaboration between IBM Research, HRL Laboratories, and Hewlett-Packard.

Researchers from a number of universities are also involved in the project.


The acronym SyNAPSE alludes to the Ancient Greek word σύναψις, meaning "conjunction," and refers to the junctions across which information passes between neurons in the brain.



The project's purpose is to reverse-engineer the functional intelligence of rats, cats, or potentially humans to produce a flexible, ultra-low-power system for use in robots.

The initial DARPA announcement called for a machine that could "scale to biological levels" and break through the "algorithmic-computational paradigm" (DARPA 2008, 4).

In other words, they needed an electronic computer that could analyze real-world complexity, respond to external inputs, and do so in near-real time.

SyNAPSE is a reaction to the need for computer systems that can adapt to changing circumstances and understand the environment while being energy efficient.

Scientists at SyNAPSE are working on neuromorphic electronic systems that are analogous to biological nervous systems and capable of processing data from complex environments.




It is envisaged that such systems will gain a considerable degree of autonomy in the future.

The SyNAPSE project takes an interdisciplinary approach, drawing on concepts from areas as diverse as computational neuroscience, artificial neural networks, materials science, and cognitive science.


Basic science and engineering will need to be advanced by SyNAPSE in the following areas: 


  •  simulation—the digital replication of systems in order to verify functioning prior to the fabrication of physical neuromorphological systems.





In 2008, IBM Research and HRL Laboratories received the first SyNAPSE grant.

Various aspects of the grant requirements were subcontracted to a variety of vendors and contractors by IBM and HRL.

The project was split into four parts, each of which began following a nine-month feasibility assessment.

The first simulator, C2, was released in 2009 and ran on a BlueGene/P supercomputer, simulating cortical networks with 10⁹ neurons and 10¹³ synapses, comparable in scale to a cat's brain.

The software was widely criticized after the leader of the Blue Brain Project charged that the simulation did not achieve the complexity claimed.

Each neurosynaptic core measures 2 millimeters by 3 millimeters, and its design is informed by the biology of the human brain.

The relationship between the cores and actual brains is more symbolic than literal: computation stands in for neurons, memory for synapses, and communication for axons and dendrites.

This abstraction lets the team describe a biological system in terms of a hardware implementation.





HRL Labs announced in 2012 that it had created the world's first working memristor array layered atop a conventional CMOS circuit.

The term "memristor," which combines the words "memory" and "transistor," was invented in the 1970s.

Memory and logic functions are integrated in a memristor.

In 2012, project organizers reported the successful large-scale simulation of 530 billion neurons and 100 trillion synapses on the Blue Gene/Q Sequoia machine at Lawrence Livermore National Laboratory in California, then the world's second-fastest supercomputer.





In 2014, IBM presented the TrueNorth processor, a 5.4-billion-transistor chip with 4,096 neurosynaptic cores coupled through an intrachip network, comprising 1 million programmable spiking neurons and 256 million configurable synapses.

Finally, in 2016, an end-to-end ecosystem (including scalable systems, software, and applications) that could fully exploit the TrueNorth chip was unveiled.

At the time, there were reports on the deployment of applications such as interactive handwritten character recognition and data-parallel text extraction and recognition.

TrueNorth's cognitive computing chips have since been tested in simulations such as a virtual robot driving through an environment and playing the video game Pong.

DARPA has been interested in the construction of brain-inspired computer systems since the 1980s.

Dharmendra Modha, director of IBM Almaden's Cognitive Computing Initiative, and Narayan Srinivasa, head of HRL's Center for Neural and Emergent Systems, lead Project SyNAPSE.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Cognitive Computing; Computational Neuroscience.


References And Further Reading


Defense Advanced Research Projects Agency (DARPA). 2008. “Systems of Neuromorphic Adaptive Plastic Scalable Electronics.” DARPA-BAA 08-28. Arlington, VA: DARPA, Defense Sciences Office.

Hsu, Jeremy. 2014. “IBM’s New Brain.” IEEE Spectrum 51, no. 10 (October): 17–19.

Merolla, Paul A., et al. 2014. “A Million Spiking-Neuron Integrated Circuit with a Scalable Communication Network and Interface.” Science 345, no. 6197 (August): 668–73.

Monroe, Don. 2014. “Neuromorphic Computing Gets Ready for the (Really) Big Time.” Communications of the ACM 57, no. 6 (June): 13–15.




Artificial Intelligence - What Is Computational Neuroscience?

 



Computational neuroscience (CNS) is a branch of neuroscience that applies the notion of computation to the study of the brain.

Eric Schwartz coined the phrase "computational neuroscience" in 1985 to replace the words "neural modeling" and "brain theory," which were previously used to describe different forms of nervous system study.

At the heart of CNS is the idea that processes in the nervous system can be treated as computations, since their state transitions can be described as relations between abstract properties.

In other words, explanations of effects in neural systems are descriptions of information transformed, stored, and represented, rather than causal descriptions of interactions among physically distinct elements.

As a result, CNS aims to develop computational models to better understand how the nervous system works in terms of the information processing characteristics of the brain's parts.

Constructing a model of how interacting neurons might build basic components of cognition is one example.

A brain map, on the other hand, does not disclose the nervous system's computational processes, but it can be used as a constraint on theoretical models.

Information sharing, for example, has costs in terms of the physical connections between communicating areas: regions that communicate frequently (requiring high bandwidth and low latency) tend to be clustered together.

The description of neural systems as carrying out computations is central to computational neuroscience, and it contradicts the claim that computational constructs belong exclusively to the explanatory framework of psychology, that is, that human cognitive capacities can be characterized and confirmed independently of how they are implemented in the nervous system.

For example, when it became clear in 1973 that cognitive processes could not be understood by analyzing the results of one-dimensional questions and scenarios, a popular approach in cognitive psychology at the time, Allen Newell argued that only synthesis with computer simulation could reveal the complex interactions among a proposed mechanism's components and show whether the proposed mechanism was correct.

David Marr (1945–1980) proposed the first computational neuroscience framework.

This framework tries to give a conceptual starting point for thinking about levels in the context of computing by nervous structure.

It reflects the three-level structure used in computer science: abstract problem analysis, algorithm, and physical implementation.

The framework, however, has drawbacks: it consists of three loosely linked levels and takes a rigid top-down approach that treats neurobiological facts as mere details of the implementation level.

As a result, certain events are thought to be explicable on just one or two levels.

As a result, the Marr levels framework does not correspond to the levels of nervous system organization (molecules, synapses, neurons, nuclei, circuits, networks, layers, maps, and systems), nor does it explain the nervous system's emergent properties.

Computational neuroscience takes a bottom-up approach, beginning with neurons and illustrating how computational functions and their implementations with neurons result in dynamic interactions between neurons.

Models of connectivity and dynamics, decoding models, and representational models are the three kinds of models that try to get computational understanding from brain-activity data.

The correlation matrix, which displays pairwise functional connectivity between locations and establishes the features of related areas, is used in connectivity models.
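To make this concrete, here is a minimal sketch of such a correlation-matrix analysis, with simulated time series standing in for real recordings; the region setup is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated activity: 200 time points for 4 brain regions.
# Regions 0 and 1 share a common driving signal; regions 2 and 3 do not.
activity = rng.standard_normal((200, 4))
shared_drive = rng.standard_normal(200)
activity[:, 0] += shared_drive
activity[:, 1] += shared_drive

# Pairwise functional connectivity: region-by-region correlation matrix.
fc = np.corrcoef(activity.T)
print(np.round(fc, 2))  # a high off-diagonal entry links regions 0 and 1
```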

Analyses of effective connectivity and large-scale brain dynamics go beyond the generic linear statistical models used in activation- and information-based brain mapping: because they are generative models of brain dynamics, they can produce data at the level of the measurements.

The goal of the decoding models is to figure out what information is stored in each brain area.

When an area is designated as representing some piece of knowledge, its activity becomes a functional entity that informs the regions receiving its signals about that content.

In the simplest scenario, decoding identifies which of the two stimuli elicited a recorded response pattern.

The representation's content might be the sensory stimulus's identity, a stimulus feature (such as orientation), or an abstract variable required for a cognitive operation or action.
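As a sketch of the simplest decoding scenario just described, the code below trains a nearest-centroid decoder on simulated voxel patterns; any standard classifier would serve, and all data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 50

# Simulated response patterns: each stimulus evokes a characteristic
# pattern plus measurement noise.
pattern_a = rng.standard_normal(n_voxels)
pattern_b = rng.standard_normal(n_voxels)
trials_a = pattern_a + 0.5 * rng.standard_normal((40, n_voxels))
trials_b = pattern_b + 0.5 * rng.standard_normal((40, n_voxels))

# Nearest-centroid decoding: which stimulus produced a new pattern?
centroid_a, centroid_b = trials_a.mean(0), trials_b.mean(0)

def decode(pattern):
    dist_a = np.linalg.norm(pattern - centroid_a)
    dist_b = np.linalg.norm(pattern - centroid_b)
    return "A" if dist_a < dist_b else "B"

test = pattern_a + 0.5 * rng.standard_normal(n_voxels)
print(decode(test))  # expected: "A"
```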

Decoding and multivariate pattern analysis have been used to determine which components must be included in a model of brain computation.

Decoding, however, does not by itself provide a model of brain computation; it reveals some of what an area represents without explaining how the brain computes it.

Because they strive to characterize areas' reactions to arbitrary stimuli, representation models go beyond decoding.

Encoding models, pattern component models, and representational similarity analysis are three forms of representational model analysis that have been presented.

All three analyses are based on multivariate descriptions of the experimental conditions and test assumptions about representational space.

In encoding models, the activity profile of each voxel across stimuli is predicted as a linear combination of the model's properties.

The distribution of the activity profiles that define the representational space is treated as a multivariate normal distribution in pattern component models.

The representational space is defined by the representational dissimilarities of the activity patterns evoked by the stimuli in representational similarity analysis.
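The encoding-model idea reduces to a linear regression per voxel. The sketch below uses simulated data and plain least squares; real analyses add regularization and cross-validation, and every quantity here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_stimuli, n_features, n_voxels = 100, 8, 20

# Stimulus feature matrix (e.g., orientation energy in 8 bands).
X = rng.standard_normal((n_stimuli, n_features))
true_weights = rng.standard_normal((n_features, n_voxels))

# Measured voxel responses = linear combination of features + noise.
Y = X @ true_weights + 0.1 * rng.standard_normal((n_stimuli, n_voxels))

# Fit the encoding weights by least squares, one model per voxel.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.allclose(W, true_weights, atol=0.1))  # True: weights recovered
```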

These brain-activity models do not, however, test how the information-processing function itself might operate.

Task performance models are used to describe cognitive processes in terms of algorithms.

These models are put to the test using experimental data and, in certain cases, data from brain activity.

Neural network models and cognitive models are the two basic types of models.

Models of neural networks are created using varying degrees of biological information, ranging from neurons to maps.

Neural networks embody the parallel distributed processing paradigm and support multiple stages of linear-nonlinear signal transformation.

To enhance task performance, models often incorporate millions of parameters (connection weights).

Simple models cannot capture complex cognitive processes, hence the large number of parameters.

Deep convolutional neural network models have been used to predict brain representations of novel images in the ventral visual stream of primates.

The representations in the first few layers of neural networks are comparable to those in the early visual cortex.

Higher layers resemble the representation in the inferior temporal cortex, in that both support decoding of an object's position, size, and pose, as well as its category.

Various studies have shown that the internal representations of deep convolutional neural networks provide the best current models of visual image representations in the inferior temporal cortex of humans and other animals.

When a wide range of models was compared, those optimized for object categorization described the cortical representation best.

Cognitive models are artificial intelligence applications in computational neuroscience that target information processing without reference to neurological components (neurons, axons, etc.).

Production systems, reinforcement learning, and Bayesian cognitive models are the three kinds of models.

They use logic and predicates, and they work with symbols rather than signals.

There are various advantages of employing artificial intelligence in computational neuroscience research.

  1. First, although a vast quantity of information about the brain has accumulated over time, a true understanding of how the brain functions remains elusive.
  2. Second, networks of neurons produce emergent effects, but how these networks operate is still unknown.
  3. Third, although the brain has been crudely mapped, and the functions of distinct brain areas (mostly sensory and motor) are broadly understood, a precise map is still lacking.

Furthermore, some of the information gathered through experiment or observation may be of little use, and the link between synaptic learning rules and computation remains largely unclear.

Production system models were the first models for explaining reasoning and problem solving.

A "production" is a cognitive activity that occurs as a consequence of the "if-then" rule, in which "if" defines the set of circumstances under which the range of productions ("then" clause) may be carried out.

When the conditions of several rules are satisfied at once, the model uses a conflict-resolution algorithm to choose the most appropriate production.
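A toy production system illustrates the if-then structure and conflict resolution. The rules below and the "most specific rule wins" policy are hypothetical; real cognitive architectures such as Soar or ACT-R are far more elaborate.

```python
# Minimal production system: working memory is a set of facts,
# productions are (name, conditions, action) triples.
rules = [
    ("greet",     {"person_present"},           "say hello"),
    ("greet_vip", {"person_present", "is_vip"}, "stand and greet warmly"),
]

def run(working_memory):
    # Find all productions whose "if" conditions are satisfied.
    matched = [r for r in rules if r[1] <= working_memory]
    if not matched:
        return None
    # Conflict resolution: prefer the most specific rule
    # (the one with the most satisfied conditions).
    name, conditions, action = max(matched, key=lambda r: len(r[1]))
    return name, action

print(run({"person_present"}))            # ('greet', 'say hello')
print(run({"person_present", "is_vip"}))  # ('greet_vip', 'stand and greet warmly')
```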

Production models generate a sequence of predictions that resembles a conscious stream of cognition.

In newer applications, the same approach is used to predict regional mean fMRI (functional magnetic resonance imaging) activation time courses.

Reinforcement learning models are used in a variety of areas to simulate optimal decision-making.

In neurobiological systems, reinforcement learning is thought to be implemented in the basal ganglia.

The agent might learn a "value function" that links each state to the predicted total reward.

The agent may pick the most promising action if it can forecast which state each action will lead to and understands the values of those states.

The agent may also learn a "policy" that links each state to promising actions.

Exploitation (which yields immediate reward) must be balanced against exploration (which benefits learning and brings long-term reward).
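The ingredients named above (a value function, a policy, and the exploration-exploitation trade-off) appear together in classic tabular Q-learning. The toy chain world below is invented for illustration, with a deliberately high exploration rate to keep the demo fast.

```python
import random

# Toy chain world: states 0..4; actions -1 (left) and +1 (right).
# Reaching state 4 yields reward 1 and ends the episode.
N_STATES, ALPHA, GAMMA, EPSILON = 5, 0.1, 0.9, 0.3
Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, 1)}

def greedy(s):
    """Exploit: pick the action with the highest learned value."""
    return max((-1, 1), key=lambda a: Q[(s, a)])

for _ in range(500):                  # training episodes
    s = 0
    for _ in range(200):              # step cap per episode
        # Explore with probability EPSILON, otherwise exploit.
        a = random.choice((-1, 1)) if random.random() < EPSILON else greedy(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning: move Q(s, a) toward reward + discounted future value.
        best_next = max(Q[(s_next, -1)], Q[(s_next, 1)])
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = s_next
        if reward:
            break

print([greedy(s) for s in range(N_STATES - 1)])  # learned policy: [1, 1, 1, 1]
```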

The Bayesian models show what the brain should really calculate in order to perform at its best.

These models enable inductive inference, which requires prior knowledge and is beyond the capability of neural network models.

The models have been used to explain cognitive biases as the result of past beliefs, as well as to comprehend fundamental sensory and motor processes.

How neurons might represent probability distributions, for example, has been investigated theoretically using Bayesian models and compared with experimental evidence.

These exercises show that connecting Bayesian inference to actual brain implementation remains difficult: the brain "cuts corners" in trying to be efficient, and such approximations may explain departures from statistical optimality.

The concept of a brain doing computations is central to computational neuroscience, so researchers are using modeling and analysis of information processing properties of nervous system elements to try to figure out how complex brain functions work.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Bayesian Inference; Cognitive Computing.


Further Reading


Kaplan, David M. 2011. “Explanation and Description in Computational Neuroscience.” Synthese 183, no. 3: 339–73.

Kriegeskorte, Nikolaus, and Pamela K. Douglas. 2018. “Cognitive Computational Neuroscience.” Nature Neuroscience 21, no. 9: 1148–60.

Schwartz, Eric L., ed. 1993. Computational Neuroscience. Cambridge, MA: Massachusetts Institute of Technology.

Trappenberg, Thomas. 2009. Fundamentals of Computational Neuroscience. New York: Oxford University Press.



Artificial Intelligence - What Is Cognitive Computing?


 


Self-learning hardware and software systems that use machine learning, natural language processing, pattern recognition, human-computer interaction, and data mining technologies to mimic the human brain are referred to as cognitive computing.


The term "cognitive computing" refers to the use of advances in cognitive science to create new and complex artificial intelligence systems.


Cognitive systems aren't designed to take the place of human thinking, reasoning, problem-solving, or decision-making; rather, they're meant to supplement or aid people.

Cognitive computing also frequently refers to a collection of strategies promoting the aims of affective computing, which entails narrowing the gap between computer technology and human emotions.

Real-time adaptive learning approaches, interactive cloud services, interactive memories, and contextual understanding are some of these methodologies.

To conduct quantitative assessments of organized statistical data and aid in decision-making, cognitive analytical tools are used.

Other scientific and economic systems often include these tools.

Complex event processing systems utilize complex algorithms to assess real-time data regarding events for patterns and trends, offer choices, and make judgments.

These kinds of systems are widely used in algorithmic stock trading and credit card fraud detection.

Face recognition and complex image recognition are now possible with image recognition systems.

Machine learning algorithms build models from data sets and improve as new information is added.

Neural networks, Bayesian classifiers, and support vector machines may all be used in machine learning.

Natural language processing entails the use of software to extract meaning from enormous amounts of data generated by human conversation.

Watson from IBM and Siri from Apple are two examples.

Natural language comprehension is perhaps cognitive computing's Holy Grail or "killer app," and many people associate natural language processing with cognitive computing.

Heuristic programming and expert systems are two of the oldest branches of so-called cognitive computing.

Since the 1980s, there have been four reasonably "full" cognitive computing architectures: Cyc, Soar, Society of Mind, and Neurocognitive Networks.

Speech recognition, sentiment analysis, face identification, risk assessment, fraud detection, and behavioral suggestions are some of the applications of cognitive computing technology.

Used together, these applications are referred to as "cognitive analytics" systems.

In the aerospace and defense industries, agriculture, travel and transportation, banking, health care and the life sciences, entertainment and media, natural resource development, utilities, real estate, retail, manufacturing and sales, marketing, customer service, hospitality, and leisure, these systems are in development or are being used.

Netflix's movie rental suggestion algorithm is an early example of predictive cognitive computing.

Computer vision algorithms are being used by General Electric to detect tired or distracted drivers.

Customers of Domino's Pizza can place orders online by speaking with a virtual assistant named Dom.

Elements of Google Now, a predictive search feature that debuted in Google applications in 2012, assist users in predicting road conditions and anticipated arrival times, locating hotels and restaurants, and remembering anniversaries and parking spots.


In IBM marketing materials, the term "cognitive" computing appears frequently.

Cognitive computing, according to the company, is a subset of "augmented intelligence," a term it prefers to "artificial intelligence."


The Watson machine from IBM is frequently referred to as a "cognitive computer" because it deviates from the traditional von Neumann architecture and instead draws inspiration from neural networks.

Neuroscientists are researching the inner workings of the human brain, searching for connections between neuronal assemblies and mental phenomena, and generating new concepts of mind.

Hebbian theory is an example of a neuroscientific theory that underpins machine learning implementations in cognitive computing.

The Hebbian theory is a proposed explanation for neural adaptation during the learning process.

Donald Hebb initially proposed the hypothesis in his 1949 book The Organization of Behavior.

Learning, according to Hebb, is a process in which the causal induction of recurrent or persistent neuronal firing or activity causes neural traces to become stable.

"Any two cells or systems of cells that are consistently active at the same time will likely to become'associated,' such that activity in one favors activity in the other," Hebb added (Hebb 1949, 70).

"Cells that fire together, wire together," is how the idea is frequently summarized.

According to this hypothesis, the connection of neuronal cells and tissues generates neurologically defined "engrams" that explain how memories are preserved in the brain as biophysical or biochemical changes.

Engrams' actual location, as well as the procedures by which they are formed, are currently unknown.

IBM machines are said to learn by aggregating information into a computational convolution or neural network architecture made up of weights stored in a parallel memory system.

Intel introduced Loihi, a cognitive chip that replicates the functions of neurons and synapses, in 2017.

Loihi is claimed to be 1,000 times more energy efficient than existing neurosynaptic devices, with 128 cores of 1,024 simulated neurons each, for a total of 131,072 simulated neurons per chip.

Instead of relying on simulated neural networks and parallel processing with the overarching goal of developing artificial cognition, Loihi uses purpose-built neural pathways imprinted in silicon.

These neuromorphic processors are likely to play a significant role in future portable and wire-free electronics, as well as automobiles.

Roger Schank, a cognitive scientist and artificial intelligence pioneer, is a vocal opponent of cognitive computing technology.

"Watson isn't thinking. You can only reason if you have objectives, plans, and strategies to achieve them, as well as an understanding of other people's ideas and a knowledge of prior events to draw on.

"Having a point of view is also beneficial," he writes.

"How does Watson feel about ISIS, for example?" Is this a stupid question? ISIS is a topic on which actual thinking creatures have an opinion" (Schank 2017).



~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Computational Neuroscience; General and Narrow AI; Human Brain Project; SyNAPSE.


Further Reading

Hebb, Donald O. 1949. The Organization of Behavior. New York: Wiley.

Kelly, John, and Steve Hamm. 2013. Smart Machines: IBM’s Watson and the Era of Cognitive Computing. New York: Columbia University Press.

Modha, Dharmendra S., Rajagopal Ananthanarayanan, Steven K. Esser, Anthony Ndirango, Anthony J. Sherbondy, and Raghavendra Singh. 2011. “Cognitive Computing.” Communications of the ACM 54, no. 8 (August): 62–71.

Schank, Roger. 2017. “Cognitive Computing Is Not Cognitive at All.” FinTech Futures, May 25. https://www.bankingtech.com/2017/05/cognitive-computing-is-not-cognitive-at-all

Vernon, David, Giorgio Metta, and Giulio Sandini. 2007. “A Survey of Artificial Cognitive Systems: Implications for the Autonomous Development of Mental Capabilities in Computational Agents.” IEEE Transactions on Evolutionary Computation 11, no. 2: 151–80.







Artificial Intelligence - What Is Bayesian Inference?

 





Bayesian inference is a method of calculating the likelihood of a proposition's validity based on a previous estimate of its likelihood plus any new and relevant facts.
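Stated formally, for a hypothesis H and new evidence E, the theorem reads:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}$$

Here P(H) is the prior estimate of the proposition's likelihood, and P(H | E) is the updated (posterior) probability once the new and relevant facts are taken into account.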

In the twentieth century, Bayes' Theorem, from which Bayesian statistics are derived, was a prominent mathematical technique employed in expert systems.

In the twenty-first century, Bayes' Theorem has been applied to problems such as robot locomotion, weather forecasting, jurimetrics (the application of quantitative methods to law), phylogenetics (the evolutionary relationships among organisms), and pattern recognition.

It is also used in email spam filters and can be applied to the famous Monty Hall problem.

The mathematical theorem was derived by Reverend Thomas Bayes (1702–1761) of England and published posthumously in the Philosophical Transactions of the Royal Society of London in 1763 as "An Essay Towards Solving a Problem in the Doctrine of Chances." Bayes' Theorem of Inverse Probabilities is another name for it.

A classic article titled "Reasoning Foundations of Medical Diagnosis," written by George Washington University electrical engineer Robert Ledley and Rochester School of Medicine radiologist Lee Lusted and published by Science in 1959, was the first notable discussion of Bayes' Theorem as applied to the field of medical artificial intelligence.

As Lusted later recalled, medical knowledge in the mid-twentieth century was frequently presented as the symptoms associated with a disease, rather than the diseases associated with a symptom.

They came up with the notion of expressing medical knowledge as the likelihood of a disease given the patient's symptoms using Bayesian reasoning.

Bayesian statistics are conditional, allowing one to determine the likelihood that a specific disease is present based on a specific symptom, but only with prior knowledge of how frequently the disease and symptom are correlated, as well as how frequently the symptom is present in the absence of the disease.

This is much like what Alan Turing described as the factor in favor of the hypothesis provided by the evidence.

The symptom-disease complex, which involves several symptoms in a patient, may also be resolved using Bayes' Theorem.

In computer-aided diagnosis, Bayesian statistics combines the prevalence of each disease in a population with the probability of each symptom given each disease to determine the probability of all possible diseases given a patient's symptom-disease complex.
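The calculation just described can be sketched in a few lines of Python. All diseases, priors, and conditional probabilities below are invented for illustration (they are not from Warner's matrix), and symptoms are assumed conditionally independent given the disease, the usual "naive Bayes" simplification.

```python
# Toy Bayesian diagnosis: P(disease | symptoms) from priors and
# per-disease symptom likelihoods. All numbers are hypothetical.

# Prior prevalence of each condition in the population.
priors = {"disease_A": 0.01, "disease_B": 0.05, "healthy": 0.94}

# P(symptom present | condition), assuming symptoms are conditionally
# independent given the condition (the "naive Bayes" simplification).
likelihoods = {
    "disease_A": {"fever": 0.90, "rash": 0.80},
    "disease_B": {"fever": 0.70, "rash": 0.10},
    "healthy":   {"fever": 0.05, "rash": 0.02},
}

def posterior(observed_symptoms):
    """Return P(condition | observed symptoms) for every condition."""
    unnormalized = {}
    for condition, prior in priors.items():
        p = prior
        for symptom in observed_symptoms:
            p *= likelihoods[condition][symptom]
        unnormalized[condition] = p
    total = sum(unnormalized.values())  # P(evidence), for normalization
    return {c: p / total for c, p in unnormalized.items()}

print(posterior(["fever", "rash"]))
# disease_A dominates despite its low prior, because it best
# explains the joint occurrence of both symptoms.
```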

All induction, according to Bayes' Theorem, is statistical.

In 1960, the theory was used to generate the posterior probability of certain illnesses for the first time.

In that year, University of Utah cardiologist Homer Warner, Jr., used Bayesian statistics to detect well-defined congenital heart problems at Salt Lake's Latter-Day Saints Hospital, thanks to his access to a Burroughs 205 digital computer.

The theory was used by Warner and his team to calculate the chances that an undiscovered patient having identifiable symptoms, signs, or laboratory data would fall into previously recognized illness categories.

As additional information became available, the computer program could be run again and again, generating or ranking diagnoses through serial observation.

The Burroughs computer outperformed any professional cardiologist in applying Bayesian conditional-probability algorithms to a symptom-disease matrix of thirty-five cardiac diseases, according to Warner.

John Overall, Clyde Williams, and Lawrence Fitzgerald for thyroid problems; Charles Nugent for Cushing's illness; Gwilym Lodwick for primary bone tumors; Martin Lipkin for hematological diseases; and Tim de Dombal for acute abdominal discomfort were among the early supporters of Bayesian estimation.

In the previous half-century, the Bayesian model has been expanded and changed several times to account for or compensate for sequential diagnosis and conditional independence, as well as to weight other elements.

Poor prediction of rare diseases, insufficient discrimination between diseases with similar symptom complexes, inability to quantify qualitative evidence, troubling conditional dependence between evidence and hypotheses, and the enormous amount of manual labor required to maintain the requisite joint probability distribution tables are all criticisms leveled at Bayesian computer-aided diagnosis.

Outside of the populations for which they were intended, Bayesian diagnostic helpers have been chastised for their shortcomings.

When rule-based decision support algorithms became more prominent in the mid-1970s, the application of Bayesian statistics in differential diagnosis reached a low point.

In the 1980s, Bayesian approaches resurfaced and are now extensively employed in the area of machine learning.

From the concept of Bayesian inference, artificial intelligence researchers have developed robust techniques for supervised learning, hidden Markov models, and mixed approaches for unsupervised learning.

Bayesian inference has been controversially utilized in artificial intelligence algorithms that aim to calculate the conditional chance of a crime being committed, to screen welfare recipients for drug use, and to identify prospective mass shooters and terrorists in the real world.

The method has come under fire once again, especially where screening involves infrequent or severe events, since the AI system may act arbitrarily and flag too many people as being at risk of the unwanted behavior.

In the United Kingdom, Bayesian inference has also made its way into the courtroom.

The defense team in Regina v. Adams (1996) offered jurors the Bayesian approach to help them form an unbiased mechanism for combining the introduced evidence, which included a DNA profile and varying match probability calculations, and for constructing a personal threshold for convicting the accused "beyond a reasonable doubt."

Before Ledley, Lusted, and Warner revived Bayes' Theorem in the 1950s, it had been "rediscovered" multiple times.

Pierre-Simon Laplace, the Marquis de Condorcet, and George Boole were among the historical figures who saw merit in the Bayesian approach to probability.

The Monty Hall problem, named after the host of the popular game show Let's Make a Deal, involves a contestant deciding whether to stay with the door they have chosen or switch to the other unopened door after Monty Hall (who knows where the prize is) opens one to reveal a goat.

Switching doors, contrary to popular belief, doubles your odds of winning under conditional probability.
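A short simulation makes the counterintuitive result easy to verify; this sketch of the standard three-door game is illustrative only.

```python
import random

def play(switch: bool) -> bool:
    """Play one round of the Monty Hall game; return True on a win."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    choice = random.choice(doors)
    # Monty opens a door that is neither the contestant's pick nor the prize.
    opened = random.choice([d for d in doors if d != choice and d != prize])
    if switch:
        # Switch to the one remaining unopened door.
        choice = next(d for d in doors if d != choice and d != opened)
    return choice == prize

trials = 100_000
for strategy in (False, True):
    wins = sum(play(strategy) for _ in range(trials))
    print(f"switch={strategy}: win rate = {wins / trials:.3f}")
# Expected output: ~0.333 when staying, ~0.667 when switching.
```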


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Computational Neuroscience; Computer-Assisted Diagnosis.


Further Reading

Ashley, Kevin D., and Stefanie Brüninghaus. 2006. “Computer Models for Legal Prediction.” Jurimetrics 46, no. 3 (Spring): 309–52.

Barnett, G. Octo. 1968. “Computers in Patient Care.” New England Journal of Medicine 279 (December): 1321–27.

Bayes, Thomas. 1763. “An Essay Towards Solving a Problem in the Doctrine of Chances.” Philosophical Transactions 53 (December): 370–418.

Donnelly, Peter. 2005. “Appealing Statistics.” Significance 2, no. 1 (February): 46–48.

Fox, John, D. Barber, and K. D. Bardhan. 1980. “Alternatives to Bayes: A Quantitative Comparison with Rule-Based Diagnosis.” Methods of Information in Medicine 19, no. 4 (October): 210–15.

Ledley, Robert S., and Lee B. Lusted. 1959. “Reasoning Foundations of Medical Diagnosis.” Science 130, no. 3366 (July): 9–21.

Lusted, Lee B. 1991. “A Clearing ‘Haze’: A View from My Window.” Medical Decision Making 11, no. 2 (April–June): 76–87.

Warner, Homer R., Jr., A. F. Toronto, and L. G. Veasey. 1964. “Experience with Bayes’ Theorem for Computer Diagnosis of Congenital Heart Disease.” Annals of the New York Academy of Sciences 115: 558–67.

