
AI - SyNAPSE

 


 

Project SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) is a collaborative cognitive computing effort sponsored by the Defense Advanced Research Projects Agency (DARPA) to develop the architecture for a brain-inspired neurosynaptic computer core.

The project, which began in 2008, is a collaboration between IBM Research, HRL Laboratories, and Hewlett-Packard.

Researchers from a number of universities are also involved in the project.


The acronym SyNAPSE echoes the Ancient Greek word σύναψις (synapsis), which means "conjunction" and refers to the neural connections that carry information within the brain.



The project's purpose is to reverse-engineer the functional intelligence of rats, cats, or potentially humans to produce a flexible, ultra-low-power system for use in robots.

The initial DARPA announcement called for a machine that could "scale to biological levels" and break through the "algorithmic-computational paradigm" (DARPA 2008, 4).

In other words, they needed an electronic computer that could analyze real-world complexity, respond to external inputs, and do so in near-real time.

SyNAPSE is a reaction to the need for computer systems that can adapt to changing circumstances and understand the environment while being energy efficient.

Scientists at SyNAPSE are working on neuromorphic electronics systems that are analogous to biological nervous systems and capable of processing data from complex settings.




It is envisaged that such systems would gain a considerable degree of autonomy in the future.

The SyNAPSE project takes an interdisciplinary approach, drawing on concepts from areas as diverse as computational neuroscience, artificial neural networks, materials science, and cognitive science.


Basic science and engineering will need to be expanded by SyNAPSE in several areas, among them:


  •  simulation—the digital replication of systems in order to verify functioning before physical neuromorphic systems are installed.





In 2008, IBM Research and HRL Laboratories received the first SyNAPSE grant.

IBM and HRL subcontracted various aspects of the grant requirements to a range of vendors and contractors.

The project was split into four phases, each of which began following a nine-month feasibility assessment.

The first simulator, C2, was released in 2009. Running on a BlueGene/P supercomputer, it performed cortical simulations with 10⁹ neurons and 10¹³ synapses, comparable in scale to the brain of a cat.

The software was criticized after the leader of the Blue Brain Project revealed that the simulation did not achieve the complexity claimed.

Each neurosynaptic core is 2 millimeters by 3 millimeters in size, and its design is derived from the biology of the human brain.

The relationship between the cores and actual brains is more analogical than literal.

Computation stands in for neurons, memory stands in for synapses, and communication stands in for axons and dendrites.

This mapping enables the team to describe a hardware implementation of a biological system.
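
To give a feel for how such an abstraction can be expressed in software, here is a minimal sketch of a spiking network in Python, using a simple leaky integrate-and-fire neuron model. The sizes, weights, and update rule are illustrative assumptions, not IBM's actual core design.

```python
# Minimal sketch of the neuron/synapse/axon abstraction described above,
# using a leaky integrate-and-fire model. Illustrative only; this does not
# reflect IBM's actual neurosynaptic core design.
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 256
weights = rng.normal(0.0, 0.5, (n_neurons, n_neurons))  # "synapses": memory
potential = np.zeros(n_neurons)                          # neuron state
threshold, leak = 1.0, 0.9

spikes = rng.random(n_neurons) < 0.05  # initial random activity

for step in range(10):
    # "axons/dendrites": communication of spikes through the synaptic matrix
    potential = leak * potential + weights @ spikes
    spikes = potential >= threshold       # "neurons": computation (fire or not)
    potential[spikes] = 0.0               # reset after firing
    print(f"step {step}: {spikes.sum()} neurons fired")
```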





HRL Labs stated in 2012 that it had created the world's first functioning memristor array layered atop a conventional CMOS circuit.

The term "memristor," which combines the words "memory" and "transistor," was invented in the 1970s.

A memristor integrates memory and logic functions in a single device.
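
The "memory" half of the name can be illustrated with a toy model in which the device's resistance depends on how much charge has already flowed through it. The constants and update rule below are invented for illustration; they do not model HRL's actual devices.

```python
# Toy memristor model: resistance depends on the total charge that has
# passed through the device, so it "remembers" past current.
R_ON, R_OFF = 100.0, 16000.0   # fully-on / fully-off resistance (ohms)
state = 0.0                     # fraction of device in low-resistance phase

def step(voltage, dt=1e-3, mobility=2e6):
    """Apply a voltage for dt seconds; update the state and return current.
    mobility is an arbitrary illustrative constant."""
    global state
    resistance = state * R_ON + (1.0 - state) * R_OFF
    current = voltage / resistance
    state = min(1.0, max(0.0, state + mobility * current * dt))
    return current

for _ in range(5):
    print(f"I = {step(1.0):.6f} A")   # current grows as resistance drops
```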

In 2012, project organizers reported the successful large-scale simulation of 530 billion neurons and 100 trillion synapses on the Blue Gene/Q Sequoia machine at Lawrence Livermore National Laboratory in California, then the world's second-fastest supercomputer.





The TrueNorth processor, a 5.4-billion-transistor chip with 4,096 neurosynaptic cores coupled through an intrachip network, comprising 1 million programmable spiking neurons and 256 million configurable synapses, was presented by IBM in 2014.

Finally, in 2016, an end-to-end ecosystem (including scalable systems, software, and applications) that could fully exploit the TrueNorth processor was unveiled.

At the time, there were reports on the deployment of applications such as interactive handwritten character recognition and data-parallel text extraction and recognition.

TrueNorth's cognitive computing chips have since been put to the test in demonstrations such as driving a virtual-reality robot and playing the classic video game Pong.

DARPA has been interested in the construction of brain-inspired computer systems since the 1980s.

Dharmendra Modha, director of IBM Almaden's Cognitive Computing Initiative, and Narayan Srinivasa, head of HRL's Center for Neural and Emergent Systems, lead Project SyNAPSE.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Cognitive Computing; Computational Neuroscience.


References And Further Reading


Defense Advanced Research Projects Agency (DARPA). 2008. “Systems of Neuromorphic Adaptive Plastic Scalable Electronics.” DARPA-BAA 08-28. Arlington, VA: DARPA, Defense Sciences Office.

Hsu, Jeremy. 2014. “IBM’s New Brain.” IEEE Spectrum 51, no. 10 (October): 17–19.

Merolla, Paul A., et al. 2014. “A Million Spiking-Neuron Integrated Circuit with a Scalable Communication Network and Interface.” Science 345, no. 6197 (August): 668–73.

Monroe, Don. 2014. “Neuromorphic Computing Gets Ready for the (Really) Big Time.” Communications of the ACM 57, no. 6 (June): 13–15.




Artificial Intelligence - The Human Brain Project

  



The European Union's major brain research endeavor is the Human Brain Project.

The project, which exemplifies Big Science in the number of its participants and the loftiness of its ambitions, is a multidisciplinary coalition of over one hundred partner institutions, drawing on professionals from computer science, neurology, and robotics.

The Human Brain Project was launched in 2013 as an EU Future and Emerging Technologies initiative with a budget of over one billion euros.

The ten-year project aims to make fundamental advancements in neuroscience, medicine, and computer technology.

Researchers working on the Human Brain Project hope to learn more about how the brain functions and how to imitate its computing skills.

Human Brain Organization, Systems and Cognitive Neuroscience, Theoretical Neuroscience, and implementations such as the Neuroinformatics Platform, Brain Simulation Platform, Medical Informatics Platform, and Neuromorphic Computing Platform are among the twelve subprojects of the Human Brain Project.

Six information and communication technology platforms were released by the Human Brain Project in 2016 as the main research infrastructure for ongoing brain research.

The project's research is focused on the creation of neuromorphic (brain-inspired) computer chips, in addition to infrastructure established for gathering and distributing data from the scientific community.

BrainScaleS is a subproject that uses analog signals to simulate the neuron and its synapses.

SpiNNaker (Spiking Neural Network Architecture) is a supercomputer architecture based on numerical models running on special multicore digital devices.

The Neurorobotic Platform is another ambitious subprogram, where "virtual brain models meet actual or simulated robot bodies" (Fauteux 2019).

The project's modeling of the human brain, which includes 100 billion neurons with 7,000 synaptic connections to other neurons, necessitates massive computational resources.

Computer models of the brain are created on six supercomputers at research sites around Europe.

These models are currently being used by project researchers to examine illnesses.

The project has drawn criticism.

In a 2014 open letter to the European Commission, scientists protested the program's lack of openness and governance, as well as its narrow breadth of study compared with its initial goals and objectives.

Following an examination and review of its funding procedures, needs, and stated aims, the Human Brain Project adopted a new governance structure.

 



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Blue Brain Project; Cognitive Computing; SyNAPSE.


Further Reading:


Amunts, Katrin, Christoph Ebell, Jeff Muller, Martin Telefont, Alois Knoll, and Thomas Lippert. 2016. “The Human Brain Project: Creating a European Research Infrastructure to Decode the Human Brain.” Neuron 92, no. 3 (November): 574–81.

Fauteux, Christian. 2019. “The Progress and Future of the Human Brain Project.” Scitech Europa, February 15, 2019. https://www.scitecheuropa.eu/human-brain-project/92951/.

Markram, Henry. 2012. “The Human Brain Project.” Scientific American 306, no. 6 (June): 50–55.

Markram, Henry, Karlheinz Meier, Thomas Lippert, Sten Grillner, Richard Frackowiak, Stanislas Dehaene, Alois Knoll, Haim Sompolinsky, Kris Verstreken, Javier DeFelipe, Seth Grant, Jean-Pierre Changeux, and Alois Saria. 2011. “Introducing the Human Brain Project.” Procedia Computer Science 7: 39–42.



Artificial Intelligence - Giant Mechanical Brains, Giant Brains, Or Machines That Think.




Edmund Callis Berkeley, a Harvard-trained computer scientist, shaped the American public's conception of what computers were and what role they would play in society from the late 1940s to the mid-1960s.

In his opinion, computers were "huge mechanical brains" or massive, automated, information-processing, thinking devices that should be employed for the benefit of society.

Berkeley's Association for Computing Machinery (cofounded in 1947), his firm Berkeley Associates (founded in 1948), his book Giant Brains (1949), and the periodical Computers and Automation all encouraged early, peaceful, and commercial computer innovations (established in 1951).




Berkeley classified computers as huge mechanical brains in his classic book Giant Brains, or Machines that Think, because of their strong, automated, cognitive, information-processing characteristics.

Berkeley saw computers as devices that ran on their own, without the need for human involvement.

Simply press the start button, and "the machine begins to whirr and writes out the answers as they are obtained" (Berkeley 1949, 5).

Because they processed information, computers had cognitive functions as well.

Berkeley saw the human mind as "a process of storing knowledge and then referring to it, via a process of learning and remembering" (Berkeley 1949, 2).

A computer may think similarly; it "automatically passes information from one portion of the machine to another, [with] flexible control over the order of its actions" (Berkeley 1949, 5).





He used the word "giant" to highlight the early computers' processing capacity as much as their physical size.

The ENIAC, the first electronic general-purpose digital computer, took up the whole basement of the Moore School of Electrical Engineering at the University of Pennsylvania in 1946.

Berkeley was involved in the application of symbolic logic to early computer designs, in addition to shaping the role of computers in the popular imagination.

He graduated from Harvard University with a bachelor's degree in mathematics and logic, and by 1934, he was working in Prudential Insurance's actuarial department.

Claude Shannon, an electrical engineer at Bell Labs, released his groundbreaking work on the application of Boolean logic to automated circuit design in 1938.

At Prudential, Berkeley advocated Shannon's results, asking the insurance company to use logic in its punch card tabulations.



The New York Symbolic Logic Group was founded in 1941 by Berkeley, Shannon, and others to develop logic applications in electronic relay computing.

Berkeley joined the US Navy when the United States entered World War II (1939–1945) in 1941 and was later transferred to Howard Aiken's laboratory at Harvard University to help build the Mark II electromechanical computer.

His experiences with the Mark II convinced Berkeley of the commercial potential of computers, and he returned to Prudential after the war.

In 1946, Berkeley demonstrated that computers could correctly calculate a difficult insurance problem, in this instance the cost of a policy change, using Bell Labs' general-purpose relay calculator (Yates 2005, 123–24).



Berkeley met John William Mauchly in 1947 at the Harvard Symposium on Large Scale Digital Calculating Machinery.

Following their meeting, Prudential signed a contract with John Adam Presper Eckert and John Mauchly's Electronic Control Company (ECC) for the creation of a general-purpose computer that would aid insurance calculations.

That general-purpose machine became the UNIVAC, delivered in 1951.

Prudential, on the other hand, chose not to adopt UNIVAC and instead reverted to IBM's tabulating technology.

UNIVAC's first commercial contract was in payroll computations for General Electric (Yates 2005, 124–27).



Berkeley left Prudential in 1948 to form Berkeley Associates, which later became Berkeley Enterprises.

Berkeley devoted the rest of his life to promoting Giant Brains, including the application of symbolic logic in computer technology, as well as other social activist concerns.

Berkeley followed Giant Brains (1949) with Brainiacs (1959) and Symbolic Logic and Intelligent Machines (1960).

He also established correspondence courses in general knowledge, computers, mathematics, and logic systems, as well as the Roster of Organizations in the Field of Automatic Computing Machinery, the first periodical for computing specialists.

Renamed Computers and Automation, the periodical ran from 1951 until 1973.



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Cognitive Computing; Symbolic Logic.


Further Reading:


Berkeley, Edmund C. 1949. Giant Brains, or Machines That Think. London: John Wiley & Sons.

Berkeley, Edmund C. 1959a. Brainiacs: 201 Electronic Brain Machines and How to Make Them. Newtonville, MA: Berkeley Enterprises.

Berkeley, Edmund C. 1959b. Symbolic Logic and Intelligent Machines. New York: Reinhold.

Longo, Bernadette. 2015. Edmund Berkeley and the Social Responsibility of Computer Professionals. New York: Association for Computing Machinery.

Yates, JoAnne. 2005. Structuring the Information Age: Life Insurance and Technology in the Twentieth Century. Baltimore: Johns Hopkins University Press.


Artificial Intelligence - Who Is Daniel Dennett?

 



At Tufts University, Daniel Dennett (1942–) is the Austin B. Fletcher Professor of Philosophy and Co-Director of the Center for Cognitive Studies.

Philosophy of mind, free will, evolutionary biology, cognitive neuroscience, and artificial intelligence are his main areas of study and publishing.

He has written over a dozen books and hundreds of articles.

Much of this research has focused on the origins and nature of consciousness and on how it may be explained naturalistically.

Dennett is also an ardent atheist and one of the "Four Horsemen" of New Atheism, along with Richard Dawkins, Sam Harris, and Christopher Hitchens.

Dennett's worldview is naturalistic and materialistic throughout.

He opposes Cartesian dualism, which holds that the mind and body are two distinct substances that interact.

Instead, he contends that the brain is a kind of computer that has developed over time through natural selection.

Dennett also opposes the homunculus theory of the mind, which holds that the brain has a central controller or "little man" who performs all of the thinking and emotion.

Dennett, on the other hand, argues for a viewpoint he calls the multiple drafts model.

According to his theory, which he lays out in his 1991 book Consciousness Explained, the brain is constantly sifting through, interpreting, and editing sensations and inputs, forming overlapping drafts of experience.

Dennett later used the metaphor of "fame in the brain" to describe how various aspects of ongoing neural processes are periodically emphasized at different times and under different circumstances.

Consciousness is a story made up of these varied interpretations of human events.

Dennett dismisses the assumption that these ideas coalesce or are organized in a central portion of the brain, which he mockingly calls the "Cartesian theater." The brain's story is instead a never-ending, decentralized flow of bottom-up awareness spanning time and place.

Dennett denies the existence of qualia, which are subjective individual experiences such as how colors seem to the human eye or how food feels.

He does not deny that colors and tastes exist; rather, he claims that the sensation of color and taste does not exist as a separate thing in the human mind.

He claims that there is no difference between human and computer "sensation experiences." According to Dennett, just as some robots can distinguish colors without anyone concluding that they have qualia, so can the human brain.

For Dennett, the color red is just the quality that brains sense and which is referred to as red in the English language.

It has no extra, indescribable quality.

This is a crucial consideration for artificial intelligence, because the ability to experience qualia is frequently seen as a barrier to the development of Strong AI (AI functionally equivalent to a human's) and as something that will invariably distinguish human from machine intelligence.

However, if qualia do not exist, as Dennett contends, it cannot constitute a stumbling block to the creation of machine intelligence comparable to that of humans.

Dennett compares our brains to termite colonies in another metaphor.

Termites do not join together and plot to form a mound, but their individual activities cause it to happen.

The mound is the consequence of natural selection producing uncomprehending expertise in cooperative mound-building rather than intellectual design by the termites.

To create a mound, termites don't need to comprehend what they're doing.

Likewise, comprehension is an emergent attribute of such abilities.

Brains, according to Dennett, are control centers that have evolved to respond swiftly and effectively to threats and opportunities in the environment.

As the demands of responding to the environment grow more complicated, understanding emerges as a tool for dealing with them.

On a sliding scale, comprehension is a question of degree.

Dennett, for example, considers bacteria's quasi-comprehension in response to diverse stimuli and computers' quasi-comprehension in response to coded instructions to be on the low end of the range.

At the other end of the spectrum, he places Jane Austen's comprehension of human social processes and Albert Einstein's understanding of relativity.

These, however, are differences of degree, not of kind.

Natural selection has shaped both extremes of the spectrum.

Comprehension is not a separate mental process arising from the brain's varied abilities.

Rather, understanding is a collection of these skills.

Consciousness is an illusion to the extent that we recognize it as an additional element of the mind in the shape of either qualia or cognition.

In general, Dennett advises mankind to avoid positing understanding when basic competence would suffice.

Humans, on the other hand, often adopt what Dennett refers to as an "intentional stance" toward other humans and, in some cases, animals.

When individuals interpret actions as the outcome of mind-directed thoughts, emotions, desires, or other mental states, they adopt the intentional stance.

This stance contrasts, he says, with the "physical stance" and the "design stance."

The physical stance views something as the outcome of purely physical forces or natural laws.

Gravity causes a stone to fall when it is dropped, not any conscious purpose to return to the ground.

An action is seen as the mindless outcome of a preprogrammed, or predetermined, purpose in the design stance.

An alarm clock, for example, beeps at a certain time because it was built to do so, not because it chose to do so on its own.

In contrast to both the physical and design stances, the intentional stance considers behaviors and acts as though they are the consequence of the agent's deliberate decision.

It can be difficult to decide whether to apply the intentional or the design stance to computers.

A chess-playing computer has been created with the goal of winning.

However, its movements are often indistinguishable from those of a human chess player who wants or intends to win.

In fact, adopting the intentional stance toward the computer's behavior, rather than the design stance, improves a human's ability to interpret and respond to its behavior.

Dennett claims that the intentional stance is the best strategy to adopt toward both humans and computers, since it works best in explaining both human and computer behavior.

Furthermore, there is no need to differentiate them in any way.

Though the intentional stance treats behavior as agent-driven, it does not require taking a position on what is actually going on in the human's or machine's internal workings.

This stance provides a neutral starting point from which to investigate cognitive competence without presuming a particular explanation of what is going on behind the scenes.

Dennett sees no reason why AI should be impossible in theory since human mental abilities have developed organically.

Furthermore, by abandoning the concept of qualia and adopting an intentional stance that relieves people of the burden of speculating about what goes on in the background of cognition, two major impediments to solving the hard problem of consciousness are removed.

Dennett argues that since the human brain and computers are both machines, there is no good theoretical reason why humans should be capable of acquiring competence-driven understanding while AI should be intrinsically unable.

Consciousness in the traditional sense is illusory, and hence not a prerequisite for Strong AI.

Dennett does not believe that Strong AI is theoretically impossible.

He feels that society's technical sophistication is still at least fifty years away from producing it.

Strong AI development, according to Dennett, is not desirable.

Humans should strive to build AI tools, but Dennett believes that attempting to make computer pals or colleagues would be a mistake.

Such robots, he claims, would lack human moral intuitions and understanding, and hence would not be able to integrate into human society.

Humans do not need robots to provide friendship since they have each other.

Robots, even AI-enhanced machines, should be seen as tools to be utilized by humans alone.


 


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Cognitive Computing; General and Narrow AI.


Further Reading:


Dennett, Daniel C. 1987. The Intentional Stance. Cambridge, MA: MIT Press.

Dennett, Daniel C. 1993. Consciousness Explained. London: Penguin.

Dennett, Daniel C. 1998. Brainchildren: Essays on Designing Minds. Cambridge, MA: MIT Press.

Dennett, Daniel C. 2008. Kinds of Minds: Toward an Understanding of Consciousness. New York: Basic Books.

Dennett, Daniel C. 2017. From Bacteria to Bach and Back: The Evolution of Minds. New York: W. W. Norton.

Dennett, Daniel C. 2019. “What Can We Do?” In Possible Minds: Twenty-Five Ways of Looking at AI, edited by John Brockman, 41–53. London: Penguin Press.

Artificial Intelligence - What Is Computational Neuroscience?

 



Computational neuroscience (CNS) is a branch of neuroscience that applies the concept of computation to the study of the brain.

Eric Schwartz coined the phrase "computational neuroscience" in 1985 to replace the terms "neural modeling" and "brain theory," which had previously been used to describe various forms of nervous system research.

At the heart of CNS is the idea that effects in the nervous system may be understood as instances of computation, since state transitions can be explained as relations between abstract properties.

In other words, explanations of effects in neural systems are descriptions of information transformed, stored, and represented, rather than causal descriptions of interactions among physically distinct elements.

As a result, CNS aims to develop computational models to better understand how the nervous system works in terms of the information processing characteristics of the brain's parts.

Constructing a model of how interacting neurons might build basic components of cognition is one example.

A brain map, by contrast, does not disclose the nervous system's computational process, although it may be used as a constraint on theoretical models.

For example, information exchange has costs in terms of the physical connections between communicating regions, such that regions that communicate frequently (requiring high bandwidth and low latency) tend to be clustered together.

The description of neural systems as carrying out computations is central to computational neuroscience, and it contradicts the claim that computational constructs are exclusive to the explanatory framework of psychology; that is, that human cognitive capacities can be characterized and confirmed independently of how they are implemented in the nervous system.

For example, when it became clear in 1973 that cognitive processes could not be understood by analyzing responses to one-dimensional questions and scenarios (a popular approach in cognitive psychology at the time), Allen Newell argued that only synthesis with computer simulation could reveal the complex interactions of a proposed mechanism's components and show whether the mechanism was correct.

David Marr (1945–1980) proposed the first computational neuroscience framework.

This framework attempts to provide a conceptual starting point for thinking about levels in the context of computation by nervous structures.

It reflects the three-level structure used in computer science (abstract problem analysis, algorithm, and physical implementation).

The model, however, has drawbacks: it consists of three loosely coupled levels and takes a rigid top-down approach that relegates all neurobiological facts to mere instances at the implementation level.

As a result, certain phenomena are thought to be explicable at just one or two levels.

Moreover, the Marr levels framework corresponds neither to the levels of nervous system structure (molecules, synapses, neurons, nuclei, circuits, networks, layers, maps, and systems) nor to the nervous system's emergent properties.

Computational neuroscience takes a bottom-up approach, beginning with neurons and showing how computational functions, implemented by neurons, result in dynamic interactions between neurons.

Three kinds of models attempt to derive computational understanding from brain-activity data: models of connectivity and dynamics, decoding models, and representational models.

Connectivity models use the correlation matrix, which displays pairwise functional connectivity between regions and establishes the characteristics of related areas.
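
As a concrete sketch of this first kind of model, the snippet below computes a functional-connectivity matrix from synthetic region time series, which stand in for measured data:

```python
# Minimal functional-connectivity sketch: the pairwise correlation matrix
# of regional activity time courses. Data are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_timepoints = 6, 200

shared = rng.normal(size=n_timepoints)            # common driving signal
ts = rng.normal(size=(n_regions, n_timepoints))
ts[:3] += shared                                  # regions 0-2 co-fluctuate

fc = np.corrcoef(ts)                              # pairwise functional connectivity
print(np.round(fc, 2))                            # block of high correlations
```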

Analyses of effective connectivity and large-scale brain dynamics go beyond the generic linear statistical models used in activation- and information-based brain mapping: because they are generative models of brain dynamics, they can produce data at the level of the measurements themselves.

The goal of the decoding models is to figure out what information is stored in each brain area.

When an area is found to represent some content, its activity becomes a functional entity that informs the regions receiving its signals about that content.

In the simplest scenario, decoding identifies which of the two stimuli elicited a recorded response pattern.

The representation's content might be the sensory stimulus's identity, a stimulus feature (such as orientation), or an abstract variable required for a cognitive operation or action.

Decoding and multivariate pattern analysis have been used to determine which components must be included in a brain-computational model.

Decoding, on the other hand, does not itself provide a model of brain computation; rather, it reveals some of a model's required elements without modeling the computation.
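
Here is a minimal sketch of the two-stimulus decoding scenario described above, using synthetic response patterns and a cross-validated linear classifier; the data and sizes are invented for illustration:

```python
# Simplest decoding analysis: classify which of two stimuli evoked a
# response pattern. Patterns are synthetic stand-ins for voxel responses.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 100, 50

labels = np.repeat([0, 1], n_trials // 2)             # two stimuli
signal = np.outer(labels, rng.normal(size=n_voxels))  # stimulus effect
patterns = signal + rng.normal(scale=2.0, size=(n_trials, n_voxels))

acc = cross_val_score(LogisticRegression(max_iter=1000), patterns, labels, cv=5)
print(f"decoding accuracy: {acc.mean():.2f}")  # above 0.5 => information present
```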

Because they strive to characterize areas' reactions to arbitrary stimuli, representation models go beyond decoding.

Encoding models, pattern component models, and representational similarity analysis are three forms of representational model analysis that have been presented.

All three studies are based on multivariate descriptions of the experimental circumstances and test assumptions about representational space.

In encoding models, the activity profile of each voxel across stimuli is predicted as a linear combination of the model's properties.

The distribution of the activity profiles that define the representational space is treated as a multivariate normal distribution in pattern component models.

The representational space is defined by the representational dissimilarities of the activity patterns evoked by the stimuli in representational similarity analysis.
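
To make the first of these three analyses concrete, the sketch below fits an encoding model by ordinary least squares, predicting each voxel's response as a linear combination of model features; all data here are synthetic stand-ins:

```python
# Encoding-model sketch: each voxel's response across stimuli is fit as a
# linear combination of model features. Features and responses are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_stimuli, n_features, n_voxels = 80, 10, 30

features = rng.normal(size=(n_stimuli, n_features))   # model description of stimuli
true_w = rng.normal(size=(n_features, n_voxels))
responses = features @ true_w + rng.normal(scale=0.5, size=(n_stimuli, n_voxels))

# least-squares fit of feature weights for every voxel at once
w_hat, *_ = np.linalg.lstsq(features, responses, rcond=None)
pred = features @ w_hat
r = [np.corrcoef(pred[:, v], responses[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean prediction correlation: {np.mean(r):.2f}")
```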

Models of brain activity do not by themselves test the properties that indicate how the information processing underlying a cognitive function might operate.

Task performance models are used to describe cognitive processes in terms of algorithms.

These models are put to the test using experimental data and, in certain cases, data from brain activity.

Neural network models and cognitive models are the two basic types of models.

Models of neural networks are created using varying degrees of biological information, ranging from neurons to maps.

Neural networks, which embody the parallel distributed processing paradigm, support multiple stages of linear-nonlinear signal transformation.

To achieve high task performance, models often incorporate millions of parameters (connection weights).

Simple models cannot capture complex cognitive processes, hence the large number of parameters required.
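
The sketch below shows, at toy scale, the linear-nonlinear stages such networks stack; the layer sizes are arbitrary assumptions, and real models use vastly more weights:

```python
# Minimal parallel-distributed-processing sketch: two stages of linear
# transformation (weight matrices), each followed by a pointwise
# nonlinearity or readout. Sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(5)
W1 = rng.normal(scale=0.1, size=(64, 100))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(10, 64))    # hidden -> output weights

def forward(x):
    h = np.maximum(0.0, W1 @ x)   # linear step, then ReLU nonlinearity
    return W2 @ h                 # second linear step (readout)

x = rng.normal(size=100)          # stand-in stimulus representation
print(forward(x).shape)           # -> (10,)
```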

Deep convolutional neural network models have been used to predict brain representations of novel images in the ventral visual stream of primates.

The representations in the first few layers of neural networks are comparable to those in the early visual cortex.

Higher layers resemble the inferior temporal cortical representation in that both support the decoding of object position, size, and pose, as well as the object's category.

Various studies have shown that the internal representations of deep convolutional neural networks provide the best current models of the representations of visual images in the inferior temporal cortex of humans and animals.

When a large number of models were compared, those optimized for object categorization described the cortical representation best.

Cognitive models are artificial intelligence applications in computational neuroscience that target information processing without incorporating any neurobiological components (neurons, axons, etc.).

Production systems, reinforcement learning, and Bayesian cognitive models are the three kinds of models.

They use logic and predicates, and they work with symbols rather than signals.

There are various advantages of employing artificial intelligence in computational neuroscience research.

  1. First, although a vast quantity of information about the brain has accumulated over time, a true understanding of how the brain functions remains elusive.
  2. Second, networks of neurons produce emergent effects, but how these networks operate is not yet understood.
  3. Third, although the brain has been crudely mapped, and the functions of distinct brain areas (mostly sensory and motor) are roughly known, a precise map is still lacking.

Furthermore, some of the information gathered through experiments or observation may be of limited use; the link between synaptic learning rules and computation is largely unclear.

Production-system models were the first models for explaining reasoning and problem solving.

A "production" is a cognitive activity that occurs as a consequence of the "if-then" rule, in which "if" defines the set of circumstances under which the range of productions ("then" clause) may be carried out.

When the conditions of several rules are satisfied, the model uses a conflict-resolution algorithm to choose the most appropriate production.

The production models provide a sequence of predictions that seem like a conscious stream of brain activity.

The same approach is now being used in newer applications to predict regional mean fMRI (functional magnetic resonance imaging) activation time courses.
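
A toy production system in this spirit might look like the following; the rules, working-memory format, and "most specific rule wins" conflict-resolution policy are all illustrative assumptions:

```python
# Toy production system: "if-then" rules fire against working memory,
# with a simple conflict-resolution policy (most specific rule wins).
rules = [
    ({"goal:add", "carry"}, "write digit, note carry"),
    ({"goal:add"},          "write digit"),
    ({"goal:done"},         "stop"),
]

def step(memory):
    # conflict resolution: among rules whose conditions all hold,
    # pick the one with the most conditions (most specific)
    matching = [(cond, act) for cond, act in rules if cond <= memory]
    if not matching:
        return None
    cond, action = max(matching, key=lambda m: len(m[0]))
    return action

print(step({"goal:add", "carry"}))  # -> write digit, note carry
print(step({"goal:add"}))           # -> write digit
```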

Reinforcement learning models are used in a variety of areas to model optimal decision-making.

In neurobiological systems, this kind of learning is thought to be implemented in the basal ganglia.

The agent might learn a "value function" that links each state to the predicted total reward.

The agent may pick the most promising action if it can forecast which state each action will lead to and understands the values of those states.

The agent may additionally learn a "policy" that links each state directly to promising actions.

Exploitation (which yields immediate reward) must be balanced against exploration (which benefits learning and brings long-term reward).
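
The sketch below illustrates these ideas with tabular Q-learning on a toy five-state chain; the environment, learning rate, and epsilon-greedy policy are assumptions chosen for illustration:

```python
# Value-function learning (tabular Q-learning) with an epsilon-greedy
# balance of exploration and exploitation, on a toy 5-state chain with
# a reward at the right end.
import random

n_states, actions = 5, [-1, +1]           # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # exploration vs. exploitation
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # update value toward reward plus discounted future value
        best_next = max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# learned state values: higher for states closer to the reward
print({s: round(max(Q[(s, a)] for a in actions), 2) for s in range(n_states)})
```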

Bayesian models describe what the brain should actually compute in order to perform optimally.

These models enable inductive inference, which is beyond the capability of neural network models and requires prior knowledge.

The models have been used to explain cognitive biases as the result of past beliefs, as well as to comprehend fundamental sensory and motor processes.

How neurons might represent probability distributions, for example, has been investigated theoretically using Bayesian models and compared with empirical evidence.

Such work illustrates that connecting Bayesian inference to actual brain implementation remains difficult, since the brain "cuts corners" in the interest of efficiency, so approximations may explain departures from statistical optimality.
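
A worked example of the kind of optimal computation these models posit: for a Gaussian prior and a Gaussian likelihood, the posterior over a stimulus value is a precision-weighted average of prior and observation (the numbers below are arbitrary):

```python
# Bayesian cue-combination sketch: a posterior over a stimulus value
# combines a prior belief with a noisy observation. For Gaussians, the
# posterior mean is a precision-weighted average.
prior_mean, prior_var = 0.0, 4.0      # prior belief about the stimulus
obs, obs_var = 2.0, 1.0               # noisy sensory measurement

post_precision = 1 / prior_var + 1 / obs_var
post_mean = (prior_mean / prior_var + obs / obs_var) / post_precision
post_var = 1 / post_precision

print(f"posterior: mean={post_mean:.2f}, var={post_var:.2f}")
# posterior: mean=1.60, var=0.80  (pulled from the prior toward the data)
```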

The concept of a brain performing computations is central to computational neuroscience, and researchers use modeling and analysis of the information-processing properties of nervous system elements to work out how complex brain functions arise.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Bayesian Inference; Cognitive Computing.


Further Reading


Kaplan, David M. 2011. “Explanation and Description in Computational Neuroscience.” Synthese 183, no. 3: 339–73.

Kriegeskorte, Nikolaus, and Pamela K. Douglas. 2018. “Cognitive Computational Neuroscience.” Nature Neuroscience 21, no. 9: 1148–60.

Schwartz, Eric L., ed. 1993. Computational Neuroscience. Cambridge, MA: Massachusetts Institute of Technology.

Trappenberg, Thomas. 2009. Fundamentals of Computational Neuroscience. New York: Oxford University Press.



Artificial Intelligence - What Is Cognitive Computing?


 


Self-learning hardware and software systems that use machine learning, natural language processing, pattern recognition, human-computer interaction, and data mining technologies to mimic the human brain are referred to as cognitive computing.


The term "cognitive computing" refers to the use of advances in cognitive science to create new and complex artificial intelligence systems.


Cognitive systems aren't designed to take the place of human thinking, reasoning, problem-solving, or decision-making; rather, they're meant to supplement or aid people.

A collection of strategies to promote the aims of affective computing, which entails narrowing the gap between computer technology and human emotions, is frequently referred to as cognitive computing.

Real-time adaptive learning approaches, interactive cloud services, interactive memories, and contextual understanding are some of these methodologies.

Cognitive analytical tools are used to conduct quantitative assessments of structured statistical data and to aid decision-making.

Other scientific and economic systems often include these tools.

Complex event processing systems use sophisticated algorithms to assess real-time event data for patterns and trends, offer options, and make judgments.

These kinds of systems are widely used in algorithmic stock trading and credit card fraud detection.
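
A toy example of the pattern-matching such systems perform; the rule (flag more than three card transactions within sixty seconds) and the stream format are invented for illustration:

```python
# Toy complex-event-processing rule: watch a stream of card transactions
# and flag a suspicious pattern (too many purchases in a short window).
# A sketch; production systems use dedicated CEP engines.
from collections import deque

WINDOW_S, MAX_TXNS = 60, 3
recent = deque()

def on_transaction(timestamp, amount):
    recent.append(timestamp)
    while recent and timestamp - recent[0] > WINDOW_S:
        recent.popleft()                      # drop events outside the window
    if len(recent) > MAX_TXNS:
        return f"ALERT: {len(recent)} transactions in {WINDOW_S}s"
    return "ok"

for t in [0, 10, 20, 30, 40]:
    print(on_transaction(t, 25.0))            # alerts once the window fills
```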

Face recognition and complex image recognition are now possible with image recognition systems.

Machine learning algorithms build models from data sets and improve as new information is added.

Neural networks, Bayesian classifiers, and support vector machines may all be used in machine learning.

Natural language processing entails the use of software to extract meaning from enormous amounts of data generated by human conversation.

Watson from IBM and Siri from Apple are two examples.

Natural language comprehension is perhaps cognitive computing's Holy Grail or "killer app," and many people associate natural language processing with cognitive computing.

Heuristic programming and expert systems are two of the oldest branches of so-called cognitive computing.

Since the 1980s, there have been four reasonably "full" cognitive computing architectures: Cyc, Soar, Society of Mind, and Neurocognitive Networks.

Speech recognition, sentiment analysis, face identification, risk assessment, fraud detection, and behavioral suggestions are some of the applications of cognitive computing technology.

When used together, these applications are referred to as "cognitive analytics" systems.

In the aerospace and defense industries, agriculture, travel and transportation, banking, health care and the life sciences, entertainment and media, natural resource development, utilities, real estate, retail, manufacturing and sales, marketing, customer service, hospitality, and leisure, these systems are in development or are being used.

Netflix's movie rental suggestion algorithm is an early example of predictive cognitive computing.

Computer vision algorithms are being used by General Electric to detect tired or distracted drivers.

Customers of Domino's Pizza can place orders online by speaking with a virtual assistant named Dom.

Elements of Google Now, a predictive search feature that debuted in Google applications in 2012, assist users in predicting road conditions and anticipated arrival times, locating hotels and restaurants, and remembering anniversaries and parking spots.


In IBM marketing materials, the term "cognitive" computing appears frequently.

Cognitive computing, according to the company, is a subset of "augmented intelligence," which is preferred over artificial intelligence.


IBM's Watson machine is frequently referred to as a "cognitive computer" because it deviates from the traditional von Neumann architecture and instead draws inspiration from neural networks.

Neuroscientists are researching the inner workings of the human brain, seeking for connections between neuronal assemblies and mental aspects, and generating new mental ideas.

Hebbian theory is an example of a neuroscientific theory that underpins cognitive computer machine learning implementations.

The Hebbian theory is a proposed explanation for neural adaptation during the learning process.

Donald Hebb initially proposed the hypothesis in his 1949 book The Organization of Behavior.

Learning, according to Hebb, is a process in which the causal induction of recurrent or persistent neuronal firing or activity causes neural traces to become stable.

"Any two cells or systems of cells that are consistently active at the same time will likely to become'associated,' such that activity in one favors activity in the other," Hebb added (Hebb 1949, 70).

"Cells that fire together, wire together," is how the idea is frequently summarized.

According to this hypothesis, the connection of neuronal cells and tissues generates neurologically defined "engrams" that explain how memories are preserved in the brain as biophysical or biochemical changes.

Engrams' actual location, as well as the procedures by which they are formed, are currently unknown.

IBM machines are stated to learn by aggregating information into a computational convolution or neural network architecture made up of weights stored in a parallel memory system.

Intel introduced Loihi, a cognitive chip that replicates the functions of neurons and synapses, in 2017.

Loihi is touted as being 1,000 times more energy efficient than existing neurosynaptic devices, with 128 clusters of 1,024 simulated neurons each per chip, for a total of 131,072 simulated neurons.

Instead of relying on simulated neural networks and parallel processing with the overarching goal of developing artificial cognition, Loihi uses purpose-built neural pathways imprinted in silicon.

These neuromorphic processors are likely to play a significant role in future portable and wire-free electronics, as well as automobiles.

Roger Schank, a cognitive scientist and artificial intelligence pioneer, is a vocal opponent of cognitive computing technology.

"Watson isn't thinking. You can only reason if you have objectives, plans, and strategies to achieve them, as well as an understanding of other people's ideas and a knowledge of prior events to draw on.

"Having a point of view is also beneficial," he writes.

"How does Watson feel about ISIS, for example?" Is this a stupid question? ISIS is a topic on which actual thinking creatures have an opinion" (Schank 2017).



~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Computational Neuroscience; General and Narrow AI; Human Brain Project; SyNAPSE.


Further Reading

Hebb, Donald O. 1949. The Organization of Behavior. New York: Wiley.

Kelly, John, and Steve Hamm. 2013. Smart Machines: IBM’s Watson and the Era of Cognitive Computing. New York: Columbia University Press.

Modha, Dharmendra S., Rajagopal Ananthanarayanan, Steven K. Esser, Anthony Ndirango, Anthony J. Sherbondy, and Raghavendra Singh. 2011. “Cognitive Computing.” Communications of the ACM 54, no. 8 (August): 62–71.

Schank, Roger. 2017. “Cognitive Computing Is Not Cognitive at All.” FinTech Futures, May 25. https://www.bankingtech.com/2017/05/cognitive-computing-is-not-cognitive-at-all

Vernon, David, Giorgio Metta, and Giulio Sandini. 2007. “A Survey of Artificial Cognitive Systems: Implications for the Autonomous Development of Mental Capabilities in Computational Agents.” IEEE Transactions on Evolutionary Computation 11, no. 2: 151–80.






