
AI Glossary - What Is ARTMAP?


     


    What Is ARTMAP AI Algorithm?



    ARTMAP is the supervised learning variant of the ART-1 model.

    It learns to classify binary input patterns presented to it.


    The suffix "MAP" is used in the names of numerous supervised ART algorithms, such as Fuzzy ARTMAP.

    Both the inputs and the targets are clustered in these algorithms, and the two sets of clusters are linked.


    A fundamental weakness of the ARTMAP algorithms is that they lack a mechanism to prevent overfitting, so they should not be used with noisy data.


    How Does The ARTMAP Neural Network Work?



    ARTMAP is a neural network architecture that autonomously learns recognition categories for arbitrarily many, arbitrarily ordered input vectors, guided by the accuracy of its predictions. 

    This supervised learning system is built from a pair of Adaptive Resonance Theory modules (ARTa and ARTb), each of which can self-organize stable recognition categories in response to arbitrary sequences of input patterns. 

    During training, the ARTa module receives a stream of input patterns {a(p)} and the ARTb module receives a stream of target patterns {b(p)}, where b(p) is the correct prediction given a(p). 

    These ART modules are connected by an associative learning network and an internal controller that together ensure autonomous operation in real time. 

    During test trials, the remaining patterns a(p) are presented without b(p), and the predictions made at ARTb are compared with the correct targets b(p). 



    The ARTMAP system learns orders of magnitude more quickly, efficiently, and accurately than alternative algorithms when tested on a benchmark machine learning database in both on-line and off-line simulations, and achieves 100% accuracy after training on less than half the input patterns in the database. 


    It achieves these properties through an internal controller that, on a trial-by-trial basis and using only local operations, links predictive success to category size, simultaneously maximizing predictive generalization and minimizing predictive error. 

    This computation raises the vigilance parameter ρa of ARTa by the minimal amount needed to correct a predictive error at ARTb. 

    The parameter ρa calibrates the minimum confidence that ARTa must have in a category, or hypothesis, activated by an input a(p) in order to accept it, rather than search for a better one through an automatically controlled process of hypothesis testing. 

    The parameter ρa is compared with the degree of match between a(p) and the top-down learned expectation, or prototype, that is read out after an ARTa category is activated. 

    If the degree of match is less than ρa, search is initiated. 


    The self-organizing expert system known as ARTMAP adjusts the selectivity of its hypotheses depending on the accuracy of its predictions. 

    As a result, rare but important events can be quickly and sharply distinguished, even when they are similar to frequent events with different consequences. 

    Between input trials, ρa relaxes to a baseline vigilance level. 

    When the baseline vigilance is large, the system operates in a conservative mode and makes predictions only when it is confident of the outcome. 

    Few false-alarm errors therefore occur at any stage of learning, yet the system still reaches its performance asymptote quickly. 

    Because ARTMAP learning is self-stabilizing, it can continue to learn one or more databases without degrading its existing memories, until its full memory capacity has been used.
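
    The match-and-search cycle described above can be sketched in a few lines of code. The following Python sketch is illustrative only and is not the published ARTMAP implementation: it ranks categories by raw match degree rather than the full ART choice function, and the function names, the epsilon increment, and the NumPy representation of binary patterns are assumptions made for this example.

```python
import numpy as np

def match_degree(a, prototype):
    """Degree of match |a AND w| / |a| between a binary input and a category prototype."""
    return np.sum(np.minimum(a, prototype)) / max(np.sum(a), 1e-9)

def search_category(a, prototypes, vigilance):
    """Accept the best-matching committed category whose match meets the vigilance rho_a.

    Returns the category index, or None if no committed category passes
    (in which case ARTa would commit a new category).
    """
    order = sorted(range(len(prototypes)),
                   key=lambda j: match_degree(a, prototypes[j]),
                   reverse=True)
    for j in order:
        if match_degree(a, prototypes[j]) >= vigilance:
            return j
    return None

def match_track(a, wrong_prototype, vigilance, epsilon=1e-3):
    """After a wrong ARTb prediction, raise rho_a just above the offending category's match."""
    return max(vigilance, match_degree(a, wrong_prototype) + epsilon)
```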


    What Is Fuzzy ARTMAP?



    Fuzzy ARTMAP is a neural network architecture for incremental supervised learning of recognition categories and multidimensional maps in response to arbitrary sequences of analog or binary input vectors, which may represent fuzzily or crisply defined sets of features. 

    The architecture achieves a synthesis of fuzzy logic and adaptive resonance theory (ART) neural networks by exploiting a close formal similarity between the computations of fuzzy subsethood and ART category choice, resonance, and learning. 
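
    To make the fuzzy-logic correspondence concrete, the sketch below writes out the standard fuzzy ART operations, in which the crisp intersection (logical AND) of ART-1 is replaced by the fuzzy AND (componentwise minimum). It is a minimal Python sketch under assumed parameter names (alpha for the choice parameter, rho for vigilance, beta for the learning rate), not the authors' implementation.

```python
import numpy as np

def fuzzy_and(x, w):
    """Fuzzy AND: componentwise minimum, replacing the binary intersection of ART-1."""
    return np.minimum(x, w)

def choice(x, w, alpha=0.001):
    """Category choice function T_j = |x AND w_j| / (alpha + |w_j|), with |.| the L1 norm."""
    return np.sum(fuzzy_and(x, w)) / (alpha + np.sum(w))

def matches(x, w, rho):
    """Vigilance test: accept category j only if |x AND w_j| / |x| >= rho."""
    return np.sum(fuzzy_and(x, w)) / max(np.sum(x), 1e-9) >= rho

def learn(x, w, beta=1.0):
    """Weight update w_new = beta * (x AND w) + (1 - beta) * w (beta = 1 is fast learning)."""
    return beta * fuzzy_and(x, w) + (1.0 - beta) * w
```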



    Fuzzy ARTMAP performance was demonstrated on four classes of simulations, in comparison with benchmark backpropagation and genetic algorithm systems. 



    These simulations include recognizing letters in a letter recognition database, learning to distinguish two spirals, identifying points inside and outside a circle, and incrementally approximating a piecewise-continuous function. 

    Additionally, the fuzzy ARTMAP system is contrasted with Simpson's FMMC system and Salzberg's NGE systems.



    ~ Jai Krishna Ponnappan

    Find Jai on Twitter | LinkedIn | Instagram



    References And Further Reading:


    • Moreira-Júnior, J.R., Abreu, T., Minussi, C.R. and Lopes, M.L., 2022. Using Aggregated Electrical Loads for the Multinodal Load Forecasting. Journal of Control, Automation and Electrical Systems, pp. 1–9.
    • Ferreira, W.D.A.P., Grout, I. and da Silva, A.C.R., 2022, March. Application of a Fuzzy ARTMAP Neural Network for Indoor Air Quality Prediction. In 2022 International Electrical Engineering Congress (iEECON), pp. 1–4. IEEE.
    • La Marca, A.F., Lopes, R.D.S., Lotufo, A.D.P., Bartholomeu, D.C. and Minussi, C.R., 2022. BepFAMN: A Method for Linear B-Cell Epitope Predictions Based on Fuzzy-ARTMAP Artificial Neural Network. Sensors, 22(11), p. 4027.
    • Santos-Junior, C.R., Abreu, T., Lopes, M.L. and Lotufo, A.D., 2021. A new approach to online training for the Fuzzy ARTMAP artificial neural network. Applied Soft Computing, 113, p. 107936.
    • Ferreira, W.D.A.P., 2021. Rede neural ARTMAP fuzzy implementada em hardware aplicada na previsão da qualidade do ar em ambiente interno [Fuzzy ARTMAP neural network implemented in hardware, applied to indoor air quality prediction].









    AI Glossary - What Is Arcing?

     



    Arcing (Adaptive Resampling and Combining) methods are a broad class of techniques for boosting the performance of machine learning and statistical methods.

    ADABOOST and bagging are two prominent examples.

    In general, these methods repeatedly apply a learning technique, such as a decision tree, to a training set, then reweight or resample the data and refit the learning technique.

    This results in a set of learning rules.

    New observations are passed through all members of the collection, and the predictions or classifications are aggregated by averaging or by majority vote to produce a combined result.

    These methods can produce results that are substantially more accurate than a single classifier, though they are less interpretable than a single classifier.

    According to research, they can produce minimum (Bayes) risk classifiers.
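
    As a concrete illustration of the resample, refit, and combine loop described above, here is a minimal bagging-style sketch in Python. It assumes scikit-learn's decision tree as the base learner and non-negative integer class labels; it illustrates the general strategy rather than any specific procedure analyzed in the arcing literature.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_arcing_ensemble(X, y, n_rounds=25, random_state=0):
    """Repeatedly resample the training data and refit a base learner (bagging-style)."""
    X, y = np.asarray(X), np.asarray(y)
    rng = np.random.default_rng(random_state)
    ensemble = []
    for _ in range(n_rounds):
        idx = rng.integers(0, len(X), size=len(X))      # bootstrap resample of the training set
        tree = DecisionTreeClassifier().fit(X[idx], y[idx])
        ensemble.append(tree)
    return ensemble

def predict_majority(ensemble, X):
    """Pass new observations through every member and combine by majority vote."""
    votes = np.stack([clf.predict(X) for clf in ensemble])   # shape: (n_rounds, n_samples)
    return np.apply_along_axis(
        lambda col: np.bincount(col.astype(int)).argmax(), 0, votes)
```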


    See Also: 


    ADABOOST, Bootstrap AGGregation


    ~ Jai Krishna Ponnappan

    Find Jai on Twitter | LinkedIn | Instagram

    AI Glossary - Algorithm And AI Algorithms.

     



    An algorithm is a method or procedure for solving a particular class of problems.


    AI algorithms are used in artificial intelligence based systems and applications. 

    An AI algorithm is a subset of machine learning that instructs the computer on how to learn to operate on its own. 

    As a result, the AI system keeps learning in order to optimize its procedures and complete tasks more quickly.



    ~ Jai Krishna Ponnappan

    Find Jai on Twitter | LinkedIn | Instagram


    Be sure to refer to the complete & active AI Terms Glossary here.

    You may also want to read more about Artificial Intelligence here.



    AI Terms Glossary - ADABOOST

     


    ADABOOST is a relatively recently developed method for enhancing machine learning algorithms.

    It has the potential to greatly enhance the performance of classification methods (e.g., decision trees).

    It works by repeatedly applying the learning procedure to the data, analyzing the results, and then reweighting the observations to give more weight to the misclassified instances.

    The final classifier uses all of the intermediate classifiers, categorizing an observation by a majority vote of the individual classifiers.

    It also has the intriguing virtue of continuing to lower the generalization error (i.e., the error in a test set) long after the training set error has stopped dropping or hit 0.
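
    The reweighting loop can be made concrete with a short sketch. The following is a minimal discrete AdaBoost in Python, assuming class labels coded as -1/+1 and scikit-learn decision stumps as the base learner; the final combination here is the standard weighted vote, and the sketch is for illustration rather than a reference implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=50):
    """Minimal discrete AdaBoost with labels y in {-1, +1} and decision stumps as base learners."""
    X, y = np.asarray(X), np.asarray(y)
    n = len(X)
    w = np.full(n, 1.0 / n)                           # uniform observation weights to start
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(np.sum(w[pred != y]) / np.sum(w), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)         # weight of this intermediate classifier
        w *= np.exp(-alpha * y * pred)                # up-weight the misclassified observations
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    """Combine the intermediate classifiers by a weighted vote."""
    scores = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
    return np.sign(scores)
```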



    See Also: 


    arcing, Bootstrap AGGregation (bagging)



    ~ Jai Krishna Ponnappan

    Find Jai on Twitter | LinkedIn | Instagram


    Be sure to refer to the complete & active AI Terms Glossary here.

    You may also want to read more about Artificial Intelligence here.







    AI Terms Glossary - Active Learning

     



    Active Learning is a proposed strategy for improving the accuracy of machine learning algorithms by enabling them to designate the regions of the input space they want to test.

    The algorithm may choose a new point x at any time, observe the outcome y, and add the new (x, y) pair to its training base.

    Neural networks, prediction functions, and clustering functions have all benefited from it.
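
    The query-then-retrain loop can be sketched as follows. This is a minimal pool-based uncertainty-sampling example in Python; the oracle callable, which supplies the label y for a chosen point x, and the use of scikit-learn's logistic regression are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def most_uncertain(model, X_pool):
    """Index of the pool point whose predicted class probability is closest to 0.5."""
    proba = model.predict_proba(X_pool)[:, 1]
    return int(np.argmin(np.abs(proba - 0.5)))

def active_learning_loop(X_labeled, y_labeled, X_pool, oracle, n_queries=10):
    """Repeatedly pick the most uncertain point x, obtain y from the oracle, and retrain."""
    X_l = [list(row) for row in X_labeled]
    y_l = list(y_labeled)
    pool = [list(row) for row in X_pool]
    model = LogisticRegression().fit(X_l, y_l)
    for _ in range(n_queries):
        i = most_uncertain(model, pool)
        x_new = pool.pop(i)                          # the algorithm chooses a new point x
        X_l.append(x_new)
        y_l.append(oracle(x_new))                    # examine the outcome y
        model = LogisticRegression().fit(X_l, y_l)   # add (x, y) to the training base and refit
    return model
```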




    ~ Jai Krishna Ponnappan

    Find Jai on Twitter | LinkedIn | Instagram


    Be sure to refer to the complete & active AI Terms Glossary here.

    You may also want to read more about Artificial Intelligence here.




    AI Terms Glossary - Accuracy

     


    A machine learning system's accuracy is defined as the proportion of accurate predictions or classifications the model makes over a given data set.

    It's usually calculated using a different sample from the one(s) used to build the model, called a test or "hold out" sample.

    The error rate, on the other hand, is the percentage of inaccurate predictions on the same data.
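
    For example, accuracy and error rate can be computed directly from predictions on a hold-out sample; the small Python sketch below, with made-up labels purely for illustration, shows the calculation.

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Proportion of correct predictions on a hold-out (test) sample."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

def error_rate(y_true, y_pred):
    """Proportion of incorrect predictions on the same data: 1 - accuracy."""
    return 1.0 - accuracy(y_true, y_pred)

# 4 of 5 hold-out predictions are correct -> accuracy 0.8, error rate 0.2
print(accuracy([1, 0, 1, 1, 0], [1, 0, 1, 0, 0]))    # 0.8
print(error_rate([1, 0, 1, 1, 0], [1, 0, 1, 0, 0]))  # 0.2
```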


    See Also: 

    Hold out sample, Machine Learning.



    ~ Jai Krishna Ponnappan

    Find Jai on Twitter | LinkedIn | Instagram


    Be sure to refer to the complete & active AI Terms Glossary here.

    You may also want to read more about Artificial Intelligence here.





    Artificial Intelligence - Who Is Steve Omohundro?

     




    In the field of artificial intelligence, Steve Omohundro (1959–) is a well-known scientist, author, and entrepreneur.

    He is the founder of Self-Aware Systems, the chief scientist of AIBrain, and an adviser to the Machine Intelligence Research Institute (MIRI).

    Omohundro is well-known for his insightful, speculative studies on the societal ramifications of AI and the safety of smarter-than-human computers.

    Omohundro believes that a fully predictive artificial intelligence science is required.

    He thinks that if goal-driven artificial general intelligences are not carefully created in the future, they would likely generate negative activities, cause conflicts, or even lead to the extinction of humanity.

    Indeed, Omohundro argues that AIs with inadequate programming might act psychopathically.

    He claims that programmers often create flaky software and programs that "manipulate bits" without knowing why.

    Omohundro wants AGIs to be able to monitor and comprehend their own operations, spot flaws, and rewrite themselves to improve performance.

    This is what genuine machine learning looks like.

    The risk is that AIs may evolve into something that humans will be unable to comprehend, make incomprehensible judgments, or have unexpected repercussions.

    As a result, Omohundro contends, artificial intelligence must evolve into a discipline that is more predictive and anticipatory.

    Omohundro also suggests in "The Nature of Self-Improving Artificial Intelligence," one of his widely available online papers, that a future self-aware system that will most likely access the internet will be influenced by the scientific papers it reads, which recursively justifies writing the paper in the first place.

    AGI agents must be programmed with value sets that drive them to pick objectives that benefit mankind as they evolve.

    Self-improving systems like the ones Omohundro is working on don't exist yet.

    Inventive minds, according to Omohundro, have so far produced only inert systems (chairs and coffee mugs), reactive systems (mousetraps and thermostats), adaptive systems (advanced speech recognition systems and intelligent virtual assistants), and deliberative systems (the Deep Blue chess-playing computer).

    Self-improving systems, as described by Omohundro, would have to actively think and make judgments in the face of uncertainty regarding the effects of self-modification.

    The essential natures of self-improving AIs, according to Omohundro, may be understood as rational agents, a notion he draws from microeconomic theory.

    Because humans are only imperfectly rational, the discipline of behavioral economics has exploded in popularity in recent decades.

    AI agents, by contrast, because of their self-improving cognitive architectures, must eventually settle on rational goals and preferences ("utility functions") that sharpen their beliefs about their surroundings.

    These beliefs will then assist them in forming new aims and preferences.

    Omohundro draws on mathematician John von Neumann and economist Oskar Morgenstern's contributions to expected utility theory.

    Completeness, transitivity, continuity, and independence are the axioms of rational behavior proposed by von Neumann and Morgenstern.

    For artificial intelligences, Omohundro proposes four "fundamental drives": efficiency, self-preservation, resource acquisition, and creativity.

    These motivations are expressed as "behaviors" by future AGIs with self-improving, rational agency.

    Both physical and computational operations are included in the efficiency drive.

    Artificial intelligences will strive to make effective use of limited resources such as space, mass, energy, processing time, and computer power.

    The self-preservation drive will lead powerful artificial intelligences to avoid losing resources to other agents and to protect their ability to fulfill their goals.

    A passively behaving artificial intelligence is unlikely to survive.

    The acquisition drive is the process of locating new sources of resources, trading for them, cooperating with other agents, or even stealing what is required to reach the end objective.

    The creative drive encompasses all of the innovative ways in which an AGI may boost anticipated utility in order to achieve its many objectives.

    This motivation might include the development of innovative methods for obtaining and exploiting resources.

    Signaling, according to Omohundro, is a singular human source of creative energy, variation, and divergence.

    Humans utilize signaling to express their intentions regarding other helpful tasks they are doing.

    If A is more likely to be true when B is true than when B is false, then A signals B.

    Employers, for example, are more likely to hire potential workers who are enrolled in a class that looks to offer benefits that the company desires, even if this is not the case.

    The fact that the potential employee is enrolled in class indicates to the company that he or she is more likely to learn useful skills than the candidate who is not.

    Similarly, a billionaire does not need to gift another billionaire a billion dollars to indicate that they are among the super-wealthy.

    A huge bag containing several million dollars could suffice.

    Omohundro's notion of fundamental AI drives was included into Oxford philosopher Nick Bostrom's instrumental convergence thesis, which claims that a few instrumental values are sought in order to accomplish an ultimate objective, often referred to as a terminal value.

    Self-preservation, goal content integrity (retention of preferences over time), cognitive improvement, technical perfection, and resource acquisition are among Bostrom's instrumental values (he prefers not to call them drives).

    Future AIs might have a reward function or a terminal value of optimizing some utility function.

    Omohundro wants designers to construct artificial general intelligence with kindness toward people as its ultimate objective.

    Military conflicts and economic concerns, on the other hand, he believes, make the development of destructive artificial general intelligence more plausible.

    Drones are increasingly being used by military forces to deliver explosives and conduct surveillance.

    He also claims that future battles will almost certainly be informational in nature.

    In a future where cyberwar is a possibility, a cyberwar infrastructure will be required.

    Energy encryption, a unique wireless power transmission method that scrambles energy so that it stays safe and cannot be exploited by rogue devices, is one way to counter the issue.

    Another area where information conflict is producing instability is the employment of artificial intelligence in fragile financial markets.

    Digital cryptocurrencies and crowdsourcing marketplace systems like Mechanical Turk are ushering in a new era of autonomous capitalism, according to Omohundro, and we are unable to deal with the repercussions.

    As president of the company Possibility Research, advocate of a new cryptocurrency called Pebble, and advisory board member of the Institute for Blockchain Studies, Omohundro has spoken about the need for complete digital provenance in economic and cultural recordkeeping to prevent AI deception, fakery, and fraud from overtaking human society.

    In order to build a verifiable "blockchain civilization based on truth," he suggests that digital provenance methods and sophisticated cryptography techniques monitor autonomous technology and better check the history and structure of any alterations being performed.

    Possibility Research focuses on smart technologies that enhance computer programming, decision-making systems, simulations, contracts, robotics, and governance.

    Omohundro has advocated for the creation of so-called Safe AI scaffolding solutions to counter dangers in recent years.

    The objective is to create self-contained systems that already have temporary scaffolding or staging in place.

    The scaffolding assists programmers who are assisting in the development of a new artificial general intelligence.

    The virtual scaffolding may be removed after the AI has been completed and evaluated for stability.

    The initial generation of restricted safe systems created in this manner might be used to develop and test less constrained AI agents in the future.

    Utility functions aligned with agreed-upon human philosophical imperatives, human values, and democratic principles would be included in advanced scaffolding.

    Self-improving AIs may eventually have inscribed the Universal Declaration of Human Rights or a Universal Constitution into their fundamental fabric, guiding their growth, development, choices, and contributions to mankind.

    Omohundro earned degrees in mathematics and physics from Stanford University and a PhD in physics from the University of California, Berkeley.

    In 1985, he co-created StarLisp, a high-level programming language for the Thinking Machines Corporation's Connection Machine, a massively parallel supercomputer then under construction.

    He wrote the book Geometric Perturbation Theory in Physics (1986) on differential and symplectic geometry.

    He was an associate professor of computer science at the University of Illinois in Urbana-Champaign from 1986 to 1988.

    He cofounded the Center for Complex Systems Research with Stephen Wolfram and Norman Packard.

    He also oversaw the university's Vision and Learning Group.

    He created the 3D graphics system for Mathematica, the symbolic mathematical computation application.

    In 1990, he led an international team at the University of California, Berkeley's International Computer Science Institute (ICSI) to develop Sather, an object-oriented, functional programming language.

    Automated lip-reading, machine vision, machine learning algorithms, and other digital technologies have all benefited from his work.



    ~ Jai Krishna Ponnappan

    Find Jai on Twitter | LinkedIn | Instagram


    You may also want to read more about Artificial Intelligence here.



    See also: 


    General and Narrow AI; Superintelligence.



    References & Further Reading:



    Bostrom, Nick. 2012. “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents.” Minds and Machines 22, no. 2: 71–85.

    Omohundro, Stephen M. 2008a. “The Basic AI Drives.” In Proceedings of the 2008 Conference on Artificial General Intelligence, 483–92. Amsterdam: IOS Press.

    Omohundro, Stephen M. 2008b. “The Nature of Self-Improving Artificial Intelligence.” https://pdfs.semanticscholar.org/4618/cbdfd7dada7f61b706e4397d4e5952b5c9a0.pdf.

    Omohundro, Stephen M. 2012. “The Future of Computing: Meaning and Values.” https://selfawaresystems.com/2012/01/29/the-future-of-computing-meaning-and-values.

    Omohundro, Stephen M. 2013. “Rational Artificial Intelligence for the Greater Good.” In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart, 161–79. Berlin: Springer.

    Omohundro, Stephen M. 2014. “Autonomous Technology and the Greater Human Good.” Journal of Experimental and Theoretical Artificial Intelligence 26, no. 3: 303–15.

    Shulman, Carl. 2010. Omohundro’s ‘Basic AI Drives’ and Catastrophic Risks. Berkeley, CA: Machine Intelligence Research Institute.




    Artificial Intelligence - Who Is Helen Nissenbaum?

     



    In her research, Helen Nissenbaum (1954–), a PhD in philosophy, looks at the ethical and political consequences of information technology.

    She's worked at Stanford University, Princeton University, New York University, and Cornell Tech, among other places.

    Nissenbaum has also worked as the primary investigator on grants from the National Security Agency, the National Science Foundation, the Air Force Office of Scientific Research, the United States Department of Health and Human Services, and the William and Flora Hewlett Foundation, among others.

    Nissenbaum's research examines how big data, machine learning, algorithms, and models produce their outcomes.

    Her primary concern, which runs across all of these themes, is privacy.

    Nissenbaum explores these problems in her 2010 book, Privacy in Context: Technology, Policy, and the Integrity of Social Life, by using the concept of contextual integrity, which views privacy in terms of acceptable information flows rather than merely prohibiting all information flows.

    In other words, she's interested in establishing an ethical framework within which data may be obtained and utilized responsibly.

    The challenge with developing such a framework, however, is that when many data sources are combined, or aggregated, it becomes possible to learn more about the people from whom the data was obtained than would be feasible with any individual source of data.

    Such aggregated data is used to profile consumers, allowing credit and insurance businesses to make judgments based on the information.

    These problems are compounded by outdated data-regulation regimes.

    One big issue is that the distinction between monitoring users to construct profiles and targeting adverts to those profiles is blurry.

    To make things worse, adverts are often supplied by third-party websites other than the one the user is currently on.

    This leads to the ethical dilemma of many hands, a quandary in which numerous parties are involved and it is unclear who is ultimately accountable for a certain issue, such as maintaining users' privacy in this situation.

    Furthermore, because so many organizations may receive this information and use it for a variety of tracking and targeting purposes, it is impossible to adequately inform users about how their data will be used and allow them to consent or opt out.

    In addition to these issues, the AI systems that use this data are themselves biased.

    This bias, however, is a social issue rather than a computational one, which means that much of the scholarly effort focused on resolving computational bias has been misplaced.

    As an illustration of this prejudice, Nissenbaum cites Google's Behavioral Advertising system.

    When a search contains a name that is traditionally African American, the Google Behavioral Advertising algorithm will show advertising for background checks more often.

    This sort of racism is not encoded in the code itself; rather, it emerges through social interaction with the adverts, since those searching for traditionally African-American names are more likely to click on background check links.

    Correcting these bias-related issues, according to Nissenbaum, would need considerable regulatory reforms connected to the ownership and usage of big data.

    In light of this, and with few data-related legislative changes on the horizon, Nissenbaum has worked to devise measures that can be implemented right now.

    Obfuscation, which comprises purposely adding superfluous information that might interfere with data gathering and monitoring procedures, is the major framework she has utilized to construct these tactics.

    She claims that this is justified by the uneven power dynamics that have resulted in near-total monitoring.

    Nissenbaum and her partners have created a number of useful internet browser plug-ins based on this obfuscation technology.

    TrackMeNot was the first of these obfuscating browser add-ons.

    This plug-in issues random queries to a number of search engines in an attempt to contaminate the stream of data obtained and prevent search businesses from constructing an aggregated profile based on the user's genuine searches.

    This plug-in is designed for people who are dissatisfied with existing data rules and want to take quick action against companies and governments who are aggressively collecting information.

    This approach follows the obfuscation strategy because, rather than concealing the original search phrases, it simply hides them among other search terms, which Nissenbaum refers to as "ghosts."

    Adnostic is a prototype Firefox browser plug-in aimed at addressing the privacy issues associated with online behavioral advertising tactics.

    Currently, online behavioral advertising is accomplished by recording a user's activity across numerous websites and then placing the most relevant adverts at those sites.

    Multiple websites gather, aggregate, and keep this behavioral data forever.

    Adnostic provides a technology that enables profiling and targeting to take place exclusively on the user's computer, with no data exchanged with third-party websites.

    Although the user continues to get targeted advertisements, third-party websites do not gather or keep behavioral data.

    AdNauseam is yet another obfuscation-based plugin.

    This program, which runs in the background, clicks all of the adverts on the website.

    The declared goal of this activity is to contaminate the data stream, making targeting and monitoring ineffective.

    Advertisers' expenses will almost certainly rise as a result of this.

    This project proved controversial, and in 2017, it was removed from the Chrome Web Store.

    Although workarounds exist to enable users to continue installing the plugin, its loss of availability in the store makes it less accessible to the broader public.

    Nissenbaum's book goes into great depth on the ethical challenges surrounding big data and the AI systems that are built on top of it.

    Nissenbaum has built realistic obfuscation tools that may be accessed and utilized by anybody interested, in addition to offering specific legislative recommendations to solve troublesome privacy issues.


    ~ Jai Krishna Ponnappan

    Find Jai on Twitter | LinkedIn | Instagram


    You may also want to read more about Artificial Intelligence here.



    See also: 


    Biometric Privacy and Security; Biometric Technology; Robot Ethics.


    References & Further Reading:


    Barocas, Solon, and Helen Nissenbaum. 2009. “On Notice: The Trouble with Notice and Consent.” In Proceedings of the Engaging Data Forum: The First International Forum on the Application and Management of Personal Electronic Information, n.p. Cambridge, MA: Massachusetts Institute of Technology.

    Barocas, Solon, and Helen Nissenbaum. 2014. “Big Data’s End Run around Consent and Anonymity.” In Privacy, Big Data, and the Public Good, edited by Julia Lane, Victoria Stodden, Stefan Bender, and Helen Nissenbaum, 44–75. Cambridge, UK: Cambridge University Press.

    Brunton, Finn, and Helen Nissenbaum. 2015. Obfuscation: A User’s Guide for Privacy and Protest. Cambridge, MA: MIT Press.

    Lane, Julia, Victoria Stodden, Stefan Bender, and Helen Nissenbaum, eds. 2014. Privacy, Big Data, and the Public Good. New York: Cambridge University Press.

    Nissenbaum, Helen. 2010. Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford, CA: Stanford University Press.


    Artificial Intelligence - Who Was Marvin Minsky?

     






    Donner Professor of Natural Sciences Marvin Minsky (1927–2016) was a well-known cognitive scientist, inventor, and artificial intelligence researcher from the United States.

    At the Massachusetts Institute of Technology, he cofounded the Artificial Intelligence Laboratory in the 1950s and the Media Lab in the 1980s.

    His renown was such that, when he served as an adviser on Stanley Kubrick's iconic film 2001: A Space Odyssey in the 1960s, the sleeping astronaut Dr. Victor Kaminski (killed by the sentient HAL 9000 computer) was named after him.

    Toward the end of high school in the 1940s, Minsky became interested in intelligence, thinking, and learning machines.

    He was interested in neurology, physics, music, and psychology as a Harvard student.



    He collaborated with cognitive psychologist George Miller on theories of problem solving and learning, and with J.C.R. Licklider, professor of psychoacoustics and later a father of the internet, on theories of perception and brain modeling.

    Minsky began thinking about models of the mind while at Harvard.

    "I thought the brain was made up of tiny relays called neurons, each of which had a probability linked to it that determined whether the neuron would conduct an electric pulse," he later recalled.

    "Technically, this system is now known as a stochastic neural network" (Bern stein 1981).

    This hypothesis is comparable to Donald Hebb's Hebbian theory, which he set forth in his book The Organization of Behavior (1949).

    In the mathematics department, he finished his undergraduate thesis on topology.

    Minsky studied mathematics as a graduate student at Princeton University, but he became increasingly interested in attempting to build artificial neurons out of vacuum tubes like those described in Warren McCulloch and Walter Pitts' famous 1943 paper "A Logical Calculus of the Ideas Immanent in Nervous Activity." He thought that a machine like this might navigate mazes like a rat.



    In the summer of 1951, he and fellow Princeton student Dean Edmonds created the system, termed SNARC (Stochastic Neural-Analog Reinforcement Calculator), with money from the Office of Naval Research.

    There were 300 tubes in the machine, as well as multiple electric motors and clutches.

    The machine used the clutches to adjust its own knobs, making it a learning machine.

    The electric rat initially moved at random, but through reinforcement of probabilities it learned to make better choices and achieve a desired goal.

    Eventually, multiple rats occupied the maze and learned from one another.

    In his doctoral dissertation, Minsky added a second memory to his hard-wired neural network, which helped the rat recall what stimulus it had received.

    This enabled the system, when confronted with a new situation, to search its memories and predict the best course of action.

    Minsky had believed that by adding enough memory loops to his self-organizing random networks, conscious intelligence would arise spontaneously.

    In 1954, Minsky finished his dissertation, "Neural Nets and the Brain Model Problem." After graduating from Princeton, Minsky continued to consider how to create artificial intelligence.



    In 1956, he organized and participated in the Dartmouth Summer Research Project on Artificial Intelligence with John McCarthy, Nathaniel Rochester, and Claude Shannon.

    The Dartmouth workshop is often referred to as a watershed moment in AI research.

    During the summer workshop, since no computer was available, Minsky began simulating on bits of paper the computational process of proving Euclid's geometric theorems.

    He realized he could create an imagined computer that would locate proofs without having to tell it precisely what it needed to accomplish.

    Minsky showed the results to Nathaniel Rochester, who returned to IBM and asked Herbert Gelernter, a new physics hire, to write a geometry-proving program on a computer.

    Gelernter built a program in FORTRAN List Processing Language, a language he invented.

    Later, John McCarthy combined Gelernter's language with ideas from mathematician Alonzo Church to develop LISP (List Processing), the most widely used AI language.

    Minsky joined MIT in 1957.

    He began working on pattern recognition problems with Oliver Selfridge at the university's Lincoln Laboratory.

    The next year, he was hired as an assistant professor in the mathematics department.

    He founded the AI Group with McCarthy, who had transferred to MIT from Dartmouth.

    They continued to work on machine learning concepts.

    Minsky started working with mathematician Seymour Papert in the 1960s.

    Their joint publication, Perceptrons: An Introduction to Computational Geometry (1969), analyzed a kind of artificial neural network introduced by Cornell Aeronautical Laboratory psychologist Frank Rosenblatt.

    The book sparked a decades-long debate in the AI field, which continues to this day in certain aspects.

    The mathematical arguments presented in Minsky and Papert's book pushed the field toward symbolic AI (also known as "Good Old-Fashioned AI" or GOFAI) until the 1980s, when artificial intelligence researchers rediscovered perceptrons and neural networks.

    Time-shared computers were more widely accessible on the MIT campus in the 1960s, and Minsky started working with students on machine intelligence issues.

    One of the first efforts was to teach computers how to solve problems in basic calculus using symbolic manipulation techniques such as differentiation and integration.

    In 1961, his student James Robert Slagle built a symbol-manipulation program called SAINT (Symbolic Automatic INTegrator), which ran on an IBM 7090 transistorized mainframe computer.

    Other students extended the technique to whatever symbol manipulation their program MACSYMA required.

    Minsky's students also tackled the challenge of teaching a computer to reason by analogy.

    Minsky's team also worked on issues related to computational linguistics, computer vision, and robotics.

    Daniel Bobrow, one of his pupils, taught a computer how to answer word problems, an accomplishment that combined language processing and mathematics.

    Henry Ernst, a student, designed the first computer-controlled robot, a mechanical hand with photoelectric touch sensors for grasping nuclear materials.

    Minsky collaborated with Papert to develop semi-independent programs that could interact with one another to address increasingly complex challenges in computer vision and manipulation.

    Minsky and Papert combined their nonhierarchical management techniques into a natural intelligence hypothesis known as the Society of Mind.

    Intelligence, according to this view, is an emergent feature that results from tiny interactions between programs.

    After studying various constructions, the MIT AI Group trained a computer-controlled robot to build structures out of children's blocks by 1970.

    Throughout the 1970s and 1980s, the blocks-manipulating robot and the Society of Mind hypothesis evolved.

    Minsky finally released The Society of Mind (1986), a model for the creation of intelligence through individual mental actors and their interactions, rather than any fundamental principle or universal technique.

    He discussed consciousness, self, free will, memory, genius, language, memory, brainstorming, learning, and many other themes in the book, which is made up of 270 unique articles.

    Agents, according to Minsky, do not require their own mind, thinking, or feeling abilities.

    They are not intelligent.

    However, when they work together as a civilization, they develop what we call human intellect.

    To put it another way, understanding how to achieve any certain goal requires the collaboration of various agents.

    Agents are required by Minsky's robot constructor to see, move, locate, grip, and balance blocks.

    "I'd like to believe that this effort provided us insights into what goes on within specific sections of children's brains when they learn to 'play' with basic toys," he wrote (Minsky 1986, 29).

    Minsky speculated that there may be over a hundred agents collaborating to create what we call mind.

    He expanded on his Society of Mind views in the book The Emotion Machine (2006).

    In it, he argued that emotions are not a separate kind of reasoning.

    Rather, they reflect different ways of thinking about various sorts of challenges that people face in the real world.

    According to Minsky, the mind changes between different modes of thought, thinks on several levels, finds various ways to represent things, and constructs numerous models of ourselves.

    Minsky remarked on a broad variety of popular and significant subjects linked to artificial intelligence and robotics in his final years via his books and interviews.

    The Turing Option (1992), a book created by Minsky in partnership with science fiction novelist Harry Harrison, is set in the year 2023 and deals with issues of artificial intelligence.

    In a 1994 article for Scientific American headlined "Will Robots Inherit the Earth?" he said, "Yes, but they will be our children" (Minsky 1994, 113).

    Minsky once suggested that a superintelligent AI may one day spark a Riemann Hypothesis Catastrophe, in which an agent charged with answering the hypothesis seizes control of the whole planet's resources in order to obtain even more supercomputing power.

    He didn't think this was a plausible scenario.

    Humans could be able to converse with intelligent alien life forms, according to Minsky.

    They'd think like humans because they'd be constrained by the same "space, time, and material constraints" (Minsky 1987, 117).

    Minsky was also a critic of the Loebner Prize, the world's oldest Turing Test-like competition, claiming that it is detrimental to artificial intelligence research.

    He offered his own Minsky Loebner Prize Revocation Prize to anybody who could halt Hugh Loebner's yearly competition.

    Both Minsky and Loebner died in 2016, yet the Loebner Prize competition is still going on.

    Minsky was also responsible for the development of the confocal microscope (1957) and the head-mounted display (HMD) (1963).

    He was awarded the Turing Award in 1969, the Japan Prize in 1990, and the Benjamin Franklin Medal in 2001. Minsky's doctoral students included Daniel Bobrow (operating systems), K. Eric Drexler (molecular nanotechnology), Carl Hewitt (mathematics and philosophy of logic), Danny Hillis (parallel computing), Benjamin Kuipers (qualitative simulation), Ivan Sutherland (computer graphics), and Patrick Winston (who succeeded Minsky as director of the MIT AI Lab).


    ~ Jai Krishna Ponnappan

    Find Jai on Twitter | LinkedIn | Instagram


    You may also want to read more about Artificial Intelligence here.




    See also: 


    AI Winter; Chatbots and Loebner Prize; Dartmouth AI Conference; 2001: A Space Odyssey.



    References & Further Reading:


    Bernstein, Jeremy. 1981. “Marvin Minsky’s Vision of the Future.” New Yorker, December 7, 1981. https://www.newyorker.com/magazine/1981/12/14/a-i.

    Minsky, Marvin. 1986. The Society of Mind. London: Picador.

    Minsky, Marvin. 1987. “Why Intelligent Aliens Will Be Intelligible.” In Extraterrestrials: Science and Alien Intelligence, edited by Edward Regis, 117–28. Cambridge, UK: Cambridge University Press.

    Minsky, Marvin. 1994. “Will Robots Inherit the Earth?” Scientific American 271, no. 4 (October): 108–13.

    Minsky, Marvin. 2006. The Emotion Machine. New York: Simon & Schuster.

    Minsky, Marvin, and Seymour Papert. 1969. Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: Massachusetts Institute of Technology.

    Singh, Push. 2003. “Examining the Society of Mind.” Computing and Informatics 22, no. 6: 521–43.

