
Artificial Intelligence - How Is AI Contributing To Cybernetics?

 





The study of communication and control in living organisms and machines is known as cybernetics.

Although the phrase "cybernetic thinking" is no longer generally used in the United States, it pervades computer science, engineering, biology, and the social sciences today.

Throughout the last half-century, cybernetic, connectionist, and artificial neural network approaches to information theory and technology have often clashed, and in some cases hybridized, with symbolic AI methods.

Norbert Wiener (1894–1964), who coined the term "cybernetics" from the Greek word for "steersman," saw the field as a unifying force that brought disparate topics like game theory, operations research, theory of automata, logic, and information theory together and elevated them.

Wiener argued in Cybernetics, or Control and Communication in the Animal and the Machine (1948) that contemporary science had become too much of a specialist's playground as a consequence of tendencies dating back to the early Enlightenment.

Wiener envisioned a period when experts might collaborate "not as minions of some great administrative officer, but united by the desire, indeed by the spiritual imperative, to comprehend the area as a whole, and to give one another the power of that knowledge" (Wiener 1948b, 3).

For Wiener, cybernetics provided researchers with access to many sources of knowledge while maintaining their independence and unbiased detachment.

Wiener also believed that man and machine should be seen as basically interchangeable epistemologically.

The biological sciences and medicine, according to Wiener, would remain semi-exact and dependent on observer subjectivity until these common components were discovered.



Wiener developed his cybernetic theory in the context of World War II (1939–1945).

Interdisciplinary, mathematics-heavy fields such as operations research and game theory had already been used to hunt German submarines and to find workable solutions to complex military decision-making problems.

Wiener threw himself into the task of deploying cybernetic weaponry against the Axis powers in his role as a military adviser.

To that end, Wiener focused on deciphering the feedback processes involved in predicting the curvilinear flight paths of aircraft and on applying these concepts to the development of advanced fire-control systems for shooting down enemy planes.

Claude Shannon, a long-serving Bell Labs researcher, went even further than Wiener in attempting to bring cybernetic ideas to life, most notably in his experiments with Theseus, an electromechanical mouse that used digital relays and a feedback process to learn how to navigate mazes based on previous experience.

Shannon created a slew of other automata that mimicked the behavior of thinking machines.

Shannon's mentees, including AI pioneers John McCarthy and Marvin Minsky, followed in his footsteps and came to regard intelligence as symbolic information processing.

McCarthy, who is often credited with establishing the field of artificial intelligence, studied the mathematical logic that underpins human thought.



Minsky opted to research neural network models as a machine imitation of human vision.

The so-called McCulloch-Pitts neurons were the core components of cybernetic understanding of human cognitive processing.

Named after Warren McCulloch and Walter Pitts, these neurons were strung together by axon-like connections for communication, forming a cybernated system that crudely simulated the wet science of the brain.

Pitts admired Wiener's straightforward analogy of cerebral tissue to vacuum tube technology, and saw these switching devices as metallic analogues to organic cognitive components.

McCulloch-Pitts neurons were believed to be capable of mimicking basic logical processes required for learning and memory.

In the 1940s, Pitts perceived a close binary equivalence between the electrical discharges produced by these devices and the electrochemical nerve impulses generated in the brain.

In their most basic form, McCulloch-Pitts neurons take inputs that are either zero or one and produce an output that is likewise zero or one.

Each input may be categorized as excitatory or inhibitory.
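The behavior just described can be sketched in a few lines. This is an illustrative modern Python rendering of the McCulloch-Pitts model, not code from the historical literature; the function names and the veto treatment of inhibitory inputs follow the commonly taught formulation.

```python
def mcculloch_pitts(inputs, excitatory, threshold):
    """A McCulloch-Pitts neuron: binary inputs, binary output.

    inputs     -- list of 0/1 signals
    excitatory -- parallel list of booleans; False marks an inhibitory input
    threshold  -- firing threshold for the sum of excitatory inputs
    Any active inhibitory input vetoes firing, per the classic formulation.
    """
    if any(x == 1 and not exc for x, exc in zip(inputs, excitatory)):
        return 0
    total = sum(x for x, exc in zip(inputs, excitatory) if exc)
    return 1 if total >= threshold else 0

# The basic logical processes mentioned above emerge from threshold choices:
AND = lambda a, b: mcculloch_pitts([a, b], [True, True], threshold=2)
OR  = lambda a, b: mcculloch_pitts([a, b], [True, True], threshold=1)
NOT = lambda a:    mcculloch_pitts([a], [False], threshold=0)
```

Choosing a threshold of 2 over two excitatory inputs yields AND; lowering it to 1 yields OR, which is the sense in which these units were thought capable of mimicking simple logic.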

It was therefore merely a short step from artificial to animal memory for Pitts and Wiener.

Donald Hebb, a Canadian neuropsychologist, made even more significant contributions to the research of artificial neurons.

These were detailed in his book The Organization of Behavior, published in 1949.

Hebbian theory explains associative learning as a process in which neurons that repeatedly fire together strengthen their synaptic connections ("cells that fire together wire together").
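Hebb's rule is often summarized as a weight change proportional to the product of pre- and post-synaptic activity. A minimal sketch, with an illustrative learning rate and activity pattern (the values are invented for demonstration):

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.1):
    """One step of Hebb's rule: strengthen the weight between
    pre-synaptic activity x and post-synaptic activity y in
    proportion to their product."""
    return w + lr * np.outer(y, x)

# Two input neurons and one output neuron; only the first input
# is repeatedly co-active with the output:
w = np.zeros((1, 2))
for _ in range(5):
    x = np.array([1.0, 0.0])   # first input fires, second is silent
    y = np.array([1.0])        # output fires with it
    w = hebbian_update(w, x, y)
# The co-active connection grows; the silent one stays at zero.
```

After five co-activations the first weight has grown to 0.5 while the second remains 0, which is the associative effect Hebb described.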

In his study of the artificial "perceptron," a model and algorithm that weighted inputs so that it could be taught to detect particular kinds of patterns, U.S. Navy researcher Frank Rosenblatt expanded the metaphor.

The eye and cerebral circuitry of the perceptron could approximately discern between pictures of cats and dogs.
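Rosenblatt's learning rule can be sketched as follows. This is a modern toy reconstruction in Python, not the original photocell-and-wire hardware, and the 2-D data points stand in for image features in the "cats versus dogs" task described above.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Rosenblatt's perceptron rule: nudge the weight vector toward
    each misclassified example. X is an (n, d) array of inputs and
    y holds target labels of +1 or -1."""
    w = np.zeros(X.shape[1] + 1)                 # weights plus a bias term
    Xb = np.hstack([X, np.ones((len(X), 1))])    # append constant input
    for _ in range(epochs):
        for xi, target in zip(Xb, y):
            pred = 1 if xi @ w >= 0 else -1
            if pred != target:                   # update only on mistakes
                w += lr * target * xi
    return w

# Linearly separable toy data (stand-ins for two image classes):
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -2.0], [-2.0, -1.5]])
y = np.array([1, 1, -1, -1])
w = train_perceptron(X, y)
preds = [1 if np.append(xi, 1) @ w >= 0 else -1 for xi in X]
```

Because the update fires only on mistakes, the weights stop changing once every training point is on the correct side of the decision boundary, which is guaranteed to happen for separable data.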

The navy saw the perceptron as "the embryo of an electronic computer that it expects will be able to walk, talk, see, write, reproduce itself, and be conscious of its existence," according to a 1958 interview with Rosenblatt (New York Times, July 8, 1958, 25).

Wiener, Shannon, McCulloch, Pitts, and other cyberneticists were nourished by the famed Macy Conferences on Cybernetics in the 1940s and 1950s, which attempted to automate human comprehension of the world and the learning process.

The gatherings also acted as a forum for discussing artificial intelligence issues.

The divide between the two areas deepened over time, but it was already visible during the 1956 Dartmouth Summer Research Project on Artificial Intelligence.

By 1970, cybernetics research on living systems was no longer well defined in American scientific practice; the computing sciences and technologies had evolved out of machine cybernetics.

Cybernetic theories are now on the periphery of social and hard scientific disciplines such as cognitive science, complex systems, robotics, systems theory, and computer science, but they were critical to the information revolution of the twentieth and twenty-first centuries.

In recent studies of artificial neural networks and unsupervised machine learning, Hebbian theory has seen a resurgence of attention.

Cyborgs, beings made up of biological and mechanical parts that augment normal functions, can be regarded as a subset of cybernetics (a subject once known as "medical cybernetics" in the 1960s).


~ Jai Krishna Ponnappan




See also: 


Dartmouth AI Conference; Macy Conferences; Warwick, Kevin.


Further Reading


Ashby, W. Ross. 1956. An Introduction to Cybernetics. London: Chapman & Hall.

Galison, Peter. 1994. “The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision.” Critical Inquiry 21, no. 1 (Autumn): 228–66.

Kline, Ronald R. 2017. The Cybernetics Moment: Or Why We Call Our Age the Information Age. Baltimore, MD: Johns Hopkins University Press.

Mahoney, Michael S. 1990. “Cybernetics and Information Technology.” In Companion to the History of Modern Science, edited by R. C. Olby, G. N. Cantor, J. R. R. Christie, and M. J. S. Hodge, 537–53. London: Routledge.

“New Navy Device Learns by Doing; Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser.” 1958. New York Times, July 8, 25.

Wiener, Norbert. 1948a. “Cybernetics.” Scientific American 179, no. 5 (November): 14–19.

Wiener, Norbert. 1948b. Cybernetics, or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press.



Artificial Intelligence - Who Is Sherry Turkle?

 


 

 

Sherry Turkle (1948–) has a background in sociology and psychology, and her work focuses on human-technology interaction.

While her study in the 1980s focused on how technology affects people's thinking, her work in the 2000s has become more critical of how technology is utilized at the expense of building and maintaining meaningful interpersonal connections.



She has used AI-based products, such as children's toys and robotic pets for the elderly, to highlight what people lose out on when they interact with such things.


Turkle has been at the vanguard of AI breakthroughs as a professor at the Massachusetts Institute of Technology (MIT) and the creator of the MIT Initiative on Technology and the Self.

In Life on the Screen: Identity in the Age of the Internet (1995), she highlights a conceptual change in the understanding of AI that occurred between the 1960s and 1980s, one that substantially changed the way humans relate to and interact with AI.



She claims that early AI paradigms depended on extensive preprogramming and employed a rule-based concept of intelligence.


However, this viewpoint has since given way to one that considers intelligence to be emergent.

This emergent paradigm, which became the recognized mainstream view by 1990, claims that AI arises from a much simpler set of learning algorithms.

The emergent method, according to Turkle, aims to emulate the way the human brain functions, assisting in the breaking down of barriers between computers and nature, and more generally between the natural and the artificial.

In summary, an emergent approach to AI allows people to connect to the technology more easily, even thinking of AI-based programs and gadgets as children.



The rising acceptance of the emergent paradigm of AI, and the enhanced relatability it heralds, represents a significant turning point not just for the field of AI but also for Turkle's study and writing on the subject.


Turkle started to employ ethnographic research techniques to study the relationship between humans and their gadgets in two edited collections, Evocative Objects: Things We Think With (2007) and The Inner History of Devices (2008).


She emphasized in The Inner History of Devices that her intimate ethnography, or the ability to "listen with a third ear," is required to get past the advertising-based clichés that are often employed when discussing technology.


This method comprises setting up time for silent meditation so that participants may think thoroughly about their interactions with their equipment.


Turkle used similar intimate ethnographic approaches in her second major book, Alone Together: Why We Expect More from Technology and Less from Each Other (2011), to argue that the increasing connection between people and the technology they use is harmful.

These issues are connected to the increased usage of social media as a form of communication, as well as the continuous degree of familiarity and relatability with technology gadgets, which stems from the emerging AI paradigm that has become practically omnipresent.

She traced the origins of the dilemma back to early pioneers in the field of cybernetics, citing, for example, Norbert Wiener's speculations on the idea of transmitting a human being over a telegraph line in his book God & Golem, Inc. (1964).

Because it reduces both people and technology to information, this approach to cybernetic thinking blurs the barriers between them.



In terms of AI, this implies that it doesn't matter whether the machines with which we interact are really intelligent.


Turkle claims that by engaging with and caring for these technologies, we may deceive ourselves into feeling we are in a relationship, causing us to treat them as if they were sentient.

In a 2006 presentation titled "Artificial Intelligence at 50: From Building Intelligence to Nurturing Sociabilities" at the Dartmouth Artificial Intelligence Conference, she recognized this trend.

She identified the 1997 Tamagotchi, the 1998 Furby, and the 2000 My Real Baby as early versions of what she refers to as relational artifacts, which are more broadly referred to as social machines in the literature.

The main difference between these devices and previous children's toys is that these devices come pre-animated and ready for a relationship, whereas previous children's toys required children to project a relationship onto them.

Turkle argues that this change is about our human weaknesses as much as it is about computer capabilities.

In other words, just caring for an item increases the likelihood of not only seeing it as intelligent but also feeling a connection to it.

This sense of connection is more relevant to the typical person engaging with these technologies than abstract philosophic considerations concerning the nature of their intelligence.



Turkle delves more into the ramifications of people engaging with AI-based technologies in both Alone Together and Reclaiming Conversation: The Power of Talk in a Digital Age (2015).


She provides the example of Adam in Alone Together, who enjoys the admiration of the AI bots he commands in the game Civilization.

Adam appreciates the fact that he is able to create something fresh when playing.

Turkle, on the other hand, is skeptical of this interaction, stating that Adam's playing isn't actual creation, but rather the sensation of creation, and that it's problematic since it lacks meaningful pressure or danger.

In Reclaiming Conversation, she expands on this point, suggesting that social partners simply provide a perception of camaraderie.

This is important because of the value of human connection and what may be lost in relationships that simply provide a sensation or perception of friendship rather than true friendship.

Turkle believes that this transition is critical.


She claims that although connections with AI-enabled technologies may have certain advantages, they pale in comparison to what is missing: the full complexity and inherent contradictions that define what it is to be human.


A person's connection with an AI-enabled technology is not as intricate as one's interaction with other individuals.


Turkle claims that as individuals have become more used to and dependent on technology gadgets, the definition of friendship has evolved.


  • People's expectations for companionship have been simplified as a result of this transformation, and the advantages that one wants to obtain from partnerships have been reduced.
  • People now tend to associate friendship only with the concept of interaction, ignoring the more nuanced sentiments and arguments that are typical in partnerships.
  • By engaging with gadgets, one may form a relationship with them.
  • Conversations between humans have become merely transactional as human communication has shifted away from face-to-face conversation and toward interaction mediated by devices. 

In other words, the most that can be anticipated is engagement.



Turkle, who has a background in psychoanalysis, claims that this kind of transactional communication allows users to spend less time learning to view the world through the eyes of another person, which is a crucial ability for empathy.


Turkle argues we are in a robotic period in which people yearn for, and in some circumstances prefer, AI-based robotic companionship over that of other humans, drawing together these numerous streams of argument.

For example, some people enjoy conversing with their iPhone's Siri virtual assistant because they aren't afraid of being judged by it, as evidenced by a series of Siri commercials featuring celebrities talking to their phones.

Turkle has a problem with this because these devices can only respond as if they understand what is being said.


In reality, AI-based gadgets are confined to processing the literal content of the data stored on the device.

They can decipher the contents of phone calendars and emails, but they have no idea what any of this data means to the user.

To an AI-based device, there is no discernible difference between a calendar appointment for car maintenance and one for chemotherapy.

A person may lose sight of what it is to have an authentic dialogue with another human when entangled in a variety of these robotic connections with a growing number of technologies.


While Reclaiming Conversation documents deteriorating conversation skills and decreasing empathy, it ultimately ends on a positive note.

Because people are becoming increasingly dissatisfied with their relationships, there may be a chance for face-to-face human communication to reclaim its vital role.


Turkle's ideas focus on reducing the amount of time people spend on their phones, but AI's involvement in this interaction is equally critical.


  • Users must accept that their virtual assistant connections will never be able to replace face-to-face interactions.
  • This will necessitate being more deliberate in how one uses devices, prioritizing in-person interactions over the faster and easier interactions provided by AI-enabled devices.





See also: 

Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.





Artificial Intelligence - What Is The Dartmouth AI Conference?

      



    The Dartmouth Conference on Artificial Intelligence, officially known as the "Dartmouth Summer Research Project on Artificial Intelligence," was held in 1956 and is frequently referred to as the AI Constitution.


    • The multidisciplinary conference, held on the Dartmouth College campus in Hanover, New Hampshire, brought together specialists in cybernetics, automata and information theory, operations research, and game theory.
    • Claude Shannon (the "father of information theory"), Marvin Minsky, John McCarthy, Herbert Simon, Allen Newell ("founding fathers of artificial intelligence"), and Nathaniel Rochester (architect of IBM's first commercial scientific mainframe computer) were among the more than twenty attendees.
    • Participants came from the MIT Lincoln Laboratory, Bell Laboratories, and the RAND Systems Research Laboratory.




    The Rockefeller Foundation provided a substantial portion of the funding for the Dartmouth Conference.



    The Dartmouth Conference, which lasted around two months, was envisaged by the organizers as a method to make quick progress on computer models of human cognition.


    • "Every facet of learning or any other trait of intelligence may in theory be so clearly characterized that a computer can be constructed to replicate it," organizers said as a starting point for their deliberations (McCarthy 1955, 2).



    • In his Rockefeller Foundation proposal a year before the summer meeting, mathematician and principal organizer John McCarthy coined the phrase "artificial intelligence." McCarthy subsequently said that the new name was intended to establish a barrier between his research and the discipline of cybernetics.
    • He was a driving force behind the development of symbol processing techniques to artificial intelligence, which were at the time in the minority.
    • In the 1950s, analog cybernetic techniques and neural networks were the most common brain modeling methodologies.




    Issues Covered At The Conference.



    The Dartmouth Conference included a broad variety of issues, from complexity theory and neuron nets to creative thinking and unpredictability.


    • The conference is notable for being the site of the first public demonstration of Newell, Simon, and Clifford Shaw's Logic Theorist, a program that could independently verify theorems stated in Bertrand Russell and Alfred North Whitehead's Principia Mathematica.
    • The only program at the conference that tried to imitate the logical features of human intellect was Logic Theorist.
    • Attendees predicted that by 1970, digital computers would have become chess grandmasters, discovered new and important mathematical theorems, produced passable language translations and understood spoken language, and composed classical music.
    • Because the Rockefeller Foundation never received a formal report on the conference, the majority of information on the events comes from memories, handwritten notes, and a few papers authored by participants and published elsewhere.



    Mechanization of Thought Processes


    Following the Dartmouth Conference, the British National Physical Laboratory (NPL) hosted an international conference on "Mechanization of Thought Processes" in 1958.


    • Several Dartmouth Conference attendees, including Minsky and McCarthy, spoke at the NPL conference.
    • Minsky mentioned the Dartmouth Conference's relevance in the creation of his heuristic software for solving plane geometry issues and the switch from analog feedback, neural networks, and brain modeling to symbolic AI techniques at the NPL conference.
    • Neural networks did not resurface as a research topic until the mid-1980s.



    Dartmouth Summer Research Project 


    The Dartmouth Summer Research Project on Artificial Intelligence was a watershed moment in the development of AI. 

    The Dartmouth Summer Research Project on Artificial Intelligence, which began in 1956, brought together a small group of scientists to kick off this area of study. 

    To mark the occasion, more than 100 researchers and academics gathered at Dartmouth for AI@50, a conference that celebrated the past, appraised current achievements, and helped seed ideas for future artificial intelligence research. 

    John McCarthy, then a mathematics professor at the College, convened the first gathering. 

    The meeting would "continue on the basis of the premise that any facet of learning or any other attribute of intelligence may in theory be so clearly characterized that a computer can be constructed to replicate it," according to his plan. 

    The director of AI@50, Professor of Philosophy James Moor, explains that the researchers who came to Hanover 50 years ago were thinking about methods to make robots more aware and sought to set out a framework to better comprehend human intelligence.



    Context Of The Dartmouth AI Conference:


    Cybernetics, automata theory, and sophisticated information processing were all terms used in the early 1950s to describe the science of "thinking machines."


    The wide range of names reflects the wide range of intellectual approaches. 


    In 1955, John McCarthy, then a Dartmouth College Assistant Professor of Mathematics, wanted to form a group to clarify and develop ideas regarding thinking machines.



    • For the new field, he chose the moniker 'Artificial Intelligence.' He picked the term mainly to escape a concentration on narrow automata theory and on cybernetics, which was largely focused on analog feedback, and to avoid having either to accept the assertive Norbert Wiener as guru or to argue with him.
    • McCarthy approached the Rockefeller Foundation in early 1955 to seek money for a summer seminar at Dartmouth that would attract roughly 150 people.
    • In June, he and Claude Shannon, then at Bell Labs, met with Robert Morison, the Foundation's Director of Biological and Medical Research, to explore the concept and potential financing, but Morison was skeptical that money would be made available for such a bold initiative.



    McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon formally submitted the proposal in September. The term "artificial intelligence" dates from this proposal.


    According to the proposal, 


    • We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire.
    • The research will be based on the hypothesis that any part of learning, or any other characteristic of intelligence, can be characterized exactly enough for a computer to imitate it. 
    • An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.
    • We believe that if a properly chosen group of scientists worked on one or more of these topics together for a summer, considerable progress might be accomplished. 
    • Computers, natural language processing, neural networks, theory of computing, abstraction, and creativity are all discussed further in the proposal (these areas within the field of artificial intelligence are considered still relevant to the work of the field). 

    He remarked, "We'll focus on the difficulty of figuring out how to program a calculator to construct notions and generalizations. Of course, this is subject to change once the group meets." Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Herbert A. Simon, and Allen Newell were among the participants at the meeting, according to Stottler Henke Associates.

    The actual participants arrived at various times, most staying for far shorter periods.


    • Rochester was replaced for three weeks by Trenchard More, and MacKay and Holland were unable to attend—but the project was prepared to commence. 
    • Around June of that year, the first participants (perhaps simply Ray Solomonoff, maybe with Tom Etter) came to Dartmouth College in Hanover, New Hampshire, to join John McCarthy, who had already set up residence there. 
    • Ray and Marvin stayed at the professors' apartments, while most of the guests stayed at the Hanover Inn.




    List Of Dartmouth AI Conference Attendees:


    1. Ray Solomonoff
    2. Marvin Minsky
    3. John McCarthy
    4. Claude Shannon
    5. Trenchard More
    6. Nat Rochester
    7. Oliver Selfridge
    8. Julian Bigelow
    9. W. Ross Ashby
    10. W.S. McCulloch
    11. Abraham Robinson
    12. Tom Etter
    13. John Nash
    14. David Sayre
    15. Arthur Samuel
    16. Kenneth R. Shoulders
    17. Shoulders' friend
    18. Alex Bernstein
    19. Herbert Simon
    20. Allen Newell




    See also: 

    Cybernetics and AI; Macy Conferences; McCarthy, John; Minsky, Marvin; Newell, Allen; Simon, Herbert A.


    References & Further Reading:


    Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books.

    Gardner, Howard. 1985. The Mind’s New Science: A History of the Cognitive Revolution. New York: Basic Books.

    Kline, Ronald R. 2011. “Cybernetics, Automata Studies, and the Dartmouth Conference on Artificial Intelligence.” IEEE Annals of the History of Computing 33, no. 4 (October–December): 5–16.

    McCarthy, John. 1955. “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.” Rockefeller Foundation application, unpublished.

    McCarthy, John, Marvin Minsky, Nathaniel Rochester, and Claude E. Shannon. 1955. “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.” August 1955. http://raysolomonoff.com/dartmouth/boxa/dart564props.pdf

    McCarthy, John. 1956. List of Dartmouth project participants, September 1956. http://raysolomonoff.com/dartmouth/boxbdart/dart56ray812825who.pdf

    McCorduck, Pamela. 2004. Machines Who Think. 2nd ed. A. K. Peters.

    Moor, James. 2006. “The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years.” AI Magazine 27, no. 4 (Winter): 87–91.

    Nilsson, Nils J. 2010. The Quest for Artificial Intelligence. Cambridge: Cambridge University Press.

    Solomonoff, Ray J. 1956. Talk notes from the Dartmouth project. http://raysolomonoff.com/dartmouth/boxbdart/dart56ray622716talk710.pdf

    Solomonoff, Ray J. 1985. “The Time Scale of Artificial Intelligence: Reflections on Social Effects.” Human Systems Management 5: 149–53.

    Artificial Intelligence - What Is Immortality in the Digital Age?




    The act of putting a human's memories, knowledge, and/or personality into a long-lasting digital memory storage device or robot is known as digital immortality.

    Human intelligence is therefore displaced by artificial intelligence that resembles the mental pathways or imprint of the brain in certain respects.

    The National Academy of Engineering has identified reverse-engineering the brain as one of its Grand Challenges, with the aim of attaining substrate independence, that is, copying the thinking and feeling mind and reproducing it on a range of physical or virtual media.

    Whole Brain Emulation (also known as mind uploading) is a theoretical science that assumes the mind is a dynamic process independent of the physical biology of the brain and its unique sets or patterns of atoms.

    Instead, the mind is a collection of information-processing functions that can be computed.

    Whole Brain Emulation is presently assumed to be based on the neural networking discipline of computer science, which has as its own ambitious objective the programming of an operating system modeled after the human brain.

    Artificial neural networks (ANNs) are statistical models inspired by biological neural networks and used in artificial intelligence research.

    Through weighted connections, together with backpropagation and parameter adjustment governed by algorithms and rules, ANNs can process information in a nonlinear way.
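    The weighted connections, nonlinear processing, and backpropagation just described can be illustrated with a tiny network. This is a minimal sketch in Python; the layer sizes, random seed, learning rate, and XOR task are arbitrary illustrative choices, not anything prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Nonlinear activation applied at each layer
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

# Weighted connections: input -> 4 hidden units -> 1 output unit
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

initial_loss = float(np.mean((forward(X)[1] - y) ** 2))

lr = 1.0
for _ in range(5000):
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)   # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # error backpropagated to hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

final_loss = float(np.mean((forward(X)[1] - y) ** 2))
```

    Repeatedly propagating the error backward and adjusting the weights drives the loss down, which is the sense in which the network's behavior "emerges" from a simple update rule rather than from explicit programming.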

    Through his online "Mind Uploading Home Page," Joe Strout, a computational neurobiology enthusiast at the Salk Institute, facilitated debate of full brain emulation in the 1990s.

    Strout argued for the material origins of consciousness, claiming that evidence from damage to real people's brains points to its neuronal, connectionist, and chemical underpinnings.

    Strout shared timelines of previous and contemporary technical advancements as well as suggestions for future uploading techniques through his website.

    Mind uploading proponents believe that one of two methods will eventually be used: (1) gradual copy-and-transfer of neurons by scanning the brain and simulating its underlying information states, or (2) deliberate replacement of natural neurons with more durable artificial mechanical devices or manufactured biological products.

    Strout gathered information on a variety of theoretical ways for achieving the objective of mind uploading.

    One is a microtome method, in which a brain is sliced into tiny sections and scanned with a sophisticated electron microscope.

    The brain is then reconstructed on a synthetic substrate from the image data.

    Nanoreplacement involves injecting small devices into the brain to monitor the input and output of neurons.

    When these minuscule robots have a complete understanding of all biological interactions, they will eventually kill the neurons and replace them.

    A robot with billions of appendages that delve deep into every section of the brain, as envisioned by Carnegie Mellon University roboticist Hans Moravec, is used in a variation of this process.

    In this approach, the robot creates a virtual model of every portion and function of the brain, gradually replacing it.

    Everything that the physical brain used to be is eventually replaced by a simulation.

    In copy-and-transfer whole brain emulation, scanning or mapping the neurons is commonly considered destructive.

    The live brain is plasticized or frozen, divided into sections, then scanned and simulated on a computational medium.

    Philosophically, the technique creates a mental clone of a person, not the person who agrees to participate in the experiment.

    Only a duplicate of that individual's personal identity survives the duplicating experiment; the original person dies.

    Because, as philosopher John Locke reasoned, a person who remembers thinking about something in the past is the same person as the one who did the thinking, the copy may be regarded as the genuine person.

    Alternatively, it's possible that the experiment may turn the original and copy into completely different persons, or that they will soon diverge from one another through time and experience as a result of their lack of shared history beyond the experiment.

    There have been many nondestructive approaches proposed as alternatives to damaging the brain during the copy-and-transfer process.

    It is hypothesized that sophisticated types of gamma-ray holography, x-ray holography, magnetic resonance imaging (MRI), biphoton interferometry, or correlation mapping using probes might be used to reconstruct function.

    Electron microscope tomography, the present limit of available technology, has reached the sub-nanometer scale, producing 3D reconstructions of atomic-level detail.

    The majority of the remaining challenges are related to the geometry of tissue specimens and tomographic equipment's so-called tilt-range restrictions.

    Advanced kinds of image recognition, as well as neurocomputer manufacturing to recreate scans as information-processing components, are in the works.

    Professor of Electrical and Computer Engineering Alice Parker leads the BioRC Biomimetic Real-Time Cortex Project at the University of Southern California, which focuses on reverse-engineering the brain.

    With nanotechnology professor Chongwu Zhou and her students, Parker is now designing and fabricating memory and carbon nanotube brain nanocircuits for a future synthetic cortex based on statistical predictions.

    Her neuromorphic circuits are designed to mimic the complexities of human neural computations, including glial cell connections (these are nonneuronal cells that form myelin, control homeostasis, and protect and support neurons).

    Members of the BioRC Project are developing systems that scale to the size of human brains.

    Parker is attempting to include dendritic plasticity into these systems, which will allow them to adapt and expand as they learn.

    The approach has its roots in the work of Carver Mead, a Caltech electrical engineer who has been building electronic models of human neurological and biological components since the 1980s.

    The Terasem Movement, which began in 2002, aims to educate and urge the public to embrace technical advancements that advance the science of mind uploading and integrate science, religion, and philosophy.

    The Terasem Movement, the Terasem Movement Foundation, and the Terasem Movement Transreligion are all incorporated entities that operate together.

    Martine Rothblatt and Bina Aspen Rothblatt, serial entrepreneurs, founded the group.

    The Rothblatts are inspired by Earthseed, the fictional religion found in Octavia Butler's 1993 novel Parable of the Sower.

    "Life is intentional, death is voluntary, God is technology, and love is fundamental," according to Rothblatt's trans-religious ideas (Roy 2014).

    Terasem's CyBeRev (Cybernetic Beingness Revival) project collects all available data about a person's life—their personal history, recorded memories, photographs, and so on—and stores it in a separate data file in the hopes that their personality and consciousness can be pieced together and reanimated one day by advanced software.

    The Terasem Foundation-sponsored Lifenaut project stores mindfiles containing biographical information on individuals for free and keeps track of corresponding DNA samples (biofiles).

    Bina48, a social robot created by the foundation, demonstrates how a person's consciousness may one day be transplanted into a lifelike android.

    Numenta, an artificial intelligence firm based in Silicon Valley, is aiming to reverse-engineer the human neocortex.

    Jeff Hawkins (creator of the portable PalmPilot personal digital assistant), Donna Dubinsky, and Dileep George are the company's founders.

    Numenta's idea of the neocortex is based on Hawkins' and Sandra Blakeslee's theory of hierarchical temporal memory, which is outlined in their book On Intelligence (2004).

    Time-based learning algorithms, which can store and recall patterns in data as it changes over time, are at the heart of Numenta's emulation technology.

    Grok, a commercial tool that detects anomalies in computer servers, was created by the company.

    The company has proposed other applications as well, such as detecting anomalies in stock market trading or abnormalities in human behavior.
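The core idea behind this kind of streaming anomaly detection, learning what a metric normally looks like over time and flagging sharp departures, can be sketched with a simple rolling z-score detector. This is a deliberately simplified stand-in for illustration; Numenta's actual hierarchical temporal memory algorithms are far more sophisticated.

```python
from collections import deque
import statistics

class StreamingAnomalyDetector:
    """Flags values that deviate sharply from the recent history of a metric.

    A simple rolling z-score sketch of streaming anomaly detection,
    not Numenta's HTM algorithm.
    """

    def __init__(self, window=50, threshold=3.0):
        self.history = deque(maxlen=window)  # recent observations only
        self.threshold = threshold           # z-score cutoff for an anomaly

    def observe(self, value):
        anomalous = False
        if len(self.history) >= 10:          # wait for enough history
            mean = statistics.fmean(self.history)
            std = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(value - mean) / std > self.threshold
        self.history.append(value)
        return anomalous

detector = StreamingAnomalyDetector()
# A steady server metric with one sudden spike at the end.
readings = [10.0 + 0.1 * (i % 5) for i in range(40)] + [50.0]
flags = [detector.observe(r) for r in readings]
print(flags[-1])  # the spike is flagged
```

The detector adapts as the window slides, so a metric that drifts slowly is treated as normal while abrupt changes stand out, the same qualitative behavior Grok exploits for server monitoring.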

    Carboncopies is a non-profit that funds research and cooperation to capture and preserve unique configurations of neurons and synapses carrying human memories.

    Computational modeling, neuromorphic hardware, brain imaging, nanotechnology, and philosophy of mind are all areas where the organization supports research.

    Randal Koene, a computational neuroscientist educated at McGill University and head scientist at neuroprosthetic company Kernel, is the organization's creator.

    Dmitry Itskov, a Russian new media millionaire, provided early funding for Carboncopies.

    Itskov is also the founder of the 2045 Initiative, a non-profit organization dedicated to extreme life extension.

    The purpose of the 2045 Initiative is to develop high-tech methods for transferring personalities into an "advanced nonbiological carrier." Koene and Itskov organize Global Future 2045, a conference aimed at developing "a new evolutionary strategy for mankind."

    Proponents of digital immortality see a wide range of practical results as a result of their efforts.

    For example, in the case of death by accident or natural causes, a saved backup mind may be used to reawaken into a new body.

    (It's reasonable to assume that elderly brains would seek out new bodies long before aging becomes apparent.) This is also the premise of Arthur C. Clarke's science fiction novel The City and the Stars (1956), which influenced Koene's decision to pursue a career in science at the age of thirteen.

    Alternatively, mankind as a whole may be able to lessen the danger of global catastrophe by uploading their thoughts to virtual reality.

    Civilization might be saved on a high-tech hard drive buried deep in the planet's core, safe from hostile extraterrestrials and powerful natural gamma-ray bursts.

    Another possible benefit is life extension over lengthy periods of interstellar travel.

    For extended travels throughout space, artificial brains might be implanted into metal bodies.

    This is a notion that Clarke foreshadowed in the last pages of his science fiction classic Childhood's End (1953).

    It's also the answer offered by Manfred Clynes and Nathan Kline in their 1960 Astronautics article "Cyborgs and Space," which contains the first mention of astronauts whose physical capacities, thanks to mechanical assistance, extend beyond conventional limitations (zero gravity, space vacuum, cosmic radiation).

    Under true mind uploading, it may be possible simply to encode the human mind and transmit it as a signal to a nearby exoplanet considered the best candidate for the discovery of alien life.

    The hazards to humans are negligible in each situation when compared to the present threats to astronauts, which include exploding rockets, high-speed impacts with micrometeorites, and faulty suits and oxygen tanks.

    Another potential benefit of digital immortality is real restorative justice and rehabilitation through criminal mind retraining.

    Alternatively, mind uploading might allow penalties to be administered well beyond the normal life spans of those who have committed heinous crimes.

    Digital immortality has far-reaching social, philosophical, and legal ramifications.

    The concept of digital immortality has long been a hallmark of science fiction.

    Frederik Pohl's widely reprinted short story "The Tunnel Under the World" (1955) follows workers killed in a chemical plant explosion who are rebuilt as miniature robots and subjected to advertising campaigns and jingles over the course of a long, Truman Show-like repeating day.

    The Silicon Man (1991) by Charles Platt relates the tale of an FBI agent who finds a hidden operation named LifeScan.

    The project, headed by an old millionaire and a mutinous crew of government experts, has found a technique for transferring human thought patterns to a computer dubbed MAPHIS (Memory Array and Processors for Human Intelligence Storage).

    MAPHIS is capable of delivering any standard stimuli, including pseudomorphs, which are simulations of other persons.

    Greg Egan's hard science fiction novel Permutation City (1994) introduces the Autoverse, which simulates complex miniature biospheres and virtual worlds populated by artificial life forms.

    Egan refers to human consciousnesses scanned into the Autoverse as Copies.

    The story draws on the cellular automata of John Conway's Game of Life, on quantum ontology (the link between the quantum universe and human perceptions of reality), and on what Egan calls dust theory.
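Conway's Game of Life, the cellular automaton Egan drew on, has famously simple rules: a dead cell with exactly three live neighbors is born, and a live cell with two or three live neighbors survives. A minimal implementation, for illustration:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life on an unbounded grid.

    `live` is a set of (x, y) live cells; returns the next generation.
    """
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic "blinker" oscillates between a row and a column.
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker) == {(1, 0), (1, 1), (1, 2)})  # True
print(life_step(life_step(blinker)) == blinker)        # True
```

From these local rules arbitrarily complex global behavior emerges, which is exactly the intuition dust theory pushes to its extreme.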

    At the core of dust theory is the premise that physics and mathematics are the same, and that individuals residing in any mathematical, physical, and spacetime system (and all are possible) are essentially data, processes, and relationships.

    This claim is similar to MIT physicist Max Tegmark's Theory of Everything, which states that "all structures that exist mathematically exist also physically, by which we mean that in those complex enough to contain self-aware substructures (SASs), these SASs will subjectively perceive themselves as existing in a physically 'real' world" (Tegmark 1998, 1).

    Hans Moravec, a roboticist at Carnegie Mellon University, makes similar assertions in his article "Simulation, Consciousness, Existence" (1999).

    Tron (1982), Freejack (1992), and The 6th Day (2000) are examples of mind uploading and digital immortality in the movies.

    Kenneth D. Miller, a theoretical neuroscientist at Columbia University, is a notable skeptic.

    While rebuilding an active, functional mind may be achievable, connectomics researchers (those working on a wiring diagram of the whole brain and nervous system) remain, according to Miller, millennia away from finishing the job.

    And, he claims, connectomics addresses only the first layer of brain functions that must be understood before the complexity of the human brain can be replicated.

    Others have wondered what happens to personhood in situations where individuals are no longer constrained as physical organisms.

    Is identity just a series of connections between neurons in the brain? Is a body required for immortality? What will happen to markets and economic forces? Robin Hanson, a professor at George Mason University, offers an economic and social perspective on digital immortality in his nonfiction book The Age of Em: Work, Love, and Life When Robots Rule the Earth (2016).

    Hanson's hypothetical ems are scanned emulations of genuine humans who exist in both virtual reality environments and robot bodies.


    ~ Jai Krishna Ponnappan

    You may also want to read more about Artificial Intelligence here.



    See also: 


    Technological Singularity.


    Further Reading:


    Clynes, Manfred E., and Nathan S. Kline. 1960. “Cyborgs and Space.” Astronautics 14, no. 9 (September): 26–27, 74–76.

    Farnell, Ross. 2000. “Attempting Immortality: AI, A-Life, and the Posthuman in Greg Egan’s ‘Permutation City.’” Science Fiction Studies 27, no. 1: 69–91.

    Global Future 2045. http://gf2045.com/.

    Hanson, Robin. 2016. The Age of Em: Work, Love, and Life when Robots Rule the Earth. Oxford, UK: Oxford University Press.

    Miller, Kenneth D. 2015. “Will You Ever Be Able to Upload Your Brain?” New York Times, October 10, 2015. https://www.nytimes.com/2015/10/11/opinion/sunday/will-you-ever-be-able-to-upload-your-brain.html.

    Moravec, Hans. 1999. “Simulation, Consciousness, Existence.” Intercommunication 28 (Spring): 98–112.

    Roy, Jessica. 2014. “The Rapture of the Nerds.” Time, April 17, 2014. https://time.com/66536/terasem-trascendence-religion-technology/.

    Tegmark, Max. 1998. “Is ‘the Theory of Everything’ Merely the Ultimate Ensemble Theory?” Annals of Physics 270, no. 1 (November): 1–51.

    2045 Initiative. http://2045.com/.

