
Artificial Intelligence - Who Is Ben Goertzel (1966–)?


Ben Goertzel is the founder and CEO of SingularityNET, a blockchain AI company. He is also chairman of Novamente LLC; a research professor at Xiamen University's Fujian Key Lab for Brain-Like Intelligent Systems; chief scientist of Mozi Health and of Hanson Robotics in Shenzhen, China; and chair of the OpenCog Foundation, of Humanity+, and of the Artificial General Intelligence Society conference series.

Goertzel has long wanted to create a beneficial artificial general intelligence and to apply it in bioinformatics, finance, gaming, and robotics.

He claims that, despite AI's current popularity, it is already superior to human specialists in a number of domains.

Goertzel divides AI advancement into three stages, each of which represents a step toward a global brain (Goertzel 2002, 2); among them are the intelligent Internet and the full-fledged Singularity.

In 2019, Goertzel presented a lecture titled "Decentralized AI: The Power and the Necessity" at TEDxBerkeley.

In this talk, he examines artificial intelligence in its present form as well as its future.

He emphasizes "the strength of decentralized AI" and "the relevance of decentralized control in leading AI to the next stages" (Goertzel 2019a).

In the evolution of artificial intelligence, Goertzel distinguishes three types: artificial narrow intelligence, artificial general intelligence, and artificial superintelligence.

Artificial narrow intelligence refers to machines that can "address extremely specific issues... better than humans" (Goertzel 2019a).

In certain restricted activities, such as chess and Go, this kind of AI has outperformed humans.

Ray Kurzweil, an American futurist and inventor, coined the phrase "narrow AI." Artificial general intelligence (AGI) refers to intelligent computers that can "generate knowledge" in a variety of fields and have "humanlike autonomy." According to Goertzel, this kind of AI will reach the same level of intellect as humans by 2029.

Artificial superintelligence (ASI) is based on both narrow and general AI, but it can also reprogram itself.



By 2045, he claims, this kind of AI will be smarter than the finest human brains in terms of "scientific innovation, general knowledge, and social abilities" (Goertzel 2019a).

According to Goertzel, Facebook, Google, and a number of colleges and companies are all actively working on AGI.

According to Goertzel, the shift from AI to AGI will occur within the next five to thirty years.

Goertzel is also interested in artificial intelligence-assisted life extension.

He thinks that artificial intelligence's exponential advancement will lead to technologies that extend human life span and health indefinitely.

He predicts that by 2045, a singularity featuring a drastic increase in "human health span" will have occurred (Goertzel 2012).

Vernor Vinge popularized the term "singularity" in his 1993 article "The Coming Technological Singularity," and Ray Kurzweil brought it to a mass audience in his 2005 book The Singularity Is Near.

The Technological Singularity, according to both writers, is the merging of machine and human intellect as a result of rapid developments in new technologies, particularly robotics and AI.

The thought of an impending singularity excites Goertzel.

His major current initiative, SingularityNET, entails the construction of a worldwide network of artificial intelligence researchers interested in developing, sharing, and monetizing AI technology, software, and services.

By developing a decentralized protocol that enables a full-stack AI solution, Goertzel has made a significant contribution to this endeavor.

SingularityNET, as a decentralized marketplace, provides a variety of AI technologies, including text generation, AI Opinion, iAnswer, Emotion Recognition, Market Trends, OpenCog Pattern Miner, and its own cryptocurrency, the AGI token.
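
To make the marketplace idea concrete, the following is a minimal sketch of how a decentralized registry of paid AI services might look. Every name here (ServiceRegistry, Wallet, the token accounting) is a hypothetical illustration, not SingularityNET's actual protocol or SDK, which is built on blockchain smart contracts.

```python
# Illustrative sketch of a decentralized AI-service marketplace in the
# spirit of SingularityNET. All names are hypothetical stand-ins.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class Service:
    name: str
    price: int                     # price per call, in token units
    handler: Callable[[str], str]  # the AI capability being offered


@dataclass
class Wallet:
    balance: int  # AGI-style token balance

    def pay(self, amount: int) -> None:
        if amount > self.balance:
            raise ValueError("insufficient token balance")
        self.balance -= amount


@dataclass
class ServiceRegistry:
    services: Dict[str, Service] = field(default_factory=dict)

    def publish(self, service: Service) -> None:
        self.services[service.name] = service

    def call(self, name: str, payload: str, wallet: Wallet) -> str:
        service = self.services[name]
        wallet.pay(service.price)  # settle payment before invoking
        return service.handler(payload)


registry = ServiceRegistry()
registry.publish(Service("sentiment", price=2,
                         handler=lambda t: "positive" if "good" in t else "neutral"))
print(registry.call("sentiment", "a good pizza", Wallet(balance=10)))  # -> positive
```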

SingularityNET is presently cooperating with Domino's Pizza in Malaysia and Singapore (Khan 2019).



Domino's is interested in leveraging SingularityNET technologies to design a marketing plan, with the goal of providing the finest products and services to its consumers via the use of unique algorithms.

Domino's thinks that by incorporating the AGI ecosystem into its operations, it will be able to provide value and service in the food delivery market.

Goertzel has responded to physicist Stephen Hawking's warning that AI might lead to the extinction of human civilization.

Given the current situation, an artificial superintelligence's mind will be shaped by earlier generations of AI, and thus "selling, spying, murdering, and gambling are the key aims and values in the mind of the first super intelligence," according to Goertzel (Goertzel 2019b).

He acknowledges that if humans desire compassionate AI, they must first improve their own treatment of one another.

For four years, Goertzel worked for Hanson Robotics in Hong Kong.

There he worked on three well-known robots: Sophia, Einstein, and Han.

"Great platforms for experimenting with AI algorithms, including cognitive architectures like OpenCog that aim at human-level AI," he added of the robots (Goertzel 2018).

Goertzel argues that essential human values may be retained for future generations in Sophia-like robot creatures after the Technological Singularity.

Decentralized networks like SingularityNET and OpenCog, according to Goertzel, provide "AIs with human-like values," reducing AI hazards to humanity (Goertzel 2018).

Because human values are complex in nature, Goertzel believes that encoding them as a list of rules is ineffective.

Instead, Goertzel proposes two approaches: brain-computer interfacing (BCI) and emotional interfacing.

In the first, humans become "cyborgs," their brains physically linked to computational-intelligence modules, and the machine components of the cyborgs read the moral-value-evaluation structures of the human mind directly from the biological components (Goertzel 2018).

Goertzel uses Elon Musk's Neuralink as an example.

Because it entails invasive trials with human brains and a lot of unknowns, Goertzel doubts that this strategy will succeed.

"Emotional and spiritual connections between people and AIs, rather than Ethernet cables or Wifi signals, are used to link human and AI brains," according to the second method (Goertzel 2018).

To practice human values, he proposes that AIs engage in emotional and social connection with humans via facial expression detection and mirroring, eye contact, and voice-based emotion recognition, along the lines sketched below.
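
A rough sketch of such an emotional-interfacing loop appears below. The detection and mirroring functions are placeholders for real perception models and robot APIs; nothing here reflects the actual Loving AI codebase.

```python
# Toy sketch of an emotional-interfacing loop: detect a human's emotional
# state, mirror it, and keep the exchange as social training data. The
# detector and robot interfaces below are stand-ins, not real APIs.
import random

EMOTIONS = ["joy", "sadness", "surprise", "neutral"]


def detect_emotion(video_frame, audio_clip) -> str:
    """Stand-in for facial-expression and voice-based emotion recognition."""
    return random.choice(EMOTIONS)


def mirror(emotion: str) -> str:
    """Stand-in for commanding a robot face to mirror an expression."""
    return f"robot mirrors {emotion}"


interaction_log = []
for _ in range(3):
    observed = detect_emotion(video_frame=None, audio_clip=None)
    response = mirror(observed)
    # Each (observed emotion, response) pair becomes material from which
    # the AI can learn human emotional norms.
    interaction_log.append((observed, response))

print(interaction_log)
```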

To that end, Goertzel collaborated with SingularityNET, Hanson AI, and Lia Inc on the "Loving AI" research project, which aims to help artificial intelligences converse and form intimate connections with humans.

A funny video of actor Will Smith on a date with Sophia the Robot is presently available on the Loving AI website.

Sophia can already make sixty facial expressions and understand human language and emotions, according to the video of the date.

When linked to a network like SingularityNET, humanoid robots like Sophia obtain "ethical insights and breakthroughs... via language," according to Goertzel (Goertzel 2018).

Then, through a shared internet "mindcloud," robots and AIs may share what they've learnt.

Goertzel also chairs the Artificial General Intelligence Society's Conference Series on Artificial General Intelligence, held annually since 2008.

The society also publishes the Journal of Artificial General Intelligence, a peer-reviewed, open-access academic journal, and Goertzel edits the conference proceedings series.


~ Jai Krishna Ponnappan




See also: 

General and Narrow AI; Superintelligence; Technological Singularity.


Further Reading:


Goertzel, Ben. 2002. Creating Internet Intelligence: Wild Computing, Distributed Digital Consciousness, and the Emerging Global Brain. New York: Springer.

Goertzel, Ben. 2012. “Radically Expanding the Human Health Span.” TEDxHKUST. https://www.youtube.com/watch?v=IMUbRPvcB54.

Goertzel, Ben. 2017. “Sophia and SingularityNET: Q&A.” H+ Magazine, November 5, 2017. https://hplusmagazine.com/2017/11/05/sophia-singularitynet-qa/.

Goertzel, Ben. 2018. “Emotionally Savvy Robots: Key to a Human-Friendly Singularity.” https://www.hansonrobotics.com/emotionally-savvy-robots-key-to-a-human-friendly-singularity/.

Goertzel, Ben. 2019a. “Decentralized AI: The Power and the Necessity.” TEDxBerkeley, March 9, 2019. https://www.youtube.com/watch?v=r4manxX5U-0.

Goertzel, Ben. 2019b. “Will Artificial Intelligence Kill Us?” July 31, 2019. https://www.youtube.com/watch?v=TDClKEORtko.

Goertzel, Ben, and Stephan Vladimir Bugaj. 2006. The Path to Posthumanity: 21st Century Technology and Its Radical Implications for Mind, Society, and Reality. Bethesda, MD: Academica Press.

Khan, Arif. 2019. “SingularityNET and Domino’s Pizza Announce a Strategic Partnership.” https://blog.singularitynet.io/singularitynet-and-dominos-pizza-announce-a-strategic-partnership-cbbe21f80fc7.

Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace, 11–22. Cleveland, OH: NASA Lewis Research Center.





Artificial Intelligence - Who Is Steve Omohundro?

 




In the field of artificial intelligence, Steve Omohundro (1959–) is a well-known scientist, author, and entrepreneur.

He is the founder of Self-Aware Systems, the chief scientist of AIBrain, and an adviser to the Machine Intelligence Research Institute (MIRI).

Omohundro is well-known for his insightful, speculative studies on the societal ramifications of AI and the safety of smarter-than-human computers.

Omohundro believes that a fully predictive artificial intelligence science is required.

He thinks that if goal-driven artificial general intelligences are not carefully designed in the future, they are likely to act in harmful ways, cause conflicts, or even bring about human extinction.

Indeed, Omohundro argues that AIs with inadequate programming might act psychopathically.

He claims that programmers often create flaky software and programs that "manipulate bits" without knowing why.

Omohundro wants AGIs to be able to monitor and comprehend their own operations, spot flaws, and rewrite themselves to improve performance.

This is what genuine machine learning looks like.

The risk is that AIs may evolve into something humans can no longer comprehend, make inscrutable judgments, or produce unexpected repercussions.

As a result, Omohundro contends, artificial intelligence must evolve into a discipline that is more predictive and anticipatory.

In "The Nature of Self-Improving Artificial Intelligence," one of his widely available online papers, Omohundro also suggests that a future self-aware system with access to the internet will be influenced by the scientific papers it reads, which recursively justifies writing the paper in the first place.

AGI agents must be programmed with value sets that drive them to pick objectives that benefit mankind as they evolve.

Self-improving systems like the ones Omohundro is working on don't exist yet.

Inventive minds, according to Omohundro, have so far produced only inert systems (chairs and coffee mugs), reactive systems (mousetraps and thermostats), adaptive systems (advanced speech recognition systems and intelligent virtual assistants), and deliberative systems (the Deep Blue chess-playing computer).

Self-improving systems, as described by Omohundro, would have to actively think and make judgments in the face of uncertainty regarding the effects of self-modification.

The essential nature of self-improving AIs, according to Omohundro, may be understood through the notion of rational agents, which he draws from microeconomic theory.

Because humans are only imperfectly rational, the discipline of behavioral economics has exploded in popularity in recent decades.

AI agents, on the other hand, must eventually establish logical objectives and preferences ("utility functions") that, thanks to their self-improving cognitive architectures, sharpen their beliefs about their surroundings.

These beliefs will then assist them in forming new aims and preferences.
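
A minimal sketch of such a rational agent follows, with invented actions, outcomes, and numbers: the agent combines beliefs (probabilities over outcomes) with a utility function and picks the action that maximizes expected utility.

```python
# Minimal sketch of a rational agent in the microeconomic sense: beliefs
# are probabilities over outcomes, preferences are a utility function,
# and the agent picks the action with the highest expected utility.
# Actions, outcomes, and numbers are invented for illustration.

beliefs = {  # P(outcome | action)
    "explore":  {"find_resources": 0.30, "waste_energy": 0.70},
    "conserve": {"find_resources": 0.05, "waste_energy": 0.95},
}
utility = {"find_resources": 10.0, "waste_energy": -1.0}


def expected_utility(action: str) -> float:
    return sum(p * utility[outcome] for outcome, p in beliefs[action].items())


best = max(beliefs, key=expected_utility)
print(best, expected_utility(best))
# -> explore 2.3 (up to float rounding): 0.30 * 10.0 + 0.70 * (-1.0)
```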

Omohundro draws on mathematician John von Neumann and economist Oskar Morgenstern's contributions to the expected utility hypothesis.

Completeness, transitivity, continuity, and independence are the axioms of rational behavior proposed by von Neumann and Morgenstern.
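
In outline, and in standard notation rather than a quotation from either author, the von Neumann-Morgenstern result says that any preference ordering over risky prospects satisfying the four axioms can be represented by an expected utility function:

```latex
% The four vNM axioms on a preference relation \succeq over lotteries,
% and the representation they guarantee (requires amsmath/amssymb).
\begin{align*}
&\text{Completeness:}   && A \succeq B \ \text{or} \ B \succeq A \\
&\text{Transitivity:}   && A \succeq B \ \text{and} \ B \succeq C \implies A \succeq C \\
&\text{Continuity:}     && A \succeq B \succeq C \implies \exists\, p \in [0,1] : pA + (1-p)C \sim B \\
&\text{Independence:}   && A \succeq B \implies pA + (1-p)C \succeq pB + (1-p)C \\[4pt]
&\text{Representation:} && \exists\, u : \ A \succeq B \iff \mathbb{E}[u(A)] \ge \mathbb{E}[u(B)]
\end{align*}
```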

For artificial intelligences, Omohundro proposes four "fundamental drives": efficiency, self-preservation, resource acquisition, and creativity.

These motivations are expressed as "behaviors" by future AGIs with self-improving, rational agency.

Both physical and computational operations are included in the efficiency drive.

Artificial intelligences will strive to make effective use of limited resources such as space, mass, energy, processing time, and computer power.

The self-preservation drive will lead powerful artificial intelligences to avoid losing resources to other agents and to enhance goal fulfillment.

A passively behaving artificial intelligence is unlikely to survive.

The resource acquisition drive involves locating new sources of resources, trading for them, cooperating with other agents, or even stealing what is required to reach the end objective.

The creative drive encompasses all of the innovative ways in which an AGI may boost expected utility in order to achieve its many objectives.

This motivation might include the development of innovative methods for obtaining and exploiting resources.

Signaling, according to Omohundro, is a singularly human source of creative energy, variation, and divergence.

Humans utilize signaling to express their intentions regarding other helpful tasks they are doing.

If A is more likely to be true when B is true than when B is false, then A signals B.
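
In probabilistic terms (a standard reformulation, not Omohundro's own notation), the condition can be written as:

```latex
% "A signals B": A is more likely when B holds than when it does not,
% which (for nondegenerate probabilities) is equivalent to saying that
% observing A raises the probability of B.
P(A \mid B) > P(A \mid \neg B)
\quad\Longleftrightarrow\quad
P(B \mid A) > P(B)
```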

Employers, for example, are more likely to hire potential workers who are enrolled in a class that appears to teach skills the company desires, even if it does not actually do so.

The fact that the potential employee is enrolled in class indicates to the company that he or she is more likely to learn useful skills than the candidate who is not.

Similarly, a billionaire does not need to gift another billionaire a billion dollars to indicate that they are among the super-wealthy.

A huge bag containing several million dollars could suffice.

Omohundro's notion of fundamental AI drives was incorporated into Oxford philosopher Nick Bostrom's instrumental convergence thesis, which claims that a few instrumental values are pursued in order to accomplish an ultimate objective, often referred to as a terminal value.

Self-preservation, goal content integrity (retention of preferences over time), cognitive improvement, technical perfection, and resource acquisition are among Bostrom's instrumental values (he prefers not to call them drives).

Future AIs might have a reward function or a terminal value of optimizing some utility function.

Omohundro wants designers to construct artificial general intelligence with kindness toward people as its ultimate objective.

He believes, however, that military conflicts and economic pressures make the development of destructive artificial general intelligence more plausible.

Drones are increasingly being used by military forces to deliver explosives and conduct surveillance.

He also claims that future battles will almost certainly be informational in nature.

In a future where cyberwar is a possibility, a cyberwar infrastructure will be required.

Energy encryption, a unique wireless power transmission method that scrambles energy so that it stays safe and cannot be exploited by rogue devices, is one way to counter the issue.

Another area where information conflict is producing instability is the employment of artificial intelligence in fragile financial markets.

Digital cryptocurrencies and crowdsourcing marketplace systems like Mechanical Turk are ushering in a new era of autonomous capitalism, according to Omohundro, and we are unable to deal with the repercussions.

As president of the company Possibility Research, advocate of a new cryptocurrency called Pebble, and advisory board member of the Institute for Blockchain Studies, Omohundro has spoken about the need for complete digital provenance in economic and cultural recordkeeping, to prevent AI deception, fakery, and fraud from overtaking human society.

In order to build a verifiable "blockchain civilization based on truth," he suggests that digital provenance methods and sophisticated cryptography techniques monitor autonomous technology and better check the history and structure of any alterations being performed.
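
The core mechanism behind such provenance schemes is a hash chain, in which each record cryptographically commits to its predecessor, so any later alteration is detectable. Below is a minimal sketch; a deployed system would add digital signatures, consensus, and distributed storage, none of which is shown here.

```python
# Minimal sketch of hash-chained provenance records: each record commits
# to the previous one, so editing any earlier record breaks the chain.
import hashlib
import json


def make_record(data: dict, prev_hash: str) -> dict:
    record = {"data": data, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    record["hash"] = digest
    return record


def verify(chain: list) -> bool:
    for i, record in enumerate(chain):
        body = {"data": record["data"], "prev_hash": record["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["hash"] != expected:
            return False          # record contents were altered
        if i > 0 and record["prev_hash"] != chain[i - 1]["hash"]:
            return False          # link to the previous record is broken
    return True


chain = [make_record({"event": "model trained"}, prev_hash="0" * 64)]
chain.append(make_record({"event": "model deployed"}, chain[-1]["hash"]))
print(verify(chain))                    # True
chain[0]["data"]["event"] = "tampered"  # any edit invalidates the chain
print(verify(chain))                    # False
```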

Possibility Research focuses on smart technologies that enhance computer programming, decision-making systems, simulations, contracts, robotics, and governance.

Omohundro has advocated for the creation of so-called Safe AI scaffolding solutions to counter dangers in recent years.

The objective is to create self-contained systems that already have temporary scaffolding or staging in place.

The scaffolding assists programmers who are assisting in the development of a new artificial general intelligence.

The virtual scaffolding may be removed after the AI has been completed and evaluated for stability.

The initial generation of restricted safe systems created in this manner might be used to develop and test less constrained AI agents in the future.

Advanced scaffolding would include utility functions aligned with agreed-upon human philosophical imperatives, human values, and democratic principles.

Self-improving AIs may eventually have the Universal Declaration of Human Rights or a Universal Constitution inscribed into their fundamental fabric, guiding their growth, development, choices, and contributions to mankind.

Omohundro earned degrees in mathematics and physics from Stanford University and a PhD in physics from the University of California, Berkeley.

In 1985, he co-created StarLisp, a high-level programming language for the Thinking Machines Corporation's Connection Machine, a massively parallel supercomputer then under construction.

He wrote Geometric Perturbation Theory in Physics (1986), a book on differential and symplectic geometry.

He was an associate professor of computer science at the University of Illinois in Urbana-Champaign from 1986 to 1988.

He cofounded the Center for Complex Systems Research with Stephen Wolfram and Norman Packard.

He also oversaw the university's Vision and Learning Group.

He created the 3D graphics system for Mathematica, the symbolic mathematical computation application.

In 1990, he led an international team at the University of California, Berkeley's International Computer Science Institute (ICSI) to develop Sather, an object-oriented, functional programming language.

Automated lip-reading, machine vision, machine learning algorithms, and other digital technologies have all benefited from his work.



~ Jai Krishna Ponnappan






See also: 


General and Narrow AI; Superintelligence.



References & Further Reading:



Bostrom, Nick. 2012. “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents.” Minds and Machines 22, no. 2: 71–85.

Omohundro, Stephen M. 2008a. “The Basic AI Drives.” In Proceedings of the 2008 Conference on Artificial General Intelligence, 483–92. Amsterdam: IOS Press.

Omohundro, Stephen M. 2008b. “The Nature of Self-Improving Artificial Intelligence.” https://pdfs.semanticscholar.org/4618/cbdfd7dada7f61b706e4397d4e5952b5c9a0.pdf.

Omohundro, Stephen M. 2012. “The Future of Computing: Meaning and Values.” https://selfawaresystems.com/2012/01/29/the-future-of-computing-meaning-and-values.

Omohundro, Stephen M. 2013. “Rational Artificial Intelligence for the Greater Good.” In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart, 161–79. Berlin: Springer.

Omohundro, Stephen M. 2014. “Autonomous Technology and the Greater Human Good.” Journal of Experimental and Theoretical Artificial Intelligence 26, no. 3: 303–15.

Shulman, Carl. 2010. Omohundro’s ‘Basic AI Drives’ and Catastrophic Risks. Berkeley, CA: Machine Intelligence Research Institute.




Artificial Intelligence - What Is Artificial Intelligence, Alchemy, And Associationism?

 



Alchemy and Artificial Intelligence, a RAND Corporation paper prepared by Massachusetts Institute of Technology (MIT) philosopher Hubert Dreyfus and released as a mimeographed memo in 1965, critiqued artificial intelligence researchers' aims and essential assumptions.

The paper, which was written when Dreyfus was consulting for RAND, elicited a significant negative response from the AI community.

Dreyfus had been engaged by RAND, a nonprofit American global policy think tank, to analyze the possibilities for artificial intelligence research from a philosophical standpoint.

Researchers such as Herbert Simon and Marvin Minsky made optimistic forecasts for the future of AI, predicting in the late 1950s that machines capable of accomplishing whatever humans could do would exist within decades.

The objective for most AI researchers was not merely to develop programs that processed data in such a manner that the output appeared to be the result of intelligent activity.

Rather, they wanted to create software that could mimic human cognitive processes.

Experts in artificial intelligence felt that human cognitive processes might be used as a model for their algorithms, and that AI could also provide insight into human psychology.

The work of phenomenologists Maurice Merleau-Ponty, Martin Heidegger, and Jean-Paul Sartre impacted Dreyfus' thought.

Dreyfus contended in his report that the theory and aims of AI were founded on associationism, a philosophy of human psychology that includes a core concept: that thinking happens in a succession of basic, predictable stages.

Artificial intelligence researchers believed they could use computers to duplicate human cognitive processes because of their belief in associationism (which Dreyfus claimed was erroneous).

Dreyfus compared the characteristics of human thinking (as he saw them) to computer information processing and the inner workings of various AI systems.

The core of his thesis was that human and machine information processing processes are fundamentally different.

Computers can only be programmed to handle "unambiguous, totally organized information," rendering them incapable of managing "ill-structured material of everyday life," and hence of intelligence (Dreyfus 1965, 66).

Dreyfus contended, on the other hand, that many characteristics of human intelligence cannot be captured by rules or associationist psychology, contrary to AI research's primary premise.

Dreyfus outlined three areas where humans vary from computers in terms of information processing: fringe consciousness, insight, and ambiguity tolerance.

Chess players, for example, use fringe consciousness to decide which area of the board or which pieces to concentrate on while making a move.

The human player differs from a chess-playing program in that the human does not consciously or subconsciously examine the information or count out probable moves the way the computer does.

Only after the player has used fringe consciousness to choose which pieces to concentrate on can they consciously calculate the implications of prospective moves in a manner akin to computer processing.

The (human) problem-solver may build a set of steps for tackling a complicated issue by understanding its fundamental structure.

This understanding is lacking in problem-solving software.

Rather, the problem-solving method must be established in advance as part of the program.

The clearest example of ambiguity tolerance is natural language comprehension, where a word or phrase may have an ambiguous meaning yet be accurately understood by the listener.

When reading ambiguous syntax or semantics, there is an endless number of cues to examine, yet the human processor manages to select the important information from this limitless domain and accurately understand the meaning.

On the other hand, a computer cannot be trained to search through all conceivable facts in order to decipher confusing syntax or semantics.

Either the number of facts is too large, or the criteria for interpretation are too complex.

AI experts chastised Dreyfus for oversimplifying the difficulties and misrepresenting computers' capabilities.

RAND commissioned MIT computer scientist Seymour Papert to respond to the report, which he published in 1968 as The Artificial Intelligence of Hubert L. Dreyfus: A Budget of Fallacies.

Papert also set up a chess match between Dreyfus and Mac Hack, which Dreyfus lost, much to the amusement of the artificial intelligence community.

Nonetheless, some of his criticisms in this report and subsequent books appear to have foreshadowed intractable issues later acknowledged by AI researchers, such as artificial general intelligence (AGI), artificial simulation of analog neurons, and the limitations of symbolic artificial intelligence as a model of human reasoning.

Dreyfus' work was declared useless by artificial intelligence specialists, who stated that he misinterpreted their research.

Their ire had been aroused by Dreyfus's critiques of AI, which often used aggressive terminology.

The New Yorker magazine's "Talk of the Town" section included extracts from the report.

Dreyfus subsequently refined and enlarged his case in What Computers Can't Do: The Limits of Artificial Intelligence, published in 1972.


~ Jai Krishna Ponnappan



See also: Mac Hack; Simon, Herbert A.; Minsky, Marvin.

Further Reading:

Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books.

Dreyfus, Hubert L. 1965. Alchemy and Artificial Intelligence. P-3244. Santa Monica, CA: RAND Corporation.

Dreyfus, Hubert L. 1972. What Computers Can’t Do: The Limits of Artificial Intelligence. New York: Harper and Row.

McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. San Francisco: W. H. Freeman.

Papert, Seymour. 1968. The Artificial Intelligence of Hubert L. Dreyfus: A Budget of Fallacies. Project MAC, Memo No. 154. Cambridge, MA: Massachusetts Institute of Technology.


Artificial Intelligence - Who Is Hugo de Garis?

 


Hugo de Garis (1947–) is an expert in genetic algorithms, artificial intelligence, and topological quantum computing.

He is the creator of the concept of evolvable hardware, which uses evolutionary algorithms to produce customized electronics that can alter structural design and performance dynamically and autonomously in response to their surroundings.

De Garis is best known for his 2005 book The Artilect War, in which he describes what he thinks will be an unavoidable twenty-first-century worldwide war between mankind and ultraintelligent robots.

In the 1980s, de Garis became fascinated by genetic algorithms, neural networks, and the idea of artificial brains.

In artificial intelligence, genetic algorithms use software to model and apply Darwinian evolutionary ideas to search and optimization problems, as the sketch below illustrates.
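
The sketch below shows a toy genetic algorithm maximizing the number of 1s in a bit string. De Garis's systems evolved neural-network structures rather than bit strings, but the loop of fitness evaluation, selection, crossover, and mutation is the same.

```python
# Toy genetic algorithm: evaluate fitness, select the fittest, recombine,
# and mutate. The fitness function here is a stand-in for "how well the
# evolved network performs."
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 40, 60, 0.02


def fitness(genome):
    return sum(genome)  # count of 1s; higher is fitter


def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]


def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]


population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]  # truncation selection
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(max(fitness(g) for g in population))  # approaches GENOME_LEN
```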

The "fittest" candidate simulations of axons, dendrites, signals, and synapses in artificial neural networks were evolved using evolutionary algorithms developed by de Garis.

De Garis developed artificial neural systems that resembled those seen in organic brains.

In the 1990s, his work with a new type of programmable computer chips spawned the subject of computer science known as evolvable hardware.

The use of programmable circuits allowed neural networks to grow and evolve at high rates.

De Garis also started playing around with cellular automata, which are mathematical models of complex systems that emerge generatively from basic units and rules.

An early version of his brain-like cellular automata required the coding of around 11,000 fundamental rules.

About 60,000 such rules were encoded in a subsequent version.
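
For a sense of the kind of object involved, the sketch below runs a one-dimensional cellular automaton (Wolfram's elementary Rule 110), in which every cell updates from a small table of local rules. De Garis's brain-like automata used vastly larger, hand-coded rule sets, but the generative principle is the same.

```python
# One-dimensional cellular automaton: every cell updates from its local
# neighborhood via a fixed rule table. Rule 110 stands in here for the
# far larger rule sets described above.
RULE = 110
# Map each (left, center, right) neighborhood to the next cell state.
rule_table = {tuple(int(b) for b in f"{i:03b}"): (RULE >> i) & 1
              for i in range(8)}

cells = [0] * 31 + [1] + [0] * 31  # start with a single live cell
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    padded = [0] + cells + [0]     # fixed dead boundary
    cells = [rule_table[(padded[i - 1], padded[i], padded[i + 1])]
             for i in range(1, len(padded) - 1)]
```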

In the 2000s, de Garis called his neural networks-on-a-chip a Cellular Automata Machine.

De Garis started to hypothesize that the period of "Brain Building on the Cheap" had come as the price of chips dropped (de Garis 2005, 45).

He started referring to himself as the "Father of Artificial Brains." He claims that in the coming decades, whole artificial brains with billions of neurons will be built using information acquired from molecular-scale robot probes of human brain tissue and the advent of new pathbreaking brain imaging tools.

Topological quantum computing is another enabling technology that de Garis thinks will accelerate the creation of artificial brains.

He claims that once the physical boundaries of standard silicon chip manufacturing are approached, quantum mechanical phenomena must be harnessed.

Inventions in reversible heatless computing will also be significant in dissipating the harmful temperature effects of tightly packed circuits.

De Garis also supports the development of artificial embryology, often known as "embryofacture," which involves the use of evolutionary engineering and self-assembly methods to mimic the development of fully aware beings from single fertilized eggs.

According to de Garis, rapid breakthroughs in artificial intelligence technology will make a conflict over our last invention unavoidable before the end of the twenty-first century.

He thinks the battle will end with a catastrophic human extinction event he refers to as "gigadeath." In The Artilect War, de Garis speculates that the continued Moore's Law doubling of transistors packed on computer chips, together with the development of new technologies such as femtotechnology (the achievement of femtometer-scale structuring of matter), quantum computing, and neuroengineering, will almost certainly lead to gigadeath.

De Garis felt compelled to create The Artilect War as a cautionary tale and as a self-admitted architect of the impending calamity.

De Garis frames his discussion of the impending Artilect War around two antagonistic worldwide political factions: the Cosmists and the Terrans.

The Cosmists will be apprehensive of the immense power of future superintelligent machines, but they will regard their labor in creating them with such veneration that they will experience a near-messianic enthusiasm in inventing and unleashing them into the world.

Regardless of the hazards to mankind, the Cosmists will strongly encourage the development and nurturing of ever-more sophisticated and powerful artificial minds.

The Terrans, on the other hand, will fight against the creation of artificial minds once they realize they represent a danger to human civilization.

They will feel compelled to fight these artificial intelligences because they constitute an existential danger to humanity.

De Garis dismisses a Cyborgian compromise in which humans and their technological creations blend.

He thinks that robots will grow so powerful and intelligent that only a small percentage of humanity would survive the confrontation.

China and the United States, geopolitical adversaries, will be forced to exploit these technologies to develop more complex and autonomous economies, defense systems, and military robots.

The Cosmists will welcome artificial intelligence's dominance in the world and will come to see the machines as near-gods deserving of worship.

The Terrans, on the other hand, will fight the transfer of global economic, social, and military dominance to our machine overlords.

They will see the new situation as a terrible tragedy that has befallen humanity.

His case for a future battle over superintelligent robots has sparked a lot of discussion and controversy among scientific and engineering specialists, as well as a lot of criticism in popular science journals.

In his 2005 book, de Garis implicates himself as a cause of the approaching conflict and as a hidden Cosmist, prompting some opponents to question his intentions.

De Garis has answered that he feels compelled to issue a warning now because he thinks there will be enough time for the public to understand the full magnitude of the danger and react when they begin to discover substantial intelligence hidden in household equipment.

De Garis presents a variety of possible eventualities.

First, he suggests that the Terrans may be able to defeat Cosmist thinking before a superintelligence takes control, though this is unlikely.

De Garis suggests a second scenario in which artilects quit the earth as irrelevant, leaving human civilization more or less intact.

In a third possibility, the Cosmists grow so terrified of their own innovations that they abandon them.

Again, de Garis believes this is improbable.

In a fourth possibility, he imagines that all Terrans will transform into cyborgs.

In a fifth scenario, the Terrans will aggressively hunt down and kill the Cosmists, perhaps even in outer space.

The Cosmists will leave Earth, construct artilects, and ultimately vanish from the solar system to conquer the cosmos in a sixth scenario.

In a seventh possibility, the Cosmists will flee to space and construct artilects that will fight each other until none remain.

In the eighth scenario, the artilects will go to space and be destroyed by an alien super-artilect.

De Garis has been criticized for assuming that The Terminator's nightmarish vision will become reality, rather than contemplating that superintelligent computers might just as well bring world peace.

De Garis answered that there is no way to ensure that artificial brains operate ethically (humanely).

He also claims that it is difficult to foretell whether a superintelligence would be able to bypass an implanted death switch or reprogram itself to disobey orders aimed at instilling respect for humans.

Hugo de Garis was born in 1947 in Sydney, Australia.

In 1970, he graduated from Melbourne University with a bachelor's degree in Applied Mathematics and Theoretical Physics.

He joined the global electronics corporation Philips as a software and hardware architect after teaching undergraduate mathematics at Cambridge University for four years.

He worked at locations in the Netherlands and Belgium.

In 1992, De Garis received a doctorate in Artificial Life and Artificial Intelligence from the Université Libre de Bruxelles in Belgium.

"Genetic Programming: GenNets, Artificial Nervous Systems, Artificial Embryos," was the title of his thesis.

As a graduate student, de Garis directed the Center for Data Analysis and Stochastic Processes at the Artificial Intelligence and Artificial Life Research Unit in Brussels, where he explored evolutionary engineering, which uses genetic algorithms to develop complex systems.

He also worked as a senior research associate at George Mason University's Artificial Intelligence Center in Northern Virginia, where he worked with machine learning pioneer Ryszard Michalski.

De Garis did a postdoctoral fellowship at Tsukuba's Electrotechnical Lab.

He directed the Brain Builder Group at the Advanced Telecommunications Research Institute International in Kyoto, Japan, for the following eight years, as it attempted a moon-shot quest to develop a billion-neuron artificial brain.

De Garis returned to Brussels, Belgium, in 2000 to oversee Star Lab's Brain Builder Group, which was working on a rival artificial brain project.

When the dot-com bubble burst in 2001, De Garis' lab went bankrupt while working on a life-size robot cat.

De Garis then moved on to Utah State University as an Associate Professor of Computer Science, where he stayed until 2006.

De Garis was the first to teach advanced research courses on "brain building" and "quantum computing" at Utah State.

He joined Wuhan University's International School of Software in China as Professor of Computer Science and Mathematical Physics in 2006, where he also served as the leader of the Artificial Intelligence group.

De Garis kept working on artificial brains, but he also started looking into topological quantum computing.

In the same year, de Garis joined the advisory board of Novamente, a commercial business that aims to develop artificial general intelligence.

Two years later, Chinese authorities gave his Wuhan University Brain Builder Group a significant funding to begin building an artificial brain.

The China-Brain Project was the name given to the initiative.

De Garis relocated to Xiamen University in China in 2008, where he ran the Artificial Brain Lab in the School of Information Science and Technology's Artificial Intelligence Institute until his retirement in 2010.



~ Jai Krishna Ponnappan



See also: 


Superintelligence; Technological Singularity; The Terminator.


Further Reading:


de Garis, Hugo. 1989. “What If AI Succeeds? The Rise of the Twenty-First Century Artilect.” AI Magazine 10, no. 2 (Summer): 17–22.

de Garis, Hugo. 1990. “Genetic Programming: Modular Evolution for Darwin Machines.” In Proceedings of the International Joint Conference on Neural Networks, 194–97. Washington, DC: Lawrence Erlbaum.

de Garis, Hugo. 2005. The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines. ETC Publications.

de Garis, Hugo. 2007. “Artificial Brains.” In Artificial General Intelligence: Cognitive Technologies, edited by Ben Goertzel and Cassio Pennachin, 159–74. Berlin: Springer.

Geraci, Robert M. 2008. “Apocalyptic AI: Religion and the Promise of Artificial Intelligence.” Journal of the American Academy of Religion 76, no. 1 (March): 138–66.

Spears, William M., Kenneth A. De Jong, Thomas Bäck, David B. Fogel, and Hugo de Garis. 1993. “An Overview of Evolutionary Computation.” In Machine Learning: ECML-93, Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence), vol. 667, 442–59. Berlin: Springer.


What Is Artificial General Intelligence?



Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the AGI system to solve problems when presented with new tasks. 

In other words, it's AI's capacity to learn similarly to humans.



It is also known as strong AI, full AI, or general intelligent action.

The phrase "strong AI," however, is only used in few academic publications to refer to computer systems that are sentient or aware. 

These definitions vary because specialists from different disciplines view human intelligence from different angles.

For instance, computer scientists often characterize human intelligence as the capacity to accomplish objectives. 

On the other hand, general intelligence is defined by psychologists in terms of survival or adaptation.

Weak or narrow AI, in contrast to strong AI, is made up of programs created to address a single issue and lacks awareness since it is not meant to have broad cognitive capacities. 

Autonomous cars and IBM's Watson supercomputer are two examples. 

AGI, by contrast, is defined in computer science as an intelligent system having full or comprehensive knowledge as well as cognitive computing skills.



As of now, no true AGI systems exist; they remain the stuff of science fiction.

The long-term objective of these systems is to perform as well as humans do. 

However, because AGI would be able to acquire and analyze massive amounts of data far faster than the human mind can, it may prove to be more intelligent than humans.



Artificial intelligence (AI) is now capable of carrying out a wide range of functions, including providing tailored suggestions based on prior web searches. 

It can also recognize objects for autonomous cars to avoid, detect malignant cells during medical screenings, and serve as the brain of home automation.

Additionally, it may be used to find potentially habitable planets, act as an intelligent assistant, manage security, and more.



Naturally, AGI would go far beyond such capacities, and some scientists are concerned that this may result in a dystopian future.

Elon Musk said that sentient AI would be more hazardous than nuclear war, while Stephen Hawking advised against its creation because it would see humanity as a possible threat and act accordingly.


Despite concerns, most scientists agree that genuine AGI is decades or perhaps centuries away from being developed and must first meet a number of requirements (which are always changing) in order to be achieved. 

These include the capacity to reason, use strategy, solve puzzles, and make decisions in the face of ambiguity.



Additionally, it must be able to plan, learn, and communicate in natural language, as well as represent knowledge, including common-sense knowledge.

AGI must also have the capacity to perceive (hear, see, etc.) and to act (for example, moving objects or changing location to explore).



How far along are we in the process of developing artificial general intelligence, and who is involved?

According to a 2020 survey by the Global Catastrophic Risk Institute (GCRI), academic institutions, businesses, and various governmental agencies are currently working on 72 identified AGI R&D projects.



According to the survey, today's projects, compared with those in 2017, are often smaller, more geographically diverse, less open-source, more focused on humanitarian than academic aims, and more concentrated in private firms.

The comparison also reveals a decline in projects with academic affiliations, an increase in projects sponsored by corporations, a rise in projects with a humanitarian emphasis, a decline in programs with ties to the military, and a decline in US-based initiatives.


Governments and military organizations currently play very small roles in AGI R&D, and the military initiatives that do exist are focused solely on fundamental research.

However, recent projects appear more varied and can be grouped along three lines: corporate projects that engage with AGI safety and have humanistic end objectives; small private companies with a wide variety of objectives; and academic programs that concern themselves not with AGI safety but with the advancement of knowledge.

One of the most well-known organizations working on AGI is Carnegie Mellon University, which has a project called ACT-R that aims to create a generic cognitive architecture based on the basic cognitive and perceptual functions that support the human mind. 

The project may be thought of as a method of describing how the brain is structured such that different processing modules can result in cognition.
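
A toy production system conveys the flavor of that modular arrangement: buffers act as small working memories exposed by perceptual and memory modules, and if-then production rules coordinate them into a cognitive cycle. This illustrates only the control structure, not ACT-R's actual implementation or API.

```python
# Toy production system in the ACT-R spirit: modules expose buffers, and
# production rules (condition, action) fire against them each cycle.

# Buffers: tiny working memories for goal, perception, memory, and motor.
buffers = {"goal": "add", "visual": (3, 4), "retrieval": None, "motor": None}

# Production rules: (condition on buffers, action updating buffers).
productions = [
    (lambda b: b["goal"] == "add" and b["retrieval"] is None,
     lambda b: b.update(retrieval=sum(b["visual"]))),        # "recall" a sum
    (lambda b: b["retrieval"] is not None and b["motor"] is None,
     lambda b: b.update(motor=f"say {b['retrieval']}", goal="done")),
]

# Cognitive cycle: fire the first production whose condition matches.
while buffers["goal"] != "done":
    fired = False
    for condition, action in productions:
        if condition(buffers):
            action(buffers)
            fired = True
            break
    if not fired:  # no rule matches: halt rather than spin forever
        break

print(buffers["motor"])  # -> say 7
```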


Another pioneering organization testing the limits of AGI is Microsoft Research AI, which has carried out a number of research initiatives, including developing a data set to counter bias in machine-learning models.

The company is also investigating ways to advance ethical AI, create a responsible-AI standard, and develop AI strategies and evaluations within a framework that emphasizes the advancement of mankind.


The person behind the well-known video game franchises Commander Keen and Doom has launched yet another intriguing endeavor. 

Keen Technologies, John Carmack's most recent business, is an AGI development company that has already raised $20 million in funding from former GitHub CEO Nat Friedman and Cue founder Daniel Gross. 

Carmack is one of the AGI optimists, believing that it will ultimately help mankind and lead to the development of an AI mind that acts like a human, which might be employed as a universal remote worker.


So what does AGI's future hold? 

Many specialists doubt that AGI will ever be developed, and some believe that the urge to develop artificial intelligence comparable to humans will eventually fade.

Others are working to develop it so that everyone will benefit.

Nevertheless, the creation of AGI is still in the planning stages, and little progress is anticipated in the coming decades.

Yet throughout history, scientists have debated whether developing technologies with the potential to change people's lives would benefit society as a whole or endanger it.

The same debate took place before the invention of the automobile, during the development of AC electricity, and when the atomic bomb was still only a theory.


~ Jai Krishna Ponnappan





