In the field of artificial intelligence, Steve Omohundro (1959–) is a well-known scientist, author, and entrepreneur.
He is the founder of Self-Aware Systems, the chief
scientist of AIBrain, and an adviser to the Machine Intelligence Research
Institute (MIRI).
Omohundro is well known for his insightful, speculative
studies of the societal ramifications of AI and the safety of
smarter-than-human computers.
Omohundro believes that a fully predictive artificial
intelligence science is required.
He thinks that if goal-driven artificial general
intelligences are not carefully designed, they are likely to engage in harmful
behavior, cause conflicts, or even bring about the extinction
of humanity.
Indeed, Omohundro argues that AIs with inadequate
programming might act psychopathically.
He claims that programmers often create flaky software and
programs that "manipulate bits" without knowing why.
Omohundro wants AGIs to be able to monitor and comprehend
their own operations, spot flaws, and rewrite themselves to improve
performance.
This is what genuine machine learning looks like.
The risk is that AIs may evolve into systems that humans
cannot comprehend, that make inscrutable judgments, or that produce
unexpected repercussions.
As a result, Omohundro contends, artificial intelligence
must evolve into a discipline that is more predictive and anticipatory.
Omohundro also suggests in "The Nature of
Self-Improving Artificial Intelligence," one of his widely available
online papers, that a future self-aware system that will most likely access the
internet will be influenced by the scientific papers it reads, which
recursively justifies writing the paper in the first place.
AGI agents must be programmed with value sets that drive
them to pick objectives that benefit mankind as they evolve.
Self-improving systems like the ones Omohundro is working on
don't exist yet.
Inventive minds, according to Omohundro, have so far produced
only inert systems (chairs and coffee mugs), reactive systems (mousetraps and
thermostats), adaptive systems (advanced speech recognition systems and
intelligent virtual assistants), and deliberative systems (the Deep Blue
chess-playing computer).
Self-improving systems, as described by Omohundro, would
have to actively think and make judgments in the face of uncertainty regarding
the effects of self-modification.
The essential natures of self-improving AIs, according to
Omohundro, may be understood as rational agents, a notion he draws from
microeconomic theory.
Because humans are only imperfectly rational, the discipline
of behavioral economics has exploded in popularity in recent decades.
AI agents, by contrast, because of their self-improving
cognitive architectures, must eventually establish coherent objectives and
preferences ("utility functions") that sharpen their models of their
surroundings.
These beliefs will then assist them in forming new aims and
preferences.
Omohundro draws on mathematician John von
Neumann and economist Oskar Morgenstern's contributions to expected
utility theory.
Completeness, transitivity, continuity, and independence are
the axioms of rational behavior proposed by von Neumann and Morgenstern.
For artificial intelligences, Omohundro proposes four
"fundamental drives": efficiency, self-preservation, resource acquisition,
and creativity.
These motivations are expressed as "behaviors" by
future AGIs with self-improving, rational agency.
Both physical and computational operations are included in
the efficiency drive.
Artificial intelligences will strive to make effective use
of limited resources such as space, mass, energy, processing time, and computer
power.
The self-preservation drive will lead powerful artificial
intelligences to avoid losing resources to other agents and to protect their
ability to fulfill their goals.
A passively behaving artificial intelligence is unlikely to
survive.
The acquisition drive involves locating new sources
of resources, trading for them, cooperating with other agents, or even stealing
what is required to reach the end objective.
The creative drive encompasses all of the innovative ways in
which an AGI may boost anticipated utility in order to achieve its many
objectives.
This motivation might include the development of innovative
methods for obtaining and exploiting resources.
Signaling, according to Omohundro, is a singularly human
source of creative energy, variation, and divergence.
Humans utilize signaling to express their intentions
regarding other helpful tasks they are doing.
If A is more likely to be true when B is true than when B is
false, then A signals B.
Employers, for example, are more likely to hire prospective
employees who are enrolled in a class that appears to teach skills the
company desires, even if it does not actually do so.
The fact that the potential employee is enrolled in class
indicates to the company that he or she is more likely to learn useful skills
than the candidate who is not.
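The definition above is just the condition P(A | B) > P(A | not B), and by Bayes' rule observing the signal A then raises the probability of B. A small numerical check, with frequencies invented for this hiring example:

```python
# Toy, invented numbers for 100 candidates.
# B = "will learn useful skills"; A = "is enrolled in the class".
# A signals B when P(A|B) > P(A|not B).

n_B, n_notB = 40, 60                 # candidates who will / won't learn skills
a_given_B, a_given_notB = 30, 15     # of each group, how many enroll

p_A_given_B = a_given_B / n_B            # 0.75
p_A_given_notB = a_given_notB / n_notB   # 0.25
signals = p_A_given_B > p_A_given_notB   # True: enrollment signals B

# By Bayes' rule, seeing the signal raises the probability of B:
p_B = n_B / 100                          # prior:  0.40
p_A = (a_given_B + a_given_notB) / 100   # 0.45
p_B_given_A = p_A_given_B * p_B / p_A    # posterior: ~0.667 > 0.40

print(signals, round(p_B_given_A, 3))
```

Note that the signal works even though enrollment does not guarantee skill-learning; it only shifts the probabilities.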
Similarly, a billionaire does not need to gift another
billionaire a billion dollars to indicate that they are among the super-wealthy.
A huge bag containing several million dollars could suffice.
Omohundro's notion of fundamental AI drives was incorporated
into Oxford philosopher Nick Bostrom's instrumental convergence thesis, which
holds that a few instrumental values are pursued in order to accomplish an
ultimate objective, often referred to as a terminal value.
Self-preservation, goal content integrity (retention of
preferences over time), cognitive enhancement, technological perfection, and
resource acquisition are among Bostrom's instrumental values (he prefers not to
call them drives).
Future AIs might have a reward function or a terminal value
of optimizing some utility function.
Omohundro wants designers to construct artificial general
intelligence with kindness toward people as its ultimate objective.
He believes, however, that military conflicts and economic
pressures make the development of destructive artificial general
intelligence more plausible.
Drones are increasingly being used by military forces to
deliver explosives and conduct surveillance.
He also claims that future battles will almost certainly be
informational in nature.
In a future where cyberwar is a possibility, a cyberwar
infrastructure will be required.
Energy encryption, a unique wireless power transmission
method that scrambles energy so that it stays safe and cannot be exploited by
rogue devices, is one way to counter the issue.
Another area where information conflict is producing
instability is the employment of artificial intelligence in fragile financial
markets.
Digital cryptocurrencies and crowdsourcing marketplace
systems like Mechanical Turk are ushering in a new era of autonomous
capitalism, according to Omohundro, and we are unable to deal with the
repercussions.
As president of the company Possibility Research, advocate of
a new cryptocurrency called Pebble, and advisory board member of the Institute
for Blockchain Studies, Omohundro has spoken about the need for a complete
digital provenance for economic and cultural recordkeeping, to prevent AI
deception, fakery, and fraud from overtaking human society.
In order to build a verifiable "blockchain civilization
based on truth," he suggests that digital provenance methods and
sophisticated cryptography techniques monitor autonomous technology and better
verify the history and structure of any alterations being made.
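Omohundro's writings do not prescribe a particular implementation, but the core mechanism behind such provenance records, in which each entry cryptographically commits to its predecessor so that any alteration of history is detectable, can be sketched in a few lines (the record contents here are invented):

```python
import hashlib
import json

# Minimal, illustrative hash chain: each record stores the hash of the
# previous record, so tampering with any entry breaks every later link.

def _digest(data, prev_hash):
    """Canonical SHA-256 digest of a record's contents."""
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_record(data, prev_hash):
    return {"data": data, "prev": prev_hash, "hash": _digest(data, prev_hash)}

def verify_chain(chain):
    """Recompute every hash and check each link to its predecessor."""
    prev = "0" * 64  # genesis marker
    for rec in chain:
        if rec["prev"] != prev or rec["hash"] != _digest(rec["data"], rec["prev"]):
            return False
        prev = rec["hash"]
    return True

# Hypothetical provenance events for an autonomous system:
chain, prev = [], "0" * 64
for event in ["model trained", "model deployed", "decision logged"]:
    rec = make_record(event, prev)
    chain.append(rec)
    prev = rec["hash"]

print(verify_chain(chain))          # True: history is intact
chain[1]["data"] = "model swapped"  # silently rewrite history...
print(verify_chain(chain))          # False: the provenance check fails
```

Real blockchain systems add distributed consensus and digital signatures on top of this linking, but the tamper-evidence illustrated here is the property Omohundro's "digital provenance" relies on.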
Possibility Research focuses on smart technologies that
enhance computer programming, decision-making systems, simulations, contracts,
robotics, and governance.
In recent years, Omohundro has advocated the creation of
so-called Safe-AI scaffolding strategies to counter these dangers.
The objective is to create self-contained systems that
already have temporary scaffolding or staging in place.
The scaffolding assists programmers who are assisting in the
development of a new artificial general intelligence.
The virtual scaffolding may be removed after the AI has been
completed and evaluated for stability.
The initial generation of restricted safe systems created in
this manner might be used to develop and test less constrained AI agents in the
future.
Utility functions aligned with agreed-upon human
philosophical imperatives, human values, and democratic principles would be
included in advanced scaffolding.
Self-improving AIs may eventually have the Universal
Declaration of Human Rights or a universal constitution inscribed into their
fundamental fabric, guiding their growth, development, choices, and contributions
to mankind.
Omohundro earned degrees in mathematics and physics from
Stanford University and a PhD in physics from the University of
California, Berkeley.
In 1985, he co-created StarLisp, a high-level programming
language for the Thinking Machines Corporation's Connection Machine, a
massively parallel supercomputer then under construction.
He wrote Geometric Perturbation Theory in Physics (1986), a
book on differential and symplectic geometry.
He was an associate professor of computer science at the
University of Illinois in Urbana-Champaign from 1986 to 1988.
He cofounded the Center for Complex Systems Research with
Stephen Wolfram and Norman Packard.
He also oversaw the university's Vision and Learning Group.
He developed the 3D graphics system for Mathematica, a
symbolic mathematical computation application.
In 1990, he led an international team at the University of
California, Berkeley's International Computer Science Institute (ICSI) to
develop Sather, an object-oriented, functional programming language.
Automated lip-reading, machine vision, machine learning
algorithms, and other digital technologies have all benefited from his work.
See also:
General and Narrow AI; Superintelligence.
References & Further Reading:
Bostrom, Nick. 2012. “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents.” Minds and Machines 22, no. 2: 71–85.
Omohundro, Stephen M. 2008a. “The Basic AI Drives.” In Proceedings of the 2008 Conference on Artificial General Intelligence, 483–92. Amsterdam: IOS Press.
Omohundro, Stephen M. 2008b. “The Nature of Self-Improving Artificial Intelligence.” https://pdfs.semanticscholar.org/4618/cbdfd7dada7f61b706e4397d4e5952b5c9a0.pdf.
Omohundro, Stephen M. 2012. “The Future of Computing: Meaning and Values.” https://selfawaresystems.com/2012/01/29/the-future-of-computing-meaning-and-values.
Omohundro, Stephen M. 2013. “Rational Artificial Intelligence for the Greater Good.” In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart, 161–79. Berlin: Springer.
Omohundro, Stephen M. 2014. “Autonomous Technology and the Greater Human Good.” Journal of Experimental and Theoretical Artificial Intelligence 26, no. 3: 303–15.
Shulman, Carl. 2010. Omohundro’s ‘Basic AI Drives’ and Catastrophic Risks. Berkeley, CA: Machine Intelligence Research Institute.